This WWW version was prepared by the author from the text submitted for publication, and appears by permission of the copyright holder. Copyright © 1994 by the Association for Computing Machinery, Inc.
Let us begin with a few observations, some of them self-evident, all of them well-known and readily verifiable.
There are a large number of different programming languages, in serious active use by significant numbers of people, most of them available on a variety of hardware and operating system platforms.
Most of these differ markedly one from another in style, substance and appearance, so much so that usually the language concerned can be identified from only a small fragment of code.
These differences can be accounted for by the fact that numerous different criteria can be specified as potentially desirable, some of them mutually conflicting, and very different languages tend to result from the choice of criteria and the relative priorities assigned to them.
Nevertheless all languages have basically the same purpose, to allow programmers to express what they want an IT system to do, and even very disparate languages have much more in common than is obvious on the surface.
This paper seeks to argue that too much emphasis is placed on the differences, while commonality has never been properly exploited. An analogy, whose aptness may surprise some people, is the way differences in origin, culture, environment etc between people too often cause their underlying commonality as human beings to be ignored. The aptness is not total, of course; if people were like programming languages, then to relieve headaches, "aspirins" with different chemical formulae - not just different words for the instructions on the bottle - would be needed for every community on earth.
Providing language-independent facilities to relieve headaches caused by unnecessary diversity in languages is a difficult business. The diversity may be unnecessary, but that does not mean it is easy to cope with, let alone remove. Historically, new languages have arisen where people, sometimes just individuals, have perceived that no existing and available language met the needs of their application or environment and that it was easier to develop their own. It does not matter whether or not they were right in thinking it was easier, nor even in thinking already available languages were unsuitable. It does not matter if their false perception was based on ignorance. It does not even matter if all they were doing was rationalising a desire to create their own thing even though this was demonstrably unnecessary. They did it, they redid all the design (usually from the raw hardware up), made many fundamental design decisions based on their intended application or facilities, or the hardware available, or just ad hoc, and built them so deep into the fabric of the language that they became impossible to change.
In many cases - most - these artefacts wither away and die, or else survive on a small scale in isolated corners where conditions are not too hostile, like so many weird and lovely (or unlovely) plants. These need not concern us. We are concerned with the still sizeable minority which gain a following, and start spreading in use and influence.
That some do is of course accounted for by the third general observation that we started with. But usually there comes a time when use of the language is no longer confined to its own community, and it has to interact with the outside world, in particular the world of other languages. Implementors encounter problems when the new language's view of the world differs somewhat from those of the host platform or of other languages already supported. Users have problems when they try to exploit the facilities. Programmers and their employers have problems when they have to adjust to or retrain for the unfamiliar language. Often the minor differences cause more trouble than the major and hence obvious ones; things are taken for granted which ought not to be, and you only find out later, often the hard way, when things don't work as expected.
Implementors have traditionally tended to deal with this by bending the language to suit the platform, or into how they thought it ought to have been designed rather than how it actually was, hence leading to dialects and further problems for users, even those using just the one language.
It is the job of standardisation - proper, official standardisation that is - to provide means of coping with or alleviating the effects of incompatibilities. Again the traditional approach is to do it wholly within the language community concerned, i.e. dealing with the dialect problem but not the inter-language problem. However, a few years ago the subcommittee responsible for programming languages, then TC97/SC5, now JTC1/SC22, set up a working group (WG10 in SC22) to look at issues of commonality. As well as producing guidelines for programming language standardisation, which I, as WG convenor, had the task but also the privilege of editing, this group also looked at areas suitable for cross-language standardisation, where underlying commonality could not just be exploited but could bring significant benefits.
There were several contenders, of which more will be said at the end, but two stood out as being highest priority: datatypes and procedure calling. All languages specify the processing of data and all, even those sometimes called "untyped", have some recognition of different datatypes, e.g. that numeric data is different from textual data; while all have somewhere the concept of specifying a "procedure" (not always called that) and invoking it from elsewhere in the program. When eventually these topics were approved as standards projects, they were assigned to another working group, now SC22/WG11, entitled "language bindings" for historical reasons not relevant here. Much of this paper will be about these projects and related work.
Before embarking on discussion of these projects it is worth saying a little more about general issues. Language independent standards (or specifications) may be base standards, which specify the basic but non-linguistic building blocks from which languages are made; functional standards, which add functionality which the languages otherwise do not have; and generic standards, which provide a specification in a language-independent manner of concepts which all (or many) languages have in common. Note that these terms are used here not in the rather specialist senses in which they are applied to open systems standards, though there are similarities arising from the everyday meanings of the words base, functional etc. OSI does not have a monopoly of these terms, and functional standards were being talked of in connection with languages long before OSI's profiles were ever dreamed of!
In the current context, character sets are the archetypal base standards, whereas graphics standards like GKS are the archetypal functional standards. Datatypes and procedure calling are clearly in the generic category. Note that this classification is rather of roles than categories; a particular standard may have more than one of these roles, or assume different roles for different languages (e.g. a standard may be generic for languages with certain facilities built in but functional for those that do not).
A second important distinction is between levels of abstraction. Language definitions and hence standards, being machine-independent, should exist at a level higher than that of representation on actual machines. On the other hand they cannot be purely abstract, because ability to represent and manipulate is a prerequisite. Mathematics, at the abstract level, can readily deal with infinite sets, for example. So between the two there is this "computational" level:
------------------
| abstract |
| computational |
| representational |
------------------
Language-independent datatypes and procedure calling both definitely belong to the computational level, though unfortunately some actual languages confuse the issue by having explicit or implicit representational assumptions, and/or specify things in too abstract a way, which means that incompatibilities can arise.
An example might be the definition of integer as the (infinite) mathematical domain at the abstract level. At the computational level it becomes a finite subrange lowerbound:upperbound of integer, with constraints, for example on the magnitudes of the bounds and the need to handle both positive and negative values (too many language definitions ignore this). In an actual implementation it will become an actual subrange with (if you are lucky or the standard is good enough) known actual bounds so you know what you are getting.
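The three levels can be illustrated with a small sketch (the class and names here are purely illustrative, not part of any standard): the computational level declares explicit bounds on an otherwise abstract integer, and a representational choice such as 16-bit two's complement then fixes what those bounds actually are.

```python
class SubrangeInt:
    """Computational-level integer: a finite subrange lower:upper of the
    abstract (infinite) mathematical integers, with explicit bounds."""

    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper

    def check(self, value):
        # At the abstract level any integer is valid; at the computational
        # level a value outside the declared bounds is an overflow.
        if not (self.lower <= value <= self.upper):
            raise OverflowError(f"{value} outside {self.lower}:{self.upper}")
        return value

# A representational choice (here 16-bit two's complement) fixes the actual
# bounds; note that both positive and negative values must be handled.
int16 = SubrangeInt(-32768, 32767)
print(int16.check(1000))  # within bounds, so accepted unchanged
```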
Even this last is logically above the level of actual representation, though often driven by what is convenient to represent, and it turns out that it is often useful to make a further distinction between two sublevels of the computational level:
------------------
| abstract |
| linguistic |
| operational |
| representational |
------------------
By happy chance the pyramidal image produced by the lengths of these English words neatly illustrates the extra detail you need as you drop down the levels. Troubles arise if levels of abstraction are confused.
Let us now turn to the projects themselves.
Language Independent Datatypes (LID)
This generic standard, currently (at end 1993) at draft stage as ISO/IEC DIS 11404, provides a reference collection of datatypes which can be used as common ground by actual programming languages. (DIS - Draft International Standard - means that the technical content has been finalised; only minor corrections or editorial changes remain.) The original title was "Common Language Independent Datatypes", but the word "common" was later dropped (in line with the Language Independent Arithmetic standard, see below) as being redundant - "language independent" is enough. However, some older papers, e.g. [Meek 1990], have referred to it by its former title, or its abbreviation, CLID.
LID is an enabling standard, to aid the specification and development of tools and services which all languages can share, and exists at the linguistic level of abstraction. For languages to avail themselves of such LID-based facilities, mapping standards will be needed to bind the native datatypes of each language to LID, and vice versa. The LID standard specifies conformity rules for such mapping standards and provides guidance in performing the association. In particular LID specifies mappings of datatype to datatype, and of value-of-a-given-datatype to value-of-a-given-datatype, but no more. The LID standard specifies "characterising operations" for values of each of its datatypes but it is not a requirement that a language must support all of those operations for values of the native datatype associated with the relevant LID standard datatype; that would be dropping to the operational level.
It may then be asked, why does LID mention operations at all? The answer is simply to help those attempting to define mappings, between actual languages and the LID standard datatypes, to recognise the most appropriate match for their purpose - the "best fit". In general, for each datatype the characterising operations listed are neither exhaustive nor minimal. The aim has been to provide enough characterising operations to distinguish a given LID standard datatype from others with a similar "value space", and leave users of the standard (envisaged after all as typical and experienced "language people") in no doubt as to the nature of the datatype intended.
It must be borne in mind throughout that the LID standard provides a very general conceptual model not aimed at any specific application. Despite some misapprehensions, it does not exist just to support parameter passing in common language-independent procedure calling, the other project referred to and described later, though that is undoubtedly an important application. Similarly it does not exist just to support transmission of data between one language processor and another, though that too is important.
An implication of the LID standard is that if language-independent facilities define their own datatypes then this multiplies the number of mappings that have to be defined. If an application needs more than one such facility then this number is multiplied again. Datatypes so defined are likely to be limited to those required for the facility and may be further constrained because the facility requires a representational model to be added. This could lead to a situation where, in some applications or environments, one facility is restricted by the datatypes supported by the other.
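The combinatorial point can be made concrete with a toy calculation (the numbers are purely illustrative): with pairwise mappings every language must be mapped to every facility's private datatypes, whereas with LID as the common reference each language and each facility is mapped once.

```python
def mappings_needed(languages, facilities, via_lid):
    # Pairwise: each of the L languages needs a mapping to each of the
    # F facilities' own datatypes, giving L * F mappings to define and
    # maintain. Via LID: one mapping per language plus one per facility,
    # giving L + F.
    return languages + facilities if via_lid else languages * facilities

print(mappings_needed(8, 4, via_lid=False))  # 32 separate mappings
print(mappings_needed(8, 4, via_lid=True))   # 12 mappings through LID
```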
Basing all such language-independent facilities on the LID standard datatypes avoids the need for a multiplicity of mappings and reduces the danger of one facility being constrained by the limitations of another, at least in respect of datatypes. It is of course important to ensure that the LID standard is sufficiently general to support all kinds of language-independent facilities.
An important characteristic of LID is that it is not based on any particular language style or "paradigm". (That word has to be worked in somewhere these days to ensure that the paper is seen to be intellectually respectable.) The primary design decisions were that it had to be rich in datatypes ("maximalist" rather than "minimalist") and support strong typing. Richness is necessary so as not to reproduce the "representationalist" error of forcing things into a predetermined mould; you may need to do that at the data transmission level, for example, but not here. Strong typing is necessary because it is easy to get weak typing by relaxation of rules and automatic datatype conversions, but not easy to do the reverse. The strong need not use their strength; the weak have no choice.
The result is that LID has a wide variety of constructed datatypes, like aggregates of numerous kinds, a rich variety of primitive datatypes from which to build them, facilities for specifying derived datatypes such as subranges, and means of specifying new, logically distinct instances of existing datatypes. We did stop short of including object-oriented datatypes, as being too open-ended, though a future revision might include them. We stuck to datatypes with operational properties with in general no semantic connotations. (An exception to this is date-and-time, since some languages have it explicitly; but some of us are uneasy about it!) For the moment, the provision of object-oriented datatypes has to be done indirectly by the facilities in LID to construct and declare additional datatypes not explicitly defined in the standard.
Even datatypes with only two values are distinguished one from another, even though the transformations between them are simple - bit, with values 0 and 1 and operations + and *; boolean, with values true and false and operations and, or and not; state, with two suitably named values such as yes and no or on and off; enumerated of length two, also with suitably named values; and range 0:1 of integer, with all the integer operations but (unlike with bit) the possibility of overflow. Of these, boolean and state are unordered, while the others are ordered - another characterising feature. Further discussion and explanation of the LID approach, using this illustration, can be found in [Meek 1990].
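Three of these two-value datatypes can be sketched in code (a simplified illustration; in particular, taking bit's "+" as addition modulo 2 is an assumption about its characterising operations): bit stays closed under its operations, boolean has its own logical operations, and range 0:1 of integer carries ordinary integer addition together with the possibility of overflow.

```python
def bit_add(a, b):
    # bit: values 0 and 1, with '+' closed under the datatype
    # (taken modulo 2 here - an assumption made for this sketch)
    return (a + b) % 2

def bool_or(a, b):
    # boolean: values true and false, with and, or and not; unordered
    return a or b

def range01_add(a, b):
    # range 0:1 of integer: ordinary integer '+', but the result may
    # overflow the subrange - unlike bit
    r = a + b
    if not 0 <= r <= 1:
        raise OverflowError("result outside 0:1")
    return r

print(bit_add(1, 1))         # 0: stays within the datatype
print(bool_or(True, False))  # True
# range01_add(1, 1) would raise OverflowError
```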
The approach, and the whole project, has encountered a variety of attitudes, as might be expected. As alluded to earlier, language communities have traditionally tended to be rather parochial and ingrown, and many people within them regard the project with attitudes ranging from indifference, through suspicion, to outright hostility: whatever their language, it looks alien (which in many ways it is, of course!). Interestingly, the most general positive support for it in the standards world has come from outside the language committees - but then they know all too well the difficulties of dealing with a multiplicity of languages with incompatible views of the world. Gaining acceptance needs a long, patient process of education and re-education, to show that it is not a threat, and not an overt (or even a covert) attempt to undermine their world and make them add a mass of outlandish, unfamiliar datatypes to their language. But old-established cultural traditions die hard.
Language Independent Arithmetic (LIA)
While LID datatypes in general are readily defined in terms of their sets of values, there is one common datatype for which this is not straightforward, namely real, the approximation at the computational level of the mathematical domain of real numbers at the abstract level. The reason is of course that the values themselves are approximate, which you cannot specify precisely at the language level unless the language has means of requiring a fixed-point representation. LID defines such specific fixed-point subsets of the real domain as scaled, so in LID real does mean the non-fixed approximations - usually floating-point, of course, though for LID exactly how the approximations are arrived at and represented is not relevant. The characterising features of real in LID are that the exact values are not known at the linguistic level; neither are the results of applying the arithmetic operations.
This brings us to the next language-independent standard project in WG11, one not previously mentioned because it began outside the official standards world and only joined WG11 when the project gained official recognition.
The motivation for this project was that, as far as real (i.e. approximate) arithmetic is concerned (at the operational level now, not the linguistic), users still cannot rely on hardware design engineers taking fully into account all of the factors that numerical analysts would wish, especially with respect to minimising accumulated errors. Even if the accuracy of approximation of the original values can be controlled, the accuracy of the computations carried out on them cannot. Again it is partly the cultural heritage of the designers and implementors - speed at all costs, even at the expense of accuracy. For many applications, on modern hardware the approximations to values are close enough that, even if the computational accuracy could be improved, the resulting errors are not large enough to matter - for example they may still be swamped by experimental errors in the original data. However, for some applications it does matter, and one does need the best achievable accuracy, or at the very least for the accuracy to be predictable. Experience shows that it often is not; and of course predictability is especially important if one wants to write software which is portable between different platforms.
In fact it is not, generally, that hardware is incapable of meeting quite strict requirements with regard to accuracy and predictability. The problem for users is usually a combination of hardware design and the way that implementors of languages and packages exploit it. But it is not unreasonable for users to wish that the hardware will not even permit the software writers to use it in an arithmetically unsafe way - or at the very least make it much easier for them to use it safely than unsafely. Yet still there are no standards to require this. The aim of the Language Independent Arithmetic (LIA) project is to provide such a standard.
There is, to be fair, one honourable attempt which went a long way towards this goal - the floating-point standard ANSI/IEEE 754:1985. This does provide the required properties, including means for detecting spurious results. Since its inception it has been widely adopted for the powerful workstations increasingly used in the late 1980s and 1990s by professional scientists and engineers. There are, indeed, some who advocate its universal adoption, as a permanent solution to the unsafe arithmetic problem.
However, the IEEE standard is not just an arithmetic standard but a hardware architecture standard, albeit an abstract one; it may have become popular for workstations (though its penetration into other areas has been limited), but there is no need to specify hardware architecture precisely in order to achieve safe arithmetic. That is another example of confusing levels of abstraction and purposes of standards. Perhaps even more important, while the IEEE standard does indeed make it likely that arithmetic will be safe, it does not actually guarantee it, because of that perpetual bugbear of IT standards, the presence of options. Sensible use of the options will preserve arithmetic safety, but you cannot be sure, and it is above all certainty that users need. Even on the same IEEE-conforming hardware, significantly different arithmetic results can be, and have been, obtained for the same computation carried out using different compilers - for the same language.
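One way such compiler-to-compiler differences arise is that floating-point addition is not associative, so a compiler that re-orders an expression can legitimately change its result. A small sketch of the effect:

```python
# Floating-point addition is not associative: re-associating the "same"
# expression, as an optimising compiler may do, changes the result.
a, b, c = 1e16, -1e16, 1.0
left_to_right = (a + b) + c  # the large values cancel first, then c is added
reassociated = a + (b + c)   # c is absorbed into b (lost to rounding) first
print(left_to_right)  # 1.0
print(reassociated)   # 0.0
```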
LIA's approach is to define an abstract set of required properties for arithmetic operations and functions which will ensure arithmetic safety. It is therefore at the operational level and hence complements LID as far as datatype real is concerned. It has representational aspects only to the extent that it is expressed in terms of a floating-point format - you need that amount of "representation" to specify anything at all. However, this does not mean, and LIA does not require, that the underlying hardware supporting a language implementation has to provide it, though as things stand today people, if not the standard, might expect it. From the purely language point of view, how the requirements are implemented is irrelevant, even if it may be of interest to the programmer for a particular application.
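The flavour of such a required property can be sketched for one operation (the names and the crude ulp calculation are this sketch's own, not LIA's): characterise the arithmetic by its radix and precision, and check that a computed sum lies within half a unit in the last place of the exact sum.

```python
import math
import sys
from fractions import Fraction

# Characterise Python's float arithmetic by radix and precision, then check
# an accuracy requirement in the spirit of LIA part 1: the computed result
# of '+' must be within half a ulp of the exact result.
RADIX, PRECISION = sys.float_info.radix, sys.float_info.mant_dig  # 2, 53

def add_within_half_ulp(x, y):
    exact = Fraction(x) + Fraction(y)  # exact rational sum of the operands
    computed = Fraction(x + y)         # what the hardware actually delivered
    e = math.frexp(float(exact))[1]    # binade of the exact result
    ulp = Fraction(RADIX) ** (e - PRECISION)
    return abs(computed - exact) <= ulp / 2

print(add_within_half_ulp(0.1, 0.2))   # True on IEEE 754 doubles
print(add_within_half_ulp(1e16, 1.0))  # True: the error is bounded, not zero
```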
The LIA standard is in three parts, and the first part, on the basic arithmetic operations, is available in draft form as ISO/IEC DIS 10967-1; the other two parts, on mathematical procedures and complex arithmetic, are still being prepared. An early draft of part one, under its original title of "Language Compatible Arithmetic", is available in the general literature [Payne, Schaffert and Wichmann 1990]; while many details have changed during the formal development to DIS stage, the basic principles can be obtained from that. Further discussion may be found in [Wichmann 1990].
Because of its implications for hardware and its potential impact on numerical software, LIA part 1 has engendered a good deal of interest and comment, some of it quite trenchant (see for example [Kahan 1992]). Doubts were raised about motivation, in that one of the large suppliers was providing a good deal of support for the project. It is not unusual for substantial involvement in a standards project by one major supplier to be looked at with suspicion by other suppliers and by users! But "guilt by association" is a line of argument resorted to in all branches of politics, so such concerns can perhaps be dismissed unless, of course, they are backed up by technical arguments. There are enough instances in IT of standards being little more than a standards body wrapping round a thinly-disguised product manual, but even the earliest versions of LIA were a far cry from that.
Some critics have argued that LIA would undermine IEEE 754, or weaken it in some way; supporters of LIA argued in return that it actually strengthened IEEE 754, and the combination of the two standards was the best that could reasonably be hoped for given the state of the art in the 1990s; yet it also offered hope and support to those who for one reason or another did not have the benefits of IEEE 754 to support them. In any case, if the IEEE 754 approach is so good, how can this really be a threat? This surely hints of protecting something weak rather than robust! Or is it simply wishing to be dog in the "safe arithmetic" manger, or taking umbrage at anyone straying off the One True Path?
In fact, very likely most of it is the result of nothing more than simple misunderstanding of the aims of the project. In particular, there is no such thing as "an LIA machine" in the same sense as there is undoubtedly such a thing as "an IEEE machine", and once that essential point has been grasped some of the objections should be defused. Another red herring has been concern that LIA is so "weak" as to permit approximate representations which are inadequate for "serious" numerical analysis. This also misses the point: safe, predictable approximate arithmetic calculations are needed for a wide range of purposes, and not all of these would qualify as "serious numerical analysis". If a representation which numerical analysts would scorn is nevertheless adequate for some purposes, why should users be deprived of the protection that conformity to LIA provides? It is not for the LIA standard to dictate to users what they should or should not use in the context of a given application, only that the arithmetic performed meets certain criteria of accuracy.
Fortunately the debate also threw up useful constructive points which have improved the end product, and LIA is now more explicitly supportive of IEEE 754 without requiring it or otherwise compromising its original aims. By mid-1993 the outstanding concerns had been sufficiently cleared up, enabling it, as noted earlier, to progress to DIS.
Language Independent Procedure Calling (LIPC)
As mentioned earlier, procedure calling was the second of the topics identified as a priority for a language independent standard. As with datatypes, the purpose of the LIPC standard is to provide a common reference point to which all languages can relate. Again, it is an enabling standard to aid the development of language-independent tools and services; again, mappings to actual languages will be needed; and for parameter passing it will need to use the LID standard, which would have been required for this purpose even if it were not needed otherwise.
It will aid the development of common procedure libraries. Operating systems very often contain such libraries, of things like the mathematical functions, which are shared by the various language processors running under them. However, they tend to be embedded in the environment and to be system-specific - in other words, not portable between platforms. The LIPC standard will enable such libraries to be specified in a standard way and built in a standardised language such as C or Fortran. It can be noted that numerical procedure libraries will be able to make good use of all three of LIPC, LID and LIA. Portability between platforms is of particular importance for language-independent functional standards like graphics, which can be implemented in this way even though the procedure calling may be disguised when used from inside a particular actual language.
Another use of LIPC is to aid mixed language programming; in fact it was demands for this to be supported, in a standardised way portable between platforms, which led to the work item proposal. In mixed language applications, called procedures would run on language processors operating in "server" mode, and the procedures would be called from language processors operating in "client" mode. Note that the languages need not be different, and if the processors are the same the model collapses into conventional single processor programming. However, one can envisage some applications using a language processor with good diagnostics or a good human-machine interface to call procedures written in the same language running, say, under an optimising compiler or a vector processing server. (Hence the term should really be "mixed language processor computing" though that is rather clumsy.) In such a use of LIPC, the procedure calling mappings would be straightforward but not totally redundant; the LID-based parameter passing would still usually be needed as a filter for incompatibilities caused by the use of optional features or implementation-dependent aspects allowed by the language standard. This could be dispensed with only if it were certain that the processors shared identical characteristics in every relevant respect.
LIPC of course exists at the operational as well as the linguistic level because it is a dynamic and not just a static concept. Originally the title was "Common Language Independent Procedure Calling Mechanism". In some older papers it is therefore abbreviated to CLIPCM or CLIP-CM, though others have used CLIPC or CLIP. The word "mechanism" is not incorrect, since what it refers to is a language mechanism. Such a concept is understood (usually) in programming language circles but open to misinterpretation outside - it tends to lead people into thinking it is an implementation mechanism as well. Hence it was dropped, as was "Common" for the same reasons as for LID.
If that is what LIPC is for, why is a standard needed? There would of course have been no need, had commonality already existed. But whereas most if not all programming languages include the concepts of procedures and their invocation, they vary in the way that they view them, especially with respect to parameter passing - and this irrespective of any differences there may be about datatypes.
In Appendix B (RPC tutorial) of the ECMA-127 standard for remote procedure calling (RPC) using OSI (Open Systems Interconnection), it is argued that "the idea of remote procedure calls is simple" and that procedure calls are "a well-known and well-understood mechanism...within a program running on a single computer". Both statements are true at a given level. Procedure calling is a simple concept, at the level of provision of functionality. It becomes less so at the linguistic level (let alone the operational and representational or implementation levels) because of its interaction with both datatyping and program structure, as is testified by the numerous variations and (apparently) arbitrary restrictions on procedure definitions and calling in existing languages. It is also undoubtedly true that procedure calling is "well-understood" almost universally in the language community. The trouble is that this general understanding is not necessarily the same understanding.
This is to some extent acknowledged in section 3.4 of Appendix B of ECMA-127. However, once these points are put together, along with the absence hitherto of LID, the "simple and well understood" view of procedure calling becomes increasingly elusive; more tenuous, harder to sustain.
The LIPC standard specifies conformity rules for language processors operating in client mode and in server mode. It is expected that many processors will conform in both modes. Taking the LID datatypes for the parameter datatypes, what it adds are language-independent operational definitions of different kinds of parameter passing, based upon the "contract" concept: the client "undertakes" to supply a particular item of that datatype (e.g. a value or a reference) at a certain time (e.g. when the server "accepts" the contract, or when the server, while executing the procedure, requests it), and the server similarly "undertakes" to return the results of execution, in the form of changes to parameters or, in the case of "function" procedures, as an overall result (though that can trivially be identified on the server side as an extra "out" parameter).
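This contract idea can be sketched in a few lines of code (everything here - the dictionary shape, the mode names, the sample procedure - is illustrative, not LIPC's notation): the client must supply each "in" item of the agreed datatype when the contract is accepted, and the server returns its results as the declared "out" parameters, with a function result treated as just another "out" parameter.

```python
def server_accept(contract, procedure):
    """Server mode: accept a contract and return a client-callable stub."""
    def call(**supplied):
        # Client's undertaking: each 'in' parameter is supplied, with a
        # value of the agreed datatype, at accept time.
        for name, (mode, datatype) in contract.items():
            if mode == "in":
                assert isinstance(supplied[name], datatype), name
        outs = procedure(**supplied)
        # Server's undertaking: results come back as the declared 'out'
        # parameters (a function result is simply an extra 'out').
        return {name: outs[name]
                for name, (mode, _) in contract.items() if mode == "out"}
    return call

# Hypothetical procedure: integer division returning quotient and remainder.
contract = {"a": ("in", int), "b": ("in", int),
            "quot": ("out", int), "rem": ("out", int)}
int_divide = server_accept(
    contract, lambda a, b: {"quot": a // b, "rem": a % b})
print(int_divide(a=7, b=3))  # {'quot': 2, 'rem': 1}
```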
It is important to note that LIPC does not address the question of how this contract is reached - how the procedure call initiated by the client mode processor is communicated to the server mode processor, or how results are returned. LIPC is concerned with mappings of calls at the conceptual language level, not at the communications protocol level, a point we shall return to in a moment. In the examples cited earlier, of generic procedure libraries and mixed language programming in the same host environment, such communications may be routine, even trivial. This was a deliberate design decision at the time that the scope of the project was defined: knowing that important application areas needing the standard existed in single environments containing no significant communications problems, it was felt that the LIPC standard should not run the risk of unduly restricting language-independent procedure calling because of constraints made necessary in contexts where communications limitations were greater. There was no need to go any lower than the computational levels defined earlier.
This brings us to the relationship between LIPC and RPC.
Remote Procedure Calling using OSI (RPC)
While this work was going on, ECMA was working on its own standard, already cited, for remote procedure calling, where the client's call of a procedure and the return of the results by the server are communicated through OSI protocols. That this was happening gives evidence of the need for a standard definition of procedure calling, but (as is all too common in the standards world) it was done without, for a time, any apparent awareness of the project in WG11. In the tradition of the area, the RPC folk went ahead and defined their own, based on their perception of need, but also a specific model of procedure calling in an OSI environment. When the two projects later came together in ISO (though in different committees and working groups), this provided valuable source material for the development of LIPC, but it was apparent that, being predicated on the client and server communicating by OSI protocols, in LIPC terms it was constrained by limitations at the communication level which are independent of the language level which LIPC addresses. Another way of putting it is that RPC is driven by OSI rather than by procedure calling.
In principle LIPC and RPC were and are not in conflict; the task remaining for both WG11 and the equivalent OSI group, SC21/WG8/RPC, is to ensure that they can coexist without conflict. It is vital that LIPC is not defined in a way that would create problems for the RPC/OSI community, since RPC applications are potentially just as important as procedure library and mixed language applications - indeed can complement them by extending their range. However, as mentioned earlier it is equally vital that LIPC itself not be constrained by communications considerations. These matters are being handled through consistent use of definitions in a common Interface Definition Notation (IDN) employed in both RPC and LIPC and, because of the need in both for definitions of parameter datatypes, in LID also.
The model that has developed can be shown in schematic form in the following way. In the purely LIPC situation, the model is simply:
client --------------------------------------------- server
  |                                                    |
  |                                                    |
  |--------------- common environment ----------------|
where the common understanding above the line is LIPC's concern, and how the common environment is provided to support facilities using that common understanding is below the line and is not LIPC's concern.
Since RPC is dependent on OSI protocols, it is inevitably involved with the representational and implementation levels, at least to the extent of requesting lower level OSI protocols, so the model becomes this:
client ----------- virtual contract ----------- server
  |                                                |
  |                                                |
RPC client ------- service contract ------- RPC server
in which, as before, LIPC is concerned with what is above the line and RPC is concerned with what is below the line. What matters is to ensure that the LIPC virtual contract and the RPC service contract match, and share the same view of what it means to call a procedure, to pass a parameter, and to pass back results. The two have been well aligned for a long time now, and any difficulties in maintaining that are likely to be procedural rather than technical. The two are (end 1993) at different stages - RPC at DIS but LIPC one stage back at CD (Committee Draft) - but the important aspects of LIPC have been stable for a considerable time, with the most recent work devoted to the presentation of the model rather than to the underlying model itself. LIPC has lagged slightly partly because LID needed to be in place first - RPC needs only parts of LID, not all of it - but mostly because delays caused by enforced changes of editor have not, given the required periods for comment in the standardisation process, been caught up.
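The two-level model above can be sketched in code. This is a hypothetical illustration, not taken from either standard: the client sees only the virtual contract (an ordinary procedure call), while a stub below the line discharges it through a service contract with whatever transport the environment provides (OSI protocols, in RPC's case). The names (transport, server_dispatch, add) and the use of JSON as an encoding are the present author's stand-ins.

```python
# Hypothetical sketch of the virtual contract / service contract split.
import json

# Server side: the procedure as the server-mode processor executes it.
server_dispatch = {"add": lambda a, b: a + b}

def transport(request: str) -> str:
    """Stand-in for the communications level: it moves encoded requests
    and replies, knowing nothing of the language-level meaning of the
    call. In RPC this role is played by OSI protocols."""
    call = json.loads(request)
    result = server_dispatch[call["procedure"]](*call["arguments"])
    return json.dumps({"result": result})

def add(a, b):
    """Client stub: presents the virtual contract (an ordinary call) and
    fulfils it through the service contract (encode, transmit, decode)."""
    reply = transport(json.dumps({"procedure": "add", "arguments": [a, b]}))
    return json.loads(reply)["result"]

# To the client, add(2, 3) is indistinguishable from a local call.
```

Provided the two contracts share the same view of calling, parameter passing and result return, the stub layer can be exchanged for a different transport without the client-level call changing at all, which is precisely the coexistence the two working groups must ensure.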
Further possible language independent standards
Those are the actual projects in train - and mostly nearing completion - at present, but there are numerous other possibilities for generic language independent standards. Input-output is an obvious example: for text I/O at least, it is ludicrous that every language definition has to have long chapters each reinventing, with variations, that particular wheel. Yet old attitudes and habits of mind die hard; not very long ago someone suggested to a language standards group modelling their I/O on what was in another language standard, and this was rejected as unthinkable! Some related aspects of this problem are discussed in [Meek 1993].
Further possibilities are file handling, array handling, parallel processing, and exception handling (though not "event handling" in general, since this would cover process control facilities and would qualify as "functional" for most existing languages). Some of these have already been the subject of some study, though none have yet reached the status of an official project.
Much may depend on how well the standards already described are accepted when they appear; if they are successful, the benefits of language independent definitions for the many areas of commonality will be more widely appreciated. Much will depend upon us, the users, insisting that these standards are indeed used, and incorporated in products. We are the ones, after all, who most suffer the difficulties and costs of incompatibilities.
It is clear that we are still on a steep section of the learning curve for dealing with language independent facilities and standards, both for those working on the language independent projects and for those in the various language communities. As we have seen, traditionally the language communities have been largely independent and disjoint, doing things their own way and talking only to their own kind - and those who move to the cross-language projects cannot always cast off immediately all the inbuilt assumptions from their respective backgrounds, whether they be from Fortran, Cobol, Pascal, C or whatever. It is hoped that this paper will help to clarify some of the issues and help readers to climb a bit further up the learning curve.
Although revised and substantially rewritten for the purposes of this paper, a substantial part of the material used above was originally drafted for a WG11 document, reference N194R, while much of that used in the discussion of LIA was originally drafted and appears (see pp 50-52) in the book User Needs in Information Technology Standards [Evans, Meek and Walker 1993]. The author is grateful to various members of WG11, in particular Willem Wakker and Brian Wichmann, for their helpful comments. However, the author is of course solely responsible for the paper in its current (non-WG11) form, an earlier version of which was presented at the DECUS Symposium in Solihull, UK, in May 1992. It is in order not to do too much violence to this earlier version that this paper overruns the length limit now normally in operation for this journal, and the author is especially grateful to the editor for allowing this indulgence.
Standards and drafts
ANSI/IEEE Std 754:1985, standard for binary floating-point arithmetic
ECMA-127, RPC (Remote procedure call using OSI), final draft 2nd edition, January 1990
ISO/IEC TR 10176, Guidelines for the preparation of programming language standards, 1991
ISO/IEC DIS 10967-1, Language Independent Arithmetic, Part 1: Integer and floating point arithmetic, 1993
ISO/IEC DIS 11404, Language Independent Datatypes, 1993
ISO/IEC JTC1 DIS 11578, Remote Procedure Call, 1993
ISO/IEC JTC1 CD 13886, Language Independent Procedure Calling, 1993
[Evans, Meek and Walker 1993] C.D. Evans, B.L. Meek and R.S. Walker, User Needs in Information Technology Standards, Butterworth-Heinemann 1993
[Kahan 1992] Kahan, W., Analysis and refutation of the LCAS, Sigplan Notices of the ACM, Vol 27 No 1, pp 61-74, January 1992
[Meek 1990] Meek, B.L., Two-valued datatypes, Sigplan Notices of the ACM, Vol 25 No 8, pp 75-79, August 1990
[Meek 1993] Meek, B.L., Problems of language bindings, Computer Standards and Interfaces, Vol 15 No 4, pp 353-360, 1993
[Payne, Schaffert and Wichmann 1990] Payne, M., Schaffert, C. and Wichmann, B., Proposal for a language compatible arithmetic standard, Sigplan Notices of the ACM, Vol 25 No 1, pp 59-86, January 1990
[Wichmann 1990] Wichmann, B.A., Getting the correct answers, National Physical Laboratory Report DITC 167/90, June 1990
Note added in 1995: there are now two follow-up papers, A taxonomy of datatypes and What is a procedure call?.