17. Portability: Software Portability and Keeping Up Standards

The realization that the operating systems of the target machines were as great an obstacle to portability as their hardware architecture led us to a seemingly radical suggestion: to evade that part of the problem altogether by moving the operating system itself.

Portability of C Programs and the UNIX System (1978)

Unix was the first production operating system to be ported between differing processor families (Version 6 Unix, 1976–77). Today, Unix is routinely ported to every new machine powerful enough to sport a memory-management unit. Unix applications are routinely moved between Unixes running on wildly differing hardware; in fact, it is unheard of for a port to fail.

Portability has always been one of Unix’s principal advantages. Unix programmers tend to write on the assumption that hardware is evanescent and only the Unix API is stable, making as few assumptions as possible about machine specifics such as word length, endianness or memory architecture. In fact, code that is hardware-dependent in any way that goes beyond the abstract machine model of C is considered bad form in Unix circles, and only really tolerated in very special cases like operating system kernels.

Unix programmers have learned that it is easy to be wrong when anticipating that a software project will have a short lifetime.1 Thus, they tend to avoid making software dependent on specific and perishable technologies, and to lean heavily on open standards. These habits of writing for portability are so ingrained in the Unix tradition that they are applied even to small single-use projects that are thought of as throwaway code. They have had secondary effects all through the design of the Unix development toolkit, and on programming languages like Perl and Python and Tcl that were developed under Unix.

1 PDP-7 Unix and Linux were both examples of unexpected persistence. Unix originated as a research toy hacked together by some researchers between projects, half to play with file-system ideas and half to host a game. The second was summed up by its creator as “My terminal emulator grew legs” [Torvalds].

The direct benefit of portability is that it is normal for Unix software to outlive its original hardware platform, so tools and applications don’t have to be reinvented every few years. Today, applications originally written for Version 7 Unix (1979) are routinely used not merely on Unixes genetically descended from V7, but on variants like Linux in which the operating system API was written from a Unix specification and shares no code with the Bell Labs source tree.

The indirect benefits are less obvious but may be more important. The discipline of portability tends to exert a simplifying influence on architectures, interfaces, and implementations. This both increases the odds of project success and reduces life-cycle maintenance costs.

In this chapter, we’ll survey the scope and history of Unix standards. We’ll discuss which ones are still relevant today and describe the areas of greater and lesser variance in the Unix API. We’ll examine the tools and practices that Unix developers use to keep code portable, and develop some guides to good practice.

17.1 Evolution of C

The central fact of the Unix programming experience has always been the stability of the C language and the handful of service interfaces that always travel with it (notably, the standard I/O library and friends). The fact that a language that originated in 1973 has required as little change as this one has in thirty years of heavy use is truly remarkable, and without parallel anywhere else in computer science or engineering.

In Chapter 4, we argued that C has been successful because it acts as a layer of thin glue over computer hardware approximating the “standard architecture” of [BlaauwBrooks]. There is, of course, more to the story than that. To understand the rest of the story, we’ll need to take a brief look at the history of C.

17.1.1 Early History of C

C began life in 1971 as a systems-programming language for the PDP-11 port of Unix, based on Ken Thompson’s earlier B interpreter which had in turn been modeled on BCPL, the Basic Common Programming Language designed at Cambridge University in 1966–67.2

2 The ’C’ in C therefore stands for Common—or, perhaps, for ’Christopher’. BCPL originally stood for “Bootstrap CPL”—a much simplified version of CPL, the very interesting but overly ambitious and never implemented Common Programming Language of Oxford and Cambridge, also known affectionately as “Christopher’s Programming Language” after its prime advocate, computer-science pioneer Christopher Strachey.

Dennis Ritchie’s original C compiler (often called the “DMR” compiler after his initials) served the rapidly growing community around Unix versions 5, 6, and 7. Version 6 C spawned Whitesmiths C, a reimplementation that became the first commercial C compiler and the nucleus of IDRIS, the first Unix workalike. But most modern C implementations are patterned on Stephen C. Johnson’s “portable C compiler” (PCC), which debuted in Version 7 and replaced the DMR compiler entirely in both System V and the BSD 4.x releases.

In 1976, Version 6 C introduced the typedef, union, and unsigned int declarations. The approved syntax for variable initializations and some compound operators also changed.

The original description of C was Brian Kernighan and Dennis M. Ritchie’s The C Programming Language, aka “the White Book” [Kernighan-Ritchie]. It was published in 1978, the same year the Whitesmiths C compiler became available.

The White Book described enhanced Version 6 C, with one significant exception involving the handling of public storage. Ritchie’s original intention had been to model C’s rules on FORTRAN COMMON declarations, on the theory that any machine that could handle FORTRAN would be ready for C. In the common-block model, a public variable may be declared multiple times; identical declarations are merged by the linker. But two early C ports (to Honeywell and IBM 360 mainframes) happened to be to machines with very limited common storage or a primitive linker or both. Thus, the Version 6 C compiler was moved to the stricter definition-reference model (requiring at most one definition of any given public variable and the extern keyword tagging references to it) described in [Kernighan-Ritchie].

This decision was reversed in the C compiler that shipped with Version 7 after it developed that a great deal of existing source depended on the looser rules. Pressure for backward-compatibility would foil yet another attempt to switch (in 1983’s System V Release 1) before the ANSI Draft Standard finally settled on definition-reference rules in 1988. Common-block public storage is still admitted as an acceptable variation by the standard.
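A minimal sketch of the definition-reference model as the standard settled it (the file and variable names here are hypothetical):

    /* file1.c -- exactly one definition, which allocates the storage */
    int nusers = 0;

    /* file2.c -- any number of references, each tagged with extern */
    extern int nusers;

    void add_user(void)
    {
        nusers++;
    }

Under the older common-block rules, both files could simply have declared int nusers; and relied on the linker to merge the duplicates.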

V7 C introduced enum and treated struct and union values as first-class objects that could be assigned, passed as arguments, and returned from functions (rather than being passed around by address).
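A fragment like the following, which copies and returns structure values rather than pointers, only became legal with the V7 compiler (the type and function are merely illustrative):

    struct point { int x, y; };

    struct point scale(struct point p, int factor)   /* struct passed by value */
    {
        struct point q;
        q = p;                  /* structure assignment, new in V7 */
        q.x *= factor;
        q.y *= factor;
        return q;               /* structure returned by value, not by address */
    }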

Another major change in V7 was that Unix data structure declarations were now documented in header files, and included. Previous Unixes had actually printed the data structures (e.g., for directories) in the manual, from which people would copy them into their code. Needless to say, this was a major portability problem.

—Steve Johnson

The System III C version of the PCC compiler (which also shipped with BSD 4.1c) changed the handling of struct declarations so that members with the same names in different structs would not clash. It also introduced void and unsigned char declarations. The scope of extern declarations local to a function was restricted to the function, and no longer included all code following it.

The ANSI C Draft Proposed Standard added const (for read-only storage) and volatile (for locations such as memory-mapped I/O registers that might be modified asynchronously from the thread of program control). The unsigned type modifier was generalized to apply to any type, and a symmetrical signed was added. Initialization syntax for auto array and structure initializers and union types was added. Most importantly, function prototypes were added.
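A short sketch of what the new declarations look like (the function and variable names are hypothetical):

    /* K&R-style declaration: argument types are unchecked at call sites */
    double distance();

    /* ANSI C prototype: the compiler checks and converts arguments */
    double distance(double x, double y);

    /* The new qualifiers, also from the ANSI draft */
    const char *version = "1.0";             /* read-only data */
    volatile unsigned int *status_register;  /* may change outside program control */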

The most important changes in early C were the switch to definition-reference and the introduction of function prototypes in the Draft Proposed ANSI C Standard. The language has been essentially stable since copies of the X3J11 committee’s working papers on the Draft Proposed Standard signaled the committee’s intentions to compiler implementers in 1985–1986.

A more detailed history of early C, written by its designer, can be found at [Ritchie93].

17.1.2 C Standards

C standards development has been a conservative process with great care taken to preserve the spirit of the original C language, and an emphasis on ratifying experiments in existing compilers rather than inventing new features. The C9X charter3 document is an excellent expression of this mission.

3 Available on the Web <http://anubis.dkuug.dk/JTC1/SC22/WG14/www/charter>.

Work on the first official C standard began in 1983 under the auspices of the X3J11 ANSI committee. The major functional additions to the language were settled by the end of 1986, at which point it became common for programmers to distinguish between “K&R C” and “ANSI C”.

Many people don’t realize how unusual the C standardization effort, especially the original ANSI C work, was in its insistence on standardizing only tested features. Most language standard committees spend much of their time inventing new features, often with little consideration of how they might be implemented. Indeed, the few ANSI C features that were invented from scratch—e.g., the notorious “trigraphs”—were the most disliked and least successful features of C89.

—Henry Spencer

Void pointers were invented as part of the standards effort, and have been a winner. But Henry’s point is still well taken.

—Steve Johnson

While the core of ANSI C was settled early, arguments over the contents of the standard libraries dragged on for years. The formal standard was not issued until the end of 1989, well after most compilers had implemented the 1985 recommendations. The standard was originally known as ANSI X3.159, but was redesignated ISO/IEC 9899:1990 when the International Standards Organization (ISO) took over sponsorship in 1990. The language variant it describes is generally known as C89 or C90.

The first book on C and Unix portability practice, Portable C and Unix Systems Programming [Lapin], was published in 1987 (I wrote it under a corporate pseudonym forced on me by my employers at the time). The Second Edition of [Kernighan-Ritchie] came out in 1988.

A very minor revision of C89, known as Amendment 1, AM1, or C93, was floated in 1993. It added more support for wide characters and Unicode. This became ISO/IEC 9899–1:1994.

Revision of the C89 standard began in 1993. In 1999, ISO/IEC 9899 (generally known as C99) was adopted by ISO. It incorporated Amendment 1, and added a great many minor features. Perhaps the most significant one for most programmers is the C++-like ability to declare variables at any point in a block, rather than just at the beginning. Macros with a variable number of arguments were also added.
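Both conveniences appear in this minimal C99 sketch (the trace macro is illustrative):

    #include <stdio.h>

    /* C99: a macro may take a variable number of arguments */
    #define trace(...) fprintf(stderr, __VA_ARGS__)

    int main(void)
    {
        puts("squares:");
        for (int i = 0; i < 3; i++)     /* C99: i declared here, after a statement */
            trace("%d squared is %d\n", i, i * i);
        return 0;
    }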

The C9X working group has a Web page <http://anubis.dkuug.dk/JTC1/SC22/WG14/www/projects>, but no third standards effort is planned as of mid-2003. They are developing an addendum on C for embedded systems.

Standardization of C has been greatly aided by the fact that working and largely compatible implementations were running on a wide variety of systems before standards work was begun. This made it harder to argue about what features should be in the standard.

17.2 Unix Standards

The 1973 rewrite of Unix in C made it unprecedentedly easy to port and modify. As a result, the ancestral Unix diverged into a family of operating systems early on. Unix standards originally developed to reconcile the APIs of the different branches of the family tree.

The Unix standards that evolved after 1985 were quite successful at this—so much so that they serve as valuable documentation of the API of modern Unix implementations. In fact, real-world Unixes follow published standards so closely that developers can (and frequently do) lean more on documents like the POSIX specification than on the official manual pages for the Unix variant they happen to be using.

In fact, on the newer open-source Unixes (such as Linux), it is common for operating-system features to have been engineered using published standards as the specification. We’ll return to this point when we examine the RFC standards process later in this chapter.

17.2.1 Standards and the Unix Wars

The original motivation for the development of Unix standards was the split between the AT&T and Berkeley lines of development that we examined in Chapter 2.

The 4.x BSD Unixes were descended from the 1979 Version 7. After the release of 4.1BSD in 1980 the BSD line quickly developed a reputation as the cutting edge of Unix. Important additions included the vi visual editor, job control facilities for managing multiple foreground and background tasks from a single console, and improvements in signals (see Chapter 7). By far the most important addition was to be TCP/IP networking, but though Berkeley got the contract to do it in 1980, TCP/IP was not to ship in an external release for three years.

But another version, 1981’s System III, became the basis of AT&T’s later development. System III reworked the Version 7 terminals interface into a cleaner and more elegant form that was completely incompatible with the Berkeley enhancements. It retained the older (non-resetting) semantics of signals (again, see Chapter 7 for discussion of this point). The January 1983 release of System V Release 1 incorporated some BSD utilities (such as vi(1)).

The first attempt to bridge the gap came in February 1983 from UniForum, an influential Unix user group. Their UniForum 1983 Draft Standard (UDS 83) described a “core Unix System” consisting of a subset of the System III kernel and libraries plus a file-locking primitive. AT&T declared support for UDS 83, but the standard was an inadequate subset of evolving practice based on 4.1BSD. The problem was exacerbated by the July 1983 release of 4.2BSD, which added many new features (including TCP/IP networking) and introduced some subtle incompatibilities with the ancestral Version 7.

The 1984 divestiture of the Bell operating companies and the beginnings of the Unix wars (see Chapter 2) significantly complicated matters. Sun Microsystems was leading the workstation industry in a BSD direction; AT&T was trying to get into the computer business and use control of Unix as a strategic weapon even as it continued to license the operating system to competitors like Sun. All the vendors were making business decisions to differentiate their versions of Unix for competitive advantage.

During the Unix wars, technical standardization became something that cooperating technical people pushed for and most product managers accepted grudgingly or actively resisted. The one large and important exception was AT&T, which declared its intention to cooperate with user groups in setting standards when it announced System V Release 2 (SVr2) in January 1984. The second revision of the UniForum Draft Standard, in 1984, both tracked and influenced the API of SVr2. Later Unix standards also tended to track System V except in areas where BSD facilities were clearly functionally superior (thus, for example, modern Unix standards describe the System V terminal controls rather than the BSD interface to the same facilities).

In 1985, AT&T released the System V Interface Definition (SVID). SVID provided a more formal description of the SVr2 API, incorporating UDS 84. Later revisions SVID2 and SVID3 tracked the interfaces of System V releases 3 and 4. SVID became the basis for the POSIX standards, which ultimately tipped most of the Berkeley/AT&T disputes over system and C library calls in AT&T’s favor.

But this would not become obvious for a few years yet; meanwhile, the Unix wars raged on. For example, 1985 saw the release of two competing API standards for file system sharing over networks: Sun’s Network File System (NFS) and AT&T’s Remote File System (RFS). Sun’s NFS prevailed because Sun was willing to share not merely specifications but open-source code with others.

The lesson of this success should have been all the more pointed because on purely logical grounds RFS was the superior model. It supported better file-locking semantics and better mapping among user identities on different systems, and generally made an effort to get the finer details of Unix file system semantics precisely right, unlike NFS. The lesson was ignored, however, even when it was repeated in 1987 by the open-source X windowing system’s victory over Sun’s proprietary Networked Window System (NeWS).

After 1985 the main thrust of Unix standardization passed to the Institute of Electrical and Electronics Engineers (IEEE). The IEEE’s 1003 committee developed a series of standards generally known as POSIX.4 These went beyond merely describing system calls and C library facilities; they specified detailed semantics of a shell and a minimum command set, and also detailed bindings for various non-C programming languages. The first release in 1990 was followed by a second edition in 1996. The International Standards Organization adopted them as ISO/IEC 9945.

4 The original 1986 trial-use standard was called IEEE-IX. The name ’POSIX’ was suggested by Richard Stallman. The introduction to POSIX.1 says: “It is expected to be pronounced pahz-icks as in positive, not poh-six, or other variations. The pronunciation has been published in an attempt to promulgate a standardized way of referring to a standard operating system interface”.

Key POSIX standards include the following:

1003.1 (released 1990)

Library procedures. Described the C system call API, much like Version 7 except for signals and the terminal-control interface.

1003.2 (released 1992)

Standard shell and utilities. Shell semantics strongly resemble those of the System V Bourne shell.

1003.4 (released 1993)

Real-time Unix. Binary semaphores, process memory locking, memory-mapped files, shared memory, priority scheduling, real-time signals, clocks and timers, IPC message passing, synchronized I/O, asynchronous I/O, real-time files.

In the 1996 Second Edition, 1003.4 was split into 1003.1b (real-time) and 1003.1c (threads).

Despite being underspecified in a couple of key areas such as signal-handling semantics and omitting BSD sockets, the original POSIX standards became the basis of all later Unix standardization work. They are still cited as an authority, albeit indirectly through references like POSIX Programmer’s Guide [Lewine]. The de facto Unix API standard is still “POSIX plus sockets”, with later standards mainly adding features and specifying conformance in unusual edge cases more closely.

The next player on the scene was X/Open (later renamed the Open Group), a consortium of Unix vendors formed in 1984. Their X/Open Portability Guides (XPGs) initially developed in parallel with the POSIX drafts; after 1990 the XPGs incorporated and extended POSIX. Unlike POSIX, which attempted to capture a safe subset of all Unixes, the XPGs were oriented more toward common practice at the leading edge; even XPG1 in 1985, spanning SVr2 and 4.2BSD, included sockets.

XPG2 in 1987 added a terminal-handling API that was essentially System V curses(3). XPG3 in 1990 merged in the X11 API. XPG4 in 1992 mandated full compliance with the 1989 ANSI C standard. XPG2, 3, and 4 were heavily concerned with support of internationalization and described an elaborate API for handling codesets and message catalogs.

In reading about Unix standards you might come across references to “Spec 1170” (from 1993), “Unix 95” (from 1995) and “Unix 98” (from 1998). These were certification marks based on the X/Open standards; they are now of historical interest only. But the work done on XPG4 turned into Spec 1170, which turned into the first version of the Single Unix Specification (SUS).

In 1993 seventy-five systems and software vendors, including every major Unix company, put a final end to the Unix wars when they declared backing for X/Open to develop a common definition of Unix. As part of the arrangement, X/Open acquired the rights to the Unix trademark. The merged standard became the Single Unix Specification version 1. It was followed in 1997 by a version 2. In 1999 X/Open absorbed the POSIX activity.

In 2001, X/Open (now The Open Group) issued the Single Unix Specification version 3 <http://www.unix.org/version3/>. All the threads of Unix API standardization were finally gathered into one bundle. This reflected facts on the ground; the different varieties of Unix had re-converged on a common API. And, at least among old-timers who remembered the turbulence of the 1980s, there was much rejoicing.

17.2.2 The Ghost at the Victory Banquet

There was, unfortunately, an awkward detail—the old-school Unix vendors who had backed the effort were under severe pressure from the new school of open-source Unixes, and were in some cases in the process of abandoning (in favor of Linux) the proprietary Unixes for which they had gone to so much effort to secure conformance.

The testing needed to verify Single Unix Specification conformance is an expensive proposition. It would need to be done on a per-distribution basis, and is well out of the reach of most distributors of open-source operating systems. In any case, Linux changes so fast that any given release of a distribution would probably be obsolete by the time it could get certified.5

5 One Linux distributor, Lasermoon in Great Britain, did achieve POSIX.1 FIPS 151–2 certification—and went out of business, because potential customers didn’t care.

Standards like the Single Unix Specification have not entirely lost their relevance. They’re still valuable guides for Unix implementers. But how The Open Group and other institutions of the old-school Unix standardization process will adapt to the rapid tempo of open-source releases (and to the low- or zero-budget operation of open-source development groups!) remains to be seen.

17.2.3 Unix Standards in the Open-Source World

In the mid-1990s, the open-source community began standardization efforts of its own. These efforts built on the source-code-level compatibility secured by POSIX and its descendants. Linux, in particular, had been written from scratch in a way that depended on the availability of Unix API standards like POSIX.6

6 See Just for Fun [Torvalds] for discussion.

In 1998 Oracle ported its market-leading database product to Linux, in a move that was rightly seen as a major breakthrough in Linux’s mainstream acceptance. The engineer in charge of the port provided a definitive demonstration that API standards had done their job when he was asked by a reporter what technical challenges Oracle had had to surmount. The engineer’s reply was “We typed ’make’.”

The problem for the new-school Unixes, therefore, was not API compatibility at the source-code level. Everybody took for granted the ability to move source code between different Linux, BSD, and proprietary-Unix distributions without more than a trivial amount of porting labor. The new problem was not source compatibility but binary compatibility. For the ground under Unix had shifted in a subtle way as a consequence of the triumph of commodity PC hardware.

In the old days, each Unix had run on what was effectively its own hardware platform. There was enough variety in processor instruction sets and machine architectures that applications had to be ported at source level to move at all. On the other hand, there were relatively few major Unix releases, each with relatively long service lifetimes. Application vendors like Oracle could afford the cost of building and shipping separate binary distributions for each of three or four hardware/software combinations, because they could amortize the low cost of source-code porting over large customer populations and a long enough product life cycle.

But then the minicomputer and workstation vendors were swamped by inexpensive 386-based supermicros, and open-source Unixes changed the rules. Vendors found they no longer had a stable platform to ship their binaries to.

The superficial problem, at first, was the large number of Unix distributors—but as the Linux distribution market consolidated, it became clear that the real issue was the rate of change over time. APIs were stable, but the expected locations of system administrative files, utility programs, and the path prefixes for things like user mailboxes and system log files kept changing.

The first standards effort to develop within the new-school Linux and BSD community itself (beginning in 1993) was the Filesystem Hierarchy Standard (FHS). This was incorporated into the Linux Standard Base (LSB), which also standardized an expected set of service libraries and helper applications. Both standards became activities of the Free Standards Group <http://www.freestandards.org/>, which by 2001 had developed a role similar to X/Open’s position amidst the old-school Unix vendors.

17.3 IETF and the RFC Standards Process

When the Unix community merged with the culture of Internet engineers, it also inherited a mindset formed by the RFC standards process of the Internet Engineering Task Force (IETF). In IETF tradition, standards have to arise from experience with a working prototype implementation—but once they become standards, code that does not conform to them is considered broken and mercilessly scrapped.

This is not, sadly, the way standards are normally developed. The history of computing is full of instances in which technical standards were derived by a process that combined the worst features of philosophical axe-grinding with murky back-room politics—producing specifications that failed to resemble anything ever implemented. Worse, many were either so demanding that they could not be practically implemented or so underspecified that they caused more confusion than they resolved. Then they were promulgated to vendors who ignored them wherever they were inconvenient.

One of the more notorious examples of standards nonsense was the Open Systems Interconnect networking protocols that briefly contended with TCP/IP in the 1980s—its 7-layer model looked elegant from a distance but proved overcomplicated and unimplementable in practice.7 The ANSI X3.64 standard for video-display terminal capabilities is another famous horror story, bedeviled by subtle incompatibilities between legally conformant implementations. Even after character-cell terminals have been largely displaced by bitmapped displays, these continue to cause problems (in particular, this is why the function and special keys in your xterm(1) will occasionally break). The RS232 standard for serial communications was so underspecified that it sometimes seemed that no two serial cables were alike. Standards horror stories of similar kind could fill a book the size of this one.

7 A Web search is likely to turn up a popular page comparing the OSI 7-layer model with the Taco Bell 7-layer burrito—unfavorably to the former.

The IETF’s philosophy has been famously summarized as “We reject kings, presidents, and voting. We believe in rough consensus and running code”.8 That demand for a working implementation first has saved it from the worst category of blunders. In fact its criterion is stronger:

8 This line was first uttered by senior IETF cadre Dave Clark at the tumultuous 1992 meeting during which the IETF rejected the Open Systems Interconnect protocol.

[A] candidate specification must be implemented and tested for correct operation and interoperability by multiple independent parties and utilized in increasingly demanding environments, before it can be adopted as an Internet Standard.

The Internet Standards Process—Revision 3 (RFC 2026)

All IETF standards pass through a stage as RFCs (Requests for Comment). The submission process for RFCs is deliberately informal. RFCs may propose standards, survey results, suggest philosophical bases for subsequent RFCs, or even make jokes. The appearance of the annual April 1st RFC is the closest equivalent of a high holy day observance among Internet hackers, and has produced such gems as A Standard for the Transmission of IP Datagrams on Avian Carriers (RFC 1149),9 The Hyper Text Coffee Pot Control Protocol (RFC 2324),10 and The Security Flag in the IPv4 Header (RFC 3514).11

9 RFC 1149 is available on the Web <http://www.ietf.org/rfc/rfc1149.txt>. Not only that, it has been implemented <http://www.blug.linux.no/rfc1149/writeup.html>.

10 RFC 2324 is available on the Web <http://www.ietf.org/rfc/rfc2324.txt>.

11 RFC 3514 is available on the Web <http://www.ietf.org/rfc/rfc3514.txt>.

But joke RFCs are about the only sort of submission that instantly becomes an RFC. Serious proposals actually start as “Internet-Drafts” floated for public comment via IETF directories on several well-known hosts. Individual Internet-Drafts have no formal status and can be changed or dropped by their originators at any time. If they are neither withdrawn nor promoted to RFC status, they are removed after six months.

Internet-Drafts are not specifications, and software implementers and vendors are specifically barred from claiming compliance with them as if they were specifications. Internet-Drafts are focal points for discussion, usually in a working group connected through an electronic mailing list. When the working group leadership deems fit, the Internet-Draft is submitted to the RFC editor for assignment of an RFC number.

Once an Internet-Draft has been published with an RFC number, it is a specification to which implementers may claim conformance. It is expected that the authors of the RFC and the community at large will begin correcting the specification with field experience.

Some RFCs go no further. A specification that fails to attract use and survive field testing can be quietly forgotten, and eventually marked “Not recommended” or “Superseded” by the RFC editor. Failed proposals are accepted as one of the overheads of the process, and no stigma is attached to being associated with one.

The steering committee of the IETF (IESG, or Internet Engineering Steering Group) is responsible for putting successful RFCs on the standards track. They do this by designating the RFC a ’Proposed Standard’. For the RFC to qualify, the specification must be stable, peer-reviewed, and have attracted significant interest from the Internet community. Implementation experience is not absolutely required before an RFC is given Proposed Standard designation, but it is considered highly desirable, and the IESG may require it if the RFC touches the Internet core protocols or might be otherwise destabilizing.

Proposed Standards are still subject to revision, and may even be withdrawn if the IESG and IETF identify a better solution. They are not recommended for use in “disruption-sensitive environments”—don’t put them in your air-traffic-control systems or on intensive-care equipment.

When there are at least two working, complete, independently originated, and interoperable implementations of a Proposed Standard, the IESG may elevate it to Draft Standard status. RFC 2026 says: “Elevation to Draft Standard is a major advance in status, indicating a strong belief that the specification is mature and will be useful”.

Once an RFC has reached Draft Standard status, it will be changed only to address bugs in the logic of the specification. Draft Standards are expected to be ready for deployment in disruption-sensitive environments.

When a Draft Standard has passed the test of widespread implementation and reached general acceptance, it may be blessed as an Internet Standard. Internet Standards keep their RFC numbers, but also get a number in the STD series. At time of writing there are over 3000 RFCs but only 60 STDs.

RFCs not on standards track may be labeled Experimental, Informational (the joke RFCs get this label), or Historic. The Historic label is applied to obsolete standards. RFC 2026 notes: “(Purists have suggested that the word should be ’Historical’; however, at this point, the use of ’Historic’ is historical.)”

The IETF standards process is designed to encourage standardization driven by practice rather than theory, and to ensure that standard protocols have undergone rigorous peer review and testing. The success of this model is evident in its results—the worldwide Internet.

17.4 Specifications as DNA, Code as RNA

Even in the paleolithic period of the PDP-7, Unix programmers had always been more prone than their counterparts elsewhere to treat old code as disposable. This was doubtless a product of the Unix tradition’s emphasis on modularity, which makes it easier to discard and replace small pieces of systems without losing everything. Unix programmers have learned by experience that trying to salvage bad code or a bad design is often more work than rebooting the project. Where in other programming cultures the instinct would be to patch the monster monolith because you have so much work invested in it, the Unix instinct is usually to scrap and rebuild.

The IETF tradition reinforced this by teaching us to think of code as secondary to standards. Standards are what enable programs to cooperate; they knit our technologies into wholes that are more than the sum of the parts. The IETF showed us that careful standardization, aimed at capturing the best of existing practice, is a powerful form of humility that achieves more than grandiose attempts to remake the world around a never-implemented ideal.

After 1980, the impact of that lesson was increasingly widely felt in the Unix community. Thus, while the ANSI/ISO C standard from 1989 is not completely without flaws, it is exceptionally clean and practical for a standard of its size and importance. The Single Unix Specification contains fossils from three decades of experimentation and false starts in a more complicated domain, and is therefore messier than ANSI C. But the component standards it was composed from are quite good; strong evidence for this is the fact that Linus Torvalds successfully built a Unix from scratch by reading them. The IETF’s quiet but powerful example created one of the critical pieces of context that made Linus Torvalds’s feat possible.

Respect for published standards and the IETF process has become deeply ingrained in the Unix culture; deliberately violating Internet STDs is simply Not Done. This can sometimes create chasms of mutual incomprehension between people with a Unix background and others prone to assume that the most popular or widely deployed implementation of a protocol is by definition correct—even if it breaks the standard so severely that it will not interoperate with properly conforming software.

The Unix programmer’s respect for published standards is more interesting because he is likely to be rather hostile to a-priori specifications of other kinds. By the time the ’waterfall model’ (specify exhaustively first, then implement, then debug, with no reverse motion at any stage) fell out of favor in the software-engineering literature, it had been an object of derision among Unix programmers for years. Experience, and a strong tradition of collaborative development, had already taught them that prototyping and repeated cycles of test and evolution are a better way.

The Unix tradition clearly recognizes that there can be great value in good specifications, but it demands that they be treated as provisional and subject to revision through field experience in the way that Internet-Drafts and Proposed Standards are. In best Unix practice, the documentation of the program is used as a specification subject to revision analogously to an Internet Proposed Standard.

Unlike other environments, in Unix development the documentation is often written before the program, or at least in conjunction with it. For X11, the core X standards were finished before the first release of X and have remained essentially unchanged since that date. Compatibility among different X systems is improved further by rigorous specification-driven testing.

The existence of a well-written specification made the development of the X test suite much easier. Each statement in the X specification was translated into code to test the implementation; a few minor inconsistencies were uncovered in the specification during this process, but the result is a test suite that covers a significant fraction of the code paths within the sample X library and server, and all without referring to the source code of that implementation.

—Keith Packard

Semiautomation of the test-suite generation proved to be a major advantage. While field experience and advances in the state of the graphics art led many to criticize X on design grounds, and various portions of it (such as the security and user-resource models) came to seem clumsy and over-engineered, the X implementation achieved a remarkable level of stability and cross-vendor interoperation.

In Chapter 9 we discussed the value of pushing coding up to the highest possible level to minimize the effects of constant defect density. Implicit in Keith Packard’s account is the idea that the X documentation constituted no mere wish-list but a form of high-level code. Another key X developer confirms this:

In X, the specification has always ruled. Sometimes specs have bugs that need to be fixed too, but code is usually buggier than specs (for any spec worth its ink, anyway).

—Jim Gettys

Jim goes on to observe that X’s process is actually quite similar to the IETF’s. Nor is its utility limited to constructing good test suites; it means that arguments about the system’s behavior can be conducted at a functional level with respect to the specification, avoiding too much entanglement in implementation issues.

Having a well-considered specification driving development allows for little argument about bug vs. feature; a system which incorrectly implements the specification is broken and should be fixed.

I suspect this is so ingrained into most of us that we lose sight of its power.

A friend of mine who worked for a small software firm east of Bellevue wondered how Linux applications developers could get OS changes synchronized with application releases. In that company, major system-level APIs change frequently to accommodate application whims and so essential OS functionality must often be released along with each application.

I described the power held by the specifications and how the implementation was subservient to them, and then went on to assert that an application which got an unexpected result from a documented interface was either broken or had discovered a bug. He found this concept startling.

Discerning such bugs is a simple matter of verifying the implementation of the interface against the specification. Of course, having source for the implementation makes that a bit easier.

—Keith Packard

This standards-come-first attitude has benefits for end users as well. While that no-longer-small company east of Bellevue has trouble keeping its office suite compatible with its own previous releases, GUI applications written for X11 in 1988 still run without change on today’s X implementations. In the Unix world, this sort of longevity is normal—and the standards-as-DNA attitude is the reason why.

Thus, experience shows that the standards-respecting, scrap-and-rebuild culture of Unix tends to yield better interoperability over extended time than perpetual patching of a code base without a standard to provide guidance and continuity. This may, indeed, be one of the most important Unix lessons.

Keith’s last comment brings us directly to an issue that the success of open-source Unixes has brought to the forefront—the relationship between open standards and open source. We’ll address this at the end of the chapter—but before doing that, it’s time to address the practical question of how Unix programmers can actually use the tremendous body of accumulated standards and lore to achieve software portability.

17.5 Programming for Portability

Software portability is usually thought of in quasi-spatial terms: can this code be moved sideways to existing hardware and software platforms other than the one it was built for? But Unix experience over decades tells us that durability down through time is just as important, if not more so. If we could predict the future of software in detail it would probably be the present—nevertheless, in programming for portability we should try to think about making choices that will base the software on the features of its environment that are likeliest to persist, and avoid technologies that seem likely to face end-of-life in the foreseeable future.

Under Unix, two decades of attention to the issues of specifying portable APIs has largely solved that problem. Facilities described in the Single Unix Specification are likely to be present on all modern Unix platforms today and rather unlikely to go unsupported in the future.

But not all platform dependencies have to do with the system or library APIs. Your implementation language can matter; file-system layout and configuration differences between the source and target system can be a problem as well. But Unix practice has evolved ways to cope.

17.5.1 Portability and Choice of Language

The first issue in programming for portability is your choice of implementation language. All the major languages we surveyed in Chapter 14 are highly portable in the sense that compatible implementations are available across all modern Unixes; for most, implementations under Windows and MacOS are available as well. Portability problems tend to arise not in the core languages but in support libraries and degree of integration with the local environment (especially IPC and concurrent-process management, including the infrastructure for GUIs).

17.5.1.1 C Portability

The core C language is extremely portable. The standard Unix implementation is the GNU C compiler, which is ubiquitous not only in open-source Unixes but in modern proprietary Unixes as well. GNU C has been ported to Windows and classic MacOS, but is not widely used in either environment because it lacks portable bindings to the native GUI.

The standard I/O library, mathematics routines, and internationalization support are portable across all C implementations. File I/O, signals, and process control are portable across Unixes provided one takes care to use only the modern APIs described in the Single Unix Specification; older C code often has thickets of preprocessor conditionals for portability, but those handle legacy pre-POSIX interfaces from older proprietary Unixes that are obsolete or close to it in 2003.
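Signal handling is a representative case: installing handlers through the POSIX sigaction(2) interface, rather than the older signal(2) call whose semantics varied between Unix branches, keeps the code within the Single Unix Specification. A minimal sketch (the handler and flag names are illustrative):

    #include <signal.h>

    static volatile sig_atomic_t got_hup = 0;

    static void on_hup(int sig)
    {
        (void)sig;
        got_hup = 1;                /* set a flag; do the real work in the main loop */
    }

    int install_hup_handler(void)
    {
        struct sigaction sa;

        sa.sa_handler = on_hup;
        sigemptyset(&sa.sa_mask);   /* block no additional signals in the handler */
        sa.sa_flags = SA_RESTART;   /* restart interrupted slow system calls */
        return sigaction(SIGHUP, &sa, NULL);
    }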

C portability starts to be a more serious problem near IPC, threads, and GUI interfaces. We discussed IPC and threads portability issues in Chapter 7. The real practical problem is GUI toolkits. A number of open-source GUI toolkits are universally portable across modern Unixes and to Windows and classic MacOS as well—Tk, wxWindows, GTK, and Qt are four well-known ones with source code and documentation readily discoverable by Web search. But none of them is shipped with all platforms, and (for reasons more legal than technical) none of these offers the native-GUI look and feel on all platforms. We gave some guidelines for coping in Chapter 15.

Volumes have been written on the subject of how to write portable C code. This book is not going to be one of them. Instead, we recommend a careful reading of Recommended C Style and Coding Standards [Cannon] and the chapter on portability in The Practice of Programming [Kernighan-Pike99].

17.5.1.2 C++ Portability

C++ has all the same operating-system-level portability issues as C, and some of its own. An additional one is that the open-source GNU compiler for C++ has lagged substantially behind the proprietary implementations for most of its existence; thus, there is not yet as of mid-2003 a universally deployed equivalent of GNU C on which to base a de-facto standard. Furthermore, no C++ compiler yet implements the full ISO C++98 standard for the language, though GNU C++ comes closest.

17.5.1.3 Shell Portability

Shell-script portability is, unfortunately, poor. The problem is not the shell itself; bash(1) (the open-source Bourne Again shell) has become sufficiently ubiquitous that pure shellscripts can run almost anywhere. The problem is that most shellscripts make heavy use of other commands and filters that are much less portable, and by no means guaranteed to be in the toolkit on any given target machine.

This problem can be overcome by dint of heroic effort, as in the autoconf(1) tools. But it is sufficiently severe that most of the heavier sort of programming that used to be done in shell has moved to second-generation scripting languages like Perl, Python, and Tcl.

17.5.1.4 Perl Portability

Perl has good portability. Stock Perl even offers a portable set of bindings to the Tk toolkit that supports portable GUIs across Unix, MacOS and Windows. One issue dogs it, however. Perl scripts often require add-on libraries from CPAN (the Comprehensive Perl Archive Network) which are not guaranteed to be present with every Perl implementation.

17.5.1.5 Python Portability

Python has excellent portability. Like Perl, stock Python even offers a portable set of bindings to the Tk toolkit that supports portable GUIs across Unix, MacOS, and Windows.

Stock Python has a much richer standard library than Perl, but no equivalent of CPAN for programmers to rely on; instead, important extension modules are routinely incorporated into the stock Python distribution during minor releases. This trades a spatial problem for a temporal one, making Python much less subject to the missing-module effect at the cost of making Python minor version numbers somewhat more important than Perl release levels are. In practice, the tradeoff seems to favor Python.

17.5.1.6 Tcl Portability

Tcl portability is good, overall, but varies sharply by project complexity. The Tk toolkit for cross-platform GUI programming is native to Tcl. As with Python, evolution of the core language has been relatively smooth, with few version-skew problems. Unfortunately, Tcl relies even more heavily than Perl on extension facilities that are not guaranteed to ship with every implementation—and there is no equivalent of CPAN to centrally distribute them.

For smaller projects not reliant on extensions, therefore, Tcl portability is excellent. But larger projects tend to depend heavily on both extensions and (as with shell programming) calling external commands that may or may not be present on the target machine; their portability tends to be poor.

Tcl may have suffered, ironically, from the ease of adding extensions to it. By the time a particular extension started to look interesting as part of the standard distribution, there typically were several different versions of it in existence. At the 1995 Tcl/Tk Workshop, John Ousterhout explained why there was no OO support in the standard Tcl distribution:

Think of five mullahs sitting around in a circle, all saying “Kill him, he’s a heathen”. If I put a specific OO scheme into the core, then one of them will say “Bless you, my son, you may kiss my ring”, and the other four will say “Kill him, he’s a heathen”.

The lot of a language designer is not necessarily a happy one.

17.5.1.7 Java Portability

Java portability is excellent—it was, after all, designed with “write once, run everywhere” as a primary goal. Portability fails, however, to be perfect. The difficulties are mostly version-skew problems between JDK 1.1 with the older AWT GUI toolkit on the one hand, and JDK 1.2 with the newer Swing GUI toolkit on the other. There are several important reasons for these:

• Sun’s AWT design was so deficient that it had to be replaced with Swing.

• Microsoft’s refusal to support Java development on Windows, and its attempt to replace it with C#.

• Microsoft’s decision to hold Internet Explorer’s applet support at the JDK 1.1 level.

• Sun licensing terms that make open-source implementations of JDK 1.2 impossible, retarding its deployment (especially in the Linux world).

For programs that involve GUIs, Java developers seeking portability will, for the foreseeable future, face a choice: Stay in JDK 1.1/AWT with a poorly designed toolkit for maximum portability (including to Microsoft Windows), or get the better toolkit and capabilities of JDK 1.2 at the cost of sacrificing some portability.

Finally, as we noted previously, the Java thread support has portability problems. The Java API, unlike less ambitious operating-system bindings for other languages, bravely tried to bridge the gaps between the diverging process models offered by different operating systems. It does not quite manage the trick.

17.5.1.8 Emacs Lisp Portability

Emacs Lisp portability is excellent. Emacs installations tend to be upgraded frequently, so seriously out-of-date environments are rare. The same extension Lisp is supported everywhere and effectively all extensions are shipped with Emacs itself.

Then, too, the primitive set of Emacs is quite stable. It achieved completeness for the things an editor has to do (manipulating buffers, bashing text) years ago. Only the introduction of X has disturbed this picture at all, and very few Emacs modes need to be aware of X. Portability problems are usually manifestations of quirks in the C-level bindings of operating-system facilities; control of subordinate processes in modes like mail agents is about the only issue where such problems manifest with any frequency.

17.5.2 Avoiding System Dependencies

Once your language and support libraries are chosen, the next portability issue is usually the location of key system files and directories: mail spools, logfile directories and the like. The archetype of this sort of problem is whether the mail spool directory is /var/spool/mail or /var/mail.

Often, you can avoid this sort of dependency by stepping back and reframing the problem. Why are you opening a file in the mail spool directory, anyway? If you’re writing to it, wouldn’t it be better to simply invoke the local mail transport agent to do it for you so the file-locking gets done right? If you’re reading from it, might it be better to query it through a POP3 or IMAP server?

The same sort of question applies elsewhere. If you find yourself opening logfiles manually, shouldn’t you be using syslog(3) instead? Function-call interfaces through the C library are better standardized than system file locations. Use that fact!
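A minimal sketch of logging through syslog(3) rather than opening a logfile at a hard-coded path (the program name and message are illustrative):

    #include <syslog.h>

    void report_startup(void)
    {
        /* syslogd decides where messages actually land on this system */
        openlog("myapp", LOG_PID, LOG_DAEMON);
        syslog(LOG_INFO, "startup complete");
        closelog();
    }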

If you must have system file locations in your code, your best alternative depends on whether you will be distributing in source code or binary form. If you are distributing in source, the autoconf tools we discuss in the next section will help you. If you’re distributing in binary, then it’s good practice to have your program poke around at runtime and see if it can automatically adapt itself to local conditions—say, by actually checking for the existence of /var/mail and /var/spool/mail.
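A sketch of that kind of runtime probe, assuming the two candidate directories just mentioned:

    #include <stddef.h>
    #include <unistd.h>

    /* Return the first mail-spool directory present on this system, or NULL. */
    const char *find_mail_spool(void)
    {
        static const char *candidates[] = { "/var/mail", "/var/spool/mail", NULL };

        for (int i = 0; candidates[i] != NULL; i++)
            if (access(candidates[i], F_OK) == 0)
                return candidates[i];
        return NULL;
    }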

17.5.3 Tools for Portability

You can often use the open-source GNU autoconf(1) toolset we surveyed in Chapter 15 to handle portability issues, do system-configuration probes, and tailor your makefiles. People building from sources today expect to be able to type configure; make; make install and get a clean build. There is a good tutorial on these tools <http://seul.org/docs/autotut/>. Even if you’re distributing in binary, the autoconf(1) tools can help automate away the problem of conditionalizing your code for different platforms.

Other tools address this problem as well; two of the better known are the Imake(1) tool associated with the X windowing system and the Configure tool built by Larry Wall (later the inventor of Perl) and adapted for many different projects. All are at least as complicated as the autoconf suite, are no longer as widely used, and don’t cover as wide a range of target systems.

17.6 Internationalization

An in-depth discussion of code internationalization—designing software so the interface readily incorporates multiple languages and the vagaries of different character sets—would be out of scope for this book. However, a few lessons for good practice do stand out from Unix experience.

First, separate the message base from the code. Good Unix practice is to separate the message strings a program uses from its code, so that message dictionaries in other languages can be plugged in without modifying the code.

The best-known tool for this job is GNU gettext, which requires that you wrap native-language strings that need to be internationalized in a special macro. The macro uses each string as a key into per-language dictionaries which can be supplied as separate files. If no such dictionaries are available (or if they are but the string lookup does not return a match), the macro simply returns its argument, implicitly falling back on the native language in the code.
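In outline, the convention looks something like this (the _() shorthand is the customary one; the text domain name and install path are hypothetical):

    #include <libintl.h>
    #include <locale.h>
    #include <stdio.h>

    #define _(string) gettext(string)    /* customary shorthand for the lookup macro */

    int main(void)
    {
        setlocale(LC_ALL, "");                           /* honor the user's locale */
        bindtextdomain("myapp", "/usr/share/locale");    /* hypothetical domain and path */
        textdomain("myapp");

        /* Looked up in the current language's dictionary; if no match is found,
           the native-language string in the source is used as-is. */
        printf(_("Hello, world\n"));
        return 0;
    }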

While gettext itself is messy and fragile as of mid-2003, its general philosophy is sound. For many projects, it is possible to craft a lighter-weight version of this idea with good results.

Second, there is a clear trend in modern Unixes to scrap all the historical cruft associated with multiple character sets and make applications natively speak UTF-8, the 8-bit shift encoding of the Unicode character set (as opposed to, say, making them natively speak 16-bit wide characters). The low 128 characters of UTF-8 are identical to ASCII, and the first 256 code points of Unicode match Latin-1, which makes this choice backward-compatible with the two most widely used character sets. The fact that XML and Java have made this choice helps, but the momentum is present even where XML and Java are not.

Third, beware of character ranges in regular expressions. The element [a-z] will not necessarily catch all lower-case letters if the script or program it’s in is applied to (say) German, where the sharp-s or ß character is considered lower-case but does not fall in that range; similar problems arise with French accented letters. It’s safer to use [[:lower:]] and the other symbolic ranges described in the POSIX standard.
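A sketch using the POSIX regex API (the test word is illustrative, and whether it matches depends on the locale in effect):

    #include <locale.h>
    #include <regex.h>
    #include <stdio.h>

    int main(void)
    {
        regex_t re;
        const char *word = "straße";

        setlocale(LC_ALL, "");   /* [[:lower:]] is interpreted per the user's locale */

        /* Symbolic character class instead of the ASCII-only range [a-z] */
        if (regcomp(&re, "^[[:lower:]]+$", REG_EXTENDED | REG_NOSUB) != 0)
            return 1;

        printf("%s: %s\n", word,
               regexec(&re, word, 0, NULL, 0) == 0 ? "all lower case" : "no match");
        regfree(&re);
        return 0;
    }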

17.7 Portability, Open Standards, and Open Source

Portability requires standards. Open-source reference implementations are the most effective method known for both promulgating a standard and for pressuring proprietary vendors into conforming. If you are a developer, open-source implementations of a published standard can both tremendously reduce your coding workload and allow your product to benefit (in ways both expected and unexpected) from the labor of others.

Let’s suppose, for example, you are designing image-capture software for a digital camera. Why write your own format for saving image bits or buy proprietary code when (as we noted in Chapter 5) there is a well-tested, full-featured library for writing PNGs in open source?

The (re)invention of open source has had a significant impact on the standards process as well. Though it is not formally a requirement, the IETF has since around 1997 grown increasingly resistant to standards-track RFCs that do not have at least one open-source reference implementation. In the future, it seems likely that conformance to any given standard will increasingly be measured by conformance to (or outright use of!) open-source implementations that have been blessed by the standard’s authors.

The flip side of this is that often the best way to make something a standard is to distribute a high-quality open-source implementation of it.

—Henry Spencer

In the end, the most effective step you can take to ensure the portability of your code is to not rely on proprietary technology. You never know when the closed-source library or tool or code generator or network protocol you are depending on will be end-of-lifed, or when the interface will be changed in some backwards-incompatible way that breaks your project. With open-source code, you have a path forward even if the leading-edge version changes in a way that breaks your project; because you have access to source code, you can forward-port it to new platforms if you need to.

Until the late 1990s this advice would have been impractical. The few alternatives to relying on proprietary operating systems and development tools were noble experiments, academic proofs-of-concept, or toys. But the Internet changed everything; in mid-2003 Linux and the other open-source Unixes exist and have proven their mettle as platforms for delivering production-quality software. Developers have a better option now than being dependent on short-term business decisions designed to protect someone else’s monopoly. Practice defensive design—build on open source and don’t get stranded!
