© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2022
M. Kalin, Modern C Up and Running, https://doi.org/10.1007/978-1-4842-8676-0_5

5. Input and Output

Martin Kalin, Chicago, IL, USA

5.1 Overview

Programs of all sorts regularly perform input/output (I/O) operations, and programmers soon learn the pitfalls of these operations: trying to open a nonexistent file, having too many files open at the same time, accidentally overwriting a file and thereby losing its data, and so on. Nonetheless, I/O operations remain at the core of programming.

C has two APIs for I/O operations: a low-level or system-level API, which is byte-oriented, and a high-level API, which deals with multibyte data types such as integers, floating-point types, and strings. The system-level functions are ideal for fine-grained control, and the high-level functions are there to hide the byte-level details. Although the two APIs can be mixed, as various code examples show, this must be done with caution. This chapter covers both APIs and examines options for I/O operations such as nonblocking and nonsequential access.

Files and I/O operations are one way to support interprocess communication (IPC). Recall that separate processes have separate address spaces by default, which means that shared memory, although possible, requires setup for processes to communicate with one another. Local files, by contrast, can be used readily for IPC: one process can produce data that is streamed to a file, while another process can consume the data streamed from this file. A later section examines how to synchronize process access to shared files.

The API for I/O operations extends to networking, in particular to socket connections between processes running on different machines. This chapter thus provides background for the next.

5.2 System-Level I/O

A short review of some basic concepts should be helpful in clarifying system-level I/O in C. A process, as a program in execution, requires shared system resources from at least two but typically from three categories:
  • Processors to execute the program’s instructions (at least one required)

  • Memory to store the program’s instructions and data (required)

  • Input/output devices to connect to the outside world (optional but usual)

Some special-purpose utility processes (background processes) may require access to few, if any, I/O devices. For convenience, a normal process is one that uses resources from all three categories. When a normal process starts, the operating system automatically gives the process access to three files, where a file is a collection of words and a word is a formatted collection of bits (e.g., bits that represent printable characters such as A and Z in a character-encoding scheme such as ASCII). These three files have traditional names, and they are associated by default with particular I/O devices:
  • The standard input defaults to the keyboard but can be redirected to some other device (e.g., a network connection).

  • The standard output defaults to the screen but can be redirected to some other device (e.g., a printer).

  • The standard error defaults to the screen but can be redirected to some other device (e.g., a log file on the local disk).

At the command line on modern systems, the less-than sign < redirects the standard input; the greater-than sign > redirects the standard output; and the combined symbols 2> redirect the standard error. Examples are forthcoming, together with a clarification of why the numeral in 2> is 2.

In system-level I/O, nonnegative integer values called file descriptors are used to identify, within a process, the files that the process has opened. Recall that files can be used for interprocess communication (IPC). If two processes were to open a file to share data using system-level I/O, then each process would have a file descriptor identifying the file; the descriptor values would not have to be the same because file descriptors are per-process values, and the operating system maintains a global file table that tracks which processes have opened which files.
Table 5-1
File descriptor and FILE* overview

Name              File descriptor   Macro           FILE*
standard input    0                 STDIN_FILENO    stdin
standard output   1                 STDOUT_FILENO   stdout
standard error    2                 STDERR_FILENO   stderr

Table 5-1 summarizes the basics about the three files to which a normal process automatically gets access. For other files, access is achieved through a successful call to an open function: in low-level I/O, the basic function is named open, and in high-level I/O, the basic function is named fopen. The table now can be clarified further:
  • In system-level I/O, a program can use the three reserved file descriptors (0, 1, and 2) for I/O operations; a short example follows. Either the integer values themselves or the macros (defined in unistd.h) shown in the third column can be used.

  • In high-level I/O, the header file stdio.h declares three pointers to FILE structures; a FILE structure contains pertinent information about an opened file. The pointer stdin is the high-level counterpart of file descriptor 0, stdout is the high-level counterpart of file descriptor 1, and stderr is the high-level counterpart of file descriptor 2.

A first code example draws these introductory points together.
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#define BuffSize 4
void main() {
  const char* prompt = "Four characters, please: ";
  char buffer[BuffSize]; /* 4-byte buffer */
  /* write returns -1 on error, count of bytes written on success */
  write(STDOUT_FILENO, prompt, strlen(prompt));
  ssize_t flag = read(0, buffer, sizeof(buffer)); /* 0 == stdin */
  if (flag < 0)
    perror("Ooops...");               /* this string + a system msg explaining errno */
  else
    write(1, buffer, sizeof(buffer)); /* 1 == stdout */
  putchar('\n');
}
Listing 5-1

Some basic I/O operations using the system-level API

The ioLL program (see Listing 5-1) is a first look at low-level or byte-oriented I/O. The program uses two of the three automatically supplied file descriptors: 0 for the standard input (keyboard) and 1 for the standard output (screen). The key features of the program can be summarized as follows:
  • The program writes a prompt, implemented as a string literal, to the standard output. The write function takes three arguments:
    • The first argument specifies the destination for the write, in this case the standard output. The file descriptor value 1 could be used here instead of the macro STDOUT_FILENO.

    • The second argument is the source of the bytes, in this case the address of the first character F in the prompt string.

    • The third argument is the number of bytes to be written, in this case the value of strlen(prompt). The characters are, by default, encoded in ASCII; hence, strlen effectively returns the number of bytes to be written.

The read function likewise expects three arguments:
  • The first argument specifies the source from which the bytes are read, in this case the standard input (0), the keyboard by default.

  • The second argument specifies where the bytes should be stored, in this case the char (byte) array named buffer.

  • The third argument specifies the number of bytes to be read into the buffer, in this case four.

Like many of the low-level I/O functions, read returns an integer value (of type ssize_t): the number of bytes read on success, and -1 on error. If an error occurs, an error code is available in the global variable errno, which is declared in the header file errno.h. The perror function prints a human-readable description of this error. This function takes a single string argument so that the user can add a customized error message, to which perror appends a system error message. If only the system error message is of interest, perror can be called with NULL as its argument.

The program concludes with another call to write, this time using 1 to designate the standard output. The bytes to be written come from the array buffer, and the number of bytes is computed as sizeof(buffer), which returns the number of bytes in the array, not the size of the pointer constant buffer.

The buffer does not include extra space for a null terminator: the program does not treat the input from the keyboard as a string, but rather as four independent bytes. The write function takes the same approach: no string terminator is needed because the last argument to write specifies exactly how many bytes should be written, in this case four.

A short experiment underscores the level at which the functions read and write work. The experiment is to replace
char buffer[BuffSize];
with
int buffer; /* sizeof(int) is 4 */
or, indeed, with a variable of any data type whose size is at least 4 bytes. The read call now changes to
ssize_t flag = read(0, &buffer, sizeof(buffer)); /* &buffer == address of buffer */
The 4 bytes are to be put into a single int variable, which now acts like a 4-byte buffer. The write statement requires only a minor but critical change:
write(1, &buffer, sizeof(buffer));  /* need buffer's address */

The address operator must be applied to buffer, which is now just a scalar int variable.

This experiment underscores that system-level I/O does not honor multibyte types. For example, the bytes read into the int variable buffer could be any characters whatsoever. Here is a screen capture of a sample run of the revised ioLL program:
% ./ioLL
Four characters, please: !$ef
!$ef

These characters are not numerals, of course. The low-level read and write functions treat these simply as 8-bit bytes stored together in a 4-byte variable named buffer.

5.2.1 Low-Level Opening and Closing

The next two code examples introduce the byte-oriented open and close functions. The sysWrite program writes an array of int values, 4 bytes apiece, to a disk file, and the sysRead program reads the bytes back from the same file. The file descriptors 0 (standard input), 1 (standard output), and 2 (standard error) identify files that are opened automatically when a process begins execution; hence, there is no need for the program to call open on these three. For other files, however, a call to open is required, and a matching call to close is sound practice. (When a program terminates, the system closes any files that the program may have opened.) The open function, like so many in the standard libraries, takes a variable number of arguments.
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#define FILE_NAME "nums.dat"
void main() {
  /* Open a file for reading and writing. */
  int fd = open(FILE_NAME,                    /* name */
                O_CREAT | O_RDWR,             /* create, read/write */
                S_IRUSR | S_IWUSR | S_IXUSR | /* owner's rights */
                S_IROTH | S_IWOTH | S_IXOTH); /* others' rights */
  if (fd < 0) { /* -1 on error, positive value on success */
    perror(NULL);
    return;
  }
  /* Write some data. */
  int nums[ ] = {9, 7, 5, 3, 1}; /* int[ ] type */
  ssize_t flag = write(fd, nums, sizeof(nums));
  if (flag < 0) { /* -1 on error, count of written bytes on success */
    perror(NULL);
    return;
  }
  /* Close the file. */
  flag = close(fd);
  if (flag < 0) perror(NULL);
}
Listing 5-2

Writing to a local file with system-level I/O

The sysWrite program (see Listing 5-2) tries to open a file on the local disk, creating this file if necessary. The program sets the access rights for the file’s owner and for others. The program then writes five integers to the file and closes the file. There is error-checking on all three of these I/O operations.

In this example, the call to the open function has three arguments, but the open function also can be called with only the first two arguments. The arguments in this case are as follows:
  • The first argument is the name of the file to open. In this case, the full path is not used; hence, the file will be created in the directory from which the sysWrite program is launched.

  • The second argument consists of flags, perhaps bitwise or-ed together as in this case. The pair

    O_CREAT | O_RDWR

    signals that the file should be created, if necessary, and opened for both read and write operations.

  • The third argument consists of bitwise or-ed values that specify access permissions on the file. In this example, the file’s owner has read/write/execute permissions, as do others. In a production environment, the access permissions of owner and others might differ.

If the call to open succeeds, a file descriptor is returned. Its value is the smallest nonnegative value not currently in use by the process as a file descriptor. Since the file descriptors 0, 1, and 2 are already in use for the standard streams, the smallest available value in this case would be 3. A print statement could be added to confirm that the value of fd is, indeed, 3.
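
For example, a single print statement placed right after the error check in Listing 5-2 would do the job; a minimal sketch:
printf("File descriptor: %i\n", fd); /* expected output: 3, since 0, 1, and 2 are in use */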

If the call to open fails, -1 is returned to signal some error or other. (The next code example shows a sample perror message.) The call to write again has the three required arguments: the destination for the written bytes, the source of these bytes, and the number of bytes to write. Here is the relevant code segment:
int nums[ ] = {9, 7, 5, 3, 1}; /* int[ ] type */
ssize_t flag = write(fd, nums, sizeof(nums)); /* ssize_t is a signed integer type */

No looping is needed to write the array’s contents because the third argument, sizeof(nums), is the number of bytes in the array as a whole. In this example, the bytes are written as integer values because the array’s elements are stored in memory as int instances. In short, the target file nums.dat contains binary data, not text. Checking the size of the file nums.dat confirms that it holds 20 bytes, 4 bytes apiece for the 5 integers written to this file.
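
One way to confirm the size programmatically is with the stat function from sys/stat.h; the following is a minimal sketch (not part of the sysWrite program), assuming nums.dat is in the current directory:
#include <stdio.h>
#include <sys/stat.h>
int main(void) {
  struct stat info;
  if (stat("nums.dat", &info) < 0) { /* -1 on error, 0 on success */
    perror(NULL);
    return 1;
  }
  printf("%lld bytes\n", (long long) info.st_size); /* expected: 20 */
  return 0;
}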

The sysWrite program opens a file by specifying access rights for the file’s owner and for others. In general, these rights are divided into three categories: owner, group, and other. The macros such as S_IRUSR and S_IWUSR are assigned values such that their bitwise or-ing yields unique values. For example:
S_IRUSR | S_IWUSR == 384 ## decimal
whereas
S_IRUSR | S_IRWXU == 448 ## decimal
The bitwise or-ings can be as complicated as needed. It is common in Unix-like systems to set file permissions from the command line with octal values that reflect the bitwise or-ing of the values shown. For example:
S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH == 0644 ## octal
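
A quick way to verify such or-ings is to print them with the %o (octal) formatter; a minimal sketch, with the values quoted above shown as comments:
#include <stdio.h>
#include <sys/stat.h>
int main(void) {
  printf("%o\n", S_IRUSR | S_IWUSR);                     /* 600 octal == 384 decimal */
  printf("%o\n", S_IRUSR | S_IRWXU);                     /* 700 octal == 448 decimal */
  printf("%o\n", S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH); /* 644 octal */
  return 0;
}
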
Table 5-2
Access permissions

Octal code   Symbolic code   Meaning
0001         S_IXOTH         Others can execute.
0002         S_IWOTH         Others can write.
0004         S_IROTH         Others can read.
0007         S_IRWXO         Others can do anything.
0010         S_IXGRP         Group can execute.
0020         S_IWGRP         Group can write.
0040         S_IRGRP         Group can read.
0070         S_IRWXG         Group can do anything.
0100         S_IXUSR         Owner can execute.
0200         S_IWUSR         Owner can write.
0400         S_IRUSR         Owner can read.
0700         S_IRWXU         Owner can do anything.

Table 5-2 summarizes the access permissions on files. In the left column, the values are octal. In C programs, an integer constant that starts with a 0 is interpreted as being in base-8, just as one starting with 0x or 0X is interpreted as being in base-16. It is common to use the octal values in command-line utilities such as chmod, but the symbolic constants are the way to go in programs. Note, by the way, that the permission values are such that any bitwise or-ing still yields a unique value. Also, mistakes such as
S_IWUSR | S_IXGRP | S_IWUSR /* S_IWUSR occurs twice */
are harmless.
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#define FILE_NAME "nums.dat"
void main() {
  int fd = open(FILE_NAME, O_RDONLY); /* open for reading only */
  if (fd < 0) { /* -1 on error, > 2 on success */
    perror(NULL); /* "No such file or directory" if nums.dat doesn't exist */
    return;
  }
  int read_in[5]; /* buffer to hold the bytes */
  ssize_t how_many = read(fd, read_in, sizeof(read_in));
  if (how_many < 0) {
    perror(NULL);
    return;
  }
  close(fd); /* no error check this time */
  int i;
  int n = how_many / sizeof(int); /* from byte count to number of ints */
  for (i = 0; i < n; i++) printf("%i ", read_in[i] * 10); /* 90 70 50 30 10 */
}
Listing 5-3

Reading from a local file with system-level I/O

The sysRead program (see Listing 5-3) reads five 4-byte int values from the same file that the sysWrite program populates with these integers. In the sysRead program, the file is opened for read-only. The available macro flags for a call to open, together with their values, are
#define O_RDONLY  0x0000  /* open for reading only */
#define O_WRONLY  0x0001  /* open for writing only */
#define O_RDWR    0x0002  /* open for reading and writing */

The source code documentation shows the perror message if the file nums.dat does not exist.

Once the file is opened, the read function requires a buffer in which to place the bytes, in this case the read_in array that can hold five int elements, or 20 bytes in all. The read function, like the others seen so far, returns -1 in case of error; 0 on end of file; and otherwise the number of bytes read.

A read operation is the inverse of a write operation, and the arguments passed to read and write reflect this relationship. The first argument to read is a file descriptor for the source of bytes, whereas this argument specifies the destination in the case of write. The second argument to read is the destination buffer, whereas this argument specifies the source in a write. The last argument is the same in both: the number of bytes involved.

The sysRead program uses the high-level printf function to print the int values. Each value is multiplied by 10 to confirm that int instances have been read into memory from the source file. Recall that a successful read returns the number of bytes, in this case stored in the local variable how_many; hence, how_many is divided by sizeof(int) to get the number of 4-byte integers, in this case five.

Together the sysWrite and sysRead programs illustrate how local disk files can support basic interprocess communication. The programs would need to be amended so that, for example, the sysRead program would wait for the nums.dat file to be created and populated with integer values before trying to read from that file. A later code example covers file locking for synchronizing access to shared files.
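
One simple amendment, sketched below, is for the reader to poll with the access function until nums.dat exists before opening it. This is only a sketch under that assumption: polling for the file's existence does not guarantee that the writer has finished populating the file, which is why file locking or some other synchronization mechanism is ultimately needed.
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#define FILE_NAME "nums.dat"
int main(void) {
  while (access(FILE_NAME, R_OK) < 0) /* does the file exist, and is it readable? */
    sleep(1);                         /* if not, check again in a second */
  int fd = open(FILE_NAME, O_RDONLY);
  if (fd < 0) {
    perror(NULL);
    return 1;
  }
  /* ...read the integers as the sysRead program does... */
  close(fd);
  return 0;
}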

5.3 Redirecting the Standard Input, Standard Output, and Standard Error

Redirecting the standard input, the standard output, and the standard error with programs launched from the command line is straightforward. A simplified version of an earlier program illustrates. This approach brings the advantage of using one and the same program for reading and writing arbitrarily many files, but without editing and then recompiling the source code.
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#define BuffSize 8
void main() {
  char buffer[BuffSize]; /* 8-byte buffer */
  ssize_t flag = read(0, buffer, sizeof(buffer)); /* 0 == stdin */
  if (flag < 0) {
    perror("Ooops...");
    return;
  }
  char ws = ' ';
  write(1, buffer, sizeof(buffer));    /* 1 == stdout */
  write(1, &ws, 1);                    /* ditto */
  write(2, buffer, sizeof(buffer));    /* 2 == stderr */
  putchar('\n');
}
Listing 5-4

Redirecting I/O

The ioRedirect program (see Listing 5-4) expects to read 8 bytes from the standard input and then echoes these bytes to the standard output and the standard error. If the bytes are ASCII character codes, the program is easy to follow. Here is a screen capture of a sample run; my comments start with ##:
% ./ioRedirect      ## on Windows, drop the ./
12345678            ## typed in from the keyboard, echoed on the screen
12345678 12345678   ## 1st 8 to standard output, 2nd 8 to standard error
The file infile contains a single line:
abcdefgh
To redirect the standard input to this file, the command is
% ./ioRedirect < infile ## < redirects the standard input
The output now is
abcdefgh abcdefgh
To redirect the standard output to the file outfile, the command is
% ./ioRedirect > outfile

The eight characters entered on the keyboard now appear once on the screen (default for the standard error) and once in the local disk file outfile. By the way, if outfile already exists, then the redirection purges this file and then repopulates it; hence, caution is in order.

Redirecting the standard error differs only slightly. Recall that 2 is the file descriptor for the standard error:
% ./ioRedirect 2> logfile
Redirections can be combined as needed, for example:
% ./ioRedirect < infile 2> logfile
Assuming that infile is the same as before, the first copy of the input (together with the separating space) still appears on the screen via the standard output, whereas the contents of logfile are
abcdefgh

5.4 Nonsequential I/O

The examples so far have dealt with sequential I/O: bytes are read in sequence and written in sequence. It is convenient at times, however, to have random or nonsequential access to a file’s contents. A short code example illustrates the basic API.
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#define FILE_NAME "test.dat"
void main() {
  const char* bytes = "abcdefghijklmnopqrstuvwxyz";
  int len = strlen(bytes);
  char buffer[len / 2];
  char big_N = 'N';
  /* Open the file and populate it with some bytes. */
  int fd = open(FILE_NAME,
                O_RDWR | O_CREAT,             /* flags */
                S_IRUSR | S_IWUSR | S_IXUSR); /* owner's rights */
  write(fd, bytes, len);
  off_t offset = len / 2;          /* offset 13 == the character n */
  lseek(fd, offset, SEEK_SET);     /* SEEK_SET is the start of the file */
  write(fd, &big_N, sizeof(char)); /* overwrite 'n' with 'N' */
  close(fd);
  fd = open(FILE_NAME, O_RDONLY);
  lseek(fd, offset, SEEK_SET);
  read(fd, buffer, len / 2);
  close(fd);
  write(1, buffer, len / 2); /* Nopqrstuvwxyz */
  putchar('\n');
}
Listing 5-5

Random or nonsequential file access

The nonseq program (see Listing 5-5) skips the error checking to minimize the clutter, thereby keeping the focus on the nonsequential file access. The program first writes 26 bytes (the lowercase characters in the English alphabet) to a file and then uses an lseek operation to set up a second write operation, this time a write of just one byte. As the name indicates, the function lseek performs a seeking operation, which can change the current file-position marker. A closer look at lseek clarifies.

The library function lseek takes three arguments. They are, in order:
  • A file descriptor

  • A byte offset from a designated position in the file

  • The start position for the offset, with three convenient macros to define the usual positions:
    • SEEK_SET is the start position in the file.

    • SEEK_CUR is the current position in the file.

    • SEEK_END is the end position in the file.

The lseek function returns -1 in case of an error and, on success, the resulting offset measured from the start of the file; this returned offset could be saved for later use. The offsets for lseek are like indexes in a char array: an offset of 0 designates the first byte from the seek position, an offset of 1 the second byte, and so on. In this example, the offset is 13, the position of the ASCII character code for lowercase n. An lseek operation beyond the current end of a file does not expand the file's size; a subsequent write operation would be required to do so.

Once the current position has been reset with lseek, the program overwrites the lowercase n with an uppercase N. The file then is closed, only to be reopened for reading. There is another lseek to the position of the now uppercase N and a read operation to get the bytes for N through z into the char array named buffer. For confirmation, buffer is printed to the standard output.
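
The return value of lseek, together with the SEEK_END macro, also gives a convenient way to compute a file's current size; a minimal sketch, assuming the test.dat file that the nonseq program leaves behind:
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
int main(void) {
  int fd = open("test.dat", O_RDONLY);
  if (fd < 0) {
    perror(NULL);
    return 1;
  }
  off_t size = lseek(fd, 0, SEEK_END);        /* offset 0 from the end == the file's size */
  if (size < 0)
    perror(NULL);
  else
    printf("%lld bytes\n", (long long) size); /* 26 for the alphabet file */
  close(fd);
  return 0;
}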

5.5 High-Level I/O

System-level I/O is low level because it works with bytes, the char type in C; by contrast, high-level I/O can work with multibyte data types such as integers, floating-point numbers, and strings. To take but one convenient example, the high-level API makes it straightforward to convert between integers and strings. High-level I/O can work at the byte (char) level, but this kind of I/O is especially useful above the byte level.

The names are similar for some functions in the high-level and the low-level API. For example, there is a low-level open function and the high-level fopen function, as well as the low-level close and the high-level fclose functions. There is an fread function in the high-level API that matches up with the read function in the low-level API. The functions differ in syntax, of course, but also in how they work at the byte level. The low-level functions work only at the byte level, whereas the high-level API can work directly with multibyte types such as int and double.

There is crossover. For example, the high-level fdopen function takes a low-level file descriptor as an argument but returns the high-level type FILE*, the return type for various high-level library functions. Consider this contrast for opening and closing a file on the local disk:
int fd = open("input.dat", O_RDONLY);  /* low-level: -1 on failure */
FILE* fptr = fopen("input.dat", "r");  /* high-level: NULL on failure */
The corresponding function calls to close the opened file would be
close(fd);    /* fd is an int value */
fclose(fptr); /* fptr is a FILE* value */

In general, a file opened with the low-level open function is closed with the low-level close function. In a similar fashion, a file opened with fopen is closed with the fclose function. By the way, there is a limit on how many files a process can have open at a time; hence, it is critical to close files once they are no longer needed.
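
The fdopen crossover can be handy when a file has been opened at the low level but stream functions such as fprintf are more convenient for writing to it. A minimal sketch, with mixed.dat as a made-up file name:
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
int main(void) {
  int fd = open("mixed.dat", O_CREAT | O_WRONLY, S_IRUSR | S_IWUSR);
  if (fd < 0) {
    perror(NULL);
    return 1;
  }
  FILE* fptr = fdopen(fd, "w");           /* wrap the descriptor in a stream */
  if (!fptr) {
    perror(NULL);
    close(fd);
    return 1;
  }
  fprintf(fptr, "%i and %f\n", 42, 3.14); /* formatted, high-level output */
  fclose(fptr);                           /* closes the underlying descriptor as well */
  return 0;
}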

In the low-level API, the integer values 0, 1, and 2 identify the standard input, the standard output, and the standard error, respectively. In the high-level API, the FILE* pointers stdin, stdout, and stderr do the same. The data type of interest in high-level I/O is FILE*, not FILE. It would be highly unusual for a program to declare a variable of type FILE, but typical for a program to assign the value returned from a high-level I/O function to a variable of type FILE*.

The following code segment summarizes the contrast between low-level and high-level I/O, with variable fd as a file descriptor and variable fptr as a pointer to FILE:
int buffer[5];                        /* 5 ints == 20 bytes */
read(fd, buffer, sizeof(int) * 5);    /* byte level read: read 20 bytes */
fread(buffer, sizeof(int), 5, fptr);  /* int level read: read 5 ints */

The low-level read function reads a specified number of bytes and stores them somewhere—in this case, in a 20-byte buffer that happens to be an int array of size five. By contrast, the high-level fread function can read multibyte chunks, in this case five int values, which are 4 bytes apiece.

Some in the C community believe that FILE should have been named STREAM, and it is common to describe high-level I/O as stream-based I/O. In a technical sense, C has two ways for a program to connect to any file, including the standard input, a local disk file, and so on:
  • Through a file descriptor, an integer value that identifies the opened file.

  • Through a stream, a channel that connects a source and a destination: the file could be either the source (read operation) or destination (write operation).

To study the API for the high-level I/O is, in effect, to study various ways of managing I/O streams. The forthcoming examples do so.
#include <stdio.h>
#define FILE_NAME "data.in"
void main() {
  float num;
  printf("A floating-point value, please: ");
  int how_many_floats = fscanf(stdin, "%f", &num); /* last arg must be an address */
  if (how_many_floats < 1)
    fprintf(stderr, "Bad scan -- probably bad characters\n");
  else
    fprintf(stdout, "%f times 2.1 is %f\n", num, num * 2.1);
  FILE* fptr = fopen(FILE_NAME, "w");  /* write only */
  if (!fptr) perror("Error on fopen"); /* fptr is NULL (0) if fopen fails */
  int i;
  for (i = 0; i < 5; i++)
    fprintf(fptr, "%i ", i + 1);
  fclose(fptr);
  fptr = fopen(FILE_NAME, "r");
  int n;
  puts(" Scanning from the input file:");
  while (fscanf(fptr, "%i", &n) != EOF)  /* EOF == -1 == all 1s in binary */
    printf("%i ", n);
  fclose(fptr);
}
Listing 5-6

Basics of high-level I/O

The scanPrint program (see Listing 5-6) covers some basics of high-level I/O, beginning with scanning a file for input. The statement
int how_many_floats = fscanf(stdin, "%f", &num);
highlights some distinctive features of the high-level API. The function fscanf, with f for file, is structured as follows:
  • The first argument specifies the source from which to scan for input, in this case stdin. The shortcut function scanf is hard-wired to read from the standard input, but fscanf explicitly names the source as its first argument. The first argument to scanf is the second argument to fscanf, the format string:

    int how_many_floats = scanf("%f", &num); /* scanf instead of fscanf */
  • The second argument to fscanf is the format string, which specifies how scanned bytes are to be converted into an instance of some type, including a multibyte type such as the 4-byte float. The format string can contain arbitrarily many formatters.

  • The third argument is the destination address, that is, the address of where the formatted bytes are to be stored. In this example, the third argument is &num. The scanning functions in general, including fscanf, return the number of properly formatted instances of the specified data type, in this case float. The format string requests that only a single float be formatted; hence, the returned value is either 0 (failure) or 1 (success).

Why is the Address Operator & So Critical in the Scanning Functions?
A typical call to scanf is
int num;           /* num is a local variable, and so contains random bits */
scanf("%i", &num); /* read an int, store it at the address of n */

If the address operator & were missing from &num in the scanf call, the contents of num would be interpreted as an address, and it is highly unlikely that these random bits make up an address within the executing program’s address space. If num is a local variable, for example, its contents are random bits from the stack or a register.

The scanPrint program prompts the user to enter a floating-point value. If inappropriate characters such as abc.de are entered instead, the program prints an error message to that effect. The fprintf function is used to print to the standard error:
if (how_many_floats < 1)
  fprintf(stderr, "Bad scan -- probably bad characters\n");

Otherwise, the scanned float value is multiplied by 2.1 to confirm that the conversion from bytes to a float instance indeed succeeded. The printf function is hard-wired for printing to the standard output, just as the scanf function is hard-wired for scanning from the standard input. In general, error messages should have the standard error as their destination; hence, the scanPrint program uses fprintf with stderr as the first argument.

The last loop in the program is a while loop, and the loop’s condition is a common one in programs that use high-level I/O to read from files:
while (fscanf(fptr, "%i", &n) != EOF) /* EOF == -1 == all 1s in binary */
The value returned from fscanf in particular, and the related scanning functions in general, is tricky:
  • If fscanf is successful in reading and converting, it returns the number of such successes. This number could be zero, which does not represent an input error, but rather a data conversion failure.

  • If an end-of-stream condition occurs before a successful scan-and-convert, the function returns -1 (the value of the macro EOF). The high-level API also includes the function feof(), which returns true (nonzero) to signal end of file and false (zero) otherwise.

  • If an input error occurs (e.g., the data source is absent), fscanf also returns -1.

At issue, then, is how to distinguish between EOF, a normal eventuality when reading from a stream, and an outright error. The library function ferror returns nonzero (true) to indicate an error condition in the stream, and the global variable errno contains an error code under the same condition; as usual, the perror function can be used to print a corresponding error message. For the programmer, however, the difference may not matter: fscanf returns a negative value to signal, in effect, that a scan-and-convert operation on a stream has failed. The ferror function and the errno variable then can be used, if needed, to get more information on why the failure occurred.
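
A short sketch of how feof and ferror might be used to tell the two conditions apart once a scanning loop ends, reusing the data.in file that the scanPrint program writes:
#include <stdio.h>
int main(void) {
  FILE* fptr = fopen("data.in", "r");
  if (!fptr) {
    perror("Error on fopen");
    return 1;
  }
  int n;
  while (fscanf(fptr, "%i", &n) == 1) /* loop while scan-and-convert succeeds */
    printf("%i ", n);
  if (ferror(fptr))                   /* a stream error: errno has the details */
    perror("Read error");
  else if (feof(fptr))                /* the normal case: end of stream */
    puts("\nEnd of file reached.");
  else                                /* neither: the next characters did not match %i */
    puts("\nBad characters in the file.");
  fclose(fptr);
  return 0;
}
Note that this loop tests for a successful conversion (== 1) rather than for EOF, so that a conversion failure also ends the loop and falls through to the checks that follow.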

A final point about EOF is in order. The EOF value (32 1s in binary) marks the end of a stream, and streams can differ in their sources. If the source is a file on a local disk, then the EOF is generated when a read operation tries to read beyond the last byte stored in the file. If the source is a pipe, a one-way channel between two processes, then the EOF is generated when the pipe is closed on the sending side. An EOF thus should be treated as a condition, rather than as just another data item. To be sure, a program recognizes the EOF condition by reading the 32 bits that make up the EOF value; but these 32 bits differ in meaning from whatever else happens to be read from the stream.

High-level I/O is appropriately named, for this level focuses on the multibyte data types that are dominant in high-level programming languages. There may be times at which any program must drop down to the byte level, but the usual level is awash with integers, strings, floating-point values, and other instances of multibyte types. C works well at either I/O level. Other technical aspects of high-level I/O will be explored in forthcoming examples, which provide context for exploring this API.

5.6 Unbuffered and Buffered I/O

There is yet another way to contrast low-level and high-level I/O: low-level I/O operations are said to be unbuffered, whereas the high-level ones are said to be buffered. It is important, however, to consider carefully what it means for low-level I/O to be unbuffered. A buffer in this context is a system-supplied, in-memory storage area between the executing program, on the one side, and the data source, on the other side.

Consider a code segment that reads a single byte:
char byte;
read(fd, &byte, 1); /* fd identifies a local disk file */
For reasons of efficiency, no modern operating system would fetch a single byte from disk into memory. Instead, the system would fetch a block of bytes into a memory buffer and then deliver the single byte from this buffer to the program:
            block of bytes +---------------+ 1 byte to read
local disk---------------->| memory buffer |---------------->read(fd, &byte, 1)
                           +---------------+

To call low-level I/O unbuffered is not to deny system buffering under the hood. Instead, the point is that the low-level API supports the reading of just one byte, regardless of exactly how that byte might have been delivered to the program that invokes the read function with a third argument of 1.

The high-level fread function is essentially a wrapper around the low-level read function. Each can read a single byte:
char byte;
read(0, &byte, 1);          /* one byte from standard input */
fread(&byte, 1, 1, stdin);  /* ditto */
There are also high-level functions such as fgetc that seem to read a single byte, as the c for char in the function’s name suggests. But the return type for fgetc and related high-level functions is int, not char. The fgetc function, like its high-level cousins, returns EOF to signal the end-of-stream condition, and EOF is a 4-byte int value. In situations other than EOF, the fgetc function returns a byte packaged in an int whose high-order 24 bits are zeroed out; the byte of interest occupies the low-order 8 bits.
#include <unistd.h>
#include <stdio.h>
void main() {
  int i = 0, n = 8;
  char byte;
  /* unbuffered */
  while (i++ < n) {
    read(0, &byte, 1);   /* read a single byte */
    write(1, &byte, 1);  /* write it */
  }
  /* buffered */
  i = 0;
  while (i++ < n) {
    int next = fgetc(stdin); /* char read in a 4-byte int */
    fputc(next, stdout);     /* char written as a 4-byte int */
  }
  putchar('\n');
}
/* stdin is: 12345678abcdefgh */
Listing 5-7

A program contrasting read and fgetc

The buffer program (see Listing 5-7) contrasts byte-fetching in the low-level and the high-level APIs. The low-level read stores the byte in a char variable, and sizeof(char) is guaranteed to be 1 byte. By contrast, the high-level fgetc function returns a 4-byte int. From the command line, the program can be tested against the in.dat file, whose contents are shown in the comment at the bottom:
% ./buffer < in.dat

Otherwise, all 16 characters should be entered at once from the keyboard, and only then should the Return key be hit.
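
Because EOF lies outside the range of values that fgetc returns for an actual byte, the usual idiom stores the fgetc result in an int rather than a char; a minimal sketch that copies the standard input to the standard output:
#include <stdio.h>
int main(void) {
  int c; /* int, not char, so that EOF can be distinguished from a valid byte */
  while ((c = fgetc(stdin)) != EOF)
    fputc(c, stdout);
  return 0;
}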

The traditional contrast between buffered and unbuffered I/O can be misleading, as emphasized in the previous discussion. It is more useful to focus on program requirements. If a program needs to work directly with bytes, then the low-level API is designed to do precisely this. If a program deals mostly with multibyte types but occasionally drops down to the byte level, then the high-level API, which includes wrappers such as fread for low-level functions, is the sensible alternative.

5.7 Nonblocking I/O

Nonblocking I/O has become a popular technique for boosting performance. For example, a production-grade web server is likely to include nonblocking I/O in the mix of acceleration techniques. The potential boost in performance comes with a challenge for the programmer, however: nonblocking I/O is simply trickier to manage than its blocking counterpart.

As the name indicates, nonblocking I/O operations do not block—that is, wait—until a read, write, or other I/O operation completes. Consider this code segment in system-level I/O:
int n;                     /* 4 bytes */
read(fd, &n, sizeof(int)); /* blocking read operation */
printf("%i ", n);         /* next statement after blocking read */

The file descriptor fd might identify a local file on the disk but also a less reliable source of bytes such as a network connection. If the read operation in the second statement blocks, then the printf statement immediately thereafter does not execute until the read call returns, perhaps because of an error.

If the read call were nonblocking, the code segment would need a more complicated approach. A nonblocking call returns immediately, and there are now various possibilities to consider, including the following:
  • The read call got all of the expected bytes, in this case four.

  • The read call got only some of the expected bytes and perhaps none at all.

  • The read call encountered an error or end-of-stream condition.

The program now needs logic to handle such cases. Consider the second case. If one call to a nonblocking read gets only some of the expected bytes, then these bytes need to be saved, and another read attempted to get the rest. Perhaps a loop becomes part of the read logic: loop until all of the expected bytes arrive or an error occurs. At the very least, it seems that the printf statement would need to occur inside an if test that checks whether enough bytes were received to go on with the printf.
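
As a sketch of the loop just described, the helper function below (a hypothetical read_int_nonblocking, not part of any listing) accumulates bytes from a nonblocking descriptor until a full 4-byte int has arrived or the stream ends. The errno values EAGAIN and EWOULDBLOCK distinguish "no data available yet" from a genuine error:
#include <errno.h>
#include <unistd.h>
/* Returns 1 when a full int has been read, 0 on end-of-stream or error. */
int read_int_nonblocking(int fd, int* out) {
  char* buf = (char*) out;
  size_t have = 0;
  while (have < sizeof(int)) {
    ssize_t got = read(fd, buf + have, sizeof(int) - have);
    if (got > 0)
      have += got;       /* partial or full read: keep whatever arrived */
    else if (got == 0)
      return 0;          /* end of stream before a full int arrived */
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
      continue;          /* no data yet: a real program would do other work here */
    else
      return 0;          /* a genuine error */
  }
  return 1;
}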

Is Nonblocking I/O the Same as Asynchronous I/O?

The use of the terms blocking/nonblocking and synchronous/asynchronous varies enough to rule out a simple yes or no answer. My preference is for the blocking/nonblocking pair because they seem more intuitive. That said, code examples are the best way to clarify exactly what these terms mean in practice.

5.7.1 A Named Pipe for Nonblocking I/O

The next code example uses the nonblocking read operation as representative of nonblocking I/O operations in general. For the example to be realistic, it should have two features:
  • The data consumed in a nonblocking read operation should arrive randomly; otherwise, the nonblocking reads might behave exactly as blocking reads would have.

  • After an attempted nonblocking read operation, the program should have meaningful work to do before the next read operation: the appeal of nonblocking I/O is that it frees up a program to do something else besides just waiting for an I/O operation to complete.

Accordingly, the code example consists of two programs: one writes in a pseudorandom fashion to a named pipe, and the other reads from this pipe. A pipe is a connection between processes, and one-way in that one end of the pipe is for writing and the other is for reading. There are both unnamed (or anonymous) and named pipes, and both are used widely across modern systems for interprocess communication. A later example covers unnamed pipes.

Unix-like systems, and Cygwin for Windows, have command-line utilities that make it easy to demonstrate named pipes. The steps are as follows:
  1. Open two terminal windows so that two command-line prompts are available. The working directory should be the same for both command-line prompts.

  2. In one of the terminal windows, enter these two commands (my comments start with ##):

     % mkfifo tester  ## creates special file named tester, which implements the pipe
     % cat tester     ## type the pipe's contents to the standard output

     To begin, nothing should appear in the window because nothing has been written yet to the named pipe.

  3. In the second terminal window, enter the following command:

     % cat > tester ## redirect keyboard input to the pipe
     hello, world!  ## then hit Return key
     bye, bye       ## ditto
     <Control-C>    ## terminate session with a Control-C

     Whatever is typed into this terminal window is echoed in the other. Once Control-C is entered, the regular command-line prompt returns in both windows: the pipe has been closed.

  4. For cleanup, remove the file that implements the named pipe:

     % rm tester
As the name mkfifo suggests, a named pipe also is called a fifo for first in, first out (FIFO). A named pipe implements the FIFO discipline so that the pipe acts like a normal queue: the first byte into the pipe is the first byte out, and so on. There is also a library function named mkfifo, which is used in the next code example.
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#define MaxLoops 12000   /* outer loop */
#define ChunkSize 16     /* how many written at a time */
#define IntsPerChunk 4   /* four 4-byte ints per chunk */
#define MaxZs 250        /* max microseconds to sleep */
void main() {
  const char* pipeName = "./fifoChannel";
  mkfifo(pipeName, 0666);   /* read/write for user/group/others */
  int fd = open(pipeName, O_CREAT | O_WRONLY);   /* open as write-only */
  sleep(2); /* give user a chance to start the fifoReader */
  int i;
  for (i = 0; i < MaxLoops; i++) {    /* write MaxLoops times */
    int j;
    for (j = 0; j < ChunkSize; j++) { /* each time, write ChunkSize bytes */
      int k;
      int chunk[IntsPerChunk];
      for (k = 0; k < IntsPerChunk; k++)
        chunk[k] = rand();
      write(fd, chunk, sizeof(chunk));
    }
    usleep((rand() % MaxZs) + 1); /* pause a bit for realism */
  }
  close(fd);                      /* close pipe: generates an end-of-file */
  unlink(pipeName);               /* unlink from the implementing file */
  printf("%i ints sent to the pipe. ", MaxLoops * ChunkSize * IntsPerChunk);
}
Listing 5-8

A named pipe writer

The fifoWriter program (see Listing 5-8) creates and then writes sporadically to the named pipe called fifoChannel. Two statements at the start do the setup:
mkfifo(pipeName, 0666);  /* read/write for user/group/others */
int fd = open(pipeName, O_CREAT | O_WRONLY);  /* open as write-only */

The first statement calls the library function mkfifo with two arguments: the name of the implementing file and the access permissions in octal. The second statement invokes the by-now-familiar open function, specifying that the file underlying the named pipe be created if necessary; the fifoWriter is restricted to write operations because of the O_WRONLY flag.

The fifoWriter then pauses for two seconds to give the user a chance to start the other program, the fifoReader. The fifoWriter needs to start first because it creates and opens the named pipe; but the two-second pause is there only for convenience. The fifoWriter program then loops MaxLoops times (currently 12,000), writing multibyte chunks rather than single bytes to the pipe. A chunk is an array of four 4-byte int values. After writing the bytes to the pipe, the program pauses a pseudorandom number of microseconds, thereby making the write operations somewhat unpredictable. In all, the fifoWriter writes 768,000 int values to the pipe.

The program does cleanup at the end. The file descriptor fd is used to close the pipe, which generates an end-of-file signal for the reader side. The call to the unlink function removes the name fifoChannel from the file system; the underlying file itself is removed once the name is gone and no process still has the pipe open. In the current example, either the fifoWriter or the fifoReader could perform the unlink; both programs call it, and the second call simply fails harmlessly because the name has already been removed.
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
unsigned is_prime(unsigned n) { /* not pretty, but efficient */
  if (n <= 3) return n > 1;
  if (0 == (n % 2) || 0 == (n % 3)) return 0;
  unsigned i;
  for (i = 5; (i * i) <= n; i += 6)
    if (0 == (n % i) || 0 == (n % (i + 2))) return 0;
  return 1;
}
void main() {
  const char* file = "./fifoChannel";
  int fd = open(file, O_RDONLY | O_NONBLOCK); /* non-blocking */
  if (fd < 0) return; /* no point in continuing */
  unsigned primes_count = 0, success = 0, failure = 0;
  while (1) {
    int next;
    ssize_t count = read(fd, &next, sizeof(int));
    if (0 == count)
      break;                  /* end of stream */
    else if (count == sizeof(int)) { /* read a 4-byte int value */
      success++;
      if (is_prime(next)) primes_count++;
    }
    else                             /* includes errors, and < 4 bytes read */
      failure++;
  }
  close(fd);     /* close pipe from read end */
  unlink(file);  /* unlink from the underlying file */
  printf("Success: %u Primes: %u Failure: %u ", success,   primes_count, failure);
}
Listing 5-9

A named pipe reader

The fifoReader program (see Listing 5-9) reads from the named pipe that the fifoWriter creates and then populates with chunks of int values. The program configures the pipe for nonblocking read operations with the O_NONBLOCK flag passed as an argument to the open function:
int fd = open(file, O_RDONLY | O_NONBLOCK); /* non-blocking */
The utility function fcntl also could be used to set the nonblocking status, as illustrated shortly. The program tries to read int values from the pipe:
ssize_t count = read(fd, &next, sizeof(int)); /* 4-byte int values */
Recall that the fifoWriter writes an array of four int values at a time and does so sporadically. Because the read operation in the fifoReader is nonblocking, three cases are singled out for application logic:
  • If the read function returns 0, this signals an end-of-stream condition in the named pipe: no further bytes are coming from the one and only writer, and so the fifoReader breaks out of its infinite loop.

  • If the read function yields exactly 4 bytes, then the program checks whether the integer value is a prime; this check represents the do something step before attempting the next read operation.

  • If the read function fails to read exactly 4 bytes, or detects an error condition of any kind, then the program records the failure. The fifoReader program does not distinguish between partial reads (e.g., 2 bytes instead of the expected 4) and miscellaneous but nonfatal errors.

The fifoReader, like the fifoWriter, cleans up by closing the pipe and unlinking the implementation file. The fifoReader generates a short report at the end. On a sample run, the output (formatted for readability) was
Success: 768,000 Primes: 37,682 Failure: 31,642,062

Recall that the thirty-one million or so failures cover partial reads (read returns less than sizeof(int)) and nonfatal errors. In the end, the fifoReader does manage to read all of the 768,000 4-byte integer values that the fifoWriter writes to the pipe; but the fifoReader has plenty of unsuccessful reads as well: the fifoWriter sleeps between write operations, which gives the fifoReader ample opportunity to attempt nonblocking read operations doomed to fail because no unread bytes remain in the channel. In short, the output from the fifoReader is not surprising.

The fifoReader program has a dismal record of successful reads: about 2% of its read operations succeed in getting the desired 4-byte int values, and the remaining read operations fail. The next chapter introduces an event-driven approach to read operations. This new approach first checks a channel for available bytes before even attempting a read operation.

The fifoReader program uses a flag passed to the open function to set the nonblocking status. The standard libraries include an fcntl utility, declared in the header file fcntl.h, that can do the same. The fcntl function has many uses and a correspondingly long documentation.
unsigned set_nonblock(int fd) {
  int flags = fcntl(fd, F_GETFL);         /* get the current flag values */
  if (-1 == flags) return 0;              /* on error, return false */
  flags |= O_NONBLOCK;                    /* add non-blocking */
  return -1 != fcntl(fd, F_SETFL, flags); /* 1 == success, 0 == failure */
}
Listing 5-10

A function to set the nonblocking feature

The setNonBlock example (see Listing 5-10) shows how a file descriptor's status can be changed from blocking to nonblocking. The set_nonblock function takes a file descriptor as its only argument and returns either true (1) or false (0) to signal whether the attempt succeeded. The function first gets the flags currently set on the descriptor (e.g., O_RDONLY or O_APPEND); if an error occurs here, false is returned. Otherwise, the function adds the O_NONBLOCK flag and then uses the fcntl function for updating. If the update succeeds, set_nonblock returns true, and false otherwise.
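
For example, the fifoReader could achieve the same effect by dropping the O_NONBLOCK flag from its open call and invoking set_nonblock immediately afterward; this works in the current setup because the fifoWriter has already opened the pipe for writing by the time the fifoReader starts. A sketch of the changed lines, assuming set_nonblock is in scope:
int fd = open(file, O_RDONLY); /* may block until a writer has the pipe open */
if (fd < 0 || !set_nonblock(fd))
  return;                      /* no point in continuing */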

5.8 What’s Next?

Network programming centers on the socket API, where a socket is an endpoint in a point-to-point connection between two processes. If the processes are running on physically distinct hosts (machines), a network socket is in play. If the processes are running on the same host, a domain socket could be used instead. (Domain sockets are a popular way for large systems, such as database systems, to interact with clients.) The very same I/O API used to interact with disk files works with sockets as well. Sockets, unlike pipes, are bidirectional.

This chapter has focused on I/O operations on a single machine. The next chapter broadens the study to include I/O operations across machines, and the chapter also explores an event-driven alternative to the nonblocking I/O introduced in this chapter.
