12. The Preprocessor

The preprocessor provides tools that enable you to write programs that are easier to develop, read, modify, and port to different systems. You can also use the preprocessor to customize the Objective-C language to suit a particular programming application or your own programming style.

The preprocessor is a part of the Objective-C compilation process that recognizes special statements that can be interspersed throughout a program. As its name implies, the preprocessor actually processes these statements before analysis of the Objective-C program itself takes place. Preprocessor statements are identified by the presence of a pound sign (#), which must be the first nonspace character on the line. As you will see, preprocessor statements have a syntax that is slightly different from that of normal Objective-C statements. We will begin by examining the #define statement.

The #define Statement

One of the primary uses of the #define statement is to assign symbolic names to program constants. The preprocessor statement

#define  TRUE  1

defines the name TRUE and makes it equivalent to the value 1. The name TRUE can subsequently be used anywhere in the program where the constant 1 could be used. Whenever this name appears, its defined value of 1 is automatically substituted into the program by the preprocessor. For example, you might have the following Objective-C statement that uses the defined name TRUE:

gameOver = TRUE;

This statement assigns the value of TRUE to gameOver. You don't need to concern yourself with the actual value you defined for TRUE, but because you do know that you defined it to be 1, the preceding statement would have the effect of assigning 1 to gameOver. The preprocessor statement

#define  FALSE  0

defines the name FALSE and makes its subsequent use in the program equivalent to specifying the value 0. Therefore, the statement

gameOver = FALSE;

assigns the value of FALSE to gameOver, and the statement

if ( gameOver == FALSE )
...

compares the value of gameOver against the defined value of FALSE.

A defined name is not a variable. Therefore, you cannot assign a value to it, unless the result of substituting the defined value is in fact a variable. Whenever a defined name is used in a program, whatever appears to the right of the defined name in the #define statement is automatically substituted into the program by the preprocessor. It's analogous to doing a search and replace with a text editor; in this case, the preprocessor replaces all occurrences of the defined name with its associated text.

You will notice that the #define statement has a special syntax: No equal sign is used to assign the value 1 to TRUE. Furthermore, a semicolon does not appear at the end of the statement. Soon you will understand why this special syntax exists.

#define statements are often placed toward the beginning of the program, after #import or #include statements. This is not required; they can appear anywhere in the program. However, a name must be defined before it is referenced by the program. Defined names do not behave like variables: There is no such thing as a local define. After a name has been defined, it can subsequently be used anywhere in the program. Most programmers place their defines inside header files so they can be used by more than one source file.
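As a minimal sketch of that practice (the filename common.h and its contents are just an illustrative choice, not from this chapter), such a shared header might look like this:

/* common.h -- definitions shared by several source files */

#define TRUE    1
#define FALSE   0

Any source file that includes this header can then use TRUE and FALSE; the #import statement that makes this possible is covered later in this chapter.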

As another example of the use of a defined name, suppose you wanted to write two methods to find the area and circumference of a Circle object. Because both of these methods need to use the constant π, which is not a particularly easy constant to remember, it might make sense to define the value of this constant once at the start of the program and then use this value where necessary in each method.

So, you could include the following in your program:

#define PI    3.141592654

Then, you could use it in your two Circle methods (this assumes the Circle class has an instance variable called radius), like so:

-(double) area
{
    return PI * radius * radius;
}

-(double) circumference
{
    return 2.0 * PI * radius;
}

Assignment of a constant to a symbolic name frees you from having to remember the particular constant value every time you want to use it in a program. Furthermore, if you ever needed to change the value of the constant (if perhaps you found out that you were using the wrong value, for example), you would have to change the value in only one place in the program: in the #define statement. Without this approach, you would have to search throughout the program and explicitly change the value of the constant whenever it was used.

You might have realized that all the defines shown so far (TRUE, FALSE, and PI) have been written in capital letters. The reason this is done is to visually distinguish a defined value from a variable. Some programmers adopt the convention that all defined names be capitalized, so that it is easy to determine whether a name represents a variable or an object, a class name, or a defined name. Another common convention is to prefix the define with the letter k. In that case, the following characters of the name are not capitalized. kMaximumValues and kSignificantDigits are examples of two defined names that adhere to this convention.

Using a defined name for a constant value helps make programs more readily extendable. For example, when you learn how to work with arrays, instead of hard-coding in the size of the array you want to allocate, you can define a value such as follows:

#define MAXIMUM_DATA_VALUES  1000

Then you can base all references on the array's size (such as allocation of the array in memory) and valid indices into this array on this defined value.

Also, if the program were written to use MAXIMUM_DATA_VALUES in all cases where the size of the array was used, the preceding definition could be the only statement in the program that would have to be changed if you later needed to change the array size.
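As a sketch of what this looks like in practice (arrays are covered in a later chapter, so treat this only as a preview; the variable names are just for illustration):

#define MAXIMUM_DATA_VALUES  1000

int i;
int dataValues[MAXIMUM_DATA_VALUES];

for ( i = 0; i < MAXIMUM_DATA_VALUES; ++i )
    dataValues[i] = 0;

Changing the single #define would then consistently resize the array and adjust every loop that walks through it.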

More Advanced Types of Definitions

A definition for a name can include more than a simple constant value. It can include an expression and, as you will see shortly, just about anything else!

The following defines the name TWO_PI as the product of 2.0 and 3.141592654:

#define TWO_PI  2.0 * 3.141592654

You can subsequently use this defined name anywhere in a program where the expression 2.0 * 3.141592654 would be valid. So, you could have replaced the return statement of the circumference method from the previous example with the following statement:

return TWO_PI * radius ;

Whenever a defined name is encountered in an Objective-C program, everything that appears to the right of the defined name in the #define statement is literally substituted for the name at that point in the program. Thus, when the preprocessor encounters the name TWO_PI in the return statement shown previously, it substitutes for this name whatever appeared in the #define statement for this name. Therefore, 2.0 * 3.141592654 is literally substituted by the preprocessor whenever the defined name TWO_PI occurs in the program.

The fact that the preprocessor performs a literal text substitution whenever the defined name occurs explains why you don't usually want to end your #define statement with a semicolon. If you did, the semicolon would also be substituted into the program wherever the defined name appeared. If you had defined PI as

#define PI    3.141592654;

and then written

return 2.0 * PI * r;

the preprocessor would replace the occurrence of the defined name PI by 3.141592654;. The compiler would therefore see this statement as

return 2.0 * 3.141592654; * r;

after the preprocessor had made its substitution, which would result in a syntax error. Remember not to put a semicolon at the end of your define statements unless you're really sure you want one there.

A preprocessor definition does not have to be a valid Objective-C expression in its own right, as long as the resulting expression is valid wherever it is used. For instance, you could set up these definitions:

#define AND    &&
#define OR    ||

Then, you could write expressions such as

if ( x > 0 AND x < 10 )
  ...

and

if ( y == 0 OR y == value )
  ...

You could even include a define for the equality test:

#define EQUALS  ==

Then, you could write the following statement:

if ( y EQUALS 0 OR y EQUALS value )
  ...

This removes the very real possibility of mistakenly using a single equal sign for the equality test.

Although these examples illustrate the power of the #define, you should note that it is commonly considered bad programming practice to redefine the syntax of the underlying language in such a manner. Plus, it makes it harder for someone else to understand your code.

To make things even more interesting, a defined value can itself reference another defined value. So, the two defines

#define PI    3.141592654
#define TWO_PI  2.0 * PI

are perfectly valid. The name TWO_PI is defined in terms of the previously defined name PI, thus obviating the need to spell out the value 3.141592654 again.

Reversing the order of the defines, as in

#define TWO_PI  2.0 * PI
#define PI    3.141592654

is also valid. The rule is that you can reference other defined values in your definitions provided everything is defined at the time the defined name is used in the program.

Good use of defines often reduces the need for comments within the program. Consider the following statement:

if ( year % 4 == 0 && year % 100 != 0 || year % 400 == 0 )
  ...

This expression tests whether the variable year is a leap year. Now, consider the following define and the subsequent if statement:

#define IS_LEAP_YEAR  year % 4 == 0 && year % 100 != 0  \
                   || year % 400 == 0
  ...
if ( IS_LEAP_YEAR )
  ...

Normally, the preprocessor assumes that a definition is contained on a single line of the program. If a second line is needed, the last character on the line must be a backslash character. This character signals a continuation to the preprocessor and is otherwise ignored. The same holds true for more than one continuation line; each line to be continued must be ended with a backslash character.

The preceding if statement is far easier to understand than the one shown directly before it. No comment is needed because the statement is self-explanatory. Of course, the definition restricts you to testing the variable year to see whether it's a leap year. It would be nice if you could write a definition to see whether any year were a leap year and not just the variable year. Actually, you can write a definition to take one or more arguments, which leads us to our next point of discussion.

IS_LEAP_YEAR can be defined to take an argument called y as follows:

#define IS_LEAP_YEAR(y)  y % 4 == 0 && y % 100 != 0  \
                      || y % 400 == 0

Unlike a method definition, you do not define the type of the argument y here because you are merely performing a literal text substitution and not calling a function. Note that when defining a name with arguments, no spaces are permitted between the defined name and the left parenthesis of the argument list.

With the previous definition, you can write a statement such as follows:

if ( IS_LEAP_YEAR (year) )
  ...

This tests whether the value of year is a leap year. Or, you could write

if ( IS_LEAP_YEAR (nextYear) )
  ...

to test whether the value of nextYear is a leap year. In the preceding statement, the definition for IS_LEAP_YEAR is directly substituted inside the if statement, with the argument nextYear replacing y wherever it appears in the definition. So, the if statement would actually be seen by the compiler as follows:

if ( nextYear % 4 == 0 && nextYear % 100 != 0 || nextYear % 400 == 0 )
  ...
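Incidentally, the rule that no space may appear between the macro name and the left parenthesis is easy to violate by accident. Consider this hypothetical definition (TWICE is a made-up name, used only for this illustration):

#define TWICE (x)  2 * x

Because of the space, the preprocessor treats this as an ordinary definition of the name TWICE whose replacement text begins with (x). A subsequent use such as TWICE (5) would therefore expand to (x) 2 * x (5), which is a syntax error, not the macro call that was intended.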

Definitions are frequently called macros. This terminology is more often applied to definitions that take one or more arguments.

Here's a macro called SQUARE that simply squares its argument:

#define SQUARE(x) x * x

Although the macro definition for SQUARE is straightforward, there is an interesting pitfall that you must be careful to avoid when defining macros. As we have described, the statement

y = SQUARE (v);

assigns the value of v² to y. What do you think would happen in the case of the statement

y = SQUARE (v + 1);

This statement does not assign the value of (v + 1)² to y as you would expect. Because the preprocessor performs a literal text substitution of the argument into the macro definition, the preceding expression is actually evaluated as follows:

y = v + 1 * v + 1;

This obviously does not produce the expected results. To handle this situation properly, parentheses are needed in the definition of the SQUARE macro:

#define SQUARE(x)  ( (x) * (x) )

Even though the previous definition might look strange, remember that it is the entire expression as given to the SQUARE macro that is literally substituted wherever x appears in the definition. With your new macro definition for SQUARE, the statement

y = SQUARE (v + 1);

is then correctly evaluated as

y = ( (v + 1) * (v + 1) );

The following macro lets you easily create new fractions from your Fraction class “on-the-fly”:

#define MakeFract(x,y) ([[Fraction alloc] initWith: x over: y])

Then you can write expressions such as

myFract = MakeFract (1, 3);  // Make the fraction 1/3

or even

sum = [MakeFract (n1, d1) add: MakeFract (n2, d2)];

to add the fractions n1/d1 and n2/d2 together.

The conditional expression operator can be particularly handy when defining macros. The following defines a macro called MAX that gives the maximum of two values:

#define MAX(a,b)  ( ((a) > (b)) ? (a) : (b) )

This macro enables you to subsequently write statements such as this:

limit = MAX (x + y, minValue);

This assigns to limit the maximum of x + y and minValue. Parentheses are placed around the entire MAX definition to ensure that an expression such as

MAX (x, y) * 100

is evaluated properly; and parentheses are individually placed around each argument to ensure that expressions such as the following are correctly evaluated:

MAX (x & y, z)

The & operator is the bitwise AND operator, and it has lower precedence than the > operator used in the macro. Without the parentheses in the macro definition, the > operator would be evaluated before the bitwise AND, producing the incorrect result.
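To see the failure concretely, suppose the macro had been written without the inner parentheses (BAD_MAX is a deliberately flawed version, shown here only for contrast):

#define BAD_MAX(a,b)  ( a > b ? a : b )

The call BAD_MAX (x & y, z) would then expand to

( x & y > z ? x & y : z )

and because > binds more tightly than &, the test is evaluated as x & (y > z), not as (x & y) > z.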

The following macro tests whether a character is a lowercase letter:

#define IS_LOWER_CASE(x) ( ((x) >= 'a') && ((x) <= 'z') )

It thereby permits expressions such as

if ( IS_LOWER_CASE (c) )
   ...

to be written. You can even use this macro in another macro definition to convert a character from lowercase to uppercase, leaving any nonlowercase character unchanged:

#define TO_UPPER(x) ( IS_LOWER_CASE (x) ? (x) - 'a' + 'A' : (x) )

Again, you are dealing with a standard ASCII character set here. When you learn about Foundation string objects in Part II, you'll see how to perform case conversion that will work for international (Unicode) character sets as well.
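As a quick sketch of these two macros in action (again assuming an ASCII character set):

char c = 'g';

if ( IS_LOWER_CASE (c) )
    printf ("%c becomes %c\n", c, TO_UPPER (c));

This displays g becomes G, because 'g' - 'a' + 'A' evaluates to the ASCII code for 'G'.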

The # Operator

If you place a # in front of a parameter in a macro definition, the preprocessor creates a constant C-style string out of the macro argument when the macro is invoked. For example, the definition

#define str(x)  # x

causes the subsequent invocation

str (testing)

to be expanded into

"testing"

by the preprocessor. The printf call

printf (str (Programming in Objective-C is fun.\n));

is therefore equivalent to

printf ("Programming in Objective-C is fun. ");

The preprocessor literally inserts double quotation marks around the actual macro argument. Any double quotation marks or backslashes in the argument are preserved by the preprocessor. So

str ("hello")

produces

""hello""

A more practical example of the use of the # operator might be in the following macro definition:

#define printint(var)  printf (# var " = %i\n", var)

This macro is used to display the value of an integer variable. If count is an integer variable with a value of 100, the statement

printint (count);

is expanded into

printf ("count" " = %i ", count);

The compiler concatenates two adjacent literal strings together to make a single string out of them. Therefore, after concatenation is performed on the two adjacent strings, the statement becomes the following:

printf ("count = %i ", count);

The ## Operator

This operator is used in macro definitions to join two tokens together. It is preceded (or followed) by the name of a parameter to the macro. The preprocessor takes the actual argument to the macro that is supplied when the macro is invoked and creates a single token out of that argument and whatever token follows (or precedes) the ##.

Suppose, for example, you have a list of variables x1 through x100. You can write a macro called printx that simply takes as its argument an integer value 1–100 and displays the corresponding x variable as shown here:

#define printx(n)  printf ("%i\n", x ## n)

The portion of the define that reads

x ## n

says to take the tokens that occur before and after the ## (the letter x and the argument n, respectively) and make a single token out of them. So the call

printx (20);

is expanded into the following:

printf ("%i ", x20);

The printx macro can even use the previously defined printint macro to get the variable name as well as its value displayed:

#define printx(n)  printint(x ## n)

The invocation

printx (10);

first expands into

printint (x10);

and then into

printf ("x10" " = %i ", x10);

and finally into the following:

printf ("x10 = %i ", x10);

The #import and #include Statements

After you have programmed in Objective-C for a while, you will find yourself developing your own set of macros, which you will want to use in each of your programs. But instead of having to type these macros into each new program you write, the preprocessor enables you to collect all your definitions into a separate file and then include them in your program, using the #import statement. These files—similar to the ones you've previously encountered but haven't written yourself—normally end with the characters .h and are referred to as header or include files.

The #include statement can also be used to include the contents of a file into your program. The difference is that using #import guarantees that the file is included only once in your program and not multiple times, which frequently causes compiler errors. This can happen inadvertently—for example, when you include a class definition header file in your program. That header file likely includes its own header files, some of which can overlap with previously included files. When #include is used instead of #import, the file is included at that point in the program, whether it has been previously included or not. You can work around this when using #include, and we'll talk about that later in this chapter.

Suppose you were writing a series of programs for performing various metric conversions. You might want to set up some defines for the various constants you would need for performing your conversions:

#define INCHES_PER_CENTIMETER  0.394
#define CENTIMETERS_PER_INCH  (1 / INCHES_PER_CENTIMETER)

#define QUARTS_PER_LITER       1.057
#define LITERS_PER_QUART      (1 / QUARTS_PER_LITER)

#define OUNCES_PER_GRAM        0.035
#define GRAMS_PER_OUNCE       (1 / OUNCES_PER_GRAM)
  ...

Suppose you entered the previous definitions into a separate file on the system called metric.h. Any program that subsequently needed to use any of the definitions contained in the metric.h file could then do so by simply issuing the preprocessor directive:

#import "metric.h"

This statement must appear before any of the defines contained in metric.h are referenced and is typically placed at the beginning of the source file. The preprocessor looks for the specified file on the system and effectively copies the contents of the file into the program at the precise point at which the #import statement appears. So, any statements inside the file are treated just as if they had been directly typed into the program at that point.

The double quotation marks around the header filename instruct the preprocessor to look for the specified file in one or more file directories (typically, first in the same directory that contains the source file, but the actual places the preprocessor searches are system dependent). If the file isn't located, the preprocessor automatically searches other special directories, as described next.

Enclosing the filename within the characters < and > instead, as in

#import <stdio.h>

causes the preprocessor to look for the include file in the special “system” header file directory or directories. Once again, these directories are system dependent. On Unix (including Mac OS X) systems, the system include file directory is /usr/include. So, on those systems the standard header file objc/Object.h is found in /usr/include/objc/Object.h.

To see how include files are used in an actual program example, type the six defines given previously into a file called metric.h. Then type and run Program 12.1 in the normal manner.

Program 12.1.


/* Illustrate the use of the #import statement
   Note: This program assumes that definitions are
   set up in a file called metric.h       */

#import <stdio.h>
#import "metric.h"

int main (void)
{
   float liters, gallons;

   printf ("*** Liters to Gallons ***\n\n");
   printf ("Enter the number of liters: ");
   scanf ("%f", &liters);

   gallons = liters * QUARTS_PER_LITER / 4.0;
   printf ("%g liters = %g gallons\n", liters, gallons);

   return 0;
}


Program 12.1. Output


*** Liters to Gallons ***

Enter the number of liters: 55.75
55.75 liters = 14.7319 gallons


Program 12.1 is a rather simple one because it shows only a single defined value (QUARTS_PER_LITER) being referenced from the include file metric.h. Nevertheless, the point is well made: After the definitions have been entered into metric.h, they can be used in any program that uses an appropriate #import statement.

One of the nicest things about the import file capability is that it enables you to centralize your definitions, thus ensuring that all programs reference the same value. Furthermore, errors discovered in one of the values contained in the include file need be corrected in only that one spot, thus eliminating the need to correct each and every program that uses the value. Any program that referenced the incorrect value would simply have to be recompiled and would not have to be edited.

Besides stdio.h and objc/Object.h, two other useful system include files are limits.h and float.h. The first file, limits.h, contains system-dependent values that specify the sizes of various character and integer data types. For instance, the maximum size of an int is defined by the name INT_MAX inside this file. The maximum size of an unsigned long int is defined by ULONG_MAX, and so on.

The float.h header file gives information about floating-point data types. For example, FLT_MAX specifies the maximum floating-point number, and FLT_DIG specifies the number of decimal digits of precision for a float type.
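A minimal sketch that displays a few of these values (the numbers printed are, of course, system dependent):

#import <stdio.h>
#import <limits.h>
#import <float.h>

int main (void)
{
    printf ("INT_MAX   = %i\n", INT_MAX);
    printf ("ULONG_MAX = %lu\n", ULONG_MAX);
    printf ("FLT_DIG   = %i\n", FLT_DIG);
    printf ("FLT_MAX   = %g\n", FLT_MAX);

    return 0;
}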

Other system include files contain declarations for various functions stored inside the system library. For example, the include file string.h contains declarations for the library routines that perform character string operations such as copying, comparing, and concatenating. If you're working with the Foundation string classes exclusively (discussed in Chapter 15, “Numbers, Strings, and Collections”), you probably won't need to use any of these routines in your programs.

Conditional Compilation

The Objective-C preprocessor offers a feature known as conditional compilation. Conditional compilation is often used to create one program that can be compiled to run on different computer systems. It is also often used to switch on or off various statements in the program, such as debugging statements that print the values of variables or trace the flow of program execution.

The #ifdef, #endif, #else, and #ifndef Statements

Unfortunately, a program sometimes must rely on system-dependent parameters—on a filename, for example—that can be specified differently on different systems or on a particular feature of the operating system.

If you had a large program that had many such dependencies on the particular hardware and/or software of the computer system (and this should be minimized as much as possible), you might end up with many defines whose values would have to be changed when the program was moved to another computer system.

You can help reduce the problem of having to change these defines when the program is moved and can incorporate into the program the values of these defines for each different machine by using the conditional compilation capabilities of the preprocessor. As a simple example, the statements

#ifdef UNIX
#  define DATADIR  "/uxn1/data"
#else
#  define DATADIR  "usrdata"
#endif

have the effect of defining DATADIR to "/uxn1/data" if the symbol UNIX has been previously defined and to "usrdata" otherwise. As you can see here, you are allowed to put one or more spaces after the # that begins a preprocessor statement.

The #ifdef, #else, and #endif statements behave as you would expect. If the symbol specified on the #ifdef line has been already defined—through a #define statement or through the command line when the program is compiled—lines that follow up to a #else, #elif, or #endif are processed by the compiler; otherwise, they are ignored.

To define the symbol UNIX to the preprocessor, the statement

#define UNIX  1

or even just

#define UNIX

will suffice. As you can see, no text at all has to appear after the defined name to satisfy the #ifdef test. The compiler also permits you to define a name to the preprocessor when the program is compiled by using a special option to the compiler command. The command line

gcc -D UNIX program.m -lobjc

defines the name UNIX to the preprocessor, causing all #ifdef UNIX statements inside program.m to evaluate as TRUE (note that the -D UNIX must be typed before the program name on the command line). This technique enables names to be defined without having to edit the source program.

A value can also be assigned to the defined name on the command line. For example

gcc -D GNUDIR=/c/gnustep program.m

invokes the compiler on the file program.m, defining the name GNUDIR to be the text /c/gnustep.

The #ifndef statement follows along the same lines as the #ifdef. This statement is used in a similar way, except it causes the subsequent lines to be processed if the indicated symbol is not defined. This statement is often used to avoid multiple inclusion of a file in a program, and it is the method recommended by GNU developers as the way around use of the #import statement.

For example, inside a header file you want to include just once in a program, you typically define a unique identifier that can be tested later. Consider this sequence of statements:

#ifndef _OBJC_OBJECT_H_
#define _OBJC_OBJECT_H_
...
#endif /* _OBJC_OBJECT_H_ */

Suppose you typed this into a file called obj.h.

If you included this file in your program with an #include statement, such as

#include "obj.h"

the #ifndef would test whether _OBJC_OBJECT_H_ were defined. Because it wouldn't be, the lines between the #ifndef and the matching #endif would be included in the program. Notice that the very next line defines _OBJC_OBJECT_H_. If an attempt were made to again include the file in the program, _OBJC_OBJECT_H_ would be defined, so the statements that followed would not be included in the program, thus avoiding multiple inclusion of the header file in the program.

The lines shown previously are actually from the standard header file <objc/Object.h>. So, if you did use #include instead of #import in your program, you could do so without worrying about duplicate inclusion of the file.

As already mentioned, conditional compilation is useful when debugging programs. You might have many printf calls embedded in your program that are used to display intermediate results and trace the flow of execution. These statements can be turned on by conditionally compiling them into the program if a particular name, say DEBUG, is defined. For example, a sequence of statements such as the following could be used to display the value of some variables only if the program had been compiled with the name DEBUG defined:

#ifdef DEBUG
  printf ("User name = %s, id = %i ", userName, userId);
#endif

You might have many such debugging statements throughout the program. Whenever the program is being debugged, it can be compiled with the -D DEBUG command-line option to have all the debugging statements compiled. When the program is working correctly, it can be recompiled without the -D option. This also has the added benefit of reducing the size of the program, because the debugging statements are not compiled in.
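A common refinement, offered here only as a sketch (the name DEBUG_PRINT is not from this chapter), is to bury the #ifdef test inside a single macro so that each debugging statement doesn't need its own #ifdef/#endif pair:

#ifdef DEBUG
#  define DEBUG_PRINT(...)  printf (__VA_ARGS__)
#else
#  define DEBUG_PRINT(...)
#endif
  ...
DEBUG_PRINT ("User name = %s, id = %i\n", userName, userId);

When DEBUG is not defined, each DEBUG_PRINT call expands to nothing at all. (Macros that take a variable number of arguments, written with ... and __VA_ARGS__, require a C99-capable compiler such as gcc.)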

The #if and #elif Preprocessor Statements

The #if preprocessor statement offers a more general way of controlling conditional compilation. The #if statement can be used to test whether a constant expression evaluates to nonzero. If the result of the expression is nonzero, subsequent lines up to a #else, #elif, or #endif are processed; otherwise, they are skipped. As an example of how this can be used, assume you define the name OS, which is set to 1 if the operating system is Macintosh OS, to 2 if the operating system is Windows, to 3 if the operating system is Unix, and so on. You could write a sequence of statements to conditionally compile statements based on the value of OS as follows:

#if  OS == 1 /* Mac OS */
...
#elif OS == 2 /* Windows */
  ...
#elif OS == 3 /* Unix */
  ...
#else
  ...
#endif

With most compilers, you can assign a value to the name OS on the command line using the -D option discussed earlier. The command line

gcc -D OS=2 program.m -lobjc

compiles program.m with the name OS defined as 2. This causes the program to be compiled to run under Windows.

The special operator

defined (name)

can also be used in #if statements. The set of preprocessor statements

#if defined (DEBUG)
  ...
#endif

and

#ifdef DEBUG
  ...
#endif

do the same thing. The statements

#if defined (WINDOWS) || defined (WINDOWSNT)
# define BOOT_DRIVE "C:/"
#else
# define BOOT_DRIVE "D:/"
#endif

define BOOT_DRIVE as "C:/" if either WINDOWS or WINDOWSNT is defined and define it as "D:/" otherwise.

Another common use of #if is in code sequences that look like this:

#if defined (DEBUG) && DEBUG
...
#endif

This causes the statements after the #if and up to the #endif to be processed only if DEBUG is defined and has a nonzero value.

The #undef Statement

On some occasions, you might need to cause a defined name to become undefined. This is done with the #undef statement. To remove the definition of a particular name, you write the following:

#undef name

Thus, the statement

#undef LINUX

removes the definition of LINUX. Subsequent #ifdef LINUX or #if defined (LINUX) statements evaluate to FALSE.
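One practical reason for undefining a name is to give it a new value, because redefining a name with different text and no intervening #undef typically draws a compiler complaint. A short sketch (BUFFER_SIZE is a hypothetical name used only here):

#define BUFFER_SIZE  512
  ...
#undef  BUFFER_SIZE
#define BUFFER_SIZE  1024

After the second #define, any subsequent use of BUFFER_SIZE substitutes 1024.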

This concludes our discussion on the preprocessor. Some other preprocessor statements that weren't described here are described in Appendix B, “Objective-C Language Summary.”

Exercises

  1. Locate the system header files stdio.h, limits.h, and float.h on your machine (on Unix systems, look inside the /usr/include directory). Examine the files to see what's in them. If these files include other header files, be sure to track them down as well to examine their contents.
  2. Define a macro called MIN that gives the minimum of two values. Then write a program to test the macro definition.
  3. Define a macro called MAX3 that gives the maximum of three values. Write a program to test the definition.
  4. Write a macro called IS_UPPER_CASE that gives a nonzero value if a character is an uppercase letter.
  5. Write a macro called IS_ALPHABETIC that gives a nonzero value if a character is an alphabetic character. Have the macro use the IS_LOWER_CASE macro defined in the chapter text and the IS_UPPER_CASE macro defined in exercise 4.
  6. Write a macro called IS_DIGIT that gives a nonzero value if a character is a digit '0' through '9'. Use this macro in the definition of another macro called IS_SPECIAL, which gives a nonzero result if a character is a special character—that is, not alphabetic and not a digit. Be sure to use the IS_ALPHABETIC macro developed in exercise 5.
  7. Write a macro called ABSOLUTE_VALUE that computes the absolute value of its argument. Make sure that an expression such as

    ABSOLUTE_VALUE (x + delta)

    is properly evaluated by the macro.

  8. Consider the definition of the printx macro from this chapter:

    #define printx(n)  printf ("%i\n", x ## n)

    Could the following be used to display the values of the 100 variables x1 through x100? Why or why not?

    for ( i = 1; i <= 100; ++i )
      printx (i);
