Chapter 9. Remote Procedure Calls

Remote Procedure Calls

Introduction

So far, the examples we have worked with have been run on the same workstation or host. However, as we gain expertise with interprocess communication techniques, it becomes evident that there will be many occasions when we will want to communicate with processes that may reside on different workstations. These workstations might be on our own local area network or part of a larger wide area network. In a UNIX-based, networked computing setting, there are several ways that communications of this nature can be implemented. This chapter examines the techniques involved with remote procedure calls (RPC).[1] As a programming interface, RPCs are designed to resemble standard, local procedure (function) calls. The client process (the process making the request) invokes a local procedure commonly known as a client stub that contains the network communication details and the actual RPC. The server process (the process performing the request) has a similar server stub, which contains its network communication details. Neither the client nor the server needs to be aware of the underlying network transport protocols. The programming stubs are usually created using a protocol compiler, such as Sun Microsystems' rpcgen. This chapter is based on Sun's RPC implementation as ported to Linux. The protocol compiler is passed a protocol definition file written in a C-like language. For rpcgen, the language used is called RPC language. The protocol definition file contains a definition of the remote procedure, its parameters with data types, and its return data type.

When the client invokes an RPC (generates a request), the client waits for the server to reply. Since the client must wait for a response, several coordination issues are of concern:

  • How long should a client wait for a reply from the server (the server could be down or very busy)? In general, RPCs address this problem by using a default timeout to limit the client's wait time.

  • If the client makes multiple, identical requests, how should the server handle them? The resolution of this problem proves to be program-specific. Depending upon the type of processing (such as a read request), the requested activity may indeed be done several times. In other settings, such as transaction processing, the request must be done only once. In these settings, the software must implement its own management routines. By definition, RPCs are independent of transport protocols; however, if an RPC runs on top of a reliable transport (such as TCP), the client can infer from receiving a reply from the server process that its request will be executed.

  • How can call-by-reference (the passing of address pointers) be implemented when the processes reside in separate process spaces? Further, it is entirely possible that the client and server processes, while not being on the same system, may even be executing on different platforms (e.g., Sun, VAX, IBM, etc.). To resolve these issues and to ensure that client and server processes can communicate using RPC, the data that is passed between the processes is converted to an architecture-independent representation. The data format used by Sun is known as XDR (eXternal Data Representation). The client and server program stubs are responsible for translating the transmitted data to and from the XDR format. This process is known as serialization and deserialization. The high-level relationships of client and server processes using an RPC are shown in Figure 9.1.


    Figure 9.1. An RPC client–server communications overview.

We will find that, while hidden from the casual user, RPC uses socket-based communication. The details of socket-based communication are covered in Chapter 10, “Sockets.”

At a system level, the rpcinfo command can be used to direct the system to display all of the currently registered RPC services. When the -p flag is passed to rpcinfo, the services for the current host are displayed. The rpcinfo command is often located in /usr/sbin. If this is the case on your system, /usr/sbin should be part of the search path (so you will not have to fully qualify the path for each invocation). Some versions of rpcinfo require the host name to be specified. If you do not know the host name, the hostname command will display the name of the host upon which you are working.

Executing Remote Commands at a System Level

Before delving into the fine points of RPCs from a programming standpoint, it is instructive to look at the execution of remote commands at a system (command-line) level. Most UNIX systems offer several commands that allow the user to execute commands on a remote system. Historically, the most commonly supported remote execution command is rsh (the remote shell command).

The rsh command connects to a specified host and executes the indicated command. Standard input from the local host is copied to standard input on the remote host. The remote host's standard output and error will be copied to the local host's standard output and error respectively. Signals such as interrupt, quit, and terminate are passed on to the remote host. As the rsh command has proven to be a security risk, users are encouraged to use, in its place, the ssh command found in the OpenSSH suite of tools. The ssh command provides secure encrypted communication between two hosts and supports more secure methods for authenticating a user (more on this will follow in a bit).

The general syntax for the ssh command is

linux$ ssh remote_host_name the_command

Figure 9.2 demonstrates using ssh on a system called linux to run the who command on the remote system called morpheus.

Example 9.2. A typical ssh command.

linux$ ssh morpheus who
gray@morpheus's password:        <-- 1
root     console    Feb 18 11:54     (:0)
root     pts/6      Mar 28 14:03     (:0.0)
gray     pts/2      Apr  8 11:29     (zeus)
root     pts/7      Apr  5 12:37     (:0.0)
root     pts/8      Mar 28 13:02     (:0.0)
root     pts/3      Mar 14 12:11     (:0.0)
root     pts/9      Apr  4 12:10     (:0.0)
root     pts/10     Apr  4 12:15     (:0.0)
  • (1)No echo of password when entered.

The remote system (in this case morpheus) prompts for the user's password (as required for the remote system). The output of the command is displayed on the local host linux. It is possible to redirect the output produced by the remote command. However, there are some interesting wrinkles that we should be aware of when we specify I/O redirection with the command to be remotely executed. For example, the two command sequences that follow appear to be very similar:

linux$ ssh morpheus who > /tmp/whoosie
linux$ ssh morpheus who ">" /tmp/whoosie

The first command sequence places the output of the who command in the file whoosie in the tmp directory of the local host linux. The second command sequence places the output of the who command in the file whoosie in the tmp directory of the remote host morpheus! This occurs because in the second command sequence the redirection, which has been placed in quotes, is passed as part of the remote command and is not acted upon by the local host. If ssh is passed just the host name and not passed a command to execute, it will log into the specified host and provide the shell specified in the user's password entry. All communications with the remote shell are encrypted.

For ssh to execute a remote command, the user issuing the command must be authenticated.[2] This can be accomplished in a number of ways. Similar to rsh, if the host the user is logging in from is listed in the remote host's /etc/hosts.equiv (or /etc/shosts.equiv) file, and the user's login name is the same on both systems, the user is permitted access. Failing this, if the user's $HOME/.rhosts (or $HOME/.shosts) file on the remote host has the name of the local host and user's login name, then the user is granted access. However, as this sort of authentication is inherently insecure (due to IP, DNS, and routing spoofing), it is normally combined with public–private key encryption authentication.

For ssh, public–private key authentication can be specified in a number of ways (the following is an overview—see the manual page on ssh for all the gory details). The configuration file sshd_config (which is most often in the /etc/ssh directory) designates the authentication method. While four different approaches are available, most system administrators opt to let the system authenticate a request either by checking the user's public key or by prompting the user for his or her normal login password (the default).

The public–private key approach deserves some additional discussion. A user generates a public–private key pair by running the ssh-keygen utility. Newer versions of this utility permit the user to specify the type (-t) of key to be created. The choices are protocol version 1 (specified as rsa1) or protocol version 2 (specified as rsa or dsa). If rsa1 is specified, the keys are placed in separate files (usually called identity and identity.pub for private and public keys respectively) in the $HOME/.ssh directory. While permission-wise the identity.pub file is accessible to all, the identity file should not be readable by anyone other than its owner. The first time ssh is used to access a remote system, authentication information is added by ssh to the user's $HOME/.ssh/known_hosts file. Figure 9.3 shows the process of generating a public–private key and then using ssh to connect from a remote host back to the author's base system (linux.hartford.edu).

Example 9.3. Creating a public–private key and using ssh to access a system for the first time.

[gray@remote_sys ~]$ ssh-keygen -t rsa1        <-- 1

Generating public/private rsa1 key pair.
Enter file in which to save the key (/home/gray/.ssh/identity):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/gray/.ssh/identity.
Your public key has been saved in /home/gray/.ssh/identity.pub.
The key fingerprint is:
6b:8d:a5:32:7d:8e:cc:66:56:c2:60:5b:a3:76:23:10 gray@remote_sys.somewhere.edu

[gray@remote_sys ~] ssh linux.hartford.edu        <-- 2

The authenticity of host 'linux.hartford.edu(137.49.6.1)' can't be established.
RSA key fingerprint is 4b:a4:ac:a6:4f:22:43:e1:1a:35:6d:b9:19:41:fd:ba.
Are you sure you want to continue connecting (yes/no)? yes
Warning:Permanently added 'linux.hartford.edu,137.49.6.1' (RSA) to the list of known hosts.
[email protected]'s password:
Last login: Tue Apr  9 08:20:26 2002 from remote_sys.somewhere.edu
Red Hat Linux Thu Mar 29 18:44:10 CST 2001
[gray@linux ~]$
  • (1)Create an rsa1-type public and private key pair.

  • (2)From remote system, use ssh to access home system.

Executing Remote Commands in a Program

The library function rexec can be used in a program to execute a system-level command on a remote host. In many ways we can think of the rexec library function as a remote version of the system library function discussed earlier, as it allows us to request the execution of a command on a remote system. The syntax for rexec is summarized in Table 9.1.

Table 9.1. Summary of the rexec Library Call.

Include File(s)

<netdb.h>

Manual Section

3

Summary

int rexec( char **ahost,    unsigned short inport,
           char *user,      char *passwd,
           char *cmd,       int *fd2p );

Return

Success

Failure

Sets errno

A stream socket file descriptor

−1

 

The rexec library call takes six arguments. The first is a reference to the name of the remote host. This reference is passed by rexec to the gethostbyname network call for authentication (the details of the gethostbyname function are covered in Chapter 10). The second argument, inport, is an integer value that indicates the port to be used for the connection. Most often, the port number used with rexec is 512 (the port associated with the execution of remote commands, using TCP protocol). The port argument is followed by two character-string reference arguments that indicate the user's name and password respectively. If these entries are set to NULL, the system checks the contents of the file .netrc that resides in the user's home directory for machine (host), login (user name), and password information. If the $HOME/.netrc file does not exist or it contains only partial information, the user is prompted for his or her name and/or password. The sixth argument to rexec is a reference to an integer. If this value is not 0, rexec assumes it is a reference to a valid file descriptor and maps standard error from the execution of the remote command to the indicated file descriptor. If the rexec call is successful, it returns a valid stream socket file descriptor that is mapped to the local host's standard input and output. If the rexec function fails, it returns a −1.
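A .netrc entry supplying the credentials rexec looks for has the following general form (the host, login name, and password shown here are placeholders, not values from this chapter; as noted below, the file should be readable only by its owner):

```
machine morpheus login gray password my_secret
```

One entry of this form is given per remote host, all on one line.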

Program 9.1 demonstrates the use of the rexec library call.

Example 9.1. Using rexec in a program.

File : p9.1.cxx                           Note that on some systems you may need to install
  |     /*                                the nfs-utils, rsh, and rsh-server
  |          Using rexec                  packages to run this program. The package
  |     */                                manager rpm can be used to check if these
  |     #define _GNU_SOURCE               packages have been installed, e.g.,
  +     #include <iostream>
  |     #include <cstdio>                 $ /bin/rpm -qv nfs-utils
  |     #include <sys/types.h>
  |     #include <unistd.h>               Additionally, the rexec server service may need to
  |     #include <netinet/in.h>           be turned on. This can be accomplished with the
 10     #include <netdb.h>                chkconfig utility:
  |     using namespace std;
  |     int                               # /sbin/chkconfig --level 5 rexec on
  |     main(int argc, char *argv[]) {
  |       int    fd, count;
  +       char   buffer[BUFSIZ], *command, *host;
  |       if (argc != 3) {
  |         cerr <<  "Usage " << argv[0] << " host command" << endl;
  |         return 1;
  |       }
 20       host   = argv[1];
  |       command= argv[2];
  |       if ((fd=rexec(&host,htons(512),NULL,NULL,command,(int *)0)) == -1) {
  |         perror("rexec ");
  |         return 2;
  +       }
  |       while ((count = read(fd, buffer, BUFSIZ)) > 0)
  |         fwrite(buffer, count, 1, stdout);
  |       return 0;
  |     }

In Program 9.1 the first command-line argument is the host on which the remote command will be executed. The second command-line argument is the command that will be passed to the remote host. The invocation of the rexec function (line 22) uses the htons network call on its second argument to ensure the proper network byte ordering when specifying the port number.[3] The prototype for htons resides in the include file <netinet/in.h>. The arguments for the user name and password are set to NULL. This directs rexec to first check the .netrc file in the owner's home directory for user name and password information. If the .netrc file is not present or is incomplete, rexec prompts the user for this information. Note that while this is technically how things should work, on our system (running Red Hat Linux version 7.1) unless the .netrc file is present and its contents complete (includes the host and the user's login and password), the rexec call will fail. In a weak attempt to gain at least a semblance of security, rexec will only read .netrc files whose permissions grant read access solely to the file's owner. If the rexec call completes without error, the output from the execution of the command on the remote host is read and displayed to standard output on the local host. Figure 9.4 shows a compilation and run of Program 9.1. Note that in some versions of the OS the libraries specified by -lnsl and/or -lsocket may need to be included, as the object code for the network functions may reside in these separate libraries.

Example 9.4. Using Program 9.1.

linux$ g++ p9.1.cxx -o p9.1
linux$ p9.1 morpheus df
/                  (/dev/dsk/c0t0d0s0 ): 1066026 blocks   241598 files
/usr               (/dev/dsk/c0t0d0s4 ): 3538746 blocks   384628 files
/proc              (/proc             ):       0 blocks     3768 files
/dev/fd            (fd                ):       0 blocks        0 files
/etc/mnttab        (mnttab            ):       0 blocks        0 files
. . .

The rexec function communicates with rexecd (the remote execution daemon) on the host system. While the rexec function is interesting and does provide a somewhat painless (but generally insecure) way to execute commands on a remote host, we more frequently will want to write our own client–server pairs that will perform specific, directed tasks.

To round out the discussion, a command-line version of rexec can also be found in Linux. Usually, it resides in the /usr/bin directory. Its general syntax is

linux$  rexec [options] -l user_name  -p password  host  the_command

Unlike its library function counterpart, the command-line version of rexec does not seem to choke if the user's .netrc file is incomplete, and it will know enough to prompt if the user's login name or password is omitted.

Transforming a Local Function Call into a Remote Procedure

We begin our exploration of RPC programming by converting a simple program with a single local function call into a client–server configuration with a single RPC. Once generated, this RPC-based program can be run in a distributed setting whereby the server process, which will contain the function to be executed, can reside on a host different from the client process. The program that we will convert (Program 9.2) is a C program[4] that invokes a single local function, print_hello, which generates the message Hello, world. As written, the print_hello function will display its message and return to the function main the value returned from printf. The returned value indicates whether printf was successful in carrying out its action.[5]

Example 9.2. A simple C program to display a message.

File : hello.c
  |     /*
  |           A C program with a local function
  |     */
  |     #include <stdio.h>
  +     int print_hello( );
  |     int
  |     main( ){
  |       printf("main : Calling function.\n");
  |       if (print_hello())
 10         printf("main : Mission accomplished.\n");
  |       else
  |         printf("main : Unable to display message.");
  |       return 0;
  |     }
  +     int
  |     print_hello( ) {
  |       return printf("funct: Hello, world.\n");
  |     }

In its current configuration, the print_hello function and its invocation reside in a single source file. The output of Program 9.2 when compiled and run is shown in Figure 9.5.

Example 9.5. Output of Program 9.2.

linux$ hello
main : Calling function.
funct: Hello, world.
main : Mission accomplished.

The first step in converting a program with a local function call to an RPC is for the programmer to create a protocol definition file. This file will help the system keep track of what procedures are to be associated with the server program. The definition file is also used to define the data type returned by the remote procedure and the data types of its arguments. When using RPC, the remote procedure is part of a remote program that runs as the server process. The RPC language is used to define the remote program and its component procedures. The RPC language is actually XDR with the inclusion of two extensions—the program and version types. Appendix C addresses the syntax of the RPC language. For the diligent, the manual pages on xdr provide a good overview of XDR data type definitions and syntax.

Figure 9.6 contains the protocol definition file for the print_hello function. Syntactically, the RPC language is a mix of C and Pascal. By custom, the extension for protocol definition files is .x.

The keyword program marks the user-defined identifier DISPLAY_PRG as the name of the remote procedure program.[6] The program name, like the program name in a Pascal program, does not need to be the same as the name of the executable file. The program block encloses a group of related remote procedures. Nested within the program definition block is the keyword version followed by a second user-generated identifier, DISPLAY_VER, which is used to identify the version of the remote procedure. It is permissible to have several versions of the same procedure, each indicated by a different integer value. The ability to have different versions of the same procedure eases the upgrade process when updating software by facilitating backward compatibility. If the number of arguments, the data type of an argument, or the data type returned by the function change, the version number should be changed.

Example 9.6. Protocol definition file hello.x.

File : hello.x
  |     /*
  |         This is the protocol definition file. The programmer writes
  |         this file using the RPC language. This file is passed to the
  |         protocol generator rpcgen. Every remote procedure is part of
  +         a remote program. Each procedure has a name and number. A
  |         version number is also supplied so different versions of the
  |         same procedure may be generated.
  |     */
  |     program DISPLAY_PRG {
 10       version DISPLAY_VER {
  |         int print_hello( void ) = 1;
  |       } = 1;
  |     } = 0x20000001;

As this is our first pass at generating a remote procedure, the version number is set to 1 after the closing brace for the version block. Inside the version block is the declaration for the remote procedure (line 11).[7] A procedure number follows the remote procedure declaration. As there is only one procedure defined, the value is set to 1. An eight-digit hexadecimal program number follows the closing brace for the program block. The program, version, and procedure numbers form a triplet that uniquely identifies a specific remote procedure. To prevent conflicts, the numbering scheme shown in Table 9.2 should be used in assigning program numbers.

Protocol specifications can be registered with Sun by sending a request (including the protocol definition file) to [email protected]. Accepted specifications will receive a unique program number from Sun (in the range 00000000–1FFFFFFF).

Table 9.2. RPC Program Numbers.

Numbers

Description

00000000 - 1FFFFFFF

Defined by Sun

20000000 - 3FFFFFFF

User-defined

40000000 - 5FFFFFFF

User-defined for programs that dynamically allocate numbers

60000000 - FFFFFFFF

Reserved for future use

A check of the file /etc/rpc on your system will display a list of some of the RPC programs (and their program numbers) known to the system.

As shown below, the name of the protocol definition file is passed to the RPC protocol compiler, rpcgen, on the command line:

$ rpcgen -C hello.x

The rpcgen compiler produces the requisite C code to implement the defined RPCs. There are a number of command-line options for rpcgen, of which we will explore only a limited subset. A summary of the command-line options and syntax for rpcgen is given in Figure 9.7.

Example 9.7. Command-line options for rpcgen.

usage: rpcgen infile
   rpcgen [-abkCLNTM][-Dname[=value]] [-i size] [-I [-K seconds]] [-Y path] infile
   rpcgen [-c | -h | -l | -m | -t | -Sc | -Ss | -Sm] [-o outfile] [infile]
   rpcgen [-s nettype]* [-o outfile] [infile]
   rpcgen [-n netid]* [-o outfile] [infile]
options:
-a              generate all files, including samples
-b              backward compatibility mode (generates code for SunOS 4.1)
-c              generate XDR routines
-C              ANSI C mode
-Dname[=value]  define a symbol (same as #define)
-h              generate header file
-i size         size at which to start generating inline code
-I              generate code for inetd support in server (for SunOS 4.1)
-K seconds      server exits after K seconds of inactivity
-l              generate client side stubs
-L              server errors will be printed to syslog
-m              generate server side stubs
-M              generate MT-safe code
-n netid        generate server code that supports named netid
-N              supports multiple arguments and call-by-value
-o outfile      name of the output file
-s nettype      generate server code that supports named nettype
-Sc             generate sample client code that uses remote procedures
-Ss             generate sample server code that defines remote procedures
-Sm             generate makefile template
-t              generate RPC dispatch table
-T              generate code to support RPC dispatch tables
-Y path         directory name to find C preprocessor (cpp)

In our invocation, we have specified the -C option requesting rpcgen output conform to the standards for ANSI C. While some versions of rpcgen generate ANSI C output by default, the extra keystrokes ensure rpcgen generates the type of output you want. When processing the hello.x file, rpcgen creates three output files—a header file, a client stub, and a server stub file. Again, by default rpcgen gives the same name to the header file as the protocol definition file, replacing the .x extension with .h.[8] In addition, the client stub file is named hello_clnt.c (the rpcgen source file name with _clnt.c appended), and the server stub file is named hello_svc.c (using a similar algorithm). Should the default naming convention be too restrictive, the header file as well as the client and server stub files can be generated independently and their names uniquely specified. For example, to generate the header file with a uniquely specified name, rpcgen would be passed the following options and file names:

linux$ rpcgen -C -h -o unique_file_name  hello.x

With this invocation, rpcgen will generate a header file called unique_file_name.h. Using a similar technique, unique names for the client and server stub files can be specified with the -l and -m options (see Figure 9.7 for syntax details).

The contents of the header file, hello.h, generated by rpcgen is shown in Figure 9.8.

Example 9.8. File hello.h generated by rpcgen from the protocol definition file hello.x.

File : hello.h
  |     /*
  |      * Please do not edit this file.
  |      * It was generated using rpcgen.
  |      */
  +
  |     #ifndef _HELLO_H_RPCGEN
  |     #define _HELLO_H_RPCGEN
  |
  |     #include <rpc/rpc.h>
 10
  |
  |     #ifdef __cplusplus
  |     extern "C" {
  |     #endif
  +
  |
  |     #define DISPLAY_PRG 0x20000001
  |     #define DISPLAY_VER 1
  |
 20     #if defined(__STDC__) || defined(__cplusplus)
  |     #define print_hello 1
  |     extern  int * print_hello_1(void *, CLIENT *);
  |     extern  int * print_hello_1_svc(void *, struct svc_req *);
  |     extern int display_prg_1_freeresult (SVCXPRT *, xdrproc_t, caddr_t);
  +
  |     #else /* K&R C */
  |     #define print_hello 1
  |     extern  int * print_hello_1();
  |     extern  int * print_hello_1_svc();
 30     extern int display_prg_1_freeresult ();
  |     #endif /* K&R C */
  |
  |     #ifdef __cplusplus
  |     }
  +     #endif
  |
  |     #endif /* !_HELLO_H_RPCGEN */

The hello.h file created by rpcgen is referenced as an include file in both the client and server stub files. The #ifndef _HELLO_H_RPCGEN, #define _HELLO_H_RPCGEN, and #endif preprocessor directives prevent the hello.h file from being included multiple times. Within the file hello.h, the inclusion of the file <rpc/rpc.h>, as noted in its internal comments, “. . . just includes the billions of rpc header files necessary to do remote procedure calling.”[9] The preprocessor symbol __cplusplus (see line 12) is used to determine if a C++ programming environment is present. In a C++ environment, the compiler internally adds a series of suffixes to function names to encode the data types of their parameters. These new “mangled” function names allow C++ to check functions to ensure parameters match correctly when the function is invoked. The C compiler does not provide the mangled function names that the C++ compiler needs. The C++ compiler has to be warned that standard C linking conventions and non-mangled function names are to be used. This is accomplished by the lines following the #ifdef __cplusplus compiler directive.

The program and version identifiers specified in the protocol definition file are found in the hello.h file, as defined constants (lines 17 and 18). These constants are assigned the value specified in the protocol definition file. Since we indicated the -C option to rpcgen (standard ANSI C), the if branch of the preprocessor directive (i.e., #if defined (__STDC__)) contains the statements we are interested in. If the remote procedure name in the protocol definition file was specified in uppercase, it is mapped to lowercase in the header file. The procedure name is defined as an integer and assigned the value previously given as its procedure number. Note that we will find this defined constant used again in a switch statement in the server stub to select the code to be executed when calling the remote procedure.

Following this definition are two print_hello function prototypes. The first prototype, print_hello_1, is used by the client stub file. The second, print_hello_1_svc, is used by the server stub file. The naming convention used by rpcgen is to use the name of the remote procedure as the root and append an underscore (_), version number (1), for the client stub, and underscore, version number, underscore, and svc for the server. The else branch of the preprocessor directive contains a similar set of statements that are used in environments that do not support standard C prototyping.

Before we explore the contents of the client and server stub files created by rpcgen, we should look at how to split our initial program into client and server components. Once the initial program (for example hello.c) is split, and we have run rpcgen, we will have the six files shown in Figure 9.9 available to us.


Figure 9.9. Client-server files and relationships.

We begin with writing the client component. As in the initial program, the client invokes the print_hello function. However, in our new configuration, the code for the print_hello function, which used to be a local function, resides in a separate program that is run by the server process. The code for the client component program, which has been placed in a file named hello_client.c, is shown in Program 9.3.

Example 9.3. The client program hello_client.c.

File : hello_client.c
  |     /*
  |         The CLIENT program:  hello_client.c
  |         This will be the client code executed by the local client process.
  |      */
  +     #include <stdio.h>
  |     #include "hello.h"             /* Generated by rpcgen from hello.x  */
  |     int
  |     main(int argc, char *argv[]) {
  |       CLIENT         *client;
 10       int            *return_value, filler;
  |       char           *server;
  |     /*
  |         We must specify a host on which to run.  We will get the host name
  |         from the command line as argument 1.
  +      */
  |       if (argc != 2) {
  |         fprintf(stderr, "Usage: %s host_name\n", *argv);
  |         exit(1);
  |       }
 20        server = argv[1];
  |     /*
  |         Generate the client handle to call the server
  |      */
  |        if ((client=clnt_create(server,      DISPLAY_PRG,
  +                                DISPLAY_VER, "tcp")) == (CLIENT *) NULL) {
  |         clnt_pcreateerror(server);
  |         exit(2);
  |       }
  |       printf("client : Calling function.\n");
 30       return_value = print_hello_1((void *) &filler, client);
  |       if (*return_value)
  |         printf("client : Mission accomplished.\n");
  |       else
  |         printf("client : Unable to display message.\n");
  +       return 0;
  |     }

While much of the code is similar to the original hello.c program, some changes have been made to accommodate the RPC. Let's examine these changes point by point. At line 6 the file hello.h is included. This file, generated by rpcgen and whose contents were discussed previously, is assumed to reside locally.

In this example, we pass information from the command line to the function main in the client program. Therefore, the empty parameter list for main has been replaced with standard C syntax to reference the argc and argv parameters. Following this, in the declaration section of the client program, a pointer to the data type CLIENT is declared. A description of the CLIENT data type is shown in Figure 9.10.

The CLIENT typedef is found in the include file <rpc/clnt.h>. The reference to the CLIENT data structure will be used when the client handle is generated. Following the declarations in Program 9.3 is a section of code to obtain the host name on which the server process will be running. In the previous invocation, this was not a concern, as all code was executed locally. However, in this new configuration, the client process must know the name of the host where the server process is located; it cannot assume the server program is running on the local host. The name of the host is passed via the command line as the first argument to hello_client. As written, there is no checking to determine if a valid, reachable host name has been passed. The client handle is created next (line 24). This is done with a call to the clnt_create library function. The clnt_create library function, which is part of a suite of remote procedure functions, is summarized in Table 9.3.

Example 9.10. The CLIENT data structure.

struct CLIENT {
  AUTH  *cl_auth;                                /* authenticator           */
  struct clnt_ops {
    enum clnt_stat (*cl_call) (CLIENT *, u_long, xdrproc_t, caddr_t, xdrproc_t,
                               caddr_t, struct timeval);
                                                 /* call remote procedure   */
    void (*cl_abort) (void);                     /* abort a call            */
    void (*cl_geterr) (CLIENT *, struct rpc_err *);
                                                 /* get specific error code */
    bool_t (*cl_freeres) (CLIENT *, xdrproc_t, caddr_t);
                                                 /* frees results           */
    void (*cl_destroy) (CLIENT *);               /* destroy this structure  */
    bool_t (*cl_control) (CLIENT *, int, char *);
                                                  /* the ioctl() of rpc     */
  } *cl_ops;
  caddr_t cl_private;                             /* private stuff          */
};

Table 9.3. Summary of the clnt_create Library Call.

Include File(s)

<rpc/rpc.h>

Manual Section

3N

Summary

CLIENT *clnt_create(char *host,  u_long prog,
                    u_long vers, char *proto );

Return

Success

Failure

Sets errno

A valid client handle

NULL

Yes

The clnt_create library call requires four arguments. The first, host, a character string reference, is the name of the remote host where the server process is located. The next two arguments, prog and vers, are, respectively, the program and version number. These values are used to indicate the specific remote procedure. Notice the defined constants generated by rpcgen are used for these two arguments. The proto argument is used to designate the class of transport protocol. In Linux, this argument may be set to either tcp or udp. Keep in mind that UDP (User Datagram Protocol) encoded messages are limited to 8KB of data. Additionally, UDP is, by definition, less reliable than TCP (Transmission Control Protocol). However, UDP does require less system overhead.

Table 9.4. Summary of the clnt_pcreateerror Library Call.

Include File(s)

<rpc/rpc.h>

Manual Section

3N

Summary

void clnt_pcreateerror(char *s);

Return

Success

Failure

Sets errno

Print an RPC create error message to standard error.

  

If the clnt_create library call fails, it returns a NULL value. If this occurs, as shown in the example, the library routine clnt_pcreateerror can be invoked to display a message that indicates the reason for failure. See Table 9.4.

The error message generated by clnt_pcreateerror, which indicates why the creation of the client handle failed, is appended to the string passed as clnt_pcreateerror's single argument (see Table 9.5 for details). The argument string and the error message are separated by a colon, and the entire message is followed by a newline. If you want more control over the error messaging process, there is another library call, clnt_spcreateerror(char *s), that returns the error message as a string that can be incorporated in a personalized error message. In addition, the cf_stat member of the external structure rpc_createerr may be examined directly to determine the source of the error.

Table 9.5. clnt_create Error Messages.

#

Constant

clnt_pcreateerror Message

Explanation

13

RPC_UNKNOWNHOST

Unknown host

Unable to find the referenced host system.

17

RPC_UNKNOWNPROTO

Unknown protocol

The protocol indicated by the proto argument is not found or is invalid.

19

RPC_UNKNOWNADDR

Remote server address unknown

Unable to resolve address of remote server.

21

RPC_NOBROADCAST

Broadcast not supported

System does not allow broadcasting of messages (i.e., sending to all rpcbind daemons on a network).

Returning to the client program, the prototype for the print_hello function has been eliminated. The function prototype is now in the hello.h header file. The invocation of the print_hello function uses its new name, print_hello_1. The function now returns not an integer value but a pointer to an integer, and has two arguments (versus none). By design, all RPCs return a pointer reference. In general, all arguments passed to the RPC are passed by reference, not by value. As this function originally did not have any parameters, the identifier filler is used as a placeholder. The second argument to print_hello_1, client, is the reference to the client structure returned by the clnt_create call. The server component, which now resides in the file hello_server.c, is shown in Program 9.4.

Example 9.4. The hello_server.c component.

File : hello_server.c
  |     /*
  |         The SERVER program: hello_server.c
  |         This will be the server code executed by the "remote" process
  |     */
  +     #include <stdio.h>
  |     #include "hello.h"          /* is generated by rpcgen from hello.x  */
  |     int *
  |     print_hello_1_svc(void * filler, struct svc_req * req) {
  |       static int  ok;
 10       ok = printf("server : Hello, world.\n");
  |       return (&ok);
  |     }

The server component contains the code for the print_hello function. Notice that to accommodate the RPC, several things have been added and/or modified. First, as noted in the discussion of the client program, the print_hello function now returns an integer pointer, not an integer value (line 7). In this example, the address that is to be returned is associated with the identifier ok. This identifier is declared to be of storage class static (line 9). It is imperative that the returned identifier be static, as opposed to local (automatic). Local identifiers are allocated on the stack, and a reference to their contents would be invalid once the function returns. The name of the function has had an additional _1 appended to it (the version number). As the -C option was used with rpcgen, the auxiliary suffix _svc has also been added to the function name. Do not be concerned by the apparent mismatch of function names. The mapping of the function invocation as print_hello_1 in the client program to print_hello_1_svc in the server program is done by the code found in the stub file hello_svc.c produced by rpcgen.

The first argument passed to the print_hello function is a pointer reference. If needed, multiple items (representing multiple parameters) can be placed in a structure and the reference to the structure passed. In newer versions of rpcgen, the -N flag can be used to write multiple argument RPCs when a parameter is to be passed by value, not reference, or when a value, not a pointer reference, is to be returned by the RPC. A second argument, struct svc_req *req, has also been added. This argument will be used to communicate invocation information.

The client component (program) is compiled first. When only a few files are involved, a straight command-line compilation sequence is adequate. Later we will discuss how to generate a make file to automate the compilation process. The compiler is passed the names of the two client files, hello_client.c (which we wrote) and hello_clnt.c (which was generated by rpcgen). We specify the executable to be placed in the file client. Figure 9.11 shows details of the compilation command.

Example 9.11. Compiling the client component.

linux$ gcc hello_client.c hello_clnt.c -o client

The server component (program) is compiled in a similar manner (Figure 9.12).

Example 9.12. Compiling the server component.

linux$ gcc hello_server.c hello_svc.c -o server

Initially, we test the program by running both the client and server programs on the same workstation. We begin by invoking the server by typing its name on the command line. The server process is not automatically placed in the background, and thus a trailing & is needed.[10] A check of the ps command will verify the server process is running (see Figure 9.13).

Example 9.13. Running the server program and checking for its presence with ps.

linux$ server &
[1] 21149
linux$ ps -ef | grep server
. . .
gray     21149 15854  0 08:09 pts/5    00:00:00 server
gray     21154 15854  0 08:10 pts/5    00:00:00 grep server

The ps command reports that the server process, in this case process ID 21149, is in memory. Its parent process ID is 15854 (in this case the login shell), and its associated controlling terminal device is listed as pts/5. The server process will remain in memory even after the user who initiated it has logged out. When generating and testing RPC programs, it is important that users remember to remove extraneous RPC-based server processes before they log out.

When the process is run locally, the client program is invoked by name and passed the name of the current workstation. When this is done, the output will be as shown in Figure 9.14. Notice that since our system has an existing program called client that resides in the /usr/sbin directory, the call to our client program is made with a relative reference (i.e., ./client).

Example 9.14. Running the client program on the same host as the server program.

linux$ ./client linux
client : Calling function.
server : Hello, world.
client : Mission accomplished.

While our client–server application still needs some polishing, we can test it in a setting whereby the server runs on one host and the client on another. Say we have the setting shown in Figure 9.15, where one host is called medusa and the other linux.

Running the client program on a remote host.

Figure 9.15. Running the client program on a remote host.

On the host linux the server program is run in the background. On the host medusa the client program is passed the name of the host running the server program. Interestingly, on the host medusa the messages Calling function. and Mission accomplished. are displayed, but the message Hello, world. is displayed on the host linux. This is not surprising, as each program writes to its standard output, which in turn is associated with a controlling terminal (in our example this is the same terminal that is associated with the user's login shell). However, it is just as likely that the server program will write to its standard output, but what it has written will not be seen. This happens when there is no controlling terminal device associated with the server process. Remember that the server process remains in memory until removed. It is not removed when the user logs out. However, when the user does log out, the operating system drops the controlling terminal device for the process (a call to ps will list the controlling terminal device for the process as ?). If, in a standard setting, there is no controlling terminal device associated with a process, anything the process sends to standard output goes into the bit bucket!

There are several ways of correcting this problem. First, the output from the server could be hardcoded to be displayed on the console. In this scenario, the server would, upon invocation, execute an fopen on the /dev/console device. The FILE pointer returned by the fopen call could then be used with the fprintf function to display the output on the console. Unfortunately, there is a potential problem with this solution: The user may not have access to the console device. If this is so, the fopen will fail. A second approach is to pass the console device of the client process to the server as the first parameter of the RPC. This is a somewhat better solution, but will still fail when the client and server processes are on different workstations with different output devices. A third approach is to have the server process return its message to the client and have the client display it locally.
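The third approach can be sketched as a revision of the protocol definition, so that the remote procedure hands the text back for the client to display locally. The program and version numbers below are illustrative, not those of the earlier hello.x:

```
/*
    A hypothetical revision of hello.x: print_hello now returns the
    message string, and the client prints whatever it receives.
 */
program DISPLAY_PRG {
   version DISPLAY_VER {
      string PRINT_HELLO( void ) = 1;
   } = 1;
} = 0x20000050;
```

After rerunning rpcgen, the server stub would marshal the string back through XDR, and the client would simply printf the returned value.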

We should also examine the two RPC stub files generated by rpcgen. The hello_clnt.c file is quite small (Figure 9.16). This file contains the actual call to the print_hello_1 function.

Example 9.16. The hello_clnt.c file.

File : hello_clnt.c
  |     /*
  |      * Please do not edit this file.
  |      * It was generated using rpcgen.
  |      */
  +
  |     #include <memory.h>            /* for memset */
  |     #include "hello.h"
  |
  |     /* Default timeout can be changed using clnt_control() */
 10     static struct timeval TIMEOUT = { 25, 0 };
  |
  |     int *
  |     print_hello_1(void *argp, CLIENT *clnt) {
  |             static int clnt_res;
  +             memset((char *)&clnt_res, 0, sizeof(clnt_res));
  |             if (clnt_call (clnt, print_hello,
  |                           (xdrproc_t) xdr_void, (caddr_t) argp,
  |                           (xdrproc_t) xdr_int,  (caddr_t) &clnt_res,
  |                           TIMEOUT) != RPC_SUCCESS) {
 20                             return (NULL);
  |                           }
  |             return (&clnt_res);
  |     }

As we are using rpcgen to reduce the complexity of the RPC, we will not formally present the clnt_call. However, in passing, we note that the clnt_call function (which actually does the RPC) is passed, as its first argument, the client handle that was generated from the previous call to clnt_create. The second argument for clnt_call is obtained from the hello.h include file and is actually the print_hello constant therein. The third and fifth arguments are references to the XDR data encoding/decoding routines. Sandwiched between these arguments is a reference, argp, to the initial argument that will be passed to the remote procedure by the server process. The sixth argument for clnt_call is a reference to the location where the return data will be stored. The seventh and final argument is the TIMEOUT value. While the cautionary comments indicate you should not edit this file, and in general you should not, the TIMEOUT value can be changed from the default of 25 seconds to some other reasonable user-imposed maximum.
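For instance, rather than editing hello_clnt.c, the client program itself can adjust the timeout at run time with the clnt_control library call. A fragment (assuming the client handle created in Program 9.3 and the CLSET_TIMEOUT request documented for Sun RPC):

```c
struct timeval tv;                       /* new maximum wait for replies */
tv.tv_sec  = 60;                         /* raise the limit to 60 seconds */
tv.tv_usec = 0;
clnt_control(client, CLSET_TIMEOUT, (char *) &tv);
```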

The code in the hello_svc.c file is much more complex and, in the interest of space, not presented here. Interested readers are encouraged to enter the protocol definition in hello.x and to generate and view the hello_svc.c file. At this juncture it is sufficient to note that the hello_svc.c file contains the code for the server process. Once invoked, the server process will remain in memory. When notified by a client process, it will execute the print_hello_1_svc function.

Debugging RPC Applications

Debugging RPC Applications

Because of their distributed nature, RPC applications can be very difficult to debug. One easy way to test and debug an RPC application with, say, gdb, is to link the client and server programs without their rpcgen stubs. To do this, comment out the RPC reference in the client program. If the -C option was passed to rpcgen, then you must adjust the name of the function call appropriately (i.e., add the _svc suffix). In addition, you may need to cast the function call argument with the client reference to the correct type (i.e., struct svc_req *). Incorporating these changes with preprocessor directives, our hello_client.c file now would be as shown in Figure 9.17.

Example 9.17. A “debug ready” version of hello_client.c.

File : hello_client_gdb.c
  |     /*
  |         The CLIENT program:  hello_client.c
  |         This will be the client code executed by the local client process.
  |      */
  +     #include <stdio.h>
  |     #include "hello.h"          /* Generated by rpcgen from hello.x  */
  |     int
  |     main(int argc, char *argv[]) {
   . . .                            /* SAME AS LINES 9-20 in hello_client.c */
  |     /*
  |         Generate the client handle to call the server
  |      */
  |     #ifndef DEBUG
  +        if ((client=clnt_create(server,      DISPLAY_PRG,
  |                                DISPLAY_VER, "tcp")) == (CLIENT *) NULL) {
  |         clnt_pcreateerror(server);
  |         exit(2);
  |       }
 30       printf("client : calling function.\n");
  |       return_value = print_hello_1((void *) &filler, client);
  |     #else
  |       printf("client : calling function.\n");
  |       return_value = print_hello_1_svc((void *) &filler,
  |                                        (struct svc_req *)client);
  +     #endif
  |       if (*return_value)
  |         printf("client : Mission accomplished\n");
  |       else
  |         printf("client : Unable to display message\n");
 40       return 0;
  |     }

We would compile this modified version with the command sequence shown in Figure 9.18. As none of the network libraries are referenced, the libnsl library does not need to be linked (for most versions of gcc, this is not a concern). The compiler is passed the -g flag (to generate the symbol table information for gdb) and -DDEBUG is specified to define the DEBUG constant the preprocessor will test.

Example 9.18. Debugging the client–server application with gdb.

linux$ gcc -DDEBUG -g hello_client_gdb.c hello_server.c        <-- 1

linux$ gdb -q a.out
(gdb) list 25,35
25         if ((client=clnt_create(server,      DISPLAY_PRG,
26                                 DISPLAY_VER, "tcp")) == (CLIENT *) NULL) {
27          clnt_pcreateerror(server);
28          exit(2);
29        }
30        printf("client : calling function.\n");
31        return_value = print_hello_1((void *) &filler, client);
32      #else
33        printf("client : calling function.\n");
34        return_value=print_hello_1_svc((void *) &filler,(struct svc_req *)client);
35      #endif

(gdb) break 34        <-- 2
Breakpoint 1 at 0x804853f: file hello_client_gdb.c, line 34.

(gdb) run kahuna
Starting program: /home/faculty/gray/revision/09/hello_files/a.out kahuna
client : calling function.

Breakpoint 1, main (argc=2, argv=0xbffffc34) at hello_client_gdb.c:34
34        return_value=print_hello_1_svc((void *)&filler,(struct svc_req *)client);

(gdb) step        <-- 3
print_hello_1_svc (filler=0xbffffbbc, req=0x80497ec) at hello_server.c:10
10        ok = printf("server : Hello, world.\n");

(gdb) list        <-- 4
5       #include <stdio.h>
6       #include "hello.h"        /* is generated by rpcgen from hello.x  */
7       int            *
8       print_hello_1_svc(void * filler, struct svc_req * req) {
9         static int  ok;
10        ok = printf("server : Hello, world.\n");
11        return (&ok);
12      }

(gdb) quit
The program is running.  Exit anyway? (y or n) y
  • (1)Compile with gcc. Define the DEBUG constant and generate the symbol table information.

  • (2)Set a break point at line 34 in the client program.

  • (3)Step into what was formerly the remote procedure.

  • (4)This is now the code for the server.

Using RPCGEN to Generate Templates and a MAKEFILE

The rpcgen command has additional functionality to assist the developer of RPC applications. If the -a flag (see Figure 9.7) is passed to rpcgen, it will generate, in addition to the client and server stub files and header file, a set of template files for the client and server and a makefile for the entire application. Unlike the -C flag, which will cause rpcgen to overwrite preexisting stub files, the -a flag will cause rpcgen to halt with a warning message if the template files (with the default names) are present in the current directory. Therefore, it is best to use the -a flag only when you are positive the protocol definition file is accurate; otherwise, you must manually remove or rename the previously generated template files.

For example, suppose we have a program called fact.c (Program 9.5) that requests an integer value and returns the factorial of that value if it is within the range of values that can be stored in a long integer; otherwise, it returns a value of 0.

Example 9.5. The original factorial program, fact.c.

File : fact.c
  |     /*
  |         A program to calculate Factorial numbers
  |      */
  |     #include <stdio.h>
  +     int
  |     main( ){
  |       long int        f_numb, calc_fact(int);
  |       int             number;
  |       printf("Factorial Calculator\n");
 10       printf("Enter a positive integer value ");
  |       scanf("%d", &number);
  |       if (number < 0)
  |         printf("Positive values only!\n");
  |       else if ((f_numb = calc_fact(number)) > 0)
  +         printf("%d! = %ld\n", number, f_numb);
  |       else
  |         printf("Sorry %d! is out of my range!\n", number);
  |       return 0;
  |     }
 20     /*
  |        Calculate the factorial number and return the result or return 0
  |        if value is out of range.
  |      */
  |     long int
  +     calc_fact(int n){
  |       long int        total = 1, last = 0;
  |       int             idx;
  |       for (idx = n; idx - 1; --idx) {
  |         total *= idx;
 30         if (total <= last)                 /* Have we gone out of range? */
  |           return (0);
  |         last = total;
  |       }
  |       return (total);
  +     }

We would like to turn the factorial program into a client–server application whereby the client could make a request for a factorial value from the remote factorial server. To accomplish this, we begin by writing the protocol definition file shown in Figure 9.19.

Example 9.19. The protocol definition file for the factorial program.

File : fact.x
  |     /*
  |         The protocol definition file for the factorial program.
  |         The programmer generates this file.
  |     */
  +     program FACTORIAL {
  |       version ONE {
  |          long int CALC_FAC( int ) = 1;
  |       } = 1;
  |     } = 0x20000049;

We then use rpcgen with the -a and -C flags to generate the header file, the client and server stub files, the client and server template files, and the application Makefile. The details of and output from this process are shown in Figure 9.20.

Example 9.20. Using rpcgen with the -a and -C flags.

linux$ ls
fact.x

linux$ rpcgen -a -C fact.x

linux$ ls -x
fact_client.c  fact_clnt.c  fact.h  fact_server.c  fact_svc.c  fact.x
Makefile.fact

As shown, passing rpcgen the protocol definition file with the -a and -C flags generates six files: the header file, fact.h, and the RPC stub files, fact_clnt.c and fact_svc.c, which are similar in content and nature to those in the previous example. The three new files created by rpcgen bear further investigation. The client template file is fact_client.c. Again, rpcgen has used the file name of the protocol definition file as the root for the file name and added the _client.c suffix. The contents of the fact_client.c file are shown in Figure 9.21.

Example 9.21. The fact_client.c template client file generated by rpcgen.

File : fact_client.c
  |     /*
  |        This is sample code generated by rpcgen.
  |        These are only templates and you can use them
  |        as a guideline for developing your own functions.
  +     */
  |     #include "fact.h"
  |     void
  |     factorial_1(char *host) {
  |       CLIENT *clnt;
 10       long  *result_1;
  |       int  calc_fac_1_arg;
  |
  |     #ifndef DEBUG
  |       clnt = clnt_create (host, FACTORIAL, ONE, "udp");
  +       if (clnt == NULL) {
  |         clnt_pcreateerror (host);
  |         exit (1);
  |       }
  |     #endif                                 /* DEBUG */
 20       result_1 = calc_fac_1(&calc_fac_1_arg, clnt);
  |       if (result_1 == (long *) NULL) {
  |         clnt_perror (clnt, "call failed");
  |       }
  |     #ifndef DEBUG
  +       clnt_destroy (clnt);
  |     #endif                                 /* DEBUG */
  |     }
  |     int
  |     main (int argc, char *argv[]) {
 30       char *host;
  |
  |       if (argc < 2) {
  |         printf ("usage: %s server_host\n", argv[0]);
  |         exit (1);
  +       }
  |       host = argv[1];
  |       factorial_1 (host);
  |       exit (0);
  |     }

In the template file rpcgen has created a function called factorial_1 (lines 7 through 27). The function name is derived from the program name given in the protocol definition file with a suffix of _1 (the version number). As shown, the factorial_1 function is passed the host name. This function is used to make the RPC clnt_create call and the remote calc_fac_1 function call. Notice that variables for the correct argument type and function return type have been placed at the top of the factorial_1 function. By default, the transport protocol for the clnt_create call is specified as udp (versus tcp, which was used in the previous example). The call to the remote calc_fac_1 function is followed by a check of its return value. If the return value is NULL, indicating a failure, the library function clnt_perror (Table 9.6) is called to display an error message.

Table 9.6. Summary of the clnt_perror Library Call.

Include File(s)

<rpc/rpc.h>

Manual Section

3N

Summary

void clnt_perror(CLIENT *clnt, char *s );

Return

Success

Failure

Sets errno

Print message to standard error indicating why the RPC call failed.

  

The clnt_perror library call is passed the client handle from the clnt_create call and an informational message string. The error message produced by clnt_perror is prefaced with the informational message, separated from it by a colon.

A call to the library function clnt_destroy is also generated (Table 9.7).

The clnt_destroy function is used to return the resources allocated by the clnt_create function to the system. As would be expected, once a client RPC handle has been destroyed, it is undefined and can no longer be referenced.

To facilitate testing, rpcgen has also placed a series of preprocessor directives in the template file. However, it seems to overlook the fact that the call to clnt_perror requires the network library and thus may also need to be commented out when debugging the application. As in the previous example, if the -C option for rpcgen has been specified and a call to the remote factorial function (calc_fac_1) is to be made in a debug setting, the function name should have the string _svc appended, and the clnt argument should be cast to the data type (struct svc_req *).

Table 9.7. Summary of the clnt_destroy Library Call.

Include File(s)

<rpc/rpc.h>

Manual Section

3N

Summary

void clnt_destroy( CLIENT *clnt );

Return

Success

Failure

Sets errno

   

We can now edit the fact_client.c program and add the appropriate code from the function main in our initial fact.c example. The modified fact_client.c program is shown in Figure 9.22. Note that the call to the calc_fact function has been replaced with a call to the factorial_1 function.

Example 9.22. The fact_client.c template file with modifications.

File : fact_client.c
  |     /*
  |        This is sample code generated by rpcgen.
  |        These are only templates and you can use them
  |        as a guideline for developing your own functions.
  +     */
  |     #include "fact.h"
  |     long int                               /* Returns a long int  */
  |     factorial_1(int  calc_fac_1_arg, char *host) {
  |       CLIENT *clnt;
 10       long  *result_1;
  |                                            /* int  calc_fac_1_arg;*/
  |
  |     #ifndef DEBUG
  |       clnt = clnt_create (host, FACTORIAL, ONE, "udp");
  +       if (clnt == NULL) {
  |         clnt_pcreateerror (host);
  |         exit (1);
  |       }
  |     #endif                                 /* DEBUG */
 20       result_1 = calc_fac_1(&calc_fac_1_arg, clnt);
  |       if (result_1 == (long *) NULL) {
  |         clnt_perror (clnt, "call failed");
  |       }
  |     #ifndef DEBUG
  +       clnt_destroy (clnt);
  |     #endif                                 /* DEBUG */
  |       return *result_1;                   /* return value to main */
  |     }
  |     int
 30     main (int argc, char *argv[]) {
  |       char *host;
  |       long int f_numb;                     /* Own declarations     */
  |       int      number;
  |       if (argc < 2) {
  +         printf ("usage: %s server_host\n", argv[0]);
  |         exit (1);
  |       }
  |       host = argv[1];
  |                                            /* factorial_1 (host);  */
 40       /*
  |          Replace canned call with code from previous main in program fact.c
  |       */
  |       printf("Factorial Calculator\n");
  |       printf("Enter a positive integer value ");
  +       scanf("%d", &number);
  |       if (number < 0)
  |         printf("Positive values only!\n");
  |       else if ((f_numb = factorial_1(number, host)) > 0)
  |         printf("%d! = %ld\n", number, f_numb);
 50       else
  |         printf("Sorry %d! is out of my range!\n", number);
  |       exit (0);
  |     }

In order, the modifications to the client program were as follows. First, the return type of the generated function (factorial_1) is changed from void to long int. Second, the factorial_1 argument list is adjusted to include the numeric value that is passed. The data type and other information for this argument were listed at the top of the function. Note that to prevent shadowing of the parameter, this previous declaration must be deleted or commented out (as done in line 11). Third, a statement returning the dereferenced result is added at the foot of the function. Fourth, in main the appropriate declarations are added (see lines 32 and 33). Fifth, and last, the bulk of the code from main in fact.c is copied into the main of this program. When this is done, the canned call to factorial_1 must be removed (or commented out) and, most importantly, the name of the function to be invoked must be changed from its original calc_fact to factorial_1.

Example 9.23. Server template file fact_server.c generated by rpcgen.

File : fact_server.c
  |     /*
  |        This is sample code generated by rpcgen.
  |        These are only templates and you can use them
  |        as a guideline for developing your own functions.
  +      */
  |
  |     #include "fact.h"
  |     long *
  |     calc_fac_1_svc(int *argp, struct svc_req *rqstp) {
 10       static long  result;
  |       /*
  |        * insert server code here
  |        */
  |
  +       return &result;
  |     }

The server template file generated by rpcgen is shown in Figure 9.23.

As with the client template file, we now can modify the server template to incorporate the code for the remote procedure. The modified fact_server.c file is shown in Figure 9.24.

Example 9.24. The fact_server.c template file with modifications.

File : fact_server.c
  |     /*
  |        This is sample code generated by rpcgen.
  |        These are only templates and you can use them
  |        as a guideline for developing your own functions.
  +      */
  |
  |     #include "fact.h"
  |     long *
  |     factorial_1_svc(int *argp, struct svc_req *rqstp) {
 10       static long  result;
  |       /*
  |        * insert server code here
  |        */
  |       long int        total = 1, last = 0;
  +       int             idx;
  |       for (idx = *argp; idx - 1; --idx) {
  |         total *= idx;
  |         if (total <= last) {               /* Have we gone out of range? */
  |           result = 0;
 20           return (&result);
  |         }
  |         last = total;
  |       }
  |       result = total;
  +       return &result;
  |     }

The changes for the server program are more straightforward than those for the client program. Essentially, the function code is pasted into the indicated location. The only coding adjustment occurs in line 17, where idx is initialized. As the argument is passed to this function by reference (versus by value), it must be dereferenced.

The Makefile generated by rpcgen is shown in Example 9.25.

Example 9.25. Makefile.fact, generated by rpcgen.

File : Makefile.fact
  |
  |     # This is a template Makefile generated by rpcgen
  |
  |     # Parameters
  +
  |     CLIENT = fact_client
  |     SERVER = fact_server
  |
  |     SOURCES_CLNT.c =
 10     SOURCES_CLNT.h =
  |     SOURCES_SVC.c =
  |     SOURCES_SVC.h =
  |     SOURCES.x = fact.x
  |
  +     TARGETS_SVC.c = fact_svc.c fact_server.c
  |     TARGETS_CLNT.c = fact_clnt.c fact_client.c
  |     TARGETS = fact.h   fact_clnt.c fact_svc.c fact_client.c fact_server.c
  |
  |     OBJECTS_CLNT = $(SOURCES_CLNT.c:%.c=%.o) $(TARGETS_CLNT.c:%.c=%.o)
 20     OBJECTS_SVC = $(SOURCES_SVC.c:%.c=%.o) $(TARGETS_SVC.c:%.c=%.o)
  |     # Compiler flags
  |
  |     CFLAGS += -g
  |     LDLIBS += -lnsl
  +     RPCGENFLAGS =
  |
  |     # Targets
  |
  |     all : $(CLIENT) $(SERVER)
 30
  |     $(TARGETS) : $(SOURCES.x)
  |             rpcgen $(RPCGENFLAGS) $(SOURCES.x)
  |
  |     $(OBJECTS_CLNT) : $(SOURCES_CLNT.c) $(SOURCES_CLNT.h) $(TARGETS_CLNT.c)
  +
  |     $(OBJECTS_SVC) : $(SOURCES_SVC.c) $(SOURCES_SVC.h) $(TARGETS_SVC.c)
  |
  |     $(CLIENT) : $(OBJECTS_CLNT)
  |             $(LINK.c) -o $(CLIENT) $(OBJECTS_CLNT) $(LDLIBS)
 40
  |     $(SERVER) : $(OBJECTS_SVC)
  |             $(LINK.c) -o $(SERVER) $(OBJECTS_SVC) $(LDLIBS)

While the makefile can be used pretty much as generated, you may want to modify some of the entries in the compiler flag section. For example, you may want to add the -C flag to RPCGENFLAGS, or indicate that the math library should be linked by adding -lm to LDLIBS. If a compiler other than the default compiler (gcc on most systems) is to be used, you would add the notation in this section (e.g., CC = cc for Sun's C compiler or CC = CC for Sun's C++ compiler). The make utility will assume the file it is to process is called makefile. Since rpcgen names its output Makefile followed by a period (.) and the root name of the protocol definition file (fact), the user is left with two remedies. First, rename the generated file to makefile by using the mv command, or second, use the -f flag for make. If the -f flag is used with make, then the name of the file for make to use should immediately follow the -f flag.

Example 9.26 presents the sequence of events on a local system when the make utility with the -f flag is used to generate the factorial application.

Example 9.26. Using the Makefile.fact file.

linux$ make -f Makefile.fact
cc -g    -c -o fact_clnt.o   fact_clnt.c
cc -g    -c -o fact_client.o fact_client.c
cc -g       -o fact_client   fact_clnt.o   fact_client.o -lnsl
cc -g    -c -o fact_svc.o    fact_svc.c
cc -g    -c -o fact_server.o fact_server.c
cc -g       -o fact_server   fact_svc.o    fact_server.o -lnsl

Example 9.27 shows a sequence for running the factorial client-server application.

In the previous example, the factorial server program is invoked on the workstation called linux. The ps command verifies the presence of the fact_server process. The factorial client program is invoked and passed the name of the host that is running the factorial server process. The client process requests the user to input an integer value. The user enters the value 11. The client process makes a remote call to the server process, passing it the value 11. The server process responds by calculating the factorial value and returning it to the client. The client process displays the returned result. The client process is invoked a second time and passed a value of 15. The value 15! is beyond the storage range for an integer on the server. Thus, the server returns the value 0, indicating it was unable to complete the calculation. The client displays the corresponding error message. Next, the user has logged onto another workstation on the same network (medusa) and changes to the directory where the executables for the factorial application reside. The ps command is used to check if the factorial server process is present on this workstation—it is not. The factorial client is invoked again and passed the name of the workstation running the server process (linux). The client program requests an integer value (entered as 12). This value is passed, via the RPC, to the server process on the workstation linux. The factorial value is calculated by the server process on linux and returned to the client process on medusa, which displays the results to the screen.

Example 9.27. Running the factorial client-server application.

linux$ fact_server &        <-- 1
[1] 24366

linux$ ps -ef | grep fact
gray     24366 24036  0 14:30 pts/2    00:00:00 fact_server
gray     24368 24036  0 14:30 pts/2    00:00:00 grep fact

linux$ fact_client linux        <-- 2
Factorial Calculator
Enter a positive integer value 11
11! = 39916800

linux$ fact_client linux
Factorial Calculator
Enter a positive integer value 15
Sorry 15! is out of my range!

medusa$ ps -ef | grep fact_server        <-- 3
gray     28332 28192  0 15:17 pts/1    00:00:00 grep fact_server

medusa$ fact_client linux        <-- 4
Factorial Calculator
Enter a positive integer value 12
12! = 479001600
  • (1)Put the server in the background on the host called linux.

  • (2)Run the client program and pass it the host name linux.

  • (3)On a different host (medusa) verify the server program is not running.

  • (4)Run the client program and pass it the host name linux.

Encoding and Decoding Arbitrary Data Types


For RPCs to pass data between systems with differing architectures, data is first converted to a standard XDR format. The conversion from a native representation to XDR format is called serialization. When the data is received in XDR format, it is converted back to the native representation of the recipient process. The conversion from XDR format to native format is called deserialization. To be transparent, the conversion process must take into account such things as native byte order,[11] integer size, representation of floating-point values, and representation of character strings. Some of these differences may be hardware-dependent, while others may be programming-language-dependent. Once the data is converted, it is assumed that the individual bytes of data (each consisting of eight bits) are in themselves portable from one platform to another.

Data conversion for standard simple data types (such as integers, floats, and characters) is implemented via a series of predefined XDR primitive type library routines,[12] which are sometimes called filters. These filters return a nonzero (TRUE) value if they successfully convert the data; otherwise, they return 0 (FALSE). Each primitive routine takes a pointer to the XDR handle and a pointer to the data item. The primitive routines are incorporated into more complex routines for complex data types, such as arrays. The specifics of how the data conversion is actually done, while interesting, are beyond the scope of a one-chapter overview of RPC. However, it is important to keep in mind that such routines are necessary and that when passing data with RPCs, the proper conversion routines must be selected. Fortunately, when using rpcgen, the references for the appropriate XDR conversion routines are automatically generated and placed in another C source file the application can reference. This file, containing the conversion routines for both the client and server, has the suffix _xdr.c appended to the root name of the protocol definition file.

To illustrate how data conversion is done, we will create an application that performs a remote directory tree listing. When the server for this application is passed a valid directory name, it will traverse the indicated directory and produce an indented listing of all of the directory's underlying subdirectories. For example, say we have the directory structure shown in Figure 9.28.

A hypothetical directory structure.

Figure 9.28. A hypothetical directory structure.

If we request the application to produce a directory tree of the directory /usr0, the output returned from the directory tree application would be similar to that shown in Example 9.29.

Example 9.29. The directory tree listing of /usr0.

/usr0:
   home
      joe
      bill
   prgm
      ex0
      ex1
      ex2

The directory traversed is listed with a trailing colon. Following this, each subdirectory displayed is indented to indicate its relationship to its parent directory and sibling subdirectories. The subdirectories home and prgm are at the same level and thus are indented the same number of spaces. The subdirectories joe and bill, which are beneath the home directory, are indented to the same level as are the subdirectories ex0, ex1, and ex2, which are beneath the prgm directory.

As written, the application will pass from the client to the server the name of the directory to be traversed on the server. The server will allocate an array of a fixed size[13] to store the directory tree listing and will return the contents of the array if it is successful. If the server fails, it returns a NULL value. The server fills the array with the directory tree information using the following high-level algorithm.

The passed directory reference is opened. While the directory reference is not NULL, each entry is checked to determine if it is accessible. Note that if the server process does not have root privileges, some entries may be skipped. If the entry is accessible and is a directory reference (versus a file reference) but not a dot entry (we want to skip the “.” and “..” entries for obvious reasons), the entry is stored with the proper amount of indenting in the allocated array. For display purposes, each stored entry has an appended newline (\n) to separate it from the following entry. Since directory structures are recursive in nature, after processing an accessible directory entry, the tree display routine calls itself again, passing the name of the new directory entry. Once the entire contents of a directory have been processed, the directory is closed. When all directories and subdirectories have been processed, the array, with the contents of the directory tree, is returned to the client process, which displays its contents. The partial contents of the array returned for the previous example are shown in Figure 9.30.

A partial listing of the directory tree array for /usr0.

Figure 9.30. A partial listing of the directory tree array for /usr0.

The protocol definition file tree.x for the tree program is shown in Example 9.31.

Example 9.31. The protocol definition file, tree.x.

File : tree.x
  |     /*
  |             Tree protocol definition file
  |     */
  |     const   MAXP  = 4096;   /* Upper limit on packet size is 4K.       */
  +     const   DELIM = "\n";   /* To separate each directory entry.       */
  |     const   INDENT= 5;      /* # of spaces to indent for one level.    */
  |     const   DIR_1 = 128;    /* Maximum length of any one directory
                                   entry.                                  */
  |
  |     typedef char line[MAXP]; /* A large storage location for all the
                                   entries                                 */
 10     typedef line *line_ptr; /* A reference to the storage location.    */

  |                             /* If no errors return reference else
                                   return void                             */
  |     union dir_result
  |       switch( int errno ) {
  |       case 0:
  +           line *line_ptr;
  |       default:
  |           void;
  |     };
  |     /*
 20      *   The do_dir procedure will take a reference to a directory and return
  |      *   a reference to an array containing the directory tree.
  |      */
  |     program TREE {
  |       version one{
  +          dir_result do_dir( string ) = 1;
  |       } = 1;
  |     } = 0x2000001;

In the protocol definition file there is a series of constants. These constants will be mapped into #define statements by the rpcgen compiler. Following the constant section are two type declarations. The first, typedef char line[MAXP], declares a new type called line that is an array of MAXP characters. To translate this type, rpcgen creates a routine named xdr_line, which it places in the tree_xdr.c file. The contents of this routine are shown in Example 9.32.

Example 9.32. The xdr_line XDR conversion routine created by rpcgen.

File : tree_xdr.c
  |     /*
  |      * Please do not edit this file.
  |      * It was generated using rpcgen.
  |      */
  +
  |     #include "tree.h"
  |
  |     bool_t
  |     xdr_line (XDR *xdrs, line objp)
 10     {
  |             register int32_t *buf;
  |
  |              if (!xdr_vector (xdrs, (char *)objp, MAXP,
  |                     sizeof (char), (xdrproc_t) xdr_char))
  +                      return FALSE;
  |             return TRUE;
  |     }
  . . .

The generated xdr_line routine calls the predefined xdr_vector routine, which in turn invokes the xdr_char primitive. It is the xdr_char primitive that is found in both the client and server stub files that does the actual translation. A similar set of code is generated for the line pointer (line_ptr) declaration and the discriminated union that declares the type to be returned by the user-defined remote do_dir procedure. If we examine the tree.h file produced by rpcgen from the tree.x file, we find the discriminated union is mapped to a structure that contains a union (as shown in Example 9.33). The single argument for the do_dir procedure is a string (a special XDR data type), which is mapped to a pointer to a pointer to a character. The argument to do_dir will be the directory to examine.

Example 9.33. Structure, found in tree.h, generated by rpcgen from the discriminated union in tree.x.

File : tree.h
  |     /*
  |      * Please do not edit this file.
  |      * It was generated using rpcgen.
  |      */
  +
  |     #ifndef _TREE_H_RPCGEN
  |     #define _TREE_H_RPCGEN
  |
  |     #include <rpc/rpc.h>
  ...
  |     typedef line *line_ptr;
  |
  +     struct dir_result {
  |             int errno;
  |             union {
  |                     line *line_ptr;
  |             } dir_result_u;
 30     };
  |     typedef struct dir_result dir_result;
  ...

The code for the client portion (tree_client.c) of the tree program is shown in Example 9.6, and the server portion (tree_server.c) is shown in Example 9.7.

Example 9.6. The directory tree client program tree_client.c.

File : tree_client.c
  |     /*
  |
  |     #####  #####   ######  ######   ####   #       #  ######  #    #  #####
  |       #    #    #  #       #       #    #  #       #  #       ##   #    #
  +       #    #    #  #####   #####   #       #       #  #####   # #  #    #
  |       #    #####   #       #       #       #       #  #       #  # #    #
  |       #    #   #   #       #       #    #  #       #  #       #   ##    #
  |       #    #    #  ######  ######   ####   ######  #  ######  #    #    #
  |      */
 10     #include "local.h"
  |     #include "tree.h"
  |
  |     void
  |     tree_1(char *host, char *the_dir ) {
  +       CLIENT         *client;
  |       dir_result     *result;
  |
  |     #ifndef DEBUG
  |       client = clnt_create(host, TREE, one, "tcp");
 20       if (client == (CLIENT *) NULL) {
  |         clnt_pcreateerror(host);
  |         exit(2);
  |       }
  |       result = do_dir_1(&the_dir, client);
  +     #else
  |       result = do_dir_1_svc(&the_dir, (struct svc_req *) client);
  |     #endif                        /* DEBUG */
  |       if (result == (dir_result *) NULL) {
  |     #ifndef DEBUG
 30         clnt_perror(client, "call failed");
  |     #else
  |         perror("Call failed");
  |     #endif                        /* DEBUG */
  |         exit(3);
  +       } else                      /* display the whole array       */
  |         printf("%s:\n\n%s\n", the_dir, result->dir_result_u.line_ptr);
  |     #ifndef DEBUG
  |       clnt_destroy(client);
  |     #endif                        /* DEBUG */
 40     }
  |     int
  |     main(int argc, char *argv[]) {
  |       char        *host;
  |       static char directory[DIR_1]; /* Name of the directory        */
  +       if (argc < 2) {
  |         fprintf(stderr, "Usage %s server [directory]\n", argv[0]);
  |         exit(1);
  |       }
  |       host = argv[1];             /* Assign the server            */
 50       if (argc > 2)
  |         strcpy(directory, argv[2]);
  |       else
  |         strcpy(directory , ".");
  |       tree_1(host, directory);    /* Give it a shot!              */
  +       return 0;
  |     }

The bulk of the tree_client.c program contains code that is either similar in nature to previous RPC examples or is self-documenting. The one statement that may bear further explanation is the printf statement that displays the directory tree information to the screen. Remember that the remote procedure returns a pointer to a string. This string is already in display format in that each directory entry is separate from the next with a newline. The reference to the string is written as result->dir_result_u.line_ptr. The proper syntax for this reference is obtained by examining the tree.h file produced by rpcgen.

Example 9.7. The directory tree server program tree_server.c.

File : tree_server.c
  |  /*
  |  #####  #####   ######  ######   ####   ######  #####   #    #  ######  #####
  |    #    #    #  #       #       #       #       #    #  #    #  #       #    #
  |    #    #    #  #####   #####    ####   #####   #    #  #    #  #####   #    #
  +    #    #####   #       #            #  #       #####   #    #  #       #####
  |    #    #   #   #       #       #    #  #       #   #    #  #   #       #   #
  |    #    #    #  ######  ######   ####   ######  #    #    ##    ######  #    #
  |  */
  |  #include "local.h"
 10  #include "tree.h"
  |
  |  static int cur = 0,                          /* Index into output array   */
  |         been_allocated = 0,                   /* Has array been allocated? */
  |         depth = 0;                            /* Indenting level           */
  +
  |  dir_result     *
  |  do_dir_1_svc( char **f, struct svc_req * rqstp) {
  |    static dir_result result;                  /* Either array or void      */
  |    struct stat       statbuff;                /* For status check of entry */
 20    DIR               *dp;                     /* Directory entry           */
  |    struct dirent     *dentry;                 /* Pointer to current entry  */
  |    char              *current;                /* Position in output array  */
  |    int               length;                  /* Length of current entry   */
  |    static char       buffer[DIR_1];           /* Temp storage location     */
  +
  |    if (!been_allocated) {                     /* If not done then allocate */
  |      if ((result.dir_result_u.line_ptr=(line *)malloc(sizeof(line))) == NULL)
  |        return (&result);
  |      else
 30        been_allocated = 1;                    /* Record allocation         */
  |    } else if ( depth == 0 ) {                 /* Clear 'old' contents.     */
  |      memset(result.dir_result_u.line_ptr, 0, sizeof(line));
  |      cur = 0;                                 /* Reset the array index     */
  |    }
  +    if ((dp = opendir(*f)) != NULL) {          /* If successfully opened    */
  |      chdir(*f);                               /* Change to the directory   */
  |      dentry = readdir(dp);                    /* Read first entry          */
  |      while (dentry != NULL) {
  |        if (stat(dentry->d_name, &statbuff) != -1)        /* If accessible  */
 40          if ((statbuff.st_mode & S_IFMT) == S_IFDIR &&   /* & a directory  */
  |              dentry->d_name[0] != '.') {                 /* & not . or ..  */
  |            depth += INDENT;                              /* adjust indent  */
  |             /*
  |                Store the entry in buffer - then copy buffer into larger array.
  +             */
  |           sprintf(buffer, "%*s %-10s\n", depth, " ", dentry->d_name);
  |           length = strlen(buffer);
  |           memcpy((char *)result.dir_result_u.line_ptr + cur, buffer, length);
  |           cur += length;                    /* update ptr to ref next loc */
 50           current = dentry->d_name;         /* the new directory          */
  |           (dir_result *)do_dir_1_svc(&current, rqstp);  /* call self      */
  |           chdir("..");                      /* back to previous level     */
  |           depth -= INDENT;                  /* adjust the indent level    */
  |         }
  +         dentry = readdir(dp);               /* Read the next entry        */
  |       }
  |       closedir(dp);                         /* Done with this one         */
  |     }
  |     return (&result);                       /* Pass back the result       */
 60   }

In the tree_server.c program, there are several static integers that are used either as counters or flags. The cur identifier references the current offset into the output array where the next directory entry should be stored. Initially, the offset is set to 0. The been_allocated identifier acts as a flag to indicate whether or not an output buffer has been allocated. Initially, this flag is set to 0 (FALSE). The last static identifier, depth, is used to track the current indent level. It is also set to 0 at the start.

The do_dir_1_svc procedure is passed a reference to a string (actually a character array) and a reference to the service request structure (struct svc_req). Within the procedure, a series of local identifiers are allocated to manipulate and access directory information. Following this is an if statement that is used to test the been_allocated flag. If an output buffer has not been allocated, a call to malloc generates it. The allocated buffer is cast appropriately and is referenced by the line_ptr member of the dir_result_u structure. Once the buffer has been allocated, the been_allocated flag is set to 1 (TRUE). If the output buffer has already been allocated and this is the first call to this procedure (i.e., depth is at 0; remember, this is a recursive procedure), a call to memset is used to clear the previous output contents by filling the buffer with NULL values. When the contents of the output buffer are cleared, the cur index counter is reset to 0.

The procedure then attempts to open the referenced directory. If it is successful, a call to chdir is issued to change the directory (this eliminates the need to construct and maintain a fully qualified path when checking the current directory). Next, the first entry for the directory is obtained with the readdir function. A while loop is used to cycle through the directory entries. Those entries for which the process has access permission are tested to determine if they reference a directory. If they do, and the directory does not start with a dot (.), the depth counter is incremented. The formatted directory entry is temporarily constructed in a buffer using the sprintf string function. The format descriptors direct sprintf to use the depth value as a dynamic indicator of the number of blanks it should insert prior to the directory entry. Each entry has a newline appended to it. The formatted entry is then copied (using memcpy) to the proper location in the output buffer using the value of cur as an offset. The directory name is then passed back to the do_dir_1_svc procedure via a call to itself. Upon return from parsing a subdirectory, the procedure returns up one level via a call to chdir and decrements the depth counter accordingly. Once the entire directory is processed, the directory is closed. When the procedure finishes, it returns the reference to the output buffer.

An output sequence for the directory tree client-server application is shown in Example 9.34. In this example, the directory tree server, tree_server, is run on the host called kahuna. The user, on host medusa, runs the tree_client program, passing the host name kahuna and the directory /usr/bin. The output, shown on the host medusa, is the directory tree found on kahuna (where the tree_server program is running in the background).

Example 9.34. A sample run of the directory tree application.

medusa$ tree_client kahuna /usr/bin
/usr/bin:
      X11
      man
           man1
           man4
           man5
           man7
           man6
           man3

Using Broadcasting to Search for an RPC Service

It is possible for a user to send a message to all rpcbind daemons on a local network requesting information on a specific service. The request is generated using the clnt_broadcast network call. The broadcast requests are sent to all locally connected broadcast networks using the connectionless UDP transport. When sent, multiple responses may be obtained from the same server and from multiple servers. As each response is obtained, the clnt_broadcast call automatically invokes a predefined routine. Table 9.8 provides the syntax details of the clnt_broadcast call.

Table 9.8. Summary of the clnt_broadcast Library Call.

Include File(s):   <rpc/rpc.h>                          Manual Section: 3N

Summary:           enum clnt_stat clnt_broadcast(
                        u_long prognum, u_long versnum, u_long procnum,
                        xdrproc_t inproc,  char *in,
                        xdrproc_t outproc, char *out,
                        resultproc_t eachresult      );

Return:            Success: an enumerated type value (RPC_SUCCESS)
                            indicating the success of the broadcast call.
                   Failure: use clnt_perrno for the error message.
                   Sets errno: Yes

The clnt_broadcast call is similar in nature to the callrpc function (another call used to invoke a remote procedure). The first three arguments for clnt_broadcast are the program, version, and procedure numbers of the service. The parameters inproc and in reference the encoding procedure and the address of its argument(s), while outproc and out reference the decoding procedure and the address of where to place the decoded output if it is successful. Every time the clnt_broadcast call receives a response, it calls the function referenced by the eachresult argument. The eachresult function has two arguments. The first is a char * that references the same value as the out argument used in the clnt_broadcast call. The second argument is a reference to a structure, struct sockaddr_in *, that holds the address information of the host that responded to the broadcast request. Keep in mind that the system supplies these values when the function is invoked. As long as the eachresult referenced function returns 0 (FALSE), the clnt_broadcast call continues to wait for additional replies. The clnt_broadcast call will eventually time out (the user has no control over the amount of time).

Example 9.8 demonstrates the use of the clnt_broadcast call.

Example 9.8. Program broad.c, sending a broadcast request.

File : broad.c
  |     #include <stdio.h>
  |     #include <rpc/rpc.h>
  |     #include <rpc/pmap_clnt.h>             // For resultproc_t cast
  |
  +     u_long   program_number, version;      // Note: These are global
  |     static bool_t
  |     who_responded(char *out, struct sockaddr_in *addr) {
  |       int my_port_T, my_port_U;
  |       my_port_T = pmap_getport(addr, program_number, version, IPPROTO_TCP);
 10       my_port_U = pmap_getport(addr, program_number, version, IPPROTO_UDP);
  |       if ( my_port_T )
  |         printf("host: %s \t TCP port: %d\n", inet_ntoa(addr->sin_addr),
  |                 my_port_T);
  |       if ( my_port_U )
  +         printf("host: %s \t UDP port: %d\n", inet_ntoa(addr->sin_addr),
  |                 my_port_U);
  |       return 0;
  |     }
  |     int
 20     main(int argc, char *argv[]) {
  |       enum clnt_stat  rpc_stat;
  |       struct rpcent  *rpc_entry;
  |       if (argc < 2) {
  |         fprintf(stderr, "usage: %s RPC_service_[name | #] version\n", *argv);
  +         return 1;
  |       }
  |       ++argv;                              // Step past your own prog name
  |       if (isdigit(**argv))                 // Check to see if # was passed
  |         program_number = atoi(*argv);      // If # passed use it otherwise
 30       else {                               // obtain RPC entry information
  |         if ((rpc_entry = getrpcbyname(*argv)) == NULL) {
  |           fprintf(stderr, "Unknown service: %s\n", *argv);
  |           return 2;
  |         }                                  // Get the program number
  +         program_number = rpc_entry->r_number;
  |       }
  |       ++argv;                              // Move to version #
  |       version = atoi(*argv);
  |       rpc_stat = clnt_broadcast(program_number, version, NULLPROC,
 40                                (xdrproc_t)xdr_void, (char *) NULL,
  |                                (xdrproc_t)xdr_void, (char *) NULL,
  |                                (resultproc_t) who_responded);
  |       if (rpc_stat != RPC_SUCCESS &&
  |           rpc_stat != RPC_TIMEDOUT) {      // If error is not a time out
  +         fprintf(stderr, "Broadcast failure : %s\n", clnt_sperrno(rpc_stat));
  |         return 3;
  |       }
  |       return 0;
  |     }

The program checks the command line for the number of arguments. It expects to be passed the name (or number) of the service to check and its version number. The first character of the first argument is checked. If it is a digit, it is assumed that the number for the service was passed and the atoi function is used to convert the string representation of the number into an integer value. If the name of the service was passed, the getrpcbyname network call is used (line 31) to obtain details about the specified service. Table 9.9 summarizes the getrpcbyname network call.

Table 9.9. Summary of the getrpcbyname Network Call.

Include File(s):   <rpc/rpc.h>                          Manual Section: 3N

Summary:           struct rpcent *getrpcbyname(char *name);

Return:            Success: a reference to the rpcent structure for
                            the service.
                   Failure: NULL
                   Sets errno:

The getrpcbyname call has one parameter, a reference to a character array containing the service name. If successful, the call returns a pointer to the rpcent structure for the service (as found in the RPC program number database stored in the file /etc/rpc). The rpcent structure is defined as

struct rpcent {
               char *r_name;       /* name of this rpc service */
               char **r_aliases;   /* zero-terminated list of
                                      alternate names */
               long r_number;      /* rpc program number */
          };

In line 38, the program converts the second command-line argument into a version number. The clnt_broadcast call is then used to garner responses. Each time a server responds to a broadcast request, the user-defined function who_responded is automatically invoked.

The who_responded function contains two other function calls, pmap_getport and inet_ntoa. The pmap_getport library function is used to obtain the port associated with the service. Table 9.10 provides the syntax specifics for the pmap_getport library function.

Table 9.10. Summary of the pmap_getport Library Function.

Include File(s)   <rpc/rpc.h>
Manual Section    3N
Summary           u_short pmap_getport(struct sockaddr_in *addr,
                                       u_long prognum, u_long versnum,
                                       u_long protocol);
Return            Success: the associated port number
                  Failure: 0
                  Sets errno: no; it sets rpc_createerr, query with
                              clnt_pcreateerror()

The first argument for this call is a reference to an address structure. This structure is as follows:[14]

struct sockaddr_in {
    sa_family_t    sin_family;   // address family
    in_port_t      sin_port;     // port number, in network byte order
    struct in_addr sin_addr;     // Internet address
    unsigned char  sin_zero[8];  // unused padding
};

The prognum and versnum arguments are the program and version number of the service. The last argument, protocol, should be set to either IPPROTO_TCP for TCP or IPPROTO_UDP for UDP. If the call is successful, it returns the port number; otherwise, it sets the variable rpc_createerr to indicate the error. If an error occurs, the library function clnt_pcreateerror can be used to display the associated error message.

At this point some readers may be asking, why use pmap_getport at all? Couldn't we just call htons(addr->sin_port) in the who_responded function to get the port number? The answer is that we could, but only if we wanted the UDP-associated port for the service; pmap_getport lets us query the TCP port as well.

The second function used in who_responded is the network function inet_ntoa. This function takes a four-byte network address in network byte order and converts it to its dotted-decimal counterpart. A sample run of the program requesting information about the status service, version 1, is shown in Example 9.35.

Example 9.35. Output of the broad.c program showing servers providing status service.

medusa$ broad status 1
host: 137.49.6.1         TCP port: 32768
host: 137.49.6.1         UDP port: 32768        <-- 1
host: 137.49.52.2        TCP port: 32782
host: 137.49.52.2        UDP port: 32791
host: 137.49.9.27        TCP port: 751
host: 137.49.9.27        UDP port: 749        <-- 2
host: 137.49.52.152      TCP port: 984
host: 137.49.52.152      UDP port: 982
host: 137.49.240.157     TCP port: 1024
host: 137.49.240.157     UDP port: 1025
host: 137.49.6.1         TCP port: 32768
host: 137.49.6.1         UDP port: 32768        <-- 3
host: 137.49.52.152      TCP port: 984
host: 137.49.52.152      UDP port: 982
host: 137.49.240.157     TCP port: 1024
host: 137.49.240.157     UDP port: 1025
host: 137.49.52.2        TCP port: 32782
host: 137.49.52.2        UDP port: 32791
. . .
  • (1) Same host, same ports.

  • (2) Different host, different ports.

  • (3) Hosts continue to respond.

Notice that before the broadcast call timed out, some servers responded more than once. Also note that the service can be associated with different ports on different hosts. This output should be somewhat similar to the output produced by the rpcinfo command when called as

medusa$ rpcinfo -b status 1

Summary

Programming with RPC allows the programmer to write distributed applications whereby a process residing on one workstation can request that another, “remote,” workstation execute a specified procedure. Because of their complexity, most RPC-based programs make use of a protocol compiler, such as Sun Microsystems' rpcgen. A protocol compiler provides the basic programming framework for an RPC-based application. In RPC applications the client and server processes do not need to know the details of the underlying network protocols. Data passed between the processes is converted to and from an external data representation (XDR) format by predefined filters. Beneath the covers, RPC-based programs make use of the socket interface to carry out their communications. While not discussed in this chapter, RPC also supports authentication techniques to facilitate secure client-server communications.

Key Terms and Concepts

<netdb.h> header

<netinet/in.h> header

<rpc/pmap_clnt.h> header

<rpc/rpc.h> header

client process

client stub

CLIENT typedef

clnt_broadcast library call

clnt_create library call

clnt_destroy library call

clnt_pcreateerror library call

clnt_perror library call

deserialization

gdb debugging of RPC programs

gethostbyaddr network function

gethostbyname network call

getrpcbyname network call

hostname command

htons network call

inet_ntoa network call

LDLIBS

make utility

mangled function names

memset function

pmap_getport library call

protocol definition file

public–private key authentication

readdir library function

rexec library function

rexecd remote execution server

RPC (remote procedure call)

RPC filters

RPC makefile

RPC program number

RPC template file

RPC version number

RPC_NOBROADCAST

RPC_UNKNOWNADDR

RPC_UNKNOWNHOST

RPC_UNKNOWNPROTO

rpcent structure

rpcgen command

rpcgen utility

RPCGENFLAGS

rpcinfo command

rsh (remote shell command)

serialization

server process

server stub

ssh (Secure Shell command)

ssh-keygen command

TCP (Transmission Control Protocol)

transport protocol

ttyname library call

UDP (User Datagram Protocol)

XDR (External Data Representation)

xdr_char function

xdr_line function

xdr_vector function



[1] The word remote in RPC is somewhat misleading. RPCs can also be used by processes residing on the same host (indeed, this approach is often used when debugging routines that contain RPCs).

[2] The need for system security today is much different than it was, say, even 5 years ago. An in-depth discussion of security is beyond the scope of this text.

[3] On i80x86 platforms the host byte order is LSB (least significant byte first), while on the Internet the byte order is MSB (most significant byte first).

[4] Up to this point, our examples have been primarily C++-based. Due to the inability of the compiler to handle full-blown C++ code in conjunction with rpcgen-generated output, we will stick to C program examples in this section. Think of this as an opportunity to brush up on your C programming skills!

[5] Many programmers are not aware that printf returns a value. However, a pass of any C program with a printf function through the lint utility will normally return a message indicating that the value returned by printf is not being used.

[6] Most often, the identifiers placed in the protocol definition file are in capitals. Note that this is a convention, not a requirement.

[7] If the procedure name is placed in capitals, the RPC compiler, rpcgen, will automatically convert it to lowercase during compilation.

[8] This can be a troublesome default if, per chance, you have also generated your own local header file with the same name and extension.

[9] While this comment is somewhat tongue-in-cheek, it is not all that farfetched (check it out)!

[10] This is just the opposite of what happens in a Sun Solaris environment where no trailing & is needed, as the process is automatically placed in the background.

[11] For example, with a 4-byte (32-bit) number, the most significant byte (MSB) is always leftmost and the least significant byte (LSB) rightmost. If the sequence of bytes composing the number is ordered from left to right, as in the SPARC, the order is called big endian. If the byte sequence is numbered from right to left, as in the i80x86 processor line, the order is called little endian.

[12] For more details, see the manual pages on xdr.

[13] I know, I know, this is not the best way to do this—a dynamic allocation would be more appropriate here, as we do not know in advance how much storage room we will actually need. What is presented is a pedagogical example. The modification of the text example to use dynamic memory allocation is addressed in the exercise section.

[14] In the gdb debugger the command ptype TYPE can be used to display the definition of TYPE (assuming, of course, the type is referenced in the current code).
