Chapter 11. TclHttpd in Client-Server Applications

By now we have learned a lot about network communication, and HTTP in particular, along with setting up TclHttpd to serve files and respond to requests from Tcl. We have also learned how Tcl uses VFS to build standalone applications and to package multiple files into a single deliverable file that we can use later on. Now, we'll put this knowledge into practice and create a sample client-server application that uses these technologies.

Throughout this chapter, we'll create a simple application that consists of a client and a server. The application will work over HTTP and will allow multiple clients to communicate with a single server. It will also offer a comm interface on the server to invoke commands on remote clients. We'll start off by creating a minimal client and server, which will be a good starting point for understanding both the model and its upsides and downsides. It will have a mechanism for running a Tcl command on a client and sending the results back to the server.

After this is done, an automatic update feature will be added so that whenever the client application changes, it will automatically get updated. This is needed when we want our application to run on multiple systems, where a manual upgrade of each one is not possible.

We'll also add a plugin capability to our application so that we can create multiple Tcl packages. These can be deployed to specific clients, so that the client itself stays minimal and packages extend its functionality.

Of course, this is just a sample application; later on we'll reveal some of the shortcuts taken in its development, and what would need to be extended for production-quality applications. We'll also mention better security models that could be used.

Creating HTTP-based applications

Client-server applications may use various ways to communicate. Some of them are:

  • One way is to keep a permanent connection between the server and all clients. This approach requires creating a protocol for sending messages both from the server to the client and the other way around. While this approach is used by many applications, it is quite difficult to scale and can cause issues for long-running applications—such as detecting connections broken on one side without proper end-of-file events being received by the other side.
  • Another possibility is to use HTTP instead of a real-time connection. HTTP-based applications typically use a polling mechanism—that is, they periodically query the server to determine whether there is anything the client should do. Using HTTP and timers simplifies the implementation a lot, and fits very nicely into Tcl's event-driven programming.

Using HTTP means that our application can also use any HTTP proxy required by the location the client is in, and can easily be configured for the majority of network setups. HTTP on its own is stateless and each request is independent of the others. This means that the application is responsible for keeping track of what needs to be sent—usually this state is kept on both the server and the client side.
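To make the polling model concrete before we build the real client, the following is a minimal, self-contained sketch; the URL and the one-minute interval are illustrative assumptions, and the actual client implemented later in this chapter follows the same pattern:

package require http

# Poll the server periodically; we re-schedule the next poll first,
# so that a failed request does not stop the polling loop
proc pollServer {url} {
    after 60000 [list pollServer $url]
    if {[catch {
        set token [http::geturl $url -timeout 30000]
        puts "Server said: [http::data $token]"
        http::cleanup $token
    } err]} {
        puts "Poll failed: $err"
    }
}

pollServer "http://127.0.0.1:8981/client/protocol"
vwait forever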

All communication in our application will use Tclhttpd on the server side and the http package on the client. Both packages have already been introduced in previous chapters.

Our application will initially have very little functionality—the client will periodically poll for new jobs to be performed, and the server will provide a list of jobs to perform, if any:

[Image: the client periodically polling the server for jobs to perform]

These operations will be done periodically. After this, whenever a job has been performed by the agent, it sends the results to the server as follows:

[Image: the agent sending job results back to the server]

Preparing source code structure

Before implementing our applications, we need to set up our source code structure and build system so that we can easily create multiple applications for multiple platforms—for this example only win32 and linux-x86, but there could be more. The approach of creating a separate directory for the client or server on each of the platforms is not manageable in the long term; we would need to copy each change into multiple directories, which is tedious and error-prone.

Note

Even though this section provides a step-by-step introduction to the necessary preparations, the code samples available along with this book provide a ready-to-use source code structure. Novice readers are advised to start by having a look at the sources provided for this chapter. This chapter also assumes that the reader is familiar with the content of the previous chapters—especially those related to Starkit technology, building standalone binaries, using databases, the HTTP client, and TclHttpd.

Later in this section, we will introduce a build script to automate common tasks.

We want to create each of the binaries from a set of directories:

  • Common libraries and packages for all platforms (pure Tcl)
  • Common libraries and packages for a specific platform
  • Source code for a specific binary

First let's create the following directories: lib-common, lib-win32, and lib-linux-x86 for common packages. Let's also create src-client and src-server for the sources of the client and server applications.
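The whole layout can be created with a few lines of Tcl, for example:

# create the shared library and per-application source directories
foreach dir {lib-common lib-win32 lib-linux-x86 src-client src-server} {
    file mkdir $dir
}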

The following is a complete file and directory tree of the project along with all scripts we'll create:

[Image: the complete file and directory tree of the project]

The image also contains the results of the next steps, such as the build scripts, the server and client code, and all the libraries.

The next thing we need is binaries for each platform. We'll create both Tk and non-Tk versions of each application. Even though we won't use Tk itself on Microsoft Windows, background processes should use the Tk version and withdraw the main window—otherwise they will show a console window to the user.
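A minimal sketch of this technique is to load Tk when it is available and hide the root window, so that a background process shows neither a console nor an empty window:

# If Tk is available, withdraw the root window so that a background
# process does not pop up an empty window on start
if {![catch {package require Tk}]} {
    wm withdraw .
}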

We can download all versions of Tclkit from:

The authors recommend using the latest 8.5 versions of tclkit (with Tk) and tclkitsh (without Tk), naming them tclkit-<platform> and tclkitsh-<platform> accordingly. For Microsoft Windows, the binaries also need the .exe extension.

We also need to download and/or copy several packages that will be used by our csa application:

These packages are needed in order to initialize the Tclhttpd web server and database storage. They have also been introduced in previous chapters.

Now we can finally create the script to build the binaries. First we need to iterate over each platform we want to build the binaries for, and over the binaries we want to build:

foreach {platform suffix} {
    win32     ".exe"
    linux-x86 ""
} {
    foreach {binname appname dirname} {
        tclkitsh client-cli client
        tclkit   client-ui  client
        tclkitsh server-cli server
        tclkit   server-ui  server
    } {

Each platform also has a suffix that we need to add to the filename, which is needed for the Microsoft Windows platform. The list of binaries consists of three items—the type of Tclkit binary to use (tclkit or tclkitsh), the destination name of the binary, and the source directory to use.

We then create the filename we want to work on, which is placed in the binaries subdirectory:

set filename \
    [file join binaries $appname-$platform$suffix]

For example, when building a non-UI version of the client for Microsoft Windows, the appname would be client-cli, the platform variable would be win32, and the suffix would be .exe. The name of the target binary would be client-cli-win32.exe.

Then we copy the source binary as that particular file:

file copy -force $binname-$platform$suffix $filename

For the preceding client-cli-win32.exe example, binname would be set to tclkitsh and the source binary would be tclkitsh-win32.exe.

Then we mount it, copy appropriate directories, and unmount it:

vfs::mk4::Mount $filename $filename
docopy src-$dirname $filename
docopy lib-common [file join $filename lib]
docopy lib-$platform [file join $filename lib]
vfs::unmount $filename

In order to ease the automatic update feature, we now calculate the MD5 checksum of the created binary and embed it in that binary. We create a script called fileinfo.tcl in the VFS that sets the csa::applicationMD5 variable to the value of the checksum. We'll also write that checksum to a separate file:

set md5 [md5::md5 -hex -file $filename]
vfs::mk4::Mount $filename $filename
set fh [open [file join $filename fileinfo.tcl] w]
puts $fh [list set csa::applicationMD5 $md5]
close $fh
vfs::unmount $filename
set fh [open $filename.md5 w]
puts -nonewline $fh $md5
close $fh

While the automatic update feature itself is implemented later in this chapter, we introduce the MD5 calculation now. The MD5 checksum is stored in the binary as well as in a separate file, so that the csa client can easily check whether an update is available. It queries the server for the MD5 checksum of the latest binary for its platform and compares it with the MD5 checksum of the local client application. If they are the same, the client is running the latest version; otherwise, it can download the updated version and replace itself.

Even though the actual binary will not have the same MD5 checksum as the one we calculated (mounting the VFS and adding fileinfo.tcl modifies the file), the checksum calculated before can still be used to compare the binary the client is running with the binary the server has, which can be read from the second file created in the last step.
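As a preview of this check, a client-side comparison could look like the following sketch; the /binaries/ URL path is a hypothetical assumption here, as the actual update mechanism is implemented later in this chapter:

package require http

# Sketch: compare the embedded checksum (set by the fileinfo.tcl we
# created above) with the .md5 file as published by the server
proc csa::updateAvailable {hostname binary} {
    set url "http://${hostname}:8981/binaries/$binary.md5"
    set token [http::geturl $url -timeout 30000]
    set remote [string trim [http::data $token]]
    http::cleanup $token
    return [expr {$remote ne $::csa::applicationMD5}]
}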

The docopy helper procedure mentioned earlier copies the contents of the source directory to the target directory. It is similar to file copy, but handles the case where one or more files already exist. For example, if both lib-common and src-client/lib are copied to the lib/ subdirectory of the target binary, the file copy command fails with an error that lib/ already exists. The docopy command handles this case properly.

It simply checks each file's type, recursing into directories and copying plain files:

proc docopy {from to} {
    foreach g [glob -directory $from -nocomplain -tails *] {
        set fromg [file join $from $g]
        set tog [file join $to $g]
        if {[file type $fromg] == "directory"} {
            file mkdir $tog
            docopy $fromg $tog
        } else {
            file copy -force $fromg $tog
        }
    }
}

Creating our applications

We now need to start creating the code for our client and server applications—src-client/main.tcl and src-server/main.tcl. Both applications start with the same code, which initializes the starkit, sets up a logger instance, and creates a data directory to store the configuration in:

package require starkit
starkit::startup
namespace eval csa {}
namespace eval csa::log {}
# initialize logging
package require logger
logger::init csa
logger::import -namespace csa::log csa
# set up directories
set csa::datadirectory [file join [pwd] data]
file mkdir $csa::datadirectory

Server-side

For our server application, besides the initialization just described, we need to initialize the Tclhttpd and comm packages:

csa::log::debug "Initializing Tclhttpd"
package require tclhttpdinit
Httpd_Server 8981
csa::log::debug "Initializing comm interface"
package require comm
comm::comm configure -port 1991

This will cause the csa server to listen for HTTP connections on port 8981 and for comm connections on port 1991. The HTTP connection is used for communication with all clients, while the comm package is set up to allow other applications to add requests and to ease testing of the application.

The next step is to load the additional scripts that initialize the database, set up the comm API and handle incoming HTTP requests:

csa::log::debug "Sourcing remaining files"
source [file join $starkit::topdir commapi.tcl]
source [file join $starkit::topdir database.tcl]
source [file join $starkit::topdir clientrequest.tcl]

The files we source are described later in the chapter. Finally, we need to enter the main loop. If Tk is present, we'll show the UI that comes with Tclhttpd:

csa::log::debug "Entering main loop"
# Initialize Tk UI if possible
catch {package require Tk}
if {[info commands tk] != ""} {
    package require httpd::srvui
    SrvUI_Init "CSA server"
} else {
    puts "Server started"
    vwait forever
}

Note

The complete source code is located in the main.tcl file in the 01clientserver/src-server directory in the source code examples for this chapter.

Client-side

For the client, after setting up the logger and the data directory, we'll accept the host name to connect to as an argument, and print an error if it is not provided:

if {$argc < 1} {
    set error "Usage: client <hostname>"
    log::error $error
    puts stderr $error
    exit 1
}
lassign $argv csa::hostname
csa::log::info "Connecting to $csa::hostname"

We will now read the additional scripts—the initialization of the local database, the client code that communicates with the server, and the local application's MD5 checksum:

source [file join $starkit::topdir database.tcl]
source [file join $starkit::topdir fileinfo.tcl]
source [file join $starkit::topdir client.tcl]

Then we'll schedule the csa client to request jobs to perform as soon as it is initialized:

after idle csa::requestJobs

The command csa::requestJobs is explained later in this chapter.

Finally, we need to enter the main loop. Similar to the server part, if Tk is present, we show a minimal UI with an exit button:

csa::log::debug "Entering main loop"
if {[info commands tk] != ""} {
    # basic UI
    wm title . "CSA Client"
    button .exit -text "Exit application" -command exit \
        -width 30 -height 3
    pack .exit -fill both -expand 1
} else {
    vwait forever
}

Note

The complete source code is located in the main.tcl file in the 01clientserver/src-client directory in the source code examples for this chapter.

Communication and registration

While there is no silver bullet for all possible use cases, the authors of this book have decided to implement a simple solution. In order to simplify the communication, everything that we send over HTTP will simply be a dictionary encoded as UTF-8. Since dictionaries can be converted to and from strings by Tcl, we do not need to perform any complex operations and everything is transparent to our application.

Using Tcl's structures and format makes it easy to create both the client and the server in Tcl. It also makes it easier to access the data on the server and the client, and is much more efficient than using standards such as XML—which add the overhead of converting to and from XML, and require more code to send requests or parse responses.
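For example, a request dictionary can be serialized and restored using nothing but the encoding command; the values below are purely illustrative:

# build a request as a plain Tcl dict and serialize it as UTF-8
set req [dict create guid "sample-guid" password "secret" joblimit 5]
set payload [encoding convertto utf-8 $req]

# the receiving side converts the same bytes straight back to a dict
set req2 [dict create {*}[encoding convertfrom utf-8 $payload]]
puts [dict get $req2 joblimit]   ;# prints 5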

If our application needs to talk to other languages as well, it might be a good idea to provide communication using standards such as SOAP. These standards are shown in the next chapter.

Communication between the client and the server is done using the following steps:

  • The client sends an HTTP POST request providing all data as the query, encoded in UTF-8; the fields guid and password specify the client's Globally Unique Identifier (GUID) and its password
  • The server checks whether guid and password are valid
  • If the client is authorized, the server gets the jobs to be performed by this agent and passes them in the response
  • The client then performs these tasks, which are evaluated as Tcl scripts, and sends the results from each task back

Authorization of guid and password is done against a database of all agents. If an agent already exists in the database, its password has to match the one it provides. If an agent does not exist yet, its password is stored and compared upon the next communication. The client generates both the GUID and the password when it is first run.

Server-side

Let's start off with how our server would handle requests.

Our function to process a request would start off as follows:

proc csa::handleClientProtocol {sock suffix} {
    upvar #0 ::Httpd$sock s
    set req [dict create]
    set response [dict create]
    set ok 0
    if {[info exists s(query)]} {
        set req [dict create {*}[encoding \
            convertfrom utf-8 $s(query)]]
    }

This maps the HTTP request data to the variable s, initializes default values for our variables, and, if a query was passed, stores it in the req variable. We also initialize the ok and response variables to indicate that the request was not correct and that the response contains no data—we'll add data to it later.

After this, we check whether a client is allowed to talk to our server by trying to authorize it, provided that the guid and password keys were sent in the request:

if {[dict exists $req guid]
        && [dict exists $req password]} {
    set guid [dict get $req guid]
    if {![authorizeClient $guid \
            [dict get $req password]]} {
        log::warn "handleClientProtocol: Access denied"
        unset guid
    }
}

Our clients will send passwords as plain text. However, the password is created as a random string when the client first connects, and applications requiring security should rely on more sophisticated mechanisms for authorizing clients or users, such as SSL and signed certificates, described in Chapter 13, SSL and Security.

Authorization itself is done by the authorizeClient command. If the authorization succeeds, the guid variable remains set; otherwise we log a warning and unset it.

Next, we can provide the client with additional data only if it is authorized. For example, we provide a list of jobs from the getJobs command, returning at most the number of jobs the client requested, or 10 by default:

if {[info exists guid]} {
    set ok 1
    if {[dict exists $req joblimit]} {
        set joblimit [dict get $req joblimit]
    } else {
        set joblimit 10
    }
    dict set response jobs \
        [getJobs $guid $joblimit]
}

We make sure that only valid clients get the data by checking whether the guid variable exists. If authorization fails, it is unset and the server won't check for jobs to send to the client.

Finally, we need to send the response back to the client, creating a list of the values of the ok and response variables, and converting it to UTF-8:

Httpd_ReturnData $sock application/x-csa-data \
    [encoding convertto utf-8 [list $ok $response]]
}

We've used our own MIME type of application/x-csa-data, but this is not required; the type application/octet-stream, which indicates an arbitrary series of bytes, can also be used.

In order for the client to be able to send results from commands, we'll create an additional small handler. We start again by retrieving the query sent by the client:

proc csa::handleClientResult {sock suffix} {
    upvar #0 ::Httpd$sock s
    set req [dict create]
    set ok 0
    if {[info exists s(query)]} {
        set req [encoding convertfrom utf-8 $s(query)]
    }

If both job and result keys are sent, we store the job's result in the database.

if {[dict exists $req job] && [dict exists $req result]} {
    setJobResult [dict get $req job] \
        [dict get $req result]
    set ok 1
}

Finally we also send the result back to the client:

Httpd_ReturnData $sock application/x-csa-data $ok
}

As the client needs to know a valid job identifier, we can assume that it is a valid client; otherwise, the job identifier would most likely be invalid, which the setJobResult command needs to handle.
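For example, a hypothetical guard inside setJobResult could simply ignore results submitted for unknown job identifiers:

# sketch: drop results for job identifiers we do not know about
if {[llength [db eval {SELECT job FROM clientjobs
        WHERE job=$job}]] == 0} {
    log::warn "setJobResult: unknown job $job"
    return
}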

The last thing we need to do is to register handlers for the appropriate prefixes in our web server:

Url_PrefixInstall /client/protocol csa::handleClientProtocol
Url_PrefixInstall /client/result csa::handleClientResult

Note

The code mentioned in this section is located in the src-server/clientrequest.tcl file in the 01clientserver directory in the source code examples for this chapter.

Client-side

Implementing the communication on the client side is also not a very difficult task. We'll use a polling-based approach, which means that the client will periodically poll the server for new tasks to process.

The first thing to implement is the command for requesting jobs from the server. We'll start off by defining the command that will periodically be called to get new jobs. It starts off by cancelling any other scheduled invocations of the check:

proc csa::requestJobs {} {
    variable hostname
    after cancel csa::requestJobs

We create the URL to connect to, based on the hostname provided by the user. We also set the request's guid and password keys in order to authenticate the request:

set url "http://${hostname}:8981/client/protocol"
set req [dict create]
dict set req guid $::csa::guid
dict set req password $::csa::password

The values for guid and password are stored in the database and are described later.

Next, the command tries to send the request to the server; we convert the request to UTF-8, send it with the application/x-csa-request MIME type, and specify that the csa::requestJobsDone command should be invoked whenever results are retrieved:

if {[catch {
    http::geturl $url -timeout 30000 \
        -query [encoding convertto utf-8 $req] \
        -type application/x-csa-request \
        -command csa::requestJobsDone
}]} {
    # try again in 1 minute
    after 60000 csa::requestJobs
}
}

If the request has been sent successfully, the next steps are performed by the requestJobsDone command. Otherwise, we set up a retry in 60 seconds.

At the start, we'll set up another check for commands in 1 minute from now:

proc csa::requestJobsDone {token} {
    # schedule next communication in 1 minute from now
    after 60000 csa::requestJobs

Then we check whether the response is valid and no error was encountered. In case of any error, we log it, clean up the HTTP token, and return. Otherwise, we convert the response from UTF-8 and proceed:

if {[http::status $token] != "ok"} {
    log::error "requestJobsDone: Invalid HTTP status"
    http::cleanup $token
    return
} elseif {[http::ncode $token] != 200} {
    log::error "requestJobsDone: Invalid HTTP code"
    http::cleanup $token
    return
} else {
    set response [encoding convertfrom utf-8 \
        [http::data $token]]
    log::debug "requestJobsDone: Response: $response"
}

As the first step, we assign the ok and response variables from the first and second elements of the response we retrieved from the server. If the server did not send a negative status as the ok value, we check whether the response contains the key jobs—if it does, and it has any elements, we invoke the runJobs command to run each of the jobs:

lassign $response ok response
if {$ok} {
    if {[dict exists $response jobs]
            && ([llength [dict get $response jobs]] > 0)} {
        runJobs [dict get $response jobs]
    }
} else {
    log::error "requestJobsDone: Server returned error"
}

Finally we clean up the HTTP token.

http::cleanup $token
}

The jobs themselves are run by evaluating them in the global stack frame using the uplevel #0 command. The runJobs command looks as follows:

proc csa::runJobs {jobs} {
    log::debug "runJobs: Running jobs"
    foreach {job command} $jobs {
        log::debug "runJobs: Running job $job"
        set result [uplevel #0 $command]
        log::debug "runJobs: Sending result"
        sendResult $job $result
    }
    log::debug "runJobs: Running jobs complete"
}

The command runs each of the jobs and then submits each result to the server using sendResult, which is shown next.

We first create a URL and a request, setting the job and result keys accordingly. Then the command sends the request to the server:

proc csa::sendResult {job result} {
    variable hostname
    set url "http://${hostname}:8981/client/result"
    set req [dict create job $job result $result]
    set token [http::geturl $url \
        -query [encoding convertto utf-8 $req]]

After sending the HTTP request, we check the response to determine whether the server has properly handled our request; if not, we log and throw an error:

if {[http::status $token] != "ok"} {
    http::cleanup $token
    log::error "sendResult: Invalid HTTP status"
    error "Invalid HTTP status"
} elseif {[http::ncode $token] != 200} {
    http::cleanup $token
    log::error "sendResult: Invalid HTTP code"
    error "Invalid HTTP code"
}

Finally, the command cleans up the HTTP token and returns:

http::cleanup $token
}

This is a complete implementation of the client.

Note

The code mentioned in this section is located in the src-client/client.tcl file in the 01clientserver directory in the source code examples for this chapter.

Storing information

For the purposes of this application, we'll be using the SQLite database. Both the client and the server will use a database for storing their data; however, each of them will use it for different types of information. Also, with the concept of modules that can be used in our applications, extensions can store additional data as well.

Server side

For the server side, our application needs to keep at least two types of items—a list of clients, and the list of jobs each client should perform along with their results.

The database initialization code goes into src-server/database.tcl, which is read from the main script created earlier. We'll start off by loading the packages and initializing the database:

package require sqlite3
package require uuid
sqlite csa::db [file join $csa::datadirectory csa-server.db]

After this, we'll create the tables for storing clients and jobs. We'll start by checking whether the tables already exist:

if {[catch {csa::db eval "SELECT COUNT(*) FROM clients"}]} {

If querying the number of items in the clients table throws an error, the table has not been created yet. In this case, we'll create both the clients and clientjobs tables:

csa::db transaction {
    csa::db eval {
        CREATE TABLE clients (
            guid CHAR(36) NOT NULL PRIMARY KEY,
            password VARCHAR(255) NOT NULL,
            status INTEGER NOT NULL DEFAULT 1,
            lastupdate INTEGER NOT NULL DEFAULT 0
        );
        CREATE TABLE clientjobs (
            job CHAR(36) NOT NULL PRIMARY KEY,
            client CHAR(36) NOT NULL,
            status INTEGER NOT NULL DEFAULT 0,
            lastupdate INTEGER NOT NULL DEFAULT 0,
            command TEXT,
            result TEXT
        );
    }
}
}

The first table stores the GUID and password of all clients, along with the time when they last queried the system. The status field defines the current status of a client: 1 means enabled, 0 means disabled.

The clientjobs table stores the jobs associated with a client. The job field is a GUID identifying the job and client is the GUID of the client that should run it. The status field defines whether the job has already been run: 0 means it has not run yet, 1 means it has. The command field keeps the command to run and result stores the result retrieved from the client.

Now we can create functions that other parts of the application will use.

Let's start with authorizing a client. Our csa::authorizeClient command needs to check whether a client exists in the database, and add it if it does not. It also needs to check that the provided password matches the one in the database and that the client's status is set to enabled; if not, it returns false to indicate that the authentication failed. Otherwise, it updates the lastupdate field and returns true to indicate that the authentication succeeded:

proc csa::authorizeClient {guid password} {
    set now [clock seconds]
    set result [db eval {SELECT status, password
        FROM clients WHERE guid=$guid}]
    # if no entry found, add a new item to database
    # and assume the new agent is ok
    if {[llength $result] == 0} {
        db eval {INSERT INTO clients (guid, password)
            VALUES($guid, $password)}
        return true
    }
    lassign $result status dbpassword
    # check if passwords match
    if {![string equal $dbpassword $password]} {
        return false
    }
    # if client is not currently enabled, return false
    if {$status != 1} {
        return false
    }
    # if everything matched, it's a valid client
    db eval {UPDATE clients SET lastupdate=$now
        WHERE guid=$guid}
    return true
}

Our application assumes that if we do not currently know a particular client, we'll silently add it to the database. While this assumption might not always be appropriate, in this case there is no danger, as a client only accesses its own jobs, which have to be added for new clients explicitly. Such an assumption also makes working with the examples much easier.

In the next step, we'll need to create functions for managing jobs:

  • csa::addJob for creating new entries for a specified agent
  • csa::getJobs to retrieve the list of jobs for a specified agent
  • csa::setJobResult to set the result for a specified entry when the client provides it
  • csa::getJobResult to retrieve a result after it has been provided

Let's start with the command for adding new entries:

proc csa::addJob {client command {callback ""}} {
variable jobResultWait

First we'll check whether the specified client exists and is active:

# do not allow adding jobs for inactive clients
if {[llength [db eval \
        {SELECT guid FROM clients WHERE guid=$client
        AND status=1}]] == 0} {
    return ""
}

Now we'll create an identifier for the job and add it:

set job [uuid::uuid generate]
set now [clock seconds]
db eval {INSERT INTO clientjobs
    (job, client, lastupdate, command)
    VALUES($job, $client, $now, $command)}

If the callback script was provided, we store it:

if {$callback != ""} {
    set jobResultWait($job) $callback
}

Finally we return the job identifier.

return $job
}

After we've added a job, clients need to be able to retrieve it using the getJobs command, which is a simple query for jobs that have not yet been run:

proc csa::getJobs {client limit} {
    return [db eval {SELECT job, command FROM clientjobs
        WHERE client=$client AND status=0 LIMIT $limit}]
}

The next step is being able to set a job's result:

proc csa::setJobResult {job result} {
    variable jobResultWait
    db eval {UPDATE clientjobs SET status=1, result=$result
        WHERE job=$job}

We also need to handle callback scripts that were passed to addJob; if one is set, we schedule it to be invoked from the event loop, and unset it:

if {[info exists jobResultWait($job)]} {
    after idle [linsert $jobResultWait($job) end \
        $result]
    unset jobResultWait($job)
}
}

Finally, retrieving a job's result is also a simple query:

proc csa::getJobResult {job} {
    return [db eval {SELECT status, result FROM clientjobs
        WHERE job=$job AND status=1}]
}

This provides us with all the database access functions needed for both clients and jobs.

The reason for creating callbacks for jobs is that for the comm interface, we will also provide a synchronous interface. This is an interface which, from the caller's perspective, returns once the client has finished processing the command. The callbacks are not stored in the database because whenever the server restarts, callback scripts would no longer be valid.
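As an illustration, a hypothetical caller could use the callback parameter as follows; the job's result is appended as the final argument of the callback script:

# run a command on a client and log its result when it arrives;
# $clientGuid is assumed to hold a valid client identifier
csa::addJob $clientGuid {info hostname} \
    [list apply {{result} {puts "client host name: $result"}}]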

Note

The code mentioned in this section is located in the src-server/database.tcl file in the 01clientserver directory in the source code examples for this chapter.

Client side

The database for the client application is much smaller—all that we store in it is a single table with a single row, containing the guid and the password used for connecting to the server.

Similar to the server, we start by initializing the SQLite package and setting up a database object:

package require sqlite3
package require uuid
namespace eval csa {}
sqlite csa::db [file join $csa::datadirectory csa-client.db]

In the next step, we initialize the table if it does not already exist:

if {[catch {csa::db eval "SELECT COUNT(*) FROM configuration"}]} {
    csa::db transaction {
        csa::db eval {
            CREATE TABLE configuration (
                guid CHAR(36) NOT NULL PRIMARY KEY,
                password VARCHAR(255) NOT NULL
            );
        }

After setting up the table we also generate random guid and password strings, and insert them into the database:

set guid [uuid::uuid generate]
set password [uuid::uuid generate]
# insert authentication data into database
csa::db eval {INSERT INTO configuration
VALUES($guid, $password)}
}
}

Finally, regardless of whether the entries were just generated or were already stored in the database, we set csa::guid and csa::password to the values stored in the database:

lassign [csa::db eval {SELECT guid, password
FROM configuration}] csa::guid csa::password
csa::log::info "Local client identifier: $csa::guid"

We also log the identifier that the client has been assigned each time it is run, so that it is easier to test the application.

Although just these two values do not require a complete database, our application uses SQLite so that potential extensions can also benefit from a pre-initialized database.

Note

The code mentioned in this section is located in the src-client/database.tcl file in the 01clientserver directory in the source code examples for this chapter.

Comm interface—spooling jobs

Now that both the client and the server code are complete, all that remains is to create a comm interface so that we can send commands to the server to add jobs.

As the comm interface is already initialized in the server's main.tcl file, all we need to create now is a handler that limits the available functions and offers a synchronous mechanism for adding a job.

We'll start by creating a hook command for comm's eval event:

comm::comm hook eval {
    return [csa::apihandle [lrange $buffer 0 end]]
}

We pass each command to the csa::apihandle command, which then evaluates it.

Let's create this command. We'll start off by extracting the actual command that was sent, and creating a switch for handling the various commands:

proc csa::apihandle {command} {
    set cmd [lindex $command 0]
    switch -- $cmd {

First we'll handle the addJob command—which simply accepts a client identifier and the command to evaluate, and returns the new job's identifier:

addJob {
    lassign $command cmd client command
    set job [csa::addJob $client $command]
    return $job
}

We'll also create a similar command, but one that waits for the command to be executed by the client and returns its result. We'll use return_async from comm and job callbacks for this:

addJobWait {
    lassign $command cmd client command

We'll now create a future object:

set future [comm::comm return_async]

We then pass this object as a callback for when the new job is completed. This causes comm's future object to be invoked with the actual result, as soon as the result has been submitted by a client:

set job [csa::addJob $client $command \
    [list $future return]]

In case a job was not created, we need to return an error to the caller. This might be the case if the client identifier was invalid:

if {$job == ""} {
    # if no job was assigned because
    # specified client was not found
    $future return -code 1 "Unknown client"
}
}

The last method is getting a job's result, which maps directly to the getJobResult command:

getJobResult {
    lassign $command cmd job
    return [getJobResult $job]
}

In case an unknown command was passed, we return an error:

}
error "Unknown command $cmd"
}

This concludes the comm API, which was the last piece of our server and client applications.

Note

The code mentioned in this section is located in the src-server/commapi.tcl file in the 01clientserver directory in the source code examples for this chapter.

Testing our applications

Now that our application has been implemented, it is time to run it and see how it works.

Let's start by building our 01clientserver code example. Depending on whether it is going to be run on a Linux or Windows system, we can run either the build.sh or the build.bat script.

After all the binaries have been created, which can take up to several minutes, we can now run them. Let's start by running the server in first console window:

C> binaries\server-cli-win32.exe
[Thu Nov 26 19:37:03 +0100 2009] [csa] [debug] 'Initializing Tclhttpd'
[Thu Nov 26 19:37:04 +0100 2009] [csa] [debug] 'Initializing comm interface'
[Thu Nov 26 19:37:05 +0100 2009] [csa] [debug] 'Sourcing remaining files'
[Thu Nov 26 19:37:05 +0100 2009] [csa] [debug] 'Entering main loop'
Server started

For a Linux system the syntax would be slightly different, but the behaviour and logs would be the same.

We can now run the client. Assuming both are run on the same system we can simply run it with 127.0.0.1 as the hostname:

C> binaries\client-cli-win32.exe 127.0.0.1
[Thu Nov 26 19:38:23 +0100 2009] [csa] [info] 'Connecting to 127.0.0.1'
[Thu Nov 26 19:38:25 +0100 2009] [csa] [info] 'Local client identifier: b5fac0c4-a2a6-439e-6eb1-b68b2d72e974'
[Thu Nov 26 19:38:25 +0100 2009] [csa] [debug] 'Entering main loop'

We now have our client and server set up. We also know that our client's identifier is b5fac0c4-a2a6-439e-6eb1-b68b2d72e974. Now we can run any other Tcl session and send the command to be evaluated by running:

% set cid "b5fac0c4-a2a6-439e-6eb1-b68b2d72e974"
% package require comm
% puts [comm::comm send 1991 addJobWait $cid {expr 1+1}]
2
% set job [comm::comm send 1991 addJob $cid {expr 1+2}]
% after 60000
% puts [comm::comm send 1991 getJobResult $job]
1 3

We can see that both commands have been evaluated. The latter form returns both the status of the command and the value as a list, so to get just the value we should in fact do the following:

% puts [lindex [comm::comm send 1991 getJobResult $job] 1]
3

In case of any problems with the commands, the logs provided by both the client and the server on their standard outputs should give additional information about what the problem might be.
