We centralize services in part to cut down
on the administrative burden of maintaining distributed services. The
zero-footprint, no-install nature of web-based software is one of its
most compelling features. But it’s not appropriate to
centralize everything. Some services ought to run locally because the
network connection is intermittent, or because it’s more
efficient to connect to local resources, or simply because
they’re personal rather than shared. In these cases code needs
to be not merely mobile, like Java, but persistent, like ActiveX.
What happens, in a distributed network of peers, when you need to
update not only the data stored on each node, but the methods that
operate on that data? The dhttp environment
enables a simple solution to this problem. Example 15-9 shows the
public engine function do_engine_update_sub(), which lets a
dhttp node update one of its plug-ins
in situ and on the fly.
Example 15-9. dhttp Support for Code Replication
sub do_engine_update_sub
  {
  my ($args)    = @_;
  my ($argref)  = &Engine::PrivUtils::getArgs($args);
  my ($app)     = &Engine::PrivUtils::unescape($$argref{app});
  my ($subname) = &Engine::PrivUtils::unescape($$argref{subname});
  my ($subcode) = &Engine::PrivUtils::unescape($$argref{subcode});
  my ($module)  = '';
  my ($found)   = 0;
  open(F,"$main::root/Apps/$app.pm") or
    $main::debug && warn "cannot open $main::root/Apps/$app.pm";
  while (<F>)
    {
    if ( m#^1;# )                    # module ends with 1;
      {
      if (! $found )                 # if target sub not yet found
        { $module .= $subcode; }     # emit the new code
      $module .= $_;                 # then emit the 1;
      last;                          # then bail out
      }
    if ( m#^sub $subname$# )         # found the target sub
      {
      $found = 1;
      my $end = 0;                   # not at the end of the target sub yet
      my $line;
      $module .= $subcode;           # emit the new code
      while ( (! $end) and ($line = <F>) )
        {
        if ( ($line =~ m#^sub#) or   # found the next sub
             ($line =~ m#^1;#  ) )   # found end of module
          {
          $end = 1;                  # signal we're at the end of the replaced sub
          $module .= $line;          # emit next sub's declaration or 1;
          }
        }
      }
    else
      { $module .= $_; }
    }
  close F;
  open (F, ">$main::root/Apps/$app.pm");
  print F $module;
  close F;
  eval ($module);
  }
This method receives a URL-encoded Perl function over the HTTP
connection, evaluates that function in the namespace of a
dhttp plug-in, and rewrites the plug-in’s
source code accordingly. A web-client script can use this method to
project a new version of any of the plug-in’s methods into any
dhttp node. The update occurs instantly, without
requiring a server restart, because
do_engine_update_sub() uses Perl’s
eval function to
alter the target method in situ. It also
rewrites the plug-in’s source code so that when the server does
restart, it uses the new version. Example 15-10 shows
how to update the method do_sfa_foo() from
Version 1 to Version 2.
Example 15-10. Using dhttp Code Replication
use Engine::Server;
use Engine::PrivUtils;

$host = 'jon_linux';
$port = 9191;

while (<DATA>)
  { $sub .= $_; }

$sub = Engine::PrivUtils::escape($sub);

# first update the method on the target
print Engine::Server::getUrl($host,$port,
  "/engine_update_sub?app=sfa&host=$host&port=$port&subname=do_sfa_foo&subcode=$sub");

# then call the method
print Engine::Server::getUrl($host,$port,"/sfa_foo");

__DATA__

sub do_sfa_foo
  {
  print httpStandardHeader;
  print "foo v2";
  }
Assuming that the URL /sfa_foo
originally returned “foo v1,” this script updates the
method do_sfa_foo(), then calls it, producing
the output “foo v2.”
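The in-place update hinges on a single Perl feature: eval of a string compiles that string in the current package, so evaluating a new definition of an existing sub rebinds the name immediately. The following minimal sketch shows the whole round trip in one process; the escape() and unescape() helpers here are stand-ins for the Engine::PrivUtils routines, whose implementation isn't shown, and are assumed to do ordinary percent-encoding.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Stand-ins for Engine::PrivUtils::escape/unescape (assumed to do
# plain percent-encoding; the real module isn't shown in the text).
sub escape   { my ($s) = @_; $s =~ s/([^A-Za-z0-9_])/sprintf("%%%02X", ord($1))/ge; $s }
sub unescape { my ($s) = @_; $s =~ s/%([0-9A-Fa-f]{2})/chr(hex($1))/ge; $s }

# Version 1 of the method.
sub do_sfa_foo { "foo v1" }
print do_sfa_foo(), "\n";             # prints "foo v1"

# Version 2 arrives URL-encoded, as it would over the HTTP connection.
my $subcode = escape('sub do_sfa_foo { "foo v2" }');

# Decode and eval: the new definition replaces the old one in situ.
{
    no warnings 'redefine';           # the redefinition is intentional
    eval unescape($subcode);
    die $@ if $@;                     # surface any compile error in the new code
}
print do_sfa_foo(), "\n";             # prints "foo v2"
```

The lexical `no warnings 'redefine'` covers the eval'd string too, since eval STRING compiles in the surrounding lexical scope; a production version would also want to check $@ and report the failure to the caller rather than die.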
Scary, isn’t it? In fact, much too scary. With very little effort, we’ve arranged so that any dhttp node can expose not only its SQL data, but also its plug-in code, to any HTTP-aware client that wants to rewrite the data or the code. And the code is being executed by a full-strength Perl interpreter. Like any powerful technology, this one’s a double-edged sword. Wielded responsibly, it can enable all sorts of useful things. In the wrong hands, it can spell disaster. As with genetic engineering, there are two ways to respond to this dilemma:
You might reasonably conclude that the potential risks outweigh the potential benefits. Peer-to-peer replication of code is inherently uncontrollable, therefore dangerous, therefore to be shunned.
You might also reasonably conclude that if peer-to-peer replication of code seems too simple and too powerful, then the correct response is to tap into the source of that simplicity and power, analyze the associated risks, and learn how to manage them.
The latter response leads to a discussion of ways to use dhttp securely. As we’ll see, some of the conventional solutions apply. In addition, the presence of an always running process on each node creates an interesting new opportunity based on the notion of a local HTTP proxy.
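One conventional safeguard, sketched here purely as an assumption since dhttp's actual access-control hooks aren't shown in this excerpt, is to refuse privileged functions such as engine_update_sub unless the request arrived over the loopback interface. A dispatcher could route such requests through a guard like this one (peer_is_local() is a hypothetical name, not a dhttp function):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Socket qw(unpack_sockaddr_in inet_ntoa);

# Hypothetical guard: before dispatching a privileged function such as
# engine_update_sub, confirm the client connected over loopback.
sub peer_is_local {
    my ($sock) = @_;
    my $peer = getpeername($sock) or return 0;     # not a connected socket
    my (undef, $addr) = unpack_sockaddr_in($peer); # (port, packed IPv4 address)
    return inet_ntoa($addr) eq '127.0.0.1';
}
```

A server loop would call peer_is_local() on each accepted connection and answer 403 for privileged URLs when it returns false, so remote peers keep ordinary read access while code replication stays under local control.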