The BYTE Magazine docbase presented all of the magazine’s text and images. From the outset, I knew that I wanted to build some kind of repository and use some kind of transformation tool to convert content stored in that repository into deliverable web pages. What motivated that design, at first, was mainly a concern about efficiency. I supposed that an online version of BYTE would interest a lot of people. It did. Over three years, it attracted more than 3 million readers and eventually up to 10,000 a day. My goal was to provide a rich and comprehensive technical reference library, but one that would respond quickly under heavy load, even over the slow dialup links that were the only means of access for many international users.
The technique I adopted, which I call dynamic generation of statically served pages, generates HTML pages from a markup-language repository. Because the pages were served statically, there was none of the performance overhead of a so-called “database-driven” site. Throughout the life of this docbase, a 32MB 150MHz server, laughably antique by today’s standards, pumped out archive pages at an ever-increasing rate with no sign of strain.
Yet because the pages were generated dynamically, the docbase enjoyed much of the flexibility that we expect from a database-driven site. In fact, it was database-driven, although not in the conventional sense. The repository was its database. It wasn’t a real-time or relational or transactional database, but it didn’t need to be. It only needed to package the content using a predictable and regular structure. Given that structure, a transformation tool, which I called a translator and which people in the SGML/XML world tend to call processing software, could generate the deliverable pages.
Although my first concern was efficiency, it soon became clear that this method was enormously flexible. With each iteration of the translator, I found new ways to draw out groupware features that were latent in the repository. These features were, conceptually, ways to manage the relationships among various groups, and I thought of them as bindings. As it evolved, the docbase created a series of bindings involving authors, readers, subscribers (a subset of readers), vendors, and advertisers (a subset of vendors). It connected readers to authors by way of a feedback mechanism that evolved from a standard mailto: link into a context-sensitive form, generated on demand for each article, that routed comments to the team responsible for that article. It connected readers to vendors by way of a referral mechanism that transformed references to companies, products, and product categories into appropriate links to a partner site that processed and relayed requests for information. It connected advertisers to readers indirectly by way of a mapping between content categories and ad categories—so that an IBM DB2 ad, for example, could selectively bind to database-related articles. The never-released final version connected advertisers more directly to subscribers, so that an IBM DB2 ad could selectively bind to pages viewed by subscribers who had registered a preference for database articles.
Had I started the BYTE docbase in 1999, rather than 1995, I’d have used XML to define the repository format and implemented the translator in Perl, using the XML::Parser module that connects Perl to an XML parser called expat.[6] But in 1995 there was no XML or XML::Parser, and I was only beginning to learn about Perl. So I made up my own simple markup language (see Example 5.4) and used a programmable text editor to process it into web pages. The text editor was Lugaru’s Epsilon, an Emacs workalike to which I’ve been hopelessly addicted for many years. The programming language was Epsilon Extension Language (EEL), an embedded C-like interpreter with powerful regular-expression support.
Think of the repository as source code, the translator as a compiler, and the deliverable HTML pages as object code. It took about 45 minutes to “compile” the 10,000-page magazine docbase, but a complete rebuild was only necessary when we needed to propagate a change—which might be a new standard page template or a new embedded function linked to some standard element of a page—across the entire docbase. Otherwise, only an incremental rebuild was needed. Once a new month’s content was stored correctly in the repository, the incremental rebuild took just a few minutes.
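The compile analogy suggests a simple way to decide between full and incremental rebuilds: compare timestamps, much as make does. Here is a minimal sketch in Python (illustrative only; the translator described here was written in EEL, and the function name is invented):

```python
import os

def needs_rebuild(source_path, output_path):
    """Rebuild a page only when its repository source is newer than
    the generated HTML -- the 'incremental compile' in the analogy."""
    if not os.path.exists(output_path):
        return True  # never compiled: always build
    return os.path.getmtime(source_path) > os.path.getmtime(output_path)
```

A full rebuild is then just the degenerate case in which every page is treated as stale, for instance after a change to a shared page template.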
This approach entails the same trade-offs that govern compiled versus interpreted software. If there had been very frequent inflow of new content, and if the docbase had to absorb and reflect that content in real time, I’d have needed a system that was database-driven in the conventional sense. It’s neither easier nor harder to build that kind of system. When you do, you shift work from a compiler, which feeds a statically served site, to a runtime system that implements a dynamically served site. Which approach is best? That depends on a host of docbase variables: the ratio of structured to free-form content, the refresh frequency, the kinds of dynamic features required in the generated HTML, the usage level, the transactional load.
People tend to assume that a dynamically served docbase is intrinsically better than a statically served docbase, because it’s backed by a “real” database. But there’s an important middle ground. A richly structured and rigorously maintained repository, coupled with smart processing software, can—for a certain class of applications—deliver the best of both worlds. This approach can combine the intelligence of a dynamically served docbase with the low overhead and high performance of a statically served docbase. Groupware applications are often good candidates for this treatment. They tend to be text heavy and semistructured, without strong real-time or transactional requirements.
A lot of useful applications fall into this category, and we’ll see more examples of dynamically generated and statically served docbases in later chapters. But we’ll also dynamically generate pages where it makes sense to do so, and we’ll sometimes mix the two styles. Ultimately what matters are the ends, not the means.
My EEL-based translator is now
obsolete. I mainly use Perl and its XML::Parser
module to transform docbases. But since I am going to show you some
ways that my EEL translator turned the magazine docbase into a
groupware application, and since it did the same kinds of things that
XML-oriented text-processing software does, let’s look at a
fragment of the EEL code. Example 5.3 shows a function called doBio().
Example 5-3. Text Processing with Epsilon’s EEL
doBio()
  {
  char sTmp[250];             // alloc space for mailto: URL
  bufnum = bufBI;             // switch to buffer containing contents of <bio> tag
  point = 0;                  // go to start of buffer
  if (size() > 0)             // test that <bio> tag wasn't empty
    {
    killTag();                // remove <bio> tag
    point = 0;                // go to start of buffer
    sprintf(sTmp,"<a href=%cmailto:#1%c>#1</a>",0x22,0x22);  // make replacement
    string_replace(RE_MAILURL,sTmp,REGEX);  // look for email address,
                                            // regex-replace with mailto: URL
    point = 0;                // go to start of buffer
    stuff("<hr><em><strong>");   // insert styling
    point = size();           // go to end of buffer
    stuff("</strong></em>");  // insert styling
    bufnum = bufFinal;        // switch to output accumulator
    grab_buffer(bufBI);       // insert contents of <bio> tag
    }
  }
This function was called when the translator had encountered a
<bio>
tag, which might contain text such as:
“Jon Udell, executive editor for New Media, can be reached at
[email protected].” The purpose of doBio() was
twofold. Its first job was to wrap HTML styles around the contents of
the <bio>
region of an article. And in
retrospect, it did that poorly. When you see HTML fragments
interspersed with code, as in Example 5.3,
it’s usually a sign of a missed opportunity to templatize an
HTML generator. Templates are easy to make and easy to process, and
they help you visualize and modify your page designs. I’ve
since learned that lesson, and you’ll see lots of examples
throughout this book of processing scripts that merge content into
templates as they generate HTML
pages.
The second job of doBio( )
was to do what the
message-authoring tools we discussed in Part I
do when they encounter an implied URL—namely, activate that
URL. In this case, that meant transforming
[email protected] into the clickable link <a href="mailto:[email protected]">[email protected]</a>. If
you’re familiar with regular expressions and text processing,
this kind of transformation should seem utterly ordinary. Indeed it
is. Yet in the context of processing software that reads a docbase
repository and writes a deliverable docbase, it can be used to add
subtle but powerful groupware features to the
docbase.
To see how a translator can create groupware, let’s first ask again why it should. Why, after all, did the magazine archive exist online in the first place? We wanted to offer a service to readers, we wanted to keep up with the times, we wanted to market our product. And yes, because the Web is an advertising-driven publishing medium and because our product was the kind of content that plays well in that medium, we aimed for advertising revenue too. Many businesses, of course, aren’t as naturally content-rich as ours was, so ad revenue doesn’t figure into the equation. But for all businesses, there is another reason to be online. Web content, in almost any form, creates groupware opportunities. By that I mean that it creates possibilities to connect internal groups (product development, support, marketing, and other teams) with external groups (existing customers, prospective customers, business partners). These connections can be expressed, through the medium of a docbase, in small details. Suppose, for example, that the HTML rendering of [email protected] were instead:
<a href="mailto:[email protected]?subject=Article+Feedback|August+1998|Web+Project|Distributed+HTTP">[email protected]</a>
Here the translator adds extra value to the docbase. It doesn’t
just activate an implied URL, to make it easier for a reader to send
me email. It also uses its knowledge of the context in which that
implied URL appears to help the reader send me a much smarter piece
of email. How? As it parses the repository, the translator can easily
remember contextual clues, such as the issue date or the magazine
section of an article. Using the parameterized mailto: trick that we
saw in Chapter 4, it can use this information to
spare the reader the trouble of specifying the context that provoked
the sending of the message. This also ensures that every message from
this source—that is, from any <bio>
region in the docbase—will announce its originating context.
The recipient might then use a client-side mail filter, as shown in
Chapter 4, to manage messages from that source,
organizing them (and counting them) by magazine issue (“August
1998”) or by magazine section (“Web Project”).
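Constructing such a parameterized mailto: link is mostly a matter of URL-encoding the contextual fields. A sketch in Python (the field order and the “|” separator follow the example above; the function name and the address are my own placeholders):

```python
from urllib.parse import quote_plus

def context_mailto(address, issue, section, title):
    """Build a parameterized mailto: link whose subject announces
    the originating context; '|' separates the fields."""
    subject = "|".join(["Article Feedback", issue, section, title])
    # Encode spaces as '+' but keep the '|' separators readable.
    return '<a href="mailto:%s?subject=%s">%s</a>' % (
        address, quote_plus(subject, safe="|"), address)
```

Every <bio> region processed this way then announces its issue and section in each message it generates.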
This kind of detail is, individually, not earthshaking. But when there are dozens or hundreds of contributors to a docbase, and thousands or even millions of users, the details add up. The docbase is, among other things, a tool that facilitates interactions between its contributors and its users. Enriching the context that surrounds one of those interactions yields a small reward for a small effort. But enriching the context of all the interactions yields a much larger reward for the same small effort.
It’s uniquely the mission of a trade magazine to use content to mediate between subscribers and vendors. What about other kinds of businesses? Every corporate web site can and should do more than just publish information about its products and services. Those docbases can become groupware applications that connect the teams that create the products and services to the customers who use them. Such connections ought to be bidirectional, and they ought to help to define and enhance relationships among groups within and across the firewall.
Consider, for example, the publication of a user manual online. That’s something that many companies already do or want to do. Typical rationales might be that an online manual is a cheap alternative to a printed manual or that it enables electronic updates to the printed manual. In groupware terms, though, a docbase can do much more. It needn’t merely dispense information. It can also help the provider of the docbase learn how people use that information—and more importantly, how they use the product that the manual describes. How? By using the docbase translator to add various kinds of context-sensitive intelligence, or instrumentation, to the deliverable HTML pages that it generates.
One important use for this kind of instrumentation is to enable context-sensitive feedback. Your documentation probably divides into functional areas, and you may assign different writers and editors to those areas. There is probably also an implied relationship between each of these areas and product development, marketing, or sales teams. A page of a docbase can (and should) “know” to which area it belongs. When it does, it can collect feedback in an intelligent manner—feedback that’s aware of its page of origin, of the functional area to which the page belongs, and of the teams responsible for that area. And it can route questions and comments to appropriate teams, ideally using references to roles that refer indirectly to people and groups, rather than hardcoded email addresses.
This may sound complicated and hard, but it really isn’t. Suppose you manufacture a bread machine, and the XML repository from which you generate its manual looks like this:
<docbase name="BreadMaker Operations Manual" product="BreadMaker">
  <chapter label="3" name="Cleaning and maintaining your BreadMaker"
           category="maintenance">
    <section name="First-time cleaning" model="1A">
      <p>....</p>
    </section>
    <section name="First-time cleaning" model="2B">
      <p>....</p>
    </section>
  </chapter>
</docbase>
Note the databaselike nature of this repository. There are, for example, two parallel instances of the “First-time cleaning” section, because different models require different methods. Which section will appear in the docbase? The translator selects the right one for the version of the docbase that it generates.
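That selection step is easy to sketch. Using Python’s standard XML library for illustration (the actual translator predates it, and the function name is invented), a build for model 1A keeps only the matching <section> variants:

```python
import xml.etree.ElementTree as ET

def sections_for_model(repository_xml, model):
    """Keep only the <section> variants that match the model being
    generated -- the translator's database-like selection step.
    Sections without a model attribute apply to every model."""
    root = ET.fromstring(repository_xml)
    return [s.get("name")
            for s in root.iter("section")
            if s.get("model") in (None, model)]
```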
Now let’s suppose the convention in this docbase is that section headings are followed by clickable feedback icons. When it forms the address of an icon’s link, the translator stuffs in every scrap of potentially useful contextual information that it can:
<a href="/cgi-bin/feedback?docbase=BreadMaker+Operations+Manual&product=BreadMaker&chapter=3&model=1A&section=First-time+cleaning&category=maintenance"><img src="/img/feedback.gif"></a>
Creating this instrumentation was
easy. What’s it good for? That’s up to the
feedback script that runs when a user clicks the
link. It might vary the feedback-collecting form that it presents to
the user according to the originating docbase, the product it
describes, or the category (e.g., maintenance)
associated with the originating page. It can embed all the contextual
clues in the form it generates so that the form’s handler can
use them to track the feedback, route it to appropriate teams, and
store it. The appropriate place to store it might be in a database to
support numeric analysis of feedback by product or category, or it
might be in another docbase to support review of the anecdotal
responses collected by the form.
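Both ends of this instrumentation are small pieces of code. A sketch in Python, with invented function names: one helper emits the instrumented link, and another shows the first thing a feedback script does with it, namely recover the context.

```python
from urllib.parse import urlsplit, parse_qs, urlencode

def feedback_link(**context):
    """Emit a feedback icon link carrying every contextual field,
    in the spirit of the instrumented URL shown above."""
    return ('<a href="/cgi-bin/feedback?%s">'
            '<img src="/img/feedback.gif"></a>' % urlencode(context))

def feedback_context(url):
    """Recover the contextual fields from a clicked feedback link,
    so the script can tailor the form it generates."""
    qs = parse_qs(urlsplit(url).query)
    return {k: v[0] for k, v in qs.items()}
```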
To see a more detailed example of the kinds of implied functionality that a translator can extract from a repository, let’s look at how a fragment of the BYTE repository was converted into its corresponding web page. Example 5.4 shows how my August 1998 BYTE column would have looked, had I then relied (as I do now) on XML as a representation language.
Example 5-4. BYTE Docbase Repository Fragment
<article>
<section>Web Project</section>
<category>distributed_computing</category>
<keywords>web programming, distributed computing, http</keywords>
<head>Distributed HTTP Now!</head>
<deck>
Peer-to-peer Web computing is the future. Why not start exploring
the possibilities now?
</deck>
<byline>Jon Udell</byline>
<text>
The more I work with dhttp, the more I'm convinced that it represents
the right way to integrate web/Internet technologies with the
mainstream Windows desktop. Something like a dhttp service, I'm
arguing, ought to be running everywhere. It defines a new platform.
... see <a href="#fig1">figure 1</a> for details ...
... the module <tt>Engine::Server</tt> ...
... <a href="http://www.netscape.com">http://www.netscape.com</a> ...
... Reader Service Number: 1027
</text>
<illustration>
<a name="fig1"/>
<title>Dhttp Architecture</title>
<image>1998-08-wpj-01.gif</image>
<caption>
The dhttp system comprises an engine that hosts one or more pluggable
applications, each with its own ODBC connection.
</caption>
</illustration>
<sidebar>
<section>Web Project</section>
<category>distributed_computing</category>
<keywords>data replication, distributed computing</keywords>
<head>Data replication with DHTTP</head>
<text>...</text>
</sidebar>
<bio>
Jon Udell is BYTE's executive editor for new media. He can be reached
at [email protected].
</bio>
</article>
This XML example revises in minor respects the format I actually
used. Most notably, my homegrown markup language didn’t require
each opening tag (e.g., <head>) to be paired with a closing tag (e.g., </head>). Why not?
The parser that read the original format aimed to simplify the coding
rules for the repository. At the time, I thought that made life
easier for the repository maintainer, by reducing keystrokes. But it
was a bad bargain, because my homegrown parser never did the kind of
robust error checking that XML parsers do. The convenience of fewer
keystrokes was more than offset by the burden of finding and
correcting errors. It would be a worse bargain today. With the advent
of XML as a way to represent markup language and of freely available
XML parsers to process such markup, it’s foolish to waste time
writing parsing code. Let the XML parser do that grunt work for you.
Spend your time on what really matters—writing the
parser-enabled processing software that adds value to docbases.
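Python’s standard library happens to wrap the same expat parser that Perl’s XML::Parser does, so the division of labor is easy to illustrate: the parser handles the syntax, and your handlers add the value. A minimal sketch (the function is my own, not from the book):

```python
import xml.parsers.expat

def tag_census(document):
    """Let the parser do the grunt work: count the elements in a
    repository fragment, using start-tag handlers much like the
    ones XML::Parser scripts register."""
    counts = {}
    def start(name, attrs):
        counts[name] = counts.get(name, 0) + 1
    p = xml.parsers.expat.ParserCreate()
    p.StartElementHandler = start
    p.Parse(document, True)  # True: this is the final chunk
    return counts
```

A malformed repository raises an error here automatically; that is exactly the robust checking the homegrown parser never did.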
We’ll pretend that this repository had been XML, ignore the parsing step that XML would have made trivial, and focus on what’s still relevant today—namely, the groupware features that the translator injected into the docbase. For a simple but complete example of an XML-enabled docbase translator, see Chapter 9, which shows how a real XML repository (the contents of this book) was translated by an XML::Parser script into a docbase (an HTML archive with feedback instrumentation).
Note how the
repository format in Example 5.4 freely intermixes
XML tags such as <head>
and
<illustration>
with HTML tags such as
<tt>
and <a href>
.
To an XML parser, everything here looks like well-formed XML. But as
we’ll see in Chapter 9, well-formed XML can
coincide with HTML. In this example, the translator can choose to
interpret the non-HTML tags one way and the HTML tags another. For
example, an <illustration>
tag told the
translator to emit a series of HTML tags defining a new region of the
generated docbase page. However, the <text>
tag told the translator to pass everything until the closing
</text>
tag—typically, a mixture of
text and HTML—to a routine that specialized in converting that
repository element into its corresponding output.
What might that routine do? Usually, it just passed the text and HTML through to the generated docbase page. In our case, this stuff was manufactured by an export filter that converted Quark pages into HTML extracts. But sometimes, as we’ll see shortly, the routine translated a text pattern into a link, thus adding a dynamic feature to the docbase.
Figure 5.1 shows the docbase page that the translator would have generated from the repository fragment shown in Example 5.4.
This page is rich with instrumentation. Let’s review some of the features the translator has added to the docbase.
Using its
knowledge of the current article, the translator created a composite
title based on the pattern PUBLICATION NAME / ISSUE DATE / SECTION / TITLE
and wrote it out as the HTML document
title—that’s the contents of the
<title>
.. </title>
region, which renders as the text of the browser’s window
titlebar.
This fielded format is a simple technique that turns out to have remarkably many uses. It brands the page as belonging to the BYTE archive, tells the age (August 1998)[7] and type (Web Project) of the article, and announces its title. In Chapter 8, we’ll see how this method helps a search-results filter work intelligently. Briefly, all search engines report hits using two items: a URL and the HTML document title. The database-like nature of this composite title enables the search-results filter to organize found stories by age and type, not just by relevance.
There’s more. The translator also wrote a mapping file that correlated URLs with document titles. An entry in that file looked like this:
/art/9808/sec11/art1/art1.htm | August 1998 | Web Project | Distributed HTTP
This file enabled the log-processing filter to work intelligently. It looked up the URL for each entry in the daily web server log, mapped that URL to its composite title, parsed the title into its constituent fields, and used those fields to organize usage reports. One report ranked usage by issue, another by section. These reports helped answer questions like “What are the most popular three issues online?” and “Which sections of the magazine consistently attract the most readers over time?” So the structured titles became a way to help a group of authors and editors gauge the effect of their work on a group of online readers.
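The log-processing filter is a few lines of string handling. A sketch in Python, assuming the mapping-file format shown above (field 0 is the issue, field 1 the section, field 2 the title; the function name is invented):

```python
def usage_by_field(mapping_lines, log_urls, field):
    """Aggregate web-log hits by issue (field 0) or section (field 1),
    using the URL-to-composite-title mapping file."""
    titles = {}
    for line in mapping_lines:
        url, *fields = [p.strip() for p in line.split("|")]
        titles[url] = fields
    counts = {}
    for url in log_urls:
        if url in titles:                    # ignore non-article hits
            key = titles[url][field]
            counts[key] = counts.get(key, 0) + 1
    return counts
```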
Here’s the same structured title that appears in the browser’s titlebar, but in this context, it’s clickable. A user can jump up one level to a summary page for this month’s Web Project section or up two levels to the table of contents for the August 1998 issue.
There’s another reason to recapitulate the titlebar here. Before the translator did that, we used to engage in this dialogue several times a month:
User: “Why don’t you include the dates of the articles?”
Webmaster: “We do. The date is part of the HTML document title. It appears in the browser’s window title bar.”
User: “Gosh, you’re right! Sorry, I don’t know why I didn’t look there.”
Eventually it dawned on me that nobody looks there. For most people, the document title doesn’t seem to belong to the page in the same way that the document’s body does. To me, the date in the window’s titlebar was obvious, but no matter. The customer is always right.
Of course, the customer would be out of luck if fixing this meant visiting and editing 10,000 pages. Fortunately, the translator already did that routinely and was in a position to know what addresses these links should point to. It took just a few lines of code and a rebuild to add this tree-navigation widget and make some users happier.
The translator knew,
because it parsed the <section>
tag, that
this article belonged to the Web Project section. It also knew,
because it parsed the <category>
tag, that
the article belonged to the distributed_computing
category. It therefore inserted “more like this” links to
appropriate section and category pages.
Where did those pages come from? The translator built them too. It was already processing all the information needed to construct these views of the docbase. Materializing them as the HTML pages behind “more like this” links was straightforward.
In groupware terms, this feature created bindings between views of the docbase and subgroups of the readership interested in those views. A follower of the Web Project column could easily find more such columns; a reader intrigued by distributed computing could find other material in that category.
Initially
the translator just recognized patterns like
http://www.netscape.com/ and turned them into
patterns like <a href="http://www.netscape.com/">http://www.netscape.com/</a>
—thus
activating the implied link. How? It’s a regular-expression
search-and-replace operation. Example 5.3 shows how
that was done in EEL. In Chapter 6, we’ll see
an example of the same thing in Perl.
A later version of the translator modified the link addresses to look like this:
<a href="/refer.pl?url=http://www.netscape.com/&issue=1998-08&section=Web+Project&title=Distributed+HTTP">http://www.netscape.com/</a>
The script named refer.pl logged each use of the link, then issued a redirection to the specified URL. Why? It’s easy to track referrals and costs virtually nothing. You might never need to harvest the data, but then again, you might. Why foreclose the option? Referral data measures the affinity of the users of a docbase for the sites mentioned in the docbase. When you report additional context, as shown here, you can track that affinity in a highly granular way.
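A log-and-redirect script really is nearly free. Here is a sketch in Python of the behavior described for refer.pl (not its actual code, which was Perl):

```python
from urllib.parse import urlsplit, parse_qs

def handle_referral(request_url, log):
    """Record the contextual fields carried by the link, then
    issue a CGI-style redirect to the target URL."""
    params = {k: v[0] for k, v in
              parse_qs(urlsplit(request_url).query).items()}
    target = params.pop("url")
    log.append(params)  # who was referred where, from which context
    return "Location: %s\r\n\r\n" % target
```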
Traditionally, in the trade magazine business, company and product references were accompanied by numbers that also appeared on the blown-in postcards, or bingo cards, that readers could use to request product literature from companies. Magazines outsourced the handling of these bingo cards to agencies that tallied the data and produced the mailing labels used by the target companies to fulfill literature requests. Nowadays, as you’d expect, this process is moving online. To modernize its bingo-card system, BYTE partnered with InfoXpress, a company that provides online reader service for a number of magazines.
Thanks to the translator, BYTE’s interface to InfoXpress was particularly effective. Other sites rely on a generic referral to an InfoXpress subsite. There, users have to enter the number of an item, or drill down through issue, category, or company/product pages, in order to reach the page on which to make a specific literature request. Our implementation led directly to that page, because the translator used the contextual information at its disposal to transform the reader service number into a URL that looked like this:
<a href="http://www.infotracker.com/byte?issue=1998-08&rsn=1027">1027</a>
What if an article didn’t contain a reader service number? In many cases, the translator could still create an appropriate link to the partner site, based on the value of the article’s category tag. These links led to category pages on the partner site. An article that listed a flock of printer manufacturers, without individual request codes, might say: “Click here to request more information about printers.” The address behind the link might be:
http://www.infotracker.com/byte?issue=1998-08&category=printers
The translator produced this kind of link when it spotted an instance
of the standard “Products Mentioned” element and when it
was also in possession of an appropriate category term (from the
<category>
or
<keywords>
tags).
The end result was another kind of binding. In the context of product-related articles, the site connected readers, as directly as possible, to the vendors of those products. The interface to the partner site was expressed not as a single generic link but as thousands of context-sensitive links.
Note that the markup language shown in Example 5.4 does not define a reader service element. It wasn’t necessary. The source text already exhibited a perfectly regular pattern matching the Perl regular expression:
/Reader Service Number: (\d+)/i
Sure, I could have required the repository maintainer to do this:
Reader Service Number: <rsn>1027</rsn>
But that might be overkill. When data naturally exhibits a regular pattern that you can exploit in a simple and reliable way, you can impose fewer requirements on the people who create and maintain a repository. If a tool writes the repository, this isn’t an issue. It can create arbitrarily rich markup. But sometimes it makes sense for people to create markup directly. In those cases, the less the better.
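Exploiting the natural pattern takes one regular expression. A sketch in Python, using the partner-site URL format shown earlier (the function name is invented):

```python
import re

RSN = re.compile(r'Reader Service Number: (\d+)', re.IGNORECASE)

def link_reader_service(text, issue):
    """Turn each naturally occurring reader service number into a
    context-sensitive link to the partner site -- no <rsn> tag needed."""
    def repl(m):
        rsn = m.group(1)
        return ('Reader Service Number: '
                '<a href="http://www.infotracker.com/byte?issue=%s&rsn=%s">%s</a>'
                % (issue, rsn, rsn))
    return RSN.sub(repl, text)
```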
The tag
<image>1998-08-wpj-01.gif</image>
referred to a full-size GIF image. But what the translator emitted,
instead, was a reference to a thumbnail version of the full-size
image. Why? Users, many of whom accessed the site from marginal
networks, told me that they preferred this method. It’s another
use of the layering strategy discussed in Chapter 3.
A thumbnail reveals enough about an image to help a user decide
whether it’s worth the time needed to download it. The size of
the image in bytes also helps the user make that decision. So the
translator reported that too. To streamline display of the thumbnail,
it extracted its width and height (from bytes 7-10 of the thumbnail
GIF) and tucked these values into the WIDTH
and
HEIGHT
attributes of the generated
<img src>
tag, so that a browser could frame
a thumbnail and continue rendering the page without interruption.
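Pulling the dimensions out of a GIF header really is that cheap: the logical screen width and height are two little-endian 16-bit values immediately after the 6-byte signature. A sketch in Python:

```python
import struct

def gif_dimensions(gif_bytes):
    """Read width and height from a GIF's logical screen descriptor:
    two little-endian 16-bit values following the 6-byte signature."""
    if gif_bytes[:3] != b"GIF":
        raise ValueError("not a GIF")
    width, height = struct.unpack("<HH", gif_bytes[6:10])
    return width, height
```

The translator can then emit `<img src="..." width="..." height="...">` so the browser frames the thumbnail before the bytes arrive.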
What about the full-size images? The translator could have simply
linked the thumbnail’s <img src>
tag
to the full image. But that would have presented the image devoid of
context. Instead the translator fabricated a new container page for
each full-size image. There, along with the image, it recapitulated
the standard top and bottom toolbars, the tree-navigation widget (one
level deeper, in this case), and the title and caption of the image.
Then it linked the thumbnail to this container page.
These were small details. But they improved the layered presentation of the site and sped up page delivery. As a result, more users engaged with the site’s content. That meant all the groupware bindings built into the site worked a little more effectively than they otherwise would have.
Email addresses in the
<bio>
region were made into active
mailto: links. Back then, the parameterized
mailto: trick I mentioned earlier wasn’t
widely supported, so the translator didn’t add any extra
context to the messages originating from these links.
There was another way to deliver intelligent feedback, though. The Comment button was instrumented with contextual information that was sent to a comment-form generator.
It seems obvious that every node of a hierarchical docbase should provide a link to its parent. And yet for the longest time, I neglected to do that. Why? I figured that people could rely on the browser’s Go Back button. Having climbed down the tree, they could climb back up. But that’s a flawed assumption. Tree traversal is only one way people reach the pages of a docbase and, in many cases, not the dominant way. Referrals from local or remote search engines, or from other people, bring users directly into the middle of your docbase.
In those cases, Go Back takes people right back to where they came from. You, however, probably want to encourage them to stay and look around. That means your docbase has to be richly interconnected.
The Up link is one of the pathways that can turn a random visit into a memorable session. You should exploit the icon’s alt text to maximum advantage here, as everywhere. Initially we used the stock phrase “Up Level.” But the translator knew more about the parent of a particular page than that. It knew that a parent was “Aug 1998 Web Project Table of Contents” or “Aug 1998 Table of Contents”—so we added these richer descriptions to the alt attribute.
In this docbase, each article comprises one main story and zero or more subarticles, or sidebars. These elements are presented as a series connected with Prev and Next buttons. The use of these buttons is context-sensitive. The example in Figure 5.1 is the first in a series of two—that is, it’s a main story that has one sidebar. So the translator includes the Next link and omits the Prev link. When it generates the sidebar, it includes a Prev link and omits the Next link. For middle elements in a series longer than two, the translator includes both Prev and Next links.
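The context-sensitivity reduces to two comparisons. A sketch in Python (invented function name):

```python
def series_nav(position, length):
    """Decide which of the Prev/Next links to emit for an element
    at a given zero-based position in a story-plus-sidebars series."""
    links = []
    if position > 0:
        links.append("Prev")      # not the main story: can go back
    if position < length - 1:
        links.append("Next")      # not the last sidebar: can go on
    return links
```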
Chapter 7, shows how to build this kind of navigational machinery into a generated docbase. It’s another detail, but the cumulative effect of these kinds of details is to engage more users more deeply with a site’s content. That makes all the site’s groupware bindings more effective.
The address behind this link looked something like this:
/comment.pl?issue=1998-08&section=Web%20Project&title=Distributed%20HTTP
Given these parameters, the comment script could tailor a feedback form just for this article. That form, in turn, embedded the parameters in hidden fields so that the next script, which handled the form, could assign the comments about this article to appropriate slots in a data store.
The handler also did just what I suggested our hypothetical online
bread-machine manual ought to do. It routed feedback (as email) to
groups of editors, based on attributes of the article from which a
feedback form was generated. And it did so indirectly, mapping
magazine sections (e.g., Reviews) to roles (e.g., review-editors) expressed as lists of email addresses.
[6] I’d also have written a Document Type Definition (DTD) to formally describe the repository format and used a validating parser, which XML::Parser isn’t, to ensure conformance with the DTD. We’ll see an example of this in Chapter 9.
[7] Alert readers may wonder about the source of the issue date, which doesn’t appear in the markup shown in Example 5.4. The name of the markup-language file encoded that information.