Chapter 15. A World of Uses: Commercial Uses of RDF/XML

While so much of RDF’s early focus has been on the Semantic Web, it’s important to note that there are companies that are utilizing RDF and focusing their products on immediate real-world uses. The mark of a technology entering maturity is not the number of technologies it’s implemented in, but the number of viable applications that use it. It wasn’t until XML started getting wider use within the business community that it became less of a technology for the lab and more of a technology for the office. This same principle holds true for RDF.

Just as happened with XML, and even HTML, it isn’t until people see a technology being used for “practical” applications that business starts to become more comfortable in its use. Without business acceptance, developers are hesitant to work with a technology that may not have a payback in terms of job potential. Without mainstream developers supporting the use of, and finding uses for, RDF/XML, its acceptance is going to be limited. Luckily, though, I found several commercial uses of RDF and RDF/XML, in applications ranging from intelligence-community use to more efficient site navigation to alternative database structures and personal information management.

This chapter takes a look at some of the planned and existing commercial applications I found. These include a personal information manager currently in design (OSAF’s Chandler), an application server (Intellidimension’s RDF Gateway), and Adobe’s use of RDF/XML in existing products. In addition, we’ll look at Siderean Software’s Seamark server for site navigation and Plugged In Software’s Tucana Knowledge Store for sophisticated searches.

The chapter is by no means an exhaustive summary of the existing potential and commercial uses of RDF; it is, I hope, a representative view of the different uses of RDF within the business community.

Chandler: RDF Within an Open Source PIM

Chandler is a product resulting from an unusual project, managed through the Open Source Applications Foundation (OSAF), founded by Mitch Kapor. If that name doesn’t ring any bells, Mitch founded Lotus Development Corporation and created Lotus 1-2-3.

Tip

As of this writing, Chandler is in the design stage, a process that’s quite open. For more information, access the OSAF main web page at http://osafoundation.org. A Wiki has been set up to handle external contributions to the design at http://wiki.osafoundation.org/bin/view/Main/WikiHome.

Chandler is a personal information management (PIM) application, being designed in the open, based on open source technologies and specifications including RDF and RDF/XML.

OSAF incorporates RDF into Chandler’s architecture in two places. The first is an import/export mechanism allowing import and export of data from Chandler as RDF/XML. The hope is that this functionality allows Chandler to incorporate data from other sources more easily; data sources such as FOAF, described in Chapter 14, would be a natural candidate for information in a PIM.

In addition, according to the technology overview of the product, Chandler’s own data model will support RDF Schema semantics, which means that the data conforms to the semantics defined by the RDF specification, including the very basic concept of the triple. According to the Chandler Architecture document, OSAF decided on RDF:

...because of its ability to describe data in a very flexible format and exchange semantic information between applications in a standard format without loss. Because RDF is a World Wide Web Consortium standard, we hope to gain benefit from the existing tools, validators and applications that have been developed or will be developed.

The group is also focusing on a Python object-based data store based on ZODB, the Zope database (at http://www.zope.org/Products/StandaloneZODB). Rather than the more traditional approach of storing RDF in the form of triples, the Chandler development team is pursuing a possible mapping between RDF and the object classes supported in ZODB. There originally was an interest in a mapping between Zope and RDF (at http://www.zope.org/Resources/Mozilla/Projects/RDFSupport/), but there hasn’t been any activity on this for several months. Whether Chandler will rekindle activity in a mapping between Zope, ZODB, and RDF should become more apparent as progress on Chandler continues.
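How such a triple-to-object mapping might look is still an open design question for Chandler. As a purely hypothetical sketch of the idea (Chandler is written in Python, but the class and method names here are mine, not Chandler’s), triples for a subject can be collected into a dictionary that a persistent object class could then wrap:

```python
# Hypothetical sketch of mapping RDF triples onto Python objects, in the
# spirit of the triple-to-ZODB-class mapping Chandler is exploring.
# TripleStore and to_object are illustrative names, not Chandler APIs.

class TripleStore:
    def __init__(self):
        self.triples = set()           # (subject, predicate, object) tuples

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def to_object(self, subject):
        """Collect every (predicate, object) pair for a subject into a
        dict, which a persistent object class could then wrap."""
        obj = {}
        for s, p, o in self.triples:
            if s == subject:
                obj.setdefault(p, []).append(o)
        return obj

store = TripleStore()
store.add("mailto:jane@example.com", "foaf:name", "Jane")
store.add("mailto:jane@example.com", "foaf:knows", "mailto:joe@example.com")
person = store.to_object("mailto:jane@example.com")
print(person["foaf:name"])    # ['Jane']
```

The appeal of this style of mapping is that the statement-oriented RDF model and the attribute-oriented object model stay interchangeable: the dictionary keys are predicates, and multivalued predicates fall out naturally as lists.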

At this time, Chandler is in the planning/design/early implementation stages. To follow the progress of this product, you can subscribe to or view the archives of the OSAF mailing lists at http://osafoundation.org/mailing_lists.htm, in addition to accessing the main web site and the Wiki.

RDF Gateway, a Commercial RDF Database

RDF Gateway is a database and integrated web server, utilizing RDF, and built from the ground up rather than on top of either an existing web server (such as Apache) or database (such as SQL Server or MySQL). At this time, it works only within the Windows environment, specifically Windows NT 4.0, 2000, or XP. The installation is extremely easy; I was able to download, install, and run the application in less than five minutes.

Tip

Download an evaluation version of RDF Gateway at http://www.intellidimension.com. This chapter was written with the beta version of the product, but Version 1.0 was released as this book went into production.

RDF Gateway is an application server providing web page template support similar to ASP or JSP. This includes data storage, support for a scriptlike data query language, and web services. Aside from the use of RDF, all aspects of the tool are proprietary, though many are based on current open source efforts, including the RDF parser associated with Redland (discussed in Chapter 11).

Once installed, an RDF Gateway icon is added to the system tray. Right-clicking on this opens a menu that can be used to start or stop the server or to open a properties window with information about the Gateway, such as port, database location, and so on. The properties page is informational only—unless there’s a problem with the server, these settings shouldn’t need to be changed.

The Gateway can be managed through an online interface, where you can do things such as add or remove users from access to the repository, as shown in Figure 15-1.

Figure 15-1. Adding a new user for RDF Gateway

You can also view the data tables used for the RDF Gateway repository or add COM objects, web services, packages, and so on. These externally created extensions to the Gateway can then be accessed through the scripting language supported by the product: RDFQL, an ECMAScript-based scripting language. RDFQL is used within RDF Server Pages (RSP) similarly to how Java is used in JSP and VBScript in ASP. Like these other embedded scripting approaches, RDFQL supports several built-in and global objects to facilitate application development. Among the objects supported with the released version of RDF Gateway are:

DataSource

Provides access to RDF statements stored in an external file or within the database

Request

Contains HTTP request information, including environment variables

Response

Used to return response data to the client

Security

Access to RDF Gateway security features

Server

Access to server features

Session

Created for every session and used primarily for setting session variables

RDFNode

Provides access to a specific piece of information from an RDF data source

Package

Access to an RDF Gateway package

There are other objects such as strings, enumerators, and so on, but this listing gives you an idea of the built-in capability associated with RDFQL. Example 15-1 is a simple RSP that does nothing more than read an external RDF/XML page into a DataSource object and then use that object’s formatting capability to print the RDF/XML out to the page.

Example 15-1. Reading in and writing out a remote RDF/XML document
<%
// Create an in-memory data source
// connect to remote RDF/XML document using the Inet data service
var monsters = new DataSource("inet?url=http://burningbird.net/articles/monsters1.rdf&parsetype=rdf");

//set the content type 
Response.ContentType = "text/xml"; 

//use the Format command on the datasource to generate an rdf+xml representation of the 
//contents of the datasource 
Response.Write(monsters.Format('application/rdf+xml')); 

%>

As you can see from the example, scripting blocks are separated from the rest of the page with angle bracket/percent sign enclosures.

RDF Gateway can be extended through the use of COM/COM+ objects, as well as through Gateway packages, which are distinct applications or libraries of functions that can be used in any of the Gateway-managed pages. In addition, the underlying data repository for RDF Gateway can be accessed directly through JDBC from within Java applications and through ADO if you’re programming Windows-based applications. RDFCLI, a Win32 library, also provides the fastest and most direct access to the RDF Gateway services.

At first glance RDF Gateway appears similar to IIS/COM+ and other application/web servers of similar make, until you take a closer look at the data queries. This is where the product’s RDF roots shine through.

I pulled an example of how data manipulation can work with RDF Gateway from the help files included with the application. Example 15-2 shows how to create and insert RDF statements into an in-memory data source and then how to print select predicate values out.

Example 15-2. Creating and then querying RDF data within memory datastore
foaf = new DataSource(  );

INSERT 
    {[http://www.w3.org/1999/02/22-rdf-syntax-ns#type] 
     [mailto:[email protected]] 
     [http://xmlns.com/foaf/0.1/Person]}
     
    {[http://xmlns.com/foaf/0.1/firstName] 
     [mailto:[email protected]] 
     "Derrish"}
     
    {[http://xmlns.com/foaf/0.1/knows] 
     [mailto:[email protected]] 
     [mailto:[email protected]]}
         
    {[http://www.w3.org/1999/02/22-rdf-syntax-ns#type] 
     [mailto:[email protected]] 
     [http://xmlns.com/foaf/0.1/Person]}

    {[http://xmlns.com/foaf/0.1/firstName] 
     [mailto:[email protected]] 
     "Geoff"}

    {[http://xmlns.com/foaf/0.1/knows] 
     [mailto:[email protected]]
     [mailto:[email protected]]}
     
    INTO #foaf;

var ary = foaf.getObjects(  );

for (var i = 0; i < ary.length; i++)
{
    dumpPerson(ary[i]);
}

function dumpPerson(node)
{       
    var s = node["http://xmlns.com/foaf/0.1/firstName"];
    
    var ary = node["http://xmlns.com/foaf/0.1/knows"]; 
    
    if (ary != null)
    {   
        s += " -> ";
    
        for (var i = 0; i < ary.length; i++)
        {
            if (i > 0)
                s += ", ";
                
            s += ary[i]["http://xmlns.com/foaf/0.1/firstName"];
        }
    }
    
    Response.Write(s);
}

After exposure to RDQL in Chapter 10, the insert statements based on RDF triples in the first part of the code should look relatively familiar. Once the data is added to the store, the second part of the example accesses the firstName property for each FOAF resource, as well as for all resources matching the knows predicate, resulting in the output:

Derrish -> Geoff
Geoff -> Derrish
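RDFQL itself is proprietary to RDF Gateway, but the logic underneath is plain triple traversal. The same data and traversal can be sketched in ordinary Python (this is illustrative only, not RDF Gateway code; the triples are stored as simple tuples in the same predicate-subject-object order RDFQL uses):

```python
# The same FOAF data and traversal as Example 15-2, expressed as plain
# Python over a list of (predicate, subject, object) tuples -- a sketch
# of the logic only, not RDF Gateway code.

FIRST = "http://xmlns.com/foaf/0.1/firstName"
KNOWS = "http://xmlns.com/foaf/0.1/knows"

triples = [
    (FIRST, "mailto:derrish@raincoatsystems.com", "Derrish"),
    (KNOWS, "mailto:derrish@raincoatsystems.com", "mailto:geoff@mail.architag.com"),
    (FIRST, "mailto:geoff@mail.architag.com", "Geoff"),
    (KNOWS, "mailto:geoff@mail.architag.com", "mailto:derrish@raincoatsystems.com"),
]

def values(pred, subj):
    """Return all object values for a given predicate and subject."""
    return [o for p, s, o in triples if p == pred and s == subj]

lines = []
for subj in ["mailto:derrish@raincoatsystems.com",
             "mailto:geoff@mail.architag.com"]:
    s = values(FIRST, subj)[0]
    known = values(KNOWS, subj)
    if known:
        s += " -> " + ", ".join(values(FIRST, k)[0] for k in known)
    lines.append(s)

print("\n".join(lines))
# prints:
# Derrish -> Geoff
# Geoff -> Derrish
```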

RDF Gateway also provides the ability to query against multiple datastores, merging the results as appropriate. For instance, you can access data from three different data sources with a query such as the following:

select ?p ?s ?o using #ds1, #ds2, #ds3 where {?p ?s ?o};

RDF Gateway also includes strong inferential support through two types of rules: statement and function. These allow incorporation of process logic within the semantics of the more traditional query. Again, using examples from the help file for the Gateway product, a statement rule would be like the following:

INFER {[acme:size] ?product_id "big"} FROM 
{[itd:size] ?product_id "large"} OR {[itd:size] ?product_id "x-large"};

That’s a lot of strange syntax, but what this statement is really saying is that there is a rule, {[acme:size] ?product_id "big"}, that is true if the body, {[itd:size] ?product_id "large"} OR {[itd:size] ?product_id "x-large"}, evaluates to true, and which can then be used within an RDFQL query as follows:

SELECT ?acme_size FROM inventory WHERE {[acme:size] ?product_id ?acme_size};

The use of an inferential rule allows you to map one type of schema on to another and to then use these within the queries.
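The effect of such a statement rule can also be sketched in ordinary procedural code. The following Python fragment (hypothetical data and names; it illustrates the mapping only and is not RDF Gateway’s inference engine) derives the acme:size facts from the itd:size facts:

```python
# Sketch of what the statement rule does: any product whose itd:size is
# "large" or "x-large" is inferred to also have acme:size "big".
# Hypothetical Python over (predicate, subject, object) tuples.

inventory = [
    ("itd:size", "prod1", "large"),
    ("itd:size", "prod2", "x-large"),
    ("itd:size", "prod3", "small"),
]

def infer_acme_size(triples):
    """Return the input triples plus any acme:size facts the rule derives."""
    inferred = []
    for p, s, o in triples:
        if p == "itd:size" and o in ("large", "x-large"):
            inferred.append(("acme:size", s, "big"))
    return triples + inferred

facts = infer_acme_size(inventory)
big = [s for p, s, o in facts if p == "acme:size" and o == "big"]
print(big)   # ['prod1', 'prod2']
```

A query against acme:size then sees the inferred statements as if they had been asserted directly, which is exactly what makes schema-to-schema mapping with rules so convenient.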

How rules work becomes even more apparent when one looks at a function rule, such as the following:

INFER GetLeadTime(?product_id, ?lead_time) FROM
	{[itd:assembly_time] ?product_id ?assembly_time} AND
	SWITCH
	(
	case {[itd:component] ?product_id ?component_id}:
		GetLeadTime(?component_id, ?lead_time_comp) 
			AND ?lead_time = ADD(?lead_time_comp, ?assembly_time)
	default:
		?lead_time = ?assembly_time
	)

Using this function rule within a query, such as the following, returns the lead time for large products. However, within the rule itself, the actual lead time is accumulated from summing all lead times for the individual components that make up the part:

SELECT ?lead_time USING inventory WHERE
	{[itd:size] ?product_id "large"} AND  GetLeadTime(?product_id, ?lead_time);
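The recursion inside GetLeadTime is easier to see in ordinary procedural code. This Python sketch (hypothetical product data and function names, not RDFQL) computes a lead time by summing assembly times down a component tree, just as the function rule accumulates ?lead_time through its recursive case:

```python
# Sketch of the GetLeadTime function rule: a product's lead time is its
# own assembly time plus the lead times of all its components, computed
# recursively. Hypothetical data; not RDFQL or RDF Gateway code.

assembly_time = {"widget": 5, "gear": 2, "axle": 3}
components = {"widget": ["gear", "axle"]}   # widget is built from gear + axle

def get_lead_time(product_id):
    """Assembly time of the product plus lead times of its components."""
    total = assembly_time[product_id]
    for comp in components.get(product_id, []):
        total += get_lead_time(comp)
    return total

print(get_lead_time("widget"))   # 5 + 2 + 3 = 10
```

The default case of the SWITCH in the rule corresponds to the leaf products here (gear, axle), whose lead time is simply their own assembly time.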

Learning to work with the inferential engine of RDF Gateway isn’t trivial, but the potential of encapsulating complex logic into a form that can be used and reused within queries has considerable appeal. To enable this encapsulation, RDF Gateway provides support for a rulebase, a set of RDFQL rules that can be included within a query. Redefining the function statement into a rulebase would be as follows:

rulebase app_rules
{
	// ITD size to Acme size mapping rule
	INFER {[acme:size] ?product_id "big"} FROM 
	{[itd:size] ?product_id "large"} OR {[itd:size] ?product_id "x-large"};

	// Lead time function rule

	INFER GetLeadTime(?product_id, ?lead_time) FROM
	{[itd:assembly_time] ?product_id ?assembly_time} AND
	SWITCH
	{
	case {[itd:component] ?product_id ?component_id}:
		GetLeadTime(?component_id, ?lead_time_comp) AND ?lead_time = ADD(?lead_time_comp, ?assembly_time)
	default:
		?lead_time = ?assembly_time
	};
};

The rulebase would then be used in a query in the following manner:

SELECT ?product_id USING inventory RULEBASE app_rules WHERE {[acme:size] ?product_id "big"};

It is this inferential engine support, in addition to the RDF/XML base, that makes RDF Gateway unique among application servers.

Siderean Software’s Seamark

Siderean Software’s Seamark is a sophisticated application providing resources for intelligent site querying and navigation. The company makes use of a faceted metadata search and classification scheme for describing page characteristics. It is intended for larger, commercial applications and web sites, providing the infrastructure necessary for this type of search capability.

Tip

Siderean Software’s web site is at http://siderean.com. I was given access to a beta version of the software for the Windows environment at the time of this writing.

By faceted metadata, Siderean is talking about defined properties or characteristics of objects. Seamark allows searching on variations of this type of data. Once the Seamark repository is installed, it’s quite simple to load data into it from external RDF/XML files. The data in these files is then combined with existing data in the Seamark database. There is no specialized Seamark RDF Schema, which means the RDF/XML can be from any vocabulary.
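Reduced to its essentials, a faceted search is a matter of counting property values across a result set and then filtering by a chosen value. The following minimal sketch (hypothetical data in plain Python; Seamark’s actual implementation is proprietary and far more capable) shows the core idea:

```python
from collections import Counter

# Minimal sketch of faceted metadata: count the distinct values of each
# property ("facet") across a set of resources, then filter by a chosen
# facet value. Hypothetical resource data; not Seamark code.

resources = [
    {"uri": "a.htm", "type": "article", "topic": "squid"},
    {"uri": "b.htm", "type": "article", "topic": "nessie"},
    {"uri": "c.htm", "type": "photo",   "topic": "squid"},
]

def facet_counts(items, facet):
    """How many resources carry each value of the given facet."""
    return Counter(r[facet] for r in items if facet in r)

def filter_by(items, facet, value):
    """URIs of resources whose facet has the given value."""
    return [r["uri"] for r in items if r.get(facet) == value]

print(facet_counts(resources, "topic"))          # squid appears twice, nessie once
print(filter_by(resources, "type", "article"))   # ['a.htm', 'b.htm']
```

The counts drive the navigation display (“squid (2), nessie (1)”), and selecting a value narrows the result set, which is the interaction pattern faceted navigation is built around.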

Aside from the repository, Seamark’s second main component is what Siderean calls search models. Once these models are defined, they can then be incorporated into the navigation and search functionality of the applications based on Seamark. The query language used to define the search models is based on XRBR (XML Retrieval by Reformulation), a query format proprietary to Siderean. Once a search is defined, Seamark can generate a customizable ASP or JSP page that incorporates the search and to which you can add custom code as needed. Additionally, you can access the Seamark services through the Seamark API, a SOAP-based protocol.

The user interface for Seamark is quite simple, consisting of a main model/RDF document page, with peripheral pages to manage the application data. Once the application is installed, the first steps to take after starting the application are to create a model and then load one or more RDF/XML documents. Figure 15-2 shows the page form used to identify an internal RDF/XML document. Among the parameters specified is whether to load the document on a timed schedule, or manually, in addition to the URL of the file and the base URL used within the document. The page also provides space for an XSL stylesheet to transform non-RDF XML to RDF/XML.

Figure 15-2. Adding a URL for an external RDF/XML data source

Once the external feed is defined, the data can then be loaded manually or allowed to load according to the schedule you defined.

After data is loaded into the Seamark repository, you can then create the search queries to access it. In the query page, Seamark lists out the RDFS classes within the document; you can pick among these and have the tool create the query for you or manually create the query.

For instance, the example RDF/XML used throughout the book, http://burningbird.net/articles/monsters1.rdf, has three separate classes:

pstcn:Resource

Main object and any related resources

pstcn:Movement

Resource movements

rdf:Seq

The RDF sequence used to coordinate resource history

For my first query, I selected the Resource object, and had Seamark generate the query, as shown in Figure 15-3.

Figure 15-3. An automatically generated query in Seamark

As you can see from the figure, XRBR isn’t a trivial query language, though a little practice helps you work through the verbosity of the query. Once the initial XRBR is generated, you can customize the query, save it, execute it, or generate ASP or JSP to manage the query—or any combination of these options. Executing the query returns XRBR-formatted data, consisting of data and characteristics, or facets for all the Resource classes in the document. At this point, you can again customize the query or generate an ASP or JSP page.

When you add new RDF/XML documents to the repository, this new data is incorporated into the system, and running the query again queries the new data as well as the old. Figure 15-4 shows the page for the model with two loaded RDF/XML documents and one query defined.

Figure 15-4. PostCon Seamark model with two data sources and one query

Seamark comes with a default application called bookdemo that can be used as a prototype as well as a training tool. In addition, the application is easily installed and configured and comes with considerable documentation, most in PDF format. What I was most impressed with, though, was how quickly and easily it integrated my RDF/XML data from the PostCon application into a sophisticated query engine with little or no effort. Few things prove the usefulness of a well-defined metadata structure faster than commercial viability.

Plugged In Software’s Tucana Knowledge Store

Plugged In Software’s Tucana Knowledge Store (TKS) enables storage and retrieval of data and is designed to scale efficiently to larger datastores. Scalability is built in because distributed data sources are an inherent part of the architecture, as shown in the diagram in Figure 15-5.

Tip

You can download an evaluation copy of Tucana Knowledge Store at http://www.pisoftware.com/index.html. In addition, if you intend to use the application for academic purposes, you can download and use an academic copy of the application for free.

Figure 15-5. Demonstration of TKS distributed nature

In situations with large amounts of potentially complex data, this distributed data repository may be the only effective approach to finding specific types of data. Because of the nature of its architecture, TKS has found a home in both the intelligence and defense communities.

TKS pairs large-scale data storage and querying with a surprisingly simple interface. For instance, the query language, iTQL, can be accessed at the command line by typing the following command:

java -jar itql-1.0.jar

This command opens an iTQL shell session. Once in, just type in the commands necessary. I found TKS to be as intuitively easy to use as it was to install. I followed the tutorial included with TKS, except using my example RDF/XML document, http://burningbird.net/articles/monsters1.rdf, as the data source. First, I created a model within TKS to hold the data:

iTQL> create <rmi://localhost/server1#postcon>;
Successfully created model rmi://localhost/server1#postcon

Next, I loaded the data from the external document:

iTQL> load <http://burningbird.net/articles/monsters1.rdf> into <rmi://localhost/server1#postcon>;
Successfully loaded 58 statements from http://burningbird.net/articles/monsters1.rdf into rmi://localhost/server1#postcon

After the data was loaded, I queried the two “columns” in the data—the predicate and the object—for the main resource, http://burningbird.net/articles/monsters1.htm:

iTQL> select $obj $pred from <rmi://localhost/server1#postcon> where <pstcn:release> $pred $obj;
0 columns: (0 rows)
iTQL> select $obj $pred from <rmi://localhost/server1#postcon> where <http://burningbird.net/articles/monsters1.htm> $pred $obj;
2 columns: obj pred (8 rows)
        obj=http://burningbird.net/articles/monsters2.htm pred=http://burningbird.net/postcon/elements/1.0/related
        obj=http://burningbird.net/articles/monsters3.htm pred=http://burningbird.net/postcon/elements/1.0/related
        obj=http://burningbird.net/articles/monsters4.htm pred=http://burningbird.net/postcon/elements/1.0/related
        obj=http://burningbird.net/postcon/elements/1.0/Resource pred=http://www.w3.org/1999/02/22-rdf-syntax-ns#type
        obj=rmi://flame/server1#node123 pred=http://burningbird.net/postcon/elements/1.0/bio
        obj=rmi://flame/server1#node134 pred=http://burningbird.net/postcon/elements/1.0/relevancy
        obj=rmi://flame/server1#node147 pred=http://burningbird.net/postcon/elements/1.0/presentation
        obj=rmi://flame/server1#node164 pred=http://burningbird.net/postcon/elements/1.0/history

The blank nodes are identified with TKS’s own method of generating bnode identifiers, in this case a concatenation of a local server name and a specific node identifier. As you can see from this example, the TKS query language, iTQL, is very similar to what we’ve seen with RDQL and other RDF-based query languages.

In addition to the command-line shell, there’s also a web-based version that might be easier to use, especially when you’re new. However, the basic functionality is the same.

The real power of TKS lies in accessing the services that the TKS server provides from within your own applications. For this, TKS comes with custom JSP tags for interoperating with the TKS server. In addition, you can access the services through COM objects within a Windows environment, through SOAP, through a specialized JavaBean, and through two drivers: a JDBC driver and a native TKS driver. This makes the query capability of TKS available in all popular development environments, as shown in Figure 15-6.

Figure 15-6. Client/Server architecture supported by TKS

Bottom line: the power of TKS is just that—power. By combining a simple and intuitive interface with an architecture that’s built from the ground up for large-scale data queries, the application is meant to get you up and running quickly.

RDF and Adobe: XMP

Rather than integrate RDF into the architecture of a tool from the ground up, as occurred with the previous applications discussed in this chapter, other companies are incorporating RDF and RDF/XML into their existing applications. Adobe, a major player in the publications and graphics business, is one such company. Its RDF/XML strategy is known as XMP—eXtensible Metadata Platform. According to the Adobe XMP web site, other major players have agreed to support the XMP framework, including companies such as Microsoft.

XMP focuses on providing a metadata label that can be embedded directly into applications, files, and databases, including binary data, using what Adobe calls XMP packets—XML fragments that can be embedded regardless of recipient format. Regardless of where the material is moved or located, the data contained in the embedded material moves with it and can be accessed by external tools using the XMP Toolkit. Adobe has added support for XMP to Photoshop 7.0, Acrobat 5.0, FrameMaker 7.0, GoLive 6.0, InCopy 2.0, InDesign 2.0, Illustrator 10, and LiveMotion 2.0.
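Because an XMP packet is bracketed by xpacket processing instructions, it can be located by a byte-level scan regardless of the host format. The following is a simplified sketch of that idea in Python (the real XMP Toolkit does considerably more, such as handling encodings and in-place updates; only the xpacket delimiters and the packet GUID here come from the XMP format itself):

```python
import re

# Sketch of locating an embedded XMP packet: the packet is an XML fragment
# bracketed by <?xpacket begin ... ?> and <?xpacket end ... ?> processing
# instructions, so a byte-level scan works regardless of the host format.
# Simplified illustration; not the XMP Toolkit.

XPACKET = re.compile(rb"<\?xpacket begin=.*?\?>(.*?)<\?xpacket end=.*?\?>",
                     re.DOTALL)

def extract_xmp(data: bytes):
    """Return the bytes between the xpacket delimiters, or None."""
    m = XPACKET.search(data)
    return m.group(1) if m else None

# A tiny synthetic "file" with a packet embedded in binary noise:
blob = (b"\x00\x01binary-noise"
        b"<?xpacket begin='' id='W5M0MpCehiHzreSzNTczkc9d'?>"
        b"<rdf:RDF>...</rdf:RDF>"
        b"<?xpacket end='w'?>"
        b"more-noise\xff")

print(extract_xmp(blob))   # b'<rdf:RDF>...</rdf:RDF>'
```

This is why XMP can survive being carried inside a JPEG, a PDF, or a raw binary stream: a consumer doesn’t need to understand the host format at all to recover the metadata.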

The information included within the embedded labels can be from any schema as long as it’s recorded in valid RDF/XML. The XMP source code is freely available for download, use, and modification under an open source license.

Unlike so much of the RDF/XML technology, which emphasizes Java or Python, the XMP Toolkit provides support only for C++. Specifically, the toolkit works with Microsoft’s Visual C++ in Windows (or a compatible compiler) and Metrowerks CodeWarrior C++ for the Mac.

Within the SDK is a subdirectory of C++ code that allows you to read and write XMP metadata. Included in the SDK is a good set of documentation that provides samples and instructions on embedding XMP metadata into TIFF, HTML, JPEG, PNG, PDF, SVG/XML, Illustrator (.ai), Photoshop (.psd), and PostScript and EPS formats.

Tip

The SDK is a bit out of date in regard to recent activities with RDF and RDF/XML. For instance, when discussing embedding RDF/XML in HTML documents, it references a W3C note that was favorable to the idea of embedding RDF/XML in HTML. However, as you read in Chapter 3, recent decisions discourage the embedding of metadata into (X)HTML documents, though it isn’t expressly forbidden.

The SDK contains some documentation, but be forewarned: it assumes significant experience with the different data formats, as well as experience working with C++. The document of most interest is the Metadata Framework PDF file, specifically the section discussing how XMP works with RDF, as well as the section on extending XMP with external RDF/XML schemas. This involves nothing more than defining data in valid RDF and using a namespace for data not from the core schemas used by XMP. The section titled “XMP Schemas” lists all elements of XMP’s built-in schemas.

The SDK also includes C++ and the necessary support files for the Metadata Library, as well as some other utilities and samples. I dusted off my rarely used Visual C++ 6.0 to access the project for the Metadata Toolkit, Windows, and was able to build the library without any problems just by accessing the project file, XAPToolkit.dsw. The other C++ applications also compiled cleanly as long as I remembered to add the paths for the included header files and libraries.

One of the samples included with the SDK is XAPDumper, an application that scans for embedded RDF/XML within an application or file and then prints it out. I compiled it and ran it against the SDKOverview.pdf document. An excerpt of the embedded data found in this file is:

<rdf:Description rdf:about=''
 xmlns:pdf='http://ns.adobe.com/pdf/1.3/'>
 <pdf:Producer>Acrobat Distiller 5.0.5 for Macintosh</pdf:Producer>
 <!--pdf:CreationDate is aliased-->
 <!--pdf:ModDate is aliased-->
 <!--pdf:Creator is aliased-->
 <!--pdf:Author is aliased-->
 <!--pdf:Title is aliased-->
</rdf:Description>

Embedding RDF/XML isn’t much different from attaching a bar code to physical objects. Both RDF and bar codes uniquely identify important information about the object in case it becomes separated from an initial package. In addition, within a publications environment, if all of the files are marked with this embedded RDF/XML information, automated processes could access this information and use it to determine how to connect the different files together, such as embedding a JPEG file into an HTML page and so on.

I can see the advantage of embedded RDF/XML for any source that’s loaded to the Web. Eventually, web bots could access and use this information to provide more intelligent information about the resources that they touch. Instead of a few keywords and a title as well as document type, these bots could provide an entire history of a document or picture, as well as every particular about it.

Other applications can also build in support for working with XMP. For instance, RDF Gateway, mentioned earlier, has the capability of reading in Adobe XMP. An example of how this application would access data from an Adobe PDF would be:

var monsters = new DataSource("inet?url=http://burningbird.net/articles/monsters3.pdf&parsetype=xmp");

An important consideration with these embedded techniques is that there is no adverse impact on the file, nothing that impacts the visibility of a JPEG or a PNG graphic or prevents an HTML file from loading into a browser. In fact, if you’ve read any PDF files from Adobe and other sites that use the newer Adobe products, you’ve probably been working with XMP documents containing embedded RDF/XML and didn’t even know it.

What’s It All Mean?

In my opinion, Adobe’s use of RDF/XML demonstrates how RDF/XML will be integrated in other applications and uses in the future—quietly, behind the scenes. Unlike XML with its public exposure, huge fanfare, and claims of human and machine compatibility and interoperability, RDF was never meant to be anything more than a behind-the-scenes metadata model and an associated serialization format. RDF records statements so that they can be discovered mechanically — nothing more, nothing less. However, this simple act creates a great many uses of RDF/XML because of the careful analysis and precision that went into building the specification upon which RDF resides and which RDF/XML transcribes.

RDF assures us that any data stored in RDF/XML format in one application can be incorporated with data stored in RDF/XML format in another application, and moving the data from one to the other occurs without loss of information or integrity. While sharing and transmitting, merging and coalescing the data, we can attach meaning to objects stored on the Web — meaning that can be accessed and understood by applications and automated agents and APIs such as those covered in this book.

As the use of RDF grows, the dissemination of RDF/XML data on the Web increases and the processing of this data is incorporated into existing applications, the days when I’ll search for information about the giant squid and receive information on how to cook giant squid steaks will fade into the past. I will be able to input parameters specific to my search about the giant squid into the computer and have it return exactly what I’m looking for, because the computer and I will have learned to understand each other.

This belief in the future of RDF and RDF/XML was somewhat borne out when I did a final search for information on the giant squid and its relation to the legends and to that other legendary creature, Nessie the Loch Ness Monster, as I was finishing this book. When I input the terms giant squid legends Nessie in Google, terms from my subject lists associated with the article that’s been used for most of the examples in this book, the PostCon RDF/XML file for my giant squid article was the first item Google returned.

It’s a start.
