Chapter 9. Web 2.0 Features

This chapter details the next generation of the Web, or at least the Web that is taking shape after the dot-com bubble burst. With higher Internet speeds, faster servers and better hardware at the client side, the trend is towards Web applications and away from the traditional delivery systems where the interaction between the client and the Web was minimal at best.

The chapter looks at Web 2.0 and how to deploy its features in your own Website creations. In addition, you’ll look at the technology that powers Web 2.0 and how it can be harnessed in enhancing your own Website.

The point of picking up Web 2.0 techniques is that there are many aspects that you can use to make your site more attractive to visitors, while also offering higher levels of functionality than ever before to keep your visitors coming back.

To date you have learned, in stages, everything that is necessary to start building Web applications—HTML, CSS, PHP, SQL, XML, and JavaScript. This chapter is where they all get pulled together and used, in conjunction with the delivery system that you have learned about in the content-management systems chapter.

The best thing about Web 2.0 is that it doesn’t require learning any new technologies, just new ways of combining existing ones. Given the level of standardization that now surrounds these open technologies, this looks like a trend that’s set to continue, so a good grounding in the ideas behind Web 2.0 will also future-proof Website designs to a certain extent.

What Is Web 2.0?

The key to understanding Web 2.0 is accepting that it is a combination of existing technologies that allow your visitors to interact, but also provides interaction between Web applications, both at the front-end and the back-end. It is a relatively new term, coined in 2004, that has yet to completely realize its potential.

Tim O’Reilly (http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-Web-20.html) coined the term and held the first Web 2.0 conference in 2004. O’Reilly had decided that it was more than a flash in the pan and worth exploring further in a formal conference.

However, despite the general acceptance, many people decry Web 2.0 as just another marketing tactic or a meaningless buzzword. The various phrases that get bandied about do sound rather like buzzwords, which can scare some people.

Behind the buzzwords are some very clever ideas about how people should be using the Web—not just to look up information, but to interact, share, and mash up different aspects and ideas to create Web applications. This makes the Web act more like a set of interconnected applications and services, and not just a place to look at static Web pages.

So Web 2.0 is the Web as a computing platform (see Google’s set of applications, with spreadsheet, word processor, and so on) as well as a way to distribute information. That aspect is still there, and still important, but the interactivity provided by new technologies and better computers has pushed new features to Web surfers.

Part of the importance of this new power also makes Web 2.0 about user-generated content and being able to publish and share information. From YouTube to podcasting, file sharing and collaborative editing, Web 2.0 connects Web users and connects Web services. The driving force behind this is arguably linked to two phenomena:

  • Wikipedia

  • Blogs

Both of these are about interaction—in the first instance, many authors collaborating towards the same goal, and in the second, one author providing her view to the world, with readers who can then provide feedback through the Website. It is the Web community coming together that provides another key aspect of Web 2.0, and it is facilitated by Web programming.

The new Web is also about outreach—how to touch people by marketing and advertising. AdSense, for example, is born of people’s newfound journalistic freedoms and the ability to provide advertising space alongside popular online information, be it in a blog or in an article repository.

More than just the ability to push the written word, Web 2.0 is also about interaction through multiple media, including podcasts, video (YouTube.com), and a variety of other ways in which people can project their personality onto the Web. Other aspects include consumers finding content and taking away an experience (sound, vision, written, and so on), as well as being able to contribute to it, in addition to the commercial aspect. Businesses, after all, need to be able to make money from Web 2.0 in order to continue offering Web 2.0 experiences to visitors.

I spent some time discussing what the term Web 2.0 means because it is useful to understand the terminology before looking at how it is all put together.

This section is about the technologies that are underneath, allowing the mashup (connecting together of technologies and/or services) phenomenon to help drive Web 2.0. It’s not just about mashup but also about other concepts, such as integrating your site with your Web services, as well as others’ Web services, and both principles are built on the same kind of platform.

Many of the ways in which the eventual mashups and APIs will be used rely on an underpinning of communication between the client and the server. This communication happens in one of two ways:

  • Synchronously

  • Asynchronously

Synchronous communication is like a plain text Web page. The client requests it, and the text comes back to the client, which then has to wait before displaying the result. Asynchronous communication is something different—the client no longer has to wait and can start to display parts of the page even while it is working behind the scenes to update other parts.
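The difference can be sketched with a toy JavaScript fragment (the function names and the simulated delay are illustrative, not part of any real API): the synchronous call blocks until its result is ready, while the asynchronous call registers a callback and lets the rest of the page carry on.

```javascript
// Synchronous: the caller waits for the result before moving on.
function fetchSync() {
  return "page body";
}

// Asynchronous: the caller hands over a callback and carries on;
// setTimeout stands in for the network round trip.
function fetchAsync(onDone) {
  setTimeout(function () {
    onDone("page body");
  }, 0);
}

var order = [];
order.push("request sent");
fetchAsync(function (body) {
  order.push("response handled: " + body);
});
order.push("rest of page rendered"); // runs before the response arrives
```

Because the callback fires only after the current script finishes, "rest of page rendered" is recorded before the response is handled, which is exactly the behavior AJAX pages rely on.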

More than that, if something changes on the page as a result of interaction with the users, the client can locate and deliver the new media without having to refresh the page. Part of delivering that aspect involves AJAX.

AJAX

AJAX stands for Asynchronous JavaScript And XML, a name that simply reiterates the core technologies providing the underlying base for the functionality.

Asynchronous means that there has to be some kind of trigger. The whole page loads and then data is populated behind the scenes. The trigger usually comes from some form of user action that causes more data to be loaded and displayed, without breaking pace with the display of the page to the end users.

The JavaScript is needed to intercept that action (as an event handler) and start communication with the server. Once the communication request is complete, the JavaScript comes in again to retrieve the result. Finally, having received the result, the JavaScript has to work within the confines of HTML and style sheets to assign the data to a visible element on the page.

You are, by now, familiar with some of these technologies. In technical terms, one way to render the information is to insert the formatted data into a named div using the getElementById part of the DOM interface provided by JavaScript.

Before you look at implementing AJAX mechanisms, there are a few things to watch out for that you’ll need to bear in mind when designing an AJAX site.

First, there is an issue with browser incompatibility. The discussion here shows how best to deal with this when it occurs. It stems from the fact that one of the key mechanisms behind AJAX, the xmlHttp object, began life as a Microsoft ActiveX control and was only later standardized, so Microsoft's browsers create the object differently than the other browser vendors' do.

The next issue is that JavaScript might not be active on the client’s system. There is nothing you can do about this beyond trying to anticipate problems and displaying appropriate HTML code. You saw how to do this in the JavaScript chapter.

Tip

Keep separate .js files for creating and manipulating the XML objects, and for the functions that process any results that come back from the XML querying process. This way, the programmer will not have to copy the same code over and over.

So, the requirements for being able to provide AJAX functionality are as follows:

  • HTML and JavaScript

  • Server side scripts (PHP in this case)

  • Database—This is optional, depending on what mechanism provides the data

XML fulfils two different roles in the AJAX paradigm. On the one hand, it is a communication conduit, and on the other it is a way to describe the data and the way that it should be rendered on the Web page.

A worthwhile side note for those who are using Microsoft solutions is that ASP.NET offers an AJAX framework that works within .NET scripts that run on the server side. This can also be integrated with the browser. This chapter discusses the more popular PHP/JavaScript XML solution, because it seems to be more widely adopted among Internet publishers.

The Role of XML in AJAX

The xmlHttp object provides the mechanism for asynchronous communication and is a standard that has been created to facilitate this communication. The standard is managed by a working group of the W3C.

XML provides both the request/response mechanism as well as a way to describe the data that’s retrieved. On the one hand, it can be used to retrieve a plain text (usually HTML) response for display using one variation:

xmlHttpRequest.responseText

On the other hand, it can be used to retrieve an XML document using another variation:

xmlHttpRequest.responseXML.getElementsByTagName('tag')

The only difference between these two is that the data retrieved by the JavaScript, through the xmlHttpRequest object, is unstructured in the first case and structured XML in the second.

As mentioned, there are two steps to using the xmlHttp object—creating the object and testing for compliance with the standards provided for its deployment.

Creating the xmlHttp Object

Different browsers use different kinds of xmlHttp objects. In ECMAScript-compliant browsers, the XMLHttpRequest object is part of the language. In other environments, it might be contained in an ActiveX object. This latter approach is the one typically used by Microsoft.

In order to make sure that you perform the right actions without causing the JavaScript to crash, you have to use something called the exception mechanism. Logically speaking, you try to perform a function and then you catch any error that it throws before JavaScript has a chance to report the error.

In JavaScript, these two actions are called the try and catch blocks. The exception is the thing that is caught. Bearing that in mind, here is the user-defined CreateXMLHttpObject function:

function CreateXMLHttpObject()
{
  var _xmlHttpObject = null;
  try {
    // ECMAScript-compliant object creation
    _xmlHttpObject = new XMLHttpRequest();
  }
  catch (e) {
    // e is the exception, thrown because
    // XMLHttpRequest does not exist.
    try { // try something else
      _xmlHttpObject = new ActiveXObject("Msxml2.XMLHTTP");
    }
    catch (e) {
      // The ActiveX approach is not working either,
      // so try the older version:
      try {
        _xmlHttpObject = new ActiveXObject("Microsoft.XMLHTTP");
      }
      catch (e) {
        _xmlHttpObject = null; // nothing worked
      }
    }
  }
  return _xmlHttpObject;
}

If the return value from CreateXMLHttpObject is still null, even the last attempt failed, and there is not a lot you can do about that, so you can just report it and stop the script. Apart from that, the rest of the code should be fairly self-explanatory.
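As a variation, the same fallback idea can be expressed as a loop over candidate constructors. This is an equivalent sketch, not the book's canonical version; the function name is made up for illustration, and it returns null when every attempt fails:

```javascript
// Try each way of creating the request object in turn; the first
// constructor that does not throw wins. On a system with none of
// them available, the function falls through to null.
function createXMLHttpObjectLoop() {
  var attempts = [
    function () { return new XMLHttpRequest(); },
    function () { return new ActiveXObject("Msxml2.XMLHTTP"); },
    function () { return new ActiveXObject("Microsoft.XMLHTTP"); }
  ];
  for (var i = 0; i < attempts.length; i++) {
    try {
      return attempts[i]();
    } catch (e) {
      // This flavor is not available; try the next one.
    }
  }
  return null;
}
```

The loop form makes it easy to add or reorder candidates without nesting another try/catch level.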

There are some things to remember. First, you need to test whether the global variable that is allocated to the return value of the CreateXMLHttpObject has been set. If it has not been set, you need to reinitialize the xmlHttpRequest object.

If the xmlHttp object variable is still null even after you run the script, you need to set the relevant piece of content on the page to a suitable message. If you do not do that, the visitors are left wondering why their pages do not look quite right.

This can be done using code such as the following:

if (g_xmlHttp == null) {
  document.getElementById("result_area").innerHTML =
     "<b>Your browser does not support xmlHttp requests.</b>";
}

There is a better alternative to this last fragment. You can provide, via an interactive form, an alternative way to get the information. This would move the model from an AJAX model to a synchronous HTTP request/response mechanism.

The way that the system detects that an action needs to take place is via the xmlHttp state change mechanism.

xmlHttp State Changes

The xmlHttpRequest object has a property that is referenced as follows:

xmlHttpRequest.onreadystatechange

This is a function that can be called whenever the state of the object changes so that it can do some work. A state change can be instigated when data is requested and received or when something else happens.

The function contained in the onreadystatechange property is triggered by a change in an associated property:

xmlHttpRequest.readyState

The states that you’re interested in, and the ones that can be values for the readyState property, are twofold:

  • 3—The request is in process

  • 4—The request is complete

When the readyState is equal to 4, the responseText and responseXML properties of the xmlHttpRequest object contain the appropriate response for the type of data (text or XML) that was returned by the server to the client.

However, before you will be ready to process that information, you first need to make a function to handle the state change. The generic skeleton for such a function might look like this:

function xmlHttpStateChange () {
  if (g_xmlHttp.readyState == 4) {
     // Do work here with the responseText or
     // responseXML property
  }
}
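To see this contract in action without a browser, here is a small self-contained simulation. MockXmlHttp is a made-up stand-in object, but the readyState and onreadystatechange names are the real ones, and the handler below follows the skeleton above:

```javascript
// A made-up stand-in for the request object, used only to show
// when the handler fires and what it sees at each state.
function MockXmlHttp() {
  this.readyState = 0;
  this.responseText = "";
  this.onreadystatechange = null;
}

// Walk through the states a real request passes through,
// delivering the body only at state 4.
MockXmlHttp.prototype.simulateResponse = function (text) {
  for (var state = 1; state <= 4; state++) {
    this.readyState = state;
    if (state === 4) {
      this.responseText = text;
    }
    if (this.onreadystatechange) {
      this.onreadystatechange();
    }
  }
};

var g_xmlHttp = new MockXmlHttp();
var result = null;
g_xmlHttp.onreadystatechange = function () {
  if (g_xmlHttp.readyState == 4) {
    result = g_xmlHttp.responseText; // do work here
  }
};
g_xmlHttp.simulateResponse("<b>Done</b>");
```

After simulateResponse runs, result holds the response body; the handler fired at states 1 through 3 as well, but only acted at state 4.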

The next step is to assign the function to the xmlHttpRequest object through the onreadystatechange property. You do it this way so that other functions can be assigned to the object depending on what you want to do with the response when it comes back. This is achieved with a simple line of code:

g_xmlHttp.onreadystatechange = xmlHttpStateChange;

Of course, you could make the entire solution more object oriented by creating a derived object to put it in, but there is no sense in working harder than necessary to create client side solutions.

Having set up the xmlHttpRequest object, and assigned a function to process the data that comes back in the response, you need to send a request so that you can obtain that response.

Sending the Request

This is a two-stage process:

  • Build the URL that locates the data

  • Perform the request

The request can be an HTTP GET or POST, depending on what the server script is expecting. These examples use GET because it’s far easier to explain and exposes the components that will be used as input. The URL is hidden behind the rest of the HTML via the JavaScript, so it is never exposed to the end users.

The URL is built in the usual fashion for this kind of request:

http://www.your_url.com/script.php?first_parameter=value&second_parameter=value
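When any of the parameter values come from user input, it is worth escaping them so that spaces and ampersands do not break the URL. Here is a sketch; buildQueryURL is a hypothetical helper, not part of any standard, but encodeURIComponent is the real JavaScript function for this job:

```javascript
// Hypothetical helper: assemble a GET query string from a base
// URL and a map of parameters, escaping names and values.
function buildQueryURL(base, params) {
  var parts = [];
  for (var name in params) {
    parts.push(encodeURIComponent(name) + "=" +
               encodeURIComponent(params[name]));
  }
  return base + "?" + parts.join("&");
}

var url = buildQueryURL("script.php",
  { first_parameter: "a b", second_parameter: "2" });
// url is "script.php?first_parameter=a%20b&second_parameter=2"
```

Note how the space in "a b" becomes %20, so the server-side script receives the value intact.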

The next step is to call the method that makes the actual request. Assuming that the previous URL is present in an appropriate variable, and that you have initialized a global variable g_xmlHttp using the appropriate object-creation function, the following can be used to open the URL:

g_xmlHttp.open("get", url, true);

The g_xmlHttp variable is an object that must be created as per the JavaScript/ECMAScript standard discussed in Chapter 5. It can be instantiated with code such as the following:

g_xmlHttp = new XMLHttpRequest();

Finally, you need to send null to the server because you’re not providing any additional data (that is, this is a GET and not a POST request, which would contain the form data). This is performed very simply:

g_xmlHttp.send(null);

The true parameter in the call to the open method means that the script will continue and not wait for the response. If you had used false, that would instruct the script to wait for the response. This changes the behavior of the local object, not the server, so passing false means that the page might not load properly, as it would become temporarily unresponsive.

Once the response comes back from the server, the state change property will be updated and the onreadystatechange function (referred to in the property) will be called.

Processing a Text Response

Due to the fact that you have requested that the data be returned asynchronously, at some point in time, the function assigned to the state change property will be called. This will occur in reaction to the state change of the g_xmlHttp object.

At the server side, the response can be either plain text or an XML formatted object. A PHP script will return, by default, the text/html content type, although you can change what's returned using the header function (to return plain text, for example):

header('Content-Type: text/plain');

When a text variant is provided in the response, the entire response will be accessed via the following object property:

g_xmlHttp.responseText

This property contains plain text, but that may contain fragments of HTML, which can subsequently be directly assigned to any named element through JavaScript, as noted. A div element, for example, could be set up that was previously empty and then have its .innerHTML property set to the text (with or without HTML formatting data) returned through the .responseText property.

Processing an XML Response

The xmlHttpRequest object also provides an interface that equates to a DOMDocument style object, which you first encountered in Chapter 5. This allows you to have standard access to a structured piece of data—in this case, an XML document.

This is brought into play when the header is returned from the PHP script running on the server, as follows:

header('Content-Type: text/xml');

At this point, you can retrieve the XML document from the following member property of the g_xmlHttp global that you set up in the first place:

xmlDocument = g_xmlHttp.responseXML;

From here, you can access the XML using the usual set of methods through the xmlDocument object, such as:

xmlDocument.getElementsByTagName('tag_name');

To do this, you need to know the structure of the document that has been returned. Otherwise, it becomes difficult to extract the appropriate data to place in the HTML document that is used as the interface with the end users (the visitors).

AJAX with PHP

The previous discussion described the client side processing, so now you need to look at what happens on the server side when the server receives the request. There is, in fact, nothing special about the HTTP request itself; it is identical to any other request for a resource that it might receive.

The PHP script will be called with the HTTP GET method, and so the query data, if any, will be in the appropriate superglobal variable. As you’ll remember from previous chapters, this is accessed as follows:

$_GET['parameter_name']

All that you need to do on the server side is perform any processing and return the result by using familiar PHP commands, such as echo to send back the data. This data can be returned as if it were a text or XML stream.

The best way to illustrate this is to go through a whole example by starting with a simple HTML document that provides the interaction among the user, the Web, and the server. You’ll need a text box, a button, and a div to store the result in.

The core of the HTML document can be written as follows:

<form name="table_layout">
Number of Columns : <input type="text" name="columns_entry"/>
<input type="button" value="Update" onClick="updateColumns();"/>
</form>
<div id="columns_area">
</div>

The important parts of this document are the button and the div. The updateColumns function attached to the onClick handler of the button has to start the AJAX operation by creating the query and submitting it.

The function can be created as follows:

function updateColumns() {
  if (g_xmlHttp == null) {
    g_xmlHttp = initXMLHttpObject();
  }
  if (g_xmlHttp == null) {
    // not much point continuing
    return;
  }
  var queryURL = "get_columns.php?";
  queryURL += "col=" + document.table_layout.columns_entry.value;
  g_xmlHttp.onreadystatechange = columnsChanged;
  g_xmlHttp.open("get", queryURL, true); // Do the AJAX
  g_xmlHttp.send(null); // No data to go
}

The first part of the code tests to see if a g_xmlHttp object exists, and if it does not, the code uses the assumed initXMLHttpObject function to create it. If everything is correctly instantiated, the queryURL is built, and the function columnsChanged is attached to the g_xmlHttp object.

The queryURL contains the name of the PHP script you are going to call and a data part that contains col= followed by the contents of the text control.

The updateColumns function then sends the request. When the data comes back, the columnsChanged function has to handle the state change and place the data in the div container of the HTML document.

This function simply sets the contents of the div container to the responseText of the g_xmlHttp object. The code might look like this:

function columnsChanged () {
  if (g_xmlHttp.readyState == 4) { // We have a response!
    document.getElementById("columns_area").innerHTML
                = g_xmlHttp.responseText;
  }
}

This is all well and good, but you also need to create a PHP script that is run on the server. It has to be able to extract the data fed in by the users and return appropriate text with HTML. The script could be as simple as follows:

<?php
$num_cols = (int) $_GET["col"]; // force a number for safety
echo "<table border='1'><tr>";
for ($col_num = 0; $col_num < $num_cols; $col_num++) {
  echo "<td>" . $col_num . "</td>"; // Basic work
}
echo "</tr></table>";
?>

The PHP script extracts the number of columns from the $_GET superglobal, and uses that to construct a table using HTML code, which is then sent back to the client as the response to the original request.
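For comparison, here is the same table-building loop restated as a standalone JavaScript function. This is purely so the logic can be traced outside a server; in the real example the loop lives in get_columns.php, and buildColumnsTable is a name invented here:

```javascript
// Mirror of the PHP loop: build a one-row table containing the
// requested number of numbered cells.
function buildColumnsTable(numCols) {
  var html = "<table border='1'><tr>";
  for (var col = 0; col < numCols; col++) {
    html += "<td>" + col + "</td>"; // Basic work
  }
  return html + "</tr></table>";
}

// buildColumnsTable(2) produces
// "<table border='1'><tr><td>0</td><td>1</td></tr></table>"
```

Whatever the language, the shape is the same: read the parameter, loop, and emit HTML for the client to drop into the div.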

Of course, you could make refinements to this example, but the general procedure remains the same. If you wanted to produce XML instead, for example, by way of the result, you would have to change the way it was both written and interpreted.

You’ll look at this possibility in the “AJAX with Databases” section.

AJAX with CSS

This can be considered more of a side note than anything else, as well as a kind of best-practice style hint. CSS is generally used to provide a consistent user interface; where AJAX is used, and the response possibly follows a different look and feel, the combination of the two technologies comes into its own.

The main principle is that, when using AJAX, try not to make the server side script produce pure HTML. Otherwise, the JavaScript might have to do some finessing of the HTML to fit it into the look and feel of the page on which it is placed.

Instead, leverage the use of CSS to only change the pure text and let the structure of the document also dictate the style information. That way, if the style changes, the PHP script does not have to change, provided that a consistent use of CSS statements has been established.

Otherwise, it becomes very hard to keep up with the style changes, unless the style information is stored in a database. Because AJAX is built on three different technologies—HTML, JavaScript, and XML—there are potentially three pieces of source code that could change.

Using styles helps to keep these potential changes to a minimum.

AJAX with Databases

Finally, AJAX with server side databases becomes a very powerful tool. It is wise to return the information in a structured way such as XML, rather than as plain text, to cut down on the client side processing.

The reasons for this are that it allows you to keep the client/server interactions down to a bare minimum, and XML can store much more information in a structured manner. It is the mix of AJAX with databases and visitor interaction that is at the heart of many Web 2.0 applications.

If you look at the previous example of the blog database, it has the following basic schema:

date, title, summary, content

When you first produce the page, you might populate a form control that contains the dates of all the blog entries. You can do this using AJAX (but this might be overly complex) or using straight PHP to build the Web page. The latter implementation might be coded in a PHP script as follows:

<html>
<head></head>
<body>

<form name="blog_search">
Date : <select name="blog_date"
       onChange="listBlogs(this.value)">
<option value=""></option>
<?php
$db_link
  = mysql_connect('mysql_host', 'mysql_user',
                  'mysql_password');
mysql_select_db('site_content');
$query = 'SELECT DISTINCT date FROM
          blog_entries ORDER BY date';
$result = mysql_query($query);
while ($line = mysql_fetch_array($result, MYSQL_ASSOC)) {
  echo "<option value='" . $line["date"] . "'>"
     . $line["date"] . "</option>";
}
?>
</select>
</form>
<div id="blog_list">
</div>
</body>
</html>

This code leaves out any non-essential information, and of course, the final version would have to be better formed, using more correct HTML. However, in the interest of clarity, I have kept the code to a minimum. The core of the page is the SQL query that is used to populate the HTML select control with a series of option drop-downs—one for each date in the fetched result.
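The option-building step can be restated as a standalone JavaScript function so its output can be inspected (buildDateOptions is hypothetical, for tracing only; in the page it is the PHP while loop that emits the options):

```javascript
// Build the <option> list for a set of dates, mirroring the PHP
// loop above; each date becomes both the value and the label.
function buildDateOptions(dates) {
  var html = "";
  for (var i = 0; i < dates.length; i++) {
    html += "<option value='" + dates[i] + "'>" +
            dates[i] + "</option>";
  }
  return html;
}

// buildDateOptions(["2007-05-01"]) produces
// "<option value='2007-05-01'>2007-05-01</option>"
```

Using the date as both value and label keeps the markup simple; the value is what listBlogs receives when the selection changes.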

Attached to the select control is an onChange handler, which calls a function, listBlogs. Given that you have a range of dates, the listBlogs function becomes easy to implement:

function listBlogs(blog_date) {
  if (g_xmlHttp == null) {
    g_xmlHttp = initXMLHttpObject();
  }
  if (g_xmlHttp == null) {
    // not much point continuing
    return;
  }
  var queryURL = "get_blogs.php?";
  queryURL += "date=" + blog_date;
  g_xmlHttp.onreadystatechange = blogListChanged;

  g_xmlHttp.open("get", queryURL, true); // Do the AJAX
  g_xmlHttp.send(null); // No data to go
}

This code should be reasonably familiar by now, and does nothing really different except to build a slightly different URL. Finally, the blogListChanged function can be created so that when the xmlHttp request returns, the appropriate HTML code can be built based on the data that is returned by the get_blogs.php function.

Because this data will be XML, you can use the following implementation:

function blogListChanged() {
  if (g_xmlHttp.readyState == 4) {
    // We have a response in XML!
    var html_out = "<table>";
    var xmlDoc = g_xmlHttp.responseXML;
    // For each entry, print the title and content
    var blogItemList = xmlDoc.getElementsByTagName("item");
    var numBlogItems = blogItemList.length;
    for (var blog_item = 0; blog_item < numBlogItems; blog_item++) {
      var curr_item = blogItemList.item(blog_item);
      // curr_item is an entry in the NodeList
      var title = curr_item.getElementsByTagName("title")[0];
      var content = curr_item.getElementsByTagName("content")[0];
      html_out += "<tr><td>" + title.firstChild.nodeValue + "</td>";
      html_out += "<td>" + content.firstChild.nodeValue + "</td></tr>";
    }
    html_out += "</table>";
    document.getElementById("blog_list").innerHTML = html_out;
  }
}

Note that this example uses the DOMDocument object in JavaScript to access the XML data returned by the PHP script. Note also that I left out the PHP script that generates the XML data based on a query from the database.

It really is not that different from returning a piece of text, except that the header needs to be changed, as previously noted, with the header function. The general layout of the XML should look something like this:

<bloglist>
  <item>
    <title>Title</title>
    <content>Content</content>
  </item>
</bloglist>
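In the browser, this structure is walked through responseXML and getElementsByTagName. As an illustration that can run anywhere, here is a simplified string-based parser; parseBlogList is a made-up name, and a regular expression is no substitute for a real XML parser, but it shows what data the structure carries:

```javascript
// Illustration only: pull the title/content pair out of each
// <item> in a feed string. Real pages should use responseXML
// and the DOM methods instead of regular expressions.
function parseBlogList(xml) {
  var items = [];
  var re = /<item>[\s\S]*?<title>([\s\S]*?)<\/title>[\s\S]*?<content>([\s\S]*?)<\/content>[\s\S]*?<\/item>/g;
  var m;
  while ((m = re.exec(xml)) !== null) {
    items.push({ title: m[1], content: m[2] });
  }
  return items;
}

var feed = "<bloglist><item><title>Title</title>" +
           "<content>Content</content></item></bloglist>";
// parseBlogList(feed) yields [{ title: "Title", content: "Content" }]
```

Each parsed object corresponds to one table row built by blogListChanged.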

Really, that’s all there is for AJAX. This discussion is just enough for you to get started using it on your own site. The rest of this chapter is dedicated to showing you a few Web 2.0 applications that you can create using AJAX and some other techniques that are already in existence—why work harder than you need to?

RSS

RSS stands for Really Simple Syndication. Syndication is a way to share regularly updated content, such as that of a blog, with other Web users so that the content spreads around the Web creating a following for your output.

RSS is an open standard for exchanging syndication information. There are many flavors of RSS and there is not space here to describe them all, but each service uses one flavor of RSS, so it is worth checking the result of an RSS request to see what data is returned.

Each RSS document is known as a feed and has a structure that is well known and publicized, even if it does follow one of several conventions. The good thing about RSS is that, despite this clash of different standards, it facilitates the sharing of information through the Web.

AJAX with RSS

Due to the fact that RSS feeds are returned from the HTTP request as XML documents, it becomes easy to integrate feeds of data sources with AJAX on the page, as long as they support the HTTP GET method.

This means that a request for a feed will result in an XML object being created that is accessible through DOMDocument in JavaScript and PHP. The latter might make use, for example, of the SimpleXML interface to access the feed, filter it, and return a modified version to the AJAX request object, which then uses JavaScript DOMDocument operations to render the result.
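As a taste of the filtering step such a pipeline performs, here is a hypothetical JavaScript filter over already-parsed feed items. The item shape and the filterFeed name are assumptions for illustration; a PHP version would do the same with SimpleXML:

```javascript
// Keep only feed items whose title mentions a keyword,
// case-insensitively. Items are plain objects of the kind a
// feed parser would produce.
function filterFeed(items, keyword) {
  var kept = [];
  var needle = keyword.toLowerCase();
  for (var i = 0; i < items.length; i++) {
    if (items[i].title.toLowerCase().indexOf(needle) !== -1) {
      kept.push(items[i]);
    }
  }
  return kept;
}

var feedItems = [
  { title: "Web 2.0 news" },
  { title: "Cookery corner" }
];
// filterFeed(feedItems, "web") keeps only the first item
```

Chaining a fetch, a filter like this, and a render step is the essence of a feed mashup.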

In this way, you should begin to see the power of so-called mashups and Web applications. They reuse lots of different existing technologies and services, one of which is called Yahoo! Pipes.

Yahoo! Pipes

Yahoo! Pipes is an RSS mashup available at pipes.yahoo.com that allows the users to build applications that are solely centered around processing one or more feeds. The principle is that the pipe starts with a data feed, operates on it, and returns a data feed.

Those familiar with UNIX command-line (csh/bash) programming will realize, of course, that the terminology has its roots in UNIX pipes, which are used to redirect output from processes as input to other processes, all from the command line. This technique is also available in MS-DOS-like operating systems, using, like the UNIX/Linux counterpart, the | (pipe) symbol.

The core of Pipes is a process that enables the users to get a variety of XML feeds and build a set of operations around the data in a visual way. It combines dynamic HTML (JavaScript and styles) and AJAX to provide intermediate results.

One of the uses for Pipes is to integrate the result into your site through AJAX, by requesting the URL that the pipe can be accessed through. The Yahoo! Pipes system then runs the pipe and returns a feed that can be freely used.

The underlying Web 2.0 principle is to obtain one or more feeds, combine them, filter the data or use the feed data for something else (a Web search, for example), and then return a result as a feed. There are many different building blocks allowing the users to enter data, services to be queried, and results manipulated.

For example, one use could be to look for open hub requests on HubPages.com, which is a public feed. Then, you could use one of the Pipes functions to extract the search terms from each of the HubPages.com Request titles. The resulting stream of search terms can then be passed to Google.

The result of the Google search is a Web page, for which there is also a Pipes widget, which enables the users to break the page down into tag values. The results of the search could then be returned as an XML feed. This XML feed can then be embedded, using AJAX, in a static Web page, making it dynamic.

Social Networking

Another Web 2.0 phenomenon has been the rise of social networking. Although you might not realize it, many of the social networking activities such as messaging and tagging (not to mention rating, reviewing, and recommending) would not be possible without the underlying technologies discussed in this chapter.

A site such as Facebook, for example, uses many of the technologies discussed here—from HTML, CSS, and AJAX to server side scripting and RSS—to deliver a rich interface for its users and developers. The extensibility of such networks makes them attractive for both content creators and Web programmers.

With this popularity come better applications and a more attractive platform.

The key to it all is allowing people to present their information online and other people to search and view it. But that’s only part of the interest. The other part is allowing like-minded users to connect with each other and share their own information and experiences.

All of this takes Web programming.

Integration with Web Services

The final component of Web 2.0 programming is integration with the many services available on the Web, each capable of fulfilling a variety of uses.

This section looks at three categories: Amazon, representing shopping services; eBay, representing auctions; and, very briefly, Google, representing data sharing and publishing.

The idea is to show you how to integrate your site, static or dynamic, with these services to create Web applications in the easiest possible manner, using your newfound knowledge.

This interface to an online processing system is known as a Web service. There are many kinds of Web services, from shopping services (such as Amazon) to search services (such as Google). Usually, the Web service provider offers:

  • An API (application programming interface)

  • Documentation

  • Processor resources

The API is required to enable the Web application to communicate with the Web service; Google, for example, provides the GData API to allow interaction with everything from Froogle to Google Search.

The documentation enables programmers to make sense of what is usually a reasonably complex proprietary API. Finally, the processor resources actually service the requests. It is very important to note that these are not infinite—the service provider will usually put some kind of cap, or threshold, on resource use.

For this reason, the developer usually has to register with Web service providers in order to start using their services. This enables the user’s processor and other resource use to be monitored to make sure that the user doesn’t exceed the predefined limits.

Amazon Web Services (AWS)

Amazon Web Services give the Web programmer access to the Amazon catalog, shopping cart, and fulfilment process. Using AWS, you can integrate Amazon into your site, offering better services, for which you can also earn a commission.

These services use simple HTTP requests carrying parameters in the URL (a style known as REST) and return XML to the caller.

The following PHP code is taken from one of the official articles covering AWS, on the Amazon Website available at http://developer.amazonwebservices.com/connect/entry.jspa?externalID=636&categoryID=12:

<?php
// Build the ItemSearch request URL (insert your own AWS access key).
$request = 'http://ecs.amazonaws.com/onca/xml?' .
           'Service=AWSECommerceService&' .
           'AWSAccessKeyId=[YourKeyHere]&' .
           'Operation=ItemSearch&' .
           'SearchIndex=DVD&' .
           'Actor=Brad%20Pitt';

// Fetch the XML response over HTTP and display it safely.
$response = file_get_contents($request);
echo htmlspecialchars($response, ENT_QUOTES);
?>

As you’ll note, the query is built up in the usual way as an HTTP request string. This is then sent to the Web server as a standard GET request. The file_get_contents function is part of PHP and simply returns the entire contents of a file from its source—local or over the Web—using HTTP.

Of course, this could be an XMLHttpRequest object as well, especially since the result that comes back is actually XML, but the key is to keep it as simple as possible. There are a few things to note.

First, an ID key is needed to use AWS, and this goes in the AWSAccessKeyId part of the request. Second, to deploy AWS efficiently, you need to know the various Operation codes. These are well documented, however.

Processing the XML data is easy, as you have seen in previous examples. It is a good idea to try to access the URL with a browser first to see the structure of the XML before coding a solution that navigates the object tree to obtain the data that the programmer requires.
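Once the structure is known, pulling values out is straightforward. As a rough illustration, the following JavaScript collects every value of a named tag from an XML string. A real application would navigate responseXML in the browser (or SimpleXML in PHP) rather than use a regular expression, which only works for flat, predictable markup.

```javascript
// Naive tag extractor for flat, well-known XML such as an ItemSearch
// response; anything more complex needs a real XML parser.
function extractTags(xml, tag) {
  var pattern = new RegExp('<' + tag + '>([^<]*)</' + tag + '>', 'g');
  var values = [];
  var match;
  while ((match = pattern.exec(xml)) !== null) {
    values.push(match[1]);
  }
  return values;
}

// A cut-down fragment in the shape of an ItemSearch response.
var sample = '<Item><Title>Troy</Title></Item>' +
             '<Item><Title>Ocean Life</Title></Item>';
var titles = extractTags(sample, 'Title');
```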

There are also functions provided by Amazon to interface with shopping carts. For example:

http://ecs.amazonaws.com/onca/xml?Service=AWSECommerceService&
AWSAccessKeyId=[YourKeyHere]&
Operation=CartCreate&
Item.1.ASIN=[An ASIN]&
Item.1.Quantity=1&
Item.2.ASIN=[An ASIN]&
Item.2.Quantity=1&
Item.3.ASIN=[An ASIN]&
Item.3.Quantity=1

I have broken the URL up so that you can see the various components. Note that a shopping cart that is reasonably full can result in quite a long URL.

The only other issue with this example is that the programmer needs to keep track of the shopping cart and its contents. However, because you can also use Amazon for accepting payment and adding items once the cart has been set up, this is relatively easy.

AWS provides the CartCreate operation, for example, which returns a cart ID. In addition, there are operations to add and remove items, which make it easy to manage the cart, even if the developer has to provide the code to actually show the end users what they have in their carts.
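Because the request URLs follow a simple, repetitive pattern, they can be generated from a list of items rather than typed out by hand. A minimal JavaScript sketch; the helper name is invented, the access key and ASIN values are placeholders, and the parameter names follow the AWS documentation.

```javascript
// Build a CartCreate request URL from a list of items; the access
// key and ASIN values here are placeholders, as in the text.
function buildCartCreateUrl(accessKey, items) {
  var parts = [
    'Service=AWSECommerceService',
    'AWSAccessKeyId=' + accessKey,
    'Operation=CartCreate'
  ];
  for (var i = 0; i < items.length; i++) {
    parts.push('Item.' + (i + 1) + '.ASIN=' + items[i].asin);
    parts.push('Item.' + (i + 1) + '.Quantity=' + items[i].quantity);
  }
  return 'http://ecs.amazonaws.com/onca/xml?' + parts.join('&');
}

var cartUrl = buildCartCreateUrl('MyKey', [
  { asin: 'ASIN1', quantity: 1 },
  { asin: 'ASIN2', quantity: 2 }
]);
```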

There are many other operations supported by the AWS API for Associates (which is the ability to look up items and include the Associates ID—affiliate ID—to get paid), and each operation returns different data. You’ll become quite intimately acquainted with the AWS documentation once you start building AWS-aware Web applications—but it is worth the effort.

There are several layers possible, but the easiest is to use HTML+CSS as the presentation layer. The presentation layer can then be deployed within PHP scripts, which provide the application layer, responsible for creating the display and communicating with Amazon behind the scenes. This communication will usually take the form of XML HTTP requests that yield data that can then be processed and turned into HTML and CSS for presentation to the users through the browser.

The result, therefore, is a standard HTML page that belies the enormous amount of work, and mashing up of services, that went into producing it, combining the best aspects of several online services. The term mashup means exactly that: the combination of existing services to produce something new.

You also could include AJAX at the interface level and embed some of the AWS functions in an otherwise static page. However, this is tricky at the cart level, because of the way that the page is updated and data is passed around. It is possible; it just takes some work, and would probably involve cookies to handle the cart contents.

In some cases, this might be the only option, because the Web host might not allow file_get_contents to fetch files over the Internet (this depends on the allow_url_fopen setting in PHP). This is something that you’ll have to check for yourself.

Of course, AJAX can be used and the data passed back to PHP to get around this, but this introduces a two-tier system where data has to be relayed from the client to the server and then back to the client again. The logic will then be split across two interfaces, which will make the end result more complex.

Finally, services such as Yahoo! Pipes can be leveraged too, because they can create and process XML/RSS data feeds. This is an advanced mashup, and will require an in-depth knowledge of the Yahoo! Pipes system. However, because it simply passes URLs (with REST data) around, the AWS interfaces can be deployed with Pipes without technical problems.

eBay

Again, eBay uses REST to deliver services, and again, a user ID and developer token are required to get up and running. These are then used in the URL that provides the REST request to authenticate the programmer with eBay. This also implies that there are some usage limits in place to avoid abuse.

Of course, AJAX can be deployed, and even Yahoo! Pipes and similar RSS/XML processing platforms can be used, because they are all based on the same underlying communication and data-sharing technologies.

An example of this might use Amazon’s XML feeds services to pull down the top ten books and then use this data to look up eBay auction items. These auction items can then be returned as an RSS feed with the affiliate ID of the site owner inserted using some of the Pipes processing functionality for manipulating strings.

An eBay service query string looks similar to the following example:

$apicall = "http://rest.api.ebay.com/restapi?" .
           "CallName=GetSearchResults&" .
           "RequestToken=$RequestToken&" .
           "RequestUserId=$RequestUserId&" .
           "SiteId=$SiteId&Version=$Version&" .
           "Query=$SafeQuery&" .
           "EntriesPerPage=$EntriesPerPage&" .
           "PageNumber=$PageNumber&UnifiedInput=1";

The official eBay developer guide uses the simplexml_load_file PHP function to execute the $apicall and resolve the result. The following is taken from the online documentation:

$results = '';
$xmlResult = simplexml_load_file($apicall);
foreach ($xmlResult->SearchResultItemArray->SearchResultItem
          as $searchitem) {
     $link  = $searchitem->Item->ListingDetails->ViewItemURL;
     $title = $searchitem->Item->Title;
     // For each SearchResultItem node,
     // build a link and append it to $results
     $results .= "<a href=\"$link\">$title</a><br/>";
     }

As you can see, the power of SimpleXML is very useful when you’re applying it to documents for which the structure is well known. This code can be deployed in any CMS to provide auction listings, something that’s almost guaranteed to provide lasting interest for visitors.

Google

Finally, Google has many developer APIs that you can use to interface with the Google services, including specific ones for Data, Calendar, Spreadsheets, and Gadgets. Each of these APIs is a well-documented set of REST requests that can be deployed with any of the previous technologies to consume the available Web services.

You read in the last chapter how to do this with the Blogger API, which is used to display a blog using PHP. Of course, you now know that the same effect can be achieved with AJAX, thereby turning a static Web page into a dynamic one.

If you assume that you have a single page divided into three columns—navigation, content, and advertising or product placement—you can see how, with a little ingenuity, you can build a whole CMS without needing a Web host that supports PHP.

JavaScript can be used to display data conditionally in the left and center panes. The content can be pulled from Blogger using AJAX, which means that the Blogger interface is used for editing.

Then, the right pane can be populated with AdSense adverts, another Google invention, with AWS and eBay API calls providing additional placement, again via AJAX.
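Whatever the source of the data—Blogger, AWS, or eBay—the pane-filling step is the same: turn the list of items into an HTML fragment and assign it to the pane. A minimal sketch, with the function and element names invented for illustration:

```javascript
// Turn a list of { url, title } items into an HTML fragment of links.
function renderLinks(items) {
  var html = '';
  for (var i = 0; i < items.length; i++) {
    html += '<a href="' + items[i].url + '">' +
            items[i].title + '</a><br/>';
  }
  return html;
}

// In the browser, the fragment would be assigned to a pane, e.g.:
//   document.getElementById('rightPane').innerHTML = renderLinks(items);
var fragment = renderLinks([
  { url: 'http://example.com/', title: 'Example' }
]);
```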

And there are plenty of other possibilities, so you have no excuse not to build a great site.

Recap

Note that the vast majority of the examples of Web 2.0 technologies in this chapter have been built on extensions to HTML and XML. In addition, developers can use private extensions and modifications, using JavaScript, XML, and CSS together to do new things with the underlying technologies.

It is the client technology driven by interaction with the visitors, combined with server side technology, that provides the illusion that not only is each Web user interconnected in a communication Web, but also that the Web page is an application that runs over the Web and connects the users to the back-end.

That is what Web 2.0 is all about—more interactivity, and better and more powerful applications making ever better use of the information and opinion distributed through the information superhighway.

 
