Chapter 8. Find: Describe and discover web Things

This chapter covers

  • Learning the basics of discoverability (methods and protocols)
  • Understanding how to do web-level discovery (linking/crawling)
  • Proposing a model to describe web Things and their capabilities
  • Extending the basic model with additional Semantic Web formats

In the previous two chapters, we explored in detail the various integration patterns for connecting your Things to the web, which is the first layer of the WoT architecture we introduced in chapter 6. We illustrated how using web standards as the connective tissue between heterogeneous devices significantly improves interoperability between components in an internet-scale system and thus is the core foundation of the Web of Things. Nevertheless, without a universal format to describe web Things and their capabilities, integrating web Things and applications still requires considerable effort from developers. Having a single, common data model that all web Things share would further increase interoperability and ease of integration by making it possible for applications and services to interact without the need to tailor the application manually for each specific device. This is an essential cornerstone of the WoT because it means that the hotel control center example we introduced in chapter 1 could seamlessly discover, understand, and read data from and send commands to any device on the Web of Things, regardless of its capabilities or its manufacturer. The ability to easily discover and understand any entity of the Web of Things—what it is and what it does—is called findability.

How to achieve such a level of interoperability—making web Things findable—is the purpose of the second layer, Find, of the WoT architecture and is what this chapter focuses on. The goal of the Find layer is to offer a uniform data model that all web Things can use to expose their metadata using only web standards and best practices. Metadata means the description of a web Thing, including the URL, name, current location, and status, and of the services it offers, such as sensors, actuators, commands, and properties. First, this is useful for discovering web Things as they get connected to a local network or to the web. Second, it allows applications, services, and other web Things to search for and find new devices without installing a driver for that Thing. By the end of this chapter, you’ll understand how to expose the metadata of any web Thing in a universal and interoperable way using network discovery protocols, such as mDNS; lightweight data models, such as the Web Thing Model; and Semantic Web standards, such as JSON-LD.

8.1. The findability problem

Once a device becomes a web Thing using the methods we presented in the previous two chapters, it can be interacted with using HTTP and WebSocket requests. This sounds great in theory, but for this to also work in practice, we must first solve three fundamental problems, as shown in figure 8.1:

Figure 8.1. The three problems of findability in the Web of Things. How can a client application find nearby web Things, interact with them, and understand what these things are and do?

1.  How do we know where to send the requests, such as root URL/resources of a web Thing?

2.  How do we know what requests to send and how; for example, verbs and the format of payloads?

3.  How do we know the meaning of requests we send and responses we get, that is, semantics?

To better understand these problems, let’s get back to the smart hotel scenario from chapter 1. Imagine Lena, an Estonian guest staying in room 202 of the hotel. Lena would like to pull out her phone and turn on the heat. The first question is how can Lena—or her phone, or an app on her phone—find the root URL of the heater? This is often called the bootstrap problem, which is concerned with how the initial link between two entities on the Web of Things can be established. The simplest solution to this problem would be to write the root URL on the desk or on the wall of the room. Another solution would be to encode the URL into a QR code printed on a card or into an NFC tag given to Lena at check-in, so she could scan it with her phone. A more complex solution would be to install an application on her phone that searches for devices with heating capabilities nearby. These approaches are the subject of section 8.2. Finally, a web-friendly solution would be for her to Google for nearby heaters; we’ll look into that in section 8.4.

Let’s assume for now that Lena enters the root URL of the heater on her phone. Ideally, she would see a pretty user interface in her native Estonian that allows her to figure out right away which button turns on the heat. In this case, a clean and user-centric web interface can solve problem 3 because humans can read the page and understand how to perform the task. Problem 2 would also be taken care of by the web page, which hardcodes which request to send to which endpoint.

But what if the heater has no user interface, only a RESTful API?[1] Because Lena is an experienced front-end developer and never watches TV, she decides to build a simple JavaScript app to control the heater. Now she faces the second problem: even though she knows the URL of the heater, how can she find out the structure of its API? What resources (endpoints) are available? Which verbs can she send to which resource? How can she specify the temperature she wants to set? How does she know whether those parameters need to be in degrees Celsius or Fahrenheit?

1

If the manager ever finds this out, he should probably fire the guy who was responsible for selecting this heater in the first place because it fails to address design rule #2 of chapter 6 by not providing a user interface.

Usually, application developers rely on written documentation that describes the various API endpoints and resources available on the Thing (problem 2) and what those mean (problem 3). But in some cases, a more automated way to discover the resources of a REST API at runtime might be useful. If there were a way for Lena—or the app she wrote—to interrogate any web Thing on the fly and find out what services and data it offers, without having to read the documentation, her app would work with any heating device, regardless of its manufacturer.

Providing a web-based solution for these three problems is the goal of the Find layer, as shown in figure 8.2. In the rest of this chapter, we’ll propose a set of tools and techniques for how web Things can expose their data and resources so that users, applications, and Things can easily find and interact with them.

Figure 8.2. The Find layer of the Web of Things. This layer relates to how one can easily understand the nature of things, what they relate to, how to access their documentation, what their API endpoints are, and how to access those (what parameters and their types). It also relates to the meaning of these properties in a standard way.

8.2. Discovering Things

We begin our journey into findability by comparing several solutions to the bootstrap problem: in short, how can an app or Thing find the root URL of a web Thing it has never encountered before? This problem has two scopes: first, how to find web Things that are physically nearby—for example, within the same local network—and second, how to find web Things beyond the local network—for example, devices somewhere else on the web. Finding web Things in a local network can be done using the network discovery methods described in section 8.2.1. To find web Things beyond the local network, we’ll rely on resource discovery and search, as described in section 8.2.2. Let’s now look at these methods in more detail.

8.2.1. Network discovery

In a computer network, the ability to automatically discover new participants is common. In your LAN at home, as soon as a device connects to the network, it automatically gets an IP address using DHCP[2] (Dynamic Host Configuration Protocol). But only the DHCP server then knows the device is in your network, so what about the other hosts? Once the device has an IP address, it can broadcast data packets that can be caught by other machines on the same network. As you saw in chapter 5, broadcasting or multicasting a message means that the message isn’t sent to a particular IP address but rather to a group of addresses (multicast) or to everyone (broadcast), which is done over UDP. This announcement process is the basis of network discovery protocols, which allow devices and applications to find each other in local networks. It is used by various discovery protocols such as multicast Domain Name System (mDNS),[3] Digital Living Network Alliance (DLNA),[4] and Universal Plug and Play (UPnP).[5] For example, most internet-connected TVs and media players can use DLNA to discover network-attached storage (NAS) in your network and read media files from it. Likewise, your laptop can find and configure printers on your network with minimal effort thanks to network-level discovery protocols such as Apple Bonjour (an mDNS implementation) that are built into iOS and OS X.


mDNS

In mDNS, clients discover new devices on a network by listening for mDNS messages such as the one in the following listing. The client populates its local DNS table as messages come in, so once discovered, the new service—here the web page of a printer—can be used via its local IP address or via a URI usually ending with the .local domain. In this example, that would be http://evt-bw-brother.local.

Listing 8.1. An mDNS message from a printer
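(The records below are an illustrative sketch of a DNS-SD announcement carried over mDNS; the service instance name, host name, and IP address are made up for this example.)

_http._tcp.local.                    PTR  EVT BW Brother._http._tcp.local.
EVT BW Brother._http._tcp.local.     SRV  0 0 80 evt-bw-brother.local.
EVT BW Brother._http._tcp.local.     TXT  "path=/"
evt-bw-brother.local.                A    192.168.1.23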

This is also the protocol that your Pi uses to broadcast its raspberrypi.local URL (see chapter 4) to all nearby computers listening with an mDNS client.

The limitation of mDNS, and of most network-level discovery protocols, is that the network-level information can’t be accessed directly from the web. You could, of course, write JavaScript code that relies on predefined .local domains, but this would be merely a hack and isn’t supported by all browsers. This is also the reason why many mobile browsers can’t resolve these addresses: they don’t have an mDNS client populating the local DNS records in the background.

The nerd corner—I want my Pi to say “Bonjour!”

Your Pi already enables mDNS via the Avahi library to broadcast its .local URL, but you could do a lot more with mDNS, such as describing the HTTP services your WoT server provides (just like for the printer in listing 8.1). The experimental node_mdns Node library[a] builds on top of Avahi and lets you programmatically implement this and more. To get started with the library, look at the code sample we provided in the mdns folder of this chapter on GitHub.

a

Note: this module doesn’t always run smoothly on the Pi, so you might have to fall back to your PC. If you’d still like to try it on the Pi, make sure you install the additionally required Debian packages via apt-get install libavahi-compat-libdnssd-dev.
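As a minimal sketch of what you could do with the library, the following snippet advertises the WoT server over mDNS and browses for other HTTP services; the service name, port, and TXT record are assumptions, not values from the book's repository:

var mdns = require('mdns');

// Announce the WoT server's HTTP service on the local network
var ad = mdns.createAdvertisement(mdns.tcp('http'), 8484, {
  name: 'My WoT Raspberry Pi',
  txtRecord: { path: '/' }
});
ad.start();

// Listen for other HTTP services announced nearby
var browser = mdns.createBrowser(mdns.tcp('http'));
browser.on('serviceUp', function (service) {
  console.log('Found %s at %s:%d', service.name, service.host, service.port);
});
browser.start();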

Network discovery on the web

If mDNS doesn’t work in all browsers, how can a web application running on your mobile phone or tablet find nearby web Things? Or why can’t you find the web Things in your house by following links on a page? An easy solution would be to write a custom plugin for Firefox or Chrome that can talk to those network-level discovery protocols. But this doesn’t solve the problem, because instead of enabling web-based resource discovery using web standards, devices would still need to implement one or more non-web network discovery protocols. As a consequence, web Thing client applications would also need to speak and understand these protocols, which defeats the purpose of the Web of Things.

Because HTTP is an Application-layer protocol, it doesn’t know a thing about what’s underneath—the network protocols used to shuffle HTTP requests around. It also doesn’t need to care—that is, unless a web Thing or application needs to know about other resources in the same network. The real question here is why the configuration and status of a router are available only through a web page for humans and not via a REST API. Put simply, why don’t all routers also offer a secure API through which their configuration can be seen and changed by other devices and applications in your network?

Providing such an API is easy to do.[6] For example, you can install an open-source operating system for routers such as OpenWrt[7] and modify the software to expose the IP addresses assigned by the DHCP server of the router as a JSON document. This way, you use the existing HTTP server of your router to create an API that exposes the IP addresses of all the devices in your network. This makes sense because almost all networked devices today, from printers to routers, already come with a web user interface. Other devices and applications can then retrieve the list of IP addresses in the network via a simple HTTP call (step 2 in figure 8.3) and then retrieve the metadata of each device in the network by using their IP address (step 3 of figure 8.3).


Figure 8.3. LAN-level resource discovery. Assuming that all web Things expose their root resource on port 80, web Thing clients can get their IPs from the router and then query each device to extract their metadata.

Because routers usually know the base network address of the local network, you can easily write a web app that periodically queries the routing table, keeps track of new devices connected to the network, and registers them. The same pattern can be used with any other device on the network, where any web Thing—say, a set-top box or a NAS—could continuously search for new devices in the network using various protocols, understand their services, and then act as a bridge to these devices by generating a new WoT API for them on the fly.
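To make the idea concrete, here is a minimal sketch of such an app. The router's /api/dhcp/leases endpoint and its JSON format are assumptions (an off-the-shelf router won't expose this without the kind of modification described above), and devices that aren't web Things will simply fail the JSON-parsing step:

var http = require('http');

var routerLeasesUrl = 'http://192.168.1.1/api/dhcp/leases'; // hypothetical router API
var knownDevices = {};

// Fetch a URL and parse the body as JSON
function getJson(url, callback) {
  http.get(url, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      try { callback(null, JSON.parse(body)); }
      catch (err) { callback(err); }
    });
  }).on('error', callback);
}

// Step 2: periodically ask the router for the list of DHCP leases
setInterval(function () {
  getJson(routerLeasesUrl, function (err, leases) {
    if (err) return console.error('Router not reachable:', err.message);
    leases.forEach(function (lease) {
      if (knownDevices[lease.ip]) return;   // already discovered
      knownDevices[lease.ip] = true;
      // Step 3: fetch the metadata of each newly discovered device on port 80
      getJson('http://' + lease.ip + '/', function (err, metadata) {
        if (!err) console.log('Found a web Thing:', metadata.name, 'at', lease.ip);
      });
    });
  });
}, 60 * 1000);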

8.2.2. Resource discovery on the web

Although network discovery does the job locally, it doesn’t propagate beyond the boundaries of local networks. Thinking in wider terms, several questions remain open: in a Web of Things with billions of Things accessible on the World Wide Web, how do we find new Things when they connect, how do we understand the services they offer, and can we search for the right Things and their data in composite applications?

The web faced a similar challenge when it shifted from a catalog of a few thousand pages of text and images in the early nineties to an exponentially growing collection of web applications, documents, and multimedia content including movies, music, games, and other types of services. In those early days, AltaVista and Yahoo were successful at curating this growing collection of documents. But as the web kept growing, it became obvious that manually managing the list of resources on the web was a dead end. Around this time (~1998), Google appeared out of nowhere and pretty much wiped out every other search engine because it could automatically index millions of pages and allow users to rapidly and accurately find relevant content in this massive catalog.

On the web, new resources (pages) are discovered through hyperlinks. Search engines periodically parse all the pages in their database to find outgoing links to other pages. As soon as a link to a page that isn’t yet indexed is found, that new page is parsed and added to the directory. This process is known as web crawling.

Crawling the API of web Things

We can apply the process of web crawling to Things as well: in chapter 2 you used an HTML-based UI for the WoT Pi, and in chapter 5 you saw how to create HTML representations of resources. By adding links to the sub-resources in the HTML code, we make it possible to crawl web Things with the simple pseudo-code shown in the next listing.

Listing 8.2. Pseudocode for crawling the HTML representation of Things
crawl(Link currentLink) {
  r = new Resource();
  r.setUri(currentLink.getURI());
  r.setShortDescription(currentLink.text());
  r.setLongDescription(
    currentLink.invokeVerb(GET).extractDescriptionFromResults());
  r.setOperations(currentLink.invokeVerb(OPTIONS).getVerbs());
  foreach (Format currentFormat : formats) {
    if (currentLink.setAcceptHeader(currentFormat).invokeVerb(GET).isSuccessful())
      r.addAcceptedFormat(currentFormat);
  }
  if (currentLink.hasNext()) crawl(currentLink.getNext());
}

foreach (Link currentLink : currentPage.extractLinks()) {
  crawl(currentLink);
}

From the root HTML page of the web Thing, the crawler can find the sub-resources, such as sensors and actuators, by discovering outgoing links and can then create a resource tree of the web Thing and all its resources. The crawler then uses the HTTP OPTIONS method to retrieve all verbs supported for each resource of the web Thing. Finally, the crawler uses content negotiation to understand which format is available for each resource. As an exercise, we suggest you try implementing this crawler for the API of the Pi you created in chapter 7.

HATEOAS and web linking

This simple crawling approach is a good start, but it has several limitations. First, all links are treated equally because there’s no notion of the nature of a link; the link to the user interface and the link to an actuator resource look the same—they’re just URLs. Second, it requires the web Thing to offer an HTML interface, which might be too heavy for resource-constrained devices. Finally, it also means that a client needs to understand both HTML and JSON to work with our web Things.

A better solution for discovering the resources of any REST API is to use the HATEOAS principle we presented in section 6.1.6 to describe relationships between the various resources of a web Thing. A simple method to implement HATEOAS with REST APIs is to use the mechanism of web linking defined in RFC 5988.[8] The idea is that the response to any HTTP request to a resource always contains a set of links to related resources—for example, the previous, next, or last page that contains the results of a search. These links are contained in the Link: HTTP header of the response. Although a similar mechanism was already supported with the LINK[9] element in the HTML 4 specification, encoding the links as HTTP headers introduces a more general framework to define relationships between resources outside the representation of the resource—directly at the HTTP level. As a result, links can always be described in the same way regardless of the media type requested by the client, such as JSON or HTML. This type of linking is also the one supported by the Constrained Application Protocol we discussed in the previous chapters.[10]


When doing an HTTP GET on any Web Thing, the response should include a Link header that contains links to related resources. In particular, you should be able to get information about the device, its resources (API endpoints), and the documentation of the API using only Link headers. Following is an example HTTP query sent to a WoT gateway:

GET / HTTP/1.1
Host: gateway.webofthings.io
Accept: text/html

HTTP/1.1 200 OK
Link: </model/>; rel="model", </properties/>; rel="properties", </actions/>;
     rel="actions", </things/>; rel="things", <http://model.webofthings.io/>;
     rel="type", </help>; rel="help", </>; rel="ui"

In this example, the response contains a set of links to the resources of the web Thing in the Link header. The URL of each resource is contained between angle brackets (<URL>), and the type of the link is denoted by rel="X", where X is the type of the relation. If the URL is not an absolute URL—that is, it doesn’t start with http:// or https://—it’s interpreted in the context of the current request, to which the relative URL is appended. In this example, the documentation of the web Thing therefore becomes gateway.webofthings.io/help. Note that the link target can be any valid URI and therefore could be hosted on the device itself, on a gateway, or anywhere else on the web. Some reserved and standardized relationship types are defined by IANA, but those are mainly relevant to the classic web of multimedia documents. Because no set of relationship types has been proposed for physical objects and the Web of Things, we’ll propose one in this chapter. Among the links returned in the previous example, the following four relationship types are particularly important for web Things.

rel=“model”

This is a link to a Web Thing Model resource; see section 8.3.1.

rel=“type”

This is a link to a resource that contains additional metadata about this web Thing.

rel=“help”

This relationship type is a link to the documentation, which means that a GET to gateway.webofthings.io/help would return the documentation for the API in a human-friendly (HTML) or machine-readable (JSON) format. The documentation doesn’t need to be hosted on the device itself but could be hosted anywhere—for example, on the manufacturer’s website, in which case the header would look like this:

Link: <http://webofthings.io/doc/v/1.1>; rel="help"

This makes it possible to maintain and continuously update the documentation of multiple devices deployed in the wild and running various firmware versions, because the documentation doesn’t have to be hosted on each device but can live in the cloud instead.

rel=“ui”

This relationship type is a link to a graphical user interface (GUI) for interacting with the web Thing. The UI must be implemented using HTML so that it can be accessed with any browser, and it should be responsive to allow various device types to interact with the web Thing. Note that the GUI can—but doesn’t have to—be hosted on the device itself as long as the GUI application can access the web Thing and its resources. In the following example, the GUI is hosted on GitHub and takes as a parameter the root URL of the web Thing to control:

Link: <http://webofthings.github.io/ui?url=devices.webofthings.io>; rel="ui"

In some situations you won’t be able to modify the HTTP headers of the response returned by a web Thing. If this is the case, you’ll need to insert them in the HTML or JSON representation of the resource. We’ll show how you do this in sections 8.3.3 for JSON and 8.4.1 for HTML.
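When the Link header is available, a client doesn't need much code to consume it. The following sketch parses the simple <URL>; rel="..." form used in this chapter (it doesn't cover every corner case of RFC 5988):

// Parse a Link header such as:
//   </model/>; rel="model", </properties/>; rel="properties", </help>; rel="help"
// into an object like { model: '/model/', properties: '/properties/', help: '/help' }
function parseLinkHeader(header) {
  var links = {};
  header.split(',').forEach(function (part) {
    var match = part.trim().match(/<([^>]*)>\s*;\s*rel="([^"]*)"/);
    if (match) links[match[2]] = match[1];
  });
  return links;
}

var links = parseLinkHeader('</model/>; rel="model", </help>; rel="help"');
console.log(links.help); // '/help'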

8.3. Describing web Things

The ability to discover the root URL and resources of a web Thing solves the first part of the findability problem, and it’s enough to interact with the web Thing if it provides a user interface—that is, if the root URL returns an HTML page. But knowing only the root URL is insufficient to interact with the Web Thing API because we still need to solve the second problem mentioned at the beginning of this chapter: how can an application know which payloads to send to which resources of a web Thing? In other words, which parameters and types does each endpoint support, what will be the effect of a given request, which error or success messages can be returned, and what do they mean?

This question can be summarized as follows: how can we formally describe the API offered by any web Thing? As you can see in figure 8.4, there are various ways to do this, ranging from no shared data model across web Thing APIs (1) all the way to semantically defining every possible interaction with a web Thing (4). Semantic Web Things maximize interoperability by ensuring that client applications can discover new Things and use them at runtime automatically, without any human in the loop.

Figure 8.4. The various levels for describing web Things. Any device can have an HTTP API (1). Web Things (2) are HTTP servers that follow the requirements proposed in chapter 6; thus, APIs are more consistent, predictable, and easier to use. Using a shared model will make the web Thing more interoperable (3). Finally, adding semantic annotations will ensure stronger contracts between web Things and also more flexibility to define formally each element of the web Thing API (4).

The simplest solution is to provide written documentation for the API of your web Thing so that developers can use it (1 and 2 in figure 8.4). This implies that a developer must read the documentation about your web Thing, understand what requests they can send to it and what each request does, and finally implement the various API calls with the correct parameters for each call. This approach, however, is insufficient to automatically find new devices and understand what they are and what services they offer. In addition, manually implementing the payloads is more error-prone because the developer needs to ensure that all the requests they send are valid. This becomes especially tricky when the API documentation differs from the actual API running on the device, which can happen when the API changes but the documentation doesn’t. Or simply when the documentation is...hmm...lacking in the first place. Sadly, most APIs in the Internet of Things are in this situation because they don’t make it easy or even possible to write applications that can dynamically generate a user interface for a device knowing only its root URL.

As will be shown in the rest of this chapter, all hope is not lost—quite the opposite! By using a single data model to formally define the API of any web Thing (the Web Thing Model) as described in section 8.3.2, we’ll have a powerful basis to describe not only the metadata but also the operations of any web Thing in a standard way (cases 3 and 4 of figure 8.4). This is the cornerstone of the Web of Things: creating a model to describe physical Things with the right balance between expressiveness—how flexible the model is—and usability—how easy it is to describe any web Thing with that model. Achieving this balance is necessary to achieve global-scale interoperability and adoption, and this is what we’ll do in the remainder of this chapter.

8.3.1. Introducing the Web Thing Model

Once we find a web Thing and understand its API structure, we still need a method to describe what that device is and does. In other words, we need a conceptual model of a web Thing that can describe the resources of a web Thing using a set of well-known concepts.

In the previous chapters, we showed how to organize the resources of a web Thing using the /sensors and /actuators endpoints. But this works only for devices that actually have sensors and actuators, not for the more complex objects and scenarios common in the real world that can’t be mapped to sensors and actuators. To achieve this, the core model of the Web of Things must be easily applicable to any entity in the real world, ranging from packages in a truck to collectible card games to orange juice bottles. This section provides exactly such a model, called the Web Thing Model.[11]

11

At the time of writing, the Web Thing Model (http://model.webofthings.io) is also an official W3C member submission. This does not mean it is a standard yet, but it means it will influence the standardization efforts around the Web of Things within the Web of Things Interest Group (http://www.w3.org/WoT/IG/). EVRYTHNG (and hence Vlad and Dom) is part of the Web of Things Interest Group at W3C.

Because this model is more abstract and covers more use cases than the ones we used in previous chapters, it’s also a bit more complex to understand and use, and that’s why we only introduce it now. But don’t worry—by the end of this chapter, it will all make sense to you and you’ll see that you can easily adapt it for any Web of Things scenario you can think of. Not only that, but with the reference implementation of this model you’ll find in this chapter, you’ll also be able to implement truly interoperable web Things and WoT applications that reach the full potential of the Web of Things. Let’s get started!

Note that to make it easier for you to discover the Web Thing Model and try the examples in this section, we deployed a web Thing in the cloud: http://gateway.webofthings.io. In the next section, you’ll learn how to implement and run the same web Thing server on your Pi or laptop, so feel free to revisit these examples on your own web Things later.

Entities

As we described earlier, the Web of Things is composed of web Things. But what is a web Thing, concretely? A web Thing is a digital representation of a physical object—a Thing—accessible on the web. Think of it like this: your Facebook profile is a digital representation of yourself, so a web Thing is the “Facebook profile” of a physical object. Examples of web Things are the virtual representations of a garage door, a bottle of soda, an apartment, a TV, and so on. The web Thing is a web resource that can be hosted directly on the device, if it can connect to the web, or on an intermediary in the network such as a gateway or a cloud service that bridges non-web devices to the web. All web Things should have the following resources, as illustrated in figure 8.5:

Figure 8.5. The resources of a web Thing. Web Thing clients can interact with the various resources of the web Thing. The model resource provides metadata for discovery, properties are the variables of the Thing (data, sensor readings, state, and so on), and actions are the function calls (commands supported by the web Thing). When the web Thing is also a gateway to other (non-web) Things, the Things resource is a proxy to the non-web Things.

  • Model— A web Thing always has a set of metadata that defines various aspects about it such as its name, description, or configurations.
  • Properties— A property is a variable of a web Thing. Properties represent the internal state of a web Thing. Clients can subscribe to properties to receive a notification message when specific conditions are met; for example, the value of one or more properties changed.
  • Actions— An action is a function offered by a web Thing. Clients can invoke a function on a web Thing by sending an action to the web Thing. Examples of actions are “open” or “close” for a garage door, “enable” or “disable” for a smoke alarm, and “scan” or “check in” for a bottle of soda or a place. The direction of an action is from the client to the web Thing.
  • Things— A web Thing can be a gateway to other devices that don’t have an internet connection. This resource contains all the web Things that are proxied by this web Thing. This is mainly used by clouds or gateways because they can proxy other devices.

Each web Thing can use this model to expose its capabilities. In the next section we’ll examine these resources in more detail, especially what they look like. Describing the entire model in this book would take a few more chapters, so we limit ourselves to the strict essentials needed to understand what it is and how you can use it. We invite you to refer to the online description of the Web Thing Model to see the entire description with the various entities and fields you can use. You won’t need this information to follow this chapter, but it will help when you want to adapt the model for your own devices and products. Furthermore, this model builds heavily on the concepts you learned in chapters 6 and 7, so you’re definitely not starting from scratch!

8.3.2. Metadata

In the Web Thing Model, all web Things must have some associated metadata to describe what they are. This is a set of basic fields about a web Thing, including its identifiers, name, description, and tags, and also the set of resources it has, such as the actions and properties. A GET on the root URL of any web Thing ({WT} in the following listing) always returns the metadata using this format, which is JSON by default.

Listing 8.3. GET {WT}: retrieve the metadata of a web Thing
GET / HTTP/1.1
Host: gateway.webofthings.io
Accept: application/json


HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Link: </model/>; rel="model", </properties/>; rel="properties",
  </actions/>; rel="actions", </things/>; rel="things",
  <http://model.webofthings.io/>; rel="type"


{
  "id": "http://gateway.webofthings.io",
  "name": "My WoT Raspberry PI",
  "description": "A simple WoT-connected Raspberry Pi for the WoT
    book.",
  "tags": ["raspberry","pi","WoT"],
  "customFields": {...}
}

As you can see here, the returned payload contains the basic information about the web Thing. The links to the various resources of this web Thing are contained in the Link: header of the response; see section 8.2.2. You can then follow each link to get more information about each of those resources. A GET {WT}/model will return the entire model of the web Thing, including the details of the actions or properties available.

8.3.3. Properties

Web Things can also have properties. A property is a collection of data values that relate to some aspect of the web Thing. Typically, you’d use properties to model any dynamic time series of data that a web Thing exposes, such as the current and past states of the web Thing or its sensor values—for example, the temperature or humidity sensor readings. Because properties should always capture the most up-to-date state of the web Thing, they’re generally updated by the web Things themselves as soon as the value changes and not by web Thing clients or applications. Let’s look at the properties of our web Thing by doing a GET on the {WT}/properties resource, as shown in the following listing.

Listing 8.4. GET {WT}/properties: retrieve the properties of a web Thing
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Link: <http://model.webofthings.io/#properties-resource>; rel="type"

[
  {
    "id": "temperature",
    "name": "Temperature Sensor",
    "values": {
      "t": 9,
      "timestamp": "2016-01-31T18:25:04.679Z"
    }
  },
  {
    "id": "humidity",
    "name": "Humidity Sensor",
    "values": {
      "h": 70,
      "timestamp": "2016-01-31T18:25:04.679Z"
    }
  },
  {
    "id": "pir",
    "name": "Passive Infrared",
    "values": {
      "presence": false,
      "timestamp": "2016-01-31T18:25:04.678Z"
    }
  },
  {
    "id": "leds",
    "name": "LEDs",
    "values": {
      "1": false,
      "2": false,
      "timestamp": "2016-01-31T18:25:04.679Z"
    }
  }
]

You can see the current values of the various sensors on the Raspberry Pi, such as the temperature and the PIR sensor, and when they were last updated. Let’s now look at one of them in more detail in the next listing.

Listing 8.5. GET {WT}/properties/temperature: retrieve the temperature property
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Link: <http://model.webofthings.io/#properties-resource>; rel="type"

[
    {"t":21.1,"timestamp":"2015-06-14T15:00:00.000Z"},
    {"t":21.4,"timestamp":"2015-06-14T14:30:00.000Z"},
    {"t":21.6,"timestamp":"2015-06-14T14:00:00.000Z"},
  ...
]

A GET on a specific property returns an array of value objects like those shown here. Each value object has one or more fields, such as t for the actual temperature reading, and a timestamp indicating when the value was recorded. Some sensors have several dimensions; for example, an acceleration sensor has three fields in its values, one for each axis: x, y, and z.
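For example, a value object returned by a hypothetical acceleration property could look like this:

{"x": 0.98, "y": -0.03, "z": 9.81, "timestamp": "2016-01-31T18:25:04.679Z"}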

8.3.4. Actions

Actions are another important type of resource of a web Thing because they represent the various commands that can be sent to that web Thing. Examples of actions are “open/close the garage door,” “turn on the living room light, set its brightness to 50%, and set the color to red,” and “turn off the TV in 30 minutes.” In theory, you could also use properties to change the status of a web Thing, but this can be a problem when both an application and the web Thing itself want to edit the same property. This is where actions can help. Let’s draw a parallel to better grasp the concept: actions represent the public interface of a web Thing and properties are the private parts. Much like in any programming language, you can access the public interface, whereas whatever is private remains accessible only to privileged parties, like the instance itself or, in this case, the web Thing. But limiting external access to actions—that is, to the public interface—also allows you to implement various control mechanisms for external requests, such as access control, data validation, updating several properties atomically, and the like.

Actions are also particularly useful when the command you want to send to a web Thing is much more complex than setting a simple value; for example, when you want to send a PDF to a printer or when the action might not be automatically executed. You can find the list of actions a given web Thing supports by sending a GET {WT}/actions request, as in the next listing.

Listing 8.6. GET {WT}/actions: retrieve the actions supported by a web Thing
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Link: <http://model.webofthings.io/#actions-resource>; rel="type"

[{"id":"ledState","name":"Changes the status of the LEDs"}]

The response payload contains an array with the name and ID of each action the web Thing supports. More details about these actions are available in the {WT}/model resource, which describes what each action does and how to invoke it (which parameters to use, what their value should be, and so on). Let’s examine the details of the action ledState in the model in the following listing.

Listing 8.7. GET {WT}/model: the actions object of a web Thing model
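(The fragment below is an illustrative sketch rather than the exact model file; only the ledState action and its ledId and state values come from the description that follows, so treat the other field names as assumptions.)

{
  "actions": {
    "resources": {
      "ledState": {
        "name": "Changes the status of the LEDs",
        "values": {
          "ledId": { "type": "string", "required": true },
          "state": { "type": "boolean", "required": true }
        }
      }
    }
  }
}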

The actions object of the Web Thing Model has an object called resources, which contains all the types of actions (commands) supported by this web Thing. In this example, only one action is supported: the "ledState":{} object, where ledState is the ID of this action. The values object contains the possible parameters that can be sent when creating the action. Here, the action accepts two values: ledId (the ID of the LED to change as a string) and state (the target state as a Boolean), both of which are required. Actions are sent to a web Thing with a POST to the URL of the action {WT}/actions/{id}, where id is the ID of the action (ledState), as shown in the next listing.

Listing 8.8. POST {WT}/actions/ledState: turn on LED 3
POST {WT}/actions/ledState
Content-Type: application/json

{"ledId":"3","state":true}

HTTP/1.1 204 NO CONTENT

You can see that the payload is an object whose fields correspond to the values object for that action (see listing 8.7). The response will usually be 204 NO CONTENT if the action is executed immediately or 202 ACCEPTED if the action will be executed at a later time. If the web Thing keeps track of all the actions it receives, you can see the list of actions of a given type with a GET on the {WT}/actions/{actionId} resource. You’ll find more details about actions and how to use them in the Web Thing Model reference online.

8.3.5. Things

As shown in figure 8.5, a web Thing can act as a gateway between the web and devices that aren’t connected to the internet. In this case, the gateway can expose the resources—properties, actions, and metadata—of those non-web Things using the web Thing. The web Thing then acts as an Application-layer gateway for those non-web Things as it converts incoming HTTP requests for the devices into the various protocols or interfaces they support natively. For example, if your WoT Pi has a Bluetooth dongle, it can find and bridge Bluetooth devices nearby and expose them as web Things.

The resource that contains all the web Things proxied by a web Thing gateway is {WT}/things, and performing a GET on that resource will return the list of all web Things currently available, as shown in the following listing.

Listing 8.9. GET {WT}/things: the things object of the Web Thing Model
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Link: <model.webofthings.io/things>; rel="meta"

[
  {
    "id":"http://devices.webofthings.io/pi",
    "name":"Raspberry Pi",
    "description":"A WoT-enabled Raspberry Pi"
  },
  {
    "id":"http://devices.webofthings.io/camera",
    "name":"Fooscam Camera",
    "description":"LAN-connected camera."
  },
  {
    "id":"http://devices.webofthings.io/hue",
    "name":"Philips Hue",
    "description":"A WoT-enabled Philips Hue Lamp."
  }
]

You can then access each of those web Things via its ID if the ID is an absolute URL, or by appending the ID to the Things resource URL ({WT}/things/{id}), and then send actions or retrieve properties as you would with any other web Thing. The Things resource is mainly relevant when a web Thing is a gateway or a cloud service, but also when the web Thing has a number of other devices connected to it—for example, via USB, Bluetooth, or any other type of interface.

8.3.6. Implementing the Web Thing Model on the Pi

Now that you’ve seen the basics of the Web Thing Model, it’s time to dig into the most important parts of its implementation.

How to get the code

There’s a copious amount of code behind the implementation of the Web Thing Model we just presented, so instead of describing each line of code, we’ll focus on the most important or tricky parts. You’ll find the full code on GitHub; see http://book.webofthings.io. The examples for this chapter are located in the chapter8-semantics folder. Go to the webofthingsjs-unsecure folder and run npm install followed by node wot.js.

Because the code uses the webofthings.js project (the reference implementation of the Web Thing Model), you must clone the Git repository with the --recursive option to make sure all the submodules needed for this chapter are also retrieved.

The WoT Pi model

The first thing we want to do now is to use the Web Thing Model to describe the Pi and its capabilities. This means extending the simpler sensor/actuator model we wrote in chapter 7. The tree structure of the Pi modeled using the Web Thing Model is shown in figure 8.6 and the corresponding JSON model can be found in the /resources/piNoLd.json file.

Figure 8.6. Resource tree of the Pi implementing the Web Thing Model. The notion of sensors and actuators is replaced by the idea of properties (variables) and actions (functions). Some of the resources, such as type or product, can be external references.

The listing that follows shows the model of the temperature property shown in listing 8.5.

Listing 8.10. Temperature property for Pi
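(The fragment below is an illustrative sketch, not the exact content of piNoLd.json; the t field matches the values returned in listing 8.5, and the remaining field names are assumptions.)

"temperature": {
  "name": "Temperature Sensor",
  "description": "An ambient temperature sensor",
  "values": {
    "t": {
      "name": "Temperature",
      "description": "The temperature in degrees Celsius",
      "unit": "celsius"
    }
  }
}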

Remember that the properties of our model are variables or private interfaces of the web Thing and therefore shouldn’t be changed by external clients, only by the device itself. Properties can be modified through actions, which you can see as functions or public interfaces a web client can invoke on a web Thing.

An action is a contract between the clients and the Things. When an action is created, the web Thing must know what to do with it; you’ll see an implementation of an action shortly. Likewise, the client must know the format and semantics of the action, such as which parameters can be sent.

In order for clients to easily access the resources of a web Thing, the entire model of the Thing should be easily retrievable by the client. Once the model is ready, we make it accessible through the /model resource, which returns the entire piNoLd.json file.

Validating your model with JSON schema

Creating a model file that complies with the Web Thing Model can be a daunting task because this model is significantly more complex than the one we used in chapter 7, for example. This is unfortunately the price we pay for better interoperability and real-world readiness. Luckily, there’s a tool that can help us: JSON schemas.[12] A JSON schema is a way to formalize the model of a JSON payload; it’s basically the XML Schema (XSD) of JSON. The Web Thing Model provides a compliant JSON schema that you can use to validate the JSON model of your Things. To use it, download it[13] and then use a JSON schema validator library such as JSONSchema for Node.js,[14] or use an online validator such as JSON Schema Lint.[15]
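As a sketch, validating your model with a Node.js JSON schema validator could look like the following; the jsonschema module and the schema file name are assumptions, and you would first download the schema from the Web Thing Model site:

var fs = require('fs');
var Validator = require('jsonschema').Validator;

// Load the Thing model and the Web Thing Model JSON schema (file names are assumptions)
var model = JSON.parse(fs.readFileSync('resources/piNoLd.json', 'utf8'));
var schema = JSON.parse(fs.readFileSync('resources/webThingSchema.json', 'utf8'));

var result = new Validator().validate(model, schema);
if (result.errors.length === 0) {
  console.log('The model complies with the Web Thing Model schema.');
} else {
  result.errors.forEach(function (error) {
    console.log('Validation error:', error.stack);
  });
}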


Extending the WoT server for discovery—architecture

There are many ways to implement this model on the Pi, but the simplest is to extend the architecture you implemented in chapter 7. The key idea is to put the Web Thing Model in the middle. The properties of the model are updated by the different plugins connected to the sensors—for example, the temperature or PIR plugins. The plugins managing actuators listen for incoming actions by observing the model. Finally, clients request resources, and the server responds with the corresponding subset of the model. Look at figure 8.7 to see the key parts of this implementation.

Figure 8.7. Implementation strategy of our Pi web Thing: the model is in the middle. It’s used by the routes creator to create the REST resources and their corresponding endpoints. Sensor plugins—for example, PIR—update the model whenever fresh data is read from a sensor. Actuator plugins listen for actions sent by clients, execute the action, and finally update the model when the action has been executed successfully; for example, they update the properties that have changed as a result of the action.

Dynamic routing

In chapter 7, we manually created Express routes. Here, because we implemented a well-known contract (the Web Thing Model), we’re able to automatically generate the routes with little effort. To do this we first load the model and create the routes accordingly inside the /routes/routesCreator.js file. The code in the next listing shows the creation of the root resource.

Listing 8.11. /routes/routesCreator.js: root resource route
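(The snippet below is a simplified sketch of such a route, not the repository code; it assumes the route creator receives the model object and that a converter middleware, as in chapter 7, later turns req.result into the final representation.)

var express = require('express');
var utils = require('./../utils/utils');   // assumed location of the helper functions

module.exports = function (model) {
  var router = express.Router();

  // GET / returns the metadata of the web Thing (a subset of the model)
  router.get('/', function (req, res, next) {
    req.result = utils.extractFields(
      ['id', 'name', 'description', 'tags', 'customFields'], model);
    // Advertise the sub-resources of the web Thing in the Link header
    res.links({
      model: '/model/',
      properties: '/properties/',
      actions: '/actions/',
      things: '/things/',
      type: 'http://model.webofthings.io/'
    });
    next();   // hand over to the converter middleware
  });

  return router;
};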

The code for the Thing (/), model (/model), properties (/properties/...), and actions (/actions/...) resources is similar. The next listing shows how to create the routes related to actions.

Listing 8.12. /routes/routesCreator.js: actions resources routes
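(Again a simplified sketch continuing the previous one rather than the repository code; the path into the model object, model.links.actions.resources, and the status field are assumptions, and the POST route assumes a body-parsing middleware is registered.)

// List the action types supported by this web Thing
router.get('/actions', function (req, res, next) {
  req.result = utils.modelToResources(model.links.actions.resources, false);
  next();
});

// Create (execute) a new action of the given type
router.post('/actions/:actionType', function (req, res, next) {
  var action = req.body;
  action.id = Date.now().toString();          // simple ID, good enough for a sketch
  action.timestamp = new Date().toISOString();
  action.status = 'pending';                  // actuator plugins look for pending actions
  model.links.actions.resources[req.params.actionType].data.push(action);
  res.status(204).end();
});

// List the past actions of a given type
router.get('/actions/:actionType', function (req, res, next) {
  req.result = model.links.actions.resources[req.params.actionType].data;
  next();
});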

You can see that the routes are created using two helper functions, defined in utils.js, that map the model to the resource representation as specified in the Web Thing Model:

  • extractFields(fields, model) creates a new object by copying only the necessary fields from the model.
  • modelToResources(subModel, withValue) transforms a subset of the model into an array of resources; for example, it extracts all the properties from the model with their latest values to create the /properties resource.
Plugins

Because the Web Thing Model is based on the concepts of actions, not just properties (like our implementation in chapter 7), we need to adapt the plugins to react to incoming actions. The basic concept is shown in figure 8.7: sensor plugins (for example, the temperature and humidity plugin, the PIR plugin) still update properties just like in the code of chapter 7. But actuator plugins will listen for incoming actions by observing the model and will update properties when changing their state after an action has been executed.

You can find the code for the new plugins in the /plugins/internal directory. You’ll notice that, unlike in chapter 7, all plugins now inherit from a corePlugin.js module. This lets us group the code common to all plugins into an abstract plugin that concrete plugins inherit from and extend. This is done using a JavaScript feature called prototypal inheritance. If you have no clue what we’re talking about here, don’t worry. All you need to remember is that the code shared by all plugins is implemented in corePlugin.js, whereas the functionality specific to a plugin is implemented in the concrete plugin modules themselves; for example, pirPlugin.js.[16] The most important part of the corePlugin.js file is shown in the next listing.

16

If you’d like to learn more about prototypical inheritance in JavaScript, the Mozilla JavaScript portal is a good place to start: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Inheritance_and_the_prototype_chain. Or you can use any of the JavaScript or Node.js books we recommended in chapter 3.

Listing 8.13. /plugins/corePlugin.js: generic plugin for common features
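(The snippet below sketches the idea rather than reproducing the repository code; it watches the model with a simple polling loop, and the model layout and the status field are assumptions.)

// Generic plugin: concrete plugins say which property they update and which
// action type they observe, and provide a doAction callback.
var CorePlugin = function (params, propertyId, actionId, doAction) {
  this.model = params.model;
  this.property = this.model.links.properties.resources[propertyId];
  this.property.data = this.property.data || [];
  this.actions = actionId ? this.model.links.actions.resources[actionId].data : null;
  this.doAction = doAction;
  if (this.actions) this.observeActions();
};

// Append a new value object to the property, keeping only the latest readings
CorePlugin.prototype.addValue = function (fields) {
  fields.timestamp = new Date().toISOString();
  this.property.data.push(fields);
  if (this.property.data.length > 100) this.property.data.shift();
};

// Watch the model for pending actions and hand them to the concrete plugin
CorePlugin.prototype.observeActions = function () {
  var self = this;
  setInterval(function () {
    self.actions.filter(function (action) { return action.status === 'pending'; })
      .forEach(function (action) {
        self.doAction(action);
        action.status = 'executed';
      });
  }, 1000);
};

module.exports = CorePlugin;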

As a result, concrete plugins are much shorter and simpler because they can use the functionality from the corePlugin.js module. All these plugins have to do now is register which property they will update and which actions they will listen to (observe). Obviously, they also have to implement the hardware connectivity part (GPIOs) as well as what to do with the hardware when an action they listen to is performed through the REST API. All the plugins are in the /plugins directory. To understand how all this works, take a closer look at the LED plugin shown in the next listing.

Listing 8.14. /plugins/ledsPlugin.js: LED plugin working with the Web Thing Model
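(Another sketch rather than the repository code: it assumes the onoff module for GPIO access, the CorePlugin sketch above, and made-up pin numbers.)

var util = require('util');
var Gpio = require('onoff').Gpio;
var CorePlugin = require('./corePlugin');

var leds = { '1': new Gpio(17, 'out'), '2': new Gpio(18, 'out') };  // assumed pins

var LedsPlugin = function (params) {
  // Update the 'leds' property and observe 'ledState' actions
  CorePlugin.call(this, params, 'leds', 'ledState', this.executeAction.bind(this));
};
util.inherits(LedsPlugin, CorePlugin);

// Execute a ledState action: drive the GPIO and reflect the new state in the model
LedsPlugin.prototype.executeAction = function (action) {
  var led = leds[action.ledId];
  if (!led) return;
  led.writeSync(action.state ? 1 : 0);
  var newValues = {};
  newValues[action.ledId] = action.state;
  this.addValue(newValues);
};

module.exports = LedsPlugin;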

If you have other devices at home, we invite you to extend the Web Thing Model for them and adapt this implementation so you can expose those devices as web Things and make them part of the Web of Things.

8.3.7. Summary—the Web Thing Model

In this section, we introduced the Web Thing Model, a simple JSON-based data model for describing a web Thing and its resources. We also showed how to implement this model using Node.js and run it on a Raspberry Pi. We showed that this model is quite easy to understand and use, and yet is sufficiently flexible to represent all sorts of devices and products using a set of properties and actions. The goal is to propose a uniform way to describe web Things and their capabilities so that any HTTP client can find web Things and interact with them. This is sufficient for most use cases, and this model has all you need to generate user interfaces for web Things automatically, as we’ll show in chapter 10. If only the hotel room where our Estonian friend Lena is staying offered a Web Thing Model and an API like this for all the devices and services in the room, she would be happy and could build her dream app in no time! Sadly, the Web of Things is nowhere near this vision yet because such a model for the Web of Things has been missing. Until now, that is!

8.4. The Semantic Web of Things

In an ideal world, search engines and any other applications on the web could also understand the Web Thing Model. Given the root URL of a web Thing, any application could retrieve its JSON model and understand what the web Thing is and how to interact with it. But this is not yet the case because the Web Thing Model we proposed isn’t a standard. The question now is how to expose the Web Thing Model using an existing web standard so that the resources are described in a way that means something to other clients. The answer lies in the notion of the Semantic Web and, more precisely, the notion of linked data that we introduce in this section.

Semantic Web refers to an extension of the web that promotes common data formats to facilitate meaningful data exchange between machines. Thanks to a set of standards defined by the World Wide Web Consortium (W3C), web pages can offer a standardized way to express relationships among them so that machines can understand the meaning and content of those pages. In other words, the Semantic Web makes it easier to find, share, reuse, and process information from any content on the web thanks to a common and extensible data description and interchange format.

8.4.1. Linked data and RDFa

When search engines find and index content on the web, most of the data in web pages is unstructured, which makes it difficult to understand what a page is about. Is this page about a person? Or is it about a restaurant, a movie, a birthday party, or a product? HTML pages have only a limited ability to tell web clients or search engines what they talk about: all you can do is define a summary and a set of keywords. The HTML specification alone doesn’t define a shared vocabulary that allows you to describe, in a standard and unambiguous manner, the elements on a page and what they relate to.

Linked data

Enter the vision of linked data,[17] which is a set of best practices for publishing and connecting structured data on the web, so that web resources can be interlinked in a way that allows computers to automatically understand the type and data of each resource. This is particularly appealing because any application that understands the type of a resource can then collect, process, and aggregate data from different sources uniformly, regardless of where it was published.


This vision has been strongly driven by complex and heavy standards and tools centered on the Resource Description Framework[18] (RDF) and various controlled vocabularies, known as ontologies. Although powerful and expressive, RDF would be overkill for most simple scenarios, and this is why a simpler method to structure content on the web is desirable.


To overcome the limited descriptive power of the web without the heavy machinery of RDF, RDFa[19] offers an interesting tradeoff. This standard emerged as a lighter version of RDF that can be embedded into HTML code. Designed for both humans and machines, RDFa is a simple and lightweight way to annotate structured information such as products, people, places, and events directly within HTML pages. Most search engines can use these annotations to generate better search listings and make it easier to find your websites.


Using RDFa to annotate the elements of the Web Thing Model directly in the HTML representation of your device is particularly useful because search engines can then find and understand your web Things without having to understand the JSON representation of the Web Thing Model. Putting it bluntly, using RDFa to describe the metadata of a web Thing makes that web Thing findable and searchable by Google. Although Google supports several data types, such as products, recipes, and events,[20] there is no specific type for the Web of Things. Let’s look at how we can create our own data types and use them within RDFa.

20

Learn more about Google’s support for markups: https://developers.google.com/structured-data/rich-snippets/.

RDFa primer

To annotate any content using RDFa, we must either reuse an existing vocabulary or create a new one. A vocabulary,[21] also called a taxonomy, is a set of terms (fields) that can be used to annotate a certain type of element, along with a definition of what each field refers to. For example, if we only want to expose basic information about a Raspberry Pi, such as its name, description, or an image, we could use the products vocabulary supported by Google.[22]


Unfortunately, this format doesn’t allow exposing the properties or actions of our Web Thing Model because there’s no existing vocabulary for web Things that we can reuse. But we can define our own based on the Web Thing Model reference found here: http://model.webofthings.io.

In the following listing,[23] we show how the WoT Pi can expose its JSON model using RDFa and our own Web of Things vocabulary. Start the WoT Pi server from our GitHub repository, as shown in section 8.3.6. By accessing the root resource of your WoT Pi with your browser, you’ll see the following HTML code.

23

Note that to improve readability, the extract shown in listing 8.15 is a shorter version of the actual HTML code returned by the web Thing implementation you’re using in this chapter.

Listing 8.15. The HTML representation of the root resource with RDFa annotations
<div vocab="http://model.webofthings.io/" typeof="WebThing">
  <h1 property="name">Raspberry Pi</h1>
  <div property="description">
    <p>A simple WoT-connected Raspberry PI for the WoT book.</p>
  </div>
  <p>ID:<span property="id">1</span></p>
  <p>Root URL:<a property="url" href="http://devices.webofthings.io">http://devices.webofthings.io</a></p>
  Resources:
  <div property="links" typeof="Product">
    <a property="url" href="https://www.raspberrypi.org/products/raspberry-pi-2-model-b/">
     Product this Web Thing is based on.</a>
  </div>
  <div property="links" typeof="Properties">
    <a property="url" href="properties/">
     Properties of this Web Thing.</a>
  </div>
  <div property="links" typeof="Actions">
    <a property="url" href="actions/">
    Actions of this Web Thing.</a>
  </div>
  <div property="links" typeof="UI">
    <a property="url" href="ui/">
    User Interface for this Web Thing.</a>
  </div>
</div>

You can see that most HTML tags have some unfamiliar attributes[24] defined by RDFa:

24

Learn more about HTML attributes here: http://www.w3schools.com/html/html_attributes.asp.

  • vocab defines the vocabulary used for that element; in this case, the Web Thing Model vocabulary mentioned previously.
  • property defines the various fields of the model such as name, ID, or description.
  • typeof defines the type of those elements in relation to the vocabulary of the element.

This allows other applications to parse the HTML representation of the device and automatically understand which resources are available and how they work. In particular, as Web of Things search engines become increasingly popular (or once Google supports and understands the Web Thing Model), physical devices, their data, and their services will be easily indexed and searchable in real time.

Adding RDFa to your WoT Pi

To offer RDFa annotations for your WoT Pi, you’ll need to extend the HTML representation of its resources. In chapter 7 you saw a simple way of returning HTML based on converter middleware. The problem with this approach is that you had to create the HTML code inside the converter middleware, which wasn’t very clean. A much better method in Express is to use templating engines. These modules offer the ability to create HTML templates that are dynamically filled with data when an HTML representation is requested. We installed a templating engine called Handlebars[25] in the project of chapter 8, but feel free to install it yourself as described in the nerd corner that follows.

25

Once the templating engine is installed, all you need to do is to create HTML templates that contain your RDFa code. As an example, listing 8.16 is a snippet of the HTML template for the root resource of the Pi.

The nerd corner—Install a templating engine

To use a templating engine for your WoT Pi, install the consolidate module[a] (npm install --save consolidate); this module facilitates the integration of many templating engines with Express. In our case we’ll use the Handlebars templating module, which you can install via npm as well (npm install --save handlebars). Once it’s installed, you need to tell your Express app to use it by adding the following code to the http.js file:

a

var cons = require('consolidate');          // bridges Handlebars (and other engines) to Express

app.engine('html', cons.handlebars);        // render .html files with Handlebars
app.set('view engine', 'html');             // default extension used by res.render()
app.set('views', __dirname + '/../views');  // folder where the HTML templates live
Listing 8.16. Templating HTML view in Express with RDFa tags
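The full template ships with the chapter’s code on GitHub and isn’t reproduced here. As a rough idea of what such a view can look like, here is a minimal sketch combining the RDFa markup of listing 8.15 with Handlebars placeholders; the placeholder names (name, description, id, url) are assumptions chosen to match the fields of the Web Thing Model, not necessarily the exact variables used in the book’s code.

<!-- Minimal sketch of an RDFa-annotated Handlebars view (illustrative only).
     The {{...}} placeholders are filled by Express when res.render() is called. -->
<div vocab="http://model.webofthings.io/" typeof="WebThing">
  <h1 property="name">{{name}}</h1>
  <div property="description"><p>{{description}}</p></div>
  <p>ID: <span property="id">{{id}}</span></p>
  <p>Root URL: <a property="url" href="{{url}}">{{url}}</a></p>
  <div property="links" typeof="Properties">
    <a property="url" href="properties/">Properties of this Web Thing.</a>
  </div>
  <div property="links" typeof="Actions">
    <a property="url" href="actions/">Actions of this Web Thing.</a>
  </div>
</div>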

Then, extend the converter.js middleware to inject the variables needed for your RDFa and to invoke the templating engine, as shown in the next listing.

Listing 8.17. /middleware/converter.js: extending the converter
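The book’s exact converter code isn’t reproduced here. Under the assumption that an earlier middleware has stored the Web Thing Model of the requested resource in req.model (this property name and the 'things' view name are assumptions for illustration), a minimal sketch of the idea looks like this:

// Minimal sketch of the extended converter (illustrative, not the book's exact code).
module.exports = function (req, res, next) {
  if (req.accepts('html')) {
    // Fill the RDFa placeholders of the Handlebars view with the model's fields
    res.render('things', {
      name: req.model.name,
      description: req.model.description,
      id: req.model.id,
      url: req.model.url
    });
  } else {
    // Fall back to the JSON representation for all other clients
    res.json(req.model);
  }
};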

That’s it! The HTML pages of your Pi now offer RDFa annotations, ready for the actors of the Semantic Web (for example, clients and search engines) to consume that data.

8.4.2. Agreed-upon semantics: Schema.org

The tools of the Semantic Web can be used to describe pretty much anything. For instance, we could use RDFa to add more semantic description on top of our Web Thing Model. We could create a vocabulary describing that a web Thing is a washing machine or a smart door lock. The issue with this approach is that only applications in our ecosystem would understand these specific vocabularies. We could push it one step further and turn these vocabularies into standards, but this is time-consuming and often leads to competing standards because each manufacturer wants their own vocabulary.

A more recent approach is to rely on more lightweight collaborative repositories. These repositories offer simple schemas for specific semantic descriptions and provide de facto ways of describing simple concepts such as things, people, and locations.

Schema.org[26] has become the most popular of these collaborative repositories. It hosts a set of well-defined schemas for all sorts of structured data on the internet. In their own words,

26

Schema.org is a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond. Schema.org vocabulary can be used with many different encodings, including RDFa, Microdata and JSON-LD. These vocabularies cover entities, relationships between entities and actions, and can easily be extended through a well-documented extension model. Over 10 million sites use Schema.org to markup their web pages and email messages. Many applications from Google, Microsoft, Pinterest, Yandex and others already use these vocabularies to power rich, extensible experiences.

Extract from http://schema.org/

In other words, not only can anyone directly reuse the models from schema.org to describe their web resources in a more standard way, but doing so will also make them automatically findable and understandable by many other websites and services. Google, Yahoo!, and Microsoft Bing, for instance, can parse the schema.org vocabulary for products. If you create a product description page using a serialization of this vocabulary (for example, RDFa), a search engine will know you’re talking about a product and will render the results accordingly. Similarly, the Person vocabulary is used to identify pages that describe human beings, and the Place vocabulary is used to attach physical locations to web pages, which are taken into account when using location-based search queries, such as in Google Maps. Search engines aren’t the only clients that use these vocabularies; mail clients such as Gmail,[27] web browsers, and other web-based discovery tools are also starting to understand them.

27

In the Web of Things, these agreed-upon vocabularies can readily be used to improve the findability of Things, as we’ll illustrate next with a small example using a growing format called JSON-LD.

8.4.3. JSON-LD

The schemas available on schema.org aren’t bound to a particular format. You can obviously use them in RDFa but you can also use them in Microdata[28] as another way of representing linked data within HTML. On top of that, the schemas are available in a more recent format called JSON-LD (JSON-based serialization for Linked Data).

28

JSON-LD is an interesting and lightweight semantic annotation format for linked data that, unlike RDFa and Microdata, is based on JSON.[29] It’s a simple way to semantically augment JSON documents by adding context information and hyperlinks that describe the semantics of the different elements of a JSON object.

29

Getting started with JSON-LD can be a little tricky because at the time of writing JSON-LD is not yet an official standard, but rather an evolving W3C recommendation.[30] A good place to start is the official JSON-LD page,[31] where you’ll find a number of tutorials, examples, and a playground to test your JSON-LD payloads. In this section, we focus on only the bare minimum you’ll need to understand how to use it for the examples we provide.

30

The latest version of the recommendation is available here: http://www.w3.org/TR/json-ld/.

31

JSON-LD extends the JSON language with a number of keywords, represented by special JSON property names that start with the @ sign. The most important keywords are summarized in table 8.1.

Table 8.1. The three main reserved keywords the JSON-LD language adds to JSON

Key        Description                                Example
@context   URL referencing a particular schema        http://schema.org/Person
@id        Unique identifier (usually a URI)          http://dbpedia.org/page/Mahatma_Gandhi
@type      A URL referencing the type of a value      http://www.w3.org/2001/XMLSchema#dateTime

On its own, JSON-LD is just another format for adding semantics to data. But when used with standard schemas, such as those available on schema.org, it becomes powerful because it lets you reference an agreed-upon context to semantically describe your data.
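As a quick, hedged illustration of these keywords (this snippet isn’t one of the book’s listings), a person could be described by referencing the schema.org context like this:

{
  "@context": "http://schema.org/",
  "@type": "Person",
  "@id": "http://dbpedia.org/page/Mahatma_Gandhi",
  "name": "Mahatma Gandhi",
  "birthDate": "1869-10-02"
}

Any client that knows the schema.org Person vocabulary can now tell that name here is the name of a person rather than, say, a product.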

JSON-LD for Things

Let’s look at a simple example. We’ll use the Product schema described on schema.org[32] to add some semantic data to our Pi. After all, our Pi is also a product, so it does make sense! The following listing shows a modified version of the pi.json model that uses the JSON-LD vocabulary for products.

32

Listing 8.18. resources/piJsonLd.json: adding JSON-LD to our JSON model
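The content of piJsonLd.json isn’t reproduced here; the following hedged sketch shows the general shape of such a model, mixing the JSON-LD keywords with a few fields from the schema.org Product vocabulary (the values, and the exact set of fields, are assumptions):

{
  "@context": "http://schema.org/",
  "@type": "Product",
  "@id": "http://devices.webofthings.io",
  "name": "My WoT Raspberry Pi",
  "description": "A simple WoT-connected Raspberry Pi for the WoT book.",
  "url": "http://devices.webofthings.io",
  "brand": {
    "@type": "Brand",
    "name": "Raspberry Pi Foundation"
  }
}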

JSON-LD uses a different MIME or media type than JSON. Thanks to HTTP’s content-negotiation mechanism you saw earlier, you only have to add a small bit of code to your converter.js middleware, as shown in the next listing, to start serving JSON-LD.

Listing 8.19. middleware/converter.js: adding support for JSON-LD representations
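Again, the book’s exact listing isn’t shown here; a minimal sketch of the idea, assuming the JSON-LD model lives in resources/piJsonLd.json, could look like this:

// Minimal sketch (illustrative): serve JSON-LD when the client explicitly asks for it.
var jsonLdModel = require('../resources/piJsonLd.json'); // path is an assumption

module.exports = function (req, res, next) {
  if (req.get('Accept') === 'application/ld+json') {
    res.set('Content-Type', 'application/ld+json');
    return res.send(JSON.stringify(jsonLdModel));
  }
  next(); // let the existing converters handle JSON, HTML, and so on
};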

Now go ahead and try to request JSON-LD on the root resource of your Pi with the Accept: application/ld+json header, and you’ll get JSON-LD data returned.
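If you’d rather test this from code than from the browser, a small Node.js sketch (the host name is an assumption; replace it with your Pi’s address) could be:

var http = require('http');

// Fetch the JSON-LD representation of the root resource (host is an assumption).
http.get({
  host: 'devices.webofthings.io',
  path: '/',
  headers: { 'Accept': 'application/ld+json' }
}, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    console.log(res.headers['content-type']); // should be application/ld+json
    console.log(body);
  });
});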

Findability and beyond

This simple example already illustrates the essence of JSON-LD: it gives a context to the content of a JSON document. As a consequence, all clients that understand the http://schema.org/Product context will be able to automatically process this information in a meaningful way. This is the case with search engines, for example: Google and Yahoo! process JSON-LD payloads that use the Product schema to render special search results, so as soon as it gets indexed, our Pi will be known to Google and Yahoo! as a Raspberry Pi product. This means that the more semantic data we add to our Pi, the more findable it becomes. As an example, try adding a location to your Pi using the Place schema,[33] and it will eventually become findable by location.

33

We could also use this approach to create more specific schemas on top of the Web Thing Model; for instance, an agreed-upon schema for the data and functions a washing machine or smart lock offers. This would facilitate discovery and enable automatic integration with more and more web clients.
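Coming back to the location idea mentioned a moment ago, here is a hedged sketch of a schema.org Place description that you could embed in or link from your Pi’s JSON-LD model (the coordinates, the place name, and the way you attach it to the model are assumptions):

{
  "@context": "http://schema.org/",
  "@type": "Place",
  "name": "Living room",
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 46.5197,
    "longitude": 6.6323
  }
}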

8.4.4. Beyond the book

As you’ve realized, a common Application layer protocol is essential but not sufficient to achieve interoperability. A higher-level model to describe the metadata and functionality of the Web of Things along with a standard set of APIs are needed to build interoperable applications and devices. The Web Thing Model we introduced in section 8.3 bridges this gap and is an excellent starting point for building your next Web of Things device, gateway, or cloud.

At the time of writing, this model has been published as a W3C Member Submission. Although it isn’t an official standard, it might serve as a basis for working groups, and we invite you to follow the upcoming standardization efforts within the W3C Web of Things consortium.[34]

34

The battle for semantics and models for the IoT is strategic and will not only involve open standards. In the home automation space, Apple HomeKit and Google Weave will likely play an important role. We’re at a critical turning point in the development of the IoT, and relying on standards created by large companies might not be the best option for individual consumers. Therefore, independent institutions such as the W3C will have to play a vital role in the future of the web and WoT.

8.5. Summary

  • The ability to find nearby devices and services is essential in the Web of Things and is known as the bootstrap problem. Several protocols and techniques can help in discovering the root URL of Things, such as mDNS/Bonjour, QR codes, or NFC tags.
  • The last step of the web Things design process, resource linking design (also known as HATEOAS in REST terms), can be implemented using the web linking mechanism in HTTP headers.
  • Beyond finding the root URL and sub-resources, client applications also need a mechanism to discover and understand what data or services a web Thing offers.
  • The services of Things can be modeled as properties (variables), actions (functions), and links. The Web Thing Model offers a simple, flexible, fully web-compatible, and extensible data model to describe the details of any web Thing. This model is simple to adapt for your devices and easy to use for your products and applications.
  • The Web Thing Model can be extended with more specific semantic descriptions such as those based on JSON-LD and available from the Schema.org repository.

Although internet access is the bare minimum required to be part of the Web of Things, you’ve seen that a shared and open data model to describe a web Thing will maximize interoperability without sacrificing flexibility and ease of use.

Now that you’ve learned how to open, expose, find, and use web Things in the World Wide Web, you’re ready for the next challenge—and layer—of the Web of Things: how to share web Things securely over open networks such as the web. In the next chapter, we’ll first show you how to secure your web Things using state of the art methods and best practices. Afterward, you’ll learn how to use your existing social network account in order to share your devices with your friends. Finally, we’ll show how to implement best practices of web security and data sharing on your WoT Pi.
