6
Agent‐Oriented Body Sensor Networks

6.1 Introduction

Many computing paradigms have been exploited to date to support the modeling and implementation of wireless sensor networks (WSNs) and, more specifically, of body sensor networks (BSNs). As widely discussed in Chapter 2, different kinds of paradigms, from low level to high level, can be used to develop WSN-based systems. Among such paradigms, the most notable ones are event-driven programming [1], data-based models [2], service-oriented programming [3], macro-programming [4], state-based programming [5], and agent-oriented programming [6]. This chapter proposes the agent-oriented paradigm for the modeling and implementation of BSNs. After introducing background concepts on the agent-computing paradigm and, specifically, on software agents in the WSN context, the chapter discusses motivations and challenges concerning the exploitation of agents for BSNs and describes the related state-of-the-art. We then present agent-based modeling and implementation of BSNs. A case study is finally proposed that uses two well-known agent-oriented platforms (JADE and MAPS) to develop an agent-based real-time human activity recognition system.

6.2 Background

6.2.1 Agent‐Oriented Computing and Wireless Sensor Networks

Software agents are defined as networked software entities or programs that can perform specific (even complex) tasks for a user and that possess a degree of intelligence allowing them to carry out parts of their tasks/activities autonomously and to interact with their environment in a useful manner. The features of software agents fit those of WSNs and their sensor components very well [7, 8]; in fact, they mainly include [9]:

  • Autonomy: agents (or sensor nodes) should be able to perform the majority of their problem‐solving tasks without the direct intervention of humans, and they should have a degree of control over their own actions and their own internal state.
  • Social ability: agents (or sensor nodes) should be able to interact, when they deem appropriate, with other software agents (or sensor nodes) and humans in order to complete their own problem solving and to help others with their activities where and when appropriate.
  • Responsiveness: agents (or sensor nodes) should perceive their environment, in which they are situated and which may be the physical world, a user, a collection of agents (or other sensors), the Internet, etc., and respond in a timely fashion to changes which occur in it.
  • Proactiveness: agents (or sensor nodes) should not simply act in response to their environment, but they should be able to exhibit opportunistic, goal‐directed behavior and take the initiative where and when appropriate.
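
Purely as an illustration, the following Java sketch condenses these four properties into a minimal agent interface; the interface and method names are hypothetical and do not belong to any of the platforms discussed later in this chapter.

```java
// Hypothetical illustration of the four agent properties; not an API of any
// platform discussed in this chapter.
public interface SensorAgentContract {

    // Autonomy: the agent advances its own internal state without external control.
    void step();

    // Social ability: the agent exchanges messages with peers when it deems appropriate.
    void send(String receiverAgentId, byte[] message);
    void onMessage(String senderAgentId, byte[] message);

    // Responsiveness: the agent perceives stimuli from its environment
    // (sensor readings, user input, other agents) and reacts in a timely fashion.
    void onStimulus(String source, Object perceivedValue);

    // Proactiveness: the agent takes the initiative toward its own goals,
    // not merely reacting to incoming events.
    void pursueGoals();
}
```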

An interesting taxonomy of WSNs and their relationship with multiagent systems (MAS) can be found in Ref. [8]. In particular, the major motivation for using agents in such networks is that many WSN properties are shared with, and can actually be supported by, agents and MAS: physical distribution, resource boundedness, information uncertainty, large-scale, decentralized control, and adaptiveness. Moreover, as sensors in a WSN must typically coordinate their actions to achieve system-wide goals, coordination among dynamic entities (or agents) is one of the main features of MAS. In the following, these common properties are discussed:

  • Physical distribution implies that sensors are situated in an environment from which they can receive stimuli and act accordingly, also through control actions aiming at changing their environment. Situatedness is in fact a main property of an agent, and several well‐known agent architectures were defined to support such an important property.
  • Boundedness of resources (computing power, communication, and energy) is a typical property both of sensor nodes as single units and of the WSN as a whole. Agents and related infrastructures can support such limitation through intelligent resource‐aware, single, and cooperative behaviors.
  • Information uncertainty is typical in large‐scale WSNs in which both the status of the network and the data gathered to observe the monitored/controlled phenomena could be incomplete. In this case, intelligent (mobile) agents could recover inconsistent states and data through cooperation and mobility.
  • Large‐scale is a property of WSNs either sparsely deployed on a wide area or densely deployed on a restricted area. Agents in MAS usually cooperate in a decentralized way through highly scalable interaction protocols and/or time‐ and space‐decoupled coordination infrastructures.
  • Centralized control is not feasible in large-scale WSNs, as nodes can have intermittent connections and can also suddenly disappear due to lack of energy. Thus, decentralized control should be exploited. The multiagent approach is usually based on control decentralization, transferred either to multiple agents dynamically elected among the available set of agents or to the whole ensemble of agents coordinating as peers.
  • Adaptiveness is the main shared property between sensors and agents. An agent is by definition adaptive in the environment in which it is situated. Thus, modeling the sensor activity as an agent or a MAS and, consequently, the whole WSN as a MAS could facilitate the implementation of the adaptiveness property.

6.2.2 Mobile Agent Platform for Sun SPOT (MAPS)

MAPS [10–12] is a Java‐based framework purposely developed on Sun SPOT technology [13] for enabling agent‐oriented programming of WSN applications. MAPS has been developed according to the following requirements:

  • Component‐based lightweight agent server architecture to avoid heavy concurrency by exploiting cooperative concurrency.
  • Lightweight agent architecture to efficiently execute and migrate agents.
  • Minimal core services involving agent migration, naming, communication, activity timing, and access to sensor node resources, i.e. sensors, actuators, flash memory, switches, and batteries.
  • Plug‐in‐based architecture on the basis of which any service can be defined in terms of one or more dynamically installable components implemented as single or cooperating (mobile) agent(s).
  • Java language for programming mobile agents.

The architecture of MAPS, shown in Figure 6.1, is based on components that interact through (high level or internal) events and provide a set of services to (mobile) agents including message transmission, agent creation, agent cloning, agent migration, timer handling, and easy access to the sensor node resources.


Figure 6.1 Architecture of MAPS.

The main components of the MAPS architecture are described as follows:

  • Mobile Agent (MA) is the basic high‐level component defined by the user for developing agent‐based applications.
  • Mobile Agent Execution Engine (MAEE) controls the execution of MAs by means of an event‐based scheduler enabling cooperative concurrency. MAEE also interacts with the other service‐provider components (see Figure 6.1) to fulfill service requests (e.g. message transmission, sensor reading, and timer setting) issued by MAs.
  • Mobile Agent Migration Manager (MAMM) supports agent migration through the Isolate (de)hibernation feature provided by the Sun SPOT environment [13]. This feature migrates the agent data and execution state, whereas the agent code must already reside at the destination node; this is a limitation of the Sun SPOTs, which do not support dynamic class loading and code migration.
  • Mobile Agent Communication Channel (MACC) enables interagent communications based on asynchronous messages (unicast or broadcast) supported by the radiogram protocol.
  • Mobile Agent Naming (MAN) provides agent naming based on proxies for supporting MAMM and MACC in their operations. MAN also manages the (dynamic) list of neighbor sensor nodes, which is updated through a beaconing mechanism based on broadcast messages.
  • Timer Manager (TM) manages the timer service for timing MA operations.
  • Resource Manager (RM) manages access to the resources of the Sun SPOT node: sensors (3‐axial accelerometer, temperature, and light), switches, LEDs, batteries, and flash memory.

The MAPS Mobile Agent model is depicted in Figure 6.2. Specifically, the dynamic behavior of MA is modeled as a multiplane state machine (MPSM). The GV block represents the global variables, namely, the data inside an MA, whereas the GF is a set of global supporting functions. Each plane may represent the behavior of the MA in a specific role, thus enabling role‐based programming [14], and is composed of local variables (LVs), local functions (LFs), and an ECA‐based automaton (ECAA). This automaton is composed of states and mutually exclusive transitions among states. Transitions are labeled by Event–Condition–Action (E[C]/A) rules, where E is the event name, [C] is a Boolean expression (or guard) based on global and local variables, and A is an atomic action. A transition is triggered when E is received and C is true. When a triggered transition is fired, A is first atomically executed and then the state transition is completed. MAs interact through events that are asynchronously delivered by the MAEE and dispatched, through the Event Dispatcher component, to one or more planes according to the events that the planes are able to handle. It is worth noting that the MPSM‐based agent behavior programming allows exploiting the benefits deriving from three main paradigms for WSN programming: event‐driven programming, state‐based programming, and mobile agent‐based programming.


Figure 6.2 Agent behavior model of MAPS.
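
To make the E[C]/A semantics concrete, the following self-contained Java sketch mimics a single plane of an MPSM: transitions are selected by event name, guarded by a condition over the agent variables, and fire an atomic action before the state change is completed. Class and method names are illustrative only and do not reproduce the actual MAPS API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal, self-contained illustration of one MPSM plane with E[C]/A transitions.
// Names are illustrative; this is not the MAPS API.
public class EcaPlane {

    public interface Guard { boolean test(); }   // [C]: condition over global/local variables
    public interface Action { void run(); }      // A: atomic action

    private static final class Transition {
        final String from, event, to;
        final Guard guard;
        final Action action;
        Transition(String from, String event, Guard guard, Action action, String to) {
            this.from = from; this.event = event; this.guard = guard;
            this.action = action; this.to = to;
        }
    }

    private final List<Transition> transitions = new ArrayList<>();
    private String current;

    public EcaPlane(String initialState) { this.current = initialState; }

    public void addTransition(String from, String event, Guard guard, Action action, String to) {
        transitions.add(new Transition(from, event, guard, action, to));
    }

    // Called by the (cooperative) scheduler when an event is dispatched to this plane:
    // the first enabled transition fires its action atomically, then the state changes.
    public void dispatch(String event) {
        for (Transition t : transitions) {
            if (t.from.equals(current) && t.event.equals(event) && t.guard.test()) {
                t.action.run();
                current = t.to;
                return;
            }
        }
        // Events with no enabled transition in the current state are simply ignored.
    }

    public String state() { return current; }
}
```

A sensing agent could, for instance, register a transition from a WaitForSensing state to a ComputingFeatures state fired by a timer event, mirroring the kind of automaton shown later in Figure 6.5.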

6.3 Motivations and Challenges

In the context of highly dynamic distributed computing, mobile agents are a suitable and effective computing paradigm for supporting the development of distributed applications, services, and protocols [15]. A mobile agent is an executing program with the unique ability to transport itself from one system in a network to another within the same network, which could be a large-scale network or even a personal area network like a BSN. Such an ability allows mobile agents to (i) move toward systems containing the objects, agents, services, data, and devices with which they want to interact and (ii) take advantage of being in the same host or network as the elements with which they interact. Agent migration can be based on weak mobility (agent data and code are migrated) or strong mobility (agent data, code, and execution state are migrated) [16]. Mobile agents are supported by mobile agent systems [16], which basically provide an API for developing agent-based applications and an agent server able to execute agents by providing them with basic services such as migration, communication, and node resource access.
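
As a rough illustration of the difference between the two forms of mobility, the sketch below shows what is typically captured under weak mobility: the agent's serializable data travel together with its code, and execution restarts from a well-known entry point on the destination node. The class and method names are hypothetical and not tied to any specific platform.

```java
import java.io.Serializable;

// Hypothetical agent under weak mobility: fields (data) are serialized and moved,
// while execution restarts from onArrival() at the destination; the current call
// stack (execution state) is NOT transferred, which is what strong mobility adds.
public class MigratingAgent implements Serializable {

    private int samplesCollected;     // data: survives migration
    private double lastTemperature;   // data: survives migration

    // Entry point invoked by the destination agent server after de-serialization.
    public void onArrival() {
        // Resume work from application-level checkpoints kept in the fields above.
        System.out.println("Resumed with " + samplesCollected
                + " samples, last temperature " + lastTemperature);
    }
}
```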

In their seminal paper [17], Lange and Oshima defined at least seven good reasons for using mobile agents in generic distributed systems. In the following, we customize them in the WSN context:

  1. Network load reduction: mobile agents are able to access remote resources, as well as communicate with any remote entity, by moving directly to their physical locations and interacting with them locally, thus saving bandwidth. For instance, a mobile agent that incorporates data-processing algorithms can move to a sensor node (e.g. a wearable sensor node), perform the needed operations on the sensed data, and transmit the results to a sink node. This is preferable to periodically transmitting the raw sensed data from the sensor node to the sink node and performing the processing on the latter.
  2. Network latency overcoming: an agent provided with proper control logic may move to a sensor/actuator node to locally perform the required control tasks. In this way, network latency does not affect real-time control operations, even when network connectivity with the base station is lost.
  3. Protocol encapsulation: if a specific routing protocol supporting multi-hop paths has to be deployed in a given zone of a WSN, a set of cooperating mobile agents encapsulating this protocol can be dynamically created and distributed to the proper sensor nodes without any concern for standardization issues. Likewise, when the protocol is upgraded, a new set of mobile agents can easily replace the old one at runtime.
  4. Asynchronous and autonomous execution: these distinctive properties of mobile agents are very important in dynamic environments like WSNs where connections may not be stable and network topology may change rapidly. A mobile agent, upon a request, can autonomously travel across the network to gather required information “node by node” or to carry out the programmed tasks and, finally, can asynchronously report the results to the requester.
  5. Dynamic adaptation: mobile agents can perceive their execution environment and react autonomously to changes. This behavioral dynamic adaptation is well suited for operating on long‐running systems like WSNs where environment conditions are very likely to change over time.
  6. Orientation to heterogeneity: mobile agents can act as wrappers among systems based on different hardware and software. This ability can fit well the need for integrating heterogeneous WSNs supporting different sensor platforms or connecting WSNs to other networks (like IP‐based networks). An agent may be able to translate requests coming from a system into suitable ones for another different system.
  7. Robustness and fault tolerance: the ability of mobile agents to dynamically react to unfavorable situations and events (e.g. a low battery level) can lead to more robust and fault-tolerant distributed systems; e.g. the reaction to a low battery level event can trigger the migration of all executing agents to an equivalent sensor node, where they continue their activity without interruption.

6.4 State‐of‐the‐Art: Description and Comparison

Although many MASs [18] have been developed for conventional distributed platforms, only a few agent frameworks for WSNs have been proposed and concretely implemented to date. In the following, we first describe Agilla and actorNet, the most significant available research prototypes based on TinyOS [19], and then we overview AFME and MAPS, the most representative ones based on the Java language.

Agilla [6] is an agent‐based middleware developed on TinyOS and supporting multiple agents on each node. It provides two fundamental resources on each node:

  • The tuplespace, which represents a shared memory space where structured data (tuples) can be stored and retrieved, allowing agents to exchange information through spatial and temporal decoupling. A tuplespace can be also accessed remotely.
  • The neighbor list, which contains the addresses of all one-hop nodes and is needed when an agent has to migrate.

Agilla agents can migrate carrying their code and state, but they cannot carry the tuples they have locally stored in a tuplespace. Packets used for node communication (e.g. for agent migration/cloning and remote tuple access) are kept very small to minimize losses, and retransmission techniques are also adopted.
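
Agilla agents themselves are written in a proprietary assembly-like instruction set, but the spatial and temporal decoupling offered by the tuplespace can be illustrated with a few lines of Java. The sketch below is only a didactic model of generic tuple-space semantics (store, non-destructive read, destructive take); it does not reproduce Agilla's actual interface.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Didactic model of tuple-space semantics (not Agilla's real interface).
public class TupleSpace {

    private final List<Object[]> tuples = new CopyOnWriteArrayList<>();

    // out: store a tuple; producer and consumer never need to meet in time or space.
    public void out(Object... tuple) { tuples.add(tuple); }

    // rd: non-destructive read of the first tuple matching the template
    // (null fields in the template act as wildcards).
    public Object[] rd(Object... template) {
        for (Object[] t : tuples) if (matches(t, template)) return t;
        return null;
    }

    // in: destructive read (the matching tuple is removed from the space).
    public Object[] in(Object... template) {
        for (Object[] t : tuples) {
            if (matches(t, template)) { tuples.remove(t); return t; }
        }
        return null;
    }

    private boolean matches(Object[] tuple, Object[] template) {
        if (tuple.length != template.length) return false;
        for (int i = 0; i < template.length; i++)
            if (template[i] != null && !template[i].equals(tuple[i])) return false;
        return true;
    }
}
```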

ActorNet [20] is an agent-based platform specifically designed for Mica2/TinyOS sensor nodes. To overcome the difficulties of code migration and interoperability caused by the strict coupling between applications and sensor node architectures, actorNet exposes services such as virtual memory, context switching, and multitasking. Thanks to these features, actorNet effectively supports agent programming by providing a uniform computing environment for all agents, regardless of hardware or operating system differences. The actorNet language used for high-level agent programming has syntax and semantics similar to those of Scheme, with proper instruction extensions.

Both Agilla and actorNet are designed for TinyOS, which relies on the nesC language.

The Java language, with which Sun SPOT [13] and Sentilla JCreate [21] sensors can be programmed, could provide more flexibility and extensibility for an effective implementation of agent-based platforms thanks to its object-oriented features. Currently, the main available Java-based mobile agent platforms for WSNs are MAPS [11] and AFME [22].

The AFME framework [22], a lightweight version of the Agent Factory framework purposely designed for wireless pervasive systems and implemented in J2ME, is also available on Sun SPOT and is used for exemplifying agent communication and migration in WSNs. AFME is strongly based on the Belief–Desire–Intention (BDI) paradigm, in which intentional agents follow a sense–deliberate–act cycle. In AFME, agents are defined through a mixed declarative and imperative programming model. The declarative Agent Factory Agent Programming Language (AFAPL), based on a logical formalism of beliefs and commitments, is used to encode an agent’s behavior by specifying rules that define the conditions under which commitments are adopted. The imperative Java code is instead used to encode perceptors and actuators. However, AFME was not specifically designed for WSNs and, particularly, for Java Sun SPOT.

MAPS, the Java-based agent platform overviewed in Section 6.2.2, is conversely specifically designed for WSNs and currently uses release 4.0 (Blue) of the Sun SPOT library to provide advanced functionality for communication, migration, timing, sensing/actuation, and flash memory storage. MAPS allows developers to program agent-based applications in Java according to the rules of the MAPS framework; thus, no translator and/or interpreter needs to be developed and no new language has to be learned, as in the case of Agilla, ActorNet, and AFME. MAPS was also ported to the Sentilla JCreate sensor platform and renamed TinyMAPS [21].

In Table 6.1 a comparison among the aforementioned agent platforms is reported.

Table 6.1 Comparison among agent‐oriented platforms (Agilla, ActorNet, AFME, and MAPS) for WSNs.

Feature | Agilla | ActorNet | AFME | MAPS
Agent migration availability | Yes | Yes | Yes | Yes
Concurrent agents | Yes | Yes | Yes | Yes
Agent communications | Tuple-based | Asynchronous messages | Asynchronous messages | Asynchronous messages
Agent programming language | Proprietary ISA | Scheme-like | Declarative + Java | Java
Agent model | Assembly-like | Functional | BDI | Finite state machine
Intentional agents availability | No | No | Yes | No
WSN-supported platforms | Mica2, MicaZ, TelosB | Mica2 | Sun SPOT | Sun SPOT, Sentilla JCreate

6.5 Agent‐Based Modeling and Implementation of BSNs

As widely discussed in Chapter 1, a BSN is basically composed of a coordinator node (or base station) and one or more wearable sensor nodes, each connected to the coordinator through a 1-hop wireless link. According to the agent-oriented approach, each component of a system is agentified; therefore, both the BSN coordinator and the BSN sensor nodes are modeled as agents. A BSN system as a whole constitutes a MAS that is basically structured as a master/slave system (see Figure 6.3a), where the coordinator is the master agent and the sensor nodes are the slave agents. The slave agents can only interact with the coordinator agent. A variant of the basic architecture (see Figure 6.3b) is a mix of Master/Slave (M/S) and peer-to-peer (P2P): the coordinator agent can interact with all slave agents, and the slave agents can also interact with each other. Both the basic M/S and the advanced M/S + P2P architectures can be used to structure a single BSN. To model collaborative/interacting BSNs (see Chapter 7), the Super Peer model (see Figure 6.3c) can be exploited: coordinator agents are super peers and can interact with each other, whereas sensor nodes belonging to the same BSN can only interact with each other and with their coordinator agent.


Figure 6.3 Agent modeling of BSNs. (a) Master/slave model, (b) Master/slave + peer‐to‐peer model, and (c) Super peer model.
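
The three interaction topologies of Figure 6.3 can be summarized by a simple admission rule stating which pairs of agents may exchange messages. The following Java sketch encodes that rule; the class and names are purely illustrative and not part of any agent platform.

```java
// Illustrative encoding of the interaction rules of Figure 6.3 (not part of any platform).
public final class BsnTopology {

    public enum Model { MASTER_SLAVE, MASTER_SLAVE_P2P, SUPER_PEER }

    public enum Role { COORDINATOR, SENSOR }

    public static final class AgentRef {
        final Role role;
        final String bsnId;   // which BSN the agent belongs to
        public AgentRef(Role role, String bsnId) { this.role = role; this.bsnId = bsnId; }
    }

    // Returns true if agent a may interact directly with agent b under the given model.
    public static boolean mayInteract(Model model, AgentRef a, AgentRef b) {
        boolean sameBsn = a.bsnId.equals(b.bsnId);
        switch (model) {
            case MASTER_SLAVE:
                // Slaves talk only to their own coordinator.
                return sameBsn && (a.role == Role.COORDINATOR || b.role == Role.COORDINATOR);
            case MASTER_SLAVE_P2P:
                // Coordinator talks to all slaves; slaves also talk to each other.
                return sameBsn;
            case SUPER_PEER:
                // Coordinators (super peers) interact across BSNs; sensors stay inside theirs.
                if (a.role == Role.COORDINATOR && b.role == Role.COORDINATOR) return true;
                return sameBsn;
            default:
                return false;
        }
    }
}
```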

Agent-based implementation of BSN systems should rely on real agent platforms [23] supporting the programming of the application agents, the coordinator agent, and the sensor agents. Specifically, we propose JADE [24] to implement the application and coordinator agents and MAPS [11] to implement the sensor agents. Thus, agent programming follows the rules of JADE and MAPS. Moreover, the development of agent-based applications is also supported by an agent-oriented software engineering methodology [25], which usually covers the phases of requirement analysis, design, implementation, and deployment. In the next section, a case study is proposed to exemplify the agent-based engineering approach for BSN applications.
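
To give an idea of what the JADE side of such an implementation may look like, the following skeleton shows a coordinator-style agent that waits for ACL messages inside a cyclic behaviour. The JADE classes used (Agent, CyclicBehaviour, ACLMessage) belong to the standard JADE API [24], whereas the agent name and the message-handling logic are hypothetical placeholders.

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

// Hypothetical coordinator-style JADE agent: it waits for ACL messages
// (e.g. feature messages relayed from the sensor side) and replies with an INFORM.
public class CoordinatorAgent extends Agent {

    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg != null) {
                    // Application-specific handling would go here
                    // (e.g. forwarding features to the classifier).
                    ACLMessage reply = msg.createReply();
                    reply.setPerformative(ACLMessage.INFORM);
                    reply.setContent("ack");
                    myAgent.send(reply);
                } else {
                    block();   // suspend the behaviour until a new message arrives
                }
            }
        });
    }
}
```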

6.6 Engineering Agent‐Based BSN Applications: A Case Study

In order to show the effectiveness of agent-based platforms in supporting the programming of BSN applications, Ref. [26] proposed a MAPS-based agent-oriented signal-processing in-node environment specialized for real-time human activity monitoring. In particular, the system is able to recognize postures (e.g. lying down, sitting, and standing still) and movements (e.g. walking) of assisted persons. The architecture of the developed agent-based system, shown in Figure 6.4, is organized into three types of agents:

  • The Application‐level Agent (running on a PC or a handheld device) that embeds the application logic, implemented with Java and JADE [24].
  • The Coordinator Agent (running on a PC or a handheld device), implemented with Java and JADE.
  • The Sensor Agent (running on the wearable sensor nodes), programmed with MAPS [11].

Figure 6.4 Architecture of the agent‐based activity monitoring system.

The Coordinator Agent is based on JADE and incorporates several modules of the Java‐based coordinator developed in the context of the SPINE framework [27]. In particular, it is used by end‐user applications (e.g. the agent‐based real‐time activity recognition application – ARTAR) for managing BSNs by (i) sending commands to the sensor nodes and (ii) capturing low‐level messages and events coming from the sensor nodes. Moreover, the Coordinator Agent integrates an application‐specific logic to keep the sensor agents synchronized. To recognize postures and movements, the ARTAR application integrates a classifier based on the K‐Nearest Neighbor (k‐NN) algorithm. Postures and movements are defined during the training phase. ARTAR and the Coordinator Agent interact through JADE ACL messages.
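
Ref. [26] does not spell out the classifier code, so the following Java sketch is only an indicative implementation of a k-NN classifier over the merged feature vectors (e.g. waist mean/max/min and thigh min), assuming Euclidean distance and a majority vote among the k closest training instances.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative k-NN posture/movement classifier over feature vectors; not the actual ARTAR code.
public class KnnClassifier {

    public static final class LabeledSample {
        final double[] features;
        final String label;   // e.g. "standing", "sitting", "lying", "walking"
        public LabeledSample(double[] features, String label) {
            this.features = features; this.label = label;
        }
    }

    private final List<LabeledSample> trainingSet = new ArrayList<>();
    private final int k;

    public KnnClassifier(int k) { this.k = k; }

    // Training phase: store labeled feature vectors collected for each posture/movement.
    public void train(double[] features, String label) {
        trainingSet.add(new LabeledSample(features, label));
    }

    // Classification: majority vote among the k training samples closest to the input.
    public String classify(double[] features) {
        List<LabeledSample> sorted = new ArrayList<>(trainingSet);
        sorted.sort(Comparator.comparingDouble(s -> distance(s.features, features)));
        Map<String, Integer> votes = new HashMap<>();
        for (int i = 0; i < Math.min(k, sorted.size()); i++)
            votes.merge(sorted.get(i).label, 1, Integer::sum);
        return votes.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse("unknown");
    }

    private static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }
}
```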

While ARTAR and the Coordinator Agent are based on JADE, the two sensor agents are based on MAPS. Thus, a communication adaptation module between JADE and MAPS was developed to allow communication interoperability. The two sensor nodes are positioned on the waist and the thigh of the monitored person, respectively. Specifically, two sensor agents are defined: WaistSensorAgent and ThighSensorAgent. Their behaviors are modeled through a 1-plane MPSM (see Section 6.2.2) executing the following stepwise cycle:

  1. Accelerometer Data Sensing: the 3-axial accelerometer sensor collects raw accelerometer data (<Acc_X, Acc_Y, Acc_Z>) according to a given sampling time.
  2. Feature Computation: specific features are computed on the collected raw accelerometer data. Features are calculated as follows: (i) Mean on all accelerometer axes for the WaistSensorAgent, (ii) Max and Min on the X accelerometer axis for the WaistSensorAgent, and (iii) Min on the X accelerometer axis for the ThighSensorAgent.
  3. Feature Merging and Transmission: the computed features are merged into a single message and transmitted to the Coordinator Agent.
  4. Go to 1.

Figure 6.5 shows how this elaboration cycle is actually programmed using the MAPS finite state machine.

In the finite state machine, an AGN_Start/A0 transition leads from the initial state to the WaitForSensing state, which is linked to the ComputingFeatures state and then to the final state.

Figure 6.5 Finite state machine of the sensor agents: WaistSensorAgent and ThighSensorAgent.
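
As an indication of what step 2 of the cycle amounts to on a window of raw samples, the following Java sketch computes the mean, maximum, and minimum features; the class and method names are illustrative and not taken from the MAPS agent code.

```java
// Illustrative feature extraction over a window of accelerometer samples
// (step 2 of the sensor agents' cycle); not the actual MAPS agent code.
public final class AccelFeatures {

    public static double mean(int[] samples) {
        double sum = 0;
        for (int s : samples) sum += s;
        return sum / samples.length;
    }

    public static int max(int[] samples) {
        int m = samples[0];
        for (int s : samples) if (s > m) m = s;
        return m;
    }

    public static int min(int[] samples) {
        int m = samples[0];
        for (int s : samples) if (s < m) m = s;
        return m;
    }

    public static void main(String[] args) {
        int[] accX = { 12, 15, 9, 14, 11 };   // a tiny example window on the X axis
        System.out.printf("mean=%.2f max=%d min=%d%n", mean(accX), max(accX), min(accX));
    }
}
```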

In Ref. [26], the entire BSN system has been analyzed in depth by considering the following two aspects:

  • The performance, evaluated in terms of the timing granularity of the sensing activity at the sensor node and the synchronization degree (or skew) between the activities of the two sensor agents.
  • The recognition accuracy that shows how well the human postures and movements are recognized by the overall agent system.

On the basis of the obtained performance results, it can be stated that MAPS is well suited to supporting efficient BSN applications, thus demonstrating that the agent approach is effective not only during the design and implementation phases of a BSN application but also during its execution. Furthermore, the recognition accuracies are good and encouraging when compared with other works in the literature that use more than two sensors on the human body to recognize activities [28]. Finally, with reference to programming effectiveness, the MAPS programming model based on the finite state machine offers a very straightforward and intuitive tool for supporting BSN application development.

6.7 Summary

This chapter has provided an overview of the use of the agent-oriented paradigm to model and implement BSN systems. We have first introduced the motivations and challenges of this approach and presented MAPS for WSN-based system development. Furthermore, related work and a qualitative comparison among the most widespread (mobile) agent platforms for WSNs have been discussed. Finally, the chapter has focused on agent-oriented BSN application development based on MAPS; specifically, a MAPS-based human activity recognition BSN system has been described.

References

  1. Gay, D., Levis, P., von Behren, R. et al. (2003). The nesC language: a holistic approach to networked embedded systems. Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation, San Diego, CA (9–11 June 2003).
  2. Madden, S.R., Franklin, M.J., Hellerstein, J.M., and Hong, W. (2005). TinyDB: an acquisitional query processing system for sensor networks. ACM Transactions on Database Systems (TODS) 30 (1): 122–173.
  3. Marin, C. and Desertot, M. (2005). Sensor bean: a component platform for sensor-based services. Proceedings of the 3rd International Workshop on Middleware for Pervasive and Ad-Hoc Computing, MPAC'05, Grenoble, France (28 November–2 December 2005), pp. 1–8. ACM.
  4. Gummadi, R., Gnawali, O., and Govindan, R. (2005). Macroprogramming wireless sensor networks using Kairos. Proceedings of the International Conference on Distributed Computing in Sensor Systems (DCOSS), Fortaleza, Brazil (10–12 June 2005).
  5. Kasten, O. and Römer, K. (2005). Beyond event handlers: programming wireless sensors with attributed state machines. Proceedings of the 4th International Symposium on Information Processing in Sensor Networks, Los Angeles, CA (24–27 April 2005).
  6. Fok, C.-L., Roman, G.-C., and Lu, C. (2009). Agilla: a mobile agent middleware for sensor networks. ACM Transactions on Autonomous and Adaptive Systems 4 (3): 1–26.
  7. Rogers, A., Corkill, D., and Jennings, N.R. (2009). Agent technologies for sensor networks. IEEE Intelligent Systems 24: 13–17.
  8. Vinyals, M., Rodriguez-Aguilar, J.A., and Cerquides, J. (2010). A survey on sensor networks from a multiagent perspective. The Computer Journal 54 (3): 455–470.
  9. Wooldridge, M.J. and Jennings, N.R. (1995). Intelligent agents: theory and practice. The Knowledge Engineering Review 10 (2): 115–152.
  10. Aiello, F., Fortino, G., Gravina, R., and Guerrieri, A. (2009). MAPS: a mobile agent platform for Java Sun SPOTs. Proceedings of the 3rd International Workshop on Agent Technology for Sensor Networks (ATSN-09), jointly held with the 8th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-09), Budapest, Hungary (12 May 2009).
  11. Aiello, F., Fortino, G., Gravina, R., and Guerrieri, A. (2011). A Java-based agent platform for programming wireless sensor networks. The Computer Journal 54 (3): 439–454.
  12. MAPS – Mobile Agent Platform for Sun SPOT. Documentation and software. http://maps.deis.unical.it (accessed 23 August 2015).
  13. Sun SPOT. Documentation and code. www.sunspotdev.org (accessed 14 June 2017).
  14. Zhu, H. and Alkins, R. (2006). Towards role-based programming. Proceedings of CSCW'06, Banff, Alberta (4–8 November 2006).
  15. Yoneki, E. and Bacon, J. (2005). A survey of Wireless Sensor Network technologies: research trends and middleware's role. Tech. Rep. UCAM-CL-TR-646, University of Cambridge.
  16. Karnik, N.M. and Tripathi, A.R. (1998). Design issues in mobile-agent programming systems. IEEE Concurrency 6: 52–61.
  17. Lange, D.B. and Oshima, M. (1999). Seven good reasons for mobile agents. Communications of the ACM 42 (3): 88–90.
  18. Fortino, G., Garro, A., and Russo, W. (2008). Achieving mobile agent systems interoperability through software layering. Information & Software Technology 50 (4): 322–341.
  19. TinyOS. Documentation and software. www.tinyos.net (accessed 9 June 2017).
  20. Kwon, Y., Sundresh, S., Mechitov, K., and Agha, G. (2006). ActorNet: an actor platform for wireless sensor networks. Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), Hakodate, Japan (28 April 2006), pp. 1297–1300.
  21. Aiello, F., Fortino, G., Galzarano, S., and Vittorioso, A. (2012). TinyMAPS: a lightweight Java-based mobile agent system for wireless sensor networks. Proceedings of the Fifth International Symposium on Intelligent Distributed Computing (IDC 2011), Delft, the Netherlands (5–7 October 2011). In: Intelligent Distributed Computing V, Studies in Computational Intelligence, vol. 382, pp. 161–170. doi: 10.1007/978-3-642-24013-3_16.
  22. Muldoon, C., O'Hare, G.M.P., O'Grady, M.J., and Tynan, R. (2008). Agent migration and communication in WSNs. Proceedings of the 9th International Conference on Parallel and Distributed Computing, Applications and Technologies, Dunedin, New Zealand (1–4 December 2008).
  23. Luck, M., McBurney, P., and Preist, C. (2004). A manifesto for agent technology: towards next generation computing. Autonomous Agents and Multi-Agent Systems 9 (3): 203–252.
  24. Bellifemine, F., Poggi, A., and Rimassa, G. (2001). Developing multi agent systems with a FIPA-compliant agent framework. Software: Practice and Experience 31: 103–128.
  25. Fortino, G. and Russo, W. (2012). ELDAMeth: an agent-oriented methodology for simulation-based prototyping of distributed agent systems. Information & Software Technology 54 (6): 608–624.
  26. Aiello, F., Bellifemine, F., Fortino, G. et al. (2011). An agent-based signal processing in-node environment for real-time human activity monitoring based on wireless body sensor networks. Journal of Engineering Applications of Artificial Intelligence 24: 1147–1161.
  27. Bellifemine, F., Fortino, G., Giannantonio, R. et al. (2011). SPINE: a domain-specific framework for rapid prototyping of WBSN applications. Software: Practice and Experience 41 (3): 237–265.
  28. Maurer, U., Smailagic, A., Siewiorek, D.P., and Deisher, M. (2006). Activity recognition and monitoring using multiple sensors on different body positions. Proceedings of the International Workshop on Wearable and Implantable Body Sensor Networks (BSN'06), Cambridge, MA (3–5 April 2006), pp. 113–116. IEEE Computer Society.