CHAPTER 3

Mobile and Pervasive Computing

Before delving further into an analysis of the privacy issues and implications of mobile and pervasive computing technologies, we first describe defining characteristics of both mobile computing and pervasive computing. While some of these characteristics, such as interconnectivity, are typical for computing systems in general, other characteristics, such as context awareness and implicit interaction, set mobile and emerging pervasive technologies somewhat apart from other information and communication technologies (ICTs). The interested reader may refer to Abowd and Mynatt [2000], Want [2010], and Ferscha [2012] for a larger historical perspective on the evolution and emergence of mobile and pervasive computing.

3.1  MOBILE COMPUTING CHARACTERISTICS

Mobile computing describes a paradigm shift from fixed, stationary computers and servers connected by wired networks to smaller, portable devices that users can take with them and interact with while on the go [Satyanarayanan, 1996]. Laptop computers, PDAs, and mobile phones emerged as the first mobile computing devices in the 1980s and 1990s. Since then, mobile devices have evolved into smartphones and tablets, as well as more specialized devices, such as eBook readers, fitness trackers, smart watches, and smart glasses [Schmidt et al., 2012]. Four key aspects differentiate mobile computing from traditional computing: (1) the form factor and (2) the computation and communication capabilities of mobile computing devices, (3) their ability to sense the environment, and (4) their software ecosystem.

3.1.1  NOVEL FORM FACTORS–MOBILITY AND DIVERSITY

In contrast to traditional computers, mobile devices are small and light enough to be portable and, therefore, live in close proximity to their users. Today, most of us carry a smartphone with us wherever we go. The relentless miniaturization of ICT components has not only led to portability, but also to diversity: many people today have multiple mobile devices that serve different purposes. For instance, a smartphone may be primarily used for social communication, quick look-ups, location-based services, and mobile entertainment; an ebook reader is used for reading longer texts; a tablet serves for entertainment and light work; a laptop supports more complex work tasks; a fitness tracker keeps track of one’s step count and activity level; and a smartwatch provides us with just-in-time information and notifications.

Fitness trackers and smartwatches are a particular class of mobile devices that are worn on or at the body. Long confined to labs and a few pioneering individuals, such wearable computers [Billinghurst and Starner, 1999, Mann, 2013] have recently seen huge commercial growth. Many manufacturers have released smartwatches that act as companions to smartphones, providing notifications and controls, e.g., about incoming calls or messages, and may also enable activity and fitness tracking with integrated sensors. Similarly, wrist-worn activity and health trackers are small sensor-equipped bracelets that can provide comprehensive information about the user’s fitness and activity level. Slightly more futuristic yet increasingly available commercially are head-mounted displays such as Google Glass or Microsoft’s HoloLens, which can augment a user’s vision with information displays.

The virtually unlimited variety of shapes of current and future mobile computing devices means that users are not only able to carry them throughout the day, but increasingly also wear them at night. The diversity of purposes supported by such devices means that, in principle, some amount of computing and communication power is always within reach, even in situations where a general-purpose device may be impractical to use (e.g., while driving, sleeping, exercising). Last but not least, their portability and mobility also make mobile devices more susceptible to loss, theft, and damage than fixed computing devices [Satyanarayanan, 1996].

3.1.2  POWER IN YOUR POCKET–COMPUTATION AND COMMUNICATION

As the “computing” in mobile computing suggests, mobile devices are nowadays equipped with substantial processing power. Consider for example, Apple’s 2018 iPhone XS. The iPhone XS uses the A12 Bionic, a system on a chip (SoC) that contains six CPU cores, a dedicated 4-core graphics processor (GPU), a real-time machine learning engine with eight cores enabling immersive augmented reality experiences, and other components, while being energy-efficient [Apple, 2018]. The iPhone XS further has a range of sensors (e.g., barometer, three-axis gyro, accelerometer, proximity sensor, ambient light sensor) that enable continuous and energy-efficient processing of sensor data and motion-based activity detection.

Mobility necessitates untethered operation and has hence driven advances in wireless connectivity in recent years. Most mobile devices today support some form of wireless communication: short-range (NFC, RFID, Bluetooth), mid-range (WiFi), and long-range (GSM, LTE) [Schiller, 2003]. Most communication still follows client-server patterns (e.g., Web and cloud services), though ad-hoc peer-to-peer communication is increasingly common (e.g., between a wearable device and a smartphone). Mobile devices and applications leverage these communication capabilities to facilitate ubiquitous access to information for users and to synchronize information across devices and services. Mobile devices and applications must, however, account for variable and intermittent connectivity [Satyanarayanan, 1996].

However, not all mobile devices have the same processing and communication capabilities due to associated energy requirements and limited battery capacity in smaller devices especially. In contrast to traditional computers, the energy resources of mobile devices are finite [Satyanarayanan, 1996]. Smaller mobile devices, such as fitness trackers or smart watches, may be constrained to short-range or mid-range communication and have less powerful processors. Such devices typically off-load communication and processing tasks to other devices. They may connect to smartphones via Bluetooth or similar short-range protocols in order to let the more capable device perform processing on collected data and provide cellular communication to synchronize information with a cloud service, e.g., a fitness or quantified-self website. Smartphones may also hand-off tasks that require extensive data or processing to cloud services. For instance, most smartphones support powerful voice-based assistants, e.g., Siri on iOS or Google Assistant on Android smartphones, by forwarding recorded voice data to a cloud service, where powerful backends can use voice data from a large user base in order to improve voice recognition performance. Cyber foraging extends this approach by enabling mobile devices to opportunistically utilize available computing infrastructure in their environment [Flinn, 2012].
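The offloading decisions described above can be sketched as a simple heuristic. The function below is a minimal illustration in the spirit of cyber foraging, not any particular platform's policy; the thresholds, link categories, and site names are all assumptions made for the sake of the example.

```python
def choose_execution_site(task_mb, battery_pct, link):
    """Decide whether to process data locally, on a paired smartphone,
    or in the cloud, based on task size, battery level, and connectivity.
    All thresholds are illustrative assumptions."""
    if link == "none":
        return "local"        # no connectivity: must run on-device
    if battery_pct < 20 and link in ("wifi", "cellular"):
        return "cloud"        # preserve the remaining battery by offloading
    if task_mb > 50 and link == "wifi":
        return "cloud"        # large jobs go out over the fast, cheap link
    if link == "bluetooth":
        return "smartphone"   # a wearable hands work to its companion device
    return "local"            # small tasks are cheapest to run in place

print(choose_execution_site(task_mb=100, battery_pct=80, link="wifi"))      # cloud
print(choose_execution_site(task_mb=5, battery_pct=80, link="bluetooth"))   # smartphone
```

Real systems weigh many more factors (link latency, energy per transmitted byte, privacy of the data involved), but the pattern of trading local energy against remote capability is the same.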

Advances in chip integration and low-power operation make substantial processing power available practically everywhere. The ability to connect both to other mobile devices and—via long-range communication—to any Internet service allows mobile devices not only to outsource the heaviest computational tasks but also to have almost limitless access to information. Such offloading and outsourcing inherently leads to much more widespread data sharing than ever before. For instance, voice assistants stream a user’s voice queries from the user’s phone to the vendor’s servers to process and interpret the command [Nusca, 2011].

3.1.3  DATA RECORDING–SENSING AND CONTEXT-AWARENESS

Mobile computing was initially largely focused on the development of cheap and low power computing and wireless networking capabilities [Weiser, 1991, 1993]. To a large extent, such aspects have become commodities to be found in almost all mobile devices [Ebling and Baker, 2012, Weiser and Brown, 1997]. Today’s innovation in mobile devices is often powered by numerous sensors that provide awareness of the device’s context—and hence the user’s [Patel et al., 2006]. Today’s smartphones can sense geographic location (using satellite-based, WiFi-based, and cell tower-based positioning), orientation (compass, gyroscope), altitude (barometer), temperature (thermometer), and motion (accelerometer). Multiple integrated microphones and cameras can serve as audio and optical sensors, e.g., to detect ambient light conditions or noise levels, or to measure a user’s physiological parameters like heart rate by placing a fingertip on the phone’s camera lens [Pelegris et al., 2010, Scully et al., 2012].
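As an illustration of motion-based activity detection, the following sketch counts steps from raw accelerometer samples by thresholding the acceleration magnitude. The threshold value and the peak-crossing heuristic are deliberate simplifications; production pedometers add filtering and more robust peak detection.

```python
import math

def count_steps(samples, threshold=11.5):
    """Count upward crossings of an acceleration-magnitude threshold in a
    stream of 3-axis accelerometer samples (x, y, z in m/s^2). At rest the
    magnitude hovers around gravity (~9.8); each step produces a peak."""
    steps = 0
    above = False
    for x, y, z in samples:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and not above:
            steps += 1        # rising edge: a new peak begins
            above = True
        elif magnitude <= threshold:
            above = False     # back below threshold: ready for the next peak
    return steps

trace = [(0, 0, 9.8), (0, 0, 13.0), (0, 0, 9.8), (0, 0, 13.0), (0, 0, 9.8)]
print(count_steps(trace))  # 2
```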

This abundance of sensing has enabled a wide range of context-aware applications and services. Location-based services use the device’s location to provide information about nearby points of interest (e.g., highly-rated restaurants, ATMs, public transportation), nearby contacts, or location-based reminders. Applications such as IFTTT (if this then that) [IFTTT, 2014] enable users to write their own triggers for certain contexts. Personalized mobile assistants such as Google Assistant [Google, 2014] or Cortana [Microsoft, 2014], as well as other context-aware third-party app launchers, provide location-specific and activity-targeted information based on the user’s location, the user’s activities (e.g., opened apps, sent text messages), calendar information, and other information sources (e.g., current traffic).
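The trigger-action pattern behind services like IFTTT can be sketched as rules that pair a context predicate with an action. Everything here (the context fields, the rules, the actions) is an illustrative assumption, not IFTTT's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Context:
    """A snapshot of sensed context; the fields are illustrative."""
    location: str
    hour: int
    moving: bool

@dataclass
class Rule:
    """An IFTTT-style rule: if the trigger matches, run the action."""
    name: str
    trigger: Callable[[Context], bool]
    action: Callable[[], str]

def evaluate(rules: List[Rule], ctx: Context) -> List[str]:
    """Fire every rule whose trigger matches the current context."""
    return [rule.action() for rule in rules if rule.trigger(ctx)]

rules = [
    Rule("arriving home", lambda c: c.location == "home" and not c.moving,
         lambda: "turn on the lights"),
    Rule("morning commute", lambda c: c.moving and 7 <= c.hour < 9,
         lambda: "show the traffic report"),
]

print(evaluate(rules, Context(location="home", hour=19, moving=False)))
# ['turn on the lights']
```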

Context awareness is an integral part of the vision of ubiquitous and pervasive computing as it enables devices, applications, and services to adapt autonomously to the device’s and user’s context and activities [Schilit et al., 1994]. Today’s mobile devices feature a wealth of sensing capabilities that already support a range of useful applications. The potential of such applications makes continuous data collection an increasingly attractive option for many (e.g., constant location tracking). While location information is still the primary context factor used in most context-aware mobile applications [Schmidt, 2012, Schmidt et al., 1999], other sensors are slowly starting to receive more attention, e.g., motion sensors to detect physical activity.

3.1.4  SOFTWARE ECOSYSTEMS–THE DEVICE AS A PLATFORM

A particular characteristic of today’s smartphones and mobile devices is that they act as platforms for many different types of applications (apps). While this is nothing new in the context of PCs and laptops, mobile phones have traditionally been closed ecosystems that were fiercely guarded by carriers. The fact that today’s smartphones and smartwatches can run software not only from the manufacturer (e.g., Apple or Google) or carrier (e.g., AT&T or Vodafone), but in principle from any third party, has greatly accelerated innovation in the mobile space. As of October 2018, 2.1 million apps are available for Android in the Google Play Store and 2 million for iOS in the Apple App Store [Statista, 2018]. Similar app ecosystems exist for wearables, such as smartwatches (e.g., Android Wear, Apple Watch, Pebble) and optical head-mounted displays (e.g., Google Glass).

Today’s app ecosystems not only accelerate innovation but also enable customization by users. Moreover, they greatly reduce the traditional power of carriers and manufacturers, leading to a democratization of the application space in which, in principle, a single developer can easily reach millions of customers. However, as mobile apps usually have almost unfettered access to a device’s communication, processing, and sensing capabilities, as well as to the user’s information stored on the device, there is now a plethora of parties that are, at least in principle, able to closely monitor an individual’s communication and information behavior. Consequently, permission management has become an important aspect of mobile devices.
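Permission management can be thought of as a gate between an app and the device's sensors and data. The toy model below sketches that general idea; the permission names, the grant table, and the app names are assumptions, not the actual Android or iOS permission API.

```python
# Hypothetical record of the user's grant decisions.
GRANTED = {"location": True, "microphone": False}

def read_resource(app, permission, reader):
    """Permission-gated resource access (simplified model): the OS runs
    the reader only if the user has granted the required permission."""
    if not GRANTED.get(permission, False):
        raise PermissionError(f"{app} lacks the '{permission}' permission")
    return reader()

# A mapping app may read the (hypothetical) location fix...
print(read_resource("maps-app", "location", lambda: (46.003, 8.951)))
# ...but an audio capture without the microphone permission is rejected.
try:
    read_resource("game-app", "microphone", lambda: b"audio")
except PermissionError as err:
    print(err)
```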

3.2  PERVASIVE AND UBIQUITOUS COMPUTING CHARACTERISTICS

In contrast to the mobile computing paradigm, which focuses on the shift from stationary computers to portable devices, the field of pervasive and ubiquitous computing1 is driven by the quest for a more “natural” fit of computing to people’s everyday lives. Originating from the vision of Marc Weiser in the late 1980s and early 1990s, pervasive computing aims to support users’ activities and goals, rather than getting in their way by requiring users to focus on accurately operating and controlling a computer [Weiser, 1991, 1993]. Ultimately, pervasive computing complements and extends mobile computing characteristics with three novel dimensions: embeddedness, implicit interaction, and ubiquity.

3.2.1  EMBEDDEDNESS–INVISIBLE COMPUTING

Like mobile computing, pervasive computing strives to create ever smaller, yet powerful devices. However, pervasive computing aims to leverage the potential of miniaturization and commoditization of computing components much more substantially. The idea is to enrich physical artifacts—everyday objects such as cups or blankets—and environments—rooms, buildings, parks, plazas—by embedding sensing, processing, and communication capabilities into them. Instead of consciously interacting with a computer—even if it is a highly mobile one such as a modern smartphone—pervasive computing sees users interact with everyday items that are enriched with computing and communication power. This also has implications in terms of scale: instead of interacting with a single dedicated device, or even a small set of mobile devices, a pervasive computing environment may eventually contain hundreds of small-scale interconnected devices. Such devices can be part of the environment’s infrastructure or be personal devices belonging to a specific user (e.g., embedded in clothing or personal artifacts) [Beckwith, 2003].

Local sensing and computing infrastructure can be combined with cloud services to facilitate synchronization and information exchange between different environments. Physical artifacts and the physical environment gain a virtual representation “in the cloud” that reflects the artifact’s or environment’s state and context. Such networks of interconnected physical artifacts and environments are also referred to as the “Internet of Things” (IoT) [Atzori et al., 2010]. In fact, IoT has become a catch-all term for advanced mobile and pervasive computing technologies.

Embedding pervasive computing technology into environments at a large scale enables comprehensive sensing of user behavior and personalized adaptation of such systems to the user’s needs. Examples of such systems are smart and autonomous cars (or intelligent transportation systems in general), smart homes (or smart buildings in general), and smart cities. IBM Corp. [2008] even coined the term “smarter planet.” The “smartness” in these terms refers to the enrichment of living spaces with sensors and actuators that can sense—and ultimately predict—user behavior: to save energy, time, and user mind share.

A smart home is characteristically equipped with a set of sensors, activators, and computing facilities linking these components [Sadri, 2011]. Integrated sensors determine presence and activities of inhabitants, as well as measure physical characteristics, such as temperature or humidity. Activators, also called actuators, can change the state of the building according to sensed information, e.g., adjusting room temperature or switching on lights. These basic functions are often subsumed under the term home automation [Friedewald et al., 2005].
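A minimal home-automation control loop ties such a sensor reading to an actuator command. The sketch below shows one step of a hypothetical thermostat; the target temperature and hysteresis band are illustrative values.

```python
def thermostat_step(sensed_temp_c, target_c=21.0, hysteresis=0.5):
    """Map a sensed room temperature to a heating-actuator command.
    The hysteresis band avoids rapid on/off toggling around the target."""
    if sensed_temp_c < target_c - hysteresis:
        return "heating_on"
    if sensed_temp_c > target_c + hysteresis:
        return "heating_off"
    return "hold"  # inside the comfort band: leave the actuator alone

print(thermostat_step(19.0))  # heating_on
print(thermostat_step(22.5))  # heating_off
print(thermostat_step(21.2))  # hold
```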

Ambient-assisted living (AAL) leverages smart home concepts to improve the quality of life for impaired and elderly individuals [Sadri, 2011], facilitating prolonged autonomous, independent living in their own homes. AAL systems can support inhabitants in regularly taking medication, remind them of meal times, and prevent potentially dangerous situations, for example, by automatically turning off the stove after use [Moncrieff et al., 2008]. AAL systems can also monitor a user’s health and vital signs and alert caretakers about unusual conditions or accidents [Sadri, 2011].

Similar safety concerns drive much of the development behind smart cars [Jones, 2002, Silberg and Wallace, 2012]. Here, sensors are primarily meant to detect road conditions and share such information with close-by vehicles. Sharing is done both directly to other cars using short-range wireless communication, as well as by relying on road-side infrastructure (e.g., toll stations, bridges, street lamps) and cellular communication. Additionally, user interface elements such as heads-up displays and voice control focus on supporting drivers without distracting them from the surrounding traffic [Pfleging et al., 2012].

Pervasive computing components may not only be embedded into the user’s environment but also into the user’s clothing or jewelry. A smart shirt may measure the user’s body temperature, heart rate, arousal, and other vital signs based on skin conductivity [Baig and Gholamhosseini, 2013, Lee and Chung, 2009] and share this information with a smartphone. A smart necklace may vibrate to notify the user of incoming messages or interesting deals offered in a nearby store. Following smart glasses that can overlay the user’s field of view with additional information (“Augmented Reality”) [Starner, 2013], research is underway to embed similar capabilities into smart contact lenses [Lingley et al., 2011]. Nano-scale computing and communications research aims to create devices small enough to be inserted into a human’s blood stream, digestive system, or organs to monitor and diagnose health issues without invasive surgery [Staples et al., 2006].

Note that the “invisibility” of such embedded computing does not necessarily have to involve physical size. Invisibility also has a cognitive dimension, where the interaction with pervasive computing components is integrated into the user’s activity and goals in order to make the computer “disappear” [Weiser, 1991]. This disappearance hence is meant in a metaphorical sense, i.e., the user’s expectations are met with minimal distraction from the envisioned task [Satyanarayanan, 2001]. Both cases—whether it is a physically invisible or “just” cognitively invisible computer—make it difficult for users to realize that they are interacting with a computer at all. While this is the intention of ubiquitous computing, it also means that maintaining individual awareness of data collection, processing, and dissemination activities becomes more difficult. Also, simple physical fixes for controlling data collection, e.g., sticking tape on a laptop’s camera to prevent surreptitious recording, will not be feasible anymore.

3.2.2  IMPLICIT INTERACTION–UNDERSTANDING USER INTENT

Ubiquitous and pervasive computing constitute a major paradigm shift for human-computer interaction. Dedicated input and output components, such as mouse, keyboard, or a touch screen, are being gradually replaced with (or augmented by) more “natural” interaction modalities. Key activities in this space are tangible user interfaces, natural interaction, multimodal interaction, and context awareness.

Enriching everyday artifacts with sensing, processing, and communication capabilities allows us to create virtual representations of such artifacts and use them as “tangible” interfaces. Research on “tangible user interfaces (TUIs)” studies the association of physical artifacts with digital information, and how the manipulation of the physical object can be used to transform the associated information [Ishii and Ullmer, 1997]. Such direct interaction with physical artifacts can be leveraged to create more natural mappings between human-computer interaction and physical interaction [Abowd and Mynatt, 2000]. For example, instead of looking at a map to orient oneself, a tangible user interface would allow the user to look through a “lens” (e.g., a smartphone with an augmented reality map app) to see labels attached to landmarks they are seeing, or see navigation directions seemingly painted on the ground.

User attention is a limited resource [Roda, 2011], so if users are going to be surrounded by countless ubiquitous computing devices, these devices should not continuously compete for the user’s attention. Instead, devices and applications must try to provide most of their output in the form of ambient, unobtrusive notifications in the user’s periphery of attention [Weiser, 1991], yet easily be able to move to the user’s center of attention if needed [Abowd and Mynatt, 2000, Weiser and Brown, 1997]. This implies that systems are becoming much more autonomous in their decisions, and that, by design, users will not be fully aware of the various system activities. “Natural interaction” aims for an adequate balance between autonomously acting devices and systems, and explicit user interaction and engagement.

“Multimodal interaction” seeks to enable users to engage with computing systems through multiple input/output channels [Dumas et al., 2009]. This allows users to communicate with a system in a way that is most conducive to their current activity. Common input modalities in such systems are speech and gesture recognition, as they free the user from having to focus on a specific input or output component in the environment. Visual, auditory, and tactile channels are used for output. In combination with pervasive projection of information into the environment, the user’s whole environment can be turned into an immersive interaction environment. For example, Microsoft Research’s RoomAlive project [Jones et al., 2014] uses multiple projectors and depth cameras to transform a user’s living room into an immersive gaming experience in which walls and furniture are incorporated into the game visualization. The user can freely move in and interact with this game world through gestures and touching the physical and virtual objects.

Ultimately, pervasive computing seeks to provide systems and applications with “context awareness”—a nuanced understanding of the user’s current situation, in order to provide more meaningful interaction experiences. Typically, multiple basic context features (e.g., location, physical movement, time of day, but also social context such as calendar entries or a user’s social network) are combined to infer higher-level “situations” [Abowd and Mynatt, 2000]. Weiser stressed that ubicomp systems should be smart [Weiser, 1991], but they do not need to be actually intelligent. A “smart” coffee maker might only need a few context features to understand if it should start brewing a fresh cup of coffee (e.g., time of day, day of the week, and the last 15 min of user movement). Typical context features include location, time, the user’s activity, present persons or devices, and other information available about the user, such as their schedule. However, a high level of context adaptivity also requires a deeper understanding of the user’s activity and personal experiences [Dourish, 2004]. In order to form such an understanding, ubiquitous computing systems can try to adapt their behavior over time: initially, users are involved in individual decisions yet gradually the system moves toward automated decision making [Bardram and Friday, 2009]. Thus, ubicomp systems would not only adapt to context changes, but also adapt their behavior to individual users.
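The coffee-maker example can be made concrete: a handful of low-level context features are combined into the higher-level situation "the user is getting up on a workday." All features and thresholds below are illustrative assumptions.

```python
def should_brew(hour, weekday, recent_movement_min):
    """Combine basic context features (time of day, day of week, recent
    user movement) into a single 'start brewing?' decision."""
    is_workday = weekday < 5               # Monday = 0 ... Friday = 4
    is_morning = 6 <= hour <= 9
    user_is_up = recent_movement_min >= 5  # sustained movement: user is awake
    return is_workday and is_morning and user_is_up

print(should_brew(hour=7, weekday=1, recent_movement_min=10))  # True
print(should_brew(hour=7, weekday=6, recent_movement_min=10))  # False (weekend)
```

Even this trivial rule shows why such "smartness" needs no real intelligence: a few well-chosen features suffice, but each of them must be continuously sensed.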

The ability of ubicomp systems to be “smart” relies not only on the availability of context information, but also on the accuracy of this data [Bardram and Friday, 2009]. Ideally, systems should autonomously learn to cope with previously unknown situations, users, and other entities at runtime [Caceres and Friday, 2012]. Ambient intelligence (AmI) and intelligent environments are terms that are supposed to highlight this evolution from simple smartness to a deeper understanding of user context. AmI also underlines a trend toward more proactive systems consisting of agents that act autonomously on the user’s behalf [Caceres and Friday, 2012]. Such proactive systems try to look ahead for the user by combining disparate knowledge from different system layers [Satyanarayanan, 2001]. Consider, for example, a personal digital assistant that automatically reschedules appointments when the user is stuck in traffic, or the notorious smart fridge that keeps track of available food and automatically reorders fresh produce as needed [Langheinrich, 2009]—potentially taking into account the user’s eating habits and preferences, as well as dietary and health considerations.

The combination of novel input and output capabilities, together with a system’s awareness of user context, has enabled a novel interaction paradigm: implicit interaction [Schmidt, 2000]. Sensor-driven awareness of the user’s context and behavior is interpreted as input to provide situation-specific support and adaptation [Schmidt, 2000]. Research on affective computing further aims to recognize a user’s emotions in order to adapt systems with respect to the user’s mood [Picard, 2003]. Depending on the application, the user’s behavior may determine the system’s reactions or the system may proactively modify the user’s environment. Thus, instead of acting with a computer, the user acts within a ubicomp system and is surrounded by it. Interaction with such a system becomes continuous, i.e., interaction has no defined beginning or ending anymore, and may be interrupted at any time [Abowd and Mynatt, 2000]. This continuity and ubiquity of ubicomp applications facilitates the support of everyday tasks [Weiser and Brown, 1997]—tasks and problems that relate to and occur in the daily routines of users.

Two main implications stem from such interaction models. First, data collection becomes paramount for “smart” or even “intelligent” systems. This not only drives an effort to deliver ever greater accuracy in sensing (e.g., high-quality audio and video recording), but also a continuous urge to include more sensing modalities in order not to “miss” a crucial piece of context. Second, understanding the complexities of everyday life seems to require collecting an ever-increasing share of a user’s life, both spatially and temporally.

3.2.3  UBIQUITY–FROM SOCIAL TO SOCIETAL SCALE

The terms “ubiquitous” and “pervasive” already express the widespread presence that sensing and computing devices should have in a future with ubiquitous computing. Such a presence will allow for a new level of “social” and sociotechnical systems that offer highly customized services, based on intimate observations of individuals. At the same time, the ubiquitous availability of sensing and computing will also prompt applications at a “societal” scale, i.e., large-scale deployments that will affect cities, regions, and countries.

A prime example of this is the “smart grid,” i.e., optimizing and coordinating energy consumption across multiple households and neighborhoods to stabilize the power grid against peaks. It envisions that individual smart appliances, such as dishwashers or washing machines, coordinate with energy providers to shift their workloads to times when surplus energy is available. This not only saves money for the consumer but also eliminates consumption peaks, as energy providers can use monetary incentives to exert much more fine-grained control over demand. Another example is “smart transport,” where cars periodically broadcast their location, speed, and heading to enable collaborative collision avoidance between individual vehicles [Kargl, 2008], as well as to provide a detailed overview of the traffic situation to improve traffic prediction and control. When deployed at large scale, personal devices such as smartphones or even wearable computers can learn not only about their owners but also—collaboratively—about the behavior of larger groups and societies [Lukowicz et al., 2012]. For example, the activity recognition running on a smartphone (in order to understand a user’s current activity) can be made remotely available in order to allow for “opportunistic sensing”—the dynamic brokering of sensor information in order to collect information about a certain area, e.g., the level of crowdedness at a shopping mall, or the air quality across a city [Das et al., 2010, Ganti et al., 2011, Lane et al.]. Such large-scale, cooperative sensing with mobile devices can provide a “socio-technical fabric” [Ferscha, 2012], which can enable powerful analyses of social interactions, generate novel models of human behavior and social dynamics, and spur the development of socially tuned recognition algorithms [Lukowicz et al., 2012].
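At its core, opportunistic sensing pools readings contributed by many personal devices. The sketch below averages hypothetical per-area crowd counts; the data format and the simple mean stand in for a real sensor-brokering infrastructure.

```python
from collections import defaultdict

def aggregate_by_area(readings):
    """Pool (area, value) readings contributed by individual devices into
    per-area averages, e.g., crowd counts or air-quality measurements."""
    buckets = defaultdict(list)
    for area, value in readings:
        buckets[area].append(value)
    return {area: sum(vals) / len(vals) for area, vals in buckets.items()}

# Hypothetical crowd-count readings reported by three different phones.
readings = [("mall", 30), ("mall", 10), ("park", 4)]
print(aggregate_by_area(readings))
# {'mall': 20.0, 'park': 4.0}
```

Note that even this aggregation starts from individual, location-tagged contributions, which is exactly where the privacy questions of later chapters arise.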

A major challenge in achieving effects on a societal scale is the connection of many heterogeneous context-aware entities into large-scale ensembles of digital artifacts [Ferscha, 2012]. In addition to establishing inter-connectivity between heterogeneous systems and devices, self-organization and cooperation of those entities are paramount challenges in large scale complex dynamic systems [Th. Sc. Community, 2011]. Ensembles of devices with emergent and evolutionary capabilities can form societies of artifacts in order to support specific activities or user goals. Novel engineering and programming concepts are required to leverage the potential of these developments [Th. Sc. Community, 2011].

Embeddedness and continuous interaction will soon allow for long-term interactions with computers that will be much more intimate than any other data collection to date. However, with the realization of true inter-connectivity between heterogeneous systems, novel forms of social monitoring and control will not only cover the individual, but also extend to social groups and entire societies.

3.3  SUMMARY

Modern computing systems in general are characterized by a high degree of interconnectivity. However, mobile and pervasive computing systems differ significantly from “traditional” computers (e.g., a laptop or a desktop computer) for seven reasons.

1.  Novel form factors: Computers now not only come as powerful smartphones but are also embedded in clothing and toys, making it possible to have them with us almost 24 hours a day.

2.  Miniaturized computation and communication: Today’s miniaturized computing resources allow us to run sophisticated machine learning applications in real time, or wirelessly transfer large amounts of data at gigabit speeds.

3.  Always-on sensing: Modern sensors not only use less power than ever before, but also include sophisticated digital signal processors that provide application developers with usable high-level context information (e.g., indoor and outdoor location, physical activity).

4.  Software ecosystems: App stores have revolutionized the way we distribute and consume software. Never before was it easier to bring new software to millions of users, yet users now need to be better trained to understand the implications of installing untrusted programs.

5.  Invisibly embedded: The low cost of computing, communication, and sensing systems has made it possible to make rooms, buildings, and even entire cities “smart.” As this ideally happens without distractions (e.g., blinking lights), it will become increasingly difficult to tell augmented (i.e., computerized) from un-augmented spaces.

6.  Implicit interaction: The ubiquity of computers has made it possible to create “invisible” assistants that observe our activities and proactively provide the right service at the right time. Obviously, this requires detailed and comprehensive observations.

7.  Ubiquitous coverage: Embedding computing from small-scale (e.g., in our blood stream) to large scale (e.g., across an entire metropolitan area) significantly increases both vertical and horizontal coverage (i.e., across time and space) of our lives.

While today’s interconnectivity certainly forms the starting point for most of today’s privacy issues, the characteristics discussed in this chapter are of particular interest when looking at the privacy implications of mobile and pervasive computing. The next chapter will discuss these implications in detail.

1While the terms “pervasive computing” and “ubiquitous computing” were initially characterized by nuanced differences [Want, 2010], nowadays both terms are used interchangeably and refer to the same research field and community. For instance, the two premier research conferences in this area, UbiComp and Pervasive, merged in 2013 to form the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp); see http://www.ubicomp.org. In this book, we use the two terms interchangeably.
