Chapter 8. The Body as Interface

In the twenty-first century, the technology revolution will move into the everyday, the small and the invisible. The impact of technology will increase ten-fold as it is imbedded in the fabric of everyday life. As technology becomes more imbedded and invisible, it calms our lives by removing annoyances while keeping us connected with what is truly important.

Mark D. Weiser, 1999

The late Mark D. Weiser was chief scientist at Xerox PARC (Palo Alto Research Center, now named simply PARC) in the United States, one of Silicon Valley’s most revered institutions and home to several important computing inventions and technological advancements such as Ethernet, the Graphical User Interface (GUI), and the personal computer. Weiser envisioned a future in which computers are embedded in everyday objects with technology disappearing into the background, serving to calm rather than distract.

In 1996, he wrote the critical paper “The Coming Age of Calm Technology” with John Seely Brown (Xerox PARC’s chief technologist). Calm technology can be summarized as invisible and natural to use; it doesn’t interrupt or get in the way of life. It happens in the background, and it appears when you need it. This is how I personally see the second wave of AR evolving: it’s not about being lost in our devices, it’s about technology receding into the background so that we can engage in human moments, while being more deeply immersed in the real world that surrounds us.

In an interview1 in 2014, Brown spoke to the power and possibility of calm technology to be anticipatory and adaptive, remaining quietly aware in the background until it is needed and then springing to action without instruction by codifying your context. This aspect of calm technology ties back to Chapter 7, in which we explored the idea of adaptive and attuned agents, avatars, and objects anticipating and acting on your behalf through contextual awareness. This chapter continues with these ideas of calm technology as we augment our bodies to create a (near) invisible interface. From electronic textiles worn on the body, to embedding technology in the body, to brain-controlled interfaces, technology not only recedes into the background quietly and invisibly, it becomes intimately personal.

Electronic Skin and the Body as Touch Screen

On his Ubiquitous Computing web page,2 Weiser wrote in 1996, “Ubiquitous computing is roughly the opposite of virtual reality. Where virtual reality puts people inside a computer-generated world, ubiquitous computing forces the computer to live out here in the world with people.” Like ubiquitous computing, Augmented Reality (AR) shares this difference with Virtual Reality (VR). AR is about computing in the real world, where what becomes invisible is technology, not reality or the people in it. By taking cues from calm technology and ubiquitous computing, there are immense opportunities to evolve AR to change the way we exist and interact with our surroundings and one another in less distracted and more deeply connected ways.

For Weiser, the “highest ideal” of ubiquitous computing is “to make a computer so imbedded, so fitting, so natural, that we use it without even thinking about it.” The body is perhaps the most “natural” interface we have. Usability expert Jakob Nielsen writes,3 “When you touch your own body, you feel exactly what you touch—better feedback than any external device. And you never forget to bring your body.”

While attending the Conference on Human Factors in Computing Systems (CHI ’13) in Paris, the top research conference for Human–Computer Interaction, Nielsen was particularly impressed with two projects that use the human body itself as an integrated component of the user interface: Imaginary Interfaces and EarPut both move toward a direct experience of immersion using the body without screens.

Imaginary Phone (part of the Imaginary Interfaces project), designed by Sean Gustafson, Bernhard Rabe, and Patrick Baudisch from the Hasso-Plattner Institute in Germany, is a palm-based, screenless user interface. The user interface is “imaginary” because there is nothing beyond the naked hand; there is no projection or visual digital layering. Baudisch recalls a time when we used a stylus with personal digital assistants (PDAs), and how the iPhone and touchscreens later eliminated the need for a stylus. He says he wants to see this go even further by leaving behind screens as well.

The technology uses small depth-sensing cameras placed above the user (these could be worn on the body) to locate where the user’s fingers are and what part of his hand he’s touching. The interface could be used to interact with your mobile phone even if it’s not in front of you, such as in your pocket. Baudisch identifies4 how Imaginary Phone could be useful for the large number of “microinteractions” we perform every day, like turning off an alarm, sending a call to voicemail, or setting a timer, by interacting directly on the palm of your hand without needing to touch your phone. Specific functions that you personalize and define can be connected to your phone and activated as you touch different points on your hand.
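To make the idea concrete, here is a minimal sketch, in Python, of how a palm-based microinteraction layer might dispatch touches to phone functions. The palm region names, the PalmTouch event, and the actions are hypothetical placeholders; the actual Imaginary Phone prototype depends on depth-camera tracking and calibration steps that are not shown here.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class PalmTouch:
        """A touch event reported by a (hypothetical) depth-camera tracker."""
        region: str  # e.g., "index_base", "palm_center", "pinky_base"

    # Illustrative microinteractions a user might bind to parts of the palm.
    def dismiss_alarm() -> None:
        print("Alarm dismissed")

    def send_call_to_voicemail() -> None:
        print("Call sent to voicemail")

    def start_timer() -> None:
        print("Timer started")

    # User-defined mapping from palm regions to phone microinteractions.
    PALM_BINDINGS: Dict[str, Callable[[], None]] = {
        "index_base": dismiss_alarm,
        "palm_center": send_call_to_voicemail,
        "pinky_base": start_timer,
    }

    def handle_touch(event: PalmTouch) -> None:
        """Dispatch a palm touch to its bound microinteraction, if any."""
        action = PALM_BINDINGS.get(event.region)
        if action is not None:
            action()

    # Example: the tracker reports a touch at the base of the index finger.
    handle_touch(PalmTouch(region="index_base"))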

The researchers conducted experiments in which, under normal use, research subjects were equally fast choosing functions from a regular touchscreen phone and from the imaginary palm-based system. It is interesting to note, however, that blindfolded users were twice as fast touching their own hand on the palm-based system as on the regular touchscreen phone. The data gathered on nonsighted use is relevant to accessibility and assistive techniques for the blind, as well as to scenarios in which users are unable to look at their phone, or simply don’t want to be pulled away from what they are doing (having technology interrupt human interactions).

Researchers Roman Lissermann, Jochen Huber, Aristotelis Hadjakos, and Max Mühlhäuser from the Technical University of Darmstadt in Germany have created a prototype called EarPut, which proposes another body-based touchscreen replacement, this time using the ear as an interactive surface. “One of the pervasive challenges in mobile interaction is decreasing the visual demand of interfaces toward eyes-free interaction,” state the researchers. EarPut supports one-handed and eyes-free mobile interaction. It can enable otherwise noninteractive devices such as ordinary glasses or earphones, and complement the existing interaction capabilities of head-worn devices. At the time of its invention, EarPut served as a touch-based extension of Google Glass’s touch-enabled frame.

The researchers identify possible interactions for EarPut, including touching part of the ear surface, tugging on an earlobe (suited to on-off commands), sliding your finger up or down the ear (suited to volume control), and covering the ear (suited as a natural gesture for mute). Applications5 the researchers envision for EarPut include a remote control for mobile devices (particularly in playing music), controlling home appliances (such as a television or light sources), and mobile gaming.
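The gesture vocabulary the researchers describe maps naturally onto a small event-handling layer. The Python sketch below is a hypothetical illustration of routing ear gestures to media controls; the gesture labels and the MediaController class are assumptions made for illustration, not part of the EarPut prototype.

    class MediaController:
        """Toy media state used to illustrate ear-gesture routing."""

        def __init__(self) -> None:
            self.playing = False
            self.volume = 5
            self.muted = False

        def handle_ear_gesture(self, gesture: str) -> None:
            # Gesture names are hypothetical labels for the interactions
            # described above (tugging, sliding, and covering the ear).
            if gesture == "tug_lobe":        # suited to on/off commands
                self.playing = not self.playing
            elif gesture == "slide_up":      # suited to volume control
                self.volume = min(10, self.volume + 1)
            elif gesture == "slide_down":
                self.volume = max(0, self.volume - 1)
            elif gesture == "cover_ear":     # natural gesture for mute
                self.muted = not self.muted

    controller = MediaController()
    for gesture in ["tug_lobe", "slide_up", "cover_ear"]:
        controller.handle_ear_gesture(gesture)
    print(controller.playing, controller.volume, controller.muted)  # True 6 True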

In addition to our ears and the palms of our hands, researchers at MIT have explored conductive makeup applied to the face and body with Beauty Technology by Katia Vega, as well as the thumbnail as a trackpad with NailO by Cindy (Hsin-Liu) Kao. Kao has also developed DuoSkin, a temporary tattoo skin interface, with fellow researchers Asta Roseway, Christian Holz, Paul Johns, Andres Calvo, and Chris Schmandt in collaboration with Microsoft Research. All of these projects, however, including EarPut and Imaginary Interfaces, are still at the research and prototyping stage; no commercial products are available as yet. We will likely see calm technology first integrated into garments, with products for which the body and skin itself are used as a touchscreen coming later.

Responsive Clothing

Responsive clothing refers to garments embedded with sensors that react to your context, environment, body, and movement. Integrating principles of calm technology, responsive clothing creates new user interactions and adds to the ecosystem of AR devices. Responsive clothing can help guide you to a geographical location, coach you through an exercise sequence, and even be emotive and expressive, using your biometrics to communicate your excitement (or lack thereof). Garments like The Navigate Jacket and the No Place Like Home shoes are examples of a shift to a more human-centered experience, with technology not interrupting or getting in the way of life.

Dominic Wilcox’s GPS shoe prototype, No Place Like Home, guides you to the destination of your choice. Inspired by the film The Wizard of Oz (1939) and how Dorothy could click her heels to return home, the shoes integrate custom mapping software with GPS embedded in the heel, which is activated by a heel click.

The first step in preparing the shoes for your guided journey is using a computer to enter the destination of your choice on a map with the custom software Wilcox developed. After you’ve plotted your destination on the computer, select “upload to shoes”; the location details are then transferred via a USB cable that is plugged directly into the back of the shoe. Unplug the cable, slip on the shoes, click your heels to activate the GPS, and get walking. A ring of mini-LED lights points you in the direction of your desired destination with a progress bar of lights on the right shoe indicating how close you are to the final location.
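Under the hood, this kind of guidance comes down to two calculations: the compass bearing from the current GPS fix to the destination (to choose which LED in the ring to light) and the distance remaining (to fill the progress bar). The Python sketch below shows one plausible way to compute both; the 16-LED ring size and the function names are assumptions, not details of Wilcox’s prototype.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial compass bearing, in degrees, from point 1 to point 2."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return (math.degrees(math.atan2(y, x)) + 360) % 360

    def distance_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in metres (haversine formula)."""
        r = 6371000
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def led_index(bearing, heading, num_leds=16):
        """Which LED in the ring to light, relative to the wearer's heading."""
        relative = (bearing - heading) % 360
        return round(relative / (360 / num_leds)) % num_leds

    # Hypothetical fix: wearer in central London, facing north, walking home.
    here, there = (51.5074, -0.1278), (51.5155, -0.0922)
    print(led_index(bearing_deg(*here, *there), heading=0.0))
    print(f"{distance_m(*here, *there):.0f} m to go")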

Created by the Australian-based company Wearable Experiments, The Navigate Jacket also uses a built-in GPS system and LED lights, with integrated haptic feedback using vibrations to direct the wearer to her destination. Cofounder Billie Whitehouse states, “We are transforming the art of travel into a hands-free application.” The jacket allows wearers to walk to their destination without needing to hold or look at a map on a smart device or otherwise. Instead, the directions are visualized on the sleeves: the LED lights indicate how far ahead the next turn is and the total progress of the journey, while the vibrations tell the wearer which way to turn and when, felt as a tap on the shoulder.

Nadi X fitness tights are the latest project from Wearable Experiments, designed to correct your form during yoga practice. Tiny electronics are woven into the nylon material (no gadgets or wires protrude from the tights), sitting on the wearer’s hips, knees, and ankles. Using an accompanying smartphone app, the electronics communicate with one another, working together to determine where the wearer’s body parts are in relation to one another to help monitor and correct the body’s alignment. “It’s a wireless network for the body,” says6 cofounder Ben Moir. “We have a motion sensor in each part of the tights that knows exactly what angle you’re in.”

Like The Navigate Jacket, Nadi X utilizes subtle haptic vibrations to guide the wearer. Within the app, select the yoga poses you would like to monitor and allow form correction on. When you begin your yoga practice and settle into a pose, the sensors do a body scan and report back. For example, if in Warrior pose, your hip is rotated too far inward, a vibration will move across your hip in an outward direction, like the guiding hands of a yoga instructor. When everything is correctly aligned, the Nadi X tights give off a gentle “om” hum. “The nice thing about haptics is you process them subconsciously,” says Moir. “So, if you’re in the flow of yoga, you don’t have to look at a screen and engage your attention on the screen or listen to a voice instruction.”
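Conceptually, the tights’ feedback loop compares the measured joint angles against a target pose and vibrates on the side that needs adjusting. The following is a minimal sketch of that idea in Python; the target angles, tolerance, and cue names are invented for illustration and are not Wearable Experiments’ actual implementation.

    from typing import Dict

    # Hypothetical target angles (in degrees) for a Warrior pose, per joint.
    WARRIOR_TARGETS: Dict[str, float] = {
        "left_hip_rotation": 45.0,
        "right_knee_flexion": 90.0,
        "left_ankle_dorsiflexion": 20.0,
    }

    TOLERANCE_DEG = 10.0

    def pose_feedback(measured: Dict[str, float],
                      targets: Dict[str, float] = WARRIOR_TARGETS) -> Dict[str, str]:
        """Return a haptic cue per joint: 'ok', or the direction to adjust."""
        cues = {}
        for joint, target in targets.items():
            error = measured.get(joint, target) - target
            if abs(error) <= TOLERANCE_DEG:
                cues[joint] = "ok"                      # the gentle "om" fires when all joints read ok
            elif error > 0:
                cues[joint] = "vibrate_decrease_angle"  # nudge the joint back
            else:
                cues[joint] = "vibrate_increase_angle"  # nudge the joint outward
        return cues

    # Example: the hip is rotated too far inward relative to the target.
    print(pose_feedback({"left_hip_rotation": 25.0,
                         "right_knee_flexion": 92.0,
                         "left_ankle_dorsiflexion": 18.0}))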

Moir and Whitehouse see an opportunity to go beyond yoga and create form-correcting clothing for all kinds of sports like cycling, boxing, and weightlifting. They also envision a day when your pants could tell you when it’s time to leave your desk and walk around, or your shirt could remind you to sit up straight. Whitehouse says, “Yoga’s just our starting point. This can be useful across the board.”

American clothing company Levi Strauss & Co. has partnered with Google’s Advanced Technology and Products (ATAP) group using Project Jacquard technology (a conductive yarn that enables touch interactivity) to create and bring interactive garments to consumers. With Jacquard’s technology discreetly woven in, Levi’s Commuter x Jacquard by Google Trucker Jacket (available Spring 2017 in various US cities, before broader release in Europe and Asia later in the year) is designed specifically for urban bike commuters to stay connected without reaching for their smartphone. By tapping, swiping, or holding on the left cuff of the jacket sleeve, users can wirelessly access their smartphone and favorite apps to adjust music volume, change music tracks, silence a phone call, or get an estimated time of arrival on their destinations delivered by voice.7 “Anyone on a bike knows that navigating your screen while navigating busy city streets isn’t easy—or a particularly good idea,” says Paul Dillinger, head of global product innovation for Levi Strauss & Co. “This jacket helps to resolve that real world challenge by becoming the copilot for your life, on and off your bike.”

Each user can customize the textile interface with the Jacquard platform’s accompanying app, linking gestures to activate preferred functions, and configuring primary and secondary uses from a set of options ahead of time. Ivan Poupyrev, technical program lead at Google’s ATAP, says,8 “We don’t want to define what functionality is the most important, so we’ve given categories for users to choose from. Wearables to date have just been able to do one thing. In our case, the garment does what you want it to do.”
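The customization model Poupyrev describes amounts to a small, user-editable mapping from cuff gestures to functions drawn from a fixed set of categories. The Python sketch below illustrates that idea; the gesture names and function identifiers are illustrative assumptions, not the actual Jacquard app configuration.

    from typing import Callable, Dict

    # Functions a (hypothetical) companion app might let you choose from.
    def next_track() -> str: return "Skipping to next track"
    def silence_call() -> str: return "Call silenced"
    def speak_eta() -> str: return "ETA: 12 minutes"

    AVAILABLE_FUNCTIONS: Dict[str, Callable[[], str]] = {
        "music.next": next_track,
        "calls.silence": silence_call,
        "navigation.eta": speak_eta,
    }

    # One user's configuration, linking cuff gestures to preferred functions.
    user_config: Dict[str, str] = {
        "double_tap": "music.next",
        "swipe_in": "calls.silence",
        "cover_and_hold": "navigation.eta",
    }

    def on_cuff_gesture(gesture: str) -> str:
        """Resolve a cuff gesture through the user's configuration."""
        function_id = user_config.get(gesture)
        if function_id is None:
            return "Gesture not assigned"
        return AVAILABLE_FUNCTIONS[function_id]()

    print(on_cuff_gesture("double_tap"))  # Skipping to next track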

Another innovative aspect to the jacket is that it is produced in Levi’s existing factories; the interactive textile is woven on Levi’s looms in the same way as regular Levi’s jackets are. The ability to integrate the technology into established supply chains makes production at a large scale possible, as opposed to a few one-off products. Poupyrev says:

Often the thing that technology companies don’t really appreciate is that garments are made by apparel makers, not by consumer electronics companies. So, if we really want to make technology a part of every garment in the world, then we have to empower apparel makers such as Levi’s or any other brand, to be able to manufacture smart garments. It means you have to work with their supply chain.

Google is continuing to look at working with new partners and is exploring athletics, enterprise garments, and the luxury market. Poupyrev sees an industry-wide opportunity that he thinks consumers will quickly desire and expect. “If you look at the history of apparel you can see how technology comes in and adds new functionality, like nylon and zippers,” he says. “It’s very natural at this point that new technology becomes another ingredient in building apparel and fashion of the future. Once the appetite is there in the public for smart textiles, it becomes almost like a right. People will expect it all the time and everywhere.”

What kind of new protocols or rituals will we have when wearables do become part of our daily lives? Daan Roosegaarde is a designer exploring this very question. Roosegaarde views wearable computers as an extension of the body’s human mechanics, like sweating or blushing. His concept garment, the Intimacy dress, is made of opaque smart e-foils that become increasingly transparent during close and personal encounters with people. Social interactions determine the level of transparency, which responds to the heartbeat of the wearer. For example, when you become excited or stimulated and your heart beats faster, the garment becomes more transparent.
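As a thought experiment, the dress’s behavior can be modeled as a simple mapping from heart rate to e-foil transparency. The Python sketch below is purely illustrative; the thresholds and the linear ramp are assumptions, not Roosegaarde’s design.

    def transparency_from_heart_rate(bpm: float,
                                     resting_bpm: float = 60.0,
                                     excited_bpm: float = 120.0) -> float:
        """Map heart rate to e-foil transparency in [0, 1].

        0.0 is fully opaque (calm); 1.0 is fully transparent (excited).
        The linear ramp between resting and excited heart rates is an assumption.
        """
        if bpm <= resting_bpm:
            return 0.0
        if bpm >= excited_bpm:
            return 1.0
        return (bpm - resting_bpm) / (excited_bpm - resting_bpm)

    for bpm in (55, 75, 95, 130):
        print(bpm, f"{transparency_from_heart_rate(bpm):.2f}")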

Roosegaarde comments on how he would like wearables to react in different ways when a certain person is present, with neutral behaviors when others are around. He describes this as, “In the same way when you talk to your boyfriend, you have a different conversation than you would with me. It’s both English, we are both guys, but you tell different stories.”

He also asks what it would be like if your clothing began to make suggestions. He points to the online retailer Amazon as an example: when you buy a book, other books you might want to purchase are suggested based on your likes and those of your friends. Roosegaarde’s ideas hint at the cognizant computing in AR discussed in Chapter 7, wherein your personal assistant is not limited to your smartphone; rather, it is omnipresent and also embedded in your apparel.

Embedding Technology Inside the Body

Perhaps in the near future, as the examples of embeddables (tiny computing devices embedded inside your body) we will see here point to, technology will become “natural” once it is part of our physiology and integrated into our biology. Technology becomes us; there is no longer a separation, and we are augmented humans.

A step beyond responsive clothing is having wearables embedded in our bodies and under our skin as technological implants. The ears are a natural entry point for implants into the body for augmentation; they will likely be the first, fairly noninvasive step because we are already accustomed to placing things in our ears, like earbuds, Bluetooth wireless headsets, and hearing aids.

As described in Chapter 4, the iRiver ON Bluetooth earphones, powered by Valencell’s PerformTek biometric sensor technology, feature a Tic-Tac-sized sensor that is able to track your heart rate, calories burned, and speed and distance traveled, all by shining a light into the ear. Paired with an app on your smartphone, the device captures your biometrics during your workout, speaking into your ears to notify you of the heart-rate zone you are in and whether calorie goals have been reached. The real-time data is sent to the smartphone app, enabling you to review the captured biometrics post-workout.
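The heart-rate-zone announcements amount to bucketing the measured heart rate against a percentage of an estimated maximum. A common approximation of that logic is sketched below in Python; it is not Valencell’s actual algorithm, and the zone boundaries are generic training guidelines rather than the product’s.

    def heart_rate_zone(bpm: float, age: int) -> str:
        """Classify a heart rate into a rough training zone.

        Uses the common 220-minus-age estimate of maximum heart rate; both the
        estimate and the zone boundaries are generic guidelines, not PerformTek's.
        """
        max_hr = 220 - age
        pct = bpm / max_hr
        if pct < 0.60:
            return "warm-up"
        if pct < 0.70:
            return "fat burn"
        if pct < 0.80:
            return "aerobic"
        if pct < 0.90:
            return "anaerobic"
        return "maximum effort"

    print(heart_rate_zone(bpm=140, age=35))  # aerobic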

A benefit to such ear-based gadgets is that they can be almost invisible, which might appeal to people who don’t want their devices to draw attention. John Edson, president of design consultancy Lunar, points out, “The current trend is to hide the technology. The ear is a nice place to hide electronics.” Founder of wearable camera company Looxcie, Romulus Pereira, states, “Life requires both hands, frequently. Glasses and the wrist have become the poster child for exploring wearability. Behind this poster child is a whole population of things.” And these other “things” will come as sensors and computers get smaller and faster and get closer and closer to the body, even embedded beneath our skin.

The term “grinders” refers to a community of “biohackers,” people who are exploring sensory enhancement and augmentation with surgical implants. Richard Lee, one notable grinder, explains how the term comes from video games. “In gaming, grinding is where you methodically improve your character. After hours of play, you grab skills or powers,” he says.9 “We got stuck with that name because it resembles the approach we take: constant, methodical grinding to make implants and figure things out.”

Lee is experimenting with surgically implanted headphones that use magnetic speakers in his ears. Beyond listening to music, Lee says,10 “I can see myself using it with the GPS on my smartphone to navigate city streets on foot.” Aside from a tiny scar on Lee’s body and a coil necklace he conceals under his shirt, his implants are nearly invisible to the naked eye. Lee built the coil to wear around his neck; it creates a magnetic field that causes the implant to vibrate and make a sound.

Lee is losing his sight in his right eye. He plans to connect his new system to an ultrasonic rangefinder to have the ability to hear “hums when objects get closer or further away,” hoping to make his hearing more “bat-like.” “The implant is going to allow for a lot of new senses,” says Lee.11

He points out how most people in the grinder community begin with a magnetic finger implant as a kind of rite of passage. “You insert a special bioproof magnet in your fingertip and the nerves regrow around the magnet. After that, every time you pass your hand through a magnetic field, the magnet will vibrate in response, which lets you feel magnetic fields,” he explains. “Once you get the magnetic finger implant and you can sense the magnetic fields, all of a sudden you realize there is an otherwise invisible world that you can reach out and actually feel.” He comments on how it gets you thinking about other areas of the spectrum that you might not be able to perceive or see:

How far could humanity go if we could see those things, instead of having to guess? When you can see something, you gain intuitive knowledge about those areas. So, sensory enhancement and expansion has always been one of those things that to me is a no-brainer, because if you enhance the amount that you can see and experience, it’s just going to add to your view of what reality is, and what the world is like around you. That’s been our pursuit, I guess.

Dr. Daniel Kraft, Executive Director for FutureMed and the Medicine and Neuroscience Chair at Singularity University, states,12 “I think one way to frame this is how ‘hacking’ is moving from enabling the disabled to becoming super-enabled.”

Trevor Prideaux, a British man who was born without his left forearm, added a smartphone docking system to his prosthetic arm; he can hold his arm to his ear to place and receive calls. There is a rising trend in modern “medical augmentation,” which Kraft states is, “generally riding exponential trends of smaller devices, connected computing, and big data.”13 Will augmenting our bodies in these new ways truly make us super-enabled and superhuman?

Cassie Goldring, a Journalism and Media Studies student at Duke University, comments14 on the blurring line between humans and their technological extensions, and identifies a choice with which I agree; she writes:

We can choose to view these technological advancements as an eventual threat to our humanity, or we can see them as devices that help us become more human. As long as we maintain a strong humanistic perspective in the midst of technological advancement, and acknowledge that these technologies are extensions of us and not the reverse, our humanity will always prevail.

Human-centric experiences are at the heart of this second wave of AR and this will include the integration of devices that extend our natural abilities. I believe Goldring summarizes these pending issues and possibilities ahead quite well by stating, “We must look at devices such as Google Glass not as a desperate attempt to become superhuman, but instead, as an attempt to reach our full potential as humans, connect us in new ways, and ultimately gain a deeper understanding of one another.”

Think Your Reality

Brain–Computer Interfaces (BCIs), hardware and software systems that allow you to control a computer using your brain, offer a new way of connecting to and interacting with the world around us. In her 2010 TED talk,15 Tan Le, founder and CEO of Emotiv, an electronics company that develops BCI wearable devices, states:

Our vision is to introduce this whole new realm of human interaction into human-computer interaction so that computers can understand not only what you direct it to do, but it can also respond to your facial expressions and emotional experiences. And what better way to do this than by interpreting the signals naturally produced by our brain, our center for control and experience.

On stage at TED, Le showcased examples of life-changing applications of Emotiv such as a mind-controlled electric wheelchair.

BCIs can replicate the functionality of a mouse and keyboard by allowing you to click icons, scroll menus, and even input text using only your brain. BCIs are already commonly used in medical devices, but with the popularization of wearable technology, BCIs for the masses might not be as far off as we think. Pioneering companies like Emotiv and InteraXon have brought low-cost consumer BCI headsets to market that use EEG (electroencephalography) sensors to offer mindfulness training to improve your meditation skills or enhance concentration while at work.

“Our initial idea was: how to control the world with your mind?” says16 Ariel Garten, cofounder of InteraXon. “Now it’s more important to us to have a world that understands and adapts to your needs. It’s about helping people become better at doing what they want to do.”

Michael Thompson, vice president of business development at Neurable, believes BCIs will radically reshape our relationship with personal technology by creating computers that function as a direct extension of our brain:

Our vision is to create a world without limitations. For the traditional users of BCI technology—the severely disabled—this quite literally means enabling people to access technology and its innumerable benefits to the same degree as any other person. For humanity at large, we are excited by the revolution in imagination and creativity that this technology will unleash.

Neurable is building brain-controlled software for AR and VR. “Where AR and VR headsets have broken new ground, BCIs constitute the next evolutionary milestone in transformative technology,” says Thompson. “Augmented reality needs brain–computer interfaces to achieve its full potential. Neurable solves the ‘interaction problem’ by providing an interface that is intuitive and liberating.”

Wearing a Neurable-enabled HoloLens, you could mentally “click” the YouTube icon on the home screen. After YouTube opens, you could type the description of the video you are looking for. From the search results, you could select the video you want, click play, and leave a comment at the end. You would do all of this without a physical keyboard or hand gestures; it would be accomplished using only your brain.

To achieve this, the user currently wears an EEG skull cap in combination with AR hardware (like the HoloLens). Neurable’s point of view is that AR headset manufacturers will begin implementing EEG sensors in their headsets in the near future, so the hardware will be fully integrated and a skull cap will not be required.

Thompson believes there are certain environments in which existing control inputs such as voice and gesture are not adequate and might limit AR’s adoption. “This is particularly true of enterprise use cases for AR,” he says. For example, Neurable could be used with HoloLens to visualize the blueprints of a building under construction when voice or gesture might not be ideal. If a worker wants to activate an electrical wiring filter of the blueprint in the AR application, it might be difficult to use voice commands due to construction noise, or to use gesture controls while operating machinery and using physical tools. Neurable helps solve that problem by offering another way to interact with AR.

The science behind Neurable isn’t as simple as “reading your mind.” “Nobody has figured that out yet,” says Thompson. Neurable works by presenting you with a menu of options, and then figuring out which option you want to choose. Your choices are limited to whatever is currently displayed on the screen. Thompson explains:

The particular brainwaves we work with are associated with visually-evoked potentials (VEPs). VEPs present the user with a screen full of icons—think of the home screen on your smartphone that has a bunch of apps. VEP-based BCIs rapidly apply a visual stimulus to generate a brain response. When the icon you want to choose is stimulated, your brain produces a VEP-brainwave. Our system detects that response and matches it to the item you wanted to select.
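In rough pseudocode terms, a VEP-style selection loop flashes each icon, records the brain response time-locked to each flash, and selects the icon whose response scores above a confidence threshold. The Python sketch below illustrates only that control flow; the scoring function is a hypothetical stand-in for Neurable’s actual signal processing and machine learning.

    import random
    from typing import Callable, List, Optional

    def select_icon(icons: List[str],
                    score_response: Callable[[str], float],
                    threshold: float = 0.8,
                    max_rounds: int = 5) -> Optional[str]:
        """Flash each icon and return the one whose evoked response scores highest.

        score_response(icon) stands in for: flash the icon, record the EEG epoch
        time-locked to that flash, and return a classifier's confidence that the
        target (VEP-like) response was present.
        """
        for _ in range(max_rounds):
            scores = {icon: score_response(icon) for icon in icons}
            best = max(scores, key=scores.get)
            if scores[best] >= threshold:
                return best      # confident selection
        return None              # no confident selection; keep the menu on screen

    # Toy scorer: pretend the user is attending to the "YouTube" icon.
    def fake_scorer(icon: str) -> float:
        return random.uniform(0.7, 1.0) if icon == "YouTube" else random.uniform(0.0, 0.4)

    print(select_icon(["Mail", "YouTube", "Maps", "Settings"], fake_scorer))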

Thompson points out how there is a heavy component of user-intent and predictive analytics to Neurable. “This is applicable to calm technology in that we can present options and information to people only when we believe that it is relevant,” he says. Thompson notes how a person could use BCIs to reduce his cognitive load with software that responds to biometrics to display only the most relevant information. Researchers in Finland are experimenting with brainwave analysis as a method for content curation to help do just this.

Researchers at Helsinki Institute for Information Technology (HIIT) have demonstrated the ability to recommend new information based on directly extracting relevance from brain signals. The researchers completed a study using EEG sensors to monitor the brain signals of people reading text in Wikipedia articles, combined with machine learning models trained to interpret the EEG data and identify which concepts readers found interesting. Using this technique, the team was able to generate a list of keywords that study participants mentally flagged as informative while they read. The information then could be used to predict other relevant Wikipedia articles to that person. In the future, such an EEG method could be applied to help filter a social media feed for example, or identify content of interest to someone using AR.
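In machine learning terms, the pipeline the HIIT team describes is a per-word classification problem: each word a participant reads yields an EEG epoch, a trained model labels the epoch as relevant or not, and the words flagged as relevant become candidate keywords for recommendation. The following is a highly simplified sketch of that pipeline using scikit-learn, with synthetic random features standing in for real EEG data.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic stand-in for EEG features: one feature vector per word read.
    # In the actual study these would be derived from per-word EEG epochs.
    n_words, n_features = 200, 16
    X_train = rng.normal(size=(n_words, n_features))
    y_train = rng.integers(0, 2, size=n_words)  # 1 = reader found the word interesting

    classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # At reading time: words from a new article plus their (synthetic) EEG features.
    article_words = ["graphene", "the", "superconductivity", "was", "lattice"]
    X_new = rng.normal(size=(len(article_words), n_features))
    flags = classifier.predict(X_new)

    keywords = [word for word, flag in zip(article_words, flags) if flag == 1]
    print("Candidate keywords for recommendation:", keywords)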

“There’s a whole bunch of research about brain–computer interfacing but typically the major area they work on is making explicit commands to computers,” says17 researcher Tuukka Ruotsalo. “So that means that, for example, you want to control the lights of the room and you’re making an explicit pattern, you’re trying explicitly to do something and then the computer tries to read it from the brain.”

“In our case, it evolved naturally—you’re just reading, we’re not telling you to think of pulling your left or right arm whenever you hit a word that interests you,” says Ruotsalo. “So, it’s purely passive interaction in a sense. You’re just reading and the computer is able to pick up the words that are interesting or relevant for what you’re doing.”

Combining this process with Neurable could make for a calm technology experience in AR in which you engage as you normally would, with the technology triggering subsequent relevant content in the background. This could help minimize cognitive load, particularly in work tasks where lots of information is coming in and you need to remember multiple things. Such a system could serve to annotate importance in an information-intensive task, later reminding you to revisit things of interest.

“We are already leaving all kinds of traces in the digital world. We are researching the documents we have seen in the past, we maybe paste some digital content that we later want to get back to—so all this we could record automatically,” says Ruotsalo. “And then we express all kinds of preferences for different services, whether it’s by rating them somehow or pressing the ‘I like this.’ It seems that all this is now possible by reading it from the brain.”

Ruotsalo identifies the implications of being able to take interest signals from a person’s mind as potentially being a little dystopic, particularly in considering how marketing messages could be tailored to your interests as you engage with content. “So, in other words, targeting advertising that’s literally reading your intentions, not just stalking your clicks,” he says.

Ruotsalo hopes for other uses of the technology that have a positive impact. “Information retrieval or recommendation it’s a sort of filtering problem, right? So, we’re trying to filter the information that is, in the end, interesting or relevant for you,” he says. “I think that’s one of the biggest problems now, with all these new systems, they are just pushing us all kinds of things that we don’t necessarily want.”

We began this chapter by quoting calm technology pioneer Mark Weiser; let’s conclude with another prescient quote from him: “The scarce resource of the twenty-first century will not be technology; it will be attention.” As we continue to augment our environments, bodies, and minds with new technologies, our directed focus on the things that truly matter to us will be key, and it is my hope that technology will help us calmly achieve this intention rather than serve to distract or overwhelm. We design our technologies, and in turn, our technologies design us. This is our reality to design, to align with our human values. Now more than ever we must ask, how do we want to live in this new augmented world?

1 Calm Tech, Then and Now.

2 Ubiquitous Computing.

3 Jakob Nielsen, “The Human Body as Touchscreen Replacement,” Nielsen Norman Group, July 22, 2013.

4 Imaginary Phone.

5 Roman Lissermann, Jochen Huber, Aristotelis Hadjakos, Suranga Nanayakkara, Max Mühlhäuser, “EarPut: Augmenting Ear-worn Devices for Ear-based Interaction.”

6 Jessica Hullinger, “These Vibrating Yoga Pants Will Correct Your Downward Dog,” Fast Company, January 15, 2016.

7 These small interactions recall Imaginary Phone, but instead of using the palm of your hand and bare skin, here conductive textiles are used.

8 Rachel Arthur, “Project Jacquard: Google And Levi’s Launch The First ‘Smart’ Jean Jacket For Urban Cyclists,” Forbes, May 20, 2016.

9 Cyborg Series #3: Rich Lee is a Grinder.

10 Leslie Katz, “Surgically implanted headphones are literally ‘in-ear’,” CNET, June 28, 2013.

11 Ibid.

12 Seth Rosenblatt, “Hacking humans: Building a better you,” CNET, August 21, 2012.

13 Ibid.

14 Cassie Goldring, “Man or Cyborg: Does Google Glass Mark the End of True Humanity?” Huffpost, July 22, 2013.

15 Tan Le, “A headset that reads your brainwaves,” TED, July 2010.

16 “Mind games,” Macleans, December 19, 2012.

17 Natasha Lomas, “Researchers use machine learning to pull interest signals from readers’ brain waves,” TechCrunch, December 14, 2016.
