Chapter 4. Removing Sound from Interactions

THERE ARE MANY SITUATIONS where a sound doesn’t fit its context, volume level, or environment. It is not always necessary to remove such a sound; sometimes it’s sufficient simply to organize it. For example, we can organize sounds in a network of interconnected devices, such as those in a workspace, to keep them from interrupting at odd times.1

Removing sounds, or shifting them into other senses, can dramatically improve the user experience. This chapter provides a list of recommendations for removing sounds from interactions.

Unwanted Sounds

If you get annoyed when you hear the sound of someone chewing, you might have misophonia, or selective sound sensitivity syndrome. To people with this syndrome, specific sounds may incite feelings of stress, misery, or anger. Most of us have some sort of sound sensitivity, though, and we might agree that many of the following are unwanted sounds. They are annoying in some way, and we’d rather not hear them:

  • Heavy in transients (snoring, noisy chewing and lip smacking, hacking coughs)
  • Low frequency (sonic pressure of a large truck driving by)
  • High frequency (squeaky wheels, chalk screeching on a chalkboard, children crying)
  • Unexpected noise pollution (jackhammers, helicopters, steamrollers)
  • Repetitive sound (service workers might hear the same 20 songs played over a sound system every day)

Employees who are exposed to higher levels of ambient sound are more likely to be sick, tired, stressed, and inefficient at communicating. Some standards organizations impose noise exposure limits for workplaces, but these standards are seldom sufficient in the service, healthcare, childcare, or elder care industries, and they are more often geared toward preventing acute health risks than creating a pleasant environment.

Let’s look at a couple of examples of environments where sound could be reduced or removed to improve the lives of those within them.

Solving Alarm Fatigue in Hospitals

[ NOTE ]

This section was originally published by Kellyn Standley and Amber Case as “False Alarms and Alert Fatigue: The Tragic Design of Hospital Alerts,” Medium.com, October 31, 2018.

Of the health hazards produced by technology in healthcare, alarm fatigue—or stress and desensitization from frequent alarms—frequently tops published lists.2 Alarms keep patients awake at night, and some crucial ones are ignored when they sound too often. An astonishing percentage of alarms in hospitals are either false or clinically insignificant. These misleading alarms may be created by a mismatch between the default threshold for the alarm and the needs of the patient based on their size, age, and condition, or introduced through poor connectivity between sensors and the patient. The Joint Commission for Patient Safety reports: “The number of alarm signals per patient per day can reach several hundred depending on the unit within the hospital, translating to thousands of alarm[s] on every unit and tens of thousands of alarm[s] throughout the hospital every day.”3 Clinically insignificant alarms make up over 90% of pediatric ICU alarms and over 70% of adult ICU alarms.4 An estimated 80-99% of ECG heart monitor alerts do not require clinical intervention.5

Hospitals are already noisy, chaotic environments, and the introduction of nuisance alerts can easily overwhelm workers (see Figure 4-1). If the equipment were calibrated and redesigned to reduce the number of clinically insignificant alarms, it would also reduce fatigue in workers.

Figure 4-1. Some of the dozen machines with alarms and constant tones Amber Case’s father was hooked up to in May 2016.

Medical devices are regulated by standards organizations whose requirements specify that medical alerts must fit into specific frequency bands targeting the most sensitive part of our hearing range. It is reasonable to believe this is simply an unfit approach for the context: many of these devices meet legal requirements but still fail massively in experience design. The frequency and decibel requirements were set with the intention of enabling these alarms to be heard above background noise; however, in a context of hundreds of such alarms, all sounding frequently, the intended purpose is overwhelmed by the sheer number of alarms vying for attention.

While reducing the quantity of clinically insignificant alarms is a natural first step to mitigating the problem, rethinking the overall approach to sound design for these devices is essential. We explore some different strategies here. Some of these would require changes to the current regulations governing medical alerts, but others would not. Ultimately, converting to an integrated system and changing regulations seems well worthwhile, although it will take time to create a system that is sufficiently robust and well designed.

Because of the frequency with which healthcare workers are subjected to auditory notifications, those notifications must be designed to be calm, positive, and relatively nonintrusive so as not to exhaust the faculties needed to answer them. To tie this back to principles mentioned in Chapter 3, the more often an alert or notification occurs, the less intense it should be. Also, for alerts that happen at irregular intervals, the notification should be longer. Because of their importance, hospital alerts likely ought to be gentle but continuous until the underlying situation is addressed.
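To make the frequency-versus-intensity principle concrete, here is a minimal Python sketch. The function name, the 3 dB-per-doubling rule, and the gain floor are illustrative assumptions, not values drawn from Chapter 3 or from any standard.

    import math

    def alert_gain(events_per_hour: float, max_gain_db: float = 0.0,
                   min_gain_db: float = -18.0) -> float:
        """Return a playback gain (in dB) that falls as an alert grows more frequent."""
        # One event per hour plays at full gain; each doubling of frequency
        # softens the alert by 3 dB, down to a floor.
        reduction = 3.0 * math.log2(max(events_per_hour, 1.0))
        return max(min_gain_db, max_gain_db - reduction)

    print(alert_gain(1))    #  0.0 dB: rare alert, full intensity
    print(alert_gain(8))    # -9.0 dB: frequent alert, noticeably softer
    print(alert_gain(120))  # -18.0 dB: near-constant alert, held at the floor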

Could you imagine a joyful sound playing to bring a nurse or physician to a patient’s room, instead of a harsh alarm? A beautiful and complex aria? Unless we block our ears or have limitations in our hearing, we cannot avoid hearing sounds. A melody would be difficult to miss, and we enjoy listening to beautiful things, so our inherent preferences should reinforce attention to such alarms rather than detract from it. The new IEC 60601-1-8 guidelines for hospital alarms do allow for melodies,6 but overt melody-making could run an additional risk of extreme annoyance from overuse. It may be better to borrow principles of ambient awareness and sonification to create a series of nonrepeating soundscapes that both calm and inform, disappearing seamlessly into the background while creating a readable and digestible auditory text for practitioners.

One study of US hospitals showed that nurses take up to 40 minutes to respond to alarms,7 and another showed caregivers responding to only 10% of alarms.8 A further study demonstrated that caregivers could correctly identify only half of all relevant alarms.9 A quieter hospital would help with patient recovery. Recognizable sounds, such as music or common sounds in nature, could help with identification. And overall, a positive connotation for any sound played in this context could reduce the emotional fatigue for both employees and patients. Design for alarms in healthcare must take the human element into consideration.

To reduce the cognitive load for doctors and nurses, more information could be placed into ambient awareness. With conscientious design, this could be formed into a “soundscape of health.” We are arriving at a time when new capabilities in integrated devices are available to us. Instead of each manufacturer separately adding sound to just one device, we could build robust connectivity into each device (with a backup) in order to integrate sounds across a network.10 This would allow the entire system to be sound-designed as a single cohesive auditory experience, tailored to a particular specialty, culture, or location.

It is likely that we will not be able to achieve a functional integration of hospital alarms without such an approach. It could be tackled with the type of dedication currently being applied to developing automated vehicles, although likely with fewer novel problems to solve, and therefore fewer unknown technological hurdles. An interconnected system would enable greater context awareness for the machines. Context awareness is often critical to determining whether a particular reading is—or is not—clinically significant: “A heart rate of 170 on a treadmill test may warrant a low-priority condition whereas this same heart rate at an intensive-care monitoring station may be assigned a high priority.”11

Additionally, an integrated system can analyze separate pieces of biometric data to generate condition-specific alarms, highlighting life-threatening conditions. For example, the Cushing response is a hardwired response to increased intracranial pressure, caused by a traumatic brain injury. It is a sign that there is a high probability of death within minutes or seconds. Cushing’s response is indicated by decreased, irregular breathing, caused by impingement on brainstem function; low heart rate, caused by dysregulation of heart function; and elevated blood pressure coupled with a widening of the difference between systolic (“on beat”) and diastolic (“off beat”) arterial pressures; it may also be indicated by pathological waveforms, known as Mayer waves, on cardiac monitors. It occurs only in response to acute and prolonged elevations of intracranial pressure, and the combination of elevated blood pressure and low heart rate occurred 93% of the time that blood flow to the brain dropped below key thresholds due to increased intracranial pressure.12 It is a reliable indicator that requires immediate, life-saving intervention.

At present, Cushing’s response is identified by healthcare workers actively attending to several independent alarms and visual displays and summing this information. Such a calculation on the part of the healthcare workers should be unnecessary. This response is both well understood and critical for care. It could easily be programmed into a system of integrated devices, which could then take advantage of new scientific research and findings to further refine its diagnostics and evolve a truly effective alert system.
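As a rough illustration of what such a condition-specific rule might look like in code, here is a minimal Python sketch. The field names and every numeric threshold are placeholders chosen for illustration, not clinical values.

    from dataclasses import dataclass

    @dataclass
    class Vitals:
        heart_rate_bpm: float
        systolic_mmhg: float
        diastolic_mmhg: float
        respiratory_rate_bpm: float
        breathing_irregular: bool

    def cushing_response_suspected(v: Vitals) -> bool:
        """Flag the classic combination: low heart rate, elevated blood pressure
        with a widened pulse pressure, and depressed or irregular breathing."""
        bradycardia = v.heart_rate_bpm < 50                                # placeholder threshold
        hypertension = v.systolic_mmhg > 160                               # placeholder threshold
        widened_pulse_pressure = (v.systolic_mmhg - v.diastolic_mmhg) > 70
        depressed_breathing = v.respiratory_rate_bpm < 8 or v.breathing_irregular
        return bradycardia and hypertension and widened_pulse_pressure and depressed_breathing

    # One high-priority, condition-specific alert replaces several independent
    # alarms that a caregiver would otherwise have to notice and mentally combine.
    if cushing_response_suspected(Vitals(42, 190, 95, 6, True)):
        print("HIGH PRIORITY: possible Cushing response")

The point of the sketch is the integration: the system, not the caregiver, does the summing.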

An integrated system could also allow for automatic tailoring of the relevant thresholds for a particular patient using information entered into the hospital system, eliminating the time needed to individually customize alarm settings for each and every patient, as well as a large proportion of unnecessary alarms. An intelligent system could employ previously saved data to create an individual baseline, particularly when the patient has a special condition that presents with chronic irregularities that do not require clinical intervention. Devices could rely on listening and learning to coordinate their actions around the user, rather than the other way around.
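Here is a minimal sketch of one way such per-patient tailoring could work, deriving alarm limits from a patient’s own recent readings. The use of a mean and standard deviation, and the width of the band, are assumptions made purely for illustration.

    from statistics import mean, stdev

    def personal_limits(recent_heart_rates: list, band_sd: float = 3.0):
        """Return (low, high) alarm limits centered on this patient's own baseline."""
        baseline = mean(recent_heart_rates)
        spread = stdev(recent_heart_rates)
        return baseline - band_sd * spread, baseline + band_sd * spread

    # A patient with a chronically low resting heart rate gets limits that fit
    # them, instead of tripping a stock "below 60 bpm" alarm all night.
    history = [52, 54, 50, 53, 55, 51, 52, 54]
    low, high = personal_limits(history)
    print(f"alarm below {low:.0f} bpm or above {high:.0f} bpm")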

Designing an evolving information soundscape could reduce the alarm fatigue created by listening to the same type of constant, high-pitched beeps. Current guidelines for priority would be incorporated into the system, with a high priority given to alerts where death or irreversible injury could happen quickly, but a low priority if only discomfort or slow-developing conditions are likely. Soft background noises like crickets could indicate “all is well,” acting as a steady-state indicator of a patient’s biometric status, where the absence of such sounds indicates the need for an intervention, perhaps as a low-priority alert. Soft rhythms could indicate the pace of breathing or the current heart rate. This type of sonification could inform and unburden both caregivers and patients, allowing them to focus on important details. The advantage of direct sonification of data such as heart rate and breathing is that it translates the information directly, retaining a high degree of specificity and variance. Over time, doctors and nurses would develop their ability to listen and be able to interpret elements that conventional alarms convey poorly (emotional activation, for example, relates to heart rate and breathing, even though those variations are subtle). The soundscape would represent a true “fingerprint” for the patient, conveying multiple independent variables in a continuously generated composition.
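A minimal sketch of the kind of mapping such a soundscape might use follows. The layers and the choice of crickets as the “all is well” layer mirror the examples above; the parameter names and structure are assumptions for illustration.

    def soundscape_parameters(heart_rate_bpm: float,
                              respiratory_rate_bpm: float,
                              all_clear: bool) -> dict:
        """Map vital signs directly onto parameters of a generated soundscape."""
        return {
            # Heart rate sets the tempo of a soft underlying rhythm.
            "rhythm_bpm": heart_rate_bpm,
            # Breathing sets the period of a slow swell in a background layer.
            "swell_period_s": 60.0 / max(respiratory_rate_bpm, 1.0),
            # The steady-state layer drops out when an intervention is needed.
            "ambient_layer": "crickets" if all_clear else None,
        }

    print(soundscape_parameters(heart_rate_bpm=72, respiratory_rate_bpm=14, all_clear=True))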

A key insight from “Sonification” in Chapter 1 is that alerts need not carry a negative connotation in order to carry information. If additional meaning, such as priority level, can be added to an alert through nonstressful, emotionally neutral elements, such as increasing the tempo or the number of instruments in a composition (as with Brian Foo’s sonification of income along the New York subway line in Chapter 1), then the alert becomes a special message decodable by doctors and nurses while remaining nonalarming and neutral to patients.

Another advantage of an intelligently integrated system is that it can employ localized sound through audio beamforming (also discussed in Chapter 1). This requires a system of smart speakers distributed around a room, and it enables sounds to be set to different volumes in different parts of the space. This lessens the impact of alarm sounds on patients while still allowing them to be audible. Some sounds might be audible only when standing in a particular location, allowing the patient to rest in silence. Certain noncritical information, like the resting, ambient information about a patient, could be beamed to just outside the room. Imagine doctors and nurses being able to listen outside a patient’s room to assess the patient’s breathing rate, heart rate, blood glucose, and oxygen saturation. It could be a noninterruptive, calm way to assess a patient without even opening the door.

Haptic signals sent to devices worn by doctors and nurses for critical, time-sensitive matters can ensure that important alarms are not missed. Haptics can be useful for anyone who may need to receive information without the use of their eyes and ears, and they are a sensible backup to auditory notifications. Additional information could be conveyed in a coordinated fashion on a visual display, for instance, using devices like an Apple Watch or tablet. (See Chapter 8 for a more detailed discussion of switching notifications between hearing, sight, and touch.)

Even without changes to regulations, which an integrated system would require, much can be done to improve the quality and impact of hospital alerts. Redevelopment of medical equipment should focus on reducing connectivity issues between sensors and the patient, potentially by developing remote methods of monitoring the patient where such methods can succeed. Remote monitoring by measuring the CO2 released in breath is one promising avenue.13

In many cases, sensors can be improved by simple design changes, such as making them wireless, more discreet, and more comfortable to wear. Many sensors rely on a medical adhesive to maintain connectivity. If removing these sensors is uncomfortable for the patient, it is more likely that they will go unchanged for long periods of time, meaning the adhesive may dry out, which produces artifacts in the signals. A new adhesive could be an important breakthrough in increasing the reliability of alarm signals and ease of use.

Another important change is simply to improve the playout hardware. As researchers have noted, “many pieces of medical equipment currently use low-cost piezoelectric audible alarms for their signaling. These are the same kind of alarms used in smoke detectors or at checkout counters at grocery stores. These low-cost alarms can no longer be used because they will not meet the complex frequency requirements of IEC 60601-1-8”—the new optional guidelines for medical equipment manufacturers as of 2006.14 Low-quality speakers contribute to the cognitive burden on patients and healthcare workers by creating grating sounds.

A key element is allowing the sounds to be nonrepeating. A curious feature of human neurology is that our brains ignore sensations we receive too often. If we are constantly around a certain smell, such as a particular perfume, our brains will adapt to make it less apparent to us, to the point where it might altogether disappear from our conscious attention. If we get used to the sensation of glasses resting on our noses, we may forget we are wearing them, and if we hear the same sounds over and over, our brains will start to filter them out. Generative audio, discussed in Chapter 1, would prevent habituation to specific sounds. The introduction of variability would make notifications more interesting, causing us to pay more attention. And because our minds naturally prioritize novel stimuli, even if the alerts are more subtle than beeping alarms, the lack of repetition should allow our brains to notice the generated sounds with more acuity and sensitivity.
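One way to picture this is an alert whose meaning stays fixed while its surface details vary on every playback. The following Python sketch assumes invented motifs and arbitrary variation ranges; it illustrates the idea rather than describing any existing generative audio system.

    import random

    BASE_MOTIFS = {
        "iv_complete": [440.0, 554.4, 659.3],   # placeholder motif, in Hz
        "check_sensor": [392.0, 493.9],
    }

    def render_alert(alert_id: str) -> list:
        """Return (frequency_hz, duration_s) pairs, slightly different every time."""
        detune = random.uniform(0.97, 1.03)     # small pitch variation
        tempo = random.uniform(0.8, 1.2)        # small timing variation
        return [(f * detune, 0.25 * tempo) for f in BASE_MOTIFS[alert_id]]

    print(render_alert("iv_complete"))
    print(render_alert("iv_complete"))          # same alert, never exactly the same sound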

The myriad sounds we hear in nature are a good example of sounds that are clearly recognizable, yet change with each iteration. Using recognizable sounds from nature instead of abstract tones is one way we could aid memory and recognizability in alarms. Alternatively, because we are good at picking out instruments such as trumpets, strings, and piano even in a complex composition, it is not unreasonable to think that we could encode information in a constantly changing generative audio alert simply by creating rules about what trumpets, strings, or piano signify.

Beyond coding alarms with information based on the instruments present in the composition, we could assign melodies to page individual doctors and nurses, which they could learn and, over time, recognize instantly. In a hospital with such a system, patients would no longer have to block out interruptions from buzzing pagers and announcements; they would instead hear periodic melodies.

Companion technology to this system would of course be needed while the musical phrases and haptics were being learned. Once the musical phrases were known, doctors and nurses would have difficulty not noticing when they were played. This companion system would also provide an alternative for people with hearing impairments and a backup if the system goes offline. A pager, phone, or smartwatch could display the meaning of the messages in text and buzz to alert users.

The cacophony of beeps we have created as hospital alarms is simply poor design. It is ineffective and counterproductive. Let us imagine a better, more healthful hospital system.

Ambient Sound in Animal Environments and Habitats

Although we talk about sound exposure in work environments for humans, sound design for animals is often overlooked. It is essential to empathize with the creatures that live alongside us; animals can’t change their environments the way we can. The noise from dog kennels can reach above 100 decibels. Fish and amphibians can be disrupted by the noisy pumps and motors in their tanks. This can result in shortened lifespans and a miserable quality of life.

Simple acoustic treatment for these spaces can dramatically improve the experience of animals and those that work with them. Quieting pumps and tank components or accessories can improve conditions for these sensitive creatures. We can work to understand their particular needs and help them live full and enriched lives.

In some cases, adding sounds to animal environments can be helpful for desensitizing animals to the sounds associated with human company. Particularly for animals like rabbits and guinea pigs, but also for feral cats and abused animals, sounds from children’s movies can help familiarize them with background noise, including language and music, which can make them calmer in the presence of people.

Guidelines for Removing Sound from Interactions

In this section we’ll look at specific guidelines for removing sound from common user interactions with your products.

Enable Users to Turn Off the Sound or Change the Notification Style

Consider how a sound might disrupt others or go off when unnecessary. Some insulin pumps have beeps that cannot be turned off or reduced in volume. The alerts from these lifesaving devices might not be heard in loud environments, and might disrupt others in quiet ones. Allowing users to switch the notifications to a vibration can help prevent contextual mismatch.

Eliminate Redundant Notifications When Possible

Consider removing or “downgrading” sounds where they might be unnecessary or redundant. This will lighten the cognitive load on the user and unify the sensory aspects of the product experience. Consider the sound’s default volume, whether the sound builds over time (like a teakettle), and whether or not the sound can be turned off by the user.

Pair Sounds with Haptics

Consider how to pair sound with haptic stimuli, including timing and rhythm. Language and voice can often be converted to tones, and tones can often be converted to lights or haptics. Be creative.

Ensure Sounds Fit Within the Context

You cannot fully anticipate the context in which your product will be used. The more intrusive a sound is, the more control the user should have over the way it will intrude.

Imagine setting a dishwasher late at night and having a noisy alert disturb your sleep. Some home appliances, like washers and dryers, allow users to turn off the sounds entirely. Here is a set of considerations for making sounds flexible:

  • Allow the user to change the volume of the sound via an easily accessible volume setting.
  • Give the user the ability to turn the sound off—ideally, both for each notification individually and with a single control that mutes everything. However, if a sound is associated with critical machinery (such as construction equipment, or back-up sounds for electric vehicles), it should be built to be unalterable.
  • Add a setting where users can set a time range during which the product won’t make a sound. For example, Apple’s iPhone has a Do Not Disturb mode that ceases notifications at a specific time of day. It is important to allow the range to be set—don’t assume everyone has the same sleep schedule. Late-night or swing-shift workers might have different noise reduction needs.
  • Allow users to change the sound into a different type of notification, such as a haptic (touch/vibration) or light signal. Converting to a haptic notification is often a perfect solution for personal devices that are naturally close to the body, since haptic signals are mainly perceptible by the intended recipient, not those around them. This kind of change can help give users control over wearable medical devices such as insulin pumps (which often come with sounds that cannot be switched off), and is especially useful if the notification is critical to life support. Relying solely on sound can be dangerous when the user is in a loud environment and unable to hear the alert, and can cause problems during movies or funerals, for example. (A minimal sketch of this kind of notification routing follows this list.)
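Here is a hedged Python sketch of routing a notification to sound, haptics, or light based on user settings and time of day. Every field name, default, and rule is an illustrative assumption rather than a recommendation for any particular product.

    from dataclasses import dataclass
    from datetime import time

    @dataclass
    class Preferences:
        quiet_start: time = time(22, 0)   # user-set; don't assume everyone's schedule
        quiet_end: time = time(7, 0)
        allow_sound: bool = True
        allow_haptic: bool = True

    def in_quiet_hours(now: time, p: Preferences) -> bool:
        # Handle quiet ranges that cross midnight (e.g., 22:00 to 07:00).
        if p.quiet_start <= p.quiet_end:
            return p.quiet_start <= now < p.quiet_end
        return now >= p.quiet_start or now < p.quiet_end

    def route(critical: bool, now: time, p: Preferences) -> str:
        if critical:
            # Life-critical alerts use every available channel and ignore quiet hours.
            return "sound+haptic"
        if p.allow_sound and not in_quiet_hours(now, p):
            return "sound"
        if p.allow_haptic:
            return "haptic"
        return "light"

    print(route(critical=False, now=time(23, 30), p=Preferences()))  # "haptic"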

Check That the Frequency Matches the Context

Humans naturally localize high-frequency, short-duration sounds such as clicks, and the human ear is most sensitive to sounds in the 2–4 kHz range. These sounds can be used for urgent, unmissable alerts that occur relatively infrequently (such as fire alarms), but if your product doesn’t require this kind of notification, consider using a different range of frequencies or ambient soundscapes and background notifications. In certain cases, such as construction and medical alerts, ISO guidelines constrain alerts to specific frequency ranges. Check the parameters of your project before designing out of the box.

Reduce Noise from Mechanical Sources

Operating noise is one of those qualities that can make or break that hard-to-define “feel” of a product. If you’re designing consumer electronics, appliances, or other products that contain motors or transformers, or require cooling, put some thought into the unintended sources of sound. It’s often worth the small additional expense to use a quieter fan, for example, or to include sound-dampening measures to reduce a product’s unintended sonic footprint.

Consider Active Noise Cancellation Technology

Many modern headphones use active noise cancellation to reduce the effects of noise, and versions of this technology are used in telepresence systems as well as mobile devices and laptops. Active noise cancellation introduces some distortion into the signal it is protecting from noise, but it reduces the overall fatigue users can experience from the constant pressure of high decibels, and it can make certain products and experiences much more tolerable.

Reduce the Volume of the Alert

The simplest fix for intrusive sounds may be to decrease their volume. For example, the sound when a user presses a start button on an electric kettle need not be as loud as the sound announcing that the water is ready. It may not be the nicest-sounding beep, but with this simple change we have gone from “bad” to “not bad.”

This is a simple example of what we mean by paying attention to context. Even though both sounds are made by the same device and heard by the same person, they occur in different contexts, which can be predicted with a bit of thought and attention. A “set” alarm always occurs when the user is next to the device, while a “boil” alarm usually happens when the user is at a distance. Taking this simple fact into account allows for a much more satisfying, humane audio user interface.

A kettle in an industrial kitchen shouldn’t need sounds at all; it is already a complex environment. The kettle could instead have a visual indicator and the option to turn on a sound if needed. Ideally, in a home the sound can be made louder if the house is large; if it’s a tiny condo, you should be able to turn the sound down or off.
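A minimal sketch of this context-aware volume idea, with the environments and decibel offsets invented purely for illustration:

    VOLUME_DB = {
        # "set" happens with the user next to the kettle; "boil" may happen
        # when they are across the room.
        ("home", "set"): -30,
        ("home", "boil"): -12,
        ("industrial_kitchen", "set"): None,    # silent; rely on a visual indicator
        ("industrial_kitchen", "boil"): None,
    }

    def alert_volume(environment: str, event: str):
        """Return a gain in dB for this event, or None for no sound at all."""
        return VOLUME_DB.get((environment, event))

    print(alert_volume("home", "set"))                 # -30: quiet confirmation
    print(alert_volume("home", "boil"))                # -12: audible across the room
    print(alert_volume("industrial_kitchen", "boil"))  # None: visual indicator instead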

Remove or Reduce Speech

Speech is more cognitively expensive to process than a simple tone. An in-car navigation system is a more reasonable use of speech than a robotic vacuum cleaner. Consider whether spoken words are absolutely necessary to your product. Replace speech with tones, lights, or short melodies, and consider replacing tones with haptic alerts or small indicator lights. Downgrade speech to alerts, or alerts to lights, to reduce cognitive burdens on your users.

Conclusion

Products don’t always need reductions in sound, but when they do, consider changing the alert style or volume, and providing the ability to turn the sound off. Test your product in a variety of contexts, including late at night, when the cost of intrusion could be higher than during the day. Finally, consider what happens when things go wrong, and how your product might work smoothly alongside people—or animals—in everyday life. You are never going to be able to predict all of the contexts in which your product might be used, but allowing users to change how it sounds is the best way of ensuring that they can adequately adapt it to their needs.

1 For more on mitigating interruptive technology, see Aaron Day, “Nash and the Noise: Using Iota to Mitigate Interruptive and Invasive Technology,” http://bit.ly/2RjNvP4.

2 Alarm fatigue has shown up on ECRI’s yearly list of “Top 10 Health Technology Hazards” every year since first being included in 2014, http://bit.ly/2Omrg8Y.

3 Joint Commission on Patient Safety, “Medical Device Alarm Safety in Hospitals,” https://www.jointcommission.org/assets/1/6/SEA_50_alarms_4_26_16.pdf.

4 Amogh Karnik and Christopher P. Bonafide, “A framework for reducing alarm fatigue on pediatric inpatient units,” PMC, http://bit.ly/2SGiqWX.

5 Samantha Jacques, PhD, and Eric Williams, MD, MS, MMM, “Reducing the Safety Hazards of Monitor Alert and Alarm Fatigue,” PSNet, http://bit.ly/2CXd0Bu.

6 Dan O’Brien, “Audible Alarms in Medical Equipment,” Medical Device and Diagnostic Industry, https://www.mddionline.com/audible-alarms-medical-equipment.

7 P.M. Sanderson, A. Wee, and P. Lacherez, “Learnability and discriminability of melodic medical equipment alarms,” NCBI, https://www.ncbi.nlm.nih.gov/pubmed/16430567.

8 Marie-Christine Chambrin, “Alarms in the intensive care unit: how can the number of false alarms be reduced?” NCBI, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC137277/.

9 A.J. Cropp, L.A. Woods, D. Raney, and D.L. Bredle, “Name that tone. The proliferation of alarms in the intensive care unit,” NCBI, https://www.ncbi.nlm.nih.gov/pubmed/8162752.

10 For more discussion of this concept, see Aaron Day’s “Nash and the Noise: Using Iota to Mitigate Interruptive and Invasive Technology,” http://bit.ly/2RjNvP4.

11 O’Brien, https://www.mddionline.com/audible-alarms-medical-equipment.

12 W.H. Wan, B.T. Ang, and E. Wang, “The Cushing Response: A Case for a Review of Its Role as a Physiological Reflex,” Journal of Clinical Neuroscience 15, no. 3 (2008): 223–228. doi:10.1016/j.jocn.2007.05.025. PMID 18182296.

13 B.D. Guthrie, M.D. Adler, and E.C. Powell, “End-Tidal Carbon Dioxide Measurements in Children with Acute Asthma,” Academic Emergency Medicine 14, no. 12 (2008): 1135–9.

14 O’Brien, https://www.mddionline.com/audible-alarms-medical-equipment.
