CHAPTER 1

Introduction

I work in the field of designing and evaluating technology for people with disabilities, especially those with cognitive disabilities. In doing this I have often drawn, particularly in the early stages of a project, on the ideas that were “in the air” while I was in graduate school in the Centre for Lifelong Learning and Design (L3D, 2006) at the University of Colorado at Boulder (Figure 1.1). This is because the nature of the problem and of the target population does not lend itself to easily described requirements and quantifiable results. Let’s go back over this somewhat cryptic sentence. In this domain the problem is often a mismatch between the cognitive skills available and the task’s cognitive requirements, for instance performing a job that is just a bit too complex to remember all the steps (Lancioni et al., 2000). This is very different from magnifying a screen or reading its text aloud to compensate for visual deficiencies. For example, the Americans with Disabilities Act (ADA, 1990) has a very nicely quantifiable list of requirements (ADA, New England ADA Center, 2015) for the accessibility of buildings, mostly concerning mobility issues (e.g., wheelchair access), which makes it easy to see whether a building’s accessibility requirements are met. There is no such simple set of requirements for accessible and useful cognitive technology, although standards committees (ISO, 2015; RESNA, 2015) are working on this thorny issue. Additionally, the environment of a job may change, as may the specifics of the task at hand (Suchman, 1987), whereas the relation between a screen and its user almost always stays the same. As for the population, compensating for missing cognitive abilities is both complicated and unique to each person, owing to individual cognitive variations (Cole, 2013) and co-existing illnesses and pathologies (Mc Sharry, 2014).

1.1 LAY OF THE LAND

Before continuing, we need to unpack several concepts: (1) what is meant here by “people with cognitive disabilities” and what their unique needs are; and (2) what problems we are trying to solve. Finally, there will be several examples of the types of technology targeted.

The end-users for these technologies can be roughly described as people with cognitive disabilities, but this broad category can be misleading. More precisely, the people who need these support systems can be functionally described as lacking the cognitive functions (whether congenitally, through acquired injury or illness, or through gradual cognitive decline) that would make them capable of determining and taking the actions needed to live a life of high quality, because of deficiencies in memory, in executive function, or in the ability to make accurate and appropriate decisions based on the events of ordinary life. This is a functional definition (Scherer, 2011), so it does not particularly matter what the etiology of the impaired cognition is, i.e., the type of congenital problem in development that underlies the functional deficiency (with the exception, to some extent, of the peculiarities of autism—see Section 5.1). Nor does the manner of losing cognitive acuity and mnemonic ability—such as acquired traumatic brain injury or diseases of aging like Alzheimer’s (in the early stages) or mild cognitive impairment—make for different sets of requirements. In this world, every cognitive impairment of sufficient morbidity becomes a “universe of one” to be designed for, and the cause often matters as little as the geographic location. The population is thus functionally defined, as will become clearer in the discussion of the International Classification of Functioning, Disability, and Health (ICF) (World Health Organization, 2001) in Section 5.1.

To what kinds of support systems are the concepts described here applicable? Just as every artificial intelligence (AI) problem can be described as a classification problem, or a set of such problems (Russell and Norvig, 2009), the applications here are all versions of task support. Task support spans shopping, cooking, navigation (by bus or on foot), planning activities, taking medication, washing hands, doing a job, deciding what to cook, using money or a computer, deciding where to go and how to get there, and figuring out what to do when lost or off track: anything and everything to do with day-to-day tasks, from the very simple to complex sets of tasks. That’s a lot of ground to cover. Some of the examples are very focused, such as helping elders with cognitive impairment properly wash their hands (which turns out to be a very hard task) (Mihailidis et al., 2008); others are as broad as traveling across the county to visit your sister (Assistant Project, 2015). What we are not talking about are the specific sensory issues associated with some forms of intellectual disability, such as dyslexia or dyscalculia.
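To make “task support” concrete for readers who build software, here is a minimal sketch in Python of the shape such support often takes: an ordered script of steps, each carrying a prompt, stepped through one at a time. The names here (TaskStep, TaskScript, run_prompter) are my own illustrative assumptions for this sketch only; they are not taken from any of the systems discussed later in the book.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TaskStep:
    """One step of a day-to-day task, e.g., 'Add one cup of detergent.'"""
    prompt_text: str                  # short verbal/text prompt
    image_file: Optional[str] = None  # optional picture prompt
    audio_file: Optional[str] = None  # optional recorded voice prompt


@dataclass
class TaskScript:
    """An ordered set of steps for a single task."""
    name: str
    steps: List[TaskStep] = field(default_factory=list)


def run_prompter(script: TaskScript) -> None:
    """Present one step at a time; the user confirms each step before moving on."""
    for i, step in enumerate(script.steps, start=1):
        print(f"[{script.name}] Step {i} of {len(script.steps)}: {step.prompt_text}")
        input("Press Enter when this step is done...")  # stand-in for a big 'Done' button


if __name__ == "__main__":
    laundry = TaskScript(
        name="Do the laundry",
        steps=[
            TaskStep("Put the clothes in the washing machine"),
            TaskStep("Add one cup of detergent"),
            TaskStep("Close the door and press the green button"),
        ],
    )
    run_prompter(laundry)

In a real system the print and input calls would be replaced by pictures, recorded voice prompts, and a large touch-screen “Done” button, but the underlying data can be as simple as this.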

One of the interesting side effects of working in this area is that, for almost any application discussed, someone will say that they could use the system themselves from time to time. The now-familiar curb-cut effect (Carmien et al., 2005), when it meets context-caused functional cognitive disabilities, leads to many opportunities, such as using a public-transportation navigation system to support tourists who are new to an area and are non-native speakers of the local language. However, care must be taken to avoid creeping featurism, which can end in a system that the original target population finds difficult to use.

1.2 ASSISTIVE TECHNOLOGY (AT): ADOPTION AND ABANDONMENT

Device rejection is the fate of a large percentage of purchased assistive technology (King, 1999; 2001). Caregivers report that difficulties in configuring and modifying the configuration of assistive technology often lead to abandonment1 (Kintsch and dePaula, 2002), an especially poignant fate considering that these types of systems may cost thousands of dollars. While assistive devices can have a profound effect on a life, such devices have a high abandonment rate, ranging from 8% for life-saving devices to 75% for hearing aids. In fact, about one-third of all assistive devices are abandoned (Scherer, 1996; Scherer and Galvin, 1996). While there are no studies examining the abandonment rate across all types of assistive devices (Kintsch and dePaula, 2002), some experts estimate that as many as 70% (Martin and McCormack, 1999; Reimer-Reiss, 2000) of all such devices and systems are purchased and not used over the long run, particularly those designed as cognitive orthotics (LoPresti et al., 2004). Abandonment has many other dimensions; a study by Phillips and Zhao reported that a “change in needs of the user” showed the strongest association with abandonment (Phillips and Zhao, 1993). Thus, devices that cannot accommodate the changing requirements of their users are highly likely to be abandoned. It follows logically (and is confirmed by interviews with several AT experts (Kintsch, 2002; Bodine, 2003)) that an obstacle to device retention is difficulty in reconfiguring the device. A survey of abandonment causes lists “changes in consumer functional abilities or activities” as a critical component of AT abandonment (Galvin and Donnell, 2002). A study by Galvin and Scherer states that one of the major causes of AT mismatch (and thus abandonment) is the myth that “a user’s assistive technology requirements needs to be assessed just once” (Scherer and Galvin, 1996); ongoing re-assessment and adjustment to changing needs is the appropriate response. A source for research on the other dimensions of AT abandonment, and on the development of outcome metrics to evaluate adoption success, is the ATOMS project at the University of Wisconsin-Milwaukee (Rehabilitation Research Design & Disability (R2D2) Center, 2006). The types of AT designed to support task completion, decision making, and navigation are specifically the most at risk of non-adoption or abandonment, and it is with these kinds of systems that this book aims to help the designer succeed.

Successful AT design for this population must support the interface requirements of users with cognitive impairments and must also treat configuration and other caregiver tasks as different, yet equally important, requirements for a second user interface (Cole, 1997). One proven approach applies techniques such as task-oriented design (Lewis and Rieman, 1993) to mitigate technology abandonment problems. Research (Fischer, 2001b) and interviews (Kintsch, 2002) have demonstrated that complex, multifunctional systems are the most vulnerable to abandonment because of the complexity of their many possible functions.
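Because difficulty of reconfiguration and changing user needs are such prominent abandonment risks, it may help to see the “second user interface” idea sketched as code. The following minimal Python sketch assumes a hypothetical prompting system; the class and method names are illustrative inventions, not an existing API. The point is the separation of roles: the caregiver interface exists to change the task script as the user’s needs change, while the end-user interface shows exactly one prompt at a time.

from dataclasses import dataclass, field
from typing import List


@dataclass
class Step:
    prompt_text: str


@dataclass
class Script:
    name: str
    steps: List[Step] = field(default_factory=list)


class CaregiverInterface:
    """Configuration side: the caregiver edits the script as the user's needs change."""

    def __init__(self, script: Script) -> None:
        self.script = script

    def add_step(self, position: int, prompt_text: str) -> None:
        self.script.steps.insert(position, Step(prompt_text))

    def remove_step(self, position: int) -> None:
        del self.script.steps[position]

    def reword_step(self, position: int, prompt_text: str) -> None:
        self.script.steps[position].prompt_text = prompt_text


class EndUserInterface:
    """Presentation side: shows exactly one prompt at a time, nothing more."""

    def __init__(self, script: Script) -> None:
        self.script = script
        self.current = 0

    def current_prompt(self) -> str:
        return self.script.steps[self.current].prompt_text

    def done(self) -> None:
        # Advance to the next step; stay on the last step when the task is finished.
        if self.current < len(self.script.steps) - 1:
            self.current += 1


if __name__ == "__main__":
    shopping = Script("Buy groceries", [Step("Take the shopping list"),
                                        Step("Walk to the corner store")])
    caregiver = CaregiverInterface(shopping)
    caregiver.add_step(2, "Pay with the blue debit card")  # needs changed: add a step, no reprogramming
    user = EndUserInterface(shopping)
    print(user.current_prompt())  # prints: Take the shopping list

In practice the caregiver side is a full editing environment and deserves the same usability attention as the end-user side, but keeping the two roles architecturally distinct is what can make ongoing re-assessment and adjustment cheap enough to actually happen.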

It is very encouraging to see an increasing educational focus on the design of assistive technology in engineering schools across America. One of the useful marks of many of these courses is the involvement of hands-on professionals, often special education teachers or vocational rehabilitation specialists, ensuring not only that the technology is built right (in software engineering terms, verified as correct (IEEE, 1990)) but also that it is the right technology for the user and the problem (in software engineering terms, validated as the right system). This is a big step forward from what was more common in the 1990s and 2000s, when many designs seemed to come from naive but inspired teams and to be more an exploration of what could be done than of what was needed. But beyond involving members of the various stakeholder groups (see Section 4.5 in Part 2), there are other ways of looking at needs and implementing solutions, drawn from other disabilities and from looking more deeply at assistive technology (AT) design itself. Exploring some of these ideas is what this book is about. L3D’s approach to design, looking deeply at the context and pattern of problems as problems, has proved remarkably useful in working with these complex, dynamic, and often idiosyncratic challenges.

1.3 ASSISTIVE TECHNOLOGY AND THE TOOLKIT

The ideas here comprise what I think of as my AT design toolkit—a set of conceptual guides and levers to support and contextualize AT and Design for All2 (DfA) (Stephanidis and Savidis, 2001; Center for Universal Design, 2011) approaches. The notion of a toolkit comes from Tammy Sumner (1995):

I refer to these software collections as “high-tech toolbelts” because each designer assembles her personal collection just as a carpenter assembles a collection of hammers, screwdrivers, tape measures, etc. into a personal toolbelt.

While this refers to the strengths and problems involved in using multiple tools to create representations of problems, the notion of a “toolbelt” (or, in this case, a “toolkit”) is, I think, a good one. We all bring toolkits to design problems, and all problems that are not a simple assembly of pre-created parts are design problems. For some of us, the toolkit is simply the set of skills that have worked for us in the past; perhaps they were taught to us in a formal fashion (i.e., schooling or apprenticeship), or perhaps they are part of the cultural zeitgeist we were raised in. My approach is to exteriorize the choice of the tools3 and lenses used to approach each design project. By exteriorize I mean that, after first looking carefully and without bias at the problem or situation, one steps back and considers what might be applied to make the process tractable, or even just what it resembles from previous experience.

So what do you look at? The simple answer is the end-user and what they want to, or must, do. To think about end-users you end up thinking about the context of use and the larger goal of use. The ideas and tools here are intimately tied to this investigation.

Where did the toolkit come from? The majority of the ideas here came from my time in graduate school at the University of Colorado, in the Centre for Lifelong Learning and Design. So what does the L3D approach have to say about this? The general approach can be summarized with this quote (L3D, 2006):

The Centre for Lifelong Learning and Design is part of the Department of Computer Science and the Institute of Cognitive Science at the University of Colorado at Boulder. The mission of the center is to establish, both by theoretical work and by building prototype systems, the scientific foundations for the construction of intelligent systems that serve as amplifiers of human capabilities.

Amplifiers of human capabilities . . . This is supported by five basic dimensions brought together from widely different research cultures (Fischer, 2002; 2003):

artificial intelligence (AI) → intelligence augmentation (IA);

instructionist learning → constructionist learning;

individual focus → social contexts;

things that think → things that make us smart; and

what computers can do → human and computer synergies.

This book discusses, in some depth, the dimensions that particularly relate to the task of designing AT. L3D members are drawn from computer science, education, psychology, electrical engineering, architecture/urban planning, microbiology, and sociology/anthropology. L3D has two parent organizations: the Department of Computer Science and the Institute of Cognitive Science. L3D interacts with other academic units at CU Boulder (College of Architecture and Planning), K-12 schools, community groups, government laboratories (NCAR/UCAR), and industrial partners (BEA, Siemens, IBM, Apple, PFU, SRA).

Not all of the elements in this book come specifically from L3D; some come from my experience in developing ATs in the lab while being supported by the Coleman Institute for Cognitive Disabilities (Figure 1.2). The Coleman Institute formed the bridge between L3D and ATs. The Coleman Institute became interested in the work that L3D was producing, particularly its unique approach, and I was fortunate enough to be supported by them during my Ph.D. studies. L3D formed a group of developers and researchers that called themselves CLever (Cognitive LEVERs) (CLever, 2004), which produced several projects, many papers, and two dissertations. CLever pulled together interest in AT, our expertise in design and cognitive science, and, most importantly, external input from domain experts and the L3D community. The domain experts (one half-time staffer and any number of invited speakers at the meetings) brought in expertise in AT design and many, many years of experience in design and computer-assisted work in other contexts. They provided tremendous leverage in avoiding naïve mistakes (see Section 5.1 as well as Section 5.2). The larger L3D community provided us with the sort of “out of the box” questions and criticism that supported effective novelty in our designs (see Section 4.5).


Figure 1.1: L3D logo. From L3D (2006).


Figure 1.2: Coleman Institute logo. From Coleman (2004).

1.4 ELEMENTS OF THE TOOLKIT

The following sections will examine, within a framework, the 20 perspectives that I have chosen for this book (Table 1.1). Each topic begins with an explanation of the concept, where it came from, and how it was initially used. Then the bridge from the concept to intelligence augmentation and AT is presented. Finally, the section presents several examples of the concept in use or in current research. The concepts seemed to fall naturally into four categories: Fundamentals, Models, Technique, and Things to Avoid. The table below shows the topics and categories.

Table 1.1: Categories and concepts

Category: Fundamentals
    Artificial intelligence (AI)/Intelligence augmentation (IA)
    Design for failure
    Distributed cognition
    Scaffolding
    Situated action/cognition
    Socio-technical environments
    Universe of one
    Wicked problems

Category: Model
    Importance of representation
    Tools for living and tools for learning
    Dyads

Category: Technique
    Plans and action
    Low-hanging fruit
    MetaDesign
    Personalization
    Symmetry of ignorance

Category: Things to Avoid
    Diagnosis and functionality
    I have a theory/cousin
    Islands of ability

The collections don’t follow any meta-categorization plan and are not necessarily sequential; however, with the exception of Things to Avoid, they roughly fall into what can be called the ground of design, the development or path of the process, and the implementation or fruition of the design. The first category, Fundamentals, presents tools that can be used in the general approach to the problem, for instance thinking about how to leverage the end-user’s abilities rather than just produce results, which is one of the differences between AI and IA. The items in the Models category present ways to think about how to represent the problem and also the kinds of solutions that might result. Technique is just that: specific frameworks that can be used to approach these high-function AT/DfA problems. Finally, Things to Avoid is a catch-all category that came out of my initial work in AT and the shortcuts of domain experts, such as the special education technology expert we had the good fortune to have part-time in the lab during the CLever (CLever, 2004) project. There are so many blind alleys in developing this sort of technology that having a domain expert with years of experience available to point out dead ends and to act as a proxy user in the end-user design process is invaluable. If you are starting out in this field, I recommend finding a partner such as Anja Kintsch, who guided us (see Section 4.5).

Part 2 discusses each of these topics. For each one there is an initial definition of the concept, then a more detailed discussion with respect to assistive technology design for intelligence augmentation, a list of canonical papers and a discussion of the source of the concept, and finally examples of the concept in existing systems, with an emphasis on how their design was influenced by the concept. Most of the examples in Part 2 are presented simply on the basis of my familiarity with the systems, a result of time spent talking about them with their creators. The other criterion for the systems discussed follows my personal commitment to working on projects that promise to mitigate the digital divide and to keep the cost of a commercial version reasonable for those of us not so financially well off.

Part 3 lists a set of publications about each concept, in the same order as Part 2. The publications go into the concepts in more detail, some of them relating to the source of the concept, some of them about specific implementations and forks of the concept in different domains.

1.5 HOW TO USE THIS BOOK

While I certainly can’t claim for this book the depth of insight or the broad and profound applicability of the “Gang of Four’s” seminal Design Patterns (Gamma et al., 1995) or of Christopher Alexander’s A Pattern Language (Alexander et al., 1977), the approach is similar. Depending on your interests and needs, this book can be used as a broad overview of the landscape of designing and implementing these types of systems, as an introduction to the sources of the concepts, and of course as a guide to using them in the design and construction of systems in this particularly interesting area of technology.

Some of the descriptions that follow only expose the concept and point to relevant publications; concrete examples are only briefly discussed, as it is difficult to extract specific design components that illustrate a broad design approach (Situated Action/Cognition, Wicked Problems). Other descriptions have detailed examples, in one case illustrating the complexity and detail of the implementation but not intended as an algorithm to copy (Personalization), and in another a very implementable set of steps to integrate the approach into your system (Design for Failure). For all but one concept, Part 3 lists publications that were seminal to the concept and, in many cases, others that illustrate various ways in which the concept has been implemented.

I have intentionally left out some of the more advanced platforms and approaches that will become more available and mainstream in the future. These include the Internet of Things4 (IoT), intelligent agents, and “emotion”-based robotics. This book is meant to help with the development of Assistive Technology using Intelligence Augmentation (AT/IA) now and in the very near future, not to propose or discuss the cutting-edge work that will make the production of IA systems much more effective and easier to use and design in the not-so-near future. However, the topics that follow will be useful and important no matter what platform or infrastructure you design with.

1There is another kind of abandonment: not using the system or device because the need no longer exists. This “good” abandonment of AT is not within the purview of the current study.

2AT and DfA refer to two overlapping, complementary approaches to accessibility design. AT makes adaptations that allow users to do something that they otherwise would be unable to do. DfA is a design movement that guides designers to create systems and artifacts that are usable by as many different types of people (e.g., people with disabilities) as possible. While the ultimate goal of DfA is to make all human technologies accessible to people of all abilities, the realities of the distribution of abilities (i.e., some disabilities are so unusual that designing a system for everyone is either too hard or too expensive) mean that AT is one end of a continuum that will continue to be needed.

3By “tools” I do not necessarily mean tools in the sense of CASE or IDE tools, although these are included; tools here mean any idea, framework, or theory that extends our ability to understand or create systems.

4This assertion is made in the sense of the IoT as being integral to an AT system. This excludes GPS as an IoT component and recently developed IoT systems that straddle smart homes, requiring high levels of infrastructure spending. Similarly, there are many, many indoor navigation systems that require putting sensors all over a building to succeed. What has interested me is the creation of intelligent AT that does not exclude the majority of potential users because of expense or the need for everyone to adopt a standard (especially in the current world of many competing possible standards). I have no doubt that the interconnected world of the IoT will, in the near future, produce affordable solutions based on existing infrastructure and opportunistically available information, without proprietary shackles. But I cannot yet discuss rules of thumb or theoretical frameworks that are unique to IoT systems. I have, however, tried to present the concepts in Part 2 so that they may be applicable in IoT-based systems.
