Chapter 23
Face And Interface: Richer Product Experiences through Integrated User Interface and Industrial Design

Keith S. Karn

Bresslergroup

Introduction

When users flip a light switch, turn a knob to adjust the volume on a radio, or swipe their fingers across a touch screen, they are interacting with the product's user interface (UI). The user interface encompasses the physical and digital components that allow a user to communicate with a machine or device. The devices we use in everyday life are constantly evolving, and they have morphed drastically in the past 50, and even 20, years. Likewise, user interfaces and the UI design process have changed considerably.

This chapter begins with a call for reintegration of hardware and software UI development, which have evolved into separate silos within many organizations. Next, I provide an overview of emerging UI technologies in new product development (NPD) that are providing designers with opportunities to expand products beyond the limits of physical controls and screens. In the last half of the chapter, I suggest methods for teams who are ready to dig in and develop UI and industrial design (ID) in parallel. (This chapter assumes that teams have already discovered and defined the problems to be solved by the product and are at the threshold of creating and evaluating concepts.) And as no two projects progress identically—and as no process informed by design thinking proceeds in a wholly linear fashion—I conclude with seven questions to ask yourself along the way in order to guide your particular, sometimes unavoidably meandering, but hopefully more focused path.

Defining Terms

In this chapter, I use the term digital in reference to digital visual displays, often paired with a touch screen for user input. There is a tendency today to give every new product a touch screen, but that may not always be for the best—more on that later.

In the product development world, there is some confusion around the term user interface (UI) and other similar terms, including user experience (UX) and interaction design (IxD). Because the disciplines are relatively new, they are still being defined. For the purposes of this chapter, UI refers to both the physical and digital (on-screen) interactions between a human user or operator and a device or piece of equipment. I like to think of the user interface as the means of communication between human and machine.

A person who designs UIs is an interaction designer. IxD refers to the art and science of user interface design, but people often misuse it to indicate only the digital portion of the interface. The term UX is frequently misused as the exclusive domain of website design, but it actually denotes a broader, more holistic human-product experience. It takes into account considerations such as the purchasing process, the maintenance of a product, how it will be stored, customer support, and activities all the way through to end of life.

While UI encompasses both the physical and digital components of a product's controls and displays, this chapter will occasionally call out physical versus digital features since the goal is to clarify the process of developing each.

23.1 Divergent Paths: User Interface in Physical and Digital Products

Separate Development Paths

Prior to 1980, user interface design fell under the domain of industrial design (ID) and mechanical engineering because it was so physical, and it was primarily driven by the selection of appropriate controls such as buttons, switches, and knobs. That changed with the advent of the Age of Computers, and hardware and UI software development processes have evolved separately, even within companies, ever since. Hardware generally takes longer to design, build, and test and typically follows a more linear Stage-Gate (or phase-gate) development process. The Stage-Gate model divides the process into a series of tasks (stages) and decision points (gates) that a team advances through sequentially. This development process is not as fluid or flexible as the software development process. Much like building a house, you have to put down a foundation and have a well-defined architectural plan before you start building product hardware.

Software development, however, is generally more flexible and typically follows an agile development process. The concept of agile development was introduced in the early 2000s. It emphasizes adaptive planning, an iterative approach, and rapid, flexible response. It is characterized by lots of loops, short sprints with working software output, and minimal documentation. Because of this, software goes through more and shorter cycles during development. Rather than producing a physical prototype to test and refine, as their hardware development counterparts do, software developers write, test, and rewrite code relatively quickly.

Even though 3D printing and other technologies are making it easier to prototype physical products, developing a physical device to production-ready status still takes longer than software development. The result is that we often develop hardware first because of this longer lead time, and software is brought in later. When the hardware team specifies the control and display elements of the UI before the interaction design work has even begun for the UI software, a suboptimal user experience is almost certain. Thus, this separation of hardware and software development—both organizationally and temporally—results in a lower-quality product.

A Call for Reintegration

Much of the UI work that was happening in the 1980s was for the screen, with the keyboard and mouse controlling the action. Gamers were the first to realize the limits of this paradigm. When everything was on a computer, most users took the mouse and keyboard for granted; those were the only input devices they knew. Video gamers refused to settle for these input devices, though, and they recognized the limitations of the typical computer output devices (displays with a small color gamut and crude audio systems). As a result, the game industry was born and began to develop new tools, such as joysticks and handheld controllers, to better simulate natural user inputs. Soon the industry was pushing the envelope on higher-resolution displays and developing its own game consoles, such as the Xbox and PlayStation, with dedicated processors for handling higher-quality animation and better audio.

These trends—developing processors for specific purposes (rather than general-purpose computers) and designing user inputs that go beyond the mouse and keyboard—have continued to grow. Today, countless products center around dedicated microprocessors and are restoring physical controls, like buttons and switches. The “Internet of Things” is populated by devices (aka “things”) that can connect to the Internet. That connection allows devices to communicate with each other, without physical input or assistance from users. This will surely lead to product innovations and new business models and processes, and has the potential to spur efficiency and reduce costs and risks. Product developers need to consider how these inputs will contribute to the use of the product. ID considerations must, once again, be reassessed.

At the same time, consumers and product developers are realizing that not everything should be controlled by a touch screen—sometimes physical buttons or knobs are best. Ideally, touch screens enable compact and multipurpose devices because the screen allows a small surface to be reconfigured for a variety of tasks. A smartphone, for instance, can be a phone, camera, and web browser all in one. But for many devices, a combination of hardware controls (physical buttons and switches) and touch interfaces is ideal. To get the best mix of hardware and onscreen/digital controls, product developers need to reunite UI design with engineering and industrial design processes—ideally within the Design Thinking framework.

23.2 Emerging User Interface Technologies

Today, the interaction designers' toolbox is growing. As emerging technologies such as advanced audio technology, haptic or tactile feedback technologies, and gestural interfaces travel down the learning and cost curves, designers have more opportunities than ever to expand products beyond the limits of visual screens. More designers are realizing that, when used appropriately, these new tools can have a positive impact on usability, user experience, and brand recognition.

Auditory Feedback

UI design is, for the most part, missing out on the auditory dimension. Many product design firms do not even have dedicated sound designers. It is hard to imagine Apple's iPhone without the “swoosh” that signals a sent email or the chime to indicate a phone is connected to its charger. Very few products popular today have that sort of auditory element. Compared to products developed pre-1980 (when physical buttons, knobs, and switches provided their own inherent sounds), products today are way behind in terms of auditory feedback.

Imagine a wearable, mountable camera similar to the GoPro Hero. This camera is going to spend most of its time bolted to a helmet or the tip of a kayak or surfboard—somewhere out of reach or out of sight. The user is going to control the camera with a remote or by pressing buttons that are out of sight. For such a product, auditory feedback is essential since the user will not be able to see the camera during use. Instead, the user will rely on tactile cues to locate and identify the correct controls, and on auditory cues to confirm that the correct function has been selected.
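To make the idea concrete, here is a minimal sketch of how distinct auditory cues might be mapped to out-of-sight controls so the user can confirm each action by ear alone. The event names and tone parameters are hypothetical, chosen only for illustration.

```python
# Minimal sketch: map out-of-sight control events to distinct auditory cues.
# Event names and tone parameters are hypothetical, for illustration only.

AUDIO_CUES = {
    "power_on":     {"freq_hz": 880, "pattern": [0.15]},             # single high tone
    "record_start": {"freq_hz": 660, "pattern": [0.10, 0.10]},       # two short beeps
    "record_stop":  {"freq_hz": 440, "pattern": [0.30]},             # one long low tone
    "battery_low":  {"freq_hz": 330, "pattern": [0.05, 0.05, 0.05]}, # three quick clicks
}

def play_cue(event: str) -> None:
    """Look up and 'play' the cue for an event (printed here in place of real audio output)."""
    cue = AUDIO_CUES.get(event)
    if cue is None:
        return  # unknown events stay silent rather than confusing the user
    for duration in cue["pattern"]:
        print(f"tone {cue['freq_hz']} Hz for {duration:.2f} s")

if __name__ == "__main__":
    play_cue("record_start")  # user hears two short beeps and knows recording began
```

The point of a table like this is that each cue is distinguishable without any glance at the device, which is exactly what a head-mounted camera demands.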

Haptic Technology

Haptic technology allows systems to stimulate the user's sense of touch by applying force, vibration, or motion. This is typically targeted at the user's hands or feet but can be applied to any body surface. For instance, instead of seeing or hearing a device power on, the user might feel it vibrate as it turns on. Haptic technology appeals to the user's tactile and kinesthetic senses. Tactile senses are those associated with skin contact (or, more technically, cutaneous stimulation) and include sensations such as temperature, pain, vibration, and pressure. The kinesthetic sense refers to the perception of movement and position of our limbs based on the muscle forces exerted.

Most people are familiar with the haptic technology of a smartphone vibrating, but designers are only scratching the surface of what they can do with haptics. Today a vibration is analogous to a simple beep in the auditory domain or a single indicator light in the visual domain. It alerts the user but provides limited information. Imagine the difference in the audio domain between using a simple beep and conveying information to a user through speech or a beautiful song. Today, designers in the haptic domain are using vibrations as “beeps” when they could be creating the equivalent of beautiful songs. The problem is that more sophisticated force feedback remains complex and costly.
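As a rough illustration of that gap, the sketch below contrasts a bare, beep-like vibration with a richer timing-and-amplitude pattern that could carry more meaning. The durations and amplitudes are made up, not tuned for any real actuator.

```python
# Sketch: a bare "beep" vibration versus a richer haptic pattern.
# Timings (ms) and amplitudes (0-255) are illustrative, not tuned for real hardware.

# The haptic equivalent of a beep: one flat burst that only says "something happened."
SIMPLE_ALERT = [(200, 255)]  # (duration_ms, amplitude)

# A richer pattern: a soft ramp-up, a pause, then two sharp taps.
# Different patterns like this could distinguish, say, "message received"
# from "battery low" without the user ever looking at the screen.
RICH_PATTERN = [
    (80, 60), (80, 120), (80, 200),  # ramp up
    (150, 0),                        # pause
    (40, 255), (60, 0), (40, 255),   # two crisp taps
]

def describe(pattern):
    """Print the pattern as a timeline (a stand-in for driving a real actuator)."""
    t = 0
    for duration_ms, amplitude in pattern:
        print(f"{t:4d} ms: amplitude {amplitude:3d} for {duration_ms} ms")
        t += duration_ms

if __name__ == "__main__":
    describe(RICH_PATTERN)
```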

Gestural Interfaces

Gestural interfaces allow computers to interpret human gestures via algorithms. For users, this means the ability to issue commands to a computer without making physical contact. Gestural interfaces are evolving from simple presence detection, though. Inexpensive cameras and high-speed processing are making camera inputs more common and resulting in higher performance. The Amazon Fire phone, for instance, has four front-facing cameras that enable gestural control, through which a user can control the phone's screen with the nod of a head or the wave of a hand. Similarly, the Leap Motion Controller is a device that allows users to manipulate a desktop computer's screen using hand and finger motions similar to a mouse, but does not require hand contact or touching. In other words, a user can control a computer with his or her fingers without ever making physical contact.
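As a toy example of interpreting gestures via algorithms, the sketch below classifies a tracked hand path as a left or right swipe based on its net horizontal displacement. The coordinate format and thresholds are assumptions made purely for illustration.

```python
# Toy gesture classifier: decide whether a tracked hand path is a swipe.
# Input is a list of (x, y) positions sampled over time; units and thresholds
# are illustrative assumptions, not taken from any particular sensor.

def classify_swipe(path, min_distance=0.25, max_drift=0.10):
    """Return 'swipe_left', 'swipe_right', or None for an ambiguous path."""
    if len(path) < 2:
        return None
    dx = path[-1][0] - path[0][0]   # net horizontal travel
    dy = path[-1][1] - path[0][1]   # net vertical drift
    if abs(dx) < min_distance or abs(dy) > max_drift:
        return None                 # too short or too diagonal to count
    return "swipe_right" if dx > 0 else "swipe_left"

if __name__ == "__main__":
    samples = [(0.10, 0.50), (0.25, 0.51), (0.45, 0.52), (0.60, 0.50)]
    print(classify_swipe(samples))  # -> swipe_right
```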

Augmented Reality

Since the 1980s, screens have diminished the use of auditory and tactile feedback. Visual feedback has gotten better, but there is always room for improvement. Augmented reality has the potential to take visual feedback to the next level. Augmented reality allows computer-generated sensory inputs like sound, video, or graphics to be overlaid on top of a real-world view. The user sees both the real-world view and the virtual images at the same time. Advances in augmented reality hardware and three-dimensional modeling are making this technology more viable from a user's standpoint.

23.3 New Technology Demands a New Development Process

As discussed earlier in this chapter, hardware and UI development have become separated over time. Today, the physical hardware components of a product are often designed by ID teams closely coupled with mechanical engineers. The digital interfaces are often designed separately by UI designers, who work closely with software engineers. More often than not, though, designing the hardware and digital interface components separately detracts from the final product. To build better products, development teams should integrate the hardware and software development processes. The design thinking mind-set could provide the means to enable this reunification.

Merge Development Timelines

Hardware development typically happens on a linearly constrained timeline. First there is a long design and development process. Then production, given tooling lead times, testing, compliance, and so on, can take upward of six months before finished goods are ready to ship. This forces the ID process into a strict timeline, and industrial designers must lock down the design as quickly as possible. In contrast, UI design operates on a much more flexible, faster development process, which is typically nonlinear. Interaction designers can still be iterating the software weeks before launch, and even post launch in the form of software updates. Since the development timelines do not naturally line up, the ID and UI are typically developed separately and brought together at the end. As mentioned previously, that disconnect can diminish the final product.

To marry the UI and ID development processes, the development timelines can be merged at certain key points (Figure 23.1). Some elements of UI and ID are the same. For instance, both require the up-front user research necessary for developing empathy with the intended end user, discovering unmet needs, and distilling these down into key customer insights. Later, both require some form of prototyping, though digital UI (screen) prototypes tend more often to be the lower fidelity prototypes so helpful in early, iterative evaluation. Both hardware and software will need visual brand language development. So, the product development processes can be pulled together at those common points. For example, the software and hardware evaluation (typically in the form of usability or concept testing), can be done at the same time.


Figure 23.1 An example of how to merge UI and ID product development timelines.

The UI development process, which evolved from the software development process, is typically broken into segments called sprints. The idea is that development happens in a series of sprints that are limited in time, typically a few weeks. In each sprint, designers tackle as much as they can within the timeframe and are required to present functional deliverables at the end of each sprint. This process is flexible and open to change, which is much different than the ID development process. Hardware designers start wide, with a number of options, and then funnel their options until they lock down a final concept that can be refined until it is ready for production.

Bridging the agile development process of UI and the linear Stage-Gate process of hardware design is challenging, but it can be done by forcing a more iterative process. Product developers can commit to a series of sprints that are set in time (Figure 23.2). In each sprint, the hardware team must deliver a physical prototype, but the expectations for those prototypes can be lowered. They do not need to be perfect because their primary function is to test hardware and software together and to generate user feedback. This approach to problem solving is, of course, in line with the notion of the design thinking approach in the context of NPD: teams develop simple prototypes, then, armed with feedback, iterate further. The result is a better, faster, more efficient process and a more cohesive final product.


Figure 23.2 A blended process plan for prototyping in parallel.

Prototype in Parallel

Early on in the process, the UI software will likely be created with wire frames and workflow diagrams, which can be printed out and turned into paper prototypes. Then, when initial form-factor physical prototypes are ready, the paper screens can be stuck on the physical prototype to represent the screen designs in the context of the physical device.
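A workflow diagram of this kind can also be captured as a simple screen-flow table that is easy to walk through alongside the paper screens. The sketch below shows the shape of such a table; the screen and event names are hypothetical.

```python
# Sketch of a screen workflow as a transition table, mirroring a wireframe flow.
# Screen and event names are hypothetical, for illustration only.

FLOW = {
    ("home",    "tap_start"):   "setup",
    ("setup",   "tap_confirm"): "running",
    ("running", "tap_stop"):    "summary",
    ("summary", "tap_done"):    "home",
}

def next_screen(current: str, event: str) -> str:
    """Return the screen that follows `event`, or stay put if the event is undefined."""
    return FLOW.get((current, event), current)

if __name__ == "__main__":
    screen = "home"
    for event in ["tap_start", "tap_confirm", "tap_stop", "tap_done"]:
        screen = next_screen(screen, event)
        print(event, "->", screen)
```

Walking users through the paper screens in exactly this order makes it obvious where the proposed flow breaks down, long before any hardware exists.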

At higher levels of fidelity, the development team might run the UI prototypes on the screen hardware of the intended final display. The screen might not yet be connected to the hardware, but it can still be driven by a computer concealed “behind the curtain” to get a sense of the UI interactivity on the real display and a higher fidelity look and feel of the micro-interactions. It is important to integrate the software and hardware during the early, iterative prototyping to really understand the capabilities and limitations of particular display hardware. Do the colors display correctly? Can the screen adjust to ambient lighting conditions? Do elements of the UI need to be altered to interact more coherently with the industrial design, or vice versa? These are the types of questions a holistic development process will draw out.

23.4 Seven Questions to Guide the Integration of Industrial Design with User Interface Design

For managers looking to integrate hardware and UI development, these seven questions or decision points can help guide a project team. They are drawn from the collective experience of myself and my colleagues, with the caveat that each project differs based on goals and circumstances.

Who Is Leading the Process?

This question refers to both organizational leadership and which aspects of the project are taking the lead simply by starting first. When it comes to organizational leadership, a collaborative process that includes both hardware and software teams is best. Typically, a mechanical or electrical engineer will lead the hardware development team and a software engineer will lead the software team.

Ideally, each project will have its own multidisciplinary project team. The project team members will vary depending on the product you are developing, but the team may include interaction designers, software engineers, industrial designers, mechanical engineers, product planners, user research experts, and marketing experts. The idea is to have one unified team working together on a product rather than separate hardware and software teams working independently of one another. It is all about getting the right people to the table and getting them there early.

In most cases, it makes sense for hardware to lead—to a certain extent. It takes more time to produce the hardware, and the hardware specs generally inform the software. (The development team cannot develop code to run on a processor that has not yet been selected or might not even exist yet, and if the product has a screen, the interaction designers will need to know the specifications of that screen.) The problems that teams run into start when the software designers arrive too late in the game—after the hardware engineers have already determined critical features based on cost constraints, size, and weight.

For instance, a major camera company was developing a camera with a touch screen that had menu-scrolling capabilities similar to the scrolling function of the iPhone's contacts list. Unfortunately, the UI team was brought into the process too late to achieve this. The hardware team had already chosen a processor that could not meet the demands of a fast-scrolling function. The menu scrolled, but the processor was so busy scrolling, it could not receive the next command to stop the scrolling and select a menu item. To solve the problem, the software team put more “friction” in the system (a deceleration function) and essentially slowed the scroll function so the processor could keep up. The final product would have been better if UI considerations had been taken into account earlier on, when the engineering team was specifying the hardware.
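The “friction” fix amounts to a deceleration function: the scroll velocity decays a little on each frame, so the list coasts to a stop and the processor regains headroom to accept the next command. A minimal sketch of that idea follows; the friction coefficient and frame rate are illustrative, not the camera's actual values.

```python
# Sketch of a scroll "friction" (deceleration) function: velocity decays each
# frame so a flicked list coasts to a stop. The friction coefficient and frame
# rate are illustrative; the real product's values are not public.

def simulate_scroll(initial_velocity, friction=0.90, frame_dt=1 / 30, min_velocity=5.0):
    """Yield (time_s, position_px) samples until the scroll effectively stops."""
    t, position, velocity = 0.0, 0.0, initial_velocity  # velocity in px/s
    while abs(velocity) > min_velocity:
        position += velocity * frame_dt
        velocity *= friction          # exponential decay = perceived "friction"
        t += frame_dt
        yield round(t, 3), round(position, 1)

if __name__ == "__main__":
    for time_s, pos_px in simulate_scroll(initial_velocity=1200):
        print(f"{time_s:5.3f} s  {pos_px:7.1f} px")
```

A more capable processor would have allowed a gentler friction value (a longer, livelier coast); the underpowered chip forced the team to dial the decay up until input handling could keep pace.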

Managing the sometimes conflicting needs of UI and ID teams is a balancing act. Most UI teams would love to have high-resolution, capacitive touch screens on every product, but each project has its own constraints. The ID team might specify, for instance, that the final product cannot be large enough for a six-inch screen. It is important to keep product planning and marketing teams involved, too. Marketing team members would supply the “voice of the customer” and say, “Yes, a large touch screen would be great but the final product has to sell for less than $200.”

What Are the User's Tasks and Needs?

Just as it is important to know how much a user will pay for the product, it is critical to understand who the user is and how we expect the user to interact with the product. Once the project team is in place, the next step is to define the system requirements. Human Factors 101 teaches the importance of the relationship between users, their tasks, the products, and the environments in which the products will be used—and, finally, the demands they place on the user (Figure 23.3).


Figure 23.3 Human factors sits in the intersection of products, tasks, users, and environment.

The takeaway for a development team is to apply these questions to their product: What is the machine going to do? What is the human going to do? Once we know what we are asking the human to do, the question becomes: what information has to be communicated from the human to the machine in terms of inputs to the device, and what information needs to be communicated from the machine to the human in terms of outputs from the device?

Environment also plays a role. Is the product going to be used in a public space or a private space? Will one person use the device, or will it be used by multiple people, either individually or as a group? Will it operate in bright sunlight? A backlit LCD screen used outdoors or in a bright environment will need to crank up the brightness to overcome ambient light. As soon as the user has to lift a hand to shield the screen from light, you have lost the usability battle. That said, in a typical domestic environment, standard LCD screens with LED backlights do work well, as do LED-based displays like the seven-segment modules on stoves and microwaves.
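One way to express that requirement is as a simple mapping from measured ambient light to backlight level. The sketch below uses made-up lux breakpoints purely to illustrate the kind of rule a team might prototype and test in representative environments.

```python
# Sketch: pick a backlight level from measured ambient light.
# The lux breakpoints and output percentages are illustrative assumptions.

AMBIENT_TO_BACKLIGHT = [
    (50,     20),   # dim room: keep the screen gentle
    (500,    50),   # typical indoor lighting
    (5000,   80),   # bright indoor / shade outdoors
    (100000, 100),  # direct sunlight: full brightness
]

def backlight_percent(ambient_lux: float) -> int:
    """Return the first backlight level whose lux threshold covers the reading."""
    for max_lux, level in AMBIENT_TO_BACKLIGHT:
        if ambient_lux <= max_lux:
            return level
    return 100

if __name__ == "__main__":
    for lux in (30, 300, 20000):
        print(lux, "lux ->", backlight_percent(lux), "% backlight")
```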

The user's tasks are important, too. More tasks mean more complexity, and complexity will influence UI design. How many functions do there need to be and how complex is their presentation? If a device is going to display temperature, can the display show just three general ranges (hot, medium, cold)? Or does it need to display temperature readings to the precision of a tenth of a degree? Do all of the functions appear on the display at once? It might be beneficial to hide extraneous information in order to make the product appear less complex, but developers should not try to diminish complexity by making the screen smaller than necessary or burying functions behind too many layers of interface.
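That precision decision can be prototyped cheaply as two alternative formatting rules and put in front of users early. In the sketch below, the range boundaries are hypothetical and would come from user research, not from the code.

```python
# Sketch: two candidate ways to present the same temperature reading.
# The range boundaries are hypothetical and would come from user research.

def coarse_display(temp_c: float) -> str:
    """Three general ranges: enough for some tasks, far less visual clutter."""
    if temp_c < 15:
        return "COLD"
    if temp_c < 45:
        return "MEDIUM"
    return "HOT"

def precise_display(temp_c: float) -> str:
    """Tenth-of-a-degree readout for tasks that truly need the precision."""
    return f"{temp_c:.1f} °C"

if __name__ == "__main__":
    reading = 37.26
    print(coarse_display(reading), "vs", precise_display(reading))
```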

Industrial designers might ask if the user is going to hold the device, if it is going to be mounted on a wall or another device, or if it will be freestanding. This might influence the shape and weight of the final product. Is it being used outdoors and does it need to be waterproofed? What kind of physical use and abuse will it need to withstand?

All of these questions will direct the work of a project team. The answers will lead to different takeaways for interaction designers and industrial designers, so it is important that everyone involved in product development take these questions into consideration. Build a matrix that lists possible hardware and software UI elements and the resulting impact of each on the user experience. A sample checklist of hardware specs might include touch screen display, LED status lights (red, green, blue), and navigational hard keys. Balancing these hardware and software constraints is critical to the success of the final product and is something the product development team should account for early on.
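Such a matrix can start life as nothing more than a small table kept alongside the requirements. The entries below are placeholders showing the shape of it; a real team would fill in its own candidate elements and the trade-offs surfaced by its research.

```python
# Sketch of a UI-element matrix: candidate hardware/software elements and the
# user-experience impact each one implies. Entries are placeholder examples.

UI_ELEMENT_MATRIX = [
    # (element,                 type,       user-experience impact)
    ("capacitive touch screen", "hardware", "flexible layouts; poor gloved-hand use"),
    ("navigational hard keys",  "hardware", "eyes-free operation; fixed functions"),
    ("RGB LED status light",    "hardware", "glanceable state; limited vocabulary"),
    ("on-screen soft buttons",  "software", "relabel per task; need visual attention"),
    ("audio confirmation tone", "software", "works out of sight; can annoy in public"),
]

def elements_of_type(kind: str):
    """List just the hardware or just the software candidates."""
    return [row for row in UI_ELEMENT_MATRIX if row[1] == kind]

if __name__ == "__main__":
    for element, _, impact in elements_of_type("hardware"):
        print(f"{element:25s} {impact}")
```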

Which Functions Are Digital and Which Are Physical?

After the team has determined what information the user and device will communicate to each other, the next question is: how will they communicate with each other? Will the interactions be physical or digital? This is where the development team should decide what types of technologies and interfaces the product will feature and which commands will be dedicated hardware controls versus on-screen, soft buttons. Physical interactions will impact hardware capabilities and industrial design. Digital interactions will generally fall under the UI domain and require software capabilities. Today, more and more companies want their products to have screens, but these features should not be gratuitous. Their suitability needs to be thought out carefully.

In the early 1980s, military aircraft cockpits were transitioning from literally hundreds of electromechanical displays and controls to integrated, interactive, digital displays. I worked on the F/A-18 Hornet and the AV-8B Harrier, where much of the navigation, communication, sensing, and even weapons system management was being moved from dedicated control and display “heads” to integrated screens with dynamically labeled buttons along the edge. The fighter pilots initially rebelled at the thought of having to navigate menus on a cockpit computer interface during a dogfight or critical target acquisition task. The design team had to consider carefully which functions still needed separate, dedicated controls (and sometimes dedicated displays) and which could be integrated into a more central system. Only the controls that made sense as part of a layered, visual presentation were moved to the screen to make the most of limited cockpit space. The rest of the interactions remained physical.

Today's product developers are making similar decisions. Some functions still need physical controls, but many others can be adapted for digital screens. It is important to envision in advance exactly how your device will work. Imagine if Apple had designed the physical shape and layout of the iPhone before deciding a swipe of a finger on a touch screen or a fingerprint ID on a home button would activate the device.

No matter what the product, interaction design needs to be thought of holistically. A lot of people think of interaction design as pretty pictures on-screen, but it also encompasses all the ways users physically interact with their products. Determine early in the design process what will be physical and what will be digital.

What Are the Hardware Characteristics? Define the Display and Other UI Elements

Companies should aim for a unified look and feel for their product lines—all the products in a line or family characterized by a cohesive brand language—translated to the physical design as well as the UI design. Users do not want to see dials on one product and a toggle switch on another for the same function in the same product line. Transfer of learning carries over from one device to the next and adds value. Clients often ask us to develop products in a line at different price points, which typically results in screens of differing sizes or other hardware differences. It is important during the design process to consider how UI elements might translate to a display on, for instance, a small tablet versus a larger monitor.

Because humans are such visual animals, design teams sometimes overlook speakers and audio capability. With some products, due to cost constraints, the team will choose a tone generator that can only beep or buzz instead of a more sophisticated audio system. That can constrain the overall design. Sometimes a killer tone for the power on and power off functions can become a strong signature for a brand or product. It is important to expand your definition of UI and interaction design to include auditory elements, along with planning for these elements early on in the design process.

How Can UI and Industrial Designers Best Work Together?

Industrial designers are not experts in interaction design, and UI or interaction designers are not experts in industrial design. To help bring the two together, co-location, team-building activities, and cross-training get everyone in the same space and on the same page. Another strategy is to divide teams by product line rather than by discipline. For instance, a printer and office supply company might divide its product developers into a Home Products Group, an Office Products Group, and so on. Each group would consist of industrial designers, interaction designers, and graphic designers who all report to one team manager.

Storyboarding

Another way to bridge the divide is to storyboard product development so each discipline can envision how UI and ID contribute to a functioning final product. A storyboard might show how a user would interact with the product and how the product would look at different points in the process (Figure 23.4).


Figure 23.4 A slide from a storyboard mapping out a typical day in the life of an automated homebrew device with mobile app.

To storyboard, it is best if the interaction and industrial design teams work together to generate the storyboard concept and determine user inputs and system outputs. (For more on storyboarding, see the chapter, “Visual Storytelling” in Bill Buxton's Sketching User Experiences.) The collaborative team should discuss the flow of elements and best-case scenario for the user-product interaction. The interaction design team can map out how the UI architecture contributes to the process and the industrial design team can outline the role of the hardware. The storyboard helps bring those two elements together to show how UI and industrial design will work in tandem. Storyboarding is especially helpful when co-location is not an option and team members must work remotely.

Sprints

A best practice model for designing in parallel is one that views the entire process as a series of sprints. For instance, if a team has 12 weeks to design a product, they could divide the process into three “sprints.” In the first sprint, industrial designers, interaction designers, and engineers would work together to develop the first physical prototype. In the second sprint, the same team would iterate the process and advance a second physical prototype. In the third four-week sprint, the cycle would repeat and produce a third, near-final prototype. This helps marry the faster, agile development process of software development to the more rigid Stage-Gate process of hardware development.
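A plan like this can be captured in one shared structure so that the hardware and software teams track the same deliverables and checkpoints. The sketch below assumes a 12-week, three-sprint split; the deliverable names are illustrative, not prescriptive.

```python
# Sketch of a blended 12-week plan: three sprints, each ending with a joint
# hardware + software deliverable. Names and durations are illustrative.

SPRINT_PLAN = [
    {"sprint": 1, "weeks": 4,
     "hardware": "rough form-factor model",
     "software": "paper/wireframe screens taped to the model",
     "checkpoint": "joint usability test of core workflow"},
    {"sprint": 2, "weeks": 4,
     "hardware": "refined prototype with working controls",
     "software": "clickable UI on the intended display",
     "checkpoint": "joint usability test of full task set"},
    {"sprint": 3, "weeks": 4,
     "hardware": "near-final appearance model",
     "software": "integrated UI running on product hardware",
     "checkpoint": "validation test and design freeze"},
]

if __name__ == "__main__":
    for s in SPRINT_PLAN:
        print(f"Sprint {s['sprint']} ({s['weeks']} wks): "
              f"HW -> {s['hardware']}; SW -> {s['software']}; gate: {s['checkpoint']}")
```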

What Kind of Prototyping Does This Product Need?

Determine the goals of prototyping at each point in development, and base the fidelity of the prototype—whether it needs to function, and how closely it needs to mirror the final product in terms of appearance and behavior—on these goals. At times, a paper prototype will suffice. At other times a three-dimensional print will work. As you advance, the prototypes may need to become indistinguishable from the final product.

Often, the industrial design and interaction design prototypes are developed separately, which feeds a disconnect that expands as you move through the design process. For example, when interaction designers know they will run the product on a touch screen, it can be tempting to test the UI on an iPad or in Flash. Meanwhile, the industrial design team will be working on a beautiful physical model. When the two eventually come together, they may not mesh.

Even if both the ID and UI prototypes have to be slightly dumbed down in terms of their level of fidelity, a more integrated approach to prototyping will ultimately end up saving time. It can also help avoid having to force incompatible UI and industrial designs together at the end of the development process. The UI and industrial design are much more powerful when they are co-prototyped.

Test Early and Often

To make sure your product is on the right track, it is best to test early and often in an iterative cycle. Design, prototype, test, evaluate. Rinse and repeat. Even at the paper prototype stage, it is not too early to do usability testing. Early testing can help determine if users understand the proposed information architecture, even if the physical interactions are not quite there yet.

The earlier you start testing, the better. Product development can be like hardening concrete. At first it is easy to redirect, shape, and form, but as time goes on, the concrete begins to set and is very difficult to change. The longer you wait, the more your prototype is going to represent the final product and the harder it will be to change. Companies developing new products often ask: Are we ready? Should we cancel or postpone this test? Almost always, it is best to continue with a test. You can learn a lot with what you have, and it is better to learn more, earlier than later.

How Will You Specify the Integrated Design?

When it comes to creating product specs, it is not about integration between the software or UI team and the hardware or ID team. Here, integration between the user interface design and industrial design teams and their respective execution or development teams is more important. The on-screen portions of the UI design need to be coordinated with a software or firmware development team. The physical portions of the UI and the industrial design need to be coordinated with engineering, manufacturing, and purchasing teams.

23.5 Practice Makes Perfect

Keep in mind that a book like this is full of “how-tos” describing perfect situations that never really exist. So do not panic if your project is not tracking along in a picture-perfect way. Remember that you are not going to change corporate culture overnight—especially if you work in a large organization. Take small, steady steps in the direction to which we are pointing in this chapter and toward the overall design thinking mind-set presented throughout this book. And keep your eyes open for rare opportunities for sudden leaps forward, such as when your CEO gets converted and wants to elevate your design function to the C-suite level.

By answering the questions in the preceding section, the manager of the development team should have a good sense of how well the UI and ID teams are working together. Furthermore, discussing these questions among the team should provide some thoughts on how to increase collaboration and the quality of the resulting design.

All of this may seem like an excessive amount of stopping to strategize and evaluate, but merging the UI and ID processes introduces enough variables to disqualify anything resembling a “standard,” one-size-fits-all template. For the successful reintegration of two disciplines that have—to the detriment of the quality of our end products—become siloed, we need to become comfortable with a looser, more flexible paradigm. Speediness comes with practice, and after working through a few integrated process projects, teams will begin to accumulate repeatable processes and best practices.

About the Author

Keith S. Karn, the Director of User Research and Human Factors at Bresslergroup, leads contextual inquiry and usability testing to add the rigor required to create successful user interfaces and intuitive product experiences. He speaks and publishes frequently on usability evaluation methods, improving customer acceptance through user-centered product design, and user-interface design for emerging technologies. He earned his PhD in experimental psychology at the University of Rochester and degrees in ergonomics and industrial engineering at North Carolina State University and Penn State. Keith also taught human factors and human-computer interaction for many years at the Rochester Institute of Technology and the University of Rochester.
