Design of Multimodal Mobile Interfaces
by Yael Shmueli-Friedland, Michael Lawo, Brion van Over, Alexander Rudnicky, Asaf D

Table of Contents
Cover
Title Page
Copyright
Preface
Table of Contents
List of contributing authors
1 Introduction to the evolution of Mobile Multimodality
1.1 User Interfaces: Does vision meet reality?
1.2 Discussion of terms: Mobility and User Interface
1.2.1 Mobility
1.2.2 User Interface
1.2.3 User-centered design
1.2.4 Teamwork
1.2.5 Context
1.3 System interaction: Moving to Multimodality
1.3.1 User input and system output
1.3.2 Multimodality
1.3.3 Combining modalities
1.4 Mobile Multimodality: The evolution
1.4.1 Technology compliance with user needs
1.4.2 Technology readiness and availability
1.4.3 The readiness of multimodal technology
1.4.4 User requirements and needs
1.4.5 Cycle of mutual influence
1.5 Conclusion
2 Integrating natural language resources in mobile applications
2.1 Natural language understanding and multimodal applications
2.1.1 How natural language improves usability in multimodal applications
2.1.2 How multimodality improves the usability of natural language interfaces
2.2 Why natural language isn’t ubiquitous already
2.3 An overview of technologies related to natural language understanding
2.4 Natural language processing tasks
2.4.1 Accessing natural language technology: Cloud or client?
2.4.2 Existing natural language systems
2.4.3 Natural language processing systems
2.4.4 Selection criteria
2.5 Standards
2.5.1 EMMA
2.5.2 MMI Architecture and Interfaces
2.6 Future directions
2.7 Summary
3 Omnichannel Natural Language
3.1 Introduction
3.2 Multimodal interfaces built with omnichannel Natural Language Understanding
3.3 Customer care and natural language
3.4 Limitations of standard NLU solutions
3.5 Omnichannel NL architecture
3.5.1 Omni-NLU training algorithm
3.5.2 Statistical language model
3.5.3 Input transformation
3.5.4 Predictive omnichannel classifier
3.5.5 Score normalization
3.5.6 Conversation manager
3.6 Experimental results
3.6.1 Current analysis segment
3.7 Summary
4 Wearable computing
4.1 Introduction to Wearable Ecology
4.2 Human-computer symbiosis
4.3 Interactional considerations behind wearable technology
4.4 Training of end users
4.5 Wearable technology in the medical sector
4.6 Human-centered design approach
4.7 Context of wearable computing applications
4.8 State of the art in context-aware wearable computing
4.9 Project examples
4.10 Towards the TZI Context Framework
4.11 Conclusion
4.12 Discussion and considerations for future research
5 Spoken dialog systems adaptation for domains and for users
5.1 Introduction
5.2 Language adaptation
5.2.1 Lexicon adaptation
5.2.2 Adapting cloud ASR for domain and users
5.2.3 Summary
5.3 Intention adaptation
5.3.1 Motivation
5.3.2 Data collection
5.3.3 Observation and statistics
5.3.4 Intention recognition
5.3.5 Personalized interaction
5.3.6 Summary
5.4 Conclusion
6 The use of multimodality in Avatars and Virtual Agents
6.1 What are A&VA – Definition and a short historical review
6.1.1 First Avatars – Bodily interfaces and organic machines
6.1.2 Modern use of avatars
6.1.3 From virtual “me” to virtual “you”
6.2 A relationship framework for Avatars and Virtual Agents
6.2.1 Type 1 – The Avatar as virtual me
6.2.2 Type 2 – The interaction with a personalized/specialized avatar
6.2.3 Type 3 – Me and a virtual agent that is random
6.3 Multimodal features of A&VA – Categorizing the need, the challenge, the solutions
6.3.1 About multimodal interaction technologies
6.3.2 Why use multimodality with Avatars?
6.3.3 Evaluation of the quality of Avatars and Virtual Agents
6.4 Conclusion and future directions: The vision of A&VA multimodality in the digital era
7 Managing interaction with an in-car infotainment system
7.1 Introduction
7.2 Theoretical framework and related literature
7.3 Methodology
7.4 Prompt timing and misalignment – A formula for interruptions
7.5 Interactional adaptation
7.6 Norms and premises
7.7 Implications for design
8 Towards objective method in display design
8.1 Introduction
8.2 Method
8.2.1 Listing of informational elements
8.2.2 Domain expert rating
8.2.3 Measurement of integrative interrelationships
8.2.4 Clustering algorithm
8.2.5 Comparison of the two hierarchical structures
8.2.6 Comparisons between the domain expert and the design expert analyses
8.3 Analysis of an instrument display
8.4 Conclusion
8.4.1 Extension of the approach to sound and haptic interfaces
8.4.2 Multimodal presentation
9 Classification and organization of information
9.1 Introduction
9.1.1 Head-up displays
9.1.2 Objectives
9.2 Characterization of vehicle information
9.2.1 Activity
9.2.2 Information type
9.2.3 Urgency
9.2.4 Timeliness
9.2.5 Duration of interaction
9.2.6 Importance
9.2.7 Frequency of use
9.2.8 Type of user response required
9.2.9 Activation mode
9.3 Allocation of information
9.4 Head-up display (HUD) and its information organization
9.4.1 Information completeness and conciseness
9.5 Principles of HUD information organization
9.6 Review of existing head-up displays (HUDs)
9.6.1 “Sporty” head-up display
9.6.2 Simplistic HUD
9.6.3 Colorful head-up display
9.6.4 Graphically rich head-up display
9.7 Conclusion
Index
Footnotes