Search is a fundamental mobile activity. Think about it—mobile is much less about creating stuff (unless you are talking about taking pictures or writing an occasional tweet). Instead, you use mobile devices mostly for finding stuff. Riffing on Douglas Adams’ Hitchhiker’s Guide to the Galaxy, mobile devices help you find places to eat lunch, people to eat lunch with, and directions to get to the restaurant, which helps everyone to get there sometime before the Universe ends—which makes search patterns important.
A spoken query, captured via the on-board microphone, is used as search input instead of a typed keyword query. Typing on a phone is awkward and prone to errors, which makes audio input a great alternative to text.
Usually, the searcher taps a microphone icon, causing the device to go into listening mode. The searcher speaks the query into the on-board microphone. The device listens for a pause in the audio stream, which the device interprets as the end of the query. At this point the audio input is captured and transcribed into a keyword query, which is used to run the search. The transcribed keyword query and search results are shown to the searcher.
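The “listen for a pause” step can be modeled as a simple energy-threshold endpointer. The following is a toy Java sketch, not Android’s actual speech endpointer; the frame energies, threshold, and minimum pause length are illustrative assumptions:

```java
// Toy model of end-of-query detection: the device treats a run of
// consecutive low-energy audio frames as the end of the spoken query.
public class PauseEndpointer {

    // Returns the index of the frame where the query ends, or -1 if the
    // speaker never pauses long enough. A frame is "silent" when its
    // energy falls below the threshold; the query ends once we see
    // minSilentFrames silent frames in a row.
    public static int endOfQuery(double[] frameEnergy, double threshold, int minSilentFrames) {
        int silentRun = 0;
        for (int i = 0; i < frameEnergy.length; i++) {
            if (frameEnergy[i] < threshold) {
                silentRun++;
                if (silentRun == minSilentFrames) {
                    return i - minSilentFrames + 1; // first frame of the pause
                }
            } else {
                silentRun = 0; // speech resumed; reset the pause counter
            }
        }
        return -1; // still listening
    }
}
```

A real recognizer does far more (noise adaptation, language modeling), but the interaction contract is the same: the pause, not a button press, ends the query.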
One of the most straightforward implementations of the Voice Search pattern is the standard input box for writing text, augmented with a microphone icon, as exemplified in Google’s native Android search. (See Figure 7-1.)
Most apps that have a search box can also use the Voice Search pattern. For example, the Yelp app, as shown on the left in Figure 7-2, does not currently include the Voice Search feature, but it can be easily augmented with a microphone icon, as shown in the wireframe on the right.
People often use Yelp while they’re walking around with a bunch of friends and talking about where to go next. In this case, simple voice entry augmentation makes perfect sense: Speak the query into the search box (which is quite a natural behavior as part of the human-to-human conversation already taking place) and share the results with your friends by showing them your phone. Then, after the group decision has been made, tap Directions, and use the map to navigate to the place of interest.
Most mobile search is done “on the go” and in context. Given how hard text is to enter on a typical mobile phone (and how generally error prone such entry is), voice input is an excellent alternative. Another important consideration for the Voice Search pattern is multitasking activities such as driving. Driving is an ideal activity for voice input because the environment is fairly quiet (unless you are driving a convertible), and the driver’s attention is focused on a different task, so traditional text entry can be qualified, to put it mildly, as “generally undesirable.”
The release of Siri for the iPhone 4S kicked into high gear a long-standing race to create an all-in-one voice-activated virtual assistant. Prior to Siri, Google had long been leading the race with Google Search: the federated search app that searched across the phone’s apps, contacts, and the web at large. Vlingo and many other apps took the Voice Search pattern a step further by offering voice recognition features that enabled the customer to send text messages or e-mails and do other tasks by simply speaking into the phone. However, none of these apps have come close to the importance and popularity of Siri. Why? There are many reasons. One is the mature interactive talk-back feature in Siri that enables voice-driven question-and-answer interactivity, including the amazing capability to handle x-rated and gray-area questions with consistent poise and humor, as shown in Figure 7-3 (in other words, Siri has something of a personality). Another important feature was dedicated hardware access (on the iPhone 4S you push and hold the Home button to talk to Siri) that enabled one-touch interaction with the virtual assistant without having to unlock the phone.
Although it’s pure speculation at this point, one of the applications of Google’s voice recognition technology could be the same sort of virtual assistant for your phone or tablet, activated by pressing (or holding) one of the hardware buttons (the Home button would be a good choice). Added security can be achieved via voice-print pattern recognition. Voice recognition technology would also help distinguish your voice patterns from those of other people in loud, crowded places, thereby further increasing the personalization of the device and making it completely indispensable (if that is even possible at this point!).
If this becomes the case, dedicated in-app Voice Search (refer to Yelp in Figure 7-2) could be completely superseded by the Google virtual assistant. For example, the customer could say, “Assistant: search Yelp for xyz.” The assistant program would then translate the voice query into keywords using advanced personalized voice recognition, open the Yelp app, populate the search box with the keyword query, and execute the search.
In some Google Search apps, the simple action of bringing the phone to your ear forces the app into a listening mode by using input from the on-board accelerometer to recognize this distinctive hand gesture. Unfortunately, this feature does not seem to be automatically enabled on Android 4.0 as of this writing. It is, however, an excellent feature and one that should come included with the voice recognition because it makes use of what we already do naturally and without thinking, so the design “dissolves in behavior.”
The role of voice input is not limited to search. It can be used for data entry and basic tasks as well. For example, while driving you could push the button and say, “Text XYZ to James,” and the device will obey. I should also mention that Google is not the only supplier of voice recognition technology. For example, Nuance Communications, the maker of the Dragon NaturallySpeaking products, is likely the largest and most vocal (pardon the pun) distributor of speech-recognition software. As of this writing, the Target app uses technology licensed from Nuance for its voice recognition feature.
Just as in the earlier Yelp example, you can use voice recognition to search for a specific pet. The customer would launch the Pet Shop app and then swing the phone up to his ear and speak a search query, such as “black lab.” When the customer has a pause in speech, pushes the Done button, or simply swings the phone down, the query activates and displays the appropriate search results.
For Voice Search, tablets are different from phones. Although there is some debate about this (and no official studies have yet been performed), anecdotal evidence points to typing on the tablet being not quite as challenging as it is on the phone. Thus voice input is likely to offer less of an advantage on tablets. While using a tablet, the person is also less likely to be multitasking in a loud environment or to be engaged in an activity that requires attention outside the visual interface of the device (driving, for example); most tablet use happens at home or at work. Does this mean Voice Search is not useful on the tablet? Not at all. There still exists an opportunity for high-end, high-touch, visual interaction with a virtual assistant software program. Apple’s original vision for the tablet device, the Knowledge Navigator concept created in 1987 (sorry, Google, you were not yet born at that time), involved exactly that kind of speech recognition interaction with the device.
The best way to implement a high-end, personalized virtual assistant might be to create a hybrid of software plus human virtual assistant. The person using the tablet would get high-end service with a consistent, pleasing visual and auditory representation. Given Google’s reputation for awesome inventive geekiness, highly customized animated Obi-Wan, Jarvis, and HAL virtual assistants (as well as various Playboy models, anime characters, and maybe a little something for the millions of John Norman fans), complete with high-end graphics and voice simulations, might be coming soon to an Android tablet near you. Perhaps this book can serve as an inspiration?
Voice recognition is still a fairly new technology, and despite the apparent similarity of the interfaces, there are many important considerations, and many ways to get this pattern wrong.
Auto-Complete and its sister pattern, Auto-Suggest, are broad classifications of keyword-entry helper patterns. Both reduce the number of characters the person needs to type and reduce the number of entry errors and queries that produce too many or too few results.
When the person enters one or more characters into the search field, the system shows an additional “suggestions layer” that contains one or more possible keyword combinations that in some way correspond to what the person has entered. At any point, the person has the option to keep typing or select one of the system suggestions.
Strictly speaking, Auto-Complete uses the query fragment the person typed in as a seed for providing suggestions (so that the suggestions include the original keyword or fragment). This does not always work perfectly on a mobile device because a small fragment often contains fat-fingered misspellings. That’s where Auto-Suggest comes in.
Auto-Suggest has more “freedom of movement” than Auto-Complete, providing keywords and queries that go beyond the typed fragment (synonyms, related terms, and category expansions, for example).
The suggestions work best when they are a clever combination of Auto-Suggest and Auto-Complete, with the system drawing the best ideas from multiple sources.
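A combined suggestions layer of this kind might be sketched as follows. This is an illustrative Java sketch, not Google’s implementation; the method names and source lists are hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Sketch of a combined suggestions layer: a few Auto-Complete entries
// (prefix matches that contain the typed fragment) come first, followed
// by Auto-Suggest entries drawn from other sources (here, a plain list
// standing in for contacts, apps, and synonym tables).
public class SuggestionLayer {

    public static List<String> suggest(String typed, List<String> completions,
                                       List<String> otherSources, int maxComplete) {
        LinkedHashSet<String> layer = new LinkedHashSet<>();
        String fragment = typed.toLowerCase();
        int added = 0;
        for (String c : completions) {              // Auto-Complete section
            if (c.toLowerCase().startsWith(fragment) && added < maxComplete) {
                layer.add(c);
                added++;
            }
        }
        for (String s : otherSources) {             // Auto-Suggest section
            if (s.toLowerCase().contains(fragment)) {
                layer.add(s);
            }
        }
        return new ArrayList<>(layer);
    }
}
```

The ordering matters: literal completions of what the person typed appear first, and the “freer” suggestions from other sources follow, mirroring the two-section layout described next.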
Google Android search is a great example of the combined pattern, splitting the suggestions layer into two sections: first providing three auto-complete ideas and then auto-suggesting some contacts and apps that can be found on the phone (see Figure 7-4).
Any time there is a keyword query entry box, Auto-Suggest and Auto-Complete are both great patterns to implement. As search expert Marti Hearst reports in her book Search User Interfaces (Cambridge University Press, 2009), these features generally rate high on usability and work well with other user interface (UI) patterns.
For most people, typing—especially on the mobile device—is tedious and prone to errors. Generally, the less typing you do on the phone, the better. Therefore, any UX pattern that can assist a person in entering information is a big win.
Auto-Complete and Auto-Suggest help reduce errors and increase satisfaction in multiple ways.
Both patterns can also draw from many other sources to improve the quality of the suggestions.
With the variety of common names for dog breeds (and difficulty of spelling them) it’s easy to envision a useful combination of the Auto-Suggest and Auto-Complete layer for the Pet Shop app, as shown in Figure 7-5.
In this simple example, the person types in Mas, and the suggestions layer presents the Auto-Complete options Massive and Mastiff as possible query completions, thereby forestalling the common misspelling Mastif, which would have likely resulted in zero results. In the same suggestions layer, Auto-Suggest also kicks in with English Mastiff, Neapolitan Mastiff, and an interesting keyword variation Bullmastiff, a popular Mastiff breed that the person may not have thought of using as a query.
Mastiff is also a generally accepted synonym for a query “large guard dog,” so the auto-suggest layer can expand the original query by suggesting a category Guard Dogs, which can expand into a number of related breeds the person might not have thought of originally, such as Doberman, Rottweiler, American Bulldog, and so on. Both Auto-Suggest and Auto-Complete automatically scope the suggestions using a controlled vocabulary with a preset list of recommended search terms that match common tasks the app supports.
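The category expansion described above could be sketched with a small preset synonym table. The breeds and category names below are illustrative assumptions, not a real app’s controlled vocabulary:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Sketch of category expansion over a controlled vocabulary: besides
// completing "mas" to breed names, the suggestions layer maps a matched
// breed to a broader category ("Guard Dogs") from a preset table.
public class CategoryExpander {

    static final Map<String, String> BREED_TO_CATEGORY = Map.of(
            "mastiff", "Guard Dogs",
            "rottweiler", "Guard Dogs",
            "beagle", "Hounds");

    // Returns the categories suggested for a typed fragment.
    public static Set<String> categoriesFor(String fragment) {
        Set<String> categories = new TreeSet<>();
        for (Map.Entry<String, String> e : BREED_TO_CATEGORY.entrySet()) {
            if (e.getKey().startsWith(fragment.toLowerCase())) {
                categories.add(e.getValue());
            }
        }
        return categories;
    }
}
```

The category entry then becomes one more row in the suggestions layer, letting the person jump from a single breed to the whole related set.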
Tablet auto-suggestions represent a different use case from auto-suggestions on mobile phones. In principle, large tablets do support mobile activities; in practice, the mobility pattern for a typical consumer large tablet is the area between the refrigerator and the couch, as user researcher Marijke Rijsberman explains in her perspective “A Fine Line: The iPad As a Portable Device,” which appears in my first book, Designing Search (Wiley, 2011). Simply put, it is more common for large tablets to be used as casual, “lean back” devices.
Typing on large tablets is easier and less error prone, so they are closer to desktops and can use the same auto-suggestion database as a desktop web application. Also, one-tap auto-suggestions that jump directly to a different app are not as important on large tablets as they are on mobile devices because people on tablets are typically not in as much of a hurry and are less likely to mind a few additional taps, as long as it’s clear that they are progressing toward their goal. Also, local results are generally not as important as they are on mobile devices; however, they should definitely be included.
Note that this does not necessarily apply to mid-size 7-inch tablets and note-tablet hybrids (refer to Chapter 3, “Android Fragmentation”). These smaller tablet devices are at once more mobile and harder to type on than their large counterparts. For the purposes of this pattern, these smaller tablet devices can be treated as mobile phones, and you should design for them accordingly.
Finally, another consideration is the interface element. In mobile devices, the auto-suggestions layer often occupies the entire page, whereas on a tablet auto-suggestions are presented in a popover layer occupying only a small part of the screen. (For more on tablet design patterns, see Chapter 14, “Tablet Patterns.”)
If you do provide a custom auto-suggest layer (which is highly recommended), remember to turn off the device’s own auto-suggest feature.
Remember that mobile phones are a different class of device. They may require a completely different auto-suggest approach (one such mobile-only approach is described in the next pattern, 7.3, “Tap-Ahead”). Mobile auto-suggestions are prioritized differently because they are meant to respond to different needs. Mobile devices need to give higher weight to auto-suggestions based on on-board sensors that are only available on mobile devices. For example, local auto-suggestions, previous mobile search history, and category browsing (for example, Guard Dogs, as described in the Pet Shop example) need to be higher on the list than typical desktop web auto-suggest options, which are mainly controlled vocabulary substitutions.
People misspell things differently on the desktop web and on tablets with full keyboards than on smaller mobile devices. Mobile misspellings arise mainly from fat-fingering, not from common spelling misconceptions. Ideally, this means using and maintaining a separate database of mobile auto-corrections that takes the unique nature of mobile keyboards into account.
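One way such a mobile-specific correction might work is by matching fat-finger substitutions against physically adjacent keys, rather than against a dictionary of common spelling mistakes. The following Java sketch is a hypothetical illustration; the adjacency table covers only a few keys:

```java
import java.util.Map;

// Sketch of a mobile-specific correction: check whether the typed word
// differs from a vocabulary word by exactly one key that is physically
// adjacent on the QWERTY keyboard (a "fat-finger" substitution).
public class FatFingerMatcher {

    // Partial adjacency table, for illustration only.
    static final Map<Character, String> ADJACENT = Map.of(
            'a', "qwsz",
            's', "awedxz",
            'd', "serfcx",
            'f', "drtgvc");

    public static boolean fatFingerMatch(String typed, String word) {
        if (typed.length() != word.length()) return false;
        int diffs = 0;
        for (int i = 0; i < typed.length(); i++) {
            char t = typed.charAt(i), w = word.charAt(i);
            if (t != w) {
                String adj = ADJACENT.get(w);
                if (adj == null || adj.indexOf(t) < 0) return false;
                diffs++;
            }
        }
        return diffs == 1; // exactly one adjacent-key slip
    }
}
```

A desktop spelling corrector would instead weight phonetic confusions (“Mastif” for “Mastiff”), which is exactly why the two databases diverge.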
Tap-Ahead implements auto-suggest one word at a time, through step-wise refinement, creating a kind of keyword browsing.
Instead of trying to guess the entire query at the outset and offering the best one-shot replacement the way the desktop web does, Tap-Ahead guides the mobile auto-suggest interface through the guessing process one phrase or keyword at a time.
This is how it works: When the searcher enters a few characters, the auto-suggest function offers a few query suggestions. At this point the searcher has two choices: tap a suggestion to run that search, or tap the diagonal arrow next to a suggestion to accept that keyword and continue building the query.
By giving the searcher the ability to “build” the query instead of typing it, the interface offers a much more natural, flexible, and robust auto-suggest method that’s optimized to solve low bandwidth and fat-finger issues people experience on mobile devices. Using the Tap-Ahead interface, customers can quickly access thousands of popular search term combinations by typing just a few initial characters.
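The “build the query” interaction can be modeled in a few lines. This is a hypothetical sketch of the interaction logic only, not a real Android implementation; the method names are made up:

```java
// Minimal model of Tap-Ahead: tapping the diagonal arrow copies the
// suggestion into the search box (with a trailing space) so the searcher
// can keep building the query; tapping the suggestion itself would run
// the search instead.
public class TapAhead {

    private String query = "";

    // The searcher taps the diagonal arrow next to a suggestion:
    // the suggestion replaces the current fragment and stays editable.
    public void tapArrow(String suggestion) {
        query = suggestion + " ";
    }

    // The searcher types characters on the keyboard.
    public void type(String characters) {
        query += characters;
    }

    public String currentQuery() {
        return query.trim();
    }
}
```

The key design choice is that the arrow never commits the search; it only shortens the path to the next keyword, so the searcher stays in control of the query at every step.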
An excellent example of this pattern is the Android native search (see Figure 7-6). As you can see from the following example, the Tap-Ahead pattern offers an excellent alternative to typing longer multi-keyword queries.
In this case, by tapping the diagonal Tap-Ahead arrow, the searcher could enter the complex query “harry potter spells app” by typing only four initial characters (harr) and tapping the diagonal arrow two times. A traditional one-shot auto-suggest interface is unlikely to offer this entire fairly unusual phrase as an auto-suggestion, so the customer would likely have to type most, if not all, of the 23 characters of the query.
Use the Tap-Ahead pattern anywhere the auto-suggest is used outside a one-shot controlled vocabulary auto-suggestion and where longer, multistep, multi-keyword queries offer an advantage and create a better set of results.
In contrast to desktop web search, auto-suggest on mobile devices is subject to two unique limitations: It’s harder to type on a mobile device, and signal strength is unreliable. Tap-Ahead solves both issues in an elegant, minimalist, and authentically mobile way. Tap-Ahead enables the mobile auto-suggest interface to maintain flow and achieve a speed and responsiveness on tiny screens that is simply not possible with the traditional one-shot auto-suggestion interface.
Is there evidence of this? The author’s field research shows that in mobile environments people often select search suggestions they do not need, just to save typing a few characters. (Read more about this in “Mobile Auto-Suggest on Steroids: Tap-Ahead Design Pattern,” Smashing Magazine, April 27, 2011, http://www.smashingmagazine.com/2011/04/27/tap-ahead-design-pattern-mobile-auto-suggest-on-steroids/.) Tap-Ahead effectively resolves this issue.
In the few years that the Android platform has been around, keyword suggestions have evolved from being an exact match of Google’s web suggestions to being their own mobile-specific set. Yet you can do even better in your own app by using a simple trick: Offer Tap-Ahead one keyword at a time.
The advantage of the one-word-at-a-time Tap-Ahead refinement interface is that the refinement keywords can be loaded asynchronously for each of the 10 auto-suggestions while the customer selects the first keyword. Given that most queries are between two and three keywords long, and each successive auto-suggest layer offers 10 additional keyword suggestions, Tap-Ahead with step-wise refinement enables customers to reach between 100 (10 * 10) and 1,000 (10 * 10 * 10) of the top keyword combinations by typing only a few initial characters.
Anecdotally, although Tap-Ahead is useful, few people have discovered its power to cut through the tedium and fat-finger mistakes associated with typing. By offering keywords one at a time, the interface is optimized for the Tap-Ahead pattern, so discovery should increase, thereby also increasing satisfaction. Tap-Ahead one word at a time is an excellent variation of the pattern for e-commerce apps.
It’s easy to imagine Tap-Ahead being useful for entering complex keyword queries. However, it’s not as important with dog breeds, for example, which form a controlled vocabulary. There is scant advantage in providing a Tap-Ahead expansion from Mas to Mastiff to Neapolitan Mastiff because not many queries start with Mastiff. Instead, a simple, traditional one-shot controlled vocabulary auto-suggestion (Mas directly to Neapolitan Mastiff) is a more useful approach because it allows the user to pick up not only standard keyword queries such as English Mastiff and Neapolitan Mastiff, but also an interesting keyword variation, Bullmastiff, and the category expansion Guard Dogs (see the “7.2 Pattern: Auto-Complete and Auto-Suggest” section).
The owners of large tablets are generally more willing to type a longer query, and low bandwidth is usually less of a problem for them (many tablets are used with Wi-Fi only). Nevertheless, Tap-Ahead is no less useful on tablets, where less work is perceived as a good thing and tapping a suggestion is as easy as tapping the next character on the touch keyboard. There is also early evidence that tablet queries are slightly longer, which also speaks in favor of keyword browsing.
The best auto-suggestions on a mobile device come from a database that’s separate and distinct from the web auto-suggestions database. This is especially true for Tap-Ahead implemented one keyword at a time. Maintaining a separate database is more work, but that’s how important this function is to creating an excellent search experience!
At this point it’s not clear who, if anyone, holds a patent on this functionality. Google began using it first in its general device search and Google App for iPhone; although it is not used for single keyword browsing as of the time of this writing. Microsoft and Apple are both likely actively pursuing similar patents.
Search results are refreshed when the customer swipes down (pulls down) on the results. Slick and convenient, this is a great pattern to refresh results that update frequently.
The customer is presented with a long list of updates, typically sorted by Time: Most Recent First. The customer typically reviews the list of updates starting at the top, reading the most recent messages first. When the customer wants to load newer updates, he pulls down on the results list, performing a scroll-up gesture. Typically, a watermark appears that lets the customer know that when he pulls down and then releases the list, it will update. The system issues an update call, which is reflected by a visible progress indicator, followed by loading of the updated results.
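The pull gesture’s mechanics reduce to a small state machine: pulling past the top of the list arms the gesture, and releasing after the pull distance exceeds a threshold triggers the refresh. A toy Java sketch; the pixel threshold is an illustrative assumption:

```java
// Toy state machine for Pull to Refresh. Distances are in pixels.
public class PullToRefresh {

    public enum State { IDLE, PULLING, REFRESHING }

    private final int thresholdPx;
    private State state = State.IDLE;
    private int pulledPx = 0;

    public PullToRefresh(int thresholdPx) {
        this.thresholdPx = thresholdPx;
    }

    // Called while the finger drags the list down past its top edge;
    // this is where the "release to refresh" watermark would show.
    public void pull(int px) {
        state = State.PULLING;
        pulledPx += px;
    }

    // Called when the finger lifts: refresh only if pulled far enough,
    // otherwise snap back to the idle list.
    public State release() {
        state = (pulledPx >= thresholdPx) ? State.REFRESHING : State.IDLE;
        pulledPx = 0;
        return state;
    }
}
```

The threshold is what makes the gesture forgiving: a short accidental over-scroll snaps back without firing a network call.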
A great example of this pattern is the original application that helped popularize it: the Twitter mobile app. (See Figure 7-7.)
Use Pull to Refresh for long lists of search results or updates sorted by Time: Most Recent First. This pattern is especially useful for social update streams, active inboxes, and other long lists that update frequently.
The Pull to Refresh pattern uses a gesture instead of a button, which is always an excellent idea if you can communicate the needed gesture in an obvious and unobtrusive way. For Pull to Refresh, the gesture needed is the one the customer already uses to scroll the results up, so the call to action naturally “dissolves in behavior.”
When the customer first loads the results, he typically engages with the list by scanning or reading the newest updates or search results first, starting at the top, and scrolling down the list to read or scan more. When the customer has read far enough down and wants fresher results, he naturally scrolls back to the top and keeps scrolling until he reaches the top of the currently loaded results and scrolls past it. At that point he sees the watermark telling him what to do to load the newest results. This often happens naturally and in a state of flow, when the customer flicks rapidly to scroll the results quickly.
One other point makes this pattern feel natural. The action to pull down on the list “pulls” new data from the server, which is an excellent fit to the customer’s existing mental model. This is a fine example of using unique capabilities of mobile and tablet touch devices to expand on the desktop web model of buttons and links.
Most applications of this pattern deal with search results or updates sorted by Time: Most Recent First. Another possible application might be triggered by traversing space instead of time. For example, if your customer looks for points of interest as he moves through a city, the set of attractions within walking distance changes as he moves. Depending on the specific goal of the interaction, you can use the Pull to Refresh pattern to show the search results list sorted by Distance: Nearest First. This way, as the customer moves through the city, he can get an updated list of points of interest around him with a flick of a finger.
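Refreshing by distance amounts to re-sorting the result list against the customer’s current position on each pull. A minimal sketch, using plain x/y coordinates instead of real geolocation; the names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Sketch of "pull to refresh by distance": each refresh re-sorts the
// points of interest by distance from the customer's current position,
// Nearest First.
public class NearestFirst {

    public static List<String> refresh(double x, double y, Map<String, double[]> pois) {
        List<String> names = new ArrayList<>(pois.keySet());
        names.sort(Comparator.comparingDouble(
                n -> Math.hypot(pois.get(n)[0] - x, pois.get(n)[1] - y)));
        return names;
    }
}
```

In a real app the position would come from the device’s location sensors, and the sort would likely happen server-side; the customer-facing behavior is the same either way.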
One possible way to use Pull to Refresh in the Pet Shop app is to show updates of lost pets. If your pet is lost, for example, you can stay on top of the search with an updates page that tracks found pets in your neighborhood by periodically pulling to refresh the list. However, forcing the customer to do this may be stressful if the list keeps coming up empty or static. If the list is mostly static, instead consider using some sort of push alert (an alert that is loaded on the device and shown automatically, as opposed to being triggered by some action the customer explicitly needs to take) that notifies the customer when a new pet is found. Behind the scenes, a push alert is frequently implemented with polling technology, but from the standpoint of the customer, the alert is being “pushed” to him.
Pull to Refresh works just as well on medium- and large-size tablets as it does on mobile phones. The vertical space needed to communicate Pull to Refresh should grow proportionally to the size of the device and the extent of the gesture needed to scroll the results. Larger tablets require longer, more sweeping gestures with which to execute the “pull.”
Although it’s tempting to use this pattern due to its sheer coolness, Pull to Refresh is not recommended for the majority of search results, which deal with mostly static content. It is simply not satisfying to execute a pull and release and get the same data, and the watermark at the top of the list becomes chartjunk: a useless distraction. Another counter-indication for Pull to Refresh is lists sorted in ways that do not lend themselves to rapidly updated content, such as Best Match, Price, and so on.
Here’s another thing to keep in mind: The Pull to Refresh pattern is patented. That’s right; Twitter currently holds the patent on this design. Although it’s unlikely that Twitter would go after anyone other than a direct competitor using this pattern, it’s an important caveat to keep in mind if you plan to use it in your app.
Search is an option that can be accessed from the navigation bar menu.
To do the search, the user must tap the menu button in the phone’s navigation bar (that also houses the Back, Home, and Recents buttons) and then select the Search option. After Search has been tapped, the resulting page may show one or more of the following: saved searches, search refinement options, popular searches, nearby locations, and so on.
In the Amazon app (see Figure 7-8), the customer accesses the search feature by tapping the magnifying glass in the menu located in the navigation bar.
The resulting Search page shows the previous query and a list of alternative query entry mechanisms, in this case a picture or a barcode that the customer can scan with an on-board camera. The menu is opened from the phone’s navigation bar, which has been dynamically modified to add the app menu function.
Despite being used by some of today’s leading apps, this pattern is now largely deprecated. Most of the native Google apps in Android 4.0 have a dedicated Search button on the app’s action bar or in the overflow menu (see the “7.6 Pattern: Search from Action Bar” section later in this chapter). Search from Menu is a transitional pattern that can still be used for a short time (or at least until the Android 4.0 Police show up) as a way to bridge apps in older Android versions with those in Android 4.0.
This is a popular pattern descended from older Android OS implementations, which recommended that the app’s menu button always be present in the device’s navigation bar. This handy pattern enables designers to hide search, along with most of the rest of the navigation, in the navigation bar menu, which often eliminates the need for an additional action bar. This provides the advantage of a simple interface and “taller” vertical space so that more of the screen is devoted to products or content.
Some older Android implementations, most notably those on the Motorola and LG hardware, provide a special dedicated hardware accelerator button for search. Tapping this button is the equivalent of tapping the menu button in the navigation bar and selecting Search from that menu.
This dedicated Search button has been removed from the latest hardware designed to run Android 4.0. You can speculate as to what this means long term, but in the immediate Android future, Search from Menu and Search from Action Bar search design patterns appear to take precedence over the dedicated hardware button.
In implementing this pattern with the Pet Shop app, there are two options of what to put on the Search page. One option is to provide alternative input methods (refer to the Amazon app shown in Figure 7-8). Other popular options include previous searches and search refinements, such as filtering or sorting. Figure 7-9 shows previous searches.
When showing alternative query entry mechanisms such as barcode scan, picture, voice, NFC, and so on, recent previous searches can be shown as a single grouped button (Recent Searches), although this is generally less effective than actually listing the previous queries. Whatever strategy you decide to use, be sure to highlight (select) the current query as shown, or provide an X or Clear button so that starting a new search is easy for the searcher.
Tablets do not generally need to use this pattern because there is plenty of room to install a dedicated search box or use the Search from Action Bar pattern instead.
Also, the Search from Menu pattern is ergonomically inferior to most other tablet patterns of search implementation because the menu button moves around constantly. In portrait mode the tablet’s navigation bar is on the bottom of the device, which makes it generally awkward to access a menu from a normal tablet viewing position. (Read more about ergonomics in Chapter 3, “Android Fragmentation”.)
In addition to this pattern being deprecated in Android 4.0, using Search from Menu can lead to an awkward separation of the keyword query from the refinement tools. See the “7.9 Antipattern: Separate Search and Refinement” section.
The customer can access search via a dedicated button on the app’s action bar.
The Search button (usually styled as a standard Android magnifying glass icon) is shown on the top or bottom action bar. After the user taps Search, the resulting page shows one or more of the following: saved searches, search refinement options, popular searches, nearby locations, and so on.
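On Android 4.0, one common way to implement this pattern is a menu resource item with the `showAsAction` attribute. The sketch below assumes a `SearchView` action view; the id and title resource names are hypothetical:

```xml
<!-- res/menu/main.xml (hypothetical file name) -->
<menu xmlns:android="http://schemas.android.com/apk/res/android">
    <!-- Shown as an action bar icon when there is room; otherwise the
         system moves it into the overflow menu automatically -->
    <item
        android:id="@+id/menu_search"
        android:title="@string/menu_search_title"
        android:icon="@android:drawable/ic_menu_search"
        android:showAsAction="ifRoom|collapseActionView"
        android:actionViewClass="android.widget.SearchView" />
</menu>
```

The `ifRoom` flag is what produces the graceful fallback on small screens: when the action bar runs out of space, the same item simply reappears under the overflow menu.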
Google Plus offers an excellent example of this pattern (see Figure 7-10).
Google Plus offers a dedicated Search button on the top action bar. Tapping the Search button navigates the user to the dedicated tabbed search page, with two search subdomains, Posts and People, displayed as tabs. Tabs are a common pattern in search, as discussed in Chapter 9, “Avoiding Missing or Undesirable Results.”
Another example of the dedicated Search button in the action bar is in the Android Messaging app.
In the Messaging app, the Search button is in the middle of the split action bar, which is at the bottom of the screen. Inconsistent? Sure. But relative freedom of placement of controls on the screen is a large part of the Android DNA (refer to Chapter 2, “What Makes Android Different”).
Any time you have an action bar in your app that has some space on it and search is important to your customers, this pattern is a great choice. Ergonomically, placing the Search button on the bottom of the split action bar makes it easier to access the function one-handed.
Although I am not aware of any official standing on the matter, it seems that the Google Android team has made a real effort to generally replace the Search from Menu pattern with the Search from Action Bar pattern, at least in native Google apps in Android 4.0. This is a strong signal that search remains important at Google. If search is likewise important to you, this pattern is an excellent choice and is now more or less “official” (to the extent that anything in Android can be considered official).
When the app’s screen real estate shrinks due to the size of hardware that runs it, some action bar functions may move into the overflow menu, as discussed in Chapter 1, “Design for Android: A Case Study.” In this case, the search function shown on the action bar might be forced into an overflow menu as well. To access the search function, the customer will have to tap the overflow menu and select Search—pretty straightforward.
Contrast the Search from Action Bar pattern shown in Figure 7-12 with the Search from Menu pattern referred to in Figure 7-9.
Both patterns enable access to the search page from anywhere in the application and use the same search page design. However, with the Search from Action Bar pattern, getting to the search page takes a single tap on the dedicated Search button on the action bar rather than the two taps required by the Search from Menu pattern. Search from Action Bar saves an extra tap and surfaces search much more prominently in the mind of the customer. There is a drawback, however: using this pattern adds an action bar, which takes precious pixels away from the vertical space available for viewing content and products.
This is the standard search pattern to use in tablet apps. However, if you use the standard top action bar layout that places the search icon somewhere close to the middle of the action bar (refer to the Messaging app in Figure 7-11), your customers may get a severe case of what Josh Clark has dubbed “Tablet Elbow” if they must tap this button often (read more in Chapter 3). A better placement of this button is on the right or left nav bars, which run vertically along the edges of the device (see Chapter 14 to find out more about tablet-specific patterns).
Similar to Search from Menu, Search from Action Bar can also lead to an awkward separation of the keyword query from the refinement tools. See the “7.9 Antipattern: Separate Search and Refinement” section.
The search box is placed on top of the search results and does not scroll with them.
The search box sits on top of the search results, which enables customers to easily edit and fine-tune the keyword query. Often, a refinement (filter) button is placed to the left or right of the search box.
A great example of this pattern is Yelp, as shown in Figure 7-13.
The dedicated search box in Yelp sits on top of the search results and does not scroll when the search results are scrolled. In addition, search tools, such as Filter and Map, are located on the same line as the search box.
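A layout along these lines can be sketched as follows (the ids, hint, and button label are illustrative placeholders, not Yelp’s actual implementation). The horizontal row holding the query box and the Filter button stays fixed; only the results list beneath it scrolls:

```xml
<!-- Sketch of a Dedicated Search layout. The search row is a sibling
     of the ListView, not part of it, so it never scrolls away. Giving
     only the ListView a layout_weight makes it fill the remaining
     vertical space. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:orientation="horizontal">

        <EditText
            android:id="@+id/search_query"
            android:layout_width="0dp"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:hint="@string/search_hint"
            android:imeOptions="actionSearch"
            android:inputType="text" />

        <Button
            android:id="@+id/filter_button"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:text="@string/filter" />
    </LinearLayout>

    <ListView
        android:id="@+id/search_results"
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1" />
</LinearLayout>
```

Because the original query stays in the `EditText`, editing and re-running the search requires no extra navigation.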
For apps in which search is a key part of the functionality, the Dedicated Search pattern is an excellent choice. The Dedicated Search pattern shows clearly which keyword query yielded the search results and provides convenient, dedicated tools to change the query and access other refinements.
As Peter Morville and Jeff Callender so eloquently stated in their book Search Patterns (O’Reilly, 2010), “What we find changes what we seek.” Nowhere is this statement truer than in the mobile space, where typing is awkward and people are highly distracted with multitasking. People prefer to start general and refine rapidly, and changes to the keyword query are part of that refinement. The Dedicated Search pattern addresses this need with unmatched simplicity and elegance. The original keywords that the searcher types are always visible on top of the results and are retained in the search box for easy editing.
If additional filters and sort options are used with the keyword query, the Dedicated Search pattern combines well with the Filter Strip pattern that shows filters and query refinements (see Chapter 8, “Sorting and Filtering”). Together, these two patterns show the searcher the entire contents of a complex query.
Figure 7-14 shows the implementation of the Dedicated Search pattern.
This is a fantastic pattern for the Pet Shop app if you expect customers to edit their queries often.
Tablets are much less screen space–challenged than mobile phones. For most apps that use search, having a dedicated search box is an excellent idea. Simply placing a dedicated search box on top of every page in the app implements the Dedicated Search pattern nicely.
Having a dedicated search box on top of the page does not mean that you need to give up the person’s history of previous searches or auto-complete functionality. Remember that previous searches can be easily presented in a layer under the search box (refer to the “7.2 Pattern: Auto-Complete and Auto-Suggest” section).
On smaller devices this pattern takes up a fair bit of vertical space (20 to 30 percent of the total screen space), which significantly reduces the number of products or the amount of content that can be shown to the customer. The Dedicated Search pattern is akin to reducing the number of books that can be shown on a bookstore shelf because of the giant sign that tells you the name of the section. It’s not always a bad thing, but it is something to keep firmly in mind.
The search box is on top of the search results and is part of the content page, so it scrolls with the rest of the content. This pattern is an alternative to the Dedicated Search pattern.
The basic premise of this pattern is that the search box is part of the content page. When the page first loads, the search box is shown to the customer. As the customer scrolls the content page down, the search box simply scrolls out of view with the rest of the content. To search, the customer must scroll back to the top of the page.
The Twitter app makes an effort to have a consistent interface on iOS and Android, which makes it a good example of the Search in the Content Page pattern (see Figure 7-15).
This pattern works well with the Pull to Refresh pattern described earlier.
Any time you have a screen that is content-centric but might occasionally need to be searched, Search in the Content Page is a great option. However, make sure that your customers want to run only keyword queries and that the sort order is obvious and does not need to be changed. Ideally, people should never need to refine the query, because this pattern generally makes search refinement awkward.
This pattern is popular in iOS but is currently seldom used in Android. That’s a shame because it’s ideal for certain applications. In particular, content-centric screens such as name lists or activity streams such as updates, which are normally browsed but not searched, make great candidates for use of this pattern. The Search in the Content Page pattern makes search easily available but does not take up permanent screen space the way the Dedicated Search pattern does.
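One lightweight way to get this behavior on Android (a sketch only; the id and hint string are illustrative) is to define the search box as its own small layout and attach it to the list with `ListView.addHeaderView()`. Because the header is part of the list content, the box scrolls out of view along with everything else:

```xml
<!-- res/layout/list_search_header.xml — sketch of a search box meant
     to be inflated and attached to the list via
     ListView.addHeaderView(). As part of the scrolling list content,
     it disappears as the customer scrolls down the page. -->
<EditText xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/content_search"
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:hint="@string/search_hint"
    android:imeOptions="actionSearch"
    android:inputType="text" />
```

Note that `addHeaderView()` must be called before the adapter is set, so the header belongs to the list from the moment the page first loads.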
One modification popular in iOS but virtually unknown in Android is Scroll to Search. When a content page loads, a search box sits hidden above the top of the page. Pulling the page down reveals the search box, which searches within the content on the page. After the query runs, the resulting page shows the search box with the query.
This pattern is not suitable for e-commerce because it makes refinement awkward. However, you can use it for an update stream or Pet News section, where search is likely to be infrequent and made up of keyword queries (see Figure 7-16).
This pattern is all about saving space, which makes it largely superfluous on tablets, where screen space is generally plentiful. However, it still has its place because it is easy to implement.
This pattern is currently rare on the Android platform but is quite widespread on iOS. The reasons for this are not clear. One possibility is that iOS enables a quick scroll to the top of the page (which thereby “jumps” to the search box) with a single tap in the middle of the top status bar. This single-tap jump-to-top shortcut is unavailable on Android because the top of the screen is normally occupied by the Notifications strip, which responds to the pull-down touch gesture. This can make frequent use of search in Search in the Content Page implementations problematic on Android, because the person must deliberately scroll back to the top of the page “the long way” to reveal the search box.
For the Scroll to Search modification of this pattern described in the “Other Uses” section, the reason could be even simpler but more insidious. Although at this time I am not aware of any actual limitation, Apple could be holding a patent on this pattern, so Android apps may be generally prevented from using it (or it could simply be more popular on iOS from lack of screen space, which is less of a problem on larger Android devices). If you’re in doubt, use the simple version of this pattern implemented by Twitter, as described in the “Example” section.
An awkward experience results when the keyword query search box is two or more taps removed from the other search refinements.
Any time the keyword query and multiple complex refinement options are separated, you must pay attention. Although this shows up frequently on iOS, this antipattern is especially an issue on Android because of the widespread use of dedicated search pages, the result of the Search from Menu and Search from Action Bar patterns.
It’s easy to mess up when blindly copying successful apps and applying a slightly different paradigm. For example, the Amazon app manages to pull off using Search from Menu and a separate keyword search page successfully by using a simple filter drop-down located in-page with the rest of the content (refer to Figure 7-8).
Contrast the Amazon app search and filter scheme with that in TheFind, as shown in Figure 7-17.
The refinement page is a dedicated page with multiple text fields. One thing is conspicuously absent: the keyword search box. To change the keywords in the query, the user must tap the Menu button and then tap Search. This separation is completely artificial and therefore awkward, and it should be avoided.
In most people’s minds, search is an iterative activity. (Recall Peter Morville’s quote, “What we find changes what we seek.”) So in the mind of searchers, there is little separation between keywords, filters, and sort options. These are all tools to find what they want. Separate Search and Refinement is an antipattern precisely because it introduces awkward separation between the keyword query and everything else. This is neither wanted nor needed. Separate Search and Refinement breaks the association between different parts of the query and makes it difficult to find what you want and stay in the flow.
A better pattern called Parallel Architecture, or any of the simple faceted search patterns covered in Chapter 8, offers a more usable configuration.
Although often harder to recognize, the Separate Search and Refinement antipattern also occurs when search is presented in one way on the homepage and in a different way on a separate search page. For example, TheFind app also offers a different search from the homepage, as shown in Figure 7-18.
Although the homepage search has similar functionality at first glance (neither search has any refinements, for example), it lacks the previous-searches history widget that the dedicated search page has. This “separate homepage search” antipattern is a child of the Separate Search and Refinement antipattern. It can be daunting to customers, who quickly get lost.
Unfortunately, this situation happens quite often and is much harder to recognize and prevent. Two great solutions for this issue are the Parallel Architecture pattern, where the homepage is the basic search page, and the Dedicated Search pattern, which presents a consistent search box and functionality on the homepage and search results pages.
In general, it’s a good idea to offer the same basic search functionality everywhere you show a search box. If you offer history and auto-suggest in one place, do it everywhere you use the basic search box. Also, avoid having multiple places for search that differ only slightly; that makes it too easy for people to get lost, become confused, and abandon search altogether.