Appendix C

Glossary

A

A/B testing A percentage of users is shown one design (A) while the rest see another version (B), and the performance of the two is compared via log analysis. Designs can be a variation on a live control (typically the current version of your product) or two entirely new designs.
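
For illustration, a minimal sketch (not part of the original glossary) of how logged results from a hypothetical A/B test might be compared with a two-proportion z-test in Python; the conversion counts and visitor totals are invented:

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 465]   # hypothetical conversions for design A and design B
visitors = [10000, 10000]  # hypothetical users shown each design

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
# A small p-value (e.g., < 0.05) suggests the difference between A and B
# is unlikely to be due to chance alone.
```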

Accessibility The degree to which a product, device, service, or environment is available to as many people as possible.

Account manager Within large corporations, an account manager is often someone who is devoted to managing a customer’s relationship with his or her company. For example, if IXG Corporation is a large customer of TravelMyWay.com, an account manager would be responsible for ensuring that IXG Corporation is satisfied with the services they are receiving from TravelMyWay.com and determining whether they require further services.

Acknowledgment tokens Words like “oh,” “ah,” “mm hm,” “uh huh,” “OK,” and “yeah” carry no content. They reassure participants that you hear them, understand what is being said, and want them to continue.

Acquiescence bias The tendency to agree with whatever the experimenter (e.g., in interviews, surveys, evaluations) or group (e.g., focus group) suggests, regardless of one’s own true feelings. This may be a conscious decision because a participant wants to please the experimenter (or group), or it may be unconscious.

Affinity diagram Similar findings or concepts are grouped together to identify themes or trends in the data.

Analysis of variance (ANOVA) An inferential statistical method in which the variation in a set of observations is divided into distinct components.
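
As a hedged sketch (assuming Python with SciPy and invented task-time data), a one-way ANOVA comparing three designs might look like this:

```python
from scipy import stats

# Hypothetical seconds-on-task for three designs (invented data).
times_a = [42, 51, 39, 47, 44]
times_b = [55, 60, 52, 58, 61]
times_c = [43, 46, 41, 49, 45]

# One-way ANOVA: is the variation between designs larger than the
# variation within each design?
f_stat, p_value = stats.f_oneway(times_a, times_b, times_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```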

Anonymity Not collecting any personally identifying information about a participant. Since we typically conduct screeners to qualify participants for our studies, we know their names, e-mail addresses, etc. so participants are not anonymous.

Antiprinciples Qualitative description of the principles your product does not intend to address.

Antiuser Someone who would not buy or use your product under any circumstances.

Artifacts Objects or items that users use to complete their tasks or that result from their tasks.

Artifact notebook A book that a participant uses to collect all the artifacts during a diary study.

Artifact walkthrough Typically a session in which stakeholders step through the artifacts collected to understand the users’ experience.

Asynchronous Testing and/or communication that does not require a participant and a researcher to be working together at the same time. For example, e-mail is an asynchronous form of communication while a phone call is a synchronous method of communication. See also Synchronous.

Attitudinal data How a participant or respondent feels (as opposed to how he or she behaves).

B

Behavioral data How a participant behaves (as opposed to how he or she feels).

Benchmarking Study to compare the performance of your product or service against that of a competitor or a set of industry best practices.

Beneficence Concept in research ethics that any research you conduct must provide some benefit and protect the participant’s welfare.

Binary questions Questions with two opposing options (e.g., yes/no, true/false, agree/disagree).

Bipolar constructs Variables or metrics that have a midpoint and two extremes (e.g., Extremely satisfied to Extremely unsatisfied).

Brainstorming A technique by which a group attempts to find a solution for a specific problem or generate ideas about a topic by amassing all the ideas together without initial concern of their true worth.

Branching logic Presenting questions based on responses to earlier questions.
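
A minimal sketch of what branching logic might look like in code; the question IDs and answers are hypothetical and not tied to any particular survey tool:

```python
def next_question(question_id: str, answer: str) -> str:
    """Return the next question to show, based on the previous answer."""
    # Only respondents who say they use the feature see the follow-up.
    if question_id == "uses_feature" and answer == "yes":
        return "how_often"
    return "demographics"

print(next_question("uses_feature", "yes"))  # -> how_often
print(next_question("uses_feature", "no"))   # -> demographics
```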

Brand-blind study Study in which the product branding is removed to avoid biasing the participant or to protect product confidentiality.

Burnout In the case of a longitudinal study, participants becoming fatigued by or tired of participating over time.

C

Cache Location where information is stored temporarily. The files you request are stored on your computer’s hard disk in a cache subdirectory under the directory for your browser. When you return to a page you have recently visited, the browser can retrieve the page from the cache rather than the original server. This saves you time and saves the network burden of some additional traffic.

Café study Research study (e.g., usability evaluation) that takes place in a café or other gathering spot in the wild to recruit people at random for brief studies (typically 15 minutes or less).

Card sorting Research method in which participants group concepts or functionality based on their mental model. The data are analyzed across many participants to inform a product’s information architecture.

CDA See Confidential disclosure agreement.

Census A survey that attempts to collect responses from everyone in your population, rather than just a sample.

Central tendency The typical or middle value of a set of data. Common measures of central tendency are the mean, median, and mode.

Chi-squared An inferential statistical test commonly used for testing independence and goodness of fit.
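
For example, a chi-squared test of independence on a hypothetical 2 × 2 table of task success and failure counts (a sketch using SciPy; the counts are invented):

```python
from scipy.stats import chi2_contingency

# Rows: design A, design B. Columns: successes, failures (invented counts).
observed = [[34, 16],
            [22, 28]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```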

Click stream Sequence of pages requested as a visitor explores a website.

Closed sort Card sort in which participants are given a set of cards and a set of predetermined categories and asked to place the cards into those preexisting categories.

Closed-ended question A question that provides a limited set of responses for participants to choose from (e.g., yes/no, agree/disagree, answer a/b).

Cluster analysis Method for analyzing card sort data by calculating the strength of the perceived relationships between pairs of cards, based on the frequency with which members of each possible pair appear together.
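
A hedged sketch of this analysis for card sort data, assuming Python with NumPy, SciPy, and Matplotlib; the five cards and three participants’ piles are invented:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

cards = ["Flights", "Hotels", "Car rental", "My bookings", "Payment"]
# Each row is one participant's sort; cards with the same label share a pile.
sorts = np.array([[0, 0, 0, 1, 1],
                  [0, 0, 1, 2, 2],
                  [0, 0, 0, 1, 2]])

# Count how often each pair of cards landed in the same pile, then convert
# to a dissimilarity (distance) between 0 and 1.
n = len(cards)
co_occurrence = sum((row[:, None] == row[None, :]).astype(float) for row in sorts)
distance = 1 - co_occurrence / len(sorts)

# SciPy's linkage expects the condensed (upper-triangle) distance vector.
tree = linkage(distance[np.triu_indices(n, k=1)], method="average")
dendrogram(tree, labels=cards)
plt.show()
```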

Cognitive interference In group idea generation, the tendency for one person’s ideas to interfere with another person’s ability to generate his or her own ideas.

Cognitive interview testing This involves asking the target population to describe all the thoughts, feelings, and ideas that come to mind when examining specific questions or messages, and to provide suggestions to clarify wording as needed. Typically used to evaluate a survey prior to launch to determine if respondents understand the questions, interpret them the way you intended, and measure how long it takes to complete the survey.

Cognitive pretest See Cognitive interview testing.

Cognitive walkthrough A formative usability inspection method. It is task-based because it rests on the belief that people learn systems by trying to accomplish tasks with them, rather than by first reading through instructions. It is ideal for products that are meant to be walk up and use (i.e., no training required).

Cohen’s Kappa A measure of interrater reliability.
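
For instance, a sketch of computing kappa for two hypothetical observers who coded the same ten behaviors (using scikit-learn; the codes are invented):

```python
from sklearn.metrics import cohen_kappa_score

# Invented behavior codes assigned by two observers to the same ten events.
rater_1 = ["nav", "nav", "search", "error", "nav", "search", "error", "nav", "search", "nav"]
rater_2 = ["nav", "search", "search", "error", "nav", "search", "error", "nav", "nav", "nav"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance-level agreement
```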

Communication speed Whether one is speaking, writing, or typing, one can communicate an idea only as fast as he or she can speak, write, or type.

Competitive analysis List of the features, strengths, weaknesses, user base, and price point for your competitors. It should include first-hand experience with the product(s) but can also include user reviews and analysis from external experts or trade publications.

Confidence interval In statistics, a type of interval estimate of a population parameter: the range of values within which the value of the parameter lies with a specified probability. In other words, it reflects the amount of uncertainty you are willing to accept.
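
As a sketch (assuming Python with SciPy and an invented sample of task times), a 95% confidence interval for a mean can be computed from the t-distribution:

```python
import numpy as np
from scipy import stats

times = np.array([42, 51, 39, 47, 44, 55, 48, 43])  # invented seconds on task
mean = times.mean()
sem = stats.sem(times)  # standard error of the mean

low, high = stats.t.interval(0.95, df=len(times) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.1f}s, 95% CI = [{low:.1f}, {high:.1f}]")
```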

Confidential disclosure agreement (CDA) A legal agreement, which the participant signs and thereby agrees to keep all information regarding the product and/or session confidential for a predefined time.

Confidentiality The practice of protecting participants’ identity. In order to keep their participation confidential, do not associate a participant’s name or other personally identifiable information with his or her data (e.g., notes, surveys, videos) unless the participant provides such consent in writing. Instead, use a participant ID (e.g., P1, participant 1).

Confound A variable that should have been held constant but was accidentally allowed to vary (and covary) with the independent/predictor variable.

Conjoint analysis Participants are presented a subset of possible feature combinations to determine the relative importance of each feature in their purchasing decision making. It is believed that relative values of attributes considered jointly can be better measured than when considered in isolation.

Consent form A document that informs a participant of the purpose of the activity he or she is involved in, any risks involved, the expected duration, procedures, use of information collected (e.g., to design a new product), incentives for participation, and his/her rights as a participant. The participant signs this form to acknowledge that he or she has been informed of these things and agrees to participate.

Construct The variable that you wish to measure.

Context of use The situation and environment in which the task is conducted or product of interest is used.

Convenience sampling The sample of the population used reflects those who were available (or those that you had access to), as opposed to selecting a truly representative sample of the population. Rather than selecting participants from the population at large, you recruit participants from a convenient subset of the population. For example, research done by college professors often uses college students for participants instead of representatives from the population at large.

Correlation Relationship or connection between two or more variables.

Cost per complete The amount you must spend in order to get one completed survey.

Coverage bias Sampling bias in which certain groups of individuals are not represented in the sample, whether for personal reasons or because of the recruiting method (e.g., households without a landline are not included in a telephone survey).

Crowd sourcing In the case of data analysis, it is leveraging the services of groups of individuals not otherwise involved in the study to help categorize the study data.

Customer support comments Feedback from customers or users about your product or service.

D

Data retention policy An organization’s established protocol for retaining participant data.

Data-logging software Software that allows one to quickly take notes and automatically record data during a usability study.

Data saturation The point during data collection at which no new relevant information emerges.

Debrief The process of explaining a study to the participant once participation is complete.

Deep hanging out Coined by anthropologist Clifford Geertz in 1998 to describe the anthropological research method of immersing oneself in a cultural, group, or social experience on an informal level.

Demand curve analysis The relationship between the price of a product and the amount or quantity the consumer is willing and able to purchase in a specified time period.

Dendrogram A visual representation of a cluster analysis. Consists of many U-shaped lines connecting objects in a hierarchical tree. The height of each U represents the distance between the two objects being connected. The greater the distance, the less related the two objects are.

Descriptive statistics These measures describe the sample from your population (e.g., measures of central tendency, measures of dispersion). They are the key calculations of importance for closed-ended questions and can easily be calculated by any basic statistics program or spreadsheet.
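
For example, a minimal sketch using Python’s standard library on an invented set of 1-5 satisfaction ratings:

```python
import statistics

ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]  # invented Likert-scale responses

print("mean:  ", statistics.mean(ratings))             # central tendency
print("median:", statistics.median(ratings))
print("mode:  ", statistics.mode(ratings))
print("stdev: ", round(statistics.stdev(ratings), 2))  # dispersion
print("range: ", max(ratings) - min(ratings))
```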

Design thinking Human-centered design approach that integrates people’s needs, technological possibilities, and requirements for business success.

Desirability testing Evaluates whether or not a product elicits the desired emotional response from users. It is most often conducted with a released version of your product (or competitor’s product) to see how it makes participants feel.

Diary study Longitudinal study in which participants respond to questions, either in writing or via an app, at specified times of day.

Discussion guide List of questions, discussion points, and things to observe in an interview.

Double negatives The presence of two negatives in a sentence, making it difficult for the survey respondent or study participant to understand the true meaning of the question.

Double-barreled questions A single question that addresses more than one issue at a time.

Drop-off rate The rate at which respondents exit the survey or remote, unmoderated study before completing it.

Droplist Web widget that can expand to show a list of items to choose from.

E

Early adopters People who start using a product or technology as soon as it becomes available.

Ecological validity When a study mimics the real-world environment (e.g., observing users at the home or workplace).

ESOMAR European Society for Opinion and Marketing Research. It provides a directory of market researchers around the world.

Ethnography The study and systematic recording of the customs or behaviors of a people or culture.

Evaluation apprehension The fear of being evaluated by others. Individuals with evaluation apprehension may not perform a specific task or speak truthfully for fear of another’s negative opinion. The larger the group, the larger the effect.

Experience Sampling Methodology (ESM) Diary-like study in which participants are pinged at random several times a day for several days and asked about their experience or what they are doing/thinking/feeling right now. It provides a reliable measure of the events occurring in the stream of consciousness over time.

Expert review A usability inspection method in which experts (e.g., people with experience in usability/user research, subject matter experts), rather than actual end users, evaluate your product or service against a set of specific criteria. These are quick and cheap ways of catching the “low-hanging fruit,” or obvious usability issues, throughout the product development cycle.

Eye tracking study An evaluation method that utilizes an eye tracker to record participant fixations and saccades (i.e., rapid eye movements between fixation points) to create a heat map of where people look (or do not look) for information or functionality and for how long.

F

Feasibility analysis Evaluation and analysis of a product or feature to determine if it is technically feasible within an estimated cost and will be profitable.

Feature creep The tendency for developers to add more and more features into a product as time goes by without clear need or purpose for them.

Feature-shedding The tendency for developers to remove features from a product because of time constraints, limited resources, or business requirements.

Feedback form A questionnaire that does not offer everyone in your population an equal chance of being selected to provide feedback (e.g., only the people on your mailing list are contacted, it is posted on your website under “Contact us” and only people who visit there will see it and have the opportunity to complete it). As a result, it does not necessarily represent your entire population.

Firewall Computer software that prevents unauthorized access to private data on your computer or a network by outside computer users.

Focus troupe Mini-workshop in which dramatic vignettes are presented to potential users where a new product concept is featured merely as a prop but not as an existing piece of technology.

Formative evaluation Studies done early in the product development life cycle to discover insights and shape the design direction. They typically involve usability inspection methods or usability testing with low-fidelity mocks or prototypes.

Free-listing Participants write down every word or phrase that comes to their mind in association with a particular topic, domain, etc.

Frequency The number of times each response is chosen.

G

Gap analysis A competitive analysis technique in which your product/service is compared against a competitor’s to determine gaps in functionality. A value of “importance” and “satisfaction” is assigned to each function by end users. A single score is then determined for each function by subtracting the satisfaction from importance. This score is used to help determine whether resources should be spent incorporating each feature into the product.

Globalization The process of expanding one’s business, technology, or products across the globe. See also Localization.

Grounded theory A form of inquiry where the goal of the researcher is to derive an abstract theory about an interaction that is grounded in the views of users (participants). During this form of inquiry, researchers engage in constant comparison to examine data with respect to categories as they emerge.

Groupthink Within group decision-making procedures, it is the tendency for the various members of a group to try to achieve group consensus. The need for agreement takes priority over the motivation to obtain accurate knowledge to make appropriate decisions.

Guidelines A general rule, principle, or piece of advice.

Guiding principles The qualitative description of the principles the product stands by.

H

Hawthorne effect Participants may behave differently when observed. They will likely be on their best behavior (e.g., observing standard operating procedures rather than using their usual shortcuts).

HCI Acronym for human-computer interaction. Human-computer interaction is the field of study and practice that sits at the intersection of computer science and human factors. It is interested in understanding and creating interfaces for humans to interact successfully and easily with computers.

Heat map Visualization created from an eye tracking study showing where participants looked at a website, app, product, etc. The longer the participants’ gazes stay fixed on a spot, the “hotter” the area is on the map, indicated in red. As fewer participants look at an area or for less time, the “cooler” it gets and transitions to blue. Areas where no one looked are black.

Heuristic A rule or guide based on the principles of usability.

Hits The number of times a particular webpage is visited.

Human factors The study of how humans behave physically and psychologically in relation to particular environments, products, or services.

I

Incentive Gift provided to participants in appreciation for their time and feedback during a research study.

Incident diary Participants are provided with a notebook containing worksheets to be completed on their own. The worksheets may ask users to describe a problem or issue they encountered, how they solved it (if they did), and how troublesome it was (e.g., via a Likert scale). It is given to users to keep track of issues they encounter while using a product.

Inclusive design See Universal design.

Inference A statement based on your interpretation of facts (compare to observation).

Inferential statistics These measures allow us to make inferences or predictions about the characteristics of our population.

Information architecture The organization of a product’s structure and content, the labeling and categorizing of information, and the design of navigation and search systems. A good architecture helps users find information and accomplish their tasks.

Informed consent A written statement of the participant’s rights and any risks with participation presented at the beginning of a study. Participants sign this consent form saying they willingly agree to participate in the study.

Intercept surveys A survey recruitment technique in which individuals are either stopped in person while completing a task (e.g., shopping in the mall) or online (e.g., booking a ticket on a travel site). When conducted online, the survey typically pops up and asks the user if he or she would like to complete a brief survey.

Internationalization Process of developing the infrastructure in your product so that it can potentially be adapted for different languages and regions without requiring engineering changes each time.

Internet protocol (IP) This is the method or protocol by which data are sent from one computer to another on the Internet.

Internet service provider (ISP) A company that provides individuals or companies access to the Internet and other related services. Some of the largest ISPs include AT&T WorldNet, IBM Global Network, MCI, Netcom, UUNet, and PSINet.

Interrater agreement See Interrater reliability.

Interrater reliability The degree to which two or more observers assign the same rating or label to a behavior. In field studies, it would be the amount of agreement between observers coding the same user’s behavior. High interrater reliability means that different observers coded the data in the same way.

Interviewer prestige bias The interviewer informs participants that an authority figure feels one way or another about a topic and then asks the participant how he or she feels.

IP address Every computer connected to the Internet is assigned a unique number known as an Internet protocol (IP) address. Since these numbers are usually assigned in country-based blocks, an IP address can often be used to identify the country from which a computer is connecting to the Internet.

Iterative design Product changes are made over time based on user feedback or performance metrics to continually improve the user experience.

Iterative focus group A style of focus group where the researcher presents a prototype to a group and gets feedback. Then, the same participants are brought back for a second focus group session where the new prototype is presented and additional feedback is gathered.

L

Laws Rules set forth by the government with which everyone must comply, regardless of where they work. Laws vary by country.

Leading questions Questions that assume the answer and may pass judgment on the participant. They have the ability to influence a participant’s answers.

Likert scale A scale developed by Rensis Likert to measure attitudes. Participants are given a statement and five to seven levels along a scale to rate their agreement/disagreement, satisfaction/dissatisfaction, etc., with the statement.

Live experiments From an HCI standpoint, this is a summative evaluation method that involves comparing two or more designs (live websites) to see which one performs better (e.g., higher click-through rate, higher conversion rate).

Live polling An activity during a study in which participants are asked questions in order to get information about what most people think about something. Responses are collected and results are often displayed in real time.

Loaded questions Questions that typically provide a “reason” for a problem listed in the question. This frequently happens in political campaigns to demonstrate that a majority of the population feels one way or another on a key issue.

Localization Using the infrastructure created during internationalization to adapt your product to a specific language and/or region by adding in local-specific components and translating text. This means adapting your product to support different languages, regional differences, and technical requirements. But it is not enough to simply translate the content and localize things like currency, time, measurements, holidays, titles, standards (e.g., battery size, power source). You also must be aware of any regulatory compliance that applies to your product/domain (e.g., taxes, laws, privacy, accessibility, censorship).

Log files When a file is retrieved from a website, server software keeps a record of it. The server stores this information in the form of text files. The information contained in a log file varies but can be programmed to capture more or less information.

Longitudinal study Research carried out on the same participants over an extended period.

M

Margin of error The amount of error you can tolerate in your statistical analysis.
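
A common way to estimate it for a proportion from a simple random sample is z * sqrt(p(1 - p)/n); a sketch with invented values:

```python
import math

p = 0.5   # assumed proportion (0.5 is the most conservative choice)
n = 400   # hypothetical sample size
z = 1.96  # z-score for 95% confidence

margin = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error = +/-{margin:.1%}")  # about +/-4.9%
```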

Markers Key events to the participant that you can probe into for richer information.

Measures of association Statistics that allow you to identify the relationship between two survey variables (e.g., comparisons, correlations).

Measures of central tendency Descriptive statistics that tell us where the middle is in a set of data (e.g., mean, median, mode).

Measures of dispersion These statistics show you the “spread” or dispersion of the data around the mean (e.g., range, standard deviation, frequency).

Median A measure of central tendency. When data points are ordered by magnitude, the median is the middlemost point in the distribution.

Mental model A person’s mental representation or organization of information.

Mixer A video mixer/multiplexer will allow multiple inputs—from cameras, computers, or other inputs—to be combined into one mixed image. Some mixers will also allow creation of “picture in picture” (PIP) overlays. The output from a video mixer can be fed either directly into a screen (e.g., in the observation room) or into a recording device locally.

Mobile lab See Portable lab.

Moderator Individual who interacts with participant during a study.

Multimodal survey Conducting a survey via more than one mode (e.g., online, paper, telephone, in person) to increase the response rate and representativeness of the sample.

Multiple-choice questions Closed-ended questions that provide multiple responses for the participant to choose from.

Multivariate testing Follows the same principle as A/B testing but instead of manipulating one variable, multiple variables are manipulated to examine how changes in those variables interact to result in the ideal combination. All versions must be tested in parallel to control for extraneous variables that could affect your experiment (e.g., website outage, change in fees for your service).

N

N In statistics, the size of a population or sample. Traditionally, N refers to the size of the population and n to the size of the sample.

NDA See Nondisclosure agreement.

Negative user See Antiuser.

Nominal data Values or observations that can be assigned a code in the form of a number where the numbers are simply labels (e.g., male = 1, female = 2). You can count but not order or measure nominal data.

Nondisclosure agreement (NDA) A legally binding agreement that protects your company’s intellectual property by requiring participants to keep what they see and hear in your study confidential and to hand over the ownership of any ideas, suggestions, or feedback they provide.

Nonmaleficence In research ethics, the obligation that your research must not do harm. Harm can come not only from introducing a study or intervention but also from stopping one.

Nonprobability sampling Respondents are recruited from an opt-in panel that may or may not represent your desired population. There is not an equal chance (probability) for everyone in your user population to be recruited.

Nonresponder bias People who do not respond to surveys (or participate in studies) can be significantly different from those who do. Consequently, missing the data from nonresponders can bias the data you collect, making your data less generalizable.

O

Observation A statement of fact based on information obtained through one of the five senses (compare to an inference).

Observation guide A list of general concerns or issues to guide your observations in a field study—but it is not a list of specific questions to ask.

Older adult A person chronologically aged 65 years or more. This age is generally associated with declines in mental and physical capabilities and, in many developed countries, is the age at which people begin to receive pensions and social security benefits.

Omnibus survey Most large survey vendors conduct regular omnibus surveys that combine a few questions from many clients and send them to a broad sample of users on their panels. It is kind of like carpooling but instead of sharing a car, you are sharing a survey instrument. This is a cheap and efficient method if you have just a few questions you want to ask a general population.

Open sort Card sort in which participants are allowed to generate as many categories of information as they want, and name each of those categories however they please.

Open-ended question A question designed to elicit detailed responses and free from structure (i.e., you do not provide options for the participant to choose from).

Outlier A data point that has an extreme value and does not follow the characteristics of the data in general.

P

Page views Number of users who visit a specific webpage.

Paradata The information about the process of responding to the survey. This includes things like how long it took respondents to answer each question or the whole survey, if they changed any answers to their questions or went back to previous pages, if they opened and closed the survey without completing it (i.e., partial completion), how they completed it (e.g., smartphone app, web browser, phone, paper), etc.

Participant rights The ethical obligations the researcher has to the participant in any research study.

Persona An exemplar of a particular user type designed to bring the user profile to life during product development.

Pilot test Study to evaluate the questions in your survey, interview, evaluation, or the methodology of your study to ensure you are measuring what you want to measure. It is typically done with a few individuals similar to your sample. It is an important step to ensure a successful study.

Policies Guidelines set forth by your company, often with the goal of ensuring employees do not come close to breaking laws or just to enforce good business practices. Policies may vary by company.

Population All of the customers or users of your current or potential product or service.

Portable lab Set of equipment taken to the field to conduct a study (e.g., laptop, video recorder).

Power analysis Statistical method that allows us to determine the sample size required to detect an effect of a given size with a given degree of confidence.
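
As a sketch (assuming Python with statsmodels), the sample size per group needed to detect a medium effect (Cohen's d = 0.5) with 80% power at alpha = .05 in a two-sample t-test:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"participants needed per group: {n_per_group:.0f}")  # roughly 64
```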

Prestige response bias The participant wants to impress the facilitator and therefore provides answers that enhance his or her image.

Price sensitivity model The degree to which the price of a product affects consumers’ purchasing behaviors.

Primacy effect The tendency for the first variable presented to participants (e.g., first item in a list, first prototype shown) to influence a participant’s choice.

Primary users Those individuals who work regularly or directly with the product.

Privacy An individual’s right to prevent others from knowing his or her personally identifying information.

Probability sampling Recruiting method that ensures everyone in your population has an equal chance (probability) of being selected to participate in your study.

Procedural knowledge Stored information that consists of knowledge of how to do things.

Process analysis A method by which a participant explains step-by-step how something is done or how to do something.

Product development life cycle The duration and process of a product from idea to release.

Production blocking In verbal brainstorming, people are asked to speak one at a time. By having to wait in a queue to speak, ideas are sometimes lost or suppressed. Attention is also shifted from listening to other speakers toward trying to remember one’s own idea.

Progress indicators Online survey widget to let respondents know how far along they are and how many more questions they have left to complete.

Protocol A script that outlines all procedures you will perform as a study moderator and the order in which you will carry out these procedures. It acts as a checklist for all of the session steps.

Proxy Server that acts as a mediator between a user’s computer and the Internet so that a company can ensure security, administrative control, and caching service.

Purposive sampling Also known as selective or subjective sampling, this nonprobability sampling technique involves recruiting a nonrepresentative sample of your larger population in order to serve some purpose. This is typically done with you having a specific group in mind that you wish to study (e.g., small business owners).

Q

Qualitative data Represents verbal or narrative pieces of data. These types of data are collected through focus groups, interviews, open-ended questionnaire items, and other less structured situations.

Quantitative data Numeric information that includes things like personal income, amount of time, or a rating of an opinion on a scale from 1 to 5. Even things that you do not think of as quantitative, like feelings, can be collected using numbers if you create scales to measure them.

R

Random Digit Dialing (RDD) Survey recruiting method in which telephone numbers are selected at random from a subset of households.

Random sampling Each member within a population has an equal chance of being selected for a study.

Range The maximum value minus the minimum value. It indicates the spread between the two extremes.

Ranking This type of scale question gives participants a variety of options and asks them to provide a rank for each one. Unlike the rating scale question, the respondent is allowed to use each rank only once.

Rating scale Survey question that presents users with an item and asks them to select from a number of choices along a continuum. The Likert scale is the most commonly used rating scale.

Reliable/reliability Reliability is the extent to which the test or measurement yields the same approximate results when used repeatedly under the same conditions.

Research ethics The obligation of the researcher to protect participants from harm, ensure confidentiality, provide benefit, and secure informed consent.

Response bias In any study in which responses of some sort (e.g., answers to set questions) are required of participants, response bias exists if, independently of the effect of any experimental manipulation, the participants are more likely to respond in one way than in another (e.g., more likely, in a multiple-choice task, to choose option A than option B).

Response distribution For each question, the degree of skew you expect in the responses.

Retrospective interview An interview that is done after an event has taken place.

Retrospective think-aloud Participants are shown a video of their session and asked to tell the moderator what they were thinking at the time.

S

Sample A portion of the population selected to be representative of the population as a whole. Since it is typically unfeasible to collect data from the entire population of users, you must select a smaller subset.

Sample size The number of participants in your study.

Sampling bias The tendency of a sample to exclude some members of the sampling population and overrepresent others.

Sampling plan A list of days/times to observe users. This should include days/times when you anticipate key events (e.g., the day before Thanksgiving, or bad weather at an airport), as well as “normal” days.

Satisficing Decision-making strategy in which participants scan the available alternatives (typically options in a survey question) until they find the minimally acceptable option. This is contrasted with optimal decision making, which attempts to find the best alternative available.

Scenario A story about a user. It provides a setting, has actors, objectives or goals, a sequence of events, and closes with a result. It is used to illustrate how an end user works or behaves.

Screen-capture software Software that automatically records a computer desktop or other digital input (e.g., mobile device via HDMI cable).

Screener Survey that captures data about potential participants in order to select which individuals to include in a study based on certain criteria.

Secondary users Individuals that utilize the product infrequently or through an intermediary.

Selection bias The selection of research participants for a study that is nonrandom and thus results in a nonrepresentative sample. Nonresponse bias and self-selection bias are two forms of selection bias.

Self-report A form of data collection where participants are asked to respond and/or describe a feeling or behavior about themselves. These reports represent participants’ own perceptions, but are subject to limitations such as human memory.

Self-selection bias Bias that results because a certain type of person has volunteered or “self-selected” to be a part of your study (e.g., those people who have a special interest in the topic, those who really just want your incentive, those who have a lot of spare time on their hands, etc.). If those who volunteered differ from those who did not, there will be bias in your sample.

Semi-structured interview The interviewer may begin with a set of questions to answer (closed- and open-ended) but deviate from that set of questions from time to time. It does not have quite the same conversational approach as an unstructured interview.

Significance testing Statistical methods used to determine whether a claim about a population from which a sample has been drawn is the result of chance alone or the effect of the variable under study. Tests for statistical significance tell us the probability that a relationship we think exists is due only to random chance.

Significant event A specific experience in a participant’s past that either exemplifies specific experiences or that is particularly noteworthy.

Similarity matrix A matrix of scores that represents the similarity between a number of data points. Each element of the similarity matrix contains a measure of similarity between two of the data points.

Simplification bias If the researcher is a novice to the domain, he or she may have a tendency to conceptually simplify the expert user’s problem-solving strategies while observing the expert. This is not done intentionally, of course, but the researcher does not have the complex mental model of the expert.

Snowball sampling Nonprobability sampling method in which participants in one study are asked to recruit participants for future studies from their acquaintances.

Social desirability bias Participants provide responses to your questions that they believe are more socially desirable or acceptable than the truth.

Social loafing The tendency for individuals to reduce the effort that they make toward some task when working together with others. The larger the group, the larger the effect.

Social sentiment analysis Analysis of text posted by customers or users on social media, online forums, product review sites, blogs, etc.

Sponsor-blind study Study in which participants are not informed what organization is paying for the study.

Stakeholder An individual or group with an interest (or stake) in your user requirements activity and its results. Stakeholders typically influence the direction of the product (e.g., product managers, developers, business analysts, etc.).

Standard deviation A measure of dispersion that captures the deviation of the data from the mean. The larger the standard deviation, the more varied the participants’ responses.

Statistically significant Describes results that are unlikely to have occurred by chance alone.

Storyboards Illustrations of a particular task or a “day-in-the-life” of the user, using representative images to convey a task/scenario/story. Merge data across your users to develop a generic, representative description.

Straight-lining In survey completion, participants selecting the same choice for all questions rather than reading and considering each option individually.

Structured data Data that reside in a fixed field within a record or file. This includes data contained in relational databases and spreadsheets.

Subject matter expert Domain expert who is an authority on a given topic or domain.

Summative evaluation Studies typically done toward the end of the product development life cycle with high-fidelity prototypes or the actual final product to evaluate it against a set of metrics (e.g., time on task, success rate). This can be done via in-person or remote usability testing or live experiments.

Surrogate products These are products that may or may not compete directly with your product. They have features similar to your product’s and should be studied to learn about their strengths and weaknesses.

Surveys Data collection technique in which a sample of the population is asked to self-report data via a questionnaire, either on paper or online, or is interviewed in person or over the phone, with the researcher completing the questionnaire for the respondent.

Synchronous Testing and/or communication that requires a participant and a researcher to be working together at the same chronological time.

Synergy An idea from one participant positively influences another participant, resulting in an additional idea that would not have been generated without the initial idea.

T

Task allocation The process of determining who or what should be responsible for completing various tasks in a system. This may be dividing tasks among different humans or between human and machine based on specific criteria.

Telescoping People have a tendency to compress time. So, if you are asking about events that happened in the last six months, people may unintentionally include events that happened in the last nine months. Overreporting of events will result.

Tertiary users Those who are affected by the system or the purchasing decision makers.

Think-aloud protocol A technique used during usability activities. The participant is asked to vocalize his/her thoughts, feelings, and opinions while working or interacting with the product.

Transfer of training Transfer of learned skills from one situation to another. You are leveraging the users’ current skill set so they do not have to learn everything new to use your product.

Translation bias Expert users will attempt to translate their knowledge so that the researcher can understand it. The more experts translate, the more there is the potential for them to oversimplify and distort their knowledge/skills/etc.

Triangulation of data Combining data from multiple methods to develop a holistic picture of the user or domain.

Trusted testers A set of vetted evaluators given early access to a product or service to provide feedback. This nonprobability sampling recruiting method may not be representative of your broader population but is used when there are confidentiality concerns.

t-Test Statistical test of two population means.
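
For example, an independent-samples t-test on invented task times for two designs (a sketch using SciPy):

```python
from scipy import stats

times_a = [42, 51, 39, 47, 44, 48]  # invented seconds on task, design A
times_b = [55, 60, 52, 58, 61, 57]  # design B

t_stat, p_value = stats.ttest_ind(times_a, times_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```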

Two-way mirror A panel of glass that can be seen through from one side but is a mirror on the other.

U

Unipolar construct Variables or units that go from nothing to a lot (e.g., Not at all useful to Extremely useful).

Universal design A design approach that enables everyone to access and use your product or service, regardless of age, abilities, or status in life.

Unstructured data Refers to information that does not reside in a traditional row-column database. As you might expect, it is the opposite of structured data—the data stored in fields in a database.

Usability The effectiveness, efficiency, and satisfaction with which users can achieve tasks when using a product. A usable product is easy to learn and remember, efficient, visually pleasing, and pleasant to use. It enables users to recover quickly from errors and accomplish their tasks with ease.

Usability inspection method Methods that leverage experts (e.g., people with experience in usability/user research, subject matter experts) rather than involve actual end users to evaluate your product or service against a set of specific criteria. These are quick and cheap ways of catching the “low-hanging fruit” or obvious usability issues throughout the product development cycle (e.g., heuristic evaluation, cognitive walkthrough).

Usability lab Space dedicated to conducting usability studies. Typically contains recording equipment and the product you wish to get feedback on. It may contain a two-way mirror to allow the product team to view the study from another room.

Usability testing The systematic observation of end users attempting to complete a task or set of tasks with your product based on representative scenarios.

User experience The study of a person’s behaviors, attitudes, and emotions about using a particular product, system, or service.

User profile A list of characteristics and skills that describe the end user. It should provide the range of characteristics or skill levels that a typical end user may fall in, as well as the most common ones.

User requirements The features/attributes your product should have or how it should perform from the users’ perspective.

User-centered design (UCD) A product development approach that focuses on the end users of a product. The philosophy is that the product should fit the user, rather than making the user fit the product. This is accomplished by employing techniques, processes, and methods throughout the product life cycle that focus on the user.

V

Vague questions Questions that include imprecise terms like “rarely,” “sometimes,” “usually,” “few,” “some,” or “most.” Individuals can interpret these terms in different ways, affecting their answers and your interpretation of the results.

Valid/validity The degree to which a question or task actually measures the desired trait.

Visit summary template A standardized survey or worksheet used in field studies. It is given to each investigator to complete at the end of each visit. This helps everyone get his or her thoughts on paper while fresh in his or her mind. It also speeds data analysis and avoids reporting only odd or funny anecdotal data.

W

Warm-up activity Activity for getting participants comfortable at the beginning of a study.

Web analytics Measurement and analysis of web data to assess the effectiveness of a website or service.
