Key points

This chapter contains descriptions of 24 standardized questionnaires designed to assess perceptions of usability or related constructs (e.g., satisfaction or usefulness).
Those questionnaires fall into four broad categories: post-study, post-task, website, and other.
Standardized post-study questionnaires include the QUIS, SUMI, PSSUQ, SUS, USE, and UMUX.
Standardized post-task questionnaires include the ASQ, ER, SEQ, SMEQ, and UME.
All of these post-study and post-task questionnaires are of potential value to usability practitioners because psychometric evaluation has shown them to have significant reliability, validity, and sensitivity.
Head-to-head comparisons of the methods indicate that the most sensitive post-study questionnaire is the SUS, followed by the PSSUQ; the most sensitive post-task questionnaire is the SMEQ, followed by the SEQ.
Unless there is a compelling reason to use one of the other questionnaires, our recommendation is to use the SUS for post-study and SEQ or SMEQ for post-task assessments.
Because websites are increasingly used for commercial transactions, standardized usability questionnaires for websites include items that assess attributes such as trust and service quality.
Recent research indicates that the common practice of mixing the tone (positive and negative) of items in standardized usability questionnaires is more likely to harm rather than benefit the quality of measurement.
Recent research also indicates that minor adjustments to the wording of items in standardized usability questionnaires do not appear to affect the resulting scores (although extreme changes can).
The scores from standardized usability measurements do not have any inherent meaning, but they are useful for comparisons—either between products or conditions in usability studies or against normative databases.
Commercial usability questionnaires that provide comparison with normative databases are the SUMI, WAMMI, and SUPR-Q.
For noncommercial usability questionnaires, some normative information in the public domain is available for the PSSUQ and CSUQ (Lewis, 2002), and researchers have recently published norms for the SUS (Bangor et al., 2008, 2009; Sauro, 2011a).
Questionnaires from the market research literature that may be of interest to usability practitioners are the ACSI, NPS, CxPi, and TAM scales (Perceived Usefulness and Perceived Ease-of-Use).

Chapter review questions

1. You’ve run a study using the PSSUQ (standard Version 3), with the results shown in Table 8.10. What are each participant’s overall and subscale scores, and what are the mean overall and subscale scores for the study?
2. Given the published information about normative patterns in responses to the PSSUQ, are you surprised by the mean score of Item 7 relative to the other items for the data in Table 8.10? What about the relative values of InfoQual and IntQual? Based on the typical values for the PSSUQ, does this product seem to be above or below average in perceived usability?
3. Suppose you’ve run a study using the standard version of the SUS, with the following results (Table 8.11). What are the SUS scores for each participant and their average for the product?
4. Given the published information about typical SUS scores, is the average SUS for the data in Table 8.11 generally above or below average? What grade would it receive using the Sauro-Lewis SUS grading curve? If you computed a 90% confidence interval, what would the grade range be? If these participants also responded to the NPS Likelihood to Recommend item, are any of them likely to be Promoters? Using those estimated Likelihood to Recommend ratings, what is the estimated NPS?

Table 8.10

Sample PSSUQ Data for Review Question 1 (a period indicates a missing response)

                   Participant
            1    2    3    4    5    6
Item 1      1    2    2    2    5    1
Item 2      1    2    2    1    5    1
Item 3      1    2    3    1    4    1
Item 4      1    1    2    1    4    1
Item 5      1    1    2    1    5    1
Item 6      1    1    4    1    4    3
Item 7      1    2    .    1    6    1
Item 8      3    1    .    1    6    1
Item 9      3    1    1    1    5    1
Item 10     1    3    2    1    4    1
Item 11     2    2    2    1    4    1
Item 12     1    1    2    1    4    1
Item 13     1    1    2    2    4    1
Item 14     1    1    2    3    4    1
Item 15     1    1    3    1    4    1
Item 16     1    1    2    1    4    1

Table 8.11

Sample SUS Data for Review Question 3

                   Participant
            1    2    3    4    5
Item 1      3    2    5    4    5
Item 2      1    1    2    2    1
Item 3      4    4    4    5    5
Item 4      1    3    1    1    1
Item 5      3    4    4    4    5
Item 6      1    2    2    2    1
Item 7      4    3    4    3    5
Item 8      1    2    1    1    1
Item 9      4    4    5    3    5
Item 10     2    1    2    3    1

Answers to chapter review questions

1. Table 8.12 shows the overall and subscale PSSUQ scores for each participant and the mean overall and subscale scores for the study (averaged across participants). Although there are missing data, only two cells in the table are empty, so it's reasonable to simply average the available data for each participant (for a scripted version of this computation, see the sketch following Table 8.12).
2. To answer these questions, refer to Table 8.2 (PSSUQ Version 3 norms). Scores for Item 7 generally tend to be higher (poorer) than those for the other items, but in this set of data the mean item scores are fairly uniform, ranging from 1.67 to 2.40. At 2.20, Item 7 is one of the higher-scoring items, but not by as wide a margin as usual, which is a bit surprising. The same is true for the relative pattern of the subscales: typically, InfoQual is about half a point higher than IntQual, but for these data the difference is only about 0.12. Regarding overall perceived usability, the mean Overall score in the norms is about 2.82, so with an Overall score of 1.97 this product appears to be above average, at least in reference to the products evaluated to produce the norms in Table 8.2. To determine whether it is significantly better than average, you would need to compute a confidence interval on the study data to see whether the interval includes or excludes the benchmark. It turns out that the 95% confidence interval for Overall ranges from about 0.62 to 3.33, a fairly wide interval due to the small sample size and relatively high variability, so even though the mean is lower (better) than the norm, the interval is consistent with the norm: this Overall score is not statistically significantly different from 2.82.
3. Table 8.13 shows the recoded item values and SUS scores for each participant, along with the mean SUS score for the study averaged across participants (for a scripted version of this computation, see the sketch following Table 8.13).
4. Based on the data collected by Sauro (2011a), the mean SUS score across a large number of usability studies is 68, so the mean of 82 from this study is above average. On the Sauro-Lewis SUS grading curve, scores between 80.8 and 84.0 get an A (Table 8.5). A 90% confidence interval on these data ranges from about 71 to 93, so the corresponding grade range is from C to A+ (at least you know it's probably not a D). Because the confidence interval does not include 68, the result is significantly above average (p < 0.10). If these participants also responded to the NPS Likelihood to Recommend item, only one of them is likely to be a Promoter (responding with a 9 or 10 to the Likelihood to Recommend question). The simplified regression equation for estimating Likelihood to Recommend from SUS is LTR = SUS/10, so the predicted Likelihood to Recommend responses for these five participants are, respectively, 8, 7, 8, 7, and 10. Given these LTR scores, there are 0% Detractors and 20% (1/5) Promoters, for an estimated NPS of 20% (%Promoters minus %Detractors).

Table 8.12

Answers for Review Question 1

                     Participant
             1     2     3     4     5     6   Mean
Item 1       1     2     2     2     5     1   2.17
Item 2       1     2     2     1     5     1   2.00
Item 3       1     2     3     1     4     1   2.00
Item 4       1     1     2     1     4     1   1.67
Item 5       1     1     2     1     5     1   1.83
Item 6       1     1     4     1     4     3   2.33
Item 7       1     2     .     1     6     1   2.20
Item 8       3     1     .     1     6     1   2.40
Item 9       3     1     1     1     5     1   2.00
Item 10      1     3     2     1     4     1   2.00
Item 11      2     2     2     1     4     1   2.00
Item 12      1     1     2     1     4     1   1.67
Item 13      1     1     2     2     4     1   1.83
Item 14      1     1     2     3     4     1   2.00
Item 15      1     1     3     1     4     1   1.83
Item 16      1     1     2     1     4     1   1.67
Overall   1.31  1.44  2.21  1.25  4.50  1.13   1.97
SysUse    1.00  1.50  2.50  1.17  4.50  1.33   2.00
InfoQual  1.83  1.67  1.75  1.00  4.83  1.00   2.01
IntQual   1.00  1.00  2.33  2.00  4.00  1.00   1.89
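
For readers who want to script the computation, the following minimal Python sketch (our choice of Python with numpy and scipy is a convenience, not anything prescribed by the PSSUQ) reproduces the scores in Table 8.12 from the raw responses in Table 8.10. It averages only the available responses for each participant, uses the Version 3 subscale assignments (SysUse: Items 1–6; InfoQual: Items 7–12; IntQual: Items 13–15; Overall: Items 1–16), and computes the 95% confidence interval discussed in the answer to Question 2.

    import numpy as np
    from scipy import stats

    # Rows = Items 1-16, columns = Participants 1-6 (Table 8.10).
    # np.nan marks Participant 3's two missing responses (Items 7 and 8).
    pssuq = np.array([
        [1, 2, 2, 2, 5, 1], [1, 2, 2, 1, 5, 1], [1, 2, 3, 1, 4, 1],
        [1, 1, 2, 1, 4, 1], [1, 1, 2, 1, 5, 1], [1, 1, 4, 1, 4, 3],
        [1, 2, np.nan, 1, 6, 1], [3, 1, np.nan, 1, 6, 1],
        [3, 1, 1, 1, 5, 1], [1, 3, 2, 1, 4, 1], [2, 2, 2, 1, 4, 1],
        [1, 1, 2, 1, 4, 1], [1, 1, 2, 2, 4, 1], [1, 1, 2, 3, 4, 1],
        [1, 1, 3, 1, 4, 1], [1, 1, 2, 1, 4, 1],
    ])

    # Per-participant scores: nanmean averages only the available items.
    scales = {
        "Overall": pssuq,            # Items 1-16
        "SysUse": pssuq[0:6],        # Items 1-6
        "InfoQual": pssuq[6:12],     # Items 7-12
        "IntQual": pssuq[12:15],     # Items 13-15
    }
    for name, items in scales.items():
        scores = np.nanmean(items, axis=0)
        print(name, scores.round(2), "mean =", round(scores.mean(), 2))

    # 95% t-based confidence interval for the Overall mean, to compare with
    # the norm of 2.82 (Table 8.2); tiny rounding differences from the
    # values quoted in the text are expected.
    overall = np.nanmean(pssuq, axis=0)
    n = len(overall)
    ci = stats.t.interval(0.95, df=n - 1, loc=overall.mean(),
                          scale=overall.std(ddof=1) / np.sqrt(n))
    print(f"95% CI for Overall: {ci[0]:.2f} to {ci[1]:.2f}")

Running the sketch prints the participant and mean scores from Table 8.12 and a confidence interval of roughly 0.6 to 3.3, which contains the PSSUQ Overall norm of 2.82, consistent with the conclusion in the answer to Question 2.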

Table 8.13

Answers for Review Question 3

                    Participant
             1     2     3     4     5    Mean
Item 1       2     1     4     3     4
Item 2       4     4     3     3     4
Item 3       3     3     3     4     4
Item 4       4     2     4     4     4
Item 5       2     3     3     3     4
Item 6       4     3     3     3     4
Item 7       3     2     3     2     4
Item 8       4     3     4     4     4
Item 9       3     3     4     2     4
Item 10      3     4     3     2     4
Overall     80    70    85    75   100   82.00
Pred-LTR     8     7     8     7    10   Grade: A
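
SUS scoring can be scripted the same way. In the sketch below (again assuming Python with numpy and scipy), odd-numbered items are recoded as the raw response minus 1 and even-numbered items as 5 minus the raw response; the recoded values are summed and multiplied by 2.5 to produce each participant's SUS score. The script then computes the 90% confidence interval, the predicted Likelihood to Recommend (LTR = SUS/10), and the estimated NPS from the answer to Question 4.

    import numpy as np
    from scipy import stats

    # Rows = Items 1-10, columns = Participants 1-5 (raw responses, Table 8.11).
    sus_raw = np.array([
        [3, 2, 5, 4, 5], [1, 1, 2, 2, 1], [4, 4, 4, 5, 5],
        [1, 3, 1, 1, 1], [3, 4, 4, 4, 5], [1, 2, 2, 2, 1],
        [4, 3, 4, 3, 5], [1, 2, 1, 1, 1], [4, 4, 5, 3, 5],
        [2, 1, 2, 3, 1],
    ])

    # Standard SUS recoding: odd-numbered items (rows 0, 2, ...) contribute
    # the response minus 1; even-numbered items contribute 5 minus the response.
    recoded = np.empty_like(sus_raw)
    recoded[0::2] = sus_raw[0::2] - 1
    recoded[1::2] = 5 - sus_raw[1::2]

    sus = recoded.sum(axis=0) * 2.5      # one SUS score per participant
    print("SUS scores:", sus, "mean =", sus.mean())   # 80 70 85 75 100, mean 82

    # 90% t-based confidence interval (about 71 to 93 for these data).
    n = len(sus)
    ci = stats.t.interval(0.90, df=n - 1, loc=sus.mean(),
                          scale=sus.std(ddof=1) / np.sqrt(n))
    print(f"90% CI: {ci[0]:.1f} to {ci[1]:.1f}")

    # Predicted Likelihood to Recommend (LTR = SUS/10), truncated to whole
    # points to match the Pred-LTR row of Table 8.13, then the estimated NPS.
    ltr = np.floor(sus / 10)
    promoters = np.sum(ltr >= 9)    # LTR of 9 or 10
    detractors = np.sum(ltr <= 6)   # LTR of 0 through 6
    print("Pred-LTR:", ltr, f"NPS = {(promoters - detractors) / n:.0%}")

The printed scores match Table 8.13 (80, 70, 85, 75, and 100, mean 82), the 90% confidence interval runs from about 71 to 93, and with one Promoter and no Detractors among the five respondents the estimated NPS is 20%.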

References

Aladwani AM, Palvia PC. Developing and validating an instrument for measuring user perceived Web quality. Inform. Manag. 2002;39:467–476.

Albert, W., Dixon, E., 2003. Is this what you expected? The use of expectation measures in usability testing. Paper presented at the Usability Professionals Association Annual Conference, UPA, Scottsdale, AZ.

Anastasi A. Psychological Testing. New York, NY: Macmillan; 1976.

Andrich D. Application of a psychometric rating model to ordered categories which are scored with successive integers. Appl. Psychol. Meas. 1978;2(4):581–594.

ANSI, 2001. Common Industry Format for Usability Test Reports (ANSI-NCITS 354-2001). Washington, DC: Author.

Azzara CV. Questionnaire Design for Business Research. Mustang, OK: Tate Publishing; 2010.

Bangor A, Kortum PT, Miller JT. An empirical evaluation of the System Usability Scale. Int. J. Hum.-Comput. Interact. 2008;24:574–594.

Bangor A, Kortum PT, Miller JT. Determining what individual SUS scores mean: adding an adjective rating scale. J. Usability Stud. 2009;4(3):114–123.

Bangor, A., Joseph, K., Sweeney-Dillon, M., Stettler, G., Pratt, J., 2013. Using the SUS to help demonstrate usability’s value to business goals. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, HFES, Santa Monica, CA, pp. 202–205.

Bargas-Avila JA, Lötscher J, Orsini S, Opwis K. Intranet Satisfaction Questionnaire: development and validation of a questionnaire to measure user satisfaction with the Intranet. Comput. Hum. Behav. 2009;25:1241–1250.

Barnette JJ. Effects of stem and Likert response option reversals on survey internal consistency: if you feel the need, there is a better alternative to using those negatively worded stems. Educ. Psychol. Meas. 2000;60:361–370.

Benedek, J., Miner, T., 2002. Measuring desirability: new methods for evaluating desirability in a usability lab setting. Paper presented at the Usability Professionals Association Annual Conference, UPA, Orlando, FL.

Blažica B, Lewis JR. A Slovene translation of the System Usability Scale (SUS). Int. J. Hum.-Comput. Interact. 2015;31:112–117.

Bond TG, Fox CM. Applying the Rasch Model: Fundamental Measurement in the Human Sciences. Mahwah, NJ: Lawrence Erlbaum; 2001.

Borsci S, Federici S, Lauriola M. On the dimensionality of the System Usability Scale: a test of alternative measurement models. Cogn. Process. 2009;10:193–197.

Borsci S, Federici S, Bacci S, Gnaldi M, Bartolucci F. Assessing user satisfaction in the era of user experience: comparison of the SUS, UMUX and UMUX-LITE as a function of product experience. Int. J. Hum.-Comput. Interact. 2015;31:484–495.

Brace I. Questionnaire Design: How to Plan, Structure, and Write Survey Material for Effective Market Research. second ed. London, UK: Kogan Page Limited; 2008.

Brooke J. SUS: A ‘quick and dirty’ usability scale. In: Jordan P, Thomas B, Weerdmeester B, eds. Usability Evaluation in Industry. London, UK: Taylor & Francis; 1996:189–194.

Brooke J. SUS: a retrospective. J. Usability Stud. 2013;8(2):29–40.

Cavallin H, Martin WM, Heylighen A. How relative absolute can be: SUMI and the impact of the nature of the task in measuring perceived software usability. Artif. Intell. Soc. 2007;22:227–235.

Cheung GW, Rensvold RB. Assessing extreme and acquiescence response sets in cross-cultural research using structural equations modeling. J. Cross-Cult. Psychol. 2000;31:187–212.

Chin, J.P., Diehl, V.A., Norman, K.L., 1988. Development of an instrument measuring user satisfaction of the human–computer interface. In: Proceedings of CHI 1988, ACM, Washington, DC, pp. 213–218.

Cordes, R.E., 1984a. Software ease of use evaluation using magnitude estimation. In: Proceedings of the Human Factors Society, HFS, Santa Monica, CA, pp. 157–160.

Cordes, R.E., 1984b. Use of magnitude estimation for evaluating product ease of use (Tech. Report 82.0135), IBM, Tucson, AZ.

Cortina JM. What is coefficient alpha? An examination of theory and applications. J. Appl. Psychol. 1993;78(1):98–104.

Courage C, Baxter K. Understanding Your Users: A Practical Guide to User Requirements. San Francisco, CA: Morgan Kaufmann; 2005.

Cronbach LJ. Response sets and test validity. Educ. Psychol. Meas. 1946;6:475–494.

Davis D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quart. 1989;13(3):319–339.

Embretson SE, Reise SP. Item Response Theory for Psychologists. Mahwah, NJ: Lawrence Erlbaum Associates, Inc; 2000.

Erdinç O, Lewis JR. Psychometric evaluation of the T-CSUQ: the Turkish version of the Computer System Usability Questionnaire. Int. J. Hum.-Comput. Interact. 2013;29:319–323.

Finstad K. The System Usability Scale and non-native English speakers. J. Usability Stud. 2006;1(4):185–188.

Finstad K. Response interpolation and scale sensitivity: evidence against 5-point scales. J. Usability Stud. 2010;5(3):104–110.

Finstad K. The usability metric for user experience. Interact. Comput. 2010;22:323–327.

Finstad K. Response to commentaries on “The Usability Metric for User Experience”. Interact. Comput. 2013;25:327–330.

Grier, R.A., Bangor, A., Kortum, P., Peres, S.C., 2013. The System Usability Scale: beyond standard usability testing. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, HFES, Santa Monica, CA, pp. 187–191.

Grimm SD, Church AT. A cross-cultural study of response biases in personality measures. J. Res. Pers. 1999;33:415–441.

Hassenzahl M. The effect of perceived hedonic quality on product appealingness. Int. J. Hum.-Comput. Interact. 2001;13(4):481–499.

Hassenzahl M. The interplay of beauty, goodness, and usability in interactive products. Hum.-Comput. Interact. 2004;19:319–349.

Hassenzahl, M., Platz, A., Burmester, M., Lehner, K., 2000. Hedonic and ergonomic quality aspects determine a software’s appeal. In: Proceedings of CHI 2000, ACM, The Hague, The Netherlands, pp. 201–208.

Hassenzahl M, Wiklund-Engblom A, Bengs A, Hägglund S, Diefenbach S. Experience-oriented and product-oriented evaluation: psychological need fulfillment, positive affect, and product perception. Int. J. Hum.-Comput. Interact. 2015;31:530–544.

Hollemans, G., 1999. User satisfaction measurement methodologies: extending the user satisfaction questionnaire. In: Proceedings of HCI International 1999, Lawrence Erlbaum, Mahwah, NJ, pp. 1008–1012.

Hornbæk K. Current practice in measuring usability: challenges to usability studies and research. Int. J. Hum.-Comput. Stud. 2006;64(2):79–102.

Hornbæk, K., Law, E.L., 2007. Meta-analysis of correlations among usability measures. In: Proceedings of CHI 2007, ACM, San Jose, CA, pp. 617–626.

Ibrahim AM. Differential responding to positive and negative items: the case of a negative item in a questionnaire for course and faculty evaluation. Psychol. Rep. 2001;88:497–500.

ISO, 1998. Ergonomic requirements for office work with visual display terminals (VDTs), Part 11, Guidance on usability (ISO 9241-11:1998E). Geneva, Switzerland: Author.

Joyce M, Kirakowski J. Measuring attitudes towards the Internet: the General Internet Attitude Scale. Int. J. Hum.-Comput. Interact. 2015;31:506–517.

Karn, K., Little, A., Nelson, G., Sauro, J., Kirakowski, J., Albert, W., Norman, K., 2008. Subjective ratings of usability: reliable or ridiculous? Panel presentation at the Usability Professionals Association Annual Conference, UPA, Baltimore, MD.

Keiningham TL, Cooil B, Andreassen TW, Aksoy L. A longitudinal examination of Net Promoter and firm revenue growth. J. Market. 2007;71:39–51.

Kirakowski J. The software usability measurement inventory: background and usage. In: Jordan P, Thomas B, Weerdmeester B, eds. Usability Evaluation in Industry. London, UK: Taylor & Francis; 1996:169–178. Available from: www.ucc.ie/hfrg/questionnaires/sumi/index.html.

Kirakowski, J., Cierlik, B., 1998. Measuring the usability of websites. In: Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, HFES, Santa Monica, CA, pp. 424–428. Available from: www.wammi.com.

Kirakowski J, Corbett M. SUMI: the Software Usability Measurement Inventory. Br. J. Educ. Technol. 1993;24:210–212.

Kirakowski J, Dillon A. The Computer User Satisfaction Inventory (CUSI): Manual and Scoring Key. Cork, Ireland: Human Factors Research Group, University College of Cork; 1988.

Kortum P, Bangor A. Usability ratings for everyday products measured with the System Usability Scale. Int. J. Hum.-Comput. Interact. 2013;29:67–76.

Kortum, P., Johnson, M., 2013. The relationship between levels of user experience with a product and perceived system usability. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, HFES, Santa Monica, CA, pp. 197–201.

Kortum P, Peres SC. The relationship between system effectiveness and subjective usability scores using the System Usability Scale. Int. J. Hum.-Comput. Interact. 2014;30:575–584.

Kortum P, Sorber M. Measuring the usability of mobile applications for phones and tablets. Int. J. Hum.-Comput. Interact. 2015;31:518–529.

Kuniavsky M. Observing the User Experience: A Practitioner’s Guide to User Research. San Francisco, CA: Morgan Kaufmann; 2003.

LaLomia MJ, Sidowski JB. Measurements of computer satisfaction, literacy, and aptitudes: a review. Int. J. Hum.-Comput. Interact. 1990;2:231–253.

Landauer TK. Behavioral research methods in human–computer interaction. In: Helander M, Landauer TK, Prabhu P, eds. Handbook of Human–Computer Interaction. second ed. Amsterdam, Netherlands: Elsevier; 1997:203–227.

Lascu D, Clow KE. Web site interaction satisfaction: scale development consideration. J. Internet Commer. 2008;7(3):359–378.

Lascu D, Clow KE. Website interaction satisfaction: a reassessment. Interact. Comput. 2013;25:307–311.

Lewis, J.R., 1990a. Psychometric evaluation of a post-study system usability questionnaire: The PSSUQ (Tech. Report 54.535), International Business Machines Corp., Boca Raton, FL.

Lewis, J.R., 1990b. Psychometric evaluation of an after-scenario questionnaire for computer usability studies: The ASQ (Tech. Report 54.541), International Business Machines Corp., Boca Raton, FL.

Lewis JR. Psychometric evaluation of an after-scenario questionnaire for computer usability studies: the ASQ. SIGCHI Bull. 1991;23:78–81.

Lewis, J.R., 1992. Psychometric evaluation of the Post-Study System Usability Questionnaire: the PSSUQ. In: Proceedings of the Human Factors Society 36th Annual Meeting, Human Factors Society, Santa Monica, CA, pp. 1259–1263.

Lewis JR. Multipoint scales: mean and median differences and observed significance levels. Int. J. Hum.-Comput. Interact. 1993;5:382–392.

Lewis JR. IBM computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. Int. J. Hum.-Comput. Interact. 1995;7:57–78.

Lewis, J.R., 1999. Tradeoffs in the design of the IBM computer usability satisfaction questionnaires. In: Proceedings of HCI International 1999, Lawrence Erlbaum, Mahwah, NJ, pp. 1023–1027.

Lewis JR. Psychometric evaluation of the PSSUQ using data from five years of usability studies. Int. J. Hum.-Comput. Interact. 2002;14:463–488.

Lewis JR. Practical Speech User Interface Design. Boca Raton, FL: Taylor & Francis; 2011.

Lewis, J.R., 2012a. Predicting net promoter scores from system usability scale scores. Available from: www.measuringu.com/blog/nps-sus.php.

Lewis JR. Usability testing. In: Salvendy G, ed. Handbook of Human Factors and Ergonomics. fourth ed. New York, NY: John Wiley; 2012:1267–1312.

Lewis JR. Critical review of “Intranet Satisfaction Questionnaire: development and validation of a questionnaire to measure user satisfaction with the intranet”. Interact. Comput. 2013;25:299–301.

Lewis JR. Critical review of “The Usability Metric for User Experience”. Interact. Comput. 2013;25:320–324.

Lewis JR, Mayes DK. Development and psychometric evaluation of the Emotional Metric Outcomes (EMO) questionnaire. Int. J. Hum.-Comput. Interact. 2014;30:685–702.

Lewis JR, Sauro J. The factor structure of the System Usability Scale. In: Kurosu M, ed. Human Centered Design, HCII 2009. Heidelberg, Germany: Springer-Verlag; 2009:94–103.

Lewis JR, Henry SC, Mack RL. Integrated office software benchmarks: a case study. In: Diaper D, ed. Proceedings of the 3rd IFIP Conference on Human–Computer Interaction, INTERACT ’90. Cambridge, UK: Elsevier Science; 1990:337–343.

Lewis, J.R., Utesch, B.S., Maher, D.E., 2013. UMUX-LITE—when there’s no time for the SUS. In: Proceedings of CHI 2013, ACM, Paris, France, pp. 2099–2102.

Lewis JR, Brown J, Mayes DK. Psychometric evaluation of the EMO and the SUS in the context of a large-sample unmoderated usability study. Int. J. Hum.-Comput. Interact. 2015;31:545–553.

Lewis JR, Utesch BS, Maher DE. Measuring perceived usability: the SUS, UMUX-LITE, and AltUsability. Int. J. Hum.-Comput. Interact. 2015;31:496–505.

Loiacono ET, Watson RT, Goodhue DL. WEBQUAL: a measure of website quality. Market. Theory Appl. 2002;13(3):432–438.

Lucey, N.M., 1991. More than meets the I: User-satisfaction of computer systems (unpublished thesis for Diploma in Applied Psychology). University College Cork, Cork, Ireland.

Lund, A., 1998. USE Questionnaire Resource Page. Available from: http://usesurvey.com.

Lund A. Measuring usability with the USE questionnaire. Usability User Exp. Newslett. STC Usability SIG. 2001;8(2):1–4. Available from: www.stcsig.org/usability/newsletter/0110_measuring_with_use.html.

Massaro DW. Experimental Psychology and Information Processing. Chicago, IL: Rand McNally College Publishing Company; 1975.

McGee, M., 2003. Usability magnitude estimation. In: Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting, HFES, Santa Monica, CA, pp. 691–695.

McGee, M., 2004. Master usability scaling: magnitude estimation and master scaling applied to usability measurement. In: Proceedings of CHI 2004, ACM, Vienna, Austria, pp. 335–342.

McLellan S, Muddimer A, Peres SC. The effect of experience on System Usability Scale ratings. J. Usability Stud. 2012;7(2):56–67.

McSweeney, R., 1992. SUMI: A psychometric approach to software evaluation (unpublished M.A. (Qual.) thesis in applied psychology). University College of Cork, Cork, Ireland. Available from: http://sumi.ucc.ie/sumipapp.html.

Molich, R., Kirakowski, J., Sauro, J., Tullis, T., 2009. Comparative usability task measurement workshop (CUE-8). Workshop conducted at the UPA 2009 Conference in Portland, OR.

Mussen P, Rosenzweig MR, Aronson E, Elkind D, Feshbach S, Geiwitz J, Glickman SE, Murdock BB, Wertheimer M, Harvey LO. Psychology: An Introduction. second ed. Lexington, MA: D.C. Heath and Company; 1977.

Nunnally JC. Psychometric Theory. New York, NY: McGraw-Hill; 1978.

Orsini S, Opwis K, Bargas-Avila JA. Response to the reviews on Bargas-Avila et al. (2009) “Intranet Satisfaction Questionnaire: development and validation of a questionnaire to measure user satisfaction with the intranet”. Interact. Comput. 2013;25:304–306.

Parasuraman A. Marketing Research. Reading, MA: Addison-Wesley; 1986.

Peres, S.C., Pham, T., Phillips, R., 2013. Validation of the System Usability Scale (SUS): SUS in the wild. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, HFES, Santa Monica, CA, pp. 192–196.

Presser S, Schuman H. The measurement of a middle position in attitude surveys. Pub. Opin. Quart. 1980;44(1):70–85.

Preston CC, Colman AM. Optimal number of response categories in rating scales: reliability, validity, discriminating power, and respondent preferences. Acta Psychol. 2000;104:1–15.

Quilty LC, Oakman JM, Risko E. Correlates of the Rosenberg Self-Esteem Scale method effects. Struct. Eq. Model. 2006;13(1):99–117.

Rauschenberger M, Schrepp M, Cota MP, Olschner S, Thomaschewski J. Efficient measurement of the user experience of interactive products: How to use the User Experience Questionnaire (UEQ). Int. J. Artif. Intell. Interact. Multimed. 2013;2(1):39–45.

Reichheld FF. The one number you need to grow. Harvard Bus. Rev. 2003;81:46–54.

Reichheld F. The Ultimate Question: Driving Good Profits and True Growth. Boston, MA: Harvard Business School Press; 2006.

Reise SP, Ainsworth AT, Haviland MG. Item response theory: fundamentals, applications, and promise in psychological research. Curr. Dir. Psychol. Sci. 2005;14(2):95–101.

Safar, J.A., Turner, C.W., 2005. Validation of a two factor structure of system trust. In: Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting, HFES, Santa Monica, CA, pp. 497–501.

Sauro, J., 2010a. Does better usability increase customer loyalty? Available from: www.measuringu.com/usability-loyalty.php.

Sauro, J., 2010b. If you could only ask one question, use this one. Available from: www.measuringu.com/blog/single-question.php.

Sauro, J., 2010c. That’s the worst website ever! Effects of extreme survey items. Available from: www.measuringu.com/blog/extreme-items.php.

Sauro, J., 2010d. Top-box scoring of rating scale data. Available from: www.measuringu.com/blog/top-box.php.

Sauro J. A Practical Guide to the System Usability Scale (SUS): Background, Benchmarks & Best Practices. Denver, CO: Measuring Usability LLC; 2011a.

Sauro, J., 2011b. The Standardized User Experience Percentile Rank Questionnaire (SUPR-Q). Available from: www.suprq.com/.

Sauro J. SUPR-Q: A comprehensive measure of the quality of the website user experience. J. Usability Stud. 2015;10(2):68–86.

Sauro, J., Dumas, J.S., 2009. Comparison of three one-question, post-task usability questionnaires. In: Proceedings of CHI 2009, ACM, Boston, MA, pp. 1599–1608.

Sauro, J., Lewis, J.R., 2009. Correlations among prototypical usability metrics: evidence for the construct of usability. In: Proceedings of CHI 2009, ACM, Boston, MA, pp. 1609–1618.

Sauro, J., Lewis, J.R., 2011. When designing usability questionnaires, does it hurt to be positive? In: Proceedings of CHI 2011, ACM, Vancouver, Canada, pp. 2215–2223.

Schmettow, M., Vietze, W., 2008. Introducing item response theory for measuring usability inspection processes. In: Proceedings of CHI 2008, ACM, Florence, Italy, pp. 893–902.

Slaughter, L., Harper, B., Norman, K., 1994. Assessing the equivalence of the paper and on-line formats of the QUIS 5.5. In: Proceedings of the 2nd Annual Mid-Atlantic Human Factors Conference, HFES, Washington, DC, pp. 87–91.

Spector P, Van Katwyk P, Brannick M, Chen P. When two factors don’t reflect two constructs: how item characteristics can produce artifactual factors. J. Manag. 1997;23(5):659–677.

Sun H, Zhang P. Causal relationships between perceived enjoyment and perceived ease of use: an alternative approach. J. Assoc. Inform. Syst. 2011;7(9):618–645.

Tedesco, D.P., Tullis, T.S., 2006. A comparison of methods for eliciting post-task subjective ratings in usability testing. Paper presented at the Usability Professionals Association Annual Conference, UPA, Broomfield, CO.

Teo T, Noyes J. An assessment of the influence of perceived enjoyment and attitude on the intention to use technology among pre-service teachers: a structural equation modeling approach. Comput. Educ. 2011;57(2):1645–1653.

Thurstone LL. Attitudes can be measured. Am. J. Sociol. 1928;33:529–554.

Travis, D., 2008. Measuring satisfaction: beyond the usability questionnaire. Available from: www.userfocus.co.uk/articles/satisfaction.html.

Tullis T, Albert B. Measuring the User Experience: Collecting, Analyzing, and Presenting Usability Metrics. Burlington, MA: Morgan Kaufmann; 2008.

Tullis, T.S., Stetson, J.N., 2004. A comparison of questionnaires for assessing website usability. Paper presented at the Usability Professionals Association Annual Conference, UPA, Minneapolis, MN. Available from: home.comcast.net/~tomtullis/publications/UPA2004TullisStetson.pdf.

van de Vijver FJR, Leung K. Personality in cultural context: methodological issues. J. Pers. 2001;69:1007–1031.

Wang J, Senecal S. Measuring perceived website usability. J. Internet Commer. 2007;6(4):97–112.

Whiteside J, Bennett J, Holtzblatt K. Usability engineering: Our experience and evolution. In: Helander M, ed. Handbook of Human–Computer Interaction. Amsterdam, Netherlands: North-Holland; 1988:791–817.

Wu J, Chen Y, Lin L. Empirical evaluation of the revised end user computing acceptance model. Comput. Hum. Behav. 2007;23:162–174.

Zickar MJ. Modeling item-level data with item response theory. Curr. Dir. Psychol. Sci. 1998;7:104–109.

Zijlstra, R., van Doorn, L., 1985. The construction of a scale to measure subjective effort (Tech. Rep.). Delft, Netherlands: Delft University of Technology, Department of Philosophy and Social Sciences.

Zviran M, Glezer C, Avni I. User satisfaction from commercial web sites: the effect of design and use. Inform. Manag. 2006;43:157–178.
