2. INTERACTIVE INFORMATION RETRIEVAL
2.3 SYSTEM/INTERFACE FEATURES EVALUATION
The second category (system/interface features evaluation) covers the typical IIR evaluation studies.
In this type of work, a human-related, personalized system or interface feature is typically being
evaluated. To evaluate systems with users, researchers usually employ multiple methods to collect
data on users' search behavior, search goals, search performance, and the overall experience of
search interaction (e.g., in situ questionnaires, post-search questionnaires, individual and focus group
interviews) (Kelly, 2009; Moffat et al., 2017). System features designed for user-centered evaluation
are often directly related to certain user characteristics, such as information need, behavior and
cognition, and information seeking and search contexts.
Unlike the user studies discussed in the previous section, system evaluation research
usually focuses on the added or manipulated system or interface features and evaluates their use-
fulness and usability in supporting users' interactions with systems in different task contexts. For
instance, Syed and Collins-Thompson (2017) designed novel retrieval algorithms to provide per-
sonalized results tailored to human learning goals and evaluated the effectiveness of the proposed
model in improving word-learning outcomes. In this case, the optimized ranking algorithms were
evaluated based on how they shaped the presentation of information and changed participants'
knowledge gains in the predefined learning context. Kelly and Fu (2006) focused on term selec-
tion in query formulation and examined the usefulness of three relevance feedback interfaces. In
their user-centered evaluation, they demonstrated that queries formulated with the help (candidate
terms, context of search) of experimental interfaces significantly outperformed corresponding
baseline queries. Yuan and Belkin (2007) designed an integrated IIR system that adapts to
support different information seeking strategies and showed that this novel system resulted in
significantly better performance in terms of user satisfaction with the retrieved results, effective
interaction, and system usability. Dumais et al. (2016) deployed a document finding and re-us-
ing system named Stuff I've Seen and evaluated the usefulness of the system with employees in
Microsoft workplaces. With respect to image search, Xu et al. (2010) developed a novel image
search system named Image Search by Concept Map and evaluated the effectiveness of the system
in supporting users in finding relevant images.
In the context of computer-supported group work, Hong et al. (2018) explored the possibil-
ity of supporting information seeking in the context of group decision-making and demonstrated
that providing collaborative dynamic queries to people in groups can significantly improve the
group's perceived efficiency, effectiveness, and level of satisfaction with the decision-making process.
Similarly, Shah and Marchionini (2010) sought to support explicit collaboration in information
seeking activities and found that system features supporting group awareness are critical for
effective collaboration, and that such support can be applied in information seeking without signifi-
cantly decreasing system usability or imposing additional cognitive load on the users.