2024 Apr;184:103216.
doi: 10.1016/j.ijhcs.2023.103216. Epub 2024 Jan 6.

Towards the design of user-centric strategy recommendation systems for collaborative Human-AI tasks

Lakshita Dodeja et al. Int J Hum Comput Stud. 2024 Apr.

Abstract

Artificial Intelligence is increasingly employed by humans to collaboratively solve complicated tasks in domains such as search and rescue and manufacturing. Efficient teamwork can be achieved by understanding user preferences and recommending strategies for solving the task at hand. Prior work has focused on personalizing recommendation systems for relatively well-understood tasks in the context of e-commerce or social networks. In this paper, we seek to understand the important factors to consider when designing user-centric strategy recommendation systems for decision-making. We conducted a human-subjects experiment (n=60) measuring the preferences of users with different personality types towards different strategy recommendation systems. Our experiment spanned four types of strategy recommendation modalities established in prior work: (1) a single strategy recommendation, (2) multiple similar recommendations, (3) multiple diverse recommendations, and (4) all possible strategy recommendations. While these recommendation schemes have been explored independently in prior work, our study is novel in that we employ all of them simultaneously, and in the context of strategy recommendation, providing an in-depth overview of how different strategy recommendation systems are perceived. We found that certain personality traits, such as conscientiousness, notably impact the preference for a particular type of system (𝑝 < 0.01). Finally, we report an interesting relationship between usability, alignment, and perceived intelligence, wherein greater perceived alignment of recommendations with one's own preferences leads to higher perceived intelligence (𝑝 < 0.01) and higher usability (𝑝 < 0.01).

Keywords: Design and evaluation of innovative interactive systems; Intelligent user interfaces; Interactive decision support systems.


Conflict of interest statement

Declaration of competing interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Matthew Gombolay reports financial support was provided by Office of Naval Research. Lakshita Dodeja reports financial support was provided by Office of Naval Research. Pradyumna Tambwekar reports financial support was provided by Office of Naval Research. Matthew Gombolay reports a relationship with Johns Hopkins University Applied Physics Laboratory that includes: consulting or advisory.

Figures

Fig. A.8.
These plots denote the correlation between preference for diverse strategies and preference for similar strategies.

Fig. A.9.
Boxplots for the different personality traits of the recruited participants.

Fig. D.10.
This figure contains all questions comprising the Calibration Questionnaire. Each question forms a different node of the decision tree defined in Fig. 4. (L) indicates that selecting this answer moves you to the left branch of the tree, and (R) indicates that it moves you to the right branch.

Fig. D.11.
Detailed descriptions of all 8 strategies used in the study. Each strategy contains goals, constraints, and a RISK map showing the drafting stage for the strategy.

Fig. 1.
This diagram provides a schematic overview of the entire study. We employ a two-phase study design. The goal of Phase 1 is to validate our proposed strategy recommendation methodology: participants complete the strategy recommendation questionnaire (1.1), receive a recommendation (1.2), and complete a post-study alignment survey to ascertain how aligned the recommendation was with their original preferences (1.3). Using the data from Phase 1, we validate that our questionnaire can adequately be used to recommend relevant strategies. We then utilize this questionnaire in Phase 2, in which our goal is to study strategy recommendation systems. After completing a pre-survey (2.1), participants answer the validated questionnaire from Phase 1 (2.2) and are recommended a strategy in one of four formats based on the study condition (2.3). Participants receive and analyze the recommendation(s) and then complete post-study surveys to evaluate their experience.

Fig. 2.
This figure shows the RISK simulator used in our study. The simulation for the recommended strategy was executed by the orange player (Agent), which played against the teal (Bravo) and pink (Charlie) players. We also included a legend so that participants could track the forces and territories of all players. Each action is annotated with a text description in the box at the bottom of the screen.

Fig. 3.
This figure depicts how the aligned and reverse strategies were presented to participants during the calibration study. The first half of participants were shown the aligned strategy as “Strategy 1”, and the second half were shown the reverse strategy as “Strategy 1”.

Fig. 4.
A depiction of how we recommend strategies based on the study condition assigned to the participant. In this illustration, based on the participant’s answers to each question, their ideal strategy is 𝑆4, as shown by the path highlighted in green. For the “single” condition, we recommend only 𝑆4. For the “similar” condition, the participant is recommended three strategies: 𝑆4, its sibling strategy 𝑆3, and one of its “cousin” strategies, 𝑆2. In the “diverse” condition, the participant is also recommended three strategies; however, instead of the sibling strategy, the participant is recommended a strategy on the other side of the tree, i.e. 𝑆8. Finally, in the “all” condition, participants are shown all eight strategies. The full calibration questionnaire can be found in Appendix B.1.

Fig. 5.
Two bar graphs showing the performance of each study condition on (a) usability and (b) alignment.

Fig. 6.
Summary of all significant results in our study. Asterisks denote the level of significance: ***: 𝑝 < 0.001, **: 𝑝 < 0.01, *: 𝑝 < 0.05.

Fig. 7.
These plots denote the impact of perceived alignment on usability and perceived intelligence.
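The tree-based selection described in the Fig. 4 caption can be illustrated with a small sketch. This is a hypothetical reconstruction, not the authors' code: it assumes eight strategies S1–S8 sit at the leaves of a depth-3 binary decision tree, and that sibling, cousin, and opposite-side strategies correspond to flipping the lowest, second, and highest bits of the leaf index.

```python
def ideal_index(answers):
    """Map (L)/(R) answers to the three decision-tree questions
    onto a leaf index 0..7 (i.e. strategies S1..S8)."""
    i = 0
    for a in answers:
        i = i * 2 + (1 if a == "R" else 0)
    return i

def recommend(i, condition):
    """Return the strategies shown under each study condition,
    given the ideal strategy's leaf index i."""
    strategies = [f"S{k + 1}" for k in range(8)]
    if condition == "single":            # ideal strategy only
        idx = [i]
    elif condition == "similar":         # ideal, sibling, one cousin
        idx = [i, i ^ 1, i ^ 2]
    elif condition == "diverse":         # cousin kept, sibling swapped
        idx = [i, i ^ 2, i ^ 4]          # for the opposite half of the tree
    else:                                # "all": every strategy
        idx = list(range(8))
    return [strategies[k] for k in idx]
```

With the example from the caption (ideal strategy 𝑆4, leaf index 3), the "similar" condition yields S4, S3, S2 and the "diverse" condition yields S4, S2, S8, matching the figure's description under the bit-flip assumption above.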


