Humans use AI assistance in a wide variety of high- and low-stakes decision-making tasks today. However, human reliance on AI assistance is often sub-optimal, with people exhibiting under- or over-reliance on the AI. We present an empirical investigation of AI-assisted human decision-making in a noisy image classification task. We analyze participants' reliance on the AI's assistance and the accuracy of the human-AI team compared to the human or the AI working independently. We demonstrate that participants do not show automation bias, a widely reported behavior displayed by humans when assisted by AI. In this specific instance of AI-assisted decision-making, people correctly override the AI's decisions when needed and achieve close to the theoretical upper bound on combined performance. We suggest three reasons for this discrepancy from previous research findings: 1) people are experts at classifying everyday images and have a good understanding of their own ability at the task, 2) people engage in the metacognitive act of deliberation when asked to indicate confidence in their decisions, and 3) people were able to build a good mental model of the AI by incorporating the feedback provided after each trial. These findings should inform future experiment design.