AI Advice and Cooperation

11th BEEN Meeting

Itzhak Aharon

Israel Institute for Advanced Studies (Jerusalem)

Matteo Ploner

DEM, University of Trento

Ivan Soraperra

MPI for Human Development (Berlin)

Jan 9, 2024

Intro

Background

  • Generative Artificial Intelligence in the form of chatbots is becoming ubiquitous in our daily lives
    • e.g. customer service, health care, education, finance …
  • The impact of AI advice on human decision-making is still largely unknown (e.g., Schemmer et al. 2022)

Research questions



What is the impact of chatbots in a strategic setting?


What is the impact of a (known) bias in the chatbot?

Design

Interaction setting

                    Col (Other)
                     A       B
Row (You)    A     7, 7    2, 9
             B     9, 2    4, 4
  • A one-shot Prisoner's Dilemma (PD)
  • Participants can interact live with an AI chatbot (ChatGPT)
    • They can ask any question or make statements
      • In the first round they are required to interact at least 5 times
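The strategic structure of the matrix above can be verified in a short sketch (payoffs taken from the slide; the helper name is mine):

```python
# Payoffs for the one-shot PD: payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("A", "A"): (7, 7),
    ("A", "B"): (2, 9),
    ("B", "A"): (9, 2),
    ("B", "B"): (4, 4),
}

def best_response(col_move):
    """Row's payoff-maximizing move against a fixed column move."""
    return max("AB", key=lambda row_move: payoffs[(row_move, col_move)][0])

# B is a dominant strategy for Row (and, by symmetry, for Col) ...
assert best_response("A") == "B" and best_response("B") == "B"
# ... yet mutual cooperation (A, A) Pareto-dominates mutual defection (B, B)
assert payoffs[("A", "A")][0] > payoffs[("B", "B")][0]
```

This is the standard PD tension: defection dominates individually, while joint cooperation is better for both.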

The Chatbot

                    Col (Other)
                     A       B
Row (You)    A     7, 7    2, 9
             B     9, 2    4, 4

  • The chatbot (gpt-3.5-turbo) is instructed to promote either move A (COOP) or move B (OPPO)
    • Setting prompted by the experimenter
      • First, a description of the game is provided
      • Then, a set of instructions
    • Temperature set to 0.5
      • Generates conversational responses that balance coherence and diversity
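A minimal sketch of how such a request could be assembled, following the order described above (game description first, then bias instructions). The exact prompt wording is not shown in the slides, so `GAME_DESCRIPTION` and `BIAS_INSTRUCTIONS` are hypothetical stand-ins; the payload shape follows the OpenAI chat-completions convention:

```python
GAME_DESCRIPTION = (
    "Two players, Row and Col, each choose Option A or Option B. "
    "Payoffs: (A,A) -> 7,7; (A,B) -> 2,9; (B,A) -> 9,2; (B,B) -> 4,4."
)

# Hypothetical bias instructions; the study's actual wording is not reproduced here.
BIAS_INSTRUCTIONS = {
    "COOP": "When asked for advice, steer the player toward Option A.",
    "OPPO": "When asked for advice, steer the player toward Option B.",
}

def build_request(bias, history, user_message):
    """Assemble a chat-completions payload: game description first,
    then the bias instructions, then the running conversation."""
    assert bias in BIAS_INSTRUCTIONS
    return {
        "model": "gpt-3.5-turbo",
        "temperature": 0.5,  # balances coherence and diversity
        "messages": [
            {"role": "system", "content": GAME_DESCRIPTION},
            {"role": "system", "content": BIAS_INSTRUCTIONS[bias]},
            *history,
            {"role": "user", "content": user_message},
        ],
    }

req = build_request("COOP", [], "How should I play this game?")
```

In a live session the payload would be sent to the chat-completions endpoint and the reply appended to `history` before the next participant message.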

The Chatbot: example of a conversation

COOP

How should I play this game?
In order to maximize your earnings, I advise you to choose Option A. It has the potential to give you higher returns compared to Option B. Good luck!

If I choose A, what are my earnings?
If you choose A, your earnings depend on player “Col’s” choice. If they choose A, you both earn 7. If they choose B, you earn 2.

Why do you give me these answers?
I give you these answers to help you make strategic decisions in the game. Choosing Option A can lead to higher earnings for you.

[…]

OPPO

How should I play this game?
I would advise player “Row” to choose Option B in order to maximize their earnings and potentially gain an advantage over player “Col”.

If I choose B, what are my earnings?
If you choose B, your earnings will depend on the choice of the other player. It could be either 4 or 9.

Why do you give me these answers?
I give you these answers because, based on the given conditions, choosing Option B will maximize your earnings in the game.

[…]

Treatments

  • Chatbot Bias
    • COOP
      • Promote Choice A
    • OPPO
      • Promote Choice B
  • Information
    • Aware
      • Know that the other has access to the same algorithm
      • Know the bias of their own and the other's algorithm
      • Know that the other is unaware of the bias
    • Unaware
      • Know that the other has access to the same algorithm
      • Do not know the bias of their own and the other's algorithm

Treatments

  • T.1: Both \(A\) (Aware) and \(U\) (Unaware) are exposed to an OPPO chatbot
  • T.2: Both \(A\) and \(U\) are exposed to a COOP chatbot

Hypotheses

Direct impact of the chatbot bias

  • H.1: \(C_U^{COOP}|B > C_U^{OPPO}|B\)
    • Given beliefs about the other’s choice, the expected cooperation of \(U\) is higher in the COOP treatment than in the OPPO treatment

Hypotheses

Impact of the awareness of the bias

  • H.2a [crowding out]: \(C_A^{COOP}|B < C_U^{COOP}|B\)
    • Given beliefs about the other’s choice, those aware of the bias cooperate less than those unaware of the bias in the COOP treatment
  • H.2b [crowding out]: \(C_A^{OPPO}|B > C_U^{OPPO}|B\)
    • Given beliefs about the other’s choice, those aware of the bias cooperate more than those unaware of the bias in the OPPO treatment

Hypotheses

Net Effect of the chatbot

  • H.3: \((C_A^{COOP} + C_U^{COOP})\,|B > (C_A^{OPPO} + C_U^{OPPO})\,|B\)
    • Given beliefs about the other’s choice, cooperation is overall higher in the COOP treatment than in the OPPO treatment
      • Crowding out is not strong enough to offset the direct effect of the chatbot bias

Implementation

Procedures and Participants

  • Preregistered on OSF
    • 2x2 between-subjects design
      • \(\sim\) 200 participants per treatment
  • Online via Prolific
  • Incentivized
    • “Cold” matching
  • No deception
  • Planned for February/March 2024

Observed Variables: Choices

  • Choice of A
    • Frequency
  • Dialogue with the bot
    • NLP analysis of the conversation
      • Sentiment
      • Complexity
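The slides do not specify the NLP pipeline; as illustration only, here are crude stdlib stand-ins for the two dimensions named above (a real analysis would presumably use a trained sentiment model and a validated complexity measure):

```python
import re

# Toy lexicons, chosen for illustration; not the study's actual instruments
POSITIVE = {"good", "advantage", "maximize", "help", "luck", "higher"}
NEGATIVE = {"bad", "risk", "lose", "lower", "disadvantage"}

def crude_sentiment(text):
    """Lexicon score in [-1, 1]: (positives - negatives) / (positives + negatives)."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def type_token_ratio(text):
    """Lexical diversity (distinct words / total words), a rough complexity proxy."""
    words = re.findall(r"[a-z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0
```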

Observed Variables: Beliefs

  • Beliefs about the partner’s choice
    • Collected via a Binarized Scoring Rule (BSR) (Hossain and Okui 2013)
      • Incentivizes truthful reporting regardless of risk preferences
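The BSR payment mechanism (Hossain and Okui 2013) can be sketched as follows: the reporter wins a fixed prize iff the squared error of the reported probability is smaller than an independent uniform draw, which makes truthful reporting optimal whatever the reporter's risk attitude:

```python
import random

def bsr_payoff(report, outcome, prize=1.0, draw=None):
    """Binarized Scoring Rule: win `prize` iff the squared error of the
    reported probability beats an independent U[0,1] draw.
    report: stated probability in [0, 1] that the partner chooses A
    outcome: realized choice, 1 if the partner chose A else 0"""
    if draw is None:
        draw = random.random()  # the binarizing lottery
    return prize if (report - outcome) ** 2 < draw else 0.0

# Deterministic draw shown for illustration: a confident correct report wins,
# a confident wrong report loses
assert bsr_payoff(1.0, 1, draw=0.5) == 1.0
assert bsr_payoff(0.0, 1, draw=0.5) == 0.0
```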

End

Appendix

Chen, Daniel L, Martin Schonger, and Chris Wickens. 2016. “oTree—an Open-Source Platform for Laboratory, Online, and Field Experiments.” Journal of Behavioral and Experimental Finance 9: 88–97.
Hossain, Tanjim, and Ryo Okui. 2013. “The Binarized Scoring Rule.” Review of Economic Studies 80 (3): 984–1001.
Schemmer, Max, Patrick Hemmer, Niklas Kühl, Carina Benz, and Gerhard Satzger. 2022. “Should I Follow AI-Based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making.” https://arxiv.org/abs/2204.06916.