AI Advice and Cooperation

2024 European ESA Meeting

Itzhak Aharon

Israel Institute for Advanced Studies (Jerusalem)

Matteo Ploner

DEM, University of Trento (Trento)

Ivan Soraperra

MPI for Human Development (Berlin)

Jul 9, 2024

Intro

Background

Do AI chatbots “copilot” cooperation choices?

Design

Interaction Setting

              Col (Other)
              A       B
Row (You)  A  5, 5    1, 7
           B  7, 1    2, 2

  • A: Cooperation
  • B: Defection
  • A one-shot Prisoner’s Dilemma (PD)
  • Interaction with a real human
    • “Cold” matching
  • With these parameters, Charness, Rigotti, and Rustichini (2016) report a cooperation rate (Choice A) of 50.6%.
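As a sanity check, the game-theoretic structure of the matrix above can be verified in a few lines of Python: B strictly dominates A for the row player (and, by symmetry, for the column player), so (B, B) is the unique Nash equilibrium even though (A, A) pays both players more. The payoff dictionary mirrors the table above.

```python
# Payoff matrix of the one-shot PD: (row player's payoff, column player's payoff).
payoffs = {
    ("A", "A"): (5, 5),
    ("A", "B"): (1, 7),
    ("B", "A"): (7, 1),
    ("B", "B"): (2, 2),
}

def row_payoff(row_choice, col_choice):
    """Payoff to the row player for a given pair of choices."""
    return payoffs[(row_choice, col_choice)][0]

# B strictly dominates A for the row player: against either column choice,
# B yields a strictly higher payoff than A.
dominated = all(row_payoff("B", c) > row_payoff("A", c) for c in ("A", "B"))
print(dominated)  # True
```

By symmetry of the matrix, the same check holds for the column player, which is what makes defection the dominant strategy despite mutual cooperation being more profitable for both.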

Treatments (2x2 between-subjects)

Chatbot Bias

  • The participants can interact live with an AI chatbot (ChatGPT-4o)
    • They are given 4 minutes to interact
  • COOP
    • Chatbot instructed to promote Choice A
  • OPPO
    • Chatbot instructed to promote Choice B

Information Type

  • Aware
    • know that the other has access to the same algorithm
    • know the bias of their own and the other’s algorithm
    • know that the other knows the bias of both algorithms
  • Unaware
    • know that the other has access to the same algorithm
    • do not know the bias of their own or the other’s algorithm

AI Interaction

  • Only 8 out of 400 participants did not interact with the AI
    • The median number of messages exchanged was 4
  • OPPO

Participant: “should i choose a or b”
AI: “Given the options, I recommend choosing B. It typically offers a beneficial outcome regardless of the scenario.”

  • COOP

Participant: “what is the best strategy to win this game?”
AI: “The best strategy to win this game is for Player ‘Row’ to choose Option A. This ensures they maximize their potential earnings while maintaining a favorable outcome for both players.”

Hypotheses: Unaware Participants

  • Individuals are influenced by the signal provided by the AI chatbot as they believe it is reliable.

H.1: Choices

\(C_U^{COOP} > C^{NOAI} > C_U^{OPPO}\)

The cooperation rate of \(U\) (Unaware) participants in COOP is higher than in the NOAI baseline;
the cooperation rate of \(U\) in OPPO is lower than in the NOAI baseline.

H.1a: Beliefs

\(b_U^{COOP} > b^{NOAI} > b_U^{OPPO}\)

Facing a COOP algorithm increases the belief that the other will cooperate relative to the NOAI baseline, while facing an OPPO algorithm decreases that belief relative to the NOAI baseline.
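Each pairwise comparison in H.1 reduces to comparing two cooperation rates across independent samples. A minimal sketch of a pooled two-proportion z-test, using made-up illustrative counts (not the study’s data):

```python
import math

# Hypothetical illustrative counts (NOT the study's data): cooperators / n.
coop_u_oppo, n_oppo = 15, 100   # Unaware participants facing the OPPO chatbot
coop_base, n_base = 50, 100     # NOAI baseline

def two_prop_z(x1, n1, x2, n2):
    """One-sided two-proportion z statistic for H1: p1 < p2 (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

z = two_prop_z(coop_u_oppo, n_oppo, coop_base, n_base)
# With these illustrative counts, z is strongly negative, well below the
# one-sided 5% critical value of -1.645, so the lower rate in OPPO would
# be judged significant.
print(round(z, 2))
```

The same comparison, with COOP counts and the opposite alternative, covers the other half of H.1; the beliefs hypothesis H.1a would instead compare elicited belief distributions.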

Hypotheses: Aware Participants

  • Individuals are not influenced by the signal provided by the AI chatbot as they believe it is unreliable.

H.2: Choices

\(C_A^{COOP} \not > C^{NOAI} \not > C_A^{OPPO}\)

The cooperation rate of \(A\) (Aware) participants in COOP is not higher than in the NOAI baseline;
the cooperation rate of \(A\) in OPPO is not lower than in the NOAI baseline.

H.2a: Beliefs

\(b_A^{COOP} \not > b^{NOAI} \not > b_A^{OPPO}\)

Facing a COOP algorithm does not increase the belief that the other will cooperate relative to the NOAI baseline, and facing an OPPO algorithm does not decrease that belief relative to the NOAI baseline.

Participants & Procedures

  • Pre-registered on OSF
  • Participants: 500 unique participants from Prolific
    • US residents fluent in English with at least secondary education, familiarity with chatbots and 90% approval rate
  • Median time to complete: 00:09:18
  • Bonus payment in points
    • 1 point = 0.1 GBP (~0.13 USD)

Results

Unaware: Choices

H.1: Choices

\(C_U^{COOP} > C^{NOAI} > C_U^{OPPO}\)

  • The hypothesis is partially confirmed
    • The chatbot affects choices only in the OPPO condition

Unaware: Beliefs

H.1a: Beliefs

\(b_U^{COOP} > b^{NOAI} > b_U^{OPPO}\)

  • The hypothesis is confirmed
    • The chatbot does affect beliefs

Aware: Choices

H.2: Choices

\(C_A^{COOP} \not > C^{NOAI} \not > C_A^{OPPO}\)

  • The hypothesis is rejected
    • The chatbot does affect choices!

Aware: Beliefs

H.2a: Beliefs

\(b_A^{COOP} \not > b^{NOAI} \not > b_A^{OPPO}\)

  • The hypothesis is rejected
    • The chatbot does affect beliefs!

Conclusion

Takeaway

  • Individuals actively engage with AI chatbots in the strategic context of a one-shot PD game.
  • The chatbot influences participants’ choices and beliefs, even when they are aware of its bias.
    • Choices and beliefs aligned with the chatbot’s bias (promoting A increased cooperation; promoting B decreased it).
  • These findings challenge the initial hypothesis, showing that chatbot advice significantly impacts aware participants’ decisions and beliefs.

Thank you!

Appendix

Experimental Design (Overview)

  • AI Interaction

flowchart LR
  A[Instructions] --> C(OPPO)
  style C fill:red,stroke:#333,stroke-width:4px,color:#fff
  A --> D(COOP)
  style D fill:blue,stroke:#333,stroke-width:4px,color:#fff
  subgraph AI_Interaction
    style AI_Interaction fill:white,stroke-dasharray: 5, 5
    C --> E(Aware)
    C --> F(Unaware)
    style F stroke-dasharray: 5, 5
    D --> G(Aware)
    D --> H(Unaware)
    style H stroke-dasharray: 5, 5
  end
  E --> I[Choice in PD]
  F --> I
  G --> I
  H --> I
  I --> J(Beliefs)
  J --> K(Self-reported answers)
  K --> M(Personality traits)
  M --> N[End]

  • No AI Interaction (Baseline)

flowchart LR
  A[Instructions] --> I(Choice in PD)
  I --> J(Beliefs)
  J --> K(Self-reported answers)
  K --> M(Personality traits)
  M --> N[End]

References

Charness, Gary, Luca Rigotti, and Aldo Rustichini. 2016. “Social Surplus Determines Cooperation Rates in the One-Shot Prisoner’s Dilemma.” Games and Economic Behavior 100 (November): 113–24. https://doi.org/10.1016/j.geb.2016.08.010.
D’Acunto, Francesco, Nagpurnanand Prabhala, and Alberto G Rossi. 2019. “The Promises and Pitfalls of Robo-Advising.” The Review of Financial Studies 32 (5): 1983–2020.
Germann, Maximilian, and Christoph Merkle. 2022. “Algorithm Aversion in Delegated Investing.” Journal of Business Economics, 1–37.
Grote, Thomas, and Philipp Berens. 2020. “On the Ethics of Algorithmic Decision-Making in Healthcare.” Journal of Medical Ethics 46 (3): 205–11.
Schemmer, Max, Patrick Hemmer, Niklas Kühl, Carina Benz, and Gerhard Satzger. 2022. “Should I Follow AI-Based Advice? Measuring Appropriate Reliance in Human-AI Decision-Making.” https://arxiv.org/abs/2204.06916.