Research Lines

Human Oversight in Human-AI Cooperation

In this line of research, I investigate when human oversight (e.g., human-in-the-loop feedback) decreases or increases the accuracy or bias of algorithmic decisions and recommendations.

Work in Progress:

  • Human Bias in Oversight to Mitigate Algorithmic Bias: Using a large incentivized behavioral experiment to test the extent to which human oversight from Democrats versus Republicans mitigates, or conversely introduces, bias in hybrid decision-making settings where an algorithm provides a recommendation that benefits locals or immigrants
  • Human Social Preferences as Input for Algorithms With and Without Prescribed Social Preferences: Using an incentivized behavioral experiment to investigate how people provide input to an algorithm, testing whether behavior differs when providing only training data for an algorithm without initial preferences versus providing feedback for an algorithm with prescribed prosocial versus proself preferences (using the SVO Slider)

Gossip, Reputation, and Cooperation

In this line of research, I investigate the cooperative and competitive functions of gossip. I focus mostly on how gossip can support systems of reputation-based cooperation.

Work in Progress:

  • Gossip as a second-order dilemma: Using trust games with gossip to investigate the costs and benefits of (not) gossiping and whether senders are aware of these costs
  • Motives for (In)direct Punishment: Using gossip in ultimatum game responses to investigate whether offer rejection is motivated by venting emotions or harming offenders
