Training and oversight of algorithms in social decision-making: Algorithms with prescribed selfish defaults breed selfish decisions

Computers in Human Behavior (2026)

[shared first authorship]
Human social preferences increasingly shape oversight of, and training data for, Artificial Intelligence (AI) social decisions that affect human–human interactions. We test how algorithms with and without prescribed social preferences shape social decision-making and delegation. In an incentivised online experiment (n = 1290), participants completed a Social Value Orientation (SVO) measure as input to a decision-making algorithm, revealing their preferences for outcomes favouring themselves or an anonymous other. We manipulated whether participants (1) provided training data to an algorithm without prescribed preferences by answering the SVO without defaults or (2) oversaw algorithms with prescribed preferences by including proself/prosocial pre-selected defaults for each item. When decisions involved an algorithm, defaults were labelled as algorithmic; in a control condition, identical defaults were unlabelled. Participants’ social preferences were not significantly affected by providing input to an algorithm without prescribed preferences (vs no defaults), nor by overseeing an algorithm with prescribed prosocial preferences (vs identical unlabelled defaults and vs the algorithm without prescribed preferences). Only overseeing the algorithm with prescribed proself preferences resulted in more selfish social preferences (vs the algorithm without prescribed preferences and vs the algorithm with prescribed prosocial preferences), even though participants reported feeling less influenced by proself than by prosocial defaults. Most participants delegated a second social decision-making task to the algorithm they encountered. These findings tentatively suggest that human-in-the-loop oversight, in which humans can alter algorithmic suggestions, may on its own fall short of addressing algorithmic biases, as individuals acted more selfishly when exposed to pre-existing selfish tendencies in algorithms.

Recommended citation: Dores Cruz, T. D., & de Lucena, M. A. M. (2026). Training and oversight of algorithms in social decision-making: Algorithms with prescribed selfish defaults breed selfish decisions. Computers in Human Behavior, 179, 108924. https://doi.org/10.1016/j.chb.2026.108924
Open Access to Paper | OSF Page