Creating incentives for cooperation is a challenge in natural and artificial systems.
One potential answer is reputation: agents pay the immediate cost of cooperation in exchange for the future benefits of a good reputation. Game-theoretic models have shown that specific social norms can make cooperation stable, but how agents can learn to establish effective reputation mechanisms on their own is less well understood. We use a simple reinforcement learning model to show that reputation mechanisms generate two coordination problems: agents need to learn how to coordinate on the meaning of existing reputations, and they need to agree collectively on a social norm for assigning reputations to others based on their behavior. We relate our results to the existing literature in Evolutionary Game Theory, and discuss implications for artificial, human, and hybrid systems, where reputations can be used to establish trust and cooperation.
Authors: Nicolas Anastassacos, Julian Garcia, Stephen Hailes, Mirco Musolesi
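The setting described in the abstract can be sketched in code. The following is a minimal illustration, not the paper's exact model: all specifics here are assumptions, including the tabular Q-learning agents, the epsilon-greedy exploration, the donation-game payoff values, and the fixed "image scoring" norm (cooperating earns a good reputation, defecting a bad one). Agents condition their action only on the partner's reputation, and a shared norm reassigns reputations after each interaction.

```python
import random

GOOD, BAD = 1, 0
COOPERATE, DEFECT = 1, 0
BENEFIT, COST = 2.0, 1.0   # donation-game payoffs, benefit > cost (assumed values)
ALPHA, EPSILON = 0.1, 0.1  # learning rate and exploration rate (assumed values)

class Agent:
    """Q-learning agent that conditions its action on the partner's reputation."""
    def __init__(self):
        # Q[s][a]: estimated payoff of action a against a partner with reputation s.
        self.q = {GOOD: [0.0, 0.0], BAD: [0.0, 0.0]}
        self.reputation = random.choice([GOOD, BAD])

    def act(self, partner_rep):
        # Epsilon-greedy choice over the two actions.
        if random.random() < EPSILON:
            return random.choice([COOPERATE, DEFECT])
        values = self.q[partner_rep]
        return COOPERATE if values[COOPERATE] > values[DEFECT] else DEFECT

    def learn(self, partner_rep, action, payoff):
        # Myopic one-step update toward the immediate payoff.
        self.q[partner_rep][action] += ALPHA * (payoff - self.q[partner_rep][action])

def norm(action):
    # Fixed "image scoring" norm: cooperating earns a good reputation.
    return GOOD if action == COOPERATE else BAD

def play_round(agents):
    # Random pairwise matching; both agents act, learn, and are re-judged.
    random.shuffle(agents)
    for a, b in zip(agents[::2], agents[1::2]):
        act_a, act_b = a.act(b.reputation), b.act(a.reputation)
        pay_a = (BENEFIT if act_b == COOPERATE else 0.0) - (COST if act_a == COOPERATE else 0.0)
        pay_b = (BENEFIT if act_a == COOPERATE else 0.0) - (COST if act_b == COOPERATE else 0.0)
        a.learn(b.reputation, act_a, pay_a)
        b.learn(a.reputation, act_b, pay_b)
        # Reputations are reassigned by the shared social norm after each interaction.
        a.reputation, b.reputation = norm(act_a), norm(act_b)

random.seed(0)
population = [Agent() for _ in range(20)]
for _ in range(500):
    play_round(population)
```

In this sketch the first coordination problem corresponds to learning the Q-values conditioned on reputation, and the second to the choice of `norm`, which is fixed here rather than learned by the agents themselves.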