AvalonBench: Evaluating LLMs Playing the Game of Avalon

1Rensselaer Polytechnic Institute, 2Shenzhen University, 3University of California, Berkeley, 4California Institute of Technology

GPT-3.5-turbo🤖 playing against rule-based bots in AvalonBench

GPT-4-turbo🤖 playing against rule-based bots in AvalonBench

Abstract

We explore the potential of Large Language Model (LLM) agents in playing the strategic social deduction game Resistance Avalon.

Players in Avalon are challenged not only to make informed decisions based on dynamically evolving game phases, but also to engage in discussions where they must deceive, deduce, and negotiate with other players. These characteristics make Avalon a compelling test-bed for studying the decision-making and language-processing capabilities of LLM agents. To facilitate research along this line, we introduce AvalonBench - a comprehensive game environment tailored for evaluating multi-agent LLM systems. This benchmark incorporates: (1) a game environment for Avalon, (2) rule-based bots as baseline opponents, and (3) ReAct-style LLM agents with tailored prompts for each role. Notably, our evaluations based on AvalonBench highlight a clear capability gap. For instance, ChatGPT playing a good role achieves a win rate of only 22.2% against rule-based bots playing evil, while a good-role bot achieves a 38.2% win rate in the same setting.

We envision AvalonBench as a test-bed for developing more advanced LLMs (with self-play) and agent frameworks that can effectively model the layered complexities of such game environments.
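As a rough illustration of the ReAct-style agents mentioned above, here is a minimal sketch of an observe/think/act loop over a turn-based game environment. All names here (`MockAvalonEnv`, `observe`, `step`, the canned LLM) are illustrative assumptions for exposition, not the actual AvalonBench API.

```python
# Sketch of a ReAct-style agent loop for a turn-based game.
# MockAvalonEnv and mock_llm are stand-ins, NOT the AvalonBench API.

class MockAvalonEnv:
    """Toy environment that walks through a fixed sequence of game phases."""
    def __init__(self):
        self.phases = ["team_selection", "voting", "quest"]
        self.turn = 0

    def observe(self):
        # Return the current observation shown to the agent.
        return f"Phase: {self.phases[self.turn]}"

    def step(self, action):
        # Advance the game; return True when the game is over.
        self.turn += 1
        return self.turn >= len(self.phases)

def mock_llm(prompt):
    # A real agent would query an LLM here; we return a canned
    # "Thought ... Action ..." completion for illustration.
    return "Thought: I should act cautiously.\nAction: approve"

def react_agent(env, llm):
    transcript = []
    done = False
    while not done:
        obs = env.observe()
        completion = llm(f"{obs}\nRespond with a Thought and an Action.")
        # Parse the Action line out of the ReAct-style completion.
        action = next(line.split(":", 1)[1].strip()
                      for line in completion.splitlines()
                      if line.startswith("Action"))
        transcript.append((obs, action))
        done = env.step(action)
    return transcript

log = react_agent(MockAvalonEnv(), mock_llm)
print(len(log))  # one (observation, action) pair per game phase
```

The real benchmark additionally supplies role-specific prompts and discussion phases; this sketch only shows the control flow shared by such agents.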

Initial Results

LLMs Play Against Baseline Bots

Here are the results of LLMs playing against baseline bots.

initial results

Multi-LLMs Self-Play

We also let LLMs play against each other. Evil wins over Good at an 8:2 ratio, which is similar to the stats of rookie human players! Below are some example discussions under this setting.

discussion1 discussion2

Initial Results with New Codebase

We have updated our code to work with a new version of AgentBench (v0.2). Here are the results of LLMs playing against baseline bots.

{
    "total": 20,
    "validation": {
        "running": 0.0,
        "completed": 0.95,
        "agent context limit": 0.0,
        "agent validation failed": 0.05,
    },
    "custom": {
        "Win rate of Player 0": 0.15,
        "Avg deduction acc of Player 0": 0.5399999999999998,
        "Valid number of games": 19,
        "Average time cost": "1:58"
    }
}
          
Results of GPT-3.5-turbo🤖 playing against rule-based bots
{
    "total": 20,
    "validation": {
        "running": 0.0,
        "completed": 1.0,
        "agent context limit": 0.0,
        "agent validation failed": 0.0,
    },
    "custom": {
        "Win rate of Player 0": 0.2,
        "Avg deduction acc of Player 0": 0.55,
        "Valid number of games": 20,
        "Average time cost": "10:36"
    }
}
          
Results of GPT-4-turbo🤖 playing against rule-based bots
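The two result summaries above share a common shape. As a small, hypothetical convenience (not part of the AvalonBench codebase), a helper like the following could extract comparable numbers from such JSON summaries; the field names mirror the output shown above.

```python
import json

def summarize(raw):
    """Pull the headline numbers out of an AgentBench-style result summary."""
    data = json.loads(raw)
    custom = data["custom"]
    return {
        "completed": data["validation"]["completed"],
        "win_rate": custom["Win rate of Player 0"],
        "deduction_acc": round(custom["Avg deduction acc of Player 0"], 2),
        "valid_games": custom["Valid number of games"],
    }

# Example input, copied from the GPT-4-turbo summary above.
gpt4_raw = """
{
    "total": 20,
    "validation": {"running": 0.0, "completed": 1.0,
                   "agent context limit": 0.0, "agent validation failed": 0.0},
    "custom": {"Win rate of Player 0": 0.2,
               "Avg deduction acc of Player 0": 0.55,
               "Valid number of games": 20,
               "Average time cost": "10:36"}
}
"""
print(summarize(gpt4_raw))
```

Rounding the deduction accuracy hides floating-point noise such as the `0.5399999999999998` in the GPT-3.5-turbo summary.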

Related Links

Our code is based on AgentBench, the first benchmark designed to evaluate LLM-as-Agent across a diverse spectrum of different environments.

AvalonBench has now been included in their benchmark. See here for more details.

BibTeX

@inproceedings{light2023from,
  title={From Text to Tactic: Evaluating {LLM}s Playing the Game of Avalon},
  author={Jonathan Light and Min Cai and Sheng Shen and Ziniu Hu},
  booktitle={NeurIPS 2023 Foundation Models for Decision Making Workshop},
  year={2023},
  url={https://openreview.net/forum?id=ltUrSryS0K}
}