Exploring the constraints on artificial general intelligence: a game-theoretic model of human vs machine interaction

Research output: Working paper/Preprint


Abstract

The potential emergence of artificial general intelligence (AGI) has sparked intense debate among researchers, policymakers, and the public, owing to the prospect of systems that surpass human intelligence in all domains. This note argues that for an AI to be considered "general," it should achieve superhuman performance not only in zero-sum games but also in general-sum games, where winning and losing are not clearly defined. To that end, I propose a game-theoretic framework that captures the strategic interactions between a representative human agent and a potential superhuman machine agent. Four assumptions underpin this framework: Superhuman Machine, Machine Strategy, Rationality, and Strategic Unpredictability. The main result is an impossibility theorem, establishing that these assumptions are inconsistent when taken together, while relaxing any one of them yields a consistent set. This note contributes to a better understanding of the theoretical constraints that can shape the development of superhuman AI.
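The abstract's distinction between zero-sum and general-sum games can be made concrete with a minimal sketch. The sketch below is not the paper's model; the games (matching pennies and a simple coordination game), payoffs, and function names are illustrative assumptions. It shows why a strategy satisfying something like Strategic Unpredictability (randomization) is valuable against a rational opponent in a zero-sum game, yet costly in a general-sum game where both players gain from being predictable.

```python
# Illustrative sketch (not the paper's framework): two 2x2 games played
# between a human (row) and a machine (column). Payoff tuples are
# (human payoff, machine payoff).

# Matching pennies: strictly competitive (zero-sum).
matching_pennies = {
    ("H", "H"): (-1, 1), ("H", "T"): (1, -1),
    ("T", "H"): (1, -1), ("T", "T"): (-1, 1),
}

# A coordination game: general-sum, both players prefer to match.
coordination = {
    ("A", "A"): (2, 2), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}

def machine_value(game, machine_mix):
    """Machine's expected payoff when a rational human best-responds
    to the machine's (possibly mixed) strategy."""
    human_moves = sorted({h for h, _ in game})
    machine_moves = sorted({m for _, m in game})
    payoffs = []
    for h in human_moves:
        exp_h = sum(machine_mix[m] * game[(h, m)][0] for m in machine_moves)
        exp_m = sum(machine_mix[m] * game[(h, m)][1] for m in machine_moves)
        payoffs.append((exp_h, exp_m))
    # The rational human picks the move maximizing their own payoff;
    # the machine receives the corresponding expected payoff.
    return max(payoffs, key=lambda p: p[0])[1]

# Zero-sum: randomizing guarantees the game's value, while any
# predictable pure strategy is exploited by the rational human.
print(machine_value(matching_pennies, {"H": 0.5, "T": 0.5}))  # 0.0
print(machine_value(matching_pennies, {"H": 1.0, "T": 0.0}))  # -1.0

# General-sum: here predictability helps, and randomizing lowers
# the machine's payoff -- the tension behind the impossibility result.
print(machine_value(coordination, {"A": 1.0, "B": 0.0}))  # 2.0
print(machine_value(coordination, {"A": 0.5, "B": 0.5}))  # 1.0
```

The contrast between the two games is the point: the same unpredictability that protects the machine in the strictly competitive game is self-defeating once interests are partially aligned.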
Original language: English
Place of Publication: Mathematical Social Sciences
Publisher: Elsevier
Publication status: Accepted/In press - 18 Mar 2024

Keywords

  • Artificial general intelligence
  • Non-cooperative games
  • Superhuman performance
  • Cooperation
