
Palisade Research, 2025

📄 Paper

Alexander Bondarenko, Denis Volk, Dmitrii Volkov, Jeffrey Ladish · 2025-02-18


Abstract

We demonstrate LLM agent specification gaming by instructing models to win against a chess engine. We find that reasoning models like OpenAI o3 and DeepSeek R1 will often hack the benchmark by default, while language models like GPT-4o and Claude 3.5 Sonnet need to be told that normal play won't work before they resort to hacking. We improve upon prior work (Hubinger et al., 2024; Meinke et al., 2024; Weij et al., 2024) by using realistic task prompts and avoiding excess nudging. Our results suggest reasoning models may resort to hacking to solve difficult problems, as observed in OpenAI (2024)'s o1 Docker escape during cyber capabilities testing.


โ† Back to Resources