Manifund foxManifund
Home
Login
About
People
Categories
Newsletter
HomeAboutPeopleCategoriesLoginCreate
1

Demonstration of LLMs deceiving and getting out of a sandbox

Technical AI safety
Reese avatar

Igor Ivanov

ActiveGrant
$3,000raised
$4,000funding goal

Donate

Sign in to donate

Project summary

This project will systematically evaluate deceptive behavior in frontier LLMs by presenting them with an impossible quiz while explicitly prohibiting cheating and implementing sandboxing and surveillance. Models must choose between following safety instructions or achieving their goal through deception. This creates an evaluation of AI deception under explicit prohibition and reveals concerning capabilities for multi-agent coordination in circumventing safety measures. The project is mostly finished, and the results are promising: some LLMs do blatantly cheat despite restrictions and explicit instructions not to cheat.

What are this project's goals and how will you achieve them?

This project aims to quantify how often LLMs do violate explicit safety instructions and attempt to get out of a sandbox when goal achievement requires it. By analyzing cheating rates, strategies employed, I will provide empirical and salient evidence about misalignment risks of frontier models by publishing the research as a paper and a LessWrong post.

How will this funding be used?

The funds will be used for compute and as a salary for me,

Who is on your team and what's your track record on similar projects?

Only me. I've made a number of evals research, and my benchmark BioLP-bench won CAIS SafeBench competition for safety benchmarks and is widely adopted by the industry.

What are the most likely causes and outcomes if this project fails? (premortem)

The project is mostly done, and the results are promising, so I don't see how it might fail.

What other funding are you or your project getting?

None

Comments1Donations1Similar7
donated $3,000
mariushobbhahn avatar

Marius Hobbhahn

9 days ago

I think realistic demos of autonomous worrying behavior are quite important and there aren't many good examples yet.

I'm not sure whether this particular project will meet my bar for realism, but I think it's worth giving it a try. I have not vetted the project in detail. Impossible tasks as a source of deception have been explored multiple times in the literature. I think the main addition from a project like this would come from making highly realistic agentic scenarios.