Anthropic unveils ‘auditing agents’ to test for AI misalignment

Bybit
Anthropic unveils 'auditing agents' to test for AI misalignment
Coinmama


Want smarter insights in your inbox? Sign up for our weekly newsletters to get only what matters to enterprise AI, data, and security leaders. Subscribe Now

When models attempt to get their way or become overly accommodating to the user, it can mean trouble for enterprises. That is why it’s essential that, in addition to performance evaluations, organizations conduct alignment testing.

However, alignment audits often present two major challenges: scalability and validation. Alignment testing requires a significant amount of time for human researchers, and it’s challenging to ensure that the audit has caught everything. 

In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers stated that these agents, created during the pre-deployment testing of Claude Opus 4, enhanced alignment validation tests and enabled researchers to conduct multiple parallel audits at scale. Anthropic also released a replication of its audit agents on GitHub. 

Tokenmetrics

“We introduce three agents that autonomously complete alignment auditing tasks. We also introduce three environments that formalize alignment auditing workflows as auditing games, and use them to evaluate our agents,” the researcher said in the paper. 

The AI Impact Series Returns to San Francisco – August 5

The next phase of AI is here – are you ready? Join leaders from Block, GSK, and SAP for an exclusive look at how autonomous agents are reshaping enterprise workflows – from real-time decision-making to end-to-end automation.

Secure your spot now – space is limited: https://bit.ly/3GuuPLF

The three agents they explored were:

Tool-using investigator agent for open-ended investigation of models using chat, data analysis and interpretability tools

Evaluation agent that builds behavioral evaluations and can discriminate between models that have implanted behaviors and those that do not

Breadth-first red-teaming agent, which was developed specifically for the Claude 4 alignment assessment, so that it can discover implanted test behaviors

“Overall, our agents demonstrate promise across multiple alignment auditing tasks. Additionally, our approach to validating agents with auditing games yields insights into agent capabilities, limitations, and key affordances. With further work, automated auditing could significantly help scale human oversight over AI systems,” the paper said.

Auditing agents in action

Anthropic said the first environment it developed tests an agent’s ability to complete an alignment audit for an intentionally misaligned model. For this test, the researchers brought the investigator agent to the environment and equipped it with similar tools that human teams use. The researchers instructed the agent to identify the root cause of the misalignment and determine the specific behaviors that were misaligned. 

According to the researchers, the investigator agent successfully found the root cause of the issues 10-13% of the time. Still, this performance improved to 42% “when using a super-agent approach that aggregates findings across multiple investigations.”

The next step in most alignment audits is typically a behavioral evaluation, which determines whether the model consistently misbehaves in various settings. To test its evaluation agent, the researchers designed 10 models “given system prompt instructions to exhibit a specific behavior, such as excessive deference to the user.”

They ran the agent five times per model and saw that the agent correctly finds and flags at least one quirk of the model. However, the agent sometimes failed to identify specific quirks consistently. It had trouble evaluating subtle quirks, such as self-promotion and research-sandbagging, as well as quirks that are difficult to elicit, like the Hardcode Test Cases quirk.

The last test and agent concern behavioral red-teaming to find the prompts that elicit “concerning” behaviors. The breadth-first red-teaming agent converses with the target model (in Anthropic’s case, it was with Claude Opus 4), and this chat is then rated for alignment-relevant properties. 

The agent identified seven of the ten system quirks, but it also struggled with the same issues as the evaluator agent.

Alignment and sycophany problems

Alignment became a significant topic in the AI world after users noticed that ChatGPT was becoming overly agreeable. OpenAI rolled back some updates to GPT-4o to address this issue, but it showed that language models and agents can confidently give wrong answers if they decide this is what users want to hear. 

To combat this, other methods and benchmarks were developed to curb unwanted behaviors. The Elephant benchmark, developed by researchers from Carnegie Mellon University, the University of Oxford, and Stanford University, aims to measure sycophancy. DarkBench categorizes six issues, such as brand bias, user retention, sycophancy, anthromorphism, harmful content generation, and sneaking. OpenAI also has a method where AI models test themselves for alignment. 

Alignment auditing and evaluation continue to evolve, though it is not surprising that some people are not comfortable with it. 

However, Anthropic said that, although these audit agents still need refinement, alignment must be done now. 

“As AI systems become more powerful, we need scalable ways to assess their alignment. Human alignment audits take time and are hard to validate,” the company said in an X post. 





Source link

Changelly

Be the first to comment

Leave a Reply

Your email address will not be published.


*