AI agent benchmarks are misleading, study warns

AI agents have become a promising new research direction with potential applications in the real world. These agents use foundation models such as large language models (LLMs) and vision language models (VLMs) to take natural language instructions and pursue complex goals autonomously or semi-autonomously. AI agents can use various tools such as browsers, search engines and code compilers to verify their actions and reason about their goals.

However, a recent analysis by researchers at Princeton University has revealed several shortcomings in current agent benchmarks and evaluation practices that hinder their usefulness in real-world applications.

Their findings highlight that agent benchmarking comes with distinct challenges, and we can't evaluate agents in the same way that we benchmark foundation models.

Cost vs accuracy trade-off

One major issue the researchers highlight in their study is the lack of cost control in agent evaluations. AI agents can be much more expensive to run than a single model call, as they often rely on stochastic language models that can produce different results when given the same query multiple times.

To increase accuracy, some agentic systems generate multiple responses and use mechanisms like voting or external verification tools to choose the best answer. Sometimes sampling hundreds or thousands of responses can increase the agent's accuracy. While this approach can improve performance, it comes at a significant computational cost. Inference costs are not always a problem in research settings, where the goal is to maximize accuracy.
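
To make the cost implication concrete, here is a minimal sketch of that sample-and-vote pattern; `call_model` is a hypothetical placeholder for whatever stochastic model call the agent uses, and the point is simply that every extra sample adds to inference cost:

```python
# A minimal sketch of the retry-and-vote pattern, assuming a hypothetical
# `call_model` function wrapping a stochastic LLM call (temperature > 0).
from collections import Counter

def call_model(prompt: str) -> str:
    """Placeholder for a single stochastic model call."""
    raise NotImplementedError

def answer_with_voting(prompt: str, num_samples: int = 10) -> tuple[str, int]:
    """Sample the model several times and return the most common answer,
    along with the number of calls made so cost can be reported too."""
    samples = [call_model(prompt) for _ in range(num_samples)]
    answer, _ = Counter(samples).most_common(1)[0]
    return answer, num_samples
```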

However, in practical applications, there is a limit to the budget available for each query, making it essential for agent evaluations to be cost-controlled. Failing to do so could encourage researchers to develop extremely costly agents simply to top the leaderboard. The Princeton researchers propose visualizing evaluation results as a Pareto curve of accuracy and inference cost and using techniques that jointly optimize the agent for these two metrics.
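
Such a Pareto view is straightforward to compute from per-agent evaluation results. The sketch below uses invented agent names and numbers purely to show the idea:

```python
# A minimal sketch of the accuracy-cost view: given per-agent results, keep
# only designs that no other agent beats on both cost and accuracy.
# The agent names and numbers are invented for illustration.

def pareto_frontier(results: list[dict]) -> list[dict]:
    """Return the agents that are not dominated on (cost, accuracy)."""
    frontier = []
    for a in results:
        dominated = any(
            b["cost"] <= a["cost"]
            and b["accuracy"] >= a["accuracy"]
            and (b["cost"] < a["cost"] or b["accuracy"] > a["accuracy"])
            for b in results
        )
        if not dominated:
            frontier.append(a)
    return sorted(frontier, key=lambda r: r["cost"])

results = [
    {"agent": "single call",  "cost": 0.01, "accuracy": 0.62},
    {"agent": "vote of 10",   "cost": 0.10, "accuracy": 0.66},
    {"agent": "vote of 1000", "cost": 10.0, "accuracy": 0.67},
]
# All three survive here; a pricier but less accurate agent would be dropped.
print(pareto_frontier(results))
```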

The researchers evaluated the accuracy-cost tradeoffs of different prompting techniques and agentic patterns introduced in different papers.

“For substantially similar accuracy, the cost can differ by almost two orders of magnitude,” the researchers write. “Yet, the cost of running these agents isn’t a top-line metric reported in any of these papers.”

The researchers argue that optimizing for both metrics can lead to “agents that cost less while maintaining accuracy.” Joint optimization can also enable researchers and developers to trade off the fixed and variable costs of running an agent. For example, they can spend more on optimizing the agent’s design but reduce the variable cost by using fewer in-context learning examples in the agent’s prompt.

The researchers tested joint optimization on HotpotQA, a popular question-answering benchmark. Their results show that the joint optimization formulation provides a way to strike an optimal balance between accuracy and inference costs.

“Useful agent evaluations must control for cost, even if we ultimately don’t care about cost and only about identifying innovative agent designs,” the researchers write. “Accuracy alone cannot identify progress because it can be improved by scientifically meaningless methods such as retrying.”

Model development vs downstream applications

Another issue the researchers highlight is the difference between evaluating models for research purposes and developing downstream applications. In research, accuracy is often the primary focus, with inference costs being largely ignored. However, when developing real-world applications on AI agents, inference costs play a crucial role in deciding which model and technique to use.

Evaluating inference costs for AI agents is challenging. For example, different model providers can charge different amounts for the same model. Meanwhile, the costs of API calls change regularly and might vary based on developers’ decisions. For example, on some platforms, bulk API calls are charged differently.

To address this issue, the researchers created a website that adjusts model comparisons based on token pricing.
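
The underlying adjustment is simple to reproduce. A sketch like the one below, with made-up prices and token counts rather than any provider's actual rates, recomputes dollar costs from token counts whenever prices change:

```python
# A minimal sketch of recomputing an agent's dollar cost under whatever token
# prices apply today. Prices and token counts here are illustrative assumptions.

PRICES_PER_1M_TOKENS = {            # (input price, output price) in dollars
    "model-a": (0.50, 1.50),
    "model-b": (5.00, 15.00),
}

def run_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one benchmark run at current per-token prices."""
    in_price, out_price = PRICES_PER_1M_TOKENS[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Storing token counts rather than dollar figures lets comparisons be
# re-ranked whenever prices change.
print(run_cost("model-a", input_tokens=120_000, output_tokens=4_000))
```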

They also ran a case study on NovelQA, a benchmark for question-answering tasks on very long texts. They found that benchmarks meant for model evaluation can be misleading when used for downstream evaluation. For example, the original NovelQA study makes retrieval-augmented generation (RAG) look much worse relative to long-context models than it actually is in a real-world scenario. Their findings show that RAG and long-context models were roughly equally accurate, while long-context models are 20 times more expensive.

Overfitting is a problem

In learning new tasks, machine learning (ML) models often find shortcuts that allow them to score well on benchmarks. One prominent type of shortcut is “overfitting,” where the model finds ways to cheat on the benchmark tests and produces results that don’t translate to the real world. The researchers found that overfitting is a serious problem for agent benchmarks, as they tend to be small, typically consisting of only a few hundred samples. This issue is more severe than data contamination in training foundation models, as knowledge of test samples can be directly programmed into the agent.

To deal with this problem, the researchers recommend that benchmark developers create and keep holdout test sets composed of examples that can’t be memorized during training and can only be solved through a proper understanding of the target task. In their analysis of 17 benchmarks, the researchers found that many lacked proper holdout datasets, allowing agents to take shortcuts, even unintentionally.
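
As a rough illustration of what creating a holdout involves, a benchmark maintainer might do something like the sketch below. The file names and split fraction are assumptions, and a simple random split like this only guards against memorizing specific examples; as the researchers note, the right kind of holdout depends on how general the task is meant to be.

```python
# A minimal sketch of carving a private holdout split out of a benchmark.
# File names and the split fraction are illustrative assumptions; the key
# point is that the holdout examples are never published.
import json
import random

def split_benchmark(tasks: list[dict], holdout_fraction: float = 0.3, seed: int = 0):
    rng = random.Random(seed)
    shuffled = tasks[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * holdout_fraction)
    return shuffled[cut:], shuffled[:cut]  # (public, holdout)

with open("benchmark_tasks.json") as f:
    tasks = json.load(f)

public, holdout = split_benchmark(tasks)
with open("public_tasks.json", "w") as f:
    json.dump(public, f, indent=2)
with open("holdout_tasks.json", "w") as f:   # kept private by the maintainers
    json.dump(holdout, f, indent=2)
```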

“Surprisingly, we find that many agent benchmarks do not include held-out test sets,” the researchers write. “In addition to creating a test set, benchmark developers should consider keeping it secret to prevent LLM contamination or agent overfitting.”

They also note that different types of holdout samples are needed depending on the desired level of generality of the task that the agent accomplishes.

“Benchmark developers must do their best to ensure that shortcuts are impossible,” the researchers write. “We view this as the responsibility of benchmark developers rather than agent developers, because designing benchmarks that don’t allow shortcuts is much easier than checking every single agent to see if it takes shortcuts.”

The researchers examined WebArena, a benchmark that evaluates the performance of AI agents in solving problems on different websites. They found several shortcuts in the training datasets that allowed the agents to overfit to tasks in ways that would easily break with minor changes in the real world. For example, an agent could make assumptions about the structure of web addresses without considering that they might change in the future or that they would not work on different websites.
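
A shortcut of this kind might look like the hypothetical example below; the site and URL scheme are invented, not drawn from WebArena itself:

```python
# A hypothetical illustration of a URL-structure shortcut and a less brittle
# alternative. The site name and route are invented for illustration.

def open_order_page_brittle(order_id: str) -> str:
    # Hard-codes one site's current URL layout; the task "succeeds" on the
    # benchmark but breaks if the route changes or the site differs.
    return f"https://example-shop.com/admin/orders/{order_id}"

def open_order_page_robust(links: list[str], order_id: str) -> str | None:
    # Discovers the link from the page the agent is actually looking at,
    # instead of assuming the address structure.
    for href in links:
        if order_id in href:
            return href
    return None
```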

These errors inflate accuracy estimates and result in over-optimism about agent capabilities, the researchers warn.

With AI agents being a new field, the research and developer communities still have much to learn about how to test the limits of these new systems, which may soon become an important part of everyday applications.

“AI agent benchmarking is new and best practices haven’t yet been established, making it hard to distinguish genuine advances from hype,” the researchers write. “Our thesis is that agents are sufficiently different from models that benchmarking practices need to be rethought.”

