GAIA Evaluation

This folder contains the evaluation harness for evaluating agents on the GAIA benchmark.

Configure OpenDevin and your LLM

Create a config.toml file at the root of the workspace if it does not already exist. Please check README.md for how to set this up.
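
As a minimal sketch (the table name and keys below are assumptions about the config layout, not the authoritative format; check README.md), an LLM config group for the examples in this guide could be created like this:

# Minimal sketch of an LLM config group; verify the keys against README.md.
cat > config.toml <<'EOF'
[eval_gpt4_1106_preview]       # config group name passed to run_infer.sh
model = "gpt-4-1106-preview"   # model identifier
api_key = "sk-..."             # your API key
temperature = 0.0
EOF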

Run the evaluation

We are using the GAIA dataset hosted on Hugging Face. Please accept the dataset's terms of use and make sure you have logged in on your machine via huggingface-cli login before running the evaluation.
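
For example (the pip install is only needed if the Hugging Face CLI is not already available):

pip install -U huggingface_hub   # provides the huggingface-cli tool
huggingface-cli login            # paste a token for an account that has accepted the GAIA terms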

Following is the basic command to start the evaluation. Here we evaluate on the validation set of the 2023_all split. You can adjust ./evaluation/gaia/scripts/run_infer.sh to change the subset you want to evaluate on.

./evaluation/gaia/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit] [gaia_subset]
# e.g., ./evaluation/gaia/scripts/run_infer.sh eval_gpt4_1106_preview 0.6.2 CodeActAgent 300

where model_config is mandatory, while git-version, agent, eval_limit, and gaia_subset are optional.

  • model_config, e.g. eval_gpt4_1106_preview, is the config group name for your LLM settings, as defined in your config.toml.

  • git-version, e.g. HEAD, is the git commit hash of the OpenDevin version you would like to evaluate. It could also be a release tag like 0.6.2.

  • agent, e.g. CodeActAgent, is the name of the agent for benchmarks, defaulting to CodeActAgent.

  • eval_limit, e.g. 10, limits the evaluation to the first eval_limit instances, defaulting to all instances.

  • gaia_subset, e.g. 2023_level1, selects which GAIA subset to evaluate: 2023_level1, 2023_level2, 2023_level3, or 2023_all, defaulting to 2023_level1.

For example, the following evaluates CodeActAgent on the first 10 instances using the 0.6.2 release:

./evaluation/gaia/scripts/run_infer.sh eval_gpt4_1106_preview 0.6.2 CodeActAgent 10

Get score

Then you can get stats by running the following command:

python ./evaluation/gaia/get_score.py \
--file <path_to/output.json>
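
For instance (the output path below is hypothetical; point --file at wherever your run actually wrote its output.json):

python ./evaluation/gaia/get_score.py \
    --file ./evaluation/evaluation_outputs/gaia/output.json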