Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To keep the evaluation folder organized, please follow the rules below:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing, evaluation, and analysis scripts for that benchmark (see the example layout after this list).
  • Raw data and experimental records should not be stored in this repo.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
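
For illustration, a benchmark subfolder might look like the sketch below. The individual file names are hypothetical; only the evaluation/swe_bench path itself comes from this repo:

    evaluation/
      swe_bench/
        scripts/        # hypothetical: preprocessing and setup scripts
        run_infer.py    # hypothetical: entry point that runs an agent on the benchmark
        README.md       # benchmark-specific setup and usage instructions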

Supported Benchmarks

  • SWE-Bench: evaluation/swe_bench
  • HumanEvalFix: evaluation/humanevalfix
  • BIRD: evaluation/bird
  • BioCoder: evaluation/biocoder
  • ML-Bench: evaluation/ml_bench
  • Gorilla APIBench: evaluation/gorilla
  • ToolQA: evaluation/toolqa
  • GPQA: evaluation/gpqa
  • GAIA: evaluation/gaia
  • AgentBench: evaluation/agent_bench
  • MINT: evaluation/mint
  • MiniWoB: evaluation/miniwob
  • WebArena: evaluation/webarena
  • EDA: evaluation/EDA
  • Logic Reasoning: evaluation/logic_reasoning

To add a new benchmark, follow the step-by-step guide in TUTORIAL.md.

Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

Upload Your Results

You can fork our Hugging Face evaluation outputs repository and submit your evaluation results to our hosted Hugging Face repo via a PR, following the guide here.
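
If you prefer to open the pull request programmatically instead of through the web UI, the sketch below uses the huggingface_hub library. The repo ID, repo type, and local folder paths are assumptions for illustration; substitute the values from the linked guide.

    # Minimal sketch: submit evaluation results as a Hugging Face pull request.
    # Assumes huggingface_hub is installed (pip install huggingface_hub) and you
    # are authenticated (huggingface-cli login).
    from huggingface_hub import HfApi

    api = HfApi()
    api.upload_folder(
        folder_path="evaluation/outputs/swe_bench",  # hypothetical local results folder
        path_in_repo="outputs/swe_bench",            # hypothetical destination path in the repo
        repo_id="OpenDevin/evaluation",              # assumption: replace with the actual outputs repo ID
        repo_type="space",                           # assumption: may be a space or dataset repo
        create_pr=True,                              # open a PR instead of pushing to main
        commit_message="Add SWE-Bench evaluation results",
    )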