# Evaluation

This folder contains code and resources to run experiments and evaluations.

## Logistics

To better organize the evaluation folder, we should follow the rules below:

- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all the preprocessing/evaluation/analysis scripts (see the layout sketch after this list).
- Raw data and experimental records should not be stored within this repo.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
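For instance, a benchmark subfolder might be organized like this (a hypothetical layout for illustration; actual file names vary per benchmark):

```text
evaluation/
└── swe_bench/
    ├── README.md      # setup and usage instructions for this benchmark
    ├── run_infer.py   # runs the agent over the benchmark tasks
    ├── eval_infer.py  # scores the generated outputs
    └── scripts/       # preprocessing and analysis helpers
```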

## Supported Benchmarks

Each benchmark lives in its own subfolder under `evaluation/`:

- SWE-Bench (`swe_bench`)
- WebArena (`webarena`)
- MiniWoB++ (`miniwob`)
- MINT (`mint`)
- GAIA (`gaia`)
- ML-Bench (`ml_bench`)
- AgentBench (`agent_bench`)
- ToolQA (`toolqa`)
- BIRD (`bird`)
- HumanEvalFix (`humanevalfix`)
- Logic reasoning (`logic_reasoning`)
- EDA (`EDA`)

## Result Visualization

Check this Hugging Face space for visualization of existing experimental results.

## Upload your results

You can fork our Hugging Face evaluation outputs repo and submit your evaluation results to the hosted repo as a PR, following the guide here.
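If you prefer to script the upload, here is a minimal sketch using the `huggingface_hub` Python client. The repo id, repo type, and paths below are placeholders for illustration; use the actual target repo from the guide linked above.

```python
# Minimal sketch: propose evaluation results as a pull request on the
# Hugging Face Hub. Requires `huggingface-cli login` (or pass token=...).
from huggingface_hub import upload_folder

upload_folder(
    repo_id="OpenDevin/evaluation",          # placeholder repo id
    repo_type="space",                       # assumption: adjust if it's a dataset repo
    folder_path="outputs/swe_bench/my_run",  # your local results directory
    path_in_repo="outputs/swe_bench/my_run",
    commit_message="Add SWE-Bench evaluation results",
    create_pr=True,                          # open a PR instead of pushing directly
)
```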