# Evaluation

This folder contains code and resources to run experiments and evaluations.

## Logistics

To keep the evaluation folder organized, please follow the rules below:

- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all of the preprocessing, evaluation, and analysis scripts for that benchmark (see the illustrative layout below).
- Raw data and experimental records should not be stored in this repo.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
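For illustration, a benchmark subfolder might be laid out as in the sketch below. All file names here are hypothetical; each benchmark organizes its own scripts:

```
evaluation/
└── swe_bench/              # one subfolder per benchmark
    ├── README.md           # how to run this benchmark (hypothetical)
    ├── run_infer.py        # preprocessing / inference script (hypothetical)
    ├── eval.py             # evaluation script (hypothetical)
    └── analysis.ipynb      # analysis notebook (hypothetical)
```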

## Supported Benchmarks

Each supported benchmark lives in its own subfolder of this directory, including `agent_bench`, `biocoder`, `bird`, `browsing_delegation`, `EDA`, `gaia`, `gorilla`, `gpqa`, `humanevalfix`, `logic_reasoning`, `miniwob`, `mint`, `ml_bench`, `swe_bench`, `toolqa`, and `webarena`. To add a new benchmark, see `TUTORIAL.md`.

## Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

## Upload your results

You can fork our Hugging Face evaluation outputs repository and submit your evaluation results to our hosted Hugging Face repo via a pull request, following the guide here.
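As a minimal sketch, such a pull request could also be opened programmatically with the `huggingface_hub` client. The `repo_id`, `repo_type`, and local folder path below are placeholders, not the actual target repository; consult the guide above for the real destination:

```python
# Minimal sketch: open a PR against a Hugging Face repo containing your
# evaluation outputs. repo_id / repo_type / paths are placeholders;
# consult the upload guide for the actual target repository.
from huggingface_hub import HfApi

api = HfApi()  # uses the token cached by `huggingface-cli login`

api.upload_folder(
    folder_path="outputs/my_benchmark_run",   # local results dir (placeholder)
    path_in_repo="outputs/my_benchmark_run",  # destination path (placeholder)
    repo_id="your-username/evaluation",       # placeholder repo id
    repo_type="dataset",                      # placeholder repo type
    commit_message="Add evaluation results",
    create_pr=True,  # open a pull request instead of pushing to main
)
```

Passing `create_pr=True` makes `upload_folder` create a pull request on the Hub rather than committing directly, which matches the PR-based submission flow described above.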