# Evaluation

This folder contains code and resources to run experiments and evaluations.

## Logistics

To keep the evaluation folder organized, please follow the rules below:

- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all of that benchmark's preprocessing, evaluation, and analysis scripts (see the layout sketch after this list).
- Raw data and experimental records should not be stored within this repo.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
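
As a rough sketch of the expected layout (the concrete file names below are illustrative, not prescribed; each benchmark defines its own scripts):

```
evaluation/
├── swe_bench/
│   ├── README.md        # benchmark-specific instructions
│   ├── run_infer.py     # illustrative: inference/evaluation entry point
│   └── scripts/         # illustrative: preprocessing and analysis helpers
├── gaia/
└── ...
```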

## Supported Benchmarks

- AgentBench (`agent_bench`)
- BIRD (`bird`)
- GAIA (`gaia`)
- HumanEvalFix (`humanevalfix`)
- LogicReasoning (`logic_reasoning`)
- MINT (`mint`)
- SWE-Bench (`swe_bench`)

## Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

## Upload your results

You can fork our Hugging Face evaluation-outputs repo and submit your evaluation results as a PR against the hosted repo, following the guide here.
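
If you prefer to script the upload, a minimal sketch using the `huggingface_hub` library is shown below. The `repo_id` and folder paths are placeholders (the actual repo id is the hosted evaluation-outputs repo linked above), and `repo_type="dataset"` is an assumption about how that repo is hosted:

```python
# Minimal sketch: submit evaluation results as a PR to the outputs repo.
# NOTE: "your-org/evaluation-outputs" is a placeholder -- substitute the
# actual repo id of the hosted Hugging Face evaluation-outputs repo.
from huggingface_hub import HfApi

api = HfApi()  # uses your cached token from `huggingface-cli login`
api.upload_folder(
    folder_path="outputs/swe_bench",        # local folder with your results
    path_in_repo="outputs/swe_bench",       # destination path in the repo
    repo_id="your-org/evaluation-outputs",  # placeholder repo id
    repo_type="dataset",                    # assumption: outputs repo is a dataset
    create_pr=True,                         # open a PR instead of pushing directly
    commit_message="Add SWE-Bench evaluation results",
)
```

Passing `create_pr=True` opens a pull request on the target repo rather than pushing to it directly, which matches the PR-based submission workflow described above.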