Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To keep the evaluation folder organized, follow the rules below:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing, evaluation, and analysis scripts for SWE-Bench (see the layout sketch after this list).
  • Raw data and experimental records should not be stored within this repo.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
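For illustration, a benchmark subfolder might be laid out as follows. This is a hypothetical sketch: the files shown inside swe_bench are examples of the kind of contents the rules above describe, not the repo's actual file list.

```
evaluation/
├── swe_bench/            # everything for one benchmark
│   ├── scripts/          # example: preprocessing and evaluation scripts
│   ├── analysis.ipynb    # example: analysis notebook of manageable size
│   └── README.md         # benchmark-specific instructions
└── utils/                # helpers shared across benchmarks
```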

Supported Benchmarks

Each supported benchmark has its own subfolder under evaluation/:

  • EDA
  • agent_bench
  • biocoder
  • bird
  • gaia
  • gorilla
  • gpqa
  • humanevalfix
  • logic_reasoning
  • miniwob
  • mint
  • ml_bench
  • swe_bench
  • toolqa
  • webarena

Result Visualization

Check this Hugging Face space to visualize existing experimental results.

Upload your results

You can fork our Hugging Face evaluation outputs repo and submit your evaluation results to our hosted Hugging Face repo via a PR, following the guide here.
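If you prefer to script the submission, the sketch below uses the huggingface_hub library to open such a PR. The repo id, repo type, and paths are placeholder assumptions, not the actual hosted repo; substitute the values from the guide linked above.

```python
# Minimal sketch: submit evaluation results as a Hugging Face PR.
# Assumes you have authenticated via `huggingface-cli login`.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="evaluation/outputs/swe_bench",  # hypothetical: your local results dir
    path_in_repo="outputs/swe_bench",            # hypothetical: destination path in the repo
    repo_id="<org>/evaluation-outputs",          # placeholder: the hosted outputs repo
    repo_type="dataset",                         # assumption: outputs live in a dataset repo
    create_pr=True,                              # open a pull request instead of pushing
)
```

Passing create_pr=True opens a pull request against the hosted repo rather than pushing directly, which matches the PR-based submission flow described above.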