Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To keep the evaluation folder organized, please follow the rules below:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing/evaluation/analysis scripts.
  • Raw data and experimental records should not be stored within this repo.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo; see the sketch after this list for an example of such an analysis script.
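
As a rough illustration of the kind of analysis script a benchmark subfolder might contain, here is a minimal Python sketch that summarizes a JSONL results file. The file path and the `test_result.success` field are assumptions for illustration, not the actual output format of every benchmark.

```python
import json
from pathlib import Path


def summarize(output_file: Path) -> None:
    """Print a simple success count from a JSONL file of per-instance results."""
    lines = [line for line in output_file.read_text().splitlines() if line.strip()]
    results = [json.loads(line) for line in lines]
    # "test_result.success" is a hypothetical field name -- adjust to the benchmark's schema.
    resolved = sum(1 for r in results if r.get("test_result", {}).get("success"))
    print(f"{output_file}: {resolved}/{len(results)} instances resolved")


if __name__ == "__main__":
    # Hypothetical path -- point this at the benchmark's actual output file.
    summarize(Path("evaluation/swe_bench/outputs/output.jsonl"))
```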

Supported Benchmarks

Each subfolder in this directory corresponds to a supported benchmark, including agent_bench, biocoder, bird, EDA, gaia, gorilla, gpqa, humanevalfix, logic_reasoning, miniwob, mint, ml_bench, swe_bench, toolqa, and webarena.

Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

Upload your results

You can fork our Hugging Face evaluation outputs repo and submit your evaluation results as a PR against our hosted Hugging Face repo, following the guide here.
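
If you prefer a programmatic route, the `huggingface_hub` client can open such a PR directly. This is only a sketch: the repo id and folder paths below are placeholders, not the actual evaluation outputs repo, so follow the linked guide for the authoritative process.

```python
from huggingface_hub import HfApi

# Requires authentication, e.g. `huggingface-cli login` or an HF_TOKEN environment variable.
api = HfApi()

api.upload_folder(
    folder_path="evaluation/swe_bench/my_run",  # local results to upload (placeholder)
    path_in_repo="outputs/swe_bench/my_run",    # destination path in the repo (placeholder)
    repo_id="your-org/evaluation-outputs",      # placeholder repo id
    repo_type="dataset",                        # assumption; adjust to the actual repo type
    commit_message="Add SWE-Bench evaluation results",
    create_pr=True,                             # open a pull request instead of pushing to main
)
```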