Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To keep the evaluation folder organized, please follow these rules:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all of its preprocessing, evaluation, and analysis scripts (see the example layout below).
  • Raw data and experimental records should not be stored in this repo.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
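
As a concrete illustration, a benchmark subfolder might look like the layout below. The file and folder names are hypothetical, chosen only to show the intended separation of preprocessing, inference, and analysis; they are not a required structure.

    evaluation/
      swe_bench/
        README.md        (how to set up and run this benchmark)
        run_infer.py     (inference / experiment driver)
        scripts/         (preprocessing and evaluation helpers)
        notebooks/       (analysis notebooks of manageable size)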

Supported Benchmarks

Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

Upload your results

You can fork our Hugging Face evaluation outputs repository and submit your evaluation results as a PR to our hosted Hugging Face repo, following the guide here.
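
If you prefer to push results programmatically rather than through a manual fork-and-PR workflow, the following is a minimal sketch using the huggingface_hub Python client. The repo ID and paths are placeholders, since the actual outputs repo is the one linked in the guide above; adjust them to match your run.

```python
# Minimal sketch: upload local evaluation outputs and open a PR on the hosted
# Hugging Face dataset repo. Assumes you are logged in (`huggingface-cli login`).
# All paths and the repo_id below are placeholders, not the official values.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="evaluation/evaluation_outputs/my_run",  # local results folder (placeholder)
    path_in_repo="outputs/my_agent/my_run",              # destination path in the repo (placeholder)
    repo_id="<org>/<evaluation-outputs-repo>",           # placeholder: hosted repo from the guide
    repo_type="dataset",
    create_pr=True,  # open a pull request instead of committing directly to main
)
```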