Xingyao Wang 01ae54a69d fix swebench repo/version being string (#4241) 1 year ago
EDA 152f99c64f Chore Bump python version (#3545) 1 year ago
agent_bench 152f99c64f Chore Bump python version (#3545) 1 year ago
aider_bench 80a631361b eval: update aiderbench readme (#4209) 1 year ago
biocoder 090c911a50 (refactor) Make `Runtime` class synchronous (#3661) 1 year ago
bird 152f99c64f Chore Bump python version (#3545) 1 year ago
browsing_delegation 152f99c64f Chore Bump python version (#3545) 1 year ago
gaia 152f99c64f Chore Bump python version (#3545) 1 year ago
gorilla 152f99c64f Chore Bump python version (#3545) 1 year ago
gpqa 152f99c64f Chore Bump python version (#3545) 1 year ago
humanevalfix 152f99c64f Chore Bump python version (#3545) 1 year ago
logic_reasoning 152f99c64f Chore Bump python version (#3545) 1 year ago
miniwob 152f99c64f Chore Bump python version (#3545) 1 year ago
mint 152f99c64f Chore Bump python version (#3545) 1 year ago
ml_bench 090c911a50 (refactor) Make `Runtime` class synchronous (#3661) 1 year ago
regression 8fdfece059 Refactor messages serialization (#3832) 1 year ago
static b2fdb963b6 Add detailed tutorial for adding new evaluation benchmarks (#1827) 1 year ago
swe_bench 01ae54a69d fix swebench repo/version being string (#4241) 1 year ago
toolqa 152f99c64f Chore Bump python version (#3545) 1 year ago
utils 9cc9b19958 eval: improve swebench infer error handling and retry (#4205) 1 year ago
webarena 152f99c64f Chore Bump python version (#3545) 1 year ago
README.md 797f02ff6f rename huggingface evaluation benchmark (#3845) 1 year ago
__init__.py 2406b901df feat(SWE-Bench environment) integrate SWE-Bench sandbox (#1468) 1 year ago

README.md

Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To better organize the evaluation folder, we should follow the rules below:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing, evaluation, and analysis scripts for that benchmark.
  • Raw data and experimental records should not be stored within this repo.
  • Model outputs should be stored in this Hugging Face space for visualization.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.

Supported Benchmarks

To learn more about how to integrate your benchmark into OpenHands, check out the tutorial here.

Software Engineering

Web Browsing

Misc. Assistance

Before everything begins: Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

In development mode, OpenHands uses config.toml to keep track of most configuration options.

Here's an example configuration file you can use to define and use multiple LLMs:

[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

# Named LLM configs: each [llm.<name>] section defines an additional model
# that evaluation scripts can select by its config name.
[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

# Any OpenAI-compatible endpoint can be used by setting base_url.
[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
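
Most benchmark folders ship a run_infer.sh entry point that takes the name of an [llm.<name>] section as its first argument. The invocation below is only a sketch based on the SWE-Bench scripts: the remaining positional arguments (git version, agent name, instance limit) are assumptions and may differ per benchmark.

./evaluation/swe_bench/scripts/run_infer.sh llm.eval_gpt4_1106_preview_llm HEAD CodeActAgent 300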

Result Visualization

Check this Hugging Face space for visualizations of existing experimental results.

Upload your results

You can fork our Hugging Face evaluation outputs repo and submit your evaluation results as a PR to the hosted Hugging Face repo, following the guide here.
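
As a rough sketch of that workflow, assuming you use the huggingface_hub CLI (the repo id, local path, and repo type below are placeholders, not the actual target repo):

# Hypothetical repo id and local path; adjust --repo-type (dataset or space)
# to match the target repo. --create-pr opens a pull request against the
# repo instead of pushing directly to its main branch.
huggingface-cli upload ORG/evaluation-outputs ./my_eval_outputs --repo-type space --create-pr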