Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To keep the evaluation folder organized, please follow the rules below:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing, evaluation, and analysis scripts for that benchmark.
  • Raw data and experimental records should not be stored within this repo.
  • Model outputs should be stored in this huggingface space for visualization.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.

Supported Benchmarks

To learn more about how to integrate your benchmark into OpenHands, check out the tutorial here.

Software Engineering

Web Browsing

Misc. Assistance

Before everything begins: Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

In development mode, OpenHands uses config.toml to keep track of most configuration options.

Here's an example configuration file you can use to define and use multiple LLMs:

[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

# a named configuration group: these values override the base [llm] settings
# whenever this group is selected by name
[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

# the same pattern works for any OpenAI-compatible endpoint
[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
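
Each [llm.<name>] section defines a named configuration group that overlays the base [llm] defaults when selected. As a rough illustration of how such a group resolves (a minimal sketch only, not OpenHands' actual config loader):

import tomllib  # standard library in Python 3.11+

def load_llm_config(path, group=None):
    """Return the base [llm] table, overlaid with [llm.<group>] if given."""
    with open(path, "rb") as f:
        config = tomllib.load(f)
    # scalar keys under [llm] are the defaults; nested tables are named groups
    llm = {k: v for k, v in config["llm"].items() if not isinstance(v, dict)}
    if group is not None:
        llm.update(config["llm"][group])  # group values win over the defaults
    return llm

# load_llm_config("config.toml", "eval_gpt4_1106_preview_llm")
# -> {'model': 'gpt-4-1106-preview', 'api_key': 'XXX', 'temperature': 0.0}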

Result Visualization

Check this huggingface space to visualize existing experimental results.

Upload your results

You can fork our huggingface evaluation outputs and open a PR to submit your evaluation results to our hosted huggingface repo, following the guide here.
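
If you prefer to submit programmatically, the huggingface_hub client can open a PR against a Hugging Face repo directly. This is a hedged sketch only: the repo id, repo type, and paths below are placeholders and assumptions, not the actual values — use the repo linked in the guide above.

from huggingface_hub import upload_folder

upload_folder(
    repo_id="ORG/evaluation-outputs",        # placeholder -- substitute the repo from the guide
    repo_type="space",                       # assumption: outputs live in a space; may be a dataset
    folder_path="outputs/swe_bench/my_run",  # hypothetical local folder containing your results
    path_in_repo="outputs/swe_bench/my_run",
    create_pr=True,                          # submit as a pull request instead of pushing directly
)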