# Evaluation

This folder contains code and resources to run experiments and evaluations.

## Logistics

To better organize the evaluation folder, we should follow the rules below:

- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all the preprocessing/evaluation/analysis scripts for that benchmark.
- Raw data and experimental records should not be stored within this repo.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be directly uploaded to this repo.

## Supported Benchmarks

To learn more about how to integrate your benchmark into OpenHands, check out the tutorial here.

### Software Engineering

- [`swe_bench`](./swe_bench)
- [`humanevalfix`](./humanevalfix)
- [`bird`](./bird)
- [`biocoder`](./biocoder)
- [`ml_bench`](./ml_bench)

### Web Browsing

- [`webarena`](./webarena)
- [`miniwob`](./miniwob)
- [`browsing_delegation`](./browsing_delegation)

### Misc. Assistance

- [`gaia`](./gaia)
- [`gpqa`](./gpqa)
- [`agent_bench`](./agent_bench)
- [`mint`](./mint)
- [`EDA`](./EDA)
- [`logic_reasoning`](./logic_reasoning)
- [`toolqa`](./toolqa)
- [`gorilla`](./gorilla)

## Before everything begins: Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

In development mode, OpenHands uses `config.toml` to keep track of most configuration settings.

Here's an example configuration file you can use to define and use multiple LLMs:

```toml
[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
```
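If you want to sanity-check how this nesting parses, here is a minimal sketch using Python's standard-library TOML parser (an assumption: Python 3.11+ and a `config.toml` like the one above in the working directory). It simply separates the default `[llm]` settings from the named variants:

```python
# Minimal sketch: inspect the default and named LLM configs in config.toml.
# Assumes Python 3.11+ (for the stdlib `tomllib` parser).
import tomllib

with open("config.toml", "rb") as f:
    config = tomllib.load(f)

llm = config["llm"]
# Named configs such as [llm.eval_gpt4_1106_preview_llm] parse as nested
# tables; every other key under [llm] belongs to the default config.
default_cfg = {k: v for k, v in llm.items() if not isinstance(v, dict)}
named_cfgs = {k: v for k, v in llm.items() if isinstance(v, dict)}

print("default model:", default_cfg["model"])
for name, cfg in named_cfgs.items():
    print(f"[llm.{name}] -> model={cfg['model']}, temperature={cfg.get('temperature')}")
```

The per-benchmark run scripts generally select one of these named configs by its section name; check each benchmark's README for the exact invocation.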

## Result Visualization

Check out this Hugging Face space for a visualization of existing experimental results.

## Upload your results

You can fork our Hugging Face evaluation-outputs repo and submit your evaluation results as a PR to the hosted repo, following the guide here.
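If you prefer to script the submission, the sketch below shows one way to open such a PR with the `huggingface_hub` client. The `repo_id`, `repo_type`, and paths are placeholders rather than the real values from the linked guide, so treat this as an illustration of the mechanism, not the official flow:

```python
# Hedged sketch: upload a folder of results to a Hugging Face Hub repo and
# open the change as a pull request. All identifiers below are placeholders.
from huggingface_hub import upload_folder

upload_folder(
    repo_id="YOUR_ORG/evaluation-outputs",        # placeholder: see the guide for the real repo
    repo_type="dataset",                          # assumption: outputs are hosted as a dataset repo
    folder_path="evaluation/evaluation_outputs",  # local folder holding your results
    path_in_repo="outputs/my_experiment",         # hypothetical destination path in the repo
    commit_message="Add evaluation results for <model>",
    create_pr=True,                               # submit the upload as a pull request
)
```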