Evaluation

This folder contains code and resources to run experiments and evaluations.

For Benchmark Users

Setup

Before starting evaluation, follow the instructions here to set up your local development environment and LLM.
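
If you have not set up OpenHands before, the flow looks roughly like the sketch below. This is only an outline assuming a standard development install; the linked instructions are authoritative, and the make targets may change between releases.

```bash
# Hedged sketch of a typical local setup; follow the linked setup instructions
# for the authoritative steps.
git clone https://github.com/All-Hands-AI/OpenHands.git
cd OpenHands
make build          # install backend/frontend dependencies
make setup-config   # interactively generate config.toml with your LLM settings
```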

Once you are done with setup, you can follow the benchmark-specific instructions in each subdirectory of the evaluation directory. Generally these will involve running run_infer.py to perform inference with the agents.
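
As an illustration, most benchmark folders ship a thin shell wrapper around run_infer.py. The script name, argument order, and values differ per benchmark, so treat the following SWE-Bench-style invocation as a sketch rather than a canonical command; the per-benchmark README defines the real interface.

```bash
# Illustrative only: arguments are benchmark-specific. The first argument names
# an [llm.<name>] group from config.toml (see "Evaluating Different LLMs" below).
./evaluation/swe_bench/scripts/run_infer.sh llm.eval_gpt4_1106_preview_llm HEAD CodeActAgent 10
```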

Implementing and Evaluating an Agent

To add an agent to OpenHands, you will need to implement it in the agenthub directory. There is a README there with more information.

To evaluate an agent, you can provide the agent's name to the run_infer.py program.
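
For instance, a direct invocation might select the agent by class name and the LLM by a named config group. The flag names below are assumptions for illustration; check the benchmark's README or the script's --help output for the real interface.

```bash
# Hypothetical flags shown for illustration; verify against the benchmark's docs.
poetry run python evaluation/swe_bench/run_infer.py \
  --agent-cls CodeActAgent \
  --llm-config eval_gpt4_1106_preview_llm \
  --max-iterations 30 \
  --eval-n-limit 10
```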

Evaluating Different LLMs

In development mode, OpenHands uses config.toml to keep track of most configuration settings. Here's an example configuration file you can use to define and use multiple LLMs:

[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
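
The group name after llm. is what you hand to the evaluation scripts, so switching models is a matter of pointing at a different group. A minimal sketch, assuming a --llm-config style flag (some wrapper scripts take the group name as a positional argument instead):

```bash
# Run the same benchmark against the OpenAI-compatible endpoint defined above.
poetry run python evaluation/swe_bench/run_infer.py \
  --llm-config eval_some_openai_compatible_model_llm
```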

Supported Benchmarks

The OpenHands evaluation harness supports a wide variety of benchmarks across software engineering, web browsing, and miscellaneous assistance tasks.

Software Engineering

Web Browsing

Misc. Assistance

Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

You can start your own fork of our Hugging Face evaluation outputs and submit your evaluation results to our hosted Hugging Face repo via a PR, following the guide here.
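
At a high level this is plain git against a Hugging Face dataset repo. The repo name and the expected output layout come from the linked guide, so everything in angle brackets below is a placeholder:

```bash
# Placeholder paths throughout; the linked guide has the authoritative repo
# name and directory layout for submitted results.
git clone https://huggingface.co/datasets/<your-fork-of-evaluation-outputs>
cp -r <your-local-eval-outputs>/<benchmark>/<run-name> \
      <your-fork-of-evaluation-outputs>/outputs/
cd <your-fork-of-evaluation-outputs>
git add . && git commit -m "Add <benchmark> results for <model>" && git push
# Then open a PR from your fork against the hosted evaluation-outputs repo.
```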

For Benchmark Developers

To learn more about how to integrate your benchmark into OpenHands, check out the tutorial here. Briefly:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing/evaluation/analysis scripts (a concrete example layout is sketched after this list).
  • Raw data and experimental records should not be stored within this repo.
  • Model outputs should be stored in this Hugging Face space for visualization.
  • Important data files of manageable size and analysis scripts (e.g., jupyter notebooks) can be directly uploaded to this repo.
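
To make those expectations concrete, a new benchmark folder typically ends up looking something like the following (hypothetical file names; the tutorial linked above is the authoritative reference):

```
evaluation/my_benchmark/
├── README.md          # how to set up, run, and interpret the benchmark
├── run_infer.py       # inference: drives the agent over benchmark instances
├── scripts/
│   ├── run_infer.sh   # thin wrapper wiring up the LLM config and agent
│   └── eval_infer.sh  # scoring/evaluation of the generated outputs
└── analysis.ipynb     # optional analysis notebook of manageable size
```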