
Evaluation

This folder contains code and resources to run experiments and evaluations.

For Benchmark Users

Setup

Before starting evaluation, follow the instructions here to set up your local development environment and LLM.

Once you are done with setup, you can follow the benchmark-specific instructions in each subdirectory of the evaluation directory. Generally, these involve running run_infer.py to perform inference with the agents.
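
For example, a typical invocation might look like the sketch below. The flag names and values are illustrative assumptions rather than the exact interface of every benchmark; check the benchmark's README or the script's --help for the real arguments.

# Sketch (assumed flags): evaluate CodeActAgent on SWE-Bench, limited to 10 instances,
# using the "eval_gpt4_1106_preview_llm" LLM config group from config.toml (see below).
poetry run python evaluation/swe_bench/run_infer.py \
    --agent-cls CodeActAgent \
    --llm-config eval_gpt4_1106_preview_llm \
    --max-iterations 30 \
    --eval-n-limit 10 \
    --eval-num-workers 1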

Implementing and Evaluating an Agent

To add an agent to OpenHands, you will need to implement it in the agenthub directory. There is a README there with more information.

To evaluate an agent, you can provide the agent's name to the run_infer.py program.
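
For instance, switching the evaluated agent is usually just a matter of changing that argument. The flag name and agent names here are assumptions for illustration; the agenthub README lists the agents actually available.

# Assumed flag name; consult the benchmark's README or --help for the real interface.
poetry run python evaluation/swe_bench/run_infer.py --agent-cls CodeActAgent
poetry run python evaluation/swe_bench/run_infer.py --agent-cls BrowsingAgent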

Evaluating Different LLMs

In development mode, OpenHands uses config.toml to keep track of most configuration. Here's an example configuration file you can use to define multiple LLMs:

[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
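
With a file like the one above, each [llm.<name>] section becomes a selectable config group, and an evaluation run can point at whichever group you want to test. The flag below is an assumed convention; some benchmark wrapper scripts instead take the group name (often written as llm.<name>) as a positional argument, so check the benchmark-specific instructions.

# Run the benchmark against the OpenAI-compatible model group defined above.
poetry run python evaluation/swe_bench/run_infer.py \
    --llm-config eval_some_openai_compatible_model_llm \
    --agent-cls CodeActAgent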

Supported Benchmarks

The OpenHands evaluation harness supports a wide variety of benchmarks across software engineering, web browsing, and miscellaneous assistance tasks.

Software Engineering

Web Browsing

Misc. Assistance

Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

You can start your own fork of our Hugging Face evaluation outputs and submit a PR with your evaluation results to our hosted Hugging Face repo, following the guide here.

For Benchmark Developers

To learn more about how to integrate your benchmark into OpenHands, check out the tutorial here. Briefly:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing/evaluation/analysis scripts (an illustrative folder layout is sketched after this list).
  • Raw data and experimental records should not be stored within this repo.
  • Model outputs should be stored in this Hugging Face space for visualization.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
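
As a rough illustration (directory and file names here are hypothetical; the tutorial linked above is authoritative), a new benchmark folder typically ends up looking something like this:

evaluation/my_benchmark/
    README.md            # setup and run instructions specific to this benchmark
    run_infer.py         # drives the agent over benchmark instances, writes outputs
    scripts/
        run_infer.sh     # convenience wrapper: selects LLM config, agent, instance limit
        eval_infer.sh    # scores the generated outputs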