Evaluation

This folder contains code and resources to run experiments and evaluations.

For Benchmark Users

Setup

Before starting evaluation, follow the instructions here to set up your local development environment and LLM.

Once setup is complete, you can follow the benchmark-specific instructions in each subdirectory of the evaluation directory. Generally, these involve running run_infer.py to perform inference with the agents.

Implementing and Evaluating an Agent

To add an agent to OpenHands, you will need to implement it in the agenthub directory. There is a README there with more information.

To evaluate an agent, provide the agent's name (e.g., CodeActAgent) to the run_infer.py program.

Evaluating Different LLMs

In development mode, OpenHands uses config.toml to keep track of most configuration. Here's an example configuration file you can use to define multiple LLMs:

[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
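
Each named [llm.<name>] group defines a separate model configuration that an evaluation run can select. As a minimal sketch of how those sections nest (this is not the OpenHands configuration loader, just an illustration using Python's standard tomllib), the groups can be read like this:

# Minimal sketch, not the OpenHands config loader: shows how named
# [llm.*] groups in config.toml nest under the base [llm] table.
import tomllib  # Python 3.11+; use the third-party "tomli" package on older versions

def load_llm_config(path: str, name: str | None = None) -> dict:
    """Return the base [llm] settings, or the [llm.<name>] group if a name is given."""
    with open(path, "rb") as f:
        llm = tomllib.load(f)["llm"]
    if name is None:
        # Scalar keys belong to the base [llm] table; nested dicts are the named groups.
        return {k: v for k, v in llm.items() if not isinstance(v, dict)}
    return llm[name]

if __name__ == "__main__":
    cfg = load_llm_config("config.toml", "eval_gpt4_1106_preview_llm")
    print(cfg["model"], cfg.get("temperature"))

Benchmark scripts typically accept the group name (e.g., eval_gpt4_1106_preview_llm) as their model-config argument; check each benchmark's README for the exact interface.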

Supported Benchmarks

The OpenHands evaluation harness supports a wide variety of benchmarks across software engineering, web browsing, and miscellaneous assistance tasks.

Software Engineering

Web Browsing

Misc. Assistance

Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

You can start your own fork of our Hugging Face evaluation outputs and submit a PR with your evaluation results to our hosted Hugging Face repo, following the guide here.

For Benchmark Developers

To learn more about how to integrate your benchmark into OpenHands, check out the tutorial here. Briefly:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all the preprocessing/evaluation/analysis scripts (see the sketch after this list for the typical shape of such a script).
  • Raw data and experimental records should not be stored within this repo.
  • Model outputs should be stored in this Hugging Face space for visualization.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
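
For orientation, here is the rough shape a new benchmark's run_infer.py tends to take: load the dataset, run the agent on each instance, and append one JSON line per instance to an output file. Everything below is illustrative; run_agent_on_instance is a hypothetical placeholder rather than an OpenHands API, and the real scripts build on the shared helpers in the utils subfolder of this directory.

# Hypothetical skeleton of a benchmark's run_infer.py (illustrative only;
# run_agent_on_instance is a placeholder, not a real OpenHands function).
import json
from pathlib import Path

def run_agent_on_instance(instruction: str) -> dict:
    """Placeholder: a real benchmark would drive the agent here."""
    return {"result": None, "history": []}

def main(dataset_path: str, output_path: str) -> None:
    with open(dataset_path) as f:
        instances = [json.loads(line) for line in f]
    out = Path(output_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("a") as f:  # append one JSON line per evaluated instance
        for inst in instances:
            result = run_agent_on_instance(inst["instruction"])
            f.write(json.dumps({"instance_id": inst["instance_id"], **result}) + "\n")

if __name__ == "__main__":
    main("data/instances.jsonl", "evaluation_outputs/output.jsonl")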