This folder contains code and resources to run experiments and evaluations.
Before starting an evaluation, follow the instructions here to set up your local development environment and LLM.
Once setup is complete, follow the benchmark-specific instructions in each subdirectory of the evaluation directory.
Generally, these involve running `run_infer.py` to perform inference with the agents.
To add an agent to OpenHands, implement it in the agenthub directory; the README there has more information.
To evaluate an agent, pass the agent's name to the `run_infer.py` program.
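For example, an invocation looks roughly like the sketch below. This is not an authoritative command: the flag names (`--agent-cls`, `--llm-config`, `--max-iterations`, `--eval-n-limit`) and the SWE-Bench path are assumptions modeled on common OpenHands evaluation conventions, so consult the benchmark's own README for the exact arguments.

```bash
# Illustrative only: flag names and the benchmark path are assumptions;
# see evaluation/benchmarks/<benchmark>/README.md for the authoritative command.
poetry run python evaluation/benchmarks/swe_bench/run_infer.py \
  --agent-cls CodeActAgent \
  --llm-config eval_gpt4_1106_preview_llm \
  --max-iterations 30 \
  --eval-n-limit 10
```

Here `--llm-config` refers to a named `[llm.<name>]` section of `config.toml` (see the example below).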
In development mode, OpenHands uses `config.toml` to keep track of most configuration.
Here's an example configuration file that defines multiple LLMs:
```toml
[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
```
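With this configuration in place, a named LLM section can be selected when launching an evaluation. The command below is a hedged sketch modeled on the SWE-Bench wrapper script; the argument order (model config, git ref, agent, instance limit) may differ for other benchmarks.

```bash
# Hedged example: select the [llm.eval_gpt4_1106_preview_llm] section defined above.
# Argument order follows the SWE-Bench wrapper and may differ per benchmark.
./evaluation/benchmarks/swe_bench/scripts/run_infer.sh \
  llm.eval_gpt4_1106_preview_llm HEAD CodeActAgent 10
```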
The OpenHands evaluation harness supports a wide variety of benchmarks across software engineering, web browsing, and miscellaneous assistance tasks.
- Software engineering: `evaluation/benchmarks/swe_bench`, `evaluation/benchmarks/humanevalfix`, `evaluation/benchmarks/bird`, `evaluation/benchmarks/ml_bench`, `evaluation/benchmarks/gorilla`, `evaluation/benchmarks/toolqa`, `evaluation/benchmarks/aider_bench`, `evaluation/benchmarks/commit0_bench`, `evaluation/benchmarks/discoverybench`
- Web browsing: `evaluation/benchmarks/webarena`, `evaluation/benchmarks/miniwob`, `evaluation/benchmarks/browsing_delegation`
- Misc. assistance: `evaluation/benchmarks/gaia`, `evaluation/benchmarks/gpqa`, `evaluation/benchmarks/agent_bench`, `evaluation/benchmarks/mint`, `evaluation/benchmarks/EDA`, `evaluation/benchmarks/logic_reasoning`, `evaluation/benchmarks/scienceagentbench`

Check this huggingface space for visualization of existing experimental results.
You can fork our huggingface evaluation outputs repository and submit your evaluation results to our hosted huggingface repo via a PR, following the guide here.
To learn more about how to integrate your benchmark into OpenHands, check out the tutorial here. Briefly, each benchmark subdirectory (e.g., evaluation/benchmarks/swe_bench) should contain all of its preprocessing, evaluation, and analysis scripts, as illustrated below.
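As a rough illustration only (the actual contents vary per benchmark), listing an existing benchmark shows the kind of files a new benchmark directory is expected to hold:

```bash
# Rough illustration; mirror an existing benchmark for the real conventions.
ls evaluation/benchmarks/swe_bench
# README.md  run_infer.py  scripts/  ...
```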