
# Evaluation

This folder contains code and resources to run experiments and evaluations.

## Logistics

To keep the evaluation folder organized, please follow the rules below:

- Each subfolder contains a specific benchmark or experiment. For example, `evaluation/swe_bench` should contain all the preprocessing, evaluation, and analysis scripts for SWE-Bench.
- Raw data and experimental records should not be stored within this repo.
- Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.

## Supported Benchmarks

To learn more about how to integrate your own benchmark into OpenHands, check out the tutorial here.

The supported benchmarks fall into three categories, each living in its own subfolder:

- Software Engineering (e.g., `swe_bench`, `humanevalfix`, `biocoder`, `aider_bench`)
- Web Browsing (e.g., `webarena`, `miniwob`, `browsing_delegation`)
- Misc. Assistance (e.g., `gaia`, `gpqa`, `toolqa`)

## Before You Begin: Environment Setup and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

In development mode, OpenHands uses `config.toml` to keep track of most configuration options.

Here's an example configuration file you can use to define multiple LLMs:

```toml
[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
```
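
For reference, here is a minimal sketch of how a named `[llm.<name>]` group can be read out of a `config.toml` like the one above, using Python's standard-library `tomllib` (Python 3.11+). The helper `load_llm_config` and its merge-with-defaults behavior are illustrative assumptions for this sketch, not OpenHands' actual configuration loader:

```python
# Sketch only: read a named [llm.<name>] group from config.toml.
# NOTE: load_llm_config and its fallback/merge behavior are illustrative
# assumptions, not the real OpenHands configuration loader.
import tomllib  # standard library in Python 3.11+


def load_llm_config(path: str = "config.toml", name: str | None = None) -> dict:
    """Return the default [llm] keys, overlaid with [llm.<name>] if given."""
    with open(path, "rb") as f:  # tomllib requires a binary file handle
        config = tomllib.load(f)
    llm = dict(config.get("llm", {}))
    # Nested tables like [llm.eval_gpt4_1106_preview_llm] appear as dicts
    # under "llm"; everything else is a default key such as model/api_key.
    defaults = {k: v for k, v in llm.items() if not isinstance(v, dict)}
    if name is None:
        return defaults
    overrides = llm.get(name)
    if not isinstance(overrides, dict):
        raise KeyError(f"no [llm.{name}] section in {path}")
    return {**defaults, **overrides}  # named group wins over defaults


if __name__ == "__main__":
    cfg = load_llm_config(name="eval_gpt4_1106_preview_llm")
    print(cfg["model"], cfg["temperature"])  # gpt-4-1106-preview 0.0
```

The point of the named groups is that an evaluation run can be pointed at, say, `eval_gpt4_1106_preview_llm` without editing the default `[llm]` entry.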

## Result Visualization

Check this Hugging Face space for a visualization of existing experimental results.

## Upload Your Results

You can fork our Hugging Face evaluation outputs repository and submit a PR adding your evaluation results to our hosted Hugging Face repo, following the guide here.