
Evaluation

This folder contains code and resources to run experiments and evaluations.

Logistics

To better organize the evaluation folder, we should follow the rules below:

  • Each subfolder contains a specific benchmark or experiment. For example, evaluation/swe_bench should contain all of that benchmark's preprocessing, evaluation, and analysis scripts (see the illustrative layout after this list).
  • Raw data and experimental records should not be stored within this repo.
  • Model outputs should be stored in this Hugging Face space for visualization.
  • Important data files of manageable size and analysis scripts (e.g., Jupyter notebooks) can be uploaded directly to this repo.
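
As a purely illustrative sketch of the first rule (loosely modeled on the existing swe_bench folder; the file names below are hypothetical and each benchmark may organize things differently), a new benchmark subfolder could look like:

evaluation/&lt;your_benchmark&gt;/
├── README.md            # how to run inference and evaluation for this benchmark
├── run_infer.py          # inference entry point that produces model outputs
└── scripts/
    ├── run_infer.sh      # thin wrapper that launches run_infer.py with a chosen LLM config
    └── eval_infer.sh     # scores the generated outputs against the benchmark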

Supported Benchmarks

To learn more about how to integrate your own benchmark into OpenHands, check out the tutorial here.

Software Engineering

  • swe_bench
  • humanevalfix
  • bird
  • biocoder
  • ml_bench
  • gorilla (APIBench)
  • toolqa
  • aider_bench

Web Browsing

  • webarena
  • miniwob
  • browsing_delegation

Misc. Assistance

  • gaia
  • gpqa
  • agent_bench
  • mint
  • EDA
  • logic_reasoning

Before everything begins: Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

OpenHands in development mode uses config.toml to keep track of most configurations.

Here's an example configuration file you can use to define and use multiple LLMs:

[llm]
# IMPORTANT: add your API key here, and set the model to the one you want to evaluate
model = "gpt-4o-2024-05-13"
api_key = "sk-XXX"

# A named LLM config group; evaluation entry scripts select it by name
[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

# A named config group for an OpenAI-compatible endpoint (e.g., a locally served model)
[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
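
As a rough sketch of how a named config group is consumed, you would pass its name to a benchmark's entry script when launching inference. The invocation below assumes the SWE-Bench script and omits its other arguments; the exact argument list differs per benchmark, so check the corresponding README before running it:

# Hypothetical invocation: run SWE-Bench inference with the
# `llm.eval_gpt4_1106_preview_llm` config group defined above.
# Remaining positional arguments (agent, iteration limit, number of
# workers, etc.) are benchmark-specific; see evaluation/swe_bench/README.md.
./evaluation/swe_bench/scripts/run_infer.sh llm.eval_gpt4_1106_preview_llm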

Result Visualization

Check out this Hugging Face space to visualize existing experimental results.

Upload your results

You can fork our Hugging Face evaluation outputs repo and submit a PR with your evaluation results to our hosted Hugging Face repo, following the guide here.