
# Mini-World of Bits Evaluation with OpenHands Browsing Agents

This folder contains the evaluation harness for the MiniWoB++ benchmark, powered by BrowserGym, which makes it easy to evaluate how well a browsing-capable agent performs on synthetic web browsing tasks.

## Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

## Test if your environment works

Access the above MiniWoB URLs with a browser and check that they load correctly.
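
If you prefer a programmatic check, the sketch below (an illustration, not part of the benchmark scripts) instantiates a single MiniWoB++ task through BrowserGym. It assumes `browsergym-miniwob` and Playwright browsers are installed and that `MINIWOB_URL` points at your locally served MiniWoB++ task pages; `click-test` is just one example task id.

```python
# Hypothetical sanity check -- assumes browsergym-miniwob is installed and
# MINIWOB_URL points at the locally served MiniWoB++ HTML task pages.
import os

import gymnasium as gym
import browsergym.miniwob  # imported for its side effect: registers browsergym/miniwob.* tasks

assert "MINIWOB_URL" in os.environ, "Set MINIWOB_URL to the URL serving the MiniWoB++ HTML files"

# "click-test" is one of the simplest MiniWoB++ tasks; any registered task id works here.
env = gym.make("browsergym/miniwob.click-test")
obs, info = env.reset()
print("Reset OK; observation keys:", list(obs))
env.close()
```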

## Run Evaluation

```bash
./evaluation/benchmarks/miniwob/scripts/run_infer.sh llm.claude-35-sonnet-eval
```
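
The `llm.claude-35-sonnet-eval` argument names an LLM group in your `config.toml`. A minimal sketch of such a group (the group name, model string, and key below are placeholders, not required values; adjust them to your own setup):

```toml
# Hypothetical example LLM group for evaluation; replace with your own model and key.
[llm.claude-35-sonnet-eval]
model = "anthropic/claude-3-5-sonnet-20241022"
api_key = "YOUR-LLM-API-KEY"
temperature = 0.0
```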

## Run Inference on `RemoteRuntime` (experimental)

This is in limited beta. Contact Xingyao over Slack if you want to try this out!

```bash
./evaluation/benchmarks/miniwob/scripts/run_infer.sh [model_config] [git-version] [agent] [note] [eval_limit] [num_workers]

# Example - this runs evaluation on BrowsingAgent for 125 instances on MiniWoB++, with 2 workers running in parallel
export ALLHANDS_API_KEY="YOUR-API-KEY"
export RUNTIME=remote
export SANDBOX_REMOTE_RUNTIME_API_URL="https://runtime.eval.all-hands.dev"
./evaluation/benchmarks/miniwob/scripts/run_infer.sh llm.eval HEAD BrowsingAgent "" 125 2
```

Results will be in `evaluation/evaluation_outputs/outputs/miniwob/`.

To calculate the average reward, run:

```bash
poetry run python evaluation/benchmarks/miniwob/get_avg_reward.py evaluation/evaluation_outputs/outputs/miniwob/SOME_AGENT/EXP_NAME/output.jsonl
```
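
Conceptually, this script just averages the per-instance rewards recorded in `output.jsonl`. A minimal sketch of that computation (the `test_result["reward"]` field name is an assumption about the output format, not a documented contract):

```python
# Hypothetical sketch: average the rewards stored in an evaluation output file.
# Assumes each JSONL line keeps its score under test_result["reward"].
import json
import sys

rewards = []
with open(sys.argv[1]) as f:
    for line in f:
        record = json.loads(line)
        rewards.append(float(record["test_result"]["reward"]))

print(f"{len(rewards)} instances, average reward = {sum(rewards) / len(rewards):.3f}")
```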

## Submit your evaluation results

You can start your own fork of our Hugging Face evaluation outputs and submit a PR with your evaluation results by following the guide here.

## BrowsingAgent V1.0 result

Tested on BrowsingAgent V1.0

MiniWoB++, 125 tasks (3 runs due to random task initialization), max 10 steps

- GPT-4o: 0.384, 0.416, 0.424, avg: 0.408
- GPT-3.5: 0.288, 0.256, 0.272, avg: 0.272