MiniWoB++ Evaluation with OpenHands Browsing Agents

This folder contains the evaluation for the MiniWoB++ benchmark, powered by BrowserGym, which makes it easy to measure how well a browsing-capable agent performs on synthetic web browsing tasks.

Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and configure your LLM.
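
The run command below refers to an LLM configuration by name (llm.claude-35-sonnet-eval). In OpenHands, named LLM configurations live in config.toml at the repository root; the following is a minimal sketch, where the model string and API key are placeholder assumptions you should replace with your own values:

[llm.claude-35-sonnet-eval]
model = "anthropic/claude-3-5-sonnet-20241022"
api_key = "YOUR_API_KEY"
temperature = 0.0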

Test if your environment works

Open the MiniWoB URLs mentioned above in your browser and check that they load correctly.
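
As an additional check, you can load a single MiniWoB++ task through BrowserGym from Python. This is a minimal sketch, assuming the BrowserGym MiniWoB package is installed and the MINIWOB_URL environment variable points at your locally served MiniWoB++ HTML files; the task id below is just an example:

import os

import gymnasium as gym
import browsergym.miniwob  # noqa: F401 -- registers the browsergym/miniwob.* environments

# MINIWOB_URL must point at a locally served MiniWoB++ instance (assumption).
assert os.environ.get("MINIWOB_URL"), "Set MINIWOB_URL before running this check"

env = gym.make("browsergym/miniwob.click-test")  # example task id
obs, info = env.reset()
print("Task loaded, goal:", obs.get("goal"))
env.close()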

Run Evaluation

./evaluation/miniwob/scripts/run_infer.sh llm.claude-35-sonnet-eval

Results will be in evaluation/evaluation_outputs/outputs/miniwob/

To calculate the average reward, run:

poetry run python evaluation/miniwob/get_avg_reward.py evaluation/evaluation_outputs/outputs/miniwob/SOME_AGENT/EXP_NAME/output.jsonl
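
If you prefer to compute the number yourself, the average is just the mean of the per-task rewards recorded in output.jsonl. Below is a minimal sketch, assuming each JSON line stores the task's reward under a test_result field (the exact field layout may differ between versions):

import json
import sys

rewards = []
with open(sys.argv[1]) as f:  # path to output.jsonl
    for line in f:
        record = json.loads(line)
        result = record.get("test_result")
        # test_result may be a bare number or a dict with a "reward" key (assumption).
        reward = result["reward"] if isinstance(result, dict) else result
        rewards.append(float(reward))

print(f"{len(rewards)} tasks, average reward = {sum(rewards) / len(rewards):.3f}")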

Submit your evaluation results

You can create your own fork of our Hugging Face evaluation outputs and submit a PR with your evaluation results by following the guide here.

BrowsingAgent V1.0 results

Tested on BrowsingAgent V1.0

MiniWoB++, 125 tasks (3 runs due to random task initialization), max 10 steps per task

  • GPT-4o: 0.384, 0.416, 0.424, avg: 0.408
  • GPT-3.5: 0.288, 0.256, 0.272, avg: 0.272