This folder contains the evaluation harness for the MiniWoB++ benchmark, powered by BrowserGym, which makes it easy to evaluate how well a browsing-capable agent performs on synthetic web browsing tasks.
Please follow this document to set up a local development environment for OpenDevin.
Create a `config.toml` file at the root of the workspace if it does not already exist, and add the following configuration:
```toml
[core]
max_iterations = 100
cache_dir = "/tmp/cache"
sandbox_container_image = "ghcr.io/opendevin/sandbox:latest"
sandbox_type = "ssh"
ssh_hostname = "localhost"
sandbox_timeout = 120

# TODO: Change these to the model(s) you want to evaluate
[eval_gpt4_1106_preview]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[eval_some_openai_compatible_model]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
```
MiniWoB++ requires a static copy of its website to be accessible via URL from the machine running the OpenDevin agents.
Clone miniwob (pinned to a specific frozen commit for reproducibility):

```bash
git clone git@github.com:Farama-Foundation/miniwob-plusplus.git
git -C "./miniwob-plusplus" reset --hard 7fd85d71a4b60325c6585396ec4f48377d049838
```
Set the MiniWoB URL in `evaluation/miniwob/scripts/run_infer.sh` (replace `PATH_TO_MINIWOB_CLONED_REPO` with the absolute path to your `miniwob-plusplus` folder):

```bash
export MINIWOB_URL="file://<PATH_TO_MINIWOB_CLONED_REPO>/miniwob/html/miniwob/"
```
Open the MiniWoB URL above in a browser and check that the task pages load correctly.
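A broken `file://` URL only surfaces later, when the agent's browser fails to load a task page, so it can be worth verifying the path programmatically as well. A small sketch (the helper name `miniwob_url_ok` is illustrative, not part of the benchmark) that converts the URL back to a filesystem path and checks that task HTML files are present:

```python
from pathlib import Path
from urllib.parse import unquote, urlparse


def miniwob_url_ok(url: str) -> bool:
    """Return True if a file:// MiniWoB URL points at a directory containing HTML files."""
    parsed = urlparse(url)
    if parsed.scheme != "file":
        return False
    root = Path(unquote(parsed.path))
    return root.is_dir() and any(root.glob("*.html"))
```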
```bash
bash evaluation/miniwob/scripts/run_infer.sh
```

Results will be written to `evaluation/evaluation_outputs/outputs/miniwob/`.
To calculate the average reward, run:

```bash
poetry run python evaluation/miniwob/get_avg_reward.py evaluation/evaluation_outputs/outputs/miniwob/SOME_AGENT/EXP_NAME/output.jsonl
```
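The average reward is simply the mean of the per-task rewards recorded in `output.jsonl`. A minimal sketch of that computation (the `test_result` field name is an assumption about the output format, so check the keys in your actual `output.jsonl`):

```python
import json
from pathlib import Path


def average_reward(jsonl_path: str, reward_key: str = "test_result") -> float:
    """Compute the mean reward over all records in a JSONL output file."""
    rewards = []
    with Path(jsonl_path).open() as f:
        for line in f:
            line = line.strip()
            if not line:  # skip blank lines
                continue
            record = json.loads(line)
            rewards.append(float(record[reward_key]))
    if not rewards:
        raise ValueError(f"no records found in {jsonl_path}")
    return sum(rewards) / len(rewards)
```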
You can start your own fork of our Hugging Face evaluation outputs and submit a PR with your evaluation results, following the guide here.
Tested on BrowsingAgent V1.0: MiniWoB++, 125 tasks (3 runs each, since tasks are randomly initialized), max 10 steps per task.