This folder contains the evaluation harness for evaluating agents on the Entity-Deduction Arena (EDA) benchmark, introduced in the paper *Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games*, presented at the ACL 2024 main conference.
Please follow the instructions here to set up your local development environment and configure your LLM.
```bash
export OPENAI_API_KEY="sk-XXX"  # Required for evaluation (to simulate the other party of the conversation)
```
```bash
./evaluation/benchmarks/EDA/scripts/run_infer.sh [model_config] [git-version] [agent] [dataset] [eval_limit]
```
where `model_config` is mandatory, while `git-version`, `agent`, `dataset`, and `eval_limit` are optional.

- `model_config`, e.g. `eval_gpt4_1106_preview`, is the config group name for your LLM settings, as defined in your `config.toml` (a sketch of such a group is shown after this list).
- `git-version`, e.g. `HEAD`, is the git commit hash of the OpenHands version you would like to evaluate. It can also be a release tag like `0.6.2`.
- `agent`, e.g. `CodeActAgent`, is the name of the agent to run the benchmark with, defaulting to `CodeActAgent`.
- `dataset`: there are two tasks in this evaluation; set `dataset` to either `things` or `celebs` to select the task.
- `eval_limit`, e.g. `10`, limits the evaluation to the first `eval_limit` instances. By default, all instances are evaluated.
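For reference, a minimal sketch of what such an LLM config group in `config.toml` might look like is shown below. The group name `eval_gpt4_1106_preview` and the exact field values are illustrative assumptions; adapt the model name and API key to your own setup.

```toml
# Sketch of an LLM config group in config.toml (illustrative values, adapt to your setup)
[llm.eval_gpt4_1106_preview]
model = "gpt-4-1106-preview"
api_key = "sk-XXX"
temperature = 0.0
```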
For example:

```bash
./evaluation/benchmarks/EDA/scripts/run_infer.sh eval_gpt4o_2024_05_13 0.6.2 CodeActAgent things
```
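For a quicker smoke test, you could combine the options documented above to run only a few instances of the `celebs` task (substitute your own config group and version):

```bash
# Evaluate only the first 10 instances of the celebs task
./evaluation/benchmarks/EDA/scripts/run_infer.sh eval_gpt4o_2024_05_13 0.6.2 CodeActAgent celebs 10
```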
If you use this benchmark, please cite the paper:

```bibtex
@inproceedings{zhang2023entity,
  title={Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games},
  author={Zhang, Yizhe and Lu, Jiarui and Jaitly, Navdeep},
  booktitle={ACL},
  year={2024}
}
```