
# EDA Evaluation

This folder contains the evaluation harness for evaluating agents on the Entity-Deduction-Arena (EDA) benchmark, introduced in the paper *Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games*, presented at the ACL 2024 main conference.

## Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.
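As a rough sketch of what an LLM config group can look like (assuming OpenHands reads config groups as `[llm.<name>]` tables in `config.toml`; the model name, key, and settings below are placeholders, not prescribed values), you might have something like:

```toml
# Hypothetical config group for evaluation; the group name matches the
# model_config argument passed to run_infer.sh below.
[llm.eval_gpt4_1106_preview]
model = "gpt-4-1106-preview"  # placeholder model name
api_key = "sk-XXX"            # placeholder API key
temperature = 0.0             # deterministic decoding is typical for evals
```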

## Start the evaluation

```bash
# Required for evaluation: an OpenAI model simulates the other party of the conversation
export OPENAI_API_KEY="sk-XXX"
./evaluation/EDA/scripts/run_infer.sh [model_config] [git-version] [agent] [dataset] [eval_limit]
```

where `model_config` is mandatory, while `git-version`, `agent`, `dataset`, and `eval_limit` are optional.

- `model_config`, e.g. `eval_gpt4_1106_preview`, is the config group name for your LLM settings, as defined in your `config.toml`.
- `git-version`, e.g. `HEAD`, is the git commit hash of the OpenHands version you would like to evaluate. It can also be a release tag like `0.6.2`.
- `agent`, e.g. `CodeActAgent`, is the name of the agent to benchmark, defaulting to `CodeActAgent`.
- `dataset`: there are two tasks in this evaluation; specify `dataset` as either `things` or `celebs` to choose the task.
- `eval_limit`, e.g. `10`, limits the evaluation to the first `eval_limit` instances. By default, all instances are evaluated.

For example:

```bash
./evaluation/EDA/scripts/run_infer.sh eval_gpt4o_2024_05_13 0.6.2 CodeActAgent things
```
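Since the positional arguments are filled left to right, a quick smoke test on a handful of instances just appends the optional `eval_limit` argument. A minimal sketch (the config group name is the placeholder from above):

```bash
# Evaluate only the first 10 instances of the celebs task
./evaluation/EDA/scripts/run_infer.sh eval_gpt4_1106_preview HEAD CodeActAgent celebs 10
```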

## Reference

```bibtex
@inproceedings{zhang2023entity,
  title={Probing the Multi-turn Planning Capabilities of LLMs via 20 Question Games},
  author={Zhang, Yizhe and Lu, Jiarui and Jaitly, Navdeep},
  booktitle={ACL},
  year={2024}
}
```