This folder contains the evaluation harness for evaluating agents on the Aider Editing Benchmark. It lets us develop better editing approaches without running the full SWE-bench. The benchmark uses the RajMaheshwari/Exercism-Python Hugging Face dataset, which is based on the Exercism Python coding exercises.
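If you want to inspect the benchmark data itself, it can be loaded directly from the Hugging Face Hub with the `datasets` library. A minimal sketch (it assumes the default `train` split; the field names are whatever the dataset ships with, so they are printed rather than hard-coded):

```python
from datasets import load_dataset

# Load the Exercism-based benchmark instances from the Hugging Face Hub.
dataset = load_dataset("RajMaheshwari/Exercism-Python", split="train")

print(len(dataset))           # number of benchmark instances
print(dataset.column_names)   # field names provided by the dataset
print(dataset[0])             # peek at the first instance
```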
Please follow the instructions here to set up your local development environment and LLM.
./evaluation/aider_bench/scripts/run_infer.sh [model_config] [git-version] [agent] [eval_limit]
- `model_config`, e.g. `eval_gpt4_1106_preview`, is the config group name for your LLM settings, as defined in your `config.toml`.
- `git-version`, e.g. `HEAD`, is the git commit hash of the OpenHands version you would like to evaluate. It could also be a release tag like `0.6.2`.
- `agent`, e.g. `CodeActAgent`, is the name of the agent for benchmarks, defaulting to `CodeActAgent`.
- `eval_limit`, e.g. `10`, limits the evaluation to the first `eval_limit` instances. By default, the script evaluates the entire Exercism test set (133 issues). Note: in order to use `eval_limit`, you must also set `agent`.

Following is the basic command to start the evaluation.
You can update the arguments in the script `evaluation/aider_bench/scripts/run_infer.sh`, such as `--max-iterations`, `--eval-num-workers`, and so on.
- `--agent-cls`: the agent to use. For example, `CodeActAgent`.
- `--llm-config`: the LLM configuration to use. For example, `eval_gpt4_1106_preview`.
- `--max-iterations`: the number of iterations to run the evaluation. For example, `30`.
- `--eval-num-workers`: the number of workers to use for evaluation. For example, `5`.
- `--eval-n-limit`: the number of examples to evaluate. For example, `100`.
./evaluation/aider_bench/scripts/run_infer.sh eval_gpt35_turbo HEAD CodeActAgent 1
poetry run python ./evaluation/aider_bench/scripts/summarise_results.py [path_to_output_jsonl_file]
This will list the instances that passed and the instances that failed. For each instance, the corresponding set of test cases (which can vary from instance to instance) is run on the file edited by the agent. We consider an instance passed only if ALL of its test cases pass; even a single failed test case will cause the entire instance to be marked as failed.
You can inspect the `test_results` field in the output JSONL file to see the exact outcome of the tests. If there are no syntax or indentation errors, you can expect to see something like "..F...EF..", where "." means the test case passed, "E" means there was an error while executing the test case, and "F" means an assertion failed and the returned output was not as expected.
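As a quick way to tally results yourself, the output file can be scanned line by line. This is a hedged sketch, not part of the harness: it assumes each line of the JSONL is an object carrying an instance identifier and the `test_results` string described above (the exact field names may differ in your version, so adjust them as needed):

```python
import json
import sys

# Informal pass/fail tally over the output JSONL produced by run_infer.
# Assumes each record has "instance_id" and "test_results" fields, where
# test_results is a string like "..F...EF.." -- adjust the field names if
# your output file uses different ones.
passed, failed = [], []
with open(sys.argv[1]) as f:
    for line in f:
        record = json.loads(line)
        results = record.get("test_results", "")
        instance = record.get("instance_id", "<unknown>")
        # An instance counts as passed only if every test case passed (all dots).
        if results and all(ch == "." for ch in results):
            passed.append(instance)
        else:
            failed.append(instance)

print(f"Passed: {len(passed)}  Failed: {len(failed)}")
for instance in failed:
    print("FAILED:", instance)
```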