
# Logic Reasoning Evaluation

This folder contains an evaluation harness for evaluating agents on the logic reasoning benchmarks ProntoQA and ProofWriter.

## Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

## Run Inference on `logic_reasoning`

The following command runs inference on the first example of the ProofWriter dataset:

```bash
./evaluation/logic_reasoning/scripts/run_infer.sh eval_gpt4_1106_preview_llm ProofWriter
```
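The script takes the LLM config name first and the dataset name second. Assuming the same argument order holds for the other supported benchmark (a sketch, not verified against the script itself), ProntoQA would presumably be run as:

```bash
./evaluation/logic_reasoning/scripts/run_infer.sh eval_gpt4_1106_preview_llm ProntoQA
```

Here `eval_gpt4_1106_preview_llm` refers to an LLM config group defined during the setup step above; substitute the name of your own config.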