# Logic Reasoning Evaluation

This folder contains the evaluation harness for evaluating agents on the logic reasoning benchmarks ProntoQA and ProofWriter.

## Setup Environment and LLM Configuration

Please follow the instructions here to set up your local development environment and LLM.

## Run Inference on logic_reasoning

The following command runs inference on the first example of the ProofWriter dataset:

```bash
./evaluation/logic_reasoning/scripts/run_infer.sh eval_gpt4_1106_preview_llm ProofWriter
```
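Based on the invocation above, the script appears to take two positional arguments: an LLM config name and a dataset name (ProntoQA or ProofWriter). The snippet below is a minimal sketch of evaluating both datasets with the same config; the argument order is an assumption, and the commands are only printed here so you can inspect them before running:

```shell
# Assumed: first argument names an LLM config, second selects the dataset.
LLM_CONFIG="eval_gpt4_1106_preview_llm"

for DATASET in ProntoQA ProofWriter; do
  # Print the command instead of executing it; drop the `echo` to actually run.
  echo "./evaluation/logic_reasoning/scripts/run_infer.sh ${LLM_CONFIG} ${DATASET}"
done
```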