# Logic Reasoning Evaluation

This folder contains the evaluation harness for evaluating agents on the logic reasoning benchmarks ProntoQA and ProofWriter.

## Configure OpenDevin and your LLM

Create a `config.toml` file at the root of the workspace if it does not already exist.

Add the following configurations:

```toml
[core]
max_iterations = 100
cache_dir = "/tmp/cache"
ssh_hostname = "localhost"
enable_auto_lint = true

# TODO: Change these to the model you want to evaluate
[eval_gpt4_1106_preview]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[eval_some_openai_compatible_model]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
```
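
If you serve an open-source model behind an OpenAI-compatible endpoint (for example with vLLM or Ollama), the same config-group layout should work. The group name, model identifier, and local URL below are illustrative assumptions, not values required by the harness; only the keys (`model`, `base_url`, `api_key`, `temperature`) come from the example above.

```toml
# Hypothetical config group for a locally served OpenAI-compatible model.
# All values are placeholders -- adjust them to your own setup.
[eval_local_llama3]
model = "openai/meta-llama/Meta-Llama-3-8B-Instruct"
base_url = "http://localhost:8000/v1"  # local OpenAI-compatible endpoint
api_key = "dummy"                      # many local servers accept any key
temperature = 0.0
```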

## Run Inference on logic_reasoning

The following command runs inference on the first example of the ProntoQA dataset with the model `gpt-4o`.

```bash
./evaluation/logic_reasoning/scripts/run_infer.sh ProntoQA gpt-4o 1
```
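
Assuming the same positional argument order as above (dataset, model config, number of examples), a run over the first ten ProofWriter examples with the `eval_gpt4_1106_preview` group defined earlier would look like the sketch below; treat it as an illustration rather than a verified invocation.

```bash
# Hedged example: dataset name, config group, and example count are assumed
# to follow the same positional order as the ProntoQA command above.
./evaluation/logic_reasoning/scripts/run_infer.sh ProofWriter eval_gpt4_1106_preview 10
```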