# Logic Reasoning Evaluation

This folder contains the evaluation harness for evaluating agents on the logic reasoning benchmarks ProntoQA and ProofWriter.

## Configure OpenDevin and your LLM

Create a `config.toml` file if it does not exist at the root of the workspace.

Add the following configurations:

```toml
[core]
max_iterations = 100
cache_dir = "/tmp/cache"
ssh_hostname = "localhost"

[sandbox]
enable_auto_lint = true

# TODO: Change these to the model you want to evaluate
[llm.eval_gpt4_1106_preview_llm]
model = "gpt-4-1106-preview"
api_key = "XXX"
temperature = 0.0

[llm.eval_some_openai_compatible_model_llm]
model = "openai/MODEL_NAME"
base_url = "https://OPENAI_COMPATIBLE_URL/v1"
api_key = "XXX"
temperature = 0.0
```
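
Each `[llm.<name>]` group defines one model configuration, and the part after `llm.` appears to be the name the evaluation script expects when launching a run (see the inference command below). To evaluate an additional model, a hypothetical extra entry would follow the same pattern; the group name and model string here are illustrative placeholders, not part of the project's config:

```toml
# Hypothetical extra entry (illustrative only): the group name and model string
# are placeholders; copy the pattern of the entries above for your own model.
[llm.eval_my_other_model_llm]
model = "gpt-4o"
api_key = "XXX"
temperature = 0.0
```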

## Run Inference on logic_reasoning

The following command runs inference on the first example of the ProntoQA dataset, using OpenDevin version 0.6.2:

```bash
./evaluation/logic_reasoning/scripts/run_infer.sh ProntoQA eval_gpt4_1106_preview_llm 0.6.2 1
```
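
Based on the example above, the positional arguments appear to map to: the dataset (ProntoQA or ProofWriter), the LLM config group from `config.toml` (the name after `llm.`), the OpenDevin version to evaluate, and the number of examples to run. As a sketch under that assumption, evaluating the first 10 ProofWriter examples with the OpenAI-compatible config would look like:

```bash
# Sketch (argument order assumed from the example above):
# dataset, LLM config group, OpenDevin version, number of examples
./evaluation/logic_reasoning/scripts/run_infer.sh ProofWriter eval_some_openai_compatible_model_llm 0.6.2 10
```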