MINT Benchmark

This folder contains the evaluation harness for the MINT benchmark, which evaluates LLMs' ability to solve tasks through multi-turn interaction.

Configure OpenDevin and your LLM

Create a config.toml file at the root of the workspace if it does not already exist. Please check the main README.md for how to set this up.
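For reference, a config group for your LLM might look like the minimal sketch below. The key names here are an assumption based on typical OpenDevin eval configs; follow the main README.md for the authoritative format.

# Hypothetical LLM config group in config.toml; key names are assumed, verify against the main README.md
[eval_gpt4_1106_preview]
model = "gpt-4-1106-preview"
api_key = "sk-..."       # your API key
temperature = 0.0

The group name (here eval_gpt4_1106_preview) is what you pass as model_config to the run script below.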

Start the evaluation

We are using the MINT dataset hosted on Hugging Face.

Following is the basic command to start the evaluation. Currently, the only agent supported with MINT is CodeActAgent.

./evaluation/mint/scripts/run_infer.sh [model_config] [subset] [eval_limit]

where model_config is mandatory, while subset and eval_limit are optional.

  • model_config, e.g. eval_gpt4_1106_preview, is the config group name for your LLM settings, as defined in your config.toml.

  • subset, e.g. math, is the subset of the MINT benchmark to evaluate on, defaulting to math. It can be one of: math, gsm8k, mmlu, theoremqa, mbpp, humaneval.

  • eval_limit, e.g. 2, limits the evaluation to the first eval_limit instances, defaulting to all instances.

Note: in order to use eval_limit, you must also set subset.
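For example, to evaluate all instances of the default math subset with the eval_gpt4_1106_preview config group mentioned above:

./evaluation/mint/scripts/run_infer.sh eval_gpt4_1106_preview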

If you'd like to run 3 instances on the gsm8k subset using eval_gpt4_1106_preview, your command would be:

./evaluation/mint/scripts/run_infer.sh eval_gpt4_1106_preview gsm8k 3

Reference

@misc{wang2024mint,
    title={MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback},
    author={Xingyao Wang and Zihan Wang and Jiateng Liu and Yangyi Chen and Lifan Yuan and Hao Peng and Heng Ji},
    year={2024},
    eprint={2309.10691},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}