This folder contains code and resources to run experiments and evaluations.
To better organize the evaluation folder, we should follow the rules below:
- Each benchmark lives in its own subfolder; for example, `evaluation/swe_bench` should contain all the preprocessing/evaluation/analysis scripts for that benchmark.

Existing benchmark folders include:

- `evaluation/swe_bench`
- `evaluation/ml_bench`
- `evaluation/humanevalfix`
- `evaluation/gaia`
- `evaluation/EDA`
- `evaluation/mint`
- `evaluation/agent_bench`
- `evaluation/bird`
- `evaluation/logic_reasoning`

Check this huggingface space for visualization of existing experimental results.
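As a minimal sketch of the layout rule above, a new benchmark subfolder could be scaffolded as shown below. The script names (`preprocess.py`, `run_eval.py`, `analyze.py`) and the `scaffold_benchmark` helper are illustrative assumptions, not names prescribed by this repo.

```python
from pathlib import Path
import tempfile

def scaffold_benchmark(root: str, name: str) -> Path:
    """Create an evaluation/<name> subfolder with placeholder
    preprocessing/evaluation/analysis scripts (names are illustrative)."""
    bench = Path(root) / "evaluation" / name
    bench.mkdir(parents=True, exist_ok=True)
    for script in ("preprocess.py", "run_eval.py", "analyze.py"):
        (bench / script).touch()
    return bench

# Example: scaffold a hypothetical benchmark in a temporary directory.
tmp = tempfile.mkdtemp()
bench = scaffold_benchmark(tmp, "my_bench")
print(sorted(p.name for p in bench.iterdir()))
# → ['analyze.py', 'preprocess.py', 'run_eval.py']
```

Keeping every script for a benchmark under its own subfolder this way makes it easy to add or remove a benchmark without touching the others.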
You can start your own fork of our huggingface evaluation outputs and submit your evaluation results to our hosted huggingface repo via a PR, following the guide here.