Mind Lab Toolkit (MinT)
Cookbook

The MinT Cookbook is a separate repository that hosts longer recipe-style examples — each one is a runnable directory with a pyproject.toml, a train.py, an autoresearch.sh, and a README that explains what the experiment shows.
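To make the layout concrete, here is a sketch that scaffolds a stub copy of one recipe directory locally. The file names come from the description above; the recipe name `chat-dpo` is taken from the experiment list below, and the stub `train.py` is purely illustrative, not the real training script.

```shell
# Scaffold an empty recipe directory to show the expected layout.
mkdir -p chat-dpo
touch chat-dpo/pyproject.toml chat-dpo/autoresearch.sh chat-dpo/README.md
printf 'print("training stub")\n' > chat-dpo/train.py   # stand-in for the real train.py
ls chat-dpo
python3 chat-dpo/train.py   # each recipe is launched from its train.py
```

In a real checkout you would `cd` into the recipe directory and install its dependencies from `pyproject.toml` before running `train.py` (or `autoresearch.sh` for the automated variant), as described in that recipe's README.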

Browse the public cookbook on GitHub →

Available experiments

The currently maintained experiments, all of which run on Qwen/Qwen3-4B-Instruct-2507:

| Experiment | What it shows | Algorithm | Primary metric |
| --- | --- | --- | --- |
| chat-dpo | Pairwise chat preference DPO with held-out preference eval | DPO | eval_pair_accuracy |
| dapo-aime | Direct GRPO on DAPO-Math-17k, AIME 2024 reportable benchmark | direct GRPO | eval_accuracy |
| fingpt | FinGPT-style finance instruction tuning, FinEval anchor + sentiment SFT | LoRA SFT | eval_accuracy |
| lawbench | Full 20-task LawBench benchmark with a LoRA SFT baseline | LoRA SFT | eval_lawbench_avg |

When to use the Cookbook

  • You want a complete, runnable experiment rather than a snippet.
  • You're looking for a baseline or a published configuration to fork.
  • You need patterns beyond the four-section algorithm pages in Customize (longer-running training, evaluation harnesses, multi-stage pipelines).

How it relates to the rest of the docs

| Resource | Audience | Length |
| --- | --- | --- |
| Get Started → Human Quickstart | First-time users | 7-step linear flow |
| Customize | Developers picking an algorithm | One page per algorithm/concept |
| mint-quickstart | First-run reproducible scripts | One script per topic |
| mint-cookbook | Researchers running full experiments | One recipe per directory |

Contributing. The cookbook accepts community contributions. Open a pull request against mint-cookbook with a new recipe directory and a README describing the experiment, the dataset, and the expected metric.
