# fingpt
A self-contained MinT experiment for FinGPT-style finance instruction tuning. It provides two runnable lines under one runtime: the official Fineval slice as the benchmark anchor, and a maintained sentiment SFT wrapper with held-out confirmation reruns. This is not a paper-faithful reproduction of the whole FinGPT family; Fineval and sentiment are separate local lines with different purposes.
## At a glance
| | |
| --- | --- |
| Algorithm | LoRA SFT (two routes: Fineval slice, sentiment) |
| Base model | `Qwen/Qwen3-4B-Instruct-2507` |
| Training data | Fineval slice and sentiment, fetched via `autoresearch.sh` |
| Benchmark | Fineval anchor `data/fingpt-fineval/test.jsonl`; held-out sentiment on `fpb`, `fiqa-sa`, `tfns`, `nwgi` |
| Primary metrics | `eval_accuracy`; sentiment also reports `eval_micro_f1`, `eval_weighted_f1`, `eval_macro_f1` |
| Upstream README | Open in mint-cookbook → |
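As a reference point for the metrics named above, here is a minimal stdlib-only sketch of how `eval_accuracy` and the micro/weighted/macro F1 variants are conventionally computed for single-label sentiment classification. The function names and the gold/prediction list format are illustrative assumptions, not the experiment's actual evaluation code.

```python
from collections import Counter

def accuracy(golds, preds):
    # Fraction of examples where the predicted label matches the gold label.
    return sum(g == p for g, p in zip(golds, preds)) / len(golds)

def per_label_f1(golds, preds):
    # One-vs-rest F1 for each label seen in golds or preds.
    labels = sorted(set(golds) | set(preds))
    scores = {}
    for lab in labels:
        tp = sum(1 for g, p in zip(golds, preds) if g == p == lab)
        fp = sum(1 for g, p in zip(golds, preds) if p == lab and g != lab)
        fn = sum(1 for g, p in zip(golds, preds) if g == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[lab] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

def macro_f1(golds, preds):
    # Unweighted mean of per-label F1: every label counts equally.
    scores = per_label_f1(golds, preds)
    return sum(scores.values()) / len(scores)

def weighted_f1(golds, preds):
    # Per-label F1 weighted by gold-label support.
    scores = per_label_f1(golds, preds)
    support = Counter(golds)
    total = sum(support.values())
    return sum(scores[lab] * support.get(lab, 0) / total for lab in scores)

def micro_f1(golds, preds):
    # For single-label classification, micro-F1 reduces to accuracy.
    return accuracy(golds, preds)
```

The macro/weighted split matters on the sentiment sets because class balance differs across `fpb`, `fiqa-sa`, `tfns`, and `nwgi`: macro-F1 exposes weakness on rare labels that accuracy and micro-F1 can hide.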
For setup, runnable commands, and the full eval protocol, see the upstream README. The experiment follows the shared cookbook lifecycle: `uv sync` → `--dry-run` → `--eval-only` → train.