Mind Lab Toolkit (MinT) Cookbook

fingpt

Self-contained MinT experiment for FinGPT-style finance instruction tuning. It runs two lines under one runtime: the official Fineval slice as the benchmark anchor, and a maintained sentiment SFT wrapper with held-out confirmation reruns. This is not a paper-faithful reproduction of the whole FinGPT family; Fineval and sentiment are separate local lines with different purposes.

At a glance

Algorithm: LoRA SFT (two routes: Fineval slice, sentiment)
Base model: Qwen/Qwen3-4B-Instruct-2507
Training data: Fineval slice and sentiment, via autoresearch.sh
Benchmark: Fineval anchor data/fingpt-fineval/test.jsonl; held-out sentiment on fpb, fiqa-sa, tfns, nwgi
Primary metrics: eval_accuracy; sentiment also eval_micro_f1, eval_weighted_f1, eval_macro_f1
Upstream README: open in mint-cookbook
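
The sentiment metrics listed above can be illustrated with a small pure-Python sketch. This is a stand-in for the harness's own scoring (function name and averaging conventions are assumptions; the actual evaluation may use a library implementation), but it shows how accuracy and the three F1 averages relate for single-label classification:

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Illustrative metrics matching the names in the table above:
    eval_accuracy, eval_micro_f1, eval_macro_f1, eval_weighted_f1."""
    labels = sorted(set(y_true) | set(y_pred))
    support = Counter(y_true)
    per_class = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        per_class[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    return {
        "eval_accuracy": acc,
        # micro F1 equals accuracy for single-label classification
        "eval_micro_f1": acc,
        # macro: unweighted mean of per-class F1
        "eval_macro_f1": sum(per_class.values()) / len(labels),
        # weighted: per-class F1 weighted by class support
        "eval_weighted_f1": sum(per_class[c] * support[c] for c in labels) / n,
    }
```

For example, on a four-item sentiment batch with one confusion between pos and neg, micro F1 tracks accuracy while macro F1 gives the rare class equal weight.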

For setup, runnable commands, and the full eval protocol, see the upstream README. The experiment follows the shared cookbook lifecycle: uv sync → --dry-run → --eval-only → train.
