Parameters and Types
Core data types used throughout the MinT API
AdamParams
Configuration for the Adam optimizer.
Fields:
| Field | Type | Default | Description |
|---|---|---|---|
| learning_rate | float | 0.0001 | Step size for parameter updates |
| beta1 | float | 0.9 | Exponential decay rate for first moment estimates |
| beta2 | float | 0.95 | Exponential decay rate for second moment estimates |
| eps | float | 1e-12 | Small constant for numerical stability |
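For reference, the update rule these fields parameterize is standard Adam (MinT applies it server-side during `optim_step`). A minimal sketch for a single scalar parameter, using this table's defaults:

```python
import math

def adam_step(param, grad, m, v, t,
              learning_rate=1e-4, beta1=0.9, beta2=0.95, eps=1e-12):
    """One Adam update; returns (new_param, new_m, new_v)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - learning_rate * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# one step on a fresh parameter: the bias-corrected update is ~learning_rate
p, m, v = adam_step(1.0, grad=0.5, m=0.0, v=0.0, t=1)
```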
Example:
```python
from mint import types

training_client.optim_step(
    types.AdamParams(learning_rate=1e-4, beta1=0.9, beta2=0.95)
).result()
```

ModelInput
Represents tokenized input to the model.
Methods:
- `from_ints(tokens)` - Create from list of token IDs
- `to_ints()` - Convert to list of token IDs
- `context_length()` - Get total number of tokens
- `append_tokens(tokens)` - Add tokens to the end
- `append_chunks(chunks)` - Add text chunks
Supports:
- Pure text (list of token IDs)
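A usage sketch tying the methods above together; the `tokenizer` producing integer token IDs is an assumption of this example, not part of the ModelInput class:

```python
from mint import types

tokens = tokenizer.encode("Hello")                  # any list of int token IDs
model_input = types.ModelInput.from_ints(tokens)
model_input.append_tokens(tokenizer.encode(" world"))
print(model_input.context_length())                 # total number of tokens
print(model_input.to_ints())                        # back to a plain list of IDs
```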
SamplingParams
Controls text generation behavior.
Fields:
| Field | Type | Default | Description |
|---|---|---|---|
| max_tokens | int | required | Maximum tokens to generate |
| temperature | float | 1.0 | Randomness (0.0 = deterministic, higher = more random) |
| top_k | int | -1 | Consider only top-k tokens (-1 = disabled) |
| top_p | float | 1.0 | Nucleus sampling threshold |
| seed | int | None | Random seed for reproducibility |
| stop | list | None | Stop sequences (strings or token IDs) |
| stop_token_ids | list | None | Token IDs that halt generation |
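To make `top_k` and `top_p` concrete, here is a sketch of the standard top-k / nucleus filtering they name (MinT applies this server-side; this is not MinT code):

```python
def filter_probs(probs, top_k=-1, top_p=1.0):
    """Return the token IDs still eligible for sampling.

    `probs` maps token ID -> probability (assumed normalized).
    """
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    if top_k > 0:
        ranked = ranked[:top_k]        # keep only the k most likely tokens
    kept, cum = [], 0.0
    for tok, p in ranked:              # nucleus: smallest set with mass >= top_p
        kept.append(tok)
        cum += p
        if cum >= top_p:
            break
    return kept

probs = {0: 0.5, 1: 0.3, 2: 0.15, 3: 0.05}
filter_probs(probs, top_p=0.9)   # -> [0, 1, 2]
filter_probs(probs, top_k=2)     # -> [0, 1]
```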
Example:
```python
from mint import types

result = sampling_client.sample(
    prompt=types.ModelInput.from_ints(tokens),
    num_samples=4,
    sampling_params=types.SamplingParams(
        max_tokens=100,
        temperature=0.8,
        top_p=0.95,
        stop_token_ids=[tokenizer.eos_token_id]
    )
).result()
```

LoraConfig
Configuration for LoRA adaptation.
Fields:
| Field | Type | Default | Description |
|---|---|---|---|
| rank | int | 32 | Rank dimension for low-rank matrices (max: 64) |
| seed | int | None | Initialization seed |
| train_unembed | bool | True | Train unembedding layer |
| train_mlp | bool | True | Train MLP layers |
| train_attn | bool | True | Train attention layers |
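The `rank` field controls the size of the two low-rank factors that standard LoRA trains in place of a full weight update (W' = W + B @ A, with A of shape (rank, d_in) and B of shape (d_out, rank)). A pure-Python sketch of why a small rank is cheap; MinT manages the actual adapter weights server-side:

```python
def lora_delta(B, A):
    """Dense delta-weight matrix B @ A from the two low-rank factors."""
    d_out, rank, d_in = len(B), len(A), len(A[0])
    return [[sum(B[i][r] * A[r][j] for r in range(rank))
             for j in range(d_in)]
            for i in range(d_out)]

# rank-1 example: a 3x4 delta stored as 3 + 4 numbers instead of 12
B = [[1.0], [2.0], [0.0]]       # shape (d_out=3, rank=1)
A = [[0.5, 0.0, 1.0, 0.0]]      # shape (rank=1, d_in=4)
delta = lora_delta(B, A)        # dense 3x4 delta-weight matrix
```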
Example:
```python
training_client = service_client.create_lora_training_client(
    base_model="Qwen/Qwen3-4B-Instruct-2507",
    rank=16,
    train_mlp=True,
    train_attn=True,
    train_unembed=True
)
```

Checkpoint
Represents a saved model checkpoint.
Fields:
- `id` - Unique identifier
- `type` - "training" or "sampler"
- `timestamp` - Creation time
- `size` - File size in bytes
- `public` - Whether publicly accessible
TensorData
Wrapper for tensor data with conversion utilities.
Methods:
- `to_numpy()` - Convert to NumPy array
- `to_torch()` - Convert to PyTorch tensor
- `shape` - Get tensor dimensions
TrainingRun
Metadata about a training session.
Fields:
- `id` - Unique identifier
- `base_model` - Foundation model used
- `owner` - User who created the run
- `corrupted` - Whether the run encountered errors
- `checkpoints` - List of saved checkpoints
- `created_at` - Timestamp