Mind Lab Toolkit (MinT)

OpenPI VLA SDK

This page documents demos/embodied/openpi_vla_sdk.py in mint-quickstart.

What this demo does

  • Uses top-level mint for the Tinker-compatible client surface.
  • Imports mint.mint as mintx to access the MinT-only OpenPI / VLA extension surface.
  • Runs one minimal SDK-shaped VLA training round-trip: create OpenPI training client -> train_step -> save_weights_for_sampler.
  • Keeps the OpenPI-specific API out of top-level mint, so the namespace boundary stays explicit.

Namespace shape

import mint
import mint.mint as mintx

service_client = mint.ServiceClient()
training_client = mintx.create_openpi_training_client(service_client)

Use mintx (mint.mint) for the MinT-only OpenPI helper namespace.

Core data shape

model_input.chunks
  1. image: base_0_rgb
  2. image: left_wrist_0_rgb
  3. image: right_wrist_0_rgb
  4. encoded_text: prefix_tokens

loss_fn_inputs
  - state
  - target_tokens
  - weights
  - token_ar_mask
  - optional: logprobs + advantages

The SDK helper mintx.build_openpi_fast_datum(...) constructs this Datum shape for you.
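As an illustration of this shape, here is a minimal sketch using plain dicts in place of the SDK's Datum type. The field names follow the lists above; the dict layout, placeholder values, and the `build_demo_datum` helper name are assumptions for illustration — the real `mintx.build_openpi_fast_datum(...)` may return dedicated SDK classes.

```python
# Hypothetical sketch of the Datum shape with plain dicts; the actual
# mintx.build_openpi_fast_datum(...) helper may use dedicated SDK types.
def build_demo_datum(prefix_tokens, target_tokens):
    # model_input.chunks: three camera images plus the encoded text prefix.
    model_input = {
        "chunks": [
            {"type": "image", "name": "base_0_rgb"},
            {"type": "image", "name": "left_wrist_0_rgb"},
            {"type": "image", "name": "right_wrist_0_rgb"},
            {"type": "encoded_text", "name": "prefix_tokens", "tokens": prefix_tokens},
        ]
    }
    # loss_fn_inputs: per-token supervision for the cross-entropy loss.
    loss_fn_inputs = {
        "state": [0.0] * 8,                        # robot proprioceptive state (placeholder)
        "target_tokens": target_tokens,            # action tokens to predict
        "weights": [1.0] * len(target_tokens),     # per-token loss weights
        "token_ar_mask": [1] * len(target_tokens), # autoregressive attention mask
        # optional: "logprobs" and "advantages" for RL-style objectives
    }
    return {"model_input": model_input, "loss_fn_inputs": loss_fn_inputs}
```

A batch for `train_step` would then be a list of such datums.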

How to run

pip install git+https://github.com/MindLab-Research/mindlab-toolkit.git python-dotenv
export MINT_API_KEY=sk-...
python demos/embodied/openpi_vla_sdk.py

Use the MinT endpoint that matches your region:

  • Mainland China: https://mint-cn.macaron.xin/
  • Outside Mainland China: https://mint.macaron.xin/
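For example, the endpoint can be selected via the MINT_BASE_URL environment variable (listed under Parameters below):

```shell
# Outside Mainland China
export MINT_BASE_URL=https://mint.macaron.xin/
# Mainland China (use this instead):
# export MINT_BASE_URL=https://mint-cn.macaron.xin/
```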

Parameters (env vars)

  • MINT_API_KEY / TINKER_API_KEY: API key used for authentication (either name works)
  • MINT_BASE_URL / TINKER_BASE_URL: server endpoint (see the region list above)
  • MINT_OPENPI_SDK_BASE_MODEL: default openpi/pi0-fast-libero-low-mem-finetune
  • MINT_OPENPI_SDK_LORA_RANK: default 16
  • MINT_OPENPI_SDK_LR: default 0.003
  • MINT_OPENPI_SDK_SAMPLER_NAME: default mint-openpi-sdk-example
  • MINT_OPENPI_SDK_CREATE_TIMEOUT_SECONDS: default 1200
  • MINT_OPENPI_SDK_STEP_TIMEOUT_SECONDS: default 1200
  • MINT_OPENPI_SDK_SAVE_TIMEOUT_SECONDS: default 1200
  • MINT_OPENPI_SDK_TTL_SECONDS: default 3600
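These parameters can be resolved as in the sketch below. The defaults mirror the list above; the `read_config` helper and its dict layout are illustrative assumptions, not the demo script's actual structure.

```python
import os

# Sketch of env-var resolution with the documented defaults; the actual
# demo script may organize this differently.
def read_config():
    return {
        # MINT_* takes precedence, with TINKER_* as the fallback name.
        "api_key": os.environ.get("MINT_API_KEY") or os.environ.get("TINKER_API_KEY"),
        "base_url": os.environ.get("MINT_BASE_URL") or os.environ.get("TINKER_BASE_URL"),
        "base_model": os.environ.get(
            "MINT_OPENPI_SDK_BASE_MODEL", "openpi/pi0-fast-libero-low-mem-finetune"
        ),
        "lora_rank": int(os.environ.get("MINT_OPENPI_SDK_LORA_RANK", "16")),
        "lr": float(os.environ.get("MINT_OPENPI_SDK_LR", "0.003")),
        "sampler_name": os.environ.get(
            "MINT_OPENPI_SDK_SAMPLER_NAME", "mint-openpi-sdk-example"
        ),
        "ttl_seconds": int(os.environ.get("MINT_OPENPI_SDK_TTL_SECONDS", "3600")),
    }
```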

Main flow

def run_example():
    # Top-level mint client: the Tinker-compatible surface.
    service_client = mint.ServiceClient()
    # MinT-only extension surface creates the OpenPI training client.
    training_client = mintx.create_openpi_training_client(service_client, ...)

    info = training_client.get_info()
    # One training step over the demo batch with the cross-entropy loss.
    step_result = training_client.train_step(
        build_demo_batch(),
        "cross_entropy",
        adam_params=mint.types.AdamParams(learning_rate=0.003),
    ).result(...)

    # Export the updated weights so a sampler can load them.
    sampler = training_client.save_weights_for_sampler(
        "mint-openpi-sdk-example",
        ttl_seconds=3600,
    ).result(...)

Expected output shape

The script prints a Python dict with these keys:

  • model_id
  • model_name
  • lora_rank
  • train_step_metrics
  • sampler_path
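A quick sanity check of that shape might look like the following; the keys come from the list above, while every value here is a made-up placeholder rather than real service output.

```python
# Placeholder result mirroring the printed dict's keys; actual values
# are returned by the MinT service at run time.
result = {
    "model_id": "model-abc123",                                   # placeholder
    "model_name": "openpi/pi0-fast-libero-low-mem-finetune",
    "lora_rank": 16,
    "train_step_metrics": {"loss": 1.234},                        # placeholder
    "sampler_path": "mint://samplers/mint-openpi-sdk-example",    # placeholder
}

expected_keys = {"model_id", "model_name", "lora_rank",
                 "train_step_metrics", "sampler_path"}
assert set(result) == expected_keys
```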

Important notes

  • This is the main MinT SDK path for OpenPI / VLA.
  • Use mintx (mint.mint) for this helper instead of the default top-level mint namespace.
  • The current helper is pinned to openpi/pi0-fast-libero-low-mem-finetune with LoRA rank 16.
