Kinematics Lab

Open robotics demos

Reference apps that run on Kinematics Mini, Max, or any NVIDIA Jetson. Inspired by the Jetson AI Lab community. Every demo ships with source code.

20 demos

Teleop · Mini · Max

CockPit

Predefined teleop dashboard with dual cameras, 3D model, map, and controls.

60 Hz UI · <80 ms glass-to-glass · Open demo →

Dashboard · Mini · Max

My UI

Drag-and-drop dashboard — every CockPit widget, your layout.

Realtime · Open demo →

Navigation · Mini · Max

Missions

Multi-waypoint autonomous patrols with a state-machine orchestrator.

Continuous · <50 W on Mini · Open demo →
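
Under the hood this is the standard Nav2 waypoint-following pattern. A minimal sketch of the patrol loop, assuming Nav2's nav2_simple_commander API and placeholder map coordinates; the demo's state-machine orchestrator wraps mission logic around a loop like this.

```python
# Multi-waypoint patrol sketch using Nav2's simple commander.
# Assumes a running Nav2 stack; the coordinates below are placeholders.
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult

def make_pose(nav, x, y):
    pose = PoseStamped()
    pose.header.frame_id = "map"
    pose.header.stamp = nav.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0  # identity orientation; set a real yaw in practice
    return pose

rclpy.init()
nav = BasicNavigator()
nav.waitUntilNav2Active()

waypoints = [make_pose(nav, 1.0, 0.0), make_pose(nav, 1.0, 2.0), make_pose(nav, 0.0, 2.0)]
nav.followWaypoints(waypoints)
while not nav.isTaskComplete():
    pass  # a real orchestrator checks feedback, battery, and abort conditions here

if nav.getResult() == TaskResult.SUCCEEDED:
    print("patrol loop complete")
rclpy.shutdown()
```
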
System · Mini · Max

Health

System monitor — CPU/GPU/RAM, thermal, power rails, ROS node status.

1 Hz refresh · Open demo →
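
A minimal sampler for the same readings, assuming the jetson-stats (jtop) package and its background service; property layouts vary somewhat across jetson-stats versions.

```python
# 1 Hz system-health sampler sketch using jetson-stats (pip install jetson-stats).
# Requires the jtop service running on the Jetson.
from jtop import jtop

with jtop() as jetson:
    while jetson.ok():                      # refreshes roughly once per second
        print("cpu:", jetson.cpu)           # per-core load
        print("gpu:", jetson.gpu)           # GPU load and frequency
        print("mem:", jetson.memory)        # RAM and swap usage
        print("temp:", jetson.temperature)  # thermal zones
        print("power:", jetson.power)       # power rails
```
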
System · Mini · Max

User Profile

Themes, accent colors, gamepad bindings, speed profiles per operator.

Settings · Open demo →

Fleet · Max

Fleet Control

Coordinate multiple robots from one dashboard — status, missions, formation.

Realtime · scales to 50+ robots · Open demo →
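
The fan-out itself is plain ROS2 namespacing. A sketch, assuming each robot runs under a hypothetical /robot_<i> namespace; the dashboard layers status, missions, and formation logic on top of publishers like these.

```python
# Fleet fan-out sketch: send one command to every namespaced robot.
# The /robot_<i>/cmd_vel topic layout is an assumption for illustration.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class FleetCommander(Node):
    def __init__(self, n_robots: int):
        super().__init__("fleet_commander")
        self.pubs = [
            self.create_publisher(Twist, f"/robot_{i}/cmd_vel", 10)
            for i in range(n_robots)
        ]

    def stop_all(self):
        halt = Twist()  # all-zero velocities = stop
        for pub in self.pubs:
            pub.publish(halt)

rclpy.init()
commander = FleetCommander(n_robots=3)
commander.stop_all()  # a real node would spin and publish on a timer
rclpy.shutdown()
```
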
VLA · Max

GR00T VLA pick-and-place

Language-conditioned manipulation with NVIDIA GR00T N1.5.

10 Hz inference · TensorRT FP16 · Open demo →

VLA · Max

OpenVLA grasping

Open-source VLA model for manipulation — no proprietary checkpoints.

8 Hz · TensorRT INT8 · Open demo →
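
The upstream checkpoints load through Hugging Face transformers. A sketch following the published openvla/openvla-7b usage, with a placeholder camera frame and the BridgeData unnorm key; swap in your own task prompt and normalization key.

```python
# OpenVLA inference sketch via Hugging Face transformers.
# Mirrors the upstream openvla/openvla-7b example; paths are placeholders.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b", torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda")

image = Image.open("frame.png")  # current scene camera frame (placeholder)
prompt = "In: What action should the robot take to pick up the red block?\nOut:"

inputs = processor(prompt, image).to("cuda", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)  # 7-DoF action: xyz delta, rotation delta, gripper
```
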
Manipulation · Max

Diffusion Policy

Robust manipulation via denoising diffusion over action sequences.

30 Hz action chunks · Open demo →
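
The execution pattern is receding-horizon action chunking: denoise a chunk of future actions, run the first few, then re-plan. A skeleton sketch of that loop; denoise_action_chunk is a hypothetical stand-in for the trained denoising network.

```python
# Receding-horizon action-chunk loop, the control skeleton behind diffusion policies.
# `denoise_action_chunk` is a hypothetical placeholder for the learned model.
import numpy as np

HORIZON, EXECUTE = 16, 8  # predict 16 actions, execute 8, then re-plan
ACTION_DIM = 7

def denoise_action_chunk(obs: np.ndarray, steps: int = 10) -> np.ndarray:
    """Placeholder: iteratively refine a noise-initialized action chunk."""
    chunk = np.random.randn(HORIZON, ACTION_DIM)
    for _ in range(steps):
        chunk *= 0.9  # a real policy applies the learned denoiser here
    return chunk

obs = np.zeros(32)  # placeholder observation encoding
for _ in range(5):  # outer control loop
    chunk = denoise_action_chunk(obs)
    for action in chunk[:EXECUTE]:
        pass  # send `action` to the robot at the control rate
    # refresh `obs` from sensors here before re-planning
```
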
Manipulation · Mini · Max

LeRobot ACT policy

Hugging Face LeRobot framework — train and deploy with one config.

20 Hz · Open demo →
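
A deployment sketch modeled on LeRobot's pretrained-policy evaluation example; the checkpoint name, observation keys, and import path are dataset- and version-dependent, so treat them as placeholders.

```python
# LeRobot ACT inference sketch (import path varies across LeRobot versions).
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim_insertion_human")
policy.eval()
policy.reset()  # clear the action queue at episode start

observation = {
    "observation.state": torch.zeros(1, 14),                # joint positions
    "observation.images.top": torch.zeros(1, 3, 480, 640),  # RGB in [0, 1]
}
with torch.no_grad():
    action = policy.select_action(observation)  # next action from the current chunk
print(action.shape)
```
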
Perception · Mini · Max

Real-time object detection (YOLOv11)

TensorRT INT8 detection on dual RGBD streams, with 3D object positions.

45 FPS @ 640 px on Mini · 120 FPS on Max · Open demo →
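
A sketch of the export-then-detect flow with the ultralytics package, assuming a COCO-style calibration config for INT8; the demo additionally back-projects detections with depth to recover 3D positions.

```python
# YOLO11 TensorRT INT8 export and detection sketch (ultralytics package).
# Requires TensorRT on the device; image path and dataset config are placeholders.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")
model.export(format="engine", int8=True, imgsz=640, data="coco8.yaml")  # yolo11n.engine

trt_model = YOLO("yolo11n.engine")
results = trt_model("frame.jpg")  # one RGB frame from either camera
for box in results[0].boxes:
    print(int(box.cls), float(box.conf), box.xyxy.tolist())  # class, score, pixels
```
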
Navigation · Max

NVBlox 3D mapping

Isaac ROS volumetric mapping with TSDF + ESDF for navigation.

20 Hz integration · Open demo →

Navigation · Max

ReMEmbR long-horizon memory

Remember-and-reason navigation across long mission horizons.

<500 ms per step · Open demo →

Teleop · Max

Voice-activated control (ROSA)

Natural-language robot control via on-device LLM + ROS2.

~1.5 s end-to-end (mic → motion) · Open demo →
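
End to end the pipeline reduces to transcript → intent → velocity command. A skeleton sketch in plain rclpy; llm_parse is a hypothetical stand-in for the on-device LLM/ROSA agent, and /cmd_vel is an assumed topic name.

```python
# Voice-to-motion skeleton: ASR transcript -> parsed intent -> Twist command.
# `llm_parse` is a hypothetical placeholder for the LLM / ROSA agent.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

def llm_parse(transcript: str) -> Twist:
    """Placeholder intent parser mapping phrases to velocity commands."""
    cmd = Twist()
    if "forward" in transcript:
        cmd.linear.x = 0.3
    elif "turn left" in transcript:
        cmd.angular.z = 0.5
    return cmd  # empty Twist = stop

class VoiceTeleop(Node):
    def __init__(self):
        super().__init__("voice_teleop")
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)

    def handle(self, transcript: str):
        self.pub.publish(llm_parse(transcript))

rclpy.init()
node = VoiceTeleop()
node.handle("move forward")  # transcript would come from on-device ASR
rclpy.shutdown()
```
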
Navigation · Mini · Max

Visual SLAM (GPS-denied)

Real-time stereo + IMU SLAM for indoor and drone navigation.

30 Hz · ~12 W on Mini · Open demo →

Locomotion · Max

Humanoid locomotion fine-tune

Pull a foundation walking policy, fine-tune it in sim, and deploy to the G1.

1 kHz control loop · Open demo →

Navigation · Mini · Max

Autonomous quadruped patrol

Unitree Go2 with RTAB-Map SLAM, Nav2, and live RGBD streaming.

30 FPS · ~14 W · Open demo →

Manipulation · Max

Warehouse pick-and-stow

Cluttered-bin manipulation with F/T wrist sensor on Max.

Cycle: 4.2 s · Open demo →

Perception · Max

Edge NeRF capture

Neural Radiance Fields for environment capture, optimized for Jetson.

~3 min training per scene · Open demo →

VLA · Max

VLM agent in Isaac Sim

Vision-language reasoning over a simulated robotics scene.

~2 s per reasoning step · Open demo →

Built one we should add?

OpenBrain is MIT-licensed. Open a PR with your demo and we'll feature it here. Inspired by the Jetson AI Lab community model.

Contribute on GitHub
