VLA · Hardware: max 8 Hz · TensorRT INT8

OpenVLA grasping

Open-source VLA model for manipulation — no proprietary checkpoints.

About this demo

OpenVLA (a 7B-parameter open Vision-Language-Action model) running on Jetson Thor-series modules for general-purpose grasping. Fine-tune it on your own teleoperation data. Inspired by the Jetson AI Lab Research Group's OpenVLA review.
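VLA models in this family output low-level actions by discretizing each dimension of a 7-DoF end-effector command (xyz delta, rpy delta, gripper) into a fixed number of bins and emitting one token per dimension. A minimal NumPy sketch of that round trip, with bin count and action ranges as illustrative assumptions rather than the model's actual normalization statistics:

```python
import numpy as np

N_BINS = 256  # per-dimension action bins (OpenVLA-style discretization; illustrative)

def action_to_tokens(action, low, high):
    """Continuous action -> nearest bin index per dimension."""
    norm = (np.asarray(action, dtype=np.float64) - low) / (high - low)
    return np.clip((norm * N_BINS).astype(int), 0, N_BINS - 1)

def tokens_to_action(tokens, low, high):
    """Discrete tokens -> continuous values via bin centers."""
    centers = (np.asarray(tokens, dtype=np.float64) + 0.5) / N_BINS
    return low + centers * (high - low)

# Hypothetical ranges: xyz delta (m), rpy delta (rad), gripper [0, 1]
low = np.array([-0.05] * 3 + [-0.25] * 3 + [0.0])
high = np.array([0.05] * 3 + [0.25] * 3 + [1.0])

cmd = np.array([0.01, 0.0, -0.02, 0.1, 0.0, 0.0, 1.0])
tokens = action_to_tokens(cmd, low, high)
recovered = tokens_to_action(tokens, low, high)
```

The round trip loses at most one bin width of precision per dimension, which is the quantization error a fine-tuned checkpoint inherits regardless of the TensorRT export.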

Highlights

  • Open weights (no proprietary lock-in)
  • Fine-tunable on consumer GPU
  • TensorRT-optimized export
  • Drop-in replacement for proprietary VLAs

Supported robots

Any 6-DoF arm
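At the advertised 8 Hz ceiling, the controller has a 125 ms budget per cycle for inference plus arm I/O. A minimal fixed-rate loop sketch, with `dummy_policy` and the callbacks standing in as hypothetical placeholders for the OpenVLA inference call and your arm's driver:

```python
import time

CONTROL_HZ = 8.0  # matches the demo's stated max inference rate (TensorRT INT8)
PERIOD = 1.0 / CONTROL_HZ

def dummy_policy(obs):
    """Placeholder for a VLA inference call: returns a 7-DoF action."""
    return [0.0] * 6 + [1.0]  # 6-DoF delta + gripper command

def control_loop(get_obs, send_action, steps):
    """Infer, command the arm, then sleep off the remainder of the period."""
    for _ in range(steps):
        t0 = time.monotonic()
        action = dummy_policy(get_obs())
        send_action(action)
        # Sleep only the time left in this control period; if inference
        # overruns 125 ms, the loop simply runs at the slower natural rate.
        time.sleep(max(0.0, PERIOD - (time.monotonic() - t0)))

sent = []
control_loop(lambda: None, sent.append, steps=3)
```

Sleeping off the remainder (rather than a fixed `sleep(PERIOD)`) keeps the command rate steady even as inference latency varies between frames.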
