DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping

1Institute for AI, Peking University, 2PKU-PsiBot Joint Lab, 3HKUST (Guangzhou), 4University of Pennsylvania
*Equal contribution, Corresponding authors

Abstract

Dexterous grasping remains a fundamental yet challenging problem in robotics. A general-purpose robot must be capable of grasping diverse objects in arbitrary scenarios. However, existing research typically relies on restrictive assumptions, such as single-object settings or limited environments, and therefore shows constrained generalization. We present DexGraspVLA, a hierarchical framework for robust generalization in language-guided general dexterous grasping and beyond. It utilizes a pre-trained vision-language model (VLM) as the high-level planner and learns a diffusion-based low-level action controller. The key insight behind its generalization is to iteratively transform diverse language and visual inputs into domain-invariant representations via foundation models, on which imitation learning can be applied effectively because domain shift is alleviated. Notably, our method achieves a dexterous grasping success rate above 90% across thousands of challenging unseen cluttered scenes. Empirical analysis confirms that the model's internal behavior remains consistent across environmental variations, validating our design. DexGraspVLA also, for the first time, simultaneously demonstrates free-form long-horizon prompt execution, robustness to adversarial objects and human disturbance, and failure recovery. An extended application to nonprehensile grasping further demonstrates its generality.

Method

DexGraspVLA adopts a hierarchical architecture composed of an off-the-shelf VLM-based high-level planner and a diffusion-based low-level controller. Given a cluttered scene, the planner grounds the user prompt, e.g., "clear the table", in the observation and proposes grasping instructions \(\{l_i\}\) sequentially. For each instruction \(l\), e.g., "grasp the cookie", the planner identifies the target object \(A\) from the head image \(\mathbf{I}_{t_0}^h\) and marks its bounding box \((x_1^A, y_1^A, x_2^A, y_2^A)\) at initial time \(t_0\).

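For concreteness, the minimal sketch below shows one way the planner's grounding step could be represented in code; the `GraspInstruction` structure and the `query_vlm` callable are illustrative assumptions, not interfaces from the released system.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

import numpy as np


@dataclass
class GraspInstruction:
    """One grasping instruction l proposed by the high-level planner (assumed structure)."""
    text: str                        # e.g., "grasp the cookie"
    bbox: Tuple[int, int, int, int]  # (x1^A, y1^A, x2^A, y2^A) of target A in the head image


def propose_instruction(
    head_image: np.ndarray,
    user_prompt: str,
    query_vlm: Callable[[np.ndarray, str], Tuple[str, Tuple[int, int, int, int]]],
) -> GraspInstruction:
    """Ground the user prompt in the head image I^h_{t0} and return the next instruction.

    `query_vlm` is a hypothetical callable wrapping the pre-trained VLM; it is assumed
    to return the next target object's description and its bounding box.
    """
    text, bbox = query_vlm(head_image, user_prompt)
    return GraspInstruction(text=text, bbox=bbox)
```
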
The controller consists of four parts (a minimal sketch of the resulting forward pass follows the list):

  1. Two segmentation models: SAM, which obtains the target object's mask \(\mathbf{m}_{t_0}\) at \(t_0\), and Cutie, a video object segmentation model that continuously tracks the mask \(\mathbf{m}_t\) throughout each grasping process.
  2. Three vision encoders: two frozen DINOv2 encoders that extract features from the third-person head-camera image \(\mathbf{I}_t^h\) and the first-person wrist-camera image \(\mathbf{I}_t^w\), and a trainable ViT that encodes the mask \(\mathbf{m}_t\).
  3. Three MLP projectors that map the visual features and the robot's proprioceptive state into a shared feature space, forming a feature sequence.
  4. A DiT (diffusion transformer) that, conditioned on this feature sequence, predicts an action chunk \(\mathbf{a}_t, \ldots, \mathbf{a}_{t+H-1}\).

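A minimal PyTorch-style sketch of how these components could be wired together at a single control step is given below; module names, feature dimensions, and the DiT interface are assumptions for illustration rather than the released implementation.

```python
import torch
import torch.nn as nn


def mlp_projector(in_dim: int, out_dim: int) -> nn.Module:
    """Small MLP mapping one feature stream into the shared token space."""
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.GELU(), nn.Linear(out_dim, out_dim))


class DexGraspControllerSketch(nn.Module):
    """Illustrative per-step forward pass of the low-level controller (assumed shapes/names)."""

    def __init__(self, dino_head, dino_wrist, mask_vit, dit,
                 dino_dim: int = 384, state_dim: int = 32, d_model: int = 512):
        super().__init__()
        self.dino_head = dino_head    # frozen DINOv2 for the head-camera image I_t^h
        self.dino_wrist = dino_wrist  # frozen DINOv2 for the wrist-camera image I_t^w
        self.mask_vit = mask_vit      # trainable ViT encoding the tracked mask m_t (assumed to emit d_model tokens)
        self.dit = dit                # diffusion transformer predicting the action chunk
        # Three MLP projectors: head features, wrist features, and proprioceptive state.
        self.proj_head = mlp_projector(dino_dim, d_model)
        self.proj_wrist = mlp_projector(dino_dim, d_model)
        self.proj_state = mlp_projector(state_dim, d_model)

    def forward(self, img_head, img_wrist, mask, proprio):
        with torch.no_grad():                       # DINOv2 encoders stay frozen
            f_head = self.dino_head(img_head)       # (B, N_h, dino_dim) patch tokens
            f_wrist = self.dino_wrist(img_wrist)    # (B, N_w, dino_dim)
        tokens = torch.cat([
            self.proj_head(f_head),
            self.proj_wrist(f_wrist),
            self.mask_vit(mask),                    # (B, N_m, d_model)
            self.proj_state(proprio).unsqueeze(1),  # proprioceptive state as one token
        ], dim=1)                                   # feature sequence conditioning the DiT
        # The DiT denoises the action chunk a_t ... a_{t+H-1} conditioned on `tokens`
        # (the diffusion sampling loop is abstracted away in this sketch).
        return self.dit(tokens)                     # (B, H, action_dim)
```
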
During the controller's grasping process, the planner monitors the execution and triggers a scripted placing motion when grasping succeeds. After each grasping attempt, the planner resets the robot and proposes a new grasping instruction. This process continues until the user prompt is fully completed.

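The interplay between planner and controller can be summarized by the sketch below; every object and method (`propose_instruction`, `grasp_succeeded`, `scripted_place`, and so on) is a hypothetical placeholder for the components described above, not an API of the released system.

```python
def run_user_prompt(user_prompt, planner, controller, robot, max_attempts=10):
    """Illustrative planner-controller loop; all interfaces here are assumed."""
    for _ in range(max_attempts):
        obs = robot.observe()                             # head/wrist images + proprioception
        if planner.prompt_completed(user_prompt, obs):
            break                                         # user prompt fully completed
        instr = planner.propose_instruction(obs.head_image, user_prompt)
        controller.init_mask(obs.head_image, instr.bbox)  # SAM mask at t0, then Cutie tracking
        while not planner.grasp_succeeded(obs) and not planner.grasp_failed(obs):
            action_chunk = controller.act(obs)            # tracks m_t and predicts a_{t:t+H-1}
            robot.execute(action_chunk)
            obs = robot.observe()
        if planner.grasp_succeeded(obs):
            robot.scripted_place()                        # scripted placing motion on success
        robot.reset()                                     # reset before the next instruction
```
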
The controller is trained on a dataset consisting of 2,094 successful grasping episodes in cluttered scenes. These demonstrations are collected at typical human motion speeds, with each episode taking approximately 3.5 seconds. In total, this amounts to roughly two hours of data.

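These reported figures are mutually consistent:

\[
2{,}094 \text{ episodes} \times 3.5\,\text{s/episode} \approx 7{,}330\,\text{s} \approx 2.0\,\text{h}.
\]
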
Performance Demonstrations

Dexterous Grasping in Unseen Cluttered Scenes



Lighting Generalization



Background Generalization



Grasping Small Objects



Grasping Industrial Objects



Re-grasps



Human Disturbance



Long-horizon Grasping



Extended Application: Nonprehensile Grasping

To evaluate its generality, we extend DexGraspVLA to nonprehensile grasping, where the robot first repositions an object that cannot be grasped directly toward the table edge before executing a stable grasp. The controller is trained on 1,029 human demonstrations. Below are some deployment results.



Robot Shakes Hands with a Human

Conclusion

We present DexGraspVLA, a hierarchical VLA framework aiming for robust generalization in language-guided dexterous grasping and beyond. By leveraging a pre-trained VLM as the high-level planner and vision foundation models in the low-level controller, the system transforms multimodal inputs into domain-invariant representations and learns robust closed-loop policies via imitation learning. Our large-scale evaluations show over 90% grasping success across thousands of unseen cluttered scenes in a zero-shot setting, with empirical evidence of consistent internal behavior. DexGraspVLA also handles free-form long-horizon prompts, recovers from failures, and extends to nonprehensile grasping, demonstrating broad applicability.

BibTeX

@misc{zhong2025dexgraspvla,
      title={DexGraspVLA: A Vision-Language-Action Framework Towards General Dexterous Grasping}, 
      author={Yifan Zhong and Xuchuan Huang and Ruochong Li and Ceyao Zhang and Zhang Chen and Tianrui Guan and Fanlian Zeng and Ka Num Lui and Yuyao Ye and Yitao Liang and Yaodong Yang and Yuanpei Chen},
      year={2025},
      eprint={2502.20900},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2502.20900}, 
}