Visual actionable affordance has emerged as a transformative approach in robotics, focusing on perceiving interaction regions before manipulation. Traditional methods rely on sampling pixels to identify successful interaction samples or on processing point clouds for affordance mapping, but these approaches are computationally intensive and struggle to adapt to diverse, dynamic environments. This paper introduces ManipGPT, a framework that predicts optimal interaction areas for articulated objects using a large pre-trained vision transformer (ViT). We created a dataset of 9.9k simulated and real images to bridge the visual sim-to-real gap and improve real-world applicability. By fine-tuning the vision transformer on this small dataset, we significantly improved part-level affordance segmentation, adapting the model's in-context segmentation capabilities to robot manipulation scenarios. Paired with an impedance adaptation policy, the generated part-level affordance masks enable effective manipulation in both simulated and real-world environments, eliminating the need for complex datasets or perception systems.
Our method processes an RGB image with a visual prompt to generate an affordance mask, which determines the contact point and manipulation direction.
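To make this step concrete, the sketch below shows one plausible way to turn a predicted affordance mask into a contact point and manipulation direction: select the highest-scoring pixel and approach along the inward surface normal at that pixel. The mask format, the normal-map input, and the function name `select_contact` are illustrative assumptions, not ManipGPT's exact interface.

```python
import numpy as np

def select_contact(affordance_mask: np.ndarray,
                   normals: np.ndarray,
                   threshold: float = 0.5):
    """Pick a contact pixel from a per-pixel affordance map and derive a
    manipulation direction from the surface normal at that pixel.

    affordance_mask: (H, W) float array in [0, 1] (assumed output format).
    normals:         (H, W, 3) unit surface normals, e.g. from a depth camera.
    """
    # Keep only pixels the model considers confidently actionable.
    candidates = np.argwhere(affordance_mask > threshold)
    if candidates.size == 0:
        raise ValueError("No pixel exceeds the affordance threshold.")

    # Choose the highest-scoring pixel as the contact point.
    scores = affordance_mask[candidates[:, 0], candidates[:, 1]]
    v, u = candidates[np.argmax(scores)]

    # Approach along the inward surface normal; the controller refines the
    # motion (push/pull) during execution.
    direction = -normals[v, u]
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    return (int(u), int(v)), direction
```

In practice the selected pixel would be back-projected to a 3D contact point using the camera intrinsics and depth before being sent to the robot.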
Our method fine-tunes a vision transformer on part-level affordance masks and integrates it with an impedance controller for real-world manipulation. The system pipeline of our approach is shown below.
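To illustrate the control side of the pipeline, the sketch below implements a basic Cartesian impedance law with a simple stiffness-adaptation rule. The `ImpedanceController` class, its gains, and the adaptation heuristic are assumptions for illustration, not the paper's actual adaptation policy.

```python
import numpy as np

class ImpedanceController:
    """Minimal Cartesian impedance law: F = K (x_d - x) + D (v_d - v).

    Gains and the adaptation rule are illustrative placeholders.
    """

    def __init__(self, stiffness: float = 300.0, damping: float = None):
        self.K = np.eye(3) * stiffness                      # translational stiffness [N/m]
        self.D = np.eye(3) * (damping if damping is not None
                              else 2.0 * np.sqrt(stiffness))  # critical-ish damping [N*s/m]

    def command(self, x, v, x_des, v_des=None):
        # Spring-damper force toward the desired pose along the manipulation
        # direction predicted from the affordance mask.
        if v_des is None:
            v_des = np.zeros(3)
        return self.K @ (x_des - x) + self.D @ (v_des - v)

    def adapt(self, tracking_error, scale: float = 1.1, max_stiffness: float = 1500.0):
        # Simple adaptation: stiffen when the end-effector lags the target.
        if np.linalg.norm(tracking_error) > 0.02:           # 2 cm threshold (assumed)
            self.K = np.minimum(self.K * scale, np.eye(3) * max_stiffness)
```

A compliant controller of this kind lets the robot follow the predicted manipulation direction while tolerating small errors in the estimated contact point and articulation axis.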
@inproceedings{kim2025manipgpt,
author = {Kim, Taewhan and Bae, Hojin and Li, Zeming and Li, Xiaoqi and Ponomarenko, Iaroslav and Wu, Ruihai and Dong, Hao},
title = {ManipGPT: Is Affordance Segmentation by Large Vision Models Enough for Articulated Object Manipulation?},
booktitle = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
year = {2025},
}