Manipulate-Anything:
Automating Real-World Robots using Vision-Language Models

1University of Washington 2NVIDIA
3Allen Institute for Artificial Intelligence 4Universidad Católica San Pablo

* Equal contribution

Abstract

Large-scale endeavors like RT-1 and widespread community efforts such as Open-X-Embodiment have contributed to growing the scale of robot demonstration data. However, there is still an opportunity to improve the quality, quantity, and diversity of robot demonstration data. Although vision-language models have been shown to automatically generate demonstration data, their utility has been limited to environments with privileged state information, they require hand-designed skills, and they can only interact with a few object instances. We propose MANIPULATE-ANYTHING, a scalable automated demonstration-generation method for real-world robotic manipulation. Unlike prior work, our method operates in real-world environments without any privileged state information or hand-designed skills, and it can manipulate any static object. We evaluate our method in two setups. First, MANIPULATE-ANYTHING successfully generates trajectories for all 5 real-world and 12 simulation tasks, significantly outperforming existing methods like VoxPoser. Second, MANIPULATE-ANYTHING’s demonstrations can train more robust behavior cloning policies than human demonstrations or data generated by VoxPoser and Code-As-Policies. We believe MANIPULATE-ANYTHING can be a scalable method both for generating data for robotics and for solving novel tasks in a zero-shot setting.

Real World Results

[Video gallery: Manipulate-Anything applied to real-world tasks]

Simulation

[Video gallery: generated data from each method applied to each task, and trained instances of PerAct using each method's data for each task]
Manipulate-Anything Framework

The process begins by inputting a scene representation and a natural language task instruction into a VLM, which identifies objects and determines sub-goals. For each sub-goal, we provide multi-view images, verification conditions, and task goals to the action generation module, producing a task-specific grasp pose or action code. This leads to a temporary goal state, assessed by the sub-goal verification module for error recovery. Once all sub-goals are achieved, we filter the trajectories to obtain successful demonstrations for downstream policy training.
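A minimal sketch of this loop is shown below. It assumes the VLM queries, rendering, execution, and verification steps are supplied as callables; all names here are illustrative placeholders rather than the released implementation.

```python
from typing import Any, Callable, List, Optional

# Hypothetical sketch of the Manipulate-Anything data-generation loop.
# None of these function or parameter names come from the released code.

def collect_demonstration(
    scene: Any,
    instruction: str,
    decompose: Callable[[Any, str], List[str]],      # VLM: scene + task -> sub-goals
    render: Callable[[Any], Any],                     # scene -> multi-view observations
    generate_action: Callable[[Any, str], Any],       # action generation module
    execute: Callable[[Any, Any], tuple],             # apply action, return (scene, transition)
    verify: Callable[[Any, str], bool],               # sub-goal verification module
    max_retries: int = 3,
) -> Optional[list]:
    """Return a trajectory if every sub-goal is verified, else None."""
    trajectory = []
    for subgoal in decompose(scene, instruction):
        for _ in range(max_retries):                  # retry for error recovery
            action = generate_action(render(scene), subgoal)
            scene, transition = execute(scene, action)
            trajectory.append(transition)
            if verify(render(scene), subgoal):
                break                                 # sub-goal reached
        else:
            return None                               # sub-goal never reached; discard attempt
    return trajectory                                 # kept only if all sub-goals succeed
```

Trajectories returned by such a loop would then be filtered, so only verified successes are passed on as demonstrations for downstream policy training.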

Action Generation Module

This module generates two types of actions: object-centric and agent-centric. For object-centric actions, which require manipulating an object, we leverage a foundation grasp model to generate all suitable grasp candidates. Next, we use a VLM to detect the target object in multi-view frames and, given the candidate grasp poses and the target sub-goal, query the VLM to select the best viewpoint. We then filter the candidates and select the optimal grasp for the sub-goal. For agent-centric actions, the viewpoint-selection process is the same, but the goal is to output code representing the change in the end-effector pose from the current frame.
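The sketch below illustrates how these two branches could be organized, assuming the foundation grasp model and the VLM are exposed as callables; all parameter names are hypothetical and do not correspond to the released code.

```python
from typing import Any, Callable, Sequence

# Illustrative sketch of the two action-generation branches (placeholder names).

def generate_action(
    views: Sequence[Any],                                      # multi-view observations
    subgoal: str,
    object_centric: bool,
    grasp_model: Callable[[Sequence[Any]], list],              # -> candidate grasp poses
    select_view: Callable[[Sequence[Any], list, str], int],    # VLM picks best viewpoint
    select_grasp: Callable[[list, int, str], Any],             # filter + pick grasp for sub-goal
    delta_pose_code: Callable[[Any, str], str],                # VLM writes end-effector delta code
) -> Any:
    if object_centric:
        candidates = grasp_model(views)                        # all suitable grasps
        view_idx = select_view(views, candidates, subgoal)     # VLM-chosen viewpoint
        return select_grasp(candidates, view_idx, subgoal)     # task-specific grasp pose
    view_idx = select_view(views, [], subgoal)                 # same viewpoint selection
    return delta_pose_code(views[view_idx], subgoal)           # code for the pose change, executed downstream
```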

Action Distribution

We compare the action distribution of data generated by various methods against human-generated demonstrations from RLBench on the same set of tasks. We observe a high similarity between the distribution of our generated data and the human-generated data. This is further supported by the CD computed between each method's data and the RLBench data, for which our method yields the lowest value (CD = 0.056).
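For concreteness, a symmetric Chamfer-style distance between two sets of action vectors could be computed as below; this assumes CD refers to such a set-to-set distance, and the exact metric and normalization used in the evaluation may differ.

```python
import numpy as np

def chamfer_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Chamfer distance between two sets of action vectors (n, d) and (m, d)."""
    # Pairwise squared Euclidean distances, shape (n, m).
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in both directions.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

# Hypothetical usage: 8-D actions (e.g., position + quaternion + gripper) as rows.
generated = np.random.rand(100, 8)
human = np.random.rand(120, 8)
print(chamfer_distance(generated, human))
```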