
RoboMM: All-in-One Multimodal Large Model for Robotic Manipulation

🚩Project Page | 📑Paper | 🤗Data

This repository contains an unorganized version of the code for RoboMM: All-in-One Multimodal Large Model for Robotic Manipulation.

RoboMM: All-in-One Multimodal Large Model for Robotic Manipulation
Feng Yan*, Fanfan Liu*, Liming Zheng, Yufeng Zhong, Yiyang Huang, Zechao Guan, Chengjian Feng, Lin Ma
*Equal Contribution †Corresponding Authors

In recent years, robotics has advanced significantly through the integration of larger models and large-scale datasets. However, challenges remain in applying these models to 3D spatial interactions and in managing data collection costs. To address these issues, we propose the multimodal robotic manipulation model RoboMM, along with the comprehensive dataset RoboData. RoboMM enhances 3D perception through camera parameters and occupancy supervision. Building on OpenFlamingo, it incorporates a Modality-Isolation-Mask and multimodal decoder blocks, improving modality fusion and fine-grained perception. RoboData offers a complete evaluation system by integrating several well-known datasets, achieving the first fusion of multi-view images, camera parameters, depth maps, and actions; its space alignment facilitates comprehensive learning from diverse robotic datasets. Equipped with RoboData and the unified physical space, RoboMM is the first generalist policy that enables simultaneous evaluation across all tasks within multiple datasets, rather than focusing on a limited selection of data or tasks. Its design significantly enhances robotic manipulation performance, increasing the average sequence length on the CALVIN benchmark from 1.7 to 3.3 and ensuring cross-embodiment capabilities, achieving state-of-the-art results across multiple datasets. The code will be released following acceptance.

Performance

[Figure: Results]

🔥 Updates

  • 2024.12: We release the RoboMM paper on arXiv! We also release the training and inference code!

Training the model (using DDP)

Currently, the CALVIN data has been fully uploaded, and RoboMM can now be trained using the CALVIN dataset alone.

Download the data from Data and extract it. Modify the corresponding paths in the config file, then launch training with the following command.

bash tools/train.sh 8 --config ${config}
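For example, on a machine with 8 GPUs and a hypothetical config path (the actual file name depends on how you arrange the repository and data paths):

# 8 is the number of GPUs used by DDP; the config path below is illustrative
bash tools/train.sh 8 --config configs/robomm_calvin.json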

Evaluating the model

bash tools/test.sh 8 ${ckpt}
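For example, with a hypothetical path to a checkpoint produced by training (substitute your own checkpoint file):

# the checkpoint path below is illustrative; point it at your trained model
bash tools/test.sh 8 checkpoints/robomm_calvin/latest.pth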

✅ TODO

  • RoboMM training code
  • RoboMM inference code
  • RoboMM evaluation code
  • RoboMM training data
  • RoboMM model

Acknowledgment

CALVIN

Original: https://github.com/mees/calvin License: MIT

Meta-World

Original: https://github.com/Farama-Foundation/Metaworld License: MIT

LIBERO

Original: https://github.com/Lifelong-Robot-Learning/LIBERO License: MIT

RoboCasa

Original: https://github.com/robocasa/robocasa License: MIT

RoboMimic

Original: https://github.com/ARISE-Initiative/robomimic License: MIT

RoboCAS

Original: https://github.com/notFoundThisPerson/RoboCAS-v0 License: MIT

RLBench

Original: https://github.com/stepjam/RLBench License: MIT

Colosseum

Original: https://github.com/robot-colosseum/robot-colosseum License: MIT

ManiSkill2

Original: https://github.com/haosulab/ManiSkill/tree/v0.5.3 License: Apache

OpenAI CLIP

Original: https://github.com/openai/CLIP License: MIT

OpenFlamingo

Original: https://github.com/mlfoundations/open_flamingo License: MIT

RoboFlamingo

Original: https://github.com/RoboFlamingo/RoboFlamingo License: MIT

RoboUniview

Original: https://github.com/RoboUniview/RoboUniview License: MIT

Cite our work:

@misc{yan2024robomm,
      title={RoboMM: All-in-One Multimodal Large Model for Robotic Manipulation}, 
      author={Feng Yan and Fanfan Liu and Liming Zheng and Yufeng Zhong and Yiyang Huang and Zechao Guan and Chengjian Feng and Lin Ma},
      year={2024},
      eprint={2412.07215},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2412.07215}, 
}
