NOTE: MMF is still in beta and will replace the Pythia framework. To get the latest Pythia code, which does not contain the MMF changes, use the following command:
git clone --branch v0.3 https://github.com/facebookresearch/mmf pythia
MMF is a modular framework for vision and language multimodal research. Built on top of PyTorch, it features:
- Model Zoo: Reference implementations for state-of-the-art vision and language models, including LoRRA (SoTA on VQA and TextVQA), the Pythia model (VQA 2018 challenge winner), BAN and BUTD.
- Multi-Tasking: Support for multi-tasking, which allows training on multiple datasets together.
- Datasets: Built-in support for various datasets, including VQA, VizWiz, TextVQA, VisualDialog and COCO Captioning.
- Modules: Implementations of many commonly used layers in the vision and language domain.
- Distributed: Support for distributed training based on DataParallel as well as DistributedDataParallel.
- Unopinionated: Unopinionated about the dataset and model implementations built on top of it.
- Customization: Custom losses, metrics, scheduling, optimizers, TensorBoard; suits all your custom needs (a custom-loss sketch follows this list).
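The customization support follows a registry-based plugin pattern. Below is a minimal sketch of how a custom loss might be registered; the `example_logit_bce` key is an illustrative assumption, and the module path and `(sample_list, model_output)` interface may differ between Pythia and MMF versions, so consult the documentation for the current API.

```python
# Minimal sketch (not the canonical implementation): registering a custom loss.
# Assumes the MMF-style registry import path; older Pythia releases expose a
# similar registry under pythia.common.registry instead.
import torch.nn as nn
import torch.nn.functional as F

from mmf.common.registry import registry


@registry.register_loss("example_logit_bce")  # hypothetical key for illustration
class ExampleLogitBCE(nn.Module):
    """Binary cross-entropy over model logits, written against the assumed
    (sample_list, model_output) loss interface."""

    def forward(self, sample_list, model_output):
        scores = model_output["scores"]    # model predictions (logits)
        targets = sample_list["targets"]   # ground-truth answer targets
        return F.binary_cross_entropy_with_logits(scores, targets, reduction="mean")
```

Once registered, the loss can be selected by its key from a model or experiment config, the same way the built-in losses are.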
You can use MMF to bootstrap your next vision and language multimodal research project.
MMF can also act as a starter codebase for challenges built around vision and language datasets (e.g., the TextVQA challenge and the VQA challenge).
Follow installation instructions in the documentation.
Learn more about MMF here.
If you use MMF in your work, please cite:
@inproceedings{singh2018pythia,
  title={Pythia-a platform for vision \& language research},
  author={Singh, Amanpreet and Goswami, Vedanuj and Natarajan, Vivek and Jiang, Yu and Chen, Xinlei and Shah, Meet and Rohrbach, Marcus and Batra, Dhruv and Parikh, Devi},
  booktitle={SysML Workshop, NeurIPS},
  volume={2018},
  year={2018}
}
MMF is licensed under the BSD license, available in the LICENSE file.