
BabyAI platform. A testbed for training agents to understand and execute language commands.


2023 update

All BabyAI environments are now part of the Minigrid library. This repository is not actively maintained.

Training RL agents on Minigrid (and BabyAI) environments can be done using this repository.

This repository still contains scripts which, if adapted to the Minigrid library, could be used, for example, to generate demonstrations and to train agents with imitation learning.

BabyAI 1.1

BabyAI is a platform used to study the sample efficiency of grounded language acquisition, created at Mila.

The master branch of this repository is updated frequently. If you are looking to replicate or compare against the baseline results, we recommend you use the BabyAI 1.1 branch and cite both:

@misc{hui2020babyai,
    title={BabyAI 1.1},
    author={David Yu-Tung Hui and Maxime Chevalier-Boisvert and Dzmitry Bahdanau and Yoshua Bengio},
    year={2020},
    eprint={2007.12770},
    archivePrefix={arXiv},
    primaryClass={cs.AI}
}

and the ICLR19 paper, which details the experimental setup and BabyAI 1.0 baseline results. Its source code is in the iclr19 branch:

@inproceedings{
  babyai_iclr19,
  title={Baby{AI}: First Steps Towards Grounded Language Learning With a Human In the Loop},
  author={Maxime Chevalier-Boisvert and Dzmitry Bahdanau and Salem Lahlou and Lucas Willems and Chitwan Saharia and Thien Huu Nguyen and Yoshua Bengio},
  booktitle={International Conference on Learning Representations},
  year={2019},
  url={https://openreview.net/forum?id=rJeXCo0cYX},
}

This README covers instructions for installation and troubleshooting; other instructions (e.g. on training, evaluation and demo generation) can be found in the repository's documentation.

Installation

Conda (Recommended)

If you are using conda, you can create a babyai environment with all the dependencies by running:

git clone https://github.com/mila-iqia/babyai.git
cd babyai
conda env create -f environment.yaml
source activate babyai

After that, execute the following commands to set up gym-minigrid:

cd ..
git clone https://github.com/maximecb/gym-minigrid.git
cd gym-minigrid
pip install --editable .

The last command installs gym-minigrid in editable mode. Move back to the babyai repository and install it in editable mode as well:

cd ../babyai
pip install --editable .
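
To confirm that both editable installs succeeded, a quick sanity check (not an official step) is to import the two packages:

import babyai        # should import without error after the editable install
import gym_minigrid  # likewise for gym-minigrid
print("BabyAI and gym-minigrid imported successfully")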

Finally, follow the instructions in the BabyAI Storage Path section below.

Manual Installation

Requirements:

  • Python 3.6+
  • OpenAI Gym
  • NumPy
  • PyTorch 0.4.1+
  • blosc

First, install PyTorch for your platform.

Then, clone this repository and install the other dependencies with pip3:

git clone https://github.com/mila-iqia/babyai.git
cd babyai
pip3 install --editable .

Finally, follow the instructions in the BabyAI Storage Path section below.

BabyAI Storage Path

Add this line to your .bashrc (Linux) or .bash_profile (Mac):

export BABYAI_STORAGE='/<PATH>/<TO>/<BABYAI>/<REPOSITORY>/<PARENT>'

where /<PATH>/<TO>/<BABYAI>/<REPOSITORY>/<PARENT> is the folder where you typed git clone https://github.com/mila-iqia/babyai.git earlier.

Models, logs and demos will be produced in this directory, in the folders models, logs and demos respectively.
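
As an illustration of this convention (a minimal sketch, not code taken from the library), a script would resolve its output folders roughly as follows:

import os

# Read the storage root set above; fall back to the current directory.
storage = os.environ.get('BABYAI_STORAGE', '.')
models_dir = os.path.join(storage, 'models')
logs_dir = os.path.join(storage, 'logs')
demos_dir = os.path.join(storage, 'demos')
print(models_dir, logs_dir, demos_dir)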

Downloading the demos

These can be downloaded here.

Before extracting, ensure the downloaded file has the following md5 checksum (obtained via md5sum): 1df202ef2bbf2de768633059ed8db64c

Then extract it:

gunzip -c copydemos.tar.gz | tar xvf -
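
If md5sum is not available, the checksum can also be computed with Python's standard library (assuming the archive was saved as copydemos.tar.gz):

import hashlib

md5 = hashlib.md5()
with open('copydemos.tar.gz', 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):  # read in 1 MiB chunks
        md5.update(chunk)
print(md5.hexdigest())  # should match 1df202ef2bbf2de768633059ed8db64c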

Using the pixels architecture does not work with imitation learning, because the demonstrations were not generated to use pixels.

Troubleshooting

If you run into error messages relating to OpenAI Gym, it may be that the versions of the libraries you have installed are incompatible. You can try upgrading specific libraries with pip3, e.g. pip3 install --upgrade gym. If the problem persists, please open an issue on this repository and paste a complete error message, along with some information about your platform (are you running Windows, Mac or Linux? Are you running this on a Mila machine?).
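
When reporting an issue, a short snippet like the following (assuming gym and PyTorch are installed) collects the version information worth pasting:

import platform
import gym
import torch

print("Platform:", platform.platform())
print("Python:", platform.python_version())
print("gym:", gym.__version__)
print("PyTorch:", torch.__version__)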

Pixel Observations

Please note that the default observation format is a partially observable view of the environment using a compact encoding, with 3 input values per visible grid cell, 7x7x3 values total. These values are not pixels. If you want to obtain an array of RGB pixels as observations instead, use the RGBImgPartialObsWrapper. You can use it as follows:

import gym
import babyai  # registers the BabyAI-* environments with gym
from gym_minigrid.wrappers import RGBImgPartialObsWrapper

env = gym.make('BabyAI-GoToRedBall-v0')
env = RGBImgPartialObsWrapper(env)  # agent's partial view rendered as RGB pixels

This wrapper, as well as other wrappers that change the observation format, can be found in the gym_minigrid.wrappers module.
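
For example, assuming the dictionary observation format used by gym-minigrid (with the agent's view under the 'image' key), the wrapped environment now returns pixels:

obs = env.reset()
print(obs['image'].shape)  # an RGB array, e.g. (56, 56, 3), instead of the 7x7x3 encoding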
