Update BabyAI to use MiniGrid NumPy-based renderer (mila-iqia#87)
* Started refactoring to eliminate PyQT dependency

* Try to fix travis dependency issue

* Travis debugging mila-iqia#2

* Replaced test_mission_gen.py by manual_control.py

* Ask people to install pyqt5 if they run gui.py

* Remove gui.py since we also have manual_control.py
maximecb authored Dec 16, 2019 · 1 parent eec7cb6 · commit 0b49fe4
Showing 9 changed files with 122 additions and 834 deletions.
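As a quick illustration of what the switch buys: with MiniGrid's NumPy-based renderer a level can be rendered to a plain array with no PyQT installed. This is a minimal sketch, not code from this commit; the environment id is taken from the README diff below, and the rest is assumed.

```
# Minimal sketch: render a BabyAI level to an RGB NumPy array without PyQT.
# Assumes gym and babyai are importable; not code from this commit.
import gym
import babyai  # noqa: F401  # importing babyai should register the BabyAI-* envs

env = gym.make("BabyAI-UnlockPickup-v0")
env.reset()

# With the NumPy-based MiniGrid renderer, rgb_array mode returns an ndarray.
frame = env.render(mode="rgb_array")
print(type(frame), frame.shape)  # e.g. <class 'numpy.ndarray'> (H, W, 3)
```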
4 changes: 2 additions & 2 deletions .travis.yml
@@ -9,9 +9,9 @@ before_install:
# command to install dependencies
install:
- pip3 install http://download.pytorch.org/whl/cpu/torch-0.4.1-cp35-cp35m-linux_x86_64.whl
- pip3
- pip3 install --editable .
- pip3 install flake8
- pip3 install scikit-build
- pip3 install --editable .

# command to run tests
script:
6 changes: 3 additions & 3 deletions README.md
@@ -40,8 +40,8 @@ Requirements:
- Python 3.5+
- OpenAI Gym
- NumPy
- PyQT5
- PyTorch 0.4.1+
- blosc

Start by manually installing PyTorch. See the [PyTorch website](http://pytorch.org/)
for installation instructions specific to your platform.
@@ -98,13 +98,13 @@ Models, logs and demos will be produced in this directory, in the folders `model
To run the interactive GUI application that illustrates the platform:

```
scripts/gui.py
scripts/manual_control.py
```

The level being run can be selected with the `--env` option, eg:

```
scripts/gui.py --env BabyAI-UnlockPickup-v0
scripts/manual_control.py --env BabyAI-UnlockPickup-v0
```

### The Levels
2 changes: 1 addition & 1 deletion docs/codebase.md
@@ -14,4 +14,4 @@ In `scripts`:
- use `train_intelligent_expert.py` to train an agent with an interactive imitation learning algorithm that incrementally grows the training set by adding demonstrations for the missions that the agent currently fails
- use `evaluate.py` to evaluate a trained agent
- use `enjoy.py` to visualize an agent's behavior
- use `gui.py` or `test_mission_gen.py` to see example missions from BabyAI levels
- use `manual_control.py` to visualize example missions from BabyAI levels
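The last bullet refers to `manual_control.py`; below is a stripped-down sketch of what a manual-control loop of that kind could look like. It is hypothetical and is not the actual contents of `scripts/manual_control.py`; the key-to-action mapping and the use of `input()` are assumptions made for illustration.

```
# Hypothetical manual-control loop (illustration only, not scripts/manual_control.py).
import gym
import babyai  # noqa: F401  # importing babyai should register the BabyAI-* envs

env = gym.make("BabyAI-UnlockPickup-v0")
obs = env.reset()
print("Mission:", obs["mission"])

# MiniGrid environments expose a discrete action enum (left/right/forward/...).
key_to_action = {
    "a": env.actions.left,
    "d": env.actions.right,
    "w": env.actions.forward,
    "p": env.actions.pickup,
    "t": env.actions.toggle,
}

while True:
    key = input("action [a/d/w/p/t, q quits]: ").strip().lower()
    if key == "q":
        break
    if key not in key_to_action:
        continue
    obs, reward, done, _ = env.step(key_to_action[key])
    env.render(mode="human")  # NumPy/matplotlib-based MiniGrid window, no PyQT
    if done:
        print("Episode done, reward:", reward)
        obs = env.reset()
```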
2 changes: 0 additions & 2 deletions environment.yaml
@@ -5,8 +5,6 @@ channels:
dependencies:
- python=3.6
- pytorch=0.4.1
- torchvision
- pyqt
- numpy
- blosc
- pip:
