Habitat-Lab is a modular high-level library for end-to-end development in embodied AI. It is designed to train agents to perform a wide variety of embodied AI tasks in indoor environments, as well as develop agents that can interact with humans in performing these tasks.
Towards this goal, Habitat-Lab is designed to support the following features:
- Flexible task definitions: allowing users to train agents in a wide variety of single and multi-agent tasks (e.g. navigation, rearrangement, instruction following, question answering, human following), as well as define novel tasks.
- Diverse embodied agents: configuring and instantiating a diverse set of embodied agents, including commercial robots and humanoids, specifying their sensors and capabilities.
- Training and evaluating agents: providing algorithms for single and multi-agent training (via imitation or reinforcement learning, or no learning at all as in SensePlanAct pipelines), as well as tools to benchmark their performance on the defined tasks using standard metrics.
- Human-in-the-loop interaction: providing a framework for humans to interact with the simulator, enabling the collection of embodied data or interaction with trained agents.
Habitat-Lab uses Habitat-Sim as the core simulator; refer to the Habitat-Sim documentation for details.
- More information on the datasets can be found in the habitat-lab repo, and also in the Habitat-Sim repo.
- Take note of the dataset download location: in the `data` folder, there should be `scene_datasets`, `datasets`, and `versioned_data` folders.
- Habitat-Lab has a bridge with ROS, which may be useful for the physical experiments.
- Please refer to the quickstart guide for more information on how to use Habitat-Lab; I will create a folder just for the quickstart.
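Based on the download paths above, the expected layout under `data/` (relative to the habitat-lab working directory) is roughly the following; the comments are inferred from the Source/Symlink lines printed by the download utility:

```
data/
├── scene_datasets/   # symlinks to downloaded scenes
├── datasets/         # task/episode datasets (e.g. pointnav)
└── versioned_data/   # actual downloaded payloads
```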
- Let's download some 3D assets using Habitat-Sim's Python data download utility. (Note: the `data` folder should be in the habitat-lab working directory.)
- Download (testing) 3D scenes:

```shell
python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path data/
```

Note that these testing scenes do not provide semantic annotations. On success you should see:

```
Dataset (habitat_test_scenes) successfully downloaded.
Source: '/home/tsaisplus/mrs_llm/habitat-lab/data/versioned_data/habitat_test_scenes'
Symlink: '/home/tsaisplus/mrs_llm/habitat-lab/data/scene_datasets/habitat-test-scenes'
```
- Download point-goal navigation episodes for the test scenes:

```shell
python -m habitat_sim.utils.datasets_download --uids habitat_test_pointnav_dataset --data-path data/
```

On success you should see:

```
Dataset (habitat_test_pointnav_dataset) successfully downloaded.
Source: '/home/tsaisplus/mrs_llm/habitat-lab/data/versioned_data/habitat_test_pointnav_dataset_1.0'
Symlink: '/home/tsaisplus/mrs_llm/habitat-lab/data/datasets/pointnav/habitat-test-scenes'
```
- To modify some of the configurations of the environment, you can use the `habitat.gym.make_gym_from_config` method, which creates a habitat environment from a configuration:

```python
config = habitat.get_config(
    "benchmark/rearrange/skills/pick.yaml",
    overrides=["habitat.environment.max_episode_steps=20"],
)
env = habitat.gym.make_gym_from_config(config)
```
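Once created, the environment follows the standard Gym reset/step API. The loop below sketches that usage; the `_StubEnv` class is a hypothetical stand-in so the snippet runs without habitat or scene data installed — in real code you would use the `env` returned by `make_gym_from_config` instead.

```python
import random

class _StubEnv:
    """Hypothetical stand-in for the Gym env returned by make_gym_from_config.
    Mimics the max_episode_steps=20 override from the config above."""
    def __init__(self, max_steps=20):
        self.max_steps = max_steps
        self._t = 0

    def reset(self):
        self._t = 0
        return {"rgb": None}  # placeholder observation

    def step(self, action):
        self._t += 1
        done = self._t >= self.max_steps  # episode ends at the step cap
        return {"rgb": None}, 0.0, done, {}

env = _StubEnv(max_steps=20)  # real code: habitat.gym.make_gym_from_config(config)
obs = env.reset()
done = False
steps = 0
while not done:
    obs, reward, done, info = env.step(random.choice([0, 1, 2]))
    steps += 1
print(steps)  # episode length, capped by max_episode_steps
```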
If you want to know more about what the different configuration key overrides do, you can use this reference.
See `examples/register_new_sensors_and_measures.py` for an example of how to extend Habitat-Lab from outside the source code.
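The extension mechanism is registry-based: you decorate a new sensor or measure class so habitat-lab can look it up by name. The toy code below sketches that pattern with a simplified stand-in registry (not habitat-lab's actual API; see the example script above for the real decorators and base classes):

```python
# Simplified sketch of a class registry, illustrating how habitat-lab-style
# registration works. All names here are illustrative stand-ins.
class Registry:
    def __init__(self):
        self._measures = {}

    def register_measure(self, cls):
        # Used as a decorator: record the class under its name, return it unchanged.
        self._measures[cls.__name__] = cls
        return cls

    def get_measure(self, name):
        return self._measures[name]

registry = Registry()

@registry.register_measure
class EpisodeStepCount:
    """Toy measure that counts steps taken in an episode."""
    def __init__(self):
        self._metric = 0

    def update_metric(self):
        self._metric += 1

# The framework can now instantiate the measure by name from a config string.
measure = registry.get_measure("EpisodeStepCount")()
measure.update_metric()
print(measure._metric)
```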
Our vectorized environments are very fast, but they are not very verbose. When using `VectorEnv`, some errors may be silenced, resulting in processes hanging or multiprocessing errors that are hard to interpret. We recommend setting the environment variable `HABITAT_ENV_DEBUG` to 1 when debugging (`export HABITAT_ENV_DEBUG=1`), as this will use the slower but more verbose `ThreadedVectorEnv` class. Do not forget to unset `HABITAT_ENV_DEBUG` (`unset HABITAT_ENV_DEBUG`) when you are done debugging, since `VectorEnv` is much faster than `ThreadedVectorEnv`.
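The selection implied by the note above can be sketched as follows; the two classes are empty stand-ins here, and the selection helper is illustrative, not habitat-lab's actual internals:

```python
import os

class VectorEnv: ...          # stand-in for the fast multiprocess env
class ThreadedVectorEnv: ...  # stand-in for the slower, verbose debug env

def pick_vector_env_cls():
    # HABITAT_ENV_DEBUG=1 switches to ThreadedVectorEnv for readable errors.
    # (Illustrative sketch; habitat-lab's actual logic may differ.)
    if os.environ.get("HABITAT_ENV_DEBUG", "0") == "1":
        return ThreadedVectorEnv
    return VectorEnv

os.environ["HABITAT_ENV_DEBUG"] = "1"
debug_cls = pick_vector_env_cls()      # ThreadedVectorEnv while debugging
os.environ.pop("HABITAT_ENV_DEBUG")
fast_cls = pick_vector_env_cls()       # back to VectorEnv once unset
```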
Browse the online Habitat-Lab documentation and the extensive tutorial on how to train your agents with Habitat. For Habitat 2.0, use this quickstart guide.
Can't find the answer to your question? Look through the common issues, or try asking the developers and community on our Discussions forum.
Common task and episode datasets used with Habitat-Lab.
Habitat-Lab includes reinforcement learning baselines (via PPO). For running PPO training on sample data and more details, refer to habitat_baselines/README.md.
ROS-X-Habitat (https://github.com/ericchen321/ros_x_habitat) is a framework that bridges the AI Habitat platform (Habitat Lab + Habitat Sim) with other robotics resources via ROS. Compared with Habitat-PyRobot, ROS-X-Habitat places emphasis on 1) leveraging Habitat Sim v2's physics-based simulation capability and 2) allowing roboticists to access simulation assets from ROS. The work has also been made public as a paper.
Note that ROS-X-Habitat was developed, and is maintained, by the Lab for Computational Intelligence at UBC; it is not officially supported by the Habitat Lab team. Please refer to the framework's repository for docs and discussions.