Habitat 2.0 Overview

An overview of Habitat 2.0, with documentation, quick start code, and instructions for reproducing the benchmark results.

Quick Start

To get started with Habitat 2.0, see the quick start Colab tutorial and the Gym API tutorial.
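
For example, the skill environments can be created through the Gym API in a few lines. A minimal sketch, assuming the HabitatPick-v0 environment ID registered by habitat.utils.gym_definitions (names may differ between versions):

import gym
import habitat.utils.gym_definitions  # noqa: F401 -- registers the Habitat gym environments

# Create the pick-skill environment and roll it out with random actions.
env = gym.make("HabitatPick-v0")
observations = env.reset()
done = False
while not done:
    observations, reward, done, info = env.step(env.action_space.sample())
env.close()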

Local Installation

See the Habitat Lab README for steps to install Habitat.

Interactive Play Script

Test the Habitat environments using your keyboard and mouse to control the robot. On your local machine with a display connected, run the following:

python examples/interactive_play.py --never-end

You may be asked to first install a specific version of PyGame. This script works on Linux or macOS. For more information about the interactive play script, see the documentation string at the top of the file.

RL Training with Habitat Baselines

Habitat includes an implementation of DD-PPO (Decentralized Distributed Proximal Policy Optimization). As an example, start training a pick skill policy with:

python -u habitat_baselines/run.py --exp-config habitat_baselines/config/rearrange/ddppo_pick.yaml --run-type train

Find the complete list of RL configurations here; any config starting with “ddppo” can be substituted into --exp-config. See here for more information on how to run with Habitat Baselines.
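
The trailing KEY value overrides accepted by run.py (used below, for example, to set CHECKPOINT_FOLDER) can also be applied in Python. A minimal sketch using habitat_baselines' get_config; the NUM_ENVIRONMENTS key is an assumption and may be named differently in your version:

from habitat_baselines.config.default import get_config

# Load the pick-skill DD-PPO config and override a value, mirroring the
# trailing "KEY value" syntax of run.py.
config = get_config(
    "habitat_baselines/config/rearrange/ddppo_pick.yaml",
    ["NUM_ENVIRONMENTS", "4"],
)
print(config.NUM_ENVIRONMENTS)  # -> 4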

Home Assistant Benchmark (HAB) Tasks

To run the HAB tasks, use any of the training configurations here. For example, to run monolithic RL training on the Tidy House task, run:

python -u habitat_baselines/run.py --exp-config habitat_baselines/config/rearrange/hab/ddppo_tidy_house.yaml --run-type train
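
The HAB tasks are also exposed through the Gym API from the quick start. A hedged sketch, assuming a HabitatTidyHouse-v0 ID is registered (task IDs may differ between versions):

import gym
import habitat.utils.gym_definitions  # noqa: F401 -- registers the Habitat gym environments

# Instantiate the full TidyHouse task through the same interface as the skills.
env = gym.make("HabitatTidyHouse-v0")
observations = env.reset()
print(env.action_space)
print(env.observation_space)
env.close()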

Task-Planning with Skills RL Baseline

Here we detail how to run the Task Planning with Skills trained via Reinforcement Learning (TP-SRL) baseline from the Habitat 2.0 paper. This method uses a task planner with a Planning Domain Definition Language (PDDL) task specification to sequence low-level skill policies trained independently with reinforcement learning (RL).

  1. First, train the skills via reinforcement learning. For example, to train the place policy, run:
python -u habitat_baselines/run.py --exp-config habitat_baselines/config/rearrange/ddppo_place.yaml --run-type train CHECKPOINT_FOLDER ./place_checkpoints/
  2. To work on HAB tasks, you must also train pick, nav_to_obj, open_cab, close_cab, open_fridge, and close_fridge policies. To do so, substitute the other skill's name for place in the command above (config path and checkpoint folder).
  3. By default, the TP-SRL baseline looks for skill checkpoints at data/models/[skill name].pth in the Habitat Lab directory, as configured here for each skill. Either edit tp-srl.yaml to point to the skill checkpoints you would like to evaluate, or copy the model checkpoints into data/models/ (see the sketch at the end of this section).
  4. Evaluate the TP-SRL baseline on the tidy_house HAB task via:
python -u habitat_baselines/run.py --exp-config habitat_baselines/config/rearrange/hab/tp_srl.yaml --run-type eval BASE_TASK_CONFIG_PATH configs/tasks/rearrange/tidy_house.yaml

Evaluate on different HAB tasks by changing the BASE_TASK_CONFIG_PATH. The TP-SRL baseline only runs in evaluation mode.
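
A minimal sketch of step 3, assuming each skill was trained with CHECKPOINT_FOLDER ./<skill>_checkpoints/ as in step 1 (the folder layout and checkpoint naming are assumptions; adjust them to your setup):

import glob
import os
import shutil

# Copy the newest checkpoint of each trained skill to data/models/<skill>.pth,
# where the TP-SRL baseline looks for skills by default.
SKILLS = [
    "pick", "place", "nav_to_obj",
    "open_cab", "close_cab", "open_fridge", "close_fridge",
]
os.makedirs("data/models", exist_ok=True)
for skill in SKILLS:
    ckpts = sorted(glob.glob(f"./{skill}_checkpoints/*.pth"), key=os.path.getmtime)
    assert ckpts, f"no checkpoints found for {skill}"
    shutil.copy(ckpts[-1], f"data/models/{skill}.pth")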

Running the Benchmark

To reproduce the benchmark table from the Habitat 2.0 paper, follow these steps:

  1. Download the benchmark assets:
python -m habitat_sim.utils.datasets_download --uids hab2_bench_assets
  2. Copy the benchmark episodes into the data folder:
cp data/hab2_bench_assets/bench_scene.json.gz data/ep_datasets/
  3. Run the benchmark:
bash scripts/bench_runner.sh
  4. Generate the results table:
python scripts/plot_bench.py
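
For a quick local sanity check before comparing against the paper's numbers, the Gym API from the quick start can be used to probe raw step throughput. A rough, illustrative sketch only (the HabitatPick-v0 ID is an assumption; the official numbers come from the scripts above):

import time

import gym
import habitat.utils.gym_definitions  # noqa: F401 -- registers the Habitat gym environments

# Time random-action steps in a single environment. This is only a coarse
# probe; the benchmark scripts sweep settings and process counts.
env = gym.make("HabitatPick-v0")
env.reset()
num_steps = 200
start = time.time()
for _ in range(num_steps):
    _, _, done, _ = env.step(env.action_space.sample())
    if done:
        env.reset()
print(f"{num_steps / (time.time() - start):.1f} steps per second")
env.close()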