AI Habitat


What is Embodied AI?

Embodied AI is the study of intelligent systems with a physical or virtual embodiment (robots and egocentric personal assistants). The embodiment hypothesis is the idea that “intelligence emerges in the interaction of an agent with an environment and as a result of sensorimotor activity”.

What is AI Habitat?

Habitat is a simulation platform for research in Embodied AI.

Our goal is to advance the science and engineering of Embodied AI. Imagine walking up to a home robot and asking “Hey robot – can you go check if my laptop is on my desk? And if so, bring it to me”. Or asking an egocentric AI assistant (sitting on your smart glasses): “Hey – where did I last see my keys?”.

AI Habitat enables training of such embodied AI agents (virtual robots and egocentric assistants) in a highly photorealistic & efficient 3D simulator, before transferring the learned skills to reality.

This empowers a paradigm shift from ‘internet AI’ based on static datasets (e.g. ImageNet, COCO, VQA) to embodied AI where agents act within realistic environments, bringing to the fore active perception, long-term planning, learning from interaction, and holding a dialog grounded in an environment.

Why Simulation?

Training/testing embodied AI agents in the real world is

  • Slow: the real world runs no faster than real time and cannot be parallelized,
  • Dangerous: poorly-trained agents can unwittingly harm themselves, the human wearing the egocentric device, others nearby, or the surrounding environment,
  • Expensive: both the agent and the environment(s) in which it operates are expensive,
  • Difficult to control/reproduce: replicating conditions (particularly corner-cases) or experiments is often difficult.

Simulations can run orders of magnitude faster than real-time and can be parallelized over a cluster; training/testing in simulation is safe, cheap, and enables fair systematic benchmarking of progress. Once a promising approach has been developed and tested in simulation, it can be transferred to physical platforms.

Why the name Habitat? Because that’s where AI agents live 🙂.

Overall, Habitat consists of Habitat-Sim, Habitat-Lab, and Habitat Challenge.

Habitat-Sim

A flexible, high-performance 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling (with built-in support for MatterPort3D, Gibson, Replica, and other datasets). When rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded, and reaches over 10,000 FPS multi-process on a single GPU!
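
For a sense of the API, here is a minimal sketch of loading a scene and stepping an agent in Habitat-Sim. It assumes habitat_sim is installed and that the bundled test scene (skokloster-castle.glb) has been downloaded; the scene path and some attribute names are assumptions and may differ slightly across Habitat-Sim versions.

import habitat_sim

# Simulator-level configuration: which 3D scene to load (assumed test-scene path).
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "data/scene_datasets/habitat-test-scenes/skokloster-castle.glb"

# Agent configuration with a single RGB camera sensor.
rgb_spec = habitat_sim.CameraSensorSpec()
rgb_spec.uuid = "rgb"
rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
rgb_spec.resolution = [480, 640]

agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb_spec]

# Create the simulator and step the agent with a discrete action.
sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
observations = sim.step("move_forward")
rgb = observations["rgb"]  # image array rendered from the agent's camera
sim.close()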

Habitat-Lab

Habitat-Lab is a modular high-level library for end-to-end development in embodied AI — defining embodied AI tasks (e.g. navigation, interaction, instruction following, question answering), configuring embodied agents (physical form, sensors, capabilities), training these agents (via imitation or reinforcement learning, or no learning at all as in classical SLAM), and benchmarking their performance on the defined tasks using standard metrics.
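
As a rough illustration of the high-level API, the sketch below loads a PointNav task and steps through one episode with random actions. The config path is an assumption (it has moved between Habitat-Lab releases), and the test scenes and episode dataset must already be downloaded.

import habitat

# Load an embodied AI task (PointNav) with a pre-specified virtual agent
# (the config path is assumed and varies across Habitat-Lab versions).
config = habitat.get_config("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
env = habitat.Env(config=config)

observations = env.reset()

# A trained policy would choose actions from the observations;
# here we simply sample random actions until the episode ends.
while not env.episode_over:
    observations = env.step(env.action_space.sample())

print(env.get_metrics())  # task metrics, e.g. success and SPL for navigation
env.close()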

Habitat Challenge

An annual autonomous navigation challenge (hosted on the EvalAI platform) that aims to benchmark and accelerate progress in embodied AI. Unlike classical ‘internet AI’ image dataset-based challenges (e.g., ImageNet LSVRC, COCO, VQA), this is a challenge where participants upload code, not predictions. The uploaded agents are evaluated in novel (unseen) environments to test for generalization.
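
Concretely, a submission implements an agent with a reset() and an act() method, and the evaluation harness steps that agent through held-out episodes. A minimal local sketch using habitat.Agent and habitat.Benchmark is below; the config path and the exact action format are assumptions that vary by challenge year and task.

import habitat

class ForwardOnlyAgent(habitat.Agent):
    """Trivial baseline that always moves forward."""

    def reset(self):
        pass

    def act(self, observations):
        return {"action": "MOVE_FORWARD"}

# Locally, habitat.Benchmark runs the agent over a set of episodes and averages
# the task metrics; challenge submissions are evaluated the same way, but on
# unseen environments (the config path below is an assumption).
benchmark = habitat.Benchmark("benchmark/nav/pointnav/pointnav_habitat_test.yaml")
print(benchmark.evaluate(ForwardOnlyAgent(), num_episodes=10))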

The first iteration (in 2019) of this challenge was held in conjunction with the Habitat: Embodied Agents Challenge and Workshop at CVPR 2019. It received over 150 competition entries from 16 teams (across the 2 challenge tracks) and ~75 workshop attendees. The 2020 Habitat Challenge was held in conjunction with a special 2-day Embodied AI workshop at CVPR 2020. It received 563 submissions from 27 teams (across all tracks).

Team: Current Contributors

Team: Past Contributors

Habitat Affiliations

Citing Habitat

If you use the Habitat platform in your research, please cite the following paper:

@inproceedings{habitat19iccv,
  title     =     {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
  author    =     {{Manolis Savva*} and {Abhishek Kadian*} and {Oleksandr Maksymets*} and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  booktitle =     {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year      =     {2019}
}

Contact

Reach out at habitat@fb.com with any questions, suggestions, and feedback. We also have a dev Slack channel; please follow this link to be added to the channel.

Acknowledgments

The Habitat project would not have been possible without the support and contributions of many individuals. We are grateful to Angel Xuan Chang, Devendra Singh Chaplot, Xinlei Chen, Georgia Gkioxari, Daniel Gordon, Leonidas Guibas, Saurabh Gupta, Jerry (Zhi-Yang) He, Rishabh Jain, Or Litany, Joel Marcey, Dmytro Mishkin, Marcus Rohrbach, Amanpreet Singh, Yuandong Tian, Yuxin Wu, Fei Xia, Deshraj Yadav, Amir Zamir, and Jiazhi Zhang for their help.