

AI Habitat enables training of embodied AI agents (virtual robots) in a highly photorealistic and efficient 3D simulator, before transferring the learned skills to reality. This empowers a paradigm shift from 'internet AI' based on static datasets (e.g. ImageNet, COCO, VQA) to embodied AI, where agents act within realistic environments, bringing to the fore active perception, long-term planning, learning from interaction, and holding a dialog grounded in an environment.

Why the name Habitat? Because that's where AI agents live 🙂

Habitat is a platform for embodied AI research that consists of Habitat-Sim, Habitat-API, and Habitat Challenge.


Habitat-Sim

A flexible, high-performance 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling (with built-in support for SUNCG, Matterport3D, Gibson and other datasets). When rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded, and reaches over 10,000 FPS multi-process on a single GPU!

GitHub repository:
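Throughput figures like those above come from timing a tight step loop. A minimal sketch of such a benchmark in pure Python, where `step_fn` is a hypothetical stand-in for the simulator's per-frame step call (e.g. what `habitat_sim.Simulator.step` would be invoked as in a real run):

```python
import time

def benchmark_fps(step_fn, n_steps=1000):
    """Measure throughput in frames per second by timing n_steps
    calls to step_fn (any zero-argument callable)."""
    start = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    elapsed = time.perf_counter() - start
    return n_steps / elapsed
```

In a real benchmark, `step_fn` would advance the simulator by one action and return sensor observations; single-threaded FPS is then measured exactly as above, and the multi-process figure aggregates several such loops running in parallel.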


Habitat-API

Habitat-API is a modular high-level library for end-to-end development in embodied AI -- defining embodied AI tasks (e.g. navigation, instruction following, question answering), configuring embodied agents (physical form, sensors, capabilities), training these agents (via imitation or reinforcement learning, or no learning at all as in classical SLAM), and benchmarking their performance on the defined tasks using standard metrics.

GitHub repository:
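One of the standard metrics mentioned above is SPL (Success weighted by Path Length, Anderson et al. 2018), commonly used to score navigation agents. A minimal sketch of the computation (the function name and episode-tuple layout are illustrative, not Habitat-API's actual interface):

```python
def spl(episodes):
    """Success weighted by Path Length over a list of episodes.

    Each episode is (success, shortest_path_len, agent_path_len):
    a successful episode contributes shortest / max(agent, shortest),
    so taking a longer route than necessary lowers the score;
    a failed episode contributes 0.
    """
    total = 0.0
    for success, shortest, taken in episodes:
        if success:
            total += shortest / max(taken, shortest)
    return total / len(episodes)
```

An agent that succeeds along the shortest path scores 1.0 on that episode; succeeding via a path twice as long scores 0.5; failing scores 0, so SPL rewards both reaching the goal and doing so efficiently.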

Habitat Challenge

An autonomous navigation challenge (hosted on the EvalAI platform) that aims to benchmark and accelerate progress in embodied AI. Unlike classical 'internet AI' challenges based on image datasets (e.g., ImageNet LSVRC, COCO, VQA), in this challenge participants upload code, not predictions. The uploaded agents are evaluated in novel (unseen) environments to test for generalization. If you are interested in participating, please fill out your name and email address in this form and we will get back to you when the submission phase goes live. The winners of the challenge will be announced at CVPR 2019 at the Habitat: Embodied Agents Challenge and Workshop.

Challenge webpage:
Workshop webpage:
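Because participants upload code rather than predictions, a submission is essentially an agent class that the evaluation harness drives episode by episode. A minimal sketch of what such an agent looks like (the reset/act interface mirrors the shape of Habitat's agent abstraction, but this standalone random-action version is purely illustrative):

```python
import random

class RandomAgent:
    """Toy agent: the evaluator calls reset() at the start of each
    episode and act(observations) at every step, and the agent must
    return one of the allowed discrete actions."""

    def __init__(self, action_space):
        self.action_space = list(action_space)

    def reset(self):
        # A real agent would clear its episode state (e.g. recurrent
        # hidden state, internal map) here; the random agent has none.
        pass

    def act(self, observations):
        # A real agent would condition on observations (RGB, depth,
        # goal vector, ...); here we ignore them and act randomly.
        return random.choice(self.action_space)
```

Since only novel environments are used at evaluation time, an agent's score reflects generalization of its policy rather than memorization of the training scenes.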



Citing Habitat

If you use the Habitat platform in your research, please cite the following technical report:

@article{habitat19arxiv,
  title =   {Habitat: A Platform for Embodied AI Research},
  author =  {{Manolis Savva*} and {Abhishek Kadian*} and {Oleksandr Maksymets*} and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  journal = {arXiv preprint arXiv:1904.01201},
  year =    {2019}
}


Reach out to us with any questions, suggestions, and feedback.


The Habitat project would not have been possible without the support and contributions of many individuals. We are grateful to Devendra Singh Chaplot, Xinlei Chen, Georgia Gkioxari, Daniel Gordon, Leonidas Guibas, Saurabh Gupta, Rishabh Jain, Or Litany, Joel Marcey, Dmytro Mishkin, Marcus Rohrbach, Amanpreet Singh, Yuandong Tian, Yuxin Wu, Fei Xia, Deshraj Yadav, and Amir Zamir for their help.