How to Train Your [Dragon] Embodied Agent (with AI Habitat)

ECCV 2020 Tutorial, Glasgow, Scotland

Sunday, 23rd August, Afternoon
Room Boisdale 2, Scottish Event Campus (SEC)

Overview

There has been a recent shift in the computer vision community from tasks focused on internet images to active settings in which embodied agents perceive and act within 3D environments. Practical deployment of AI agents in the real world requires studying active perception and the coupling of perception with control, as realized in embodied agents.

In this tutorial, we will demonstrate how to utilize the Habitat platform to train embodied agents to perform a variety of tasks in complex and photo-realistic environments.

Tentative Outline

1:30 - 2:15 Welcome and Introduction to AI Habitat
2:15 - 3:00 Embodied AI Simulation Basics
3:00 - 3:30 Break
3:30 - 4:15 Defining Tasks & Training Agents
4:15 - 5:00 Extensions

Welcome and introduction. In this section, we will first cover the scientific motivation for embodied AI, then the design philosophy and architecture of AI Habitat, to help potential users and researchers understand how the platform approaches training virtual robots in simulated environments.

Simulation basics. In this section, we will cover the basics of simulating virtual robots. We will discuss how to initialize agents with various sensor configurations and action spaces. We will also show how to use simulation to create large-scale egocentric computer vision datasets.
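
For concreteness, below is a minimal sketch of such a setup in habitat-sim: an agent with a single egocentric RGB camera and a small discrete action space, stepped through a scene to collect frames. The scene path is a placeholder, and exact class names (e.g. CameraSensorSpec) vary across habitat-sim versions.

    # Minimal sketch (assumptions: placeholder scene path; class names such
    # as CameraSensorSpec vary across habitat-sim versions).
    import habitat_sim

    # Simulator configuration: which 3D scene asset to load.
    sim_cfg = habitat_sim.SimulatorConfiguration()
    sim_cfg.scene_id = "path/to/scene.glb"  # placeholder scene asset

    # Sensor specification: a 256x256 egocentric RGB camera.
    rgb_spec = habitat_sim.CameraSensorSpec()
    rgb_spec.uuid = "rgb"
    rgb_spec.sensor_type = habitat_sim.SensorType.COLOR
    rgb_spec.resolution = [256, 256]

    # Agent configuration: sensors plus a discrete action space.
    agent_cfg = habitat_sim.agent.AgentConfiguration()
    agent_cfg.sensor_specifications = [rgb_spec]
    agent_cfg.action_space = {
        "move_forward": habitat_sim.agent.ActionSpec(
            "move_forward", habitat_sim.agent.ActuationSpec(amount=0.25)  # meters
        ),
        "turn_left": habitat_sim.agent.ActionSpec(
            "turn_left", habitat_sim.agent.ActuationSpec(amount=10.0)  # degrees
        ),
    }

    sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

    # Step the agent and collect its egocentric observations.
    frames = []
    for _ in range(100):
        observations = sim.step("move_forward")
        frames.append(observations["rgb"])  # RGBA image as a numpy array

    sim.close()

Because each call to step returns the agent's raw sensor readings, the same loop can be scripted over many scenes and trajectories to build large egocentric datasets.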

Defining tasks & training agents. Here, we will cover how to define novel embodied AI tasks. We will also show how to leverage the Habitat platform to seamlessly switch between tasks and datasets, and to combine multiple datasets within a single experiment.
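
As a preview, the sketch below runs a random agent on the predefined PointNav task through Habitat's Env API; switching tasks or datasets amounts to pointing at a different YAML config. The config path shown is an assumption and depends on your habitat-api/habitat-lab checkout.

    # Minimal sketch (assumption: config path depends on your habitat checkout).
    import habitat

    # Load a task configuration; pointing at a different YAML (e.g. an
    # ObjectNav or EQA config) switches the task and dataset.
    config = habitat.get_config("configs/tasks/pointnav.yaml")

    env = habitat.Env(config=config)
    observations = env.reset()
    while not env.episode_over:
        # A trained policy would pick the action from observations;
        # here we simply sample from the task's action space.
        observations = env.step(env.action_space.sample())

    print(env.get_metrics())  # task measures, e.g. SPL for PointNav
    env.close()

Novel tasks, measures, and sensors are registered through habitat.registry decorators (e.g. registry.register_task), so a new task plugs into the same config-driven loop.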

Extensions. In this section, we will briefly introduce more advanced features of Habitat, including WebGL deployment for human data collection, physics simulation, and transfer of policies trained in simulation to reality.
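
As a small taste of the physics extension, the following sketch enables rigid-body simulation in habitat-sim and advances it in fixed time steps. This assumes a habitat-sim build with Bullet physics support and uses a placeholder scene path; the object-manipulation API differs across versions, so only the stepping loop is shown.

    # Minimal sketch (assumptions: habitat-sim built with Bullet physics;
    # placeholder scene path; sensors omitted for brevity).
    import habitat_sim

    sim_cfg = habitat_sim.SimulatorConfiguration()
    sim_cfg.scene_id = "path/to/scene.glb"  # placeholder scene asset
    sim_cfg.enable_physics = True           # needs a Bullet-enabled build

    agent_cfg = habitat_sim.agent.AgentConfiguration()
    sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))

    # Advance rigid-body simulation at 60 Hz for one second of simulated time.
    for _ in range(60):
        sim.step_physics(1.0 / 60.0)

    sim.close()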