habitat

Submodules

Module contents

class habitat.Agent[source]

Bases: object

Abstract class for defining agents which act inside Env. This abstract class standardizes agents to allow seamless benchmarking. To implement an agent the user has to implement two methods:

  • reset
  • act
act(observations: habitat.core.simulator.Observations) → int[source]
Parameters:observations – observations coming in from environment to be used by agent to decide action.
Returns:action to be taken inside the environment
reset() → None[source]

Called before starting a new episode in environment.
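The two methods above form the full agent contract. A minimal sketch of a scripted agent (ScriptedAgent is a hypothetical name; it is shown standalone here so the snippet runs without a simulator, but in real use it would subclass habitat.Agent):

```python
class ScriptedAgent:
    """Plays back a fixed list of actions; follows the habitat.Agent contract."""

    def __init__(self, actions):
        self._actions = actions
        self._step = 0

    def reset(self):
        # Called before each new episode starts.
        self._step = 0

    def act(self, observations):
        # Decide the next action from the incoming observations.
        # This toy agent ignores them and follows its script.
        action = self._actions[self._step % len(self._actions)]
        self._step += 1
        return action
```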

class habitat.Benchmark(config_file: Optional[str] = None)[source]

Bases: object

Benchmark for evaluating agents in environments.

Parameters:config_file – file to be used for creating the environment.
evaluate(agent: habitat.core.agent.Agent, num_episodes: Optional[int] = None) → Dict[str, float][source]
Parameters:
  • agent – agent to be evaluated in environment.
  • num_episodes – count of number of episodes for which the evaluation should be run.
Returns:

dict containing metrics tracked by environment.
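evaluate returns one dict of metrics averaged over episodes. The averaging step alone can be sketched as follows (a simplified standalone illustration, not the actual Benchmark implementation):

```python
def average_metrics(per_episode_metrics):
    """Average a list of per-episode metric dicts into one dict (sketch)."""
    num_episodes = len(per_episode_metrics)
    totals = {}
    for metrics in per_episode_metrics:
        for key, value in metrics.items():
            totals[key] = totals.get(key, 0.0) + value
    return {key: value / num_episodes for key, value in totals.items()}
```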

habitat.Config

alias of yacs.config.CfgNode

class habitat.Dataset[source]

Bases: typing.Generic

Base class for dataset specification.

episodes

list of episodes containing instance information

from_json(json_str: str) → None[source]
get_episodes(indexes: List[int]) → List[T][source]
Parameters:indexes – episode indices in dataset
Returns:list of episodes corresponding to indexes
get_scene_episodes(scene_id: str) → List[T][source]
Parameters:scene_id – id of scene in scene dataset
Returns:list of episodes for the scene_id
scene_ids

unique scene ids present in the dataset

to_json() → str[source]
class habitat.EmbodiedTask(config: yacs.config.CfgNode, sim: habitat.core.simulator.Simulator, dataset: Optional[habitat.core.dataset.Dataset] = None)[source]

Bases: object

Base class for embodied tasks. When subclassing, the user has to define the attributes measurements and sensor_suite.

Parameters:
  • config – config for the task.
  • sim – reference to the simulator for calculating task observations.
  • dataset – reference to dataset for task instance level information.
measurements

set of task measures.

sensor_suite

suite of task sensors.

overwrite_sim_config(sim_config: yacs.config.CfgNode, episode: Type[habitat.core.dataset.Episode]) → yacs.config.CfgNode[source]
Parameters:
  • sim_config – config for simulator.
  • episode – current episode.
Returns:

updated config merging information from sim_config and episode.

class habitat.Env(config: yacs.config.CfgNode, dataset: Optional[habitat.core.dataset.Dataset] = None)[source]

Bases: object

Fundamental environment class for habitat. All the information needed for working on embodied tasks with a simulator is abstracted inside Env. Acts as a base for other derived environment classes. Env consists of three major components: dataset (episodes), simulator and task, and connects all three together.

Parameters:
  • config – config for the environment. Should contain id for simulator and task_name which are passed into make_sim and make_task.
  • dataset – reference to dataset for task instance level information. Can be defined as None in which case _episodes should be populated from outside.
observation_space

SpaceDict object corresponding to sensor in sim and task.

action_space

gym.space object corresponding to valid actions.

close() → None[source]
current_episode
episode_over
episode_start_time
episodes
get_metrics() → habitat.core.embodied_task.Metrics[source]
reconfigure(config: yacs.config.CfgNode) → None[source]
render(mode='rgb') → numpy.ndarray[source]
reset() → habitat.core.simulator.Observations[source]

Resets the environments and returns the initial observations.

Returns:Initial observations from the environment
seed(seed: int) → None[source]
sim
step(action: int) → habitat.core.simulator.Observations[source]

Perform an action in the environment and return observations

Parameters:action – action (belonging to action_space) to be performed inside the environment.
Returns:observations after taking action in environment.
task
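
The interface above supports the standard interaction loop: reset, then step until episode_over, then read get_metrics. A sketch of that loop against toy stand-ins (ToyEnv and ForwardAgent are illustrative classes, not habitat API; in real use env would be a habitat.Env):

```python
class ToyEnv:
    """Minimal stand-in following the Env interface subset used below."""

    def __init__(self, episode_length=3):
        self.episode_over = False
        self._steps = 0
        self._episode_length = episode_length

    def reset(self):
        self.episode_over = False
        self._steps = 0
        return {}  # observations placeholder

    def step(self, action):
        self._steps += 1
        self.episode_over = self._steps >= self._episode_length
        return {}

    def get_metrics(self):
        return {"steps": self._steps}


class ForwardAgent:
    """Trivial agent following the Agent interface: always takes action 0."""

    def reset(self):
        pass

    def act(self, observations):
        return 0


def run_episode(env, agent):
    """Drive one episode through the Env-style interface."""
    agent.reset()
    observations = env.reset()
    while not env.episode_over:
        observations = env.step(agent.act(observations))
    return env.get_metrics()
```
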
habitat.get_config(config_file: Optional[str] = None, config_dir: str = 'configs/') → yacs.config.CfgNode[source]
habitat.make_dataset(id_dataset, **kwargs)[source]
class habitat.Measure(*args, **kwargs)[source]

Bases: object

Represents a measure that provides measurement on top of environment and task. This can be used for tracking statistics when running experiments. The user of this class needs to implement the reset_metric and update_metric methods, and is also required to set the attributes below:

uuid

universally unique id.

_metric

metric for the Measure; this has to be updated with each step call on the environment.

get_metric()[source]
Returns:the current metric for Measure.
reset_metric(*args, **kwargs) → None[source]

Reset _metric; this method is called from Env on each reset.

update_metric(*args, **kwargs) → None[source]

Update _metric; this method is called from Env on each step.
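A minimal sketch of the Measure contract, an episode step counter (StepCount is a hypothetical name, shown standalone so it runs anywhere; in real use it would subclass habitat.Measure):

```python
class StepCount:
    """Counts env steps in the current episode, following the Measure contract."""

    def __init__(self):
        self.uuid = "step_count"  # universally unique id for this measure
        self._metric = None

    def reset_metric(self, *args, **kwargs):
        # Called by Env on each reset.
        self._metric = 0

    def update_metric(self, *args, **kwargs):
        # Called by Env on each step.
        self._metric += 1

    def get_metric(self):
        return self._metric
```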

class habitat.Measurements(measures: List[habitat.core.embodied_task.Measure])[source]

Bases: object

Represents a set of Measures, with each Measure being identified through a unique id.

Parameters:measures – list containing Measures, uuid of each Measure must be unique.
get_metrics() → habitat.core.embodied_task.Metrics[source]
Returns:collect measurement from all Measures and return it packaged inside Metrics.
reset_measures(*args, **kwargs) → None[source]
update_measures(*args, **kwargs) → None[source]
class habitat.RLEnv(config: yacs.config.CfgNode, dataset: Optional[habitat.core.dataset.Dataset] = None)[source]

Bases: gym.core.Env

Reinforcement Learning (RL) environment class which subclasses gym.Env. This is a wrapper over habitat.Env for RL users. To create custom RL environments users should subclass RLEnv and define the following methods:

  • get_reward_range
  • get_reward
  • get_done
  • get_info

As this is a subclass of gym.Env, it also implements reset and step.
Parameters:
  • config – config to construct habitat.Env.
  • dataset – dataset to construct habitat.Env.
close() → None[source]

Override _close in your subclass to perform any necessary cleanup.

Environments will automatically close() themselves when garbage collected or when the program exits.

episodes
get_done(observations: habitat.core.simulator.Observations) → bool[source]

Returns boolean indicating whether episode is done after performing the last action. This method is called inside the step method.

Parameters:observations – observations from simulator and task
Returns:done boolean after performing the last action.
get_info(observations) → Dict[Any, Any][source]
Parameters:observations – observations from simulator and task
Returns:info after performing the last action
get_reward(observations: habitat.core.simulator.Observations) → Any[source]

Returns reward after action has been performed. This method is called inside the step method.

Parameters:observations – observations from simulator and task
Returns:reward after performing the last action.
get_reward_range()[source]

Get min, max range of reward

Returns:[min, max] range of reward
habitat_env
render(mode: str = 'rgb') → numpy.ndarray[source]

Renders the environment.

The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:

  • human: render to the current display or terminal and return nothing. Usually for human consumption.
  • rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.
  • ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).

Note

Make sure that your class's metadata 'render.modes' key includes the list of supported modes. It's recommended to call super() in implementations to use the functionality of this method.
Parameters:
  • mode (str) – the mode to render with
  • close (bool) – close all open renderings

Example:

class MyEnv(Env):
    metadata = {'render.modes': ['human', 'rgb_array']}

    def render(self, mode='human'):
        if mode == 'rgb_array':
            return np.array(...)  # return RGB frame suitable for video
        elif mode == 'human':
            ...  # pop up a window and render
        else:
            super(MyEnv, self).render(mode=mode)  # just raise an exception
reset() → habitat.core.simulator.Observations[source]

Resets the state of the environment and returns an initial observation.

Returns:observation (object) – the initial observation of the space.
seed(seed: int) → None[source]

Sets the seed for this env’s random number generator(s).

Note

Some environments use multiple pseudorandom number generators. We want to capture all such seeds used in order to ensure that there aren’t accidental correlations between multiple generators.

Returns:the list of seeds used in this env's random number generators. The first value in the list should be the "main" seed, or the value which a reproducer should pass to 'seed'. Often, the main seed equals the provided 'seed', but this won't be true if seed=None, for example.
Return type:list<bigint>
step(action: int) → Tuple[habitat.core.simulator.Observations, Any, bool, dict][source]

Perform an action in the environment and return (observations, reward, done, info)

Parameters:action – action (belonging to action_space) to be performed inside the environment.
Returns:(observations, reward, done, info)
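
The four user-defined methods compose inside step to produce the gym-style 4-tuple. A toy standalone sketch of that composition (ToyNavRLEnv and its reward scheme are illustrative only; a real subclass would wrap a habitat.Env and derive rewards from task metrics):

```python
class ToyNavRLEnv:
    """Toy stand-in showing the four methods a habitat.RLEnv subclass defines.

    Wraps a trivial inner 'episode' that ends after max_steps.
    """

    def __init__(self, max_steps=3):
        self._max_steps = max_steps
        self._steps = 0

    @property
    def episode_over(self):
        return self._steps >= self._max_steps

    def reset(self):
        self._steps = 0
        return {}

    def get_reward_range(self):
        return [-0.01, 1.0]

    def get_reward(self, observations):
        # Toy reward scheme: +1 when the episode ends, small step penalty otherwise.
        return 1.0 if self.episode_over else -0.01

    def get_done(self, observations):
        return self.episode_over

    def get_info(self, observations):
        return {"steps": self._steps}

    def step(self, action):
        # RLEnv.step composes the gym-style 4-tuple from the methods above.
        self._steps += 1
        observations = {}
        return (observations, self.get_reward(observations),
                self.get_done(observations), self.get_info(observations))
```
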
class habitat.Sensor(*args, **kwargs)[source]

Bases: object

Represents a sensor that provides data from the environment to the agent. The user of this class needs to implement the get_observation method and is also required to set the attributes below:

uuid

universally unique id.

sensor_type

type of Sensor, use SensorTypes enum if your sensor comes under one of its categories.

observation_space

gym.Space object corresponding to observation of sensor

get_observation(*args, **kwargs) → Any[source]
Returns:Current observation for Sensor.
class habitat.SensorSuite(sensors: List[habitat.core.simulator.Sensor])[source]

Bases: object

Represents a set of sensors, with each sensor being identified through a unique id.

Parameters:sensors – list containing sensors for the environment, uuid of each sensor must be unique.
get(uuid: str) → habitat.core.simulator.Sensor[source]
get_observations(*args, **kwargs) → habitat.core.simulator.Observations[source]
Returns:collect data from all sensors and return it packaged inside Observation.
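
The uuid-keyed collection pattern used by SensorSuite (and, analogously, Measurements) can be sketched as follows, with toy sensors following the Sensor contract (ToySensorSuite and ConstSensor are illustrative standalone classes, not habitat API):

```python
class ConstSensor:
    """Toy sensor following the Sensor contract: uuid + get_observation."""

    def __init__(self, uuid, value):
        self.uuid = uuid
        self._value = value

    def get_observation(self, *args, **kwargs):
        return self._value


class ToySensorSuite:
    """Maps sensor uuid -> sensor; packages all readings into one dict."""

    def __init__(self, sensors):
        self.sensors = {}
        for sensor in sensors:
            assert sensor.uuid not in self.sensors, "sensor uuids must be unique"
            self.sensors[sensor.uuid] = sensor

    def get(self, uuid):
        return self.sensors[uuid]

    def get_observations(self, *args, **kwargs):
        # Collect data from every sensor, keyed by its uuid.
        return {uuid: sensor.get_observation(*args, **kwargs)
                for uuid, sensor in self.sensors.items()}
```
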
class habitat.SensorTypes[source]

Bases: enum.Enum

Enumeration of types of sensors.

COLOR = 1
DEPTH = 2
FORCE = 7
MEASUREMENT = 10
NORMAL = 3
NULL = 0
PATH = 5
POSITION = 6
SEMANTIC = 4
TENSOR = 8
TEXT = 9
class habitat.Simulator[source]

Bases: object

Basic simulator class for habitat. New simulators to be added to habitat must derive from this class and implement the below methods:

  • reset
  • step
  • seed
  • reconfigure
  • geodesic_distance
  • sample_navigable_point
  • action_space_shortest_path
  • close
action_space
action_space_shortest_path(source: habitat.core.simulator.AgentState, targets: List[habitat.core.simulator.AgentState], agent_id: int = 0) → List[habitat.core.simulator.ShortestPathPoint][source]

Calculates the shortest path between source and target agent states.

Parameters:
  • source – source agent state for shortest path calculation.
  • targets – target agent state(s) for shortest path calculation.
  • agent_id – id for agent (relevant for multi-agent setup).
Returns:

List of agent states and actions along the shortest path from source to the nearest target (both included).

close() → None[source]
geodesic_distance(position_a: List[float], position_b: List[float]) → float[source]

Calculates geodesic distance between two points.

Parameters:
  • position_a – coordinates of first point
  • position_b – coordinates of second point
Returns:

the geodesic distance in the cartesian space between points position_a and position_b, if no path is found between the points then infinity is returned.

get_agent_state(agent_id: int = 0)[source]
Parameters:agent_id – id of agent
Returns:state of agent corresponding to agent_id
is_episode_active
reconfigure(config: yacs.config.CfgNode) → None[source]
render(mode: str = 'rgb') → Any[source]
reset() → habitat.core.simulator.Observations[source]

Resets the simulator and returns the initial observations.

Returns:Initial observations from simulator.
sample_navigable_point() → List[float][source]

Samples a navigable point from the simulator. A point is defined as navigable if the agent can be initialized at that point.

Returns:Navigable point.
seed(seed: int) → None[source]
sensor_suite
step(action: int) → habitat.core.simulator.Observations[source]

Perform an action in the simulator and return observations.

Parameters:action – action to be performed inside the simulator.
Returns:observations after taking action in simulator.
class habitat.ThreadedVectorEnv[source]

Bases: habitat.core.vector_env.VectorEnv

class habitat.VectorEnv[source]

Bases: object

Vectorized environment which creates multiple processes where each process runs its own environment. All the environments are synchronized on step and reset methods.

Parameters:
  • make_env_fn – Function which creates a single environment. An environment can be of type Env or RLEnv
  • env_fn_args – tuple of tuple of args to pass to the make_env_fn.
  • auto_reset_done – automatically reset the environment when done. This functionality is provided for seamless training of vectorized environments.
  • multiprocessing_start_method – The multiprocessing method used to spawn worker processes. Valid methods are {'spawn', 'forkserver', 'fork'}; 'forkserver' is the recommended method as it works well with CUDA. If 'fork' is used, the subprocess must be started before any other GPU usage.
async_step(actions: List[int]) → None[source]

Asynchronously step in the environments.

Parameters:actions – actions to be performed in the vectorized envs.
close() → None[source]
num_envs

Number of individual environments.

render(mode: str = 'human', *args, **kwargs) → Optional[numpy.ndarray][source]

Render observations from all environments in a tiled image.

reset()[source]

Reset all the vectorized environments

Returns:List of outputs from the reset method of envs.
reset_at(index_env: int)[source]

Reset in the index_env environment in the vector.

Parameters:index_env – index of the environment to be reset
Returns:List containing the output of reset method of indexed env.
step(actions: List[int])[source]

Perform actions in the vectorized environments.

Parameters:actions – list of size _num_envs containing action to be taken in each environment.
Returns:List of outputs from the step method of envs.
step_at(index_env: int, action: int)[source]

Step in the index_env environment in the vector.

Parameters:
  • index_env – index of the environment to be stepped into
  • action – action to be taken
Returns:

List containing the output of step method of indexed env.

wait_step() → List[habitat.core.simulator.Observations][source]

Wait until all asynchronously stepped environments have finished stepping.
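
env_fn_args is a tuple of per-worker argument tuples: worker i is given make_env_fn(*env_fn_args[i]). That fan-out shape can be illustrated without any processes (a toy factory and fan_out helper, purely illustrative; the real VectorEnv spawns one worker process per entry):

```python
def fan_out(make_env_fn, env_fn_args):
    """Call make_env_fn once per argument tuple, as VectorEnv does per worker."""
    return [make_env_fn(*args) for args in env_fn_args]


def make_env(config_name, seed):
    # Toy "environment" factory: records the config name and seed it was given.
    return {"config": config_name, "seed": seed}


envs = fan_out(make_env, (("task_a", 0), ("task_a", 1), ("task_b", 2)))
```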