habitat.core.env

class habitat.core.env.Env(config: yacs.config.CfgNode, dataset: Optional[habitat.core.dataset.Dataset] = None)[source]

Fundamental environment class for habitat. All the information needed for working on embodied tasks with a simulator is abstracted inside Env. Acts as a base for other derived environment classes. Env consists of three major components: the dataset (episodes), the simulator, and the task, and connects all three together.

Parameters
  • config – config for the environment. Should contain an id for the simulator and a task_name, which are passed into make_sim and make_task respectively.

  • dataset – reference to the dataset for task-instance-level information. Can be None, in which case _episodes should be populated from outside.

observation_space

SpaceDict object corresponding to the sensors in the simulator and task.

action_space

gym.Space object corresponding to the valid actions.
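
For orientation, a minimal construction sketch (habitat.get_config and the config path below are assumptions about a standard habitat-api checkout; adapt them to your install):

import habitat

config = habitat.get_config("configs/tasks/pointnav.yaml")  # assumed example config
env = habitat.Env(config=config)
print(env.observation_space.spaces.keys())  # sensor names defined by the config
print(env.action_space)                     # valid actions for the configured task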

reset() → habitat.core.simulator.Observations[source]

Resets the environment and returns the initial observations.

Returns

initial observations from the environment.

step(action: int) → habitat.core.simulator.Observations[source]

Perform an action in the environment and return observations.

Parameters

action – action (belonging to action_space) to be performed inside the environment.

Returns

observations after taking action in environment.
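
Putting reset and step together, a basic episode loop looks roughly like this (a sketch continuing the construction example above; episode_over is the Env flag for episode termination, and sample() draws a random valid action from action_space):

observations = env.reset()              # begin a new episode
while not env.episode_over:             # set to True when the task ends the episode
    action = env.action_space.sample()  # random valid action, for illustration
    observations = env.step(action)
env.close()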

class habitat.core.env.RLEnv(config: yacs.config.CfgNode, dataset: Optional[habitat.core.dataset.Dataset] = None)[source]

Reinforcement Learning (RL) environment class which subclasses gym.Env. This is a wrapper over habitat.Env for RL users. To create custom RL environments users should subclass RLEnv and define the following methods: get_reward_range, get_reward, get_done, get_info.

As this is a subclass of gym.Env, it implements reset and step.
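
A minimal subclass sketch implementing the four required methods (the reward bounds and values are illustrative only; habitat_env and get_metrics are assumed accessors on the wrapper and the wrapped Env, so adapt them if your version differs):

import habitat

class MyRLEnv(habitat.RLEnv):
    def get_reward_range(self):
        return [-1.0, 1.0]  # illustrative bounds; match whatever get_reward can emit

    def get_reward(self, observations):
        return -0.01  # illustrative step penalty; real rewards usually use task measures

    def get_done(self, observations):
        return self.habitat_env.episode_over  # defer to the wrapped habitat.Env

    def get_info(self, observations):
        return self.habitat_env.get_metrics()  # task metrics as the info dict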

Parameters
  • config – config to construct habitat.Env.

  • dataset – dataset to construct habitat.Env.

close() → None[source]

Override _close in your subclass to perform any necessary cleanup.

Environments will automatically close() themselves when garbage collected or when the program exits.

get_done(observations: habitat.core.simulator.Observations) → bool[source]

Returns a boolean indicating whether the episode is done after performing the last action. This method is called inside the step method.

Parameters

observations – observations from simulator and task.

Returns

done boolean after performing the last action.

get_info(observations) → Dict[Any, Any][source]

Returns a dict of auxiliary information after performing the last action. This method is called inside the step method.

Parameters

observations – observations from simulator and task.

Returns

info after performing the last action.

get_reward(observations: habitat.core.simulator.Observations) → Any[source]

Returns reward after action has been performed. This method is called inside the step method.

Parameters

observations – observations from simulator and task.

Returns

reward after performing the last action.

get_reward_range()[source]

Get the [min, max] range of the reward.

Returns

[min, max] range of reward.

render(mode: str = 'rgb') → numpy.ndarray[source]

Renders the environment.

The set of supported modes varies per environment. (And some environments do not support rendering at all.) By convention, if mode is:

  • human: render to the current display or terminal and return nothing. Usually for human consumption.

  • rgb_array: Return a numpy.ndarray with shape (x, y, 3), representing RGB values for an x-by-y pixel image, suitable for turning into a video.

  • ansi: Return a string (str) or StringIO.StringIO containing a terminal-style text representation. The text can include newlines and ANSI escape sequences (e.g. for colors).

Note

Make sure that your class’s metadata ‘render.modes’ key includes the list of supported modes. It’s recommended to call super() in implementations to use the functionality of this method.

Parameters
  • mode (str) – the mode to render with

Example:

class MyEnv(Env):
    metadata = {'render.modes': ['human', 'rgb_array']}

    def render(self, mode='human'):
        if mode == 'rgb_array':
            return np.array(...)  # return RGB frame suitable for video
        elif mode == 'human':
            ...  # pop up a window and render
        else:
            super(MyEnv, self).render(mode=mode)  # just raise an exception

reset() → habitat.core.simulator.Observations[source]

Resets the state of the environment and returns an initial observation.

Returns

observation (object) – the initial observation of the space.

seed(seed: int) → None[source]

Sets the seed for this env’s random number generator(s).

Note

Some environments use multiple pseudorandom number generators. We want to capture all such seeds used in order to ensure that there aren’t accidental correlations between multiple generators.

Returns

The list of seeds used in this env’s random number generators. The first value in the list should be the “main” seed, or the value which a reproducer should pass to ‘seed’. Often, the main seed equals the provided ‘seed’, but this won’t be true if seed=None, for example.

Return type

List[int]

step(action: int) → Tuple[habitat.core.simulator.Observations, Any, bool, dict][source]

Perform an action in the environment and return (observations, reward, done, info).

Parameters

action – action (belonging to action_space) to be performed inside the environment.

Returns

(observations, reward, done, info).
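
The resulting interaction follows the usual gym loop (a sketch reusing MyRLEnv from the example above; action_space is assumed to be forwarded from the wrapped habitat.Env):

env = MyRLEnv(config=config)
observations = env.reset()
done = False
while not done:
    action = env.action_space.sample()
    observations, reward, done, info = env.step(action)
env.close()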