habitat.core.env.RLEnv class

Reinforcement Learning (RL) environment class which subclasses gym.Env.

This is a wrapper over Env for RL users. To create a custom RL environment, users should subclass RLEnv and define the following methods: get_reward_range(), get_reward(), get_done(), and get_info().

As this is a subclass of gym.Env, it implements reset() and step().
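The subclassing pattern can be sketched as follows. This is only an illustration: a minimal stub stands in for habitat's Env and Observations so the sketch is self-contained, and the reward shaping, done condition, and info contents are hypothetical. The point is the contract — define the four hook methods, and step() composes them into the (observations, reward, done, info) tuple.

```python
from typing import Any, Dict, Tuple

# Hypothetical stand-in for habitat.core.env.Env / Observations, used only
# so this sketch runs without habitat installed. In real code you would
# subclass habitat.core.env.RLEnv directly and inherit reset()/step().
Observations = Dict[str, Any]


class _StubEnv:
    """Toy inner environment: the agent starts 5.0 units from a goal and
    each step closes the distance by 1.0."""

    def __init__(self) -> None:
        self._steps = 0

    def reset(self) -> Observations:
        self._steps = 0
        return {"distance_to_goal": 5.0}

    def step(self, action: int) -> Observations:
        self._steps += 1
        return {"distance_to_goal": max(0.0, 5.0 - self._steps)}


class MyRLEnv:
    """Sketch of an RLEnv subclass: the four hook methods are defined, and
    step() mirrors RLEnv.step() by calling them after forwarding the action."""

    def __init__(self) -> None:
        self._env = _StubEnv()

    def get_reward_range(self) -> Tuple[float, float]:
        return (-1.0, 1.0)  # hypothetical reward bounds

    def get_reward(self, observations: Observations) -> float:
        # Hypothetical shaping: +1 on reaching the goal, small step penalty.
        return 1.0 if observations["distance_to_goal"] == 0.0 else -0.01

    def get_done(self, observations: Observations) -> bool:
        return observations["distance_to_goal"] == 0.0

    def get_info(self, observations: Observations) -> Dict[Any, Any]:
        return {"distance_to_goal": observations["distance_to_goal"]}

    def reset(self) -> Observations:
        return self._env.reset()

    def step(self, action: int) -> Tuple[Observations, float, bool, dict]:
        # Forward the action, then compose the hooks into the step tuple.
        observations = self._env.step(action)
        return (
            observations,
            self.get_reward(observations),
            self.get_done(observations),
            self.get_info(observations),
        )
```

In real usage the subclass would pass config (and optionally dataset) to RLEnv's constructor and would not define reset() or step() at all — those are inherited, and step() calls the hooks exactly as shown.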

Methods

def close(self) -> None
def current_episode(self, all_info: bool = False) -> dataset.BaseEpisode
Returns the current episode of the environment.
def get_done(self, observations: simulator.Observations) -> bool
Returns boolean indicating whether episode is done after performing the last action.
def get_info(self, observations) -> typing.Dict[typing.Any, typing.Any]
Returns info after performing the last action.
def get_reward(self, observations: simulator.Observations) -> typing.Any
Returns reward after action has been performed.
def get_reward_range(self)
Get min, max range of reward.
def render(self, mode: str = 'rgb') -> numpy.ndarray
def reset(self, *, return_info: bool = False, **kwargs) -> typing.Union[simulator.Observations, typing.Tuple[simulator.Observations, typing.Dict]]
def seed(self, seed: typing.Optional[int] = None) -> None
def step(self, *args, **kwargs) -> typing.Tuple[simulator.Observations, typing.Any, bool, dict]
Perform an action in the environment.

Special methods

def __class_getitem__(...)
Parameterizes a generic class.
def __enter__(self)
def __exit__(self, exc_type, exc_val, exc_tb)
def __format__(self, format_spec, /)
Default object formatter.
def __init__(self, config: DictConfig, dataset: typing.Optional[dataset.Dataset] = None)
Constructor
def __init_subclass__(...)
Function to initialize subclasses.
def __str__(self)

Properties

config: DictConfig get
episodes: typing.List[dataset.Episode] get set
habitat_env: Env get
np_random: gym.utils.seeding.RandomNumberGenerator get set
Initializes the np_random field if not done already.
unwrapped: gym.core.Env get
Completely unwrap this env.

Data

metadata = {'render_modes': []}
reward_range = (-inf, inf)
spec = None

Method documentation

def habitat.core.env.RLEnv.current_episode(self, all_info: bool = False) -> dataset.BaseEpisode

Returns the current episode of the environment.

Parameters
all_info If True, all the information in the episode is provided; otherwise, only episode_id and scene_id are included.
Returns The BaseEpisode object for the current episode.

def habitat.core.env.RLEnv.get_done(self, observations: simulator.Observations) -> bool

Returns boolean indicating whether episode is done after performing the last action.

Parameters
observations observations from simulator and task.
Returns done boolean after performing the last action.

This method is called inside the step() method.

def habitat.core.env.RLEnv.get_info(self, observations) -> typing.Dict[typing.Any, typing.Any]

Parameters
observations observations from simulator and task.
Returns info after performing the last action.

def habitat.core.env.RLEnv.get_reward(self, observations: simulator.Observations) -> typing.Any

Returns reward after action has been performed.

Parameters
observations observations from simulator and task.
Returns reward after performing the last action.

This method is called inside the step() method.

def habitat.core.env.RLEnv.get_reward_range(self)

Get min, max range of reward.

Returns [min, max] range of reward.

def habitat.core.env.RLEnv.step(self, *args, **kwargs) -> typing.Tuple[simulator.Observations, typing.Any, bool, dict]

Perform an action in the environment.

Returns (observations, reward, done, info)

def habitat.core.env.RLEnv.__class_getitem__(...)

Parameterizes a generic class.

At least, parameterizing a generic class is the main thing this method does. For example, for some generic class Foo, this is called when we do Foo[int] - there, with cls=Foo and params=int.

However, note that this method is also called when defining generic classes in the first place with class Foo[T]: ….

def habitat.core.env.RLEnv.__format__(self, format_spec, /)

Default object formatter.

Return str(self) if format_spec is empty. Raise TypeError otherwise.

def habitat.core.env.RLEnv.__init__(self, config: DictConfig, dataset: typing.Optional[dataset.Dataset] = None)

Constructor

Parameters
config config to construct Env
dataset dataset to construct Env.

Property documentation

habitat.core.env.RLEnv.unwrapped: gym.core.Env get

Completely unwrap this env.

Returns The base non-wrapped gym.Env instance.