habitat.core.environments.RLTaskEnv class

Methods

def close(self) -> None
def current_episode(self, all_info: bool = False) -> dataset.BaseEpisode
Returns the current episode of the environment.
def get_done(self, observations)
def get_info(self, observations)
def get_reward(self, observations)
def get_reward_range(self)
def render(self, mode: str = 'rgb') -> numpy.ndarray
def reset(self, *args, return_info: bool = False, **kwargs) -> typing.Union[numpy.ndarray, typing.Dict[str, numpy.ndarray], typing.Tuple[typing.Union[numpy.ndarray, typing.Dict[str, numpy.ndarray]], typing.Dict]]
def seed(self, seed: typing.Optional[int] = None) -> None
def step(self, *args, **kwargs) -> typing.Tuple[typing.Union[numpy.ndarray, typing.Dict[str, numpy.ndarray]], float, bool, dict]
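The `reset()`/`step()` pair follows the classic Gym loop: `reset()` returns the first observation (optionally paired with an info dict when `return_info=True`), and `step()` returns an `(observation, reward, done, info)` tuple. The sketch below illustrates that contract with a minimal stub standing in for `RLTaskEnv`, so it runs without a simulator; the stub's observation shapes and info keys are illustrative, not Habitat's.

```python
# Sketch of the reset()/step() contract. StubEnv mimics RLTaskEnv's
# signatures; the real class returns observations from the Habitat simulator.
from typing import Dict, Tuple

import numpy as np


class StubEnv:
    """Minimal stand-in mimicking RLTaskEnv's reset/step signatures."""

    def __init__(self) -> None:
        self._steps = 0

    def reset(self, *args, return_info: bool = False, **kwargs):
        self._steps = 0
        obs = {"rgb": np.zeros((4, 4, 3), dtype=np.uint8)}
        # Mirrors the documented Union return: obs alone, or (obs, info).
        return (obs, {}) if return_info else obs

    def step(self, action, **kwargs) -> Tuple[Dict[str, np.ndarray], float, bool, dict]:
        self._steps += 1
        obs = {"rgb": np.zeros((4, 4, 3), dtype=np.uint8)}
        done = self._steps >= 3  # episode ends after 3 steps in this stub
        return obs, 0.0, done, {"steps": self._steps}


env = StubEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(action=0)
    total_reward += reward
print(info["steps"])  # 3
```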

Special methods

def __enter__(self)
def __exit__(self, exc_type, exc_val, exc_tb)
def __init__(self, config: DictConfig, dataset: typing.Optional[dataset.Dataset] = None)
def __str__(self)
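`__enter__` and `__exit__` make the environment usable as a context manager, so `close()` is called even if the rollout raises. A sketch of that pattern with a stub (the real `RLTaskEnv` releases its simulator resources in `close()`):

```python
# Context-manager protocol as RLTaskEnv exposes it: __exit__ guarantees
# close() runs when the with-block ends, normally or via an exception.
class StubEnv:
    def __init__(self) -> None:
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def close(self) -> None:
        self.closed = True


with StubEnv() as env:
    pass  # run episodes here

print(env.closed)  # True
```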

Properties

config: DictConfig get
episodes: typing.List[dataset.Episode] get set
habitat_env: env.Env get
np_random: gym.utils.seeding.RandomNumberGenerator get set
Lazily initializes the np_random field on first access if it has not been seeded already.
unwrapped: gym.core.Env get
Completely unwrap this env.
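The `np_random` property's lazy-initialization behavior can be sketched as follows. This is an illustrative pattern, not Habitat's code; the real property uses Gym's seeding utilities rather than `numpy.random.default_rng`.

```python
# Lazy-init pattern behind an np_random-style property: first access creates
# and caches a generator; later accesses return the same cached object.
import numpy as np


class EnvWithRNG:
    def __init__(self) -> None:
        self._np_random = None

    @property
    def np_random(self):
        if self._np_random is None:
            self._np_random = np.random.default_rng(0)
        return self._np_random


e = EnvWithRNG()
first = e.np_random
print(e.np_random is first)  # True: cached after the first access
```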

Data

metadata = {'render_modes': []}
reward_range = (-inf, inf)
spec = None

Method documentation

def habitat.core.environments.RLTaskEnv.current_episode(self, all_info: bool = False) -> dataset.BaseEpisode

Returns the current episode of the environment.

Parameters
all_info If True, the full episode information is returned. Otherwise, only episode_id and scene_id are included.
Returns The BaseEpisode object for the current episode.
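The effect of the `all_info` flag can be sketched with a stub. The `FullEpisode` class, its `start_position` field, and the episode values below are hypothetical and only illustrate the documented contract: without `all_info`, only `episode_id` and `scene_id` survive.

```python
# Sketch of current_episode(all_info=...). BaseEpisode carries only the two
# identifying fields; the full episode (here a hypothetical FullEpisode)
# carries everything else.
from dataclasses import dataclass


@dataclass
class BaseEpisode:
    episode_id: str
    scene_id: str


@dataclass
class FullEpisode(BaseEpisode):
    start_position: tuple = (0.0, 0.0, 0.0)  # illustrative extra field


class StubEnv:
    def __init__(self) -> None:
        self._episode = FullEpisode(episode_id="0", scene_id="scene_0")

    def current_episode(self, all_info: bool = False) -> BaseEpisode:
        if all_info:
            return self._episode
        # Strip down to the identifying fields only.
        return BaseEpisode(self._episode.episode_id, self._episode.scene_id)


env = StubEnv()
minimal = env.current_episode()
full = env.current_episode(all_info=True)
print(hasattr(minimal, "start_position"), hasattr(full, "start_position"))
```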

Property documentation

habitat.core.environments.RLTaskEnv.unwrapped: gym.core.Env get

Completely unwrap this env.

Returns:
gym.Env: The base non-wrapped gym.Env instance.
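Unwrapping follows the standard Gym pattern: each wrapper delegates `unwrapped` to the env it wraps, so the property recurses through any wrapper chain down to the base environment. A self-contained sketch of that pattern (assumed; not Habitat-specific code):

```python
# unwrapped walks through any stack of wrappers back to the base env.
class BaseEnv:
    @property
    def unwrapped(self):
        return self  # base case: the env itself


class Wrapper:
    def __init__(self, env) -> None:
        self.env = env

    @property
    def unwrapped(self):
        return self.env.unwrapped  # delegate one level down


base = BaseEnv()
wrapped = Wrapper(Wrapper(base))
print(wrapped.unwrapped is base)  # True
```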