habitat_sim.utils.viz_utils module

Functions

def border_frames_from_overlay(overlay_settings, observation_to_image = <function observation_to_image>)
def depth_to_rgb(depth_image: numpy.ndarray, clip_max: float = 10.0) -> numpy.ndarray
Normalize a depth image into [0, 1] and convert it to grayscale RGB.
def display_video(video_file: str, height: int = 400)
Displays a video locally or in a notebook: as an HTML5 video if running in a notebook, otherwise by opening the file with the default system viewer.
def get_island_colored_map(island_top_down_map_data: numpy.ndarray) -> PIL.Image.Image
Get the topdown map for a scene with island colors.
def is_notebook() -> bool
Detects whether the code is running in a notebook.
def make_video(observations: typing.List[numpy.ndarray], primary_obs: str, primary_obs_type: str, video_file: str, fps: int = 60, open_vid: bool = True, video_dims: typing.Optional[typing.Tuple[int]] = None, overlay_settings: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None, depth_clip: typing.Optional[float] = 10.0, observation_to_image = <function observation_to_image>)
Build a video from a passed observations array, with some images optionally overlayed.
def make_video_frame(ob, primary_obs: str, primary_obs_type: str, video_dims, overlay_settings = None, observation_to_image = <function observation_to_image>) -> PIL.Image.Image
def observation_to_image(observation_image: numpy.ndarray, observation_type: str, depth_clip: typing.Optional[float] = 10.0) -> PIL.Image.Image
Generate an RGB image from a sensor observation. Supported types are: “color”, “depth”, “semantic”
def save_video(video_file: str, frames, fps: int = 60)
Saves the video using imageio. Tries to use GPU hardware encoding on Google Colab for faster video encoding, and displays a progress bar.
def semantic_to_rgb(semantic_image: numpy.ndarray) -> PIL.Image.Image
Map semantic ids to colors and generate an RGB image.

Data

d3_40_colors_hex = …
d3_40_colors_rgb = …

Function documentation

def habitat_sim.utils.viz_utils.depth_to_rgb(depth_image: numpy.ndarray, clip_max: float = 10.0) -> numpy.ndarray

Normalize a depth image into [0, 1] and convert it to grayscale RGB.

Parameters
depth_image Raw depth observation image from sensor output.
clip_max Max depth distance for clipping and normalization.
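
A minimal usage sketch; the synthetic depth array below stands in for a real sensor observation, and the sensor key in the comment is illustrative only:

import numpy as np
from habitat_sim.utils import viz_utils as vut

# Synthetic depth map standing in for real sensor output; in practice this
# would come from something like sim.get_sensor_observations()["depth_sensor"].
depth_obs = np.random.uniform(0.0, 15.0, size=(480, 640)).astype(np.float32)

# Depths beyond clip_max are clipped, the image is normalized to [0, 1],
# then expanded to three identical channels as grayscale RGB.
rgb = vut.depth_to_rgb(depth_obs, clip_max=10.0)
print(rgb.shape)  # expected: (480, 640, 3)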

def habitat_sim.utils.viz_utils.display_video(video_file: str, height: int = 400)

Displays a video locally or in a notebook: as an HTML5 video if running in a notebook, otherwise by opening the file with the default system viewer.

Parameters
video_file The filename of the video to display.
height The height at which to display the video in a notebook.
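
A minimal call sketch; the video path is a placeholder for an existing file:

from habitat_sim.utils import viz_utils as vut

# In a notebook this embeds an HTML5 <video> element at the given height;
# otherwise it opens the file with the default system viewer.
# "example.mp4" is a placeholder path for illustration.
vut.display_video("example.mp4", height=400)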

def habitat_sim.utils.viz_utils.get_island_colored_map(island_top_down_map_data: numpy.ndarray) -> PIL.Image.Image

Get the topdown map for a scene with island colors.

Parameters
island_top_down_map_data The island index map data from Pathfinder.get_topdown_island_view()
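
A usage sketch, assuming an already-initialized habitat_sim.Simulator named sim with a loaded navmesh; the get_topdown_island_view arguments shown mirror Pathfinder.get_topdown_view and should be checked against your installed version's signature:

from habitat_sim.utils import viz_utils as vut

# sim is an assumed, already-configured habitat_sim.Simulator instance.
# Argument order (meters_per_pixel, height) mirrors Pathfinder.get_topdown_view;
# verify against your version's signature.
island_map = sim.pathfinder.get_topdown_island_view(0.1, 0.0)
img = vut.get_island_colored_map(island_map)  # PIL.Image.Image, one color per island
img.save("island_map.png")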

def habitat_sim.utils.viz_utils.make_video(observations: typing.List[numpy.ndarray], primary_obs: str, primary_obs_type: str, video_file: str, fps: int = 60, open_vid: bool = True, video_dims: typing.Optional[typing.Tuple[int]] = None, overlay_settings: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None, depth_clip: typing.Optional[float] = 10.0, observation_to_image = <function observation_to_image>)

Build a video from a passed observations array, with some images optionally overlayed.

Parameters
observations List of observations from which the video should be constructed.
primary_obs Sensor name in observations to be used for the primary video images.
primary_obs_type Primary image observation type (“color”, “depth”, “semantic” supported).
video_file File to save the resultant .mp4 video.
fps Desired video frames per second.
open_vid Whether or not to open the video upon creation.
video_dims Height by width of the video, if different from the observation dimensions. Applied after overlays.
overlay_settings List of settings dicts, optional.
depth_clip Defines the default depth clip normalization for all depth images.
observation_to_image Allows overriding the observation_to_image function.

Each overlay_settings dict specifies, per entry:

“type”: observation type (“color”, “depth”, “semantic” supported)

“dims”: overlay dimensions (tuple: (width, height))

“pos”: overlay position of the top-left corner (tuple: (width, height))

“border”: overlay image border thickness (int)

“border_color”: overlay image border color, values in [0, 255] (3-element array, list, or tuple). Defaults to gray [150]

“obs”: observation key (string)
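
A sketch putting the pieces together, using synthetic observations in place of a real rollout; the sensor keys "color_sensor" and "depth_sensor" are placeholders for whatever your agent configuration defines:

import numpy as np
from habitat_sim.utils import viz_utils as vut

# Synthetic per-step observation dicts standing in for a real rollout; normally
# these accumulate via observations.append(sim.get_sensor_observations()).
# Habitat color sensors emit RGBA uint8 images, hence the 4 channels.
observations = [
    {
        "color_sensor": np.random.randint(0, 255, (480, 640, 4), dtype=np.uint8),
        "depth_sensor": np.random.uniform(0.0, 12.0, (480, 640)).astype(np.float32),
    }
    for _ in range(30)
]

overlay_settings = [
    {
        "type": "depth",        # render the overlay as a depth image
        "dims": (160, 120),     # overlay size (width, height)
        "pos": (10, 10),        # top-left corner of the overlay
        "border": 2,            # border thickness in pixels
        "obs": "depth_sensor",  # observation key to draw in the overlay
    }
]

vut.make_video(
    observations,
    primary_obs="color_sensor",
    primary_obs_type="color",
    video_file="trajectory",  # saved as an .mp4; suffix handling may vary by version
    fps=30,
    open_vid=False,
    overlay_settings=overlay_settings,
)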

def habitat_sim.utils.viz_utils.observation_to_image(observation_image: numpy.ndarray, observation_type: str, depth_clip: typing.Optional[float] = 10.0) -> PIL.Image.Image

Generate an RGB image from a sensor observation. Supported types are: “color”, “depth”, “semantic”

Parameters
observation_image Raw observation image from sensor output.
observation_type Observation type (“color”, “depth”, “semantic” supported)
depth_clip Max depth distance used to clip and normalize depth images.
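
A sketch of the “depth” branch, again with a synthetic observation standing in for sensor output:

import numpy as np
from habitat_sim.utils import viz_utils as vut

# Synthetic depth observation standing in for real sensor output.
depth_obs = np.random.uniform(0.0, 12.0, size=(480, 640)).astype(np.float32)
img = vut.observation_to_image(depth_obs, "depth", depth_clip=10.0)
print(img.size)  # PIL reports (width, height): (640, 480)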

def habitat_sim.utils.viz_utils.save_video(video_file: str, frames, fps: int = 60)

Saves the video using imageio. Tries to use GPU hardware encoding on Google Colab for faster video encoding, and displays a progress bar.

Parameters
video_file The filename where the video will be saved.
frames The frame objects to save.
fps The frames per second of the video (default 60).
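
A minimal sketch; three black frames stand in for real rendered frames, and any sequence of arrays imageio can encode should work:

import numpy as np
from habitat_sim.utils import viz_utils as vut

# Three black 640x480 RGB frames as stand-in data.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(3)]
vut.save_video("black.mp4", frames, fps=60)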

def habitat_sim.utils.viz_utils.semantic_to_rgb(semantic_image: numpy.ndarray) -> PIL.Image.Image

Map semantic ids to colors and generate an RGB image.

Parameters
semantic_image Raw semantic observation image from sensor output.
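
A minimal sketch with random ids standing in for a real semantic observation:

import numpy as np
from habitat_sim.utils import viz_utils as vut

# Random integer ids stand in for a real semantic observation; ids map onto
# the module's 40-entry d3_40_colors palette (ids beyond 40 presumably wrap).
semantic_obs = np.random.randint(0, 40, size=(480, 640), dtype=np.int32)
img = vut.semantic_to_rgb(semantic_obs)  # PIL.Image.Image
img.save("semantic.png")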