Interactive Rigid Objects via PhysicsObjectManager

This tutorial demonstrates rigid object interactions in Habitat-sim -- instancing, dynamic simulation, and kinematic manipulation -- via direct manipulation of managed rigid objects.

The example code below is available as a Jupyter notebook or directly runnable via:

$ python path/to/habitat-sim/examples/tutorials/nb_python/managed_rigid_object_tutorial.py

First, download the example objects used in this tutorial:

$ python -m habitat_sim.utils.datasets_download --uids habitat_example_objects locobot_merged --data-path path/to/data/

Import necessary modules, define some convenience functions, and initialize the Simulator and Agent.

import os
import random

import git
import magnum as mn
import numpy as np

import habitat_sim
from habitat_sim.utils import viz_utils as vut

repo = git.Repo(".", search_parent_directories=True)
dir_path = repo.working_tree_dir
data_path = os.path.join(dir_path, "data")
output_path = os.path.join(
    dir_path, "examples/tutorials/managed_rigid_object_tutorial_output/"
)


def place_agent(sim):
    # place our agent in the scene
    agent_state = habitat_sim.AgentState()
    agent_state.position = [-0.15, -0.7, 1.0]
    agent_state.rotation = np.quaternion(-0.83147, 0, 0.55557, 0)
    agent = sim.initialize_agent(0, agent_state)
    return agent.scene_node.transformation_matrix()


def make_configuration():
    # simulator configuration
    backend_cfg = habitat_sim.SimulatorConfiguration()
    backend_cfg.scene_id = os.path.join(
        data_path, "scene_datasets/habitat-test-scenes/apartment_1.glb"
    )
    assert os.path.exists(backend_cfg.scene_id)

    # sensor configurations
    # Note: all sensors must have the same resolution
    # setup 2 rgb sensors for 1st and 3rd person views
    camera_resolution = [544, 720]
    sensor_specs = []

    rgba_camera_1stperson_spec = habitat_sim.CameraSensorSpec()
    rgba_camera_1stperson_spec.uuid = "rgba_camera_1stperson"
    rgba_camera_1stperson_spec.sensor_type = habitat_sim.SensorType.COLOR
    rgba_camera_1stperson_spec.resolution = camera_resolution
    rgba_camera_1stperson_spec.position = [0.0, 0.6, 0.0]
    rgba_camera_1stperson_spec.orientation = [0.0, 0.0, 0.0]
    rgba_camera_1stperson_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
    sensor_specs.append(rgba_camera_1stperson_spec)

    depth_camera_1stperson_spec = habitat_sim.CameraSensorSpec()
    depth_camera_1stperson_spec.uuid = "depth_camera_1stperson"
    depth_camera_1stperson_spec.sensor_type = habitat_sim.SensorType.DEPTH
    depth_camera_1stperson_spec.resolution = camera_resolution
    depth_camera_1stperson_spec.position = [0.0, 0.6, 0.0]
    depth_camera_1stperson_spec.orientation = [0.0, 0.0, 0.0]
    depth_camera_1stperson_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
    sensor_specs.append(depth_camera_1stperson_spec)

    rgba_camera_3rdperson_spec = habitat_sim.CameraSensorSpec()
    rgba_camera_3rdperson_spec.uuid = "rgba_camera_3rdperson"
    rgba_camera_3rdperson_spec.sensor_type = habitat_sim.SensorType.COLOR
    rgba_camera_3rdperson_spec.resolution = camera_resolution
    rgba_camera_3rdperson_spec.position = [0.0, 1.0, 0.3]
    rgba_camera_3rdperson_spec.orientation = [-45.0, 0.0, 0.0]
    rgba_camera_3rdperson_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
    sensor_specs.append(rgba_camera_3rdperson_spec)

    # agent configuration
    agent_cfg = habitat_sim.agent.AgentConfiguration()
    agent_cfg.sensor_specifications = sensor_specs

    return habitat_sim.Configuration(backend_cfg, [agent_cfg])


def simulate(sim, dt=1.0, get_frames=True):
    # simulate dt seconds at 60Hz to the nearest fixed timestep
    print("Simulating " + str(dt) + " world seconds.")
    observations = []
    start_time = sim.get_world_time()
    while sim.get_world_time() < start_time + dt:
        sim.step_physics(1.0 / 60.0)
        if get_frames:
            observations.append(sim.get_sensor_observations())

    return observations

    # create the simulator, closing any previous instance first
    cfg = make_configuration()
    try:  # Got to make initialization idiot proof
        sim.close()
    except NameError:
        pass
    sim = habitat_sim.Simulator(cfg)
    agent_transform = place_agent(sim)

    # get the primitive assets attributes manager
    prim_templates_mgr = sim.get_asset_template_manager()

    # get the physics object attributes manager
    obj_templates_mgr = sim.get_object_template_manager()

    # get the rigid object manager
    rigid_obj_mgr = sim.get_rigid_object_manager()

Simulation Quickstart

Basic rigid body simulation can be achieved by simply loading a template, instancing an object, and stepping the physical world. In this example, a sphere object template is loaded and the object is instanced in the scene above the table. When the simulation is stepped, it falls under the force of gravity and reacts to collisions with the scene.

    # load some object templates from configuration files
    sphere_template_id = obj_templates_mgr.load_configs(
        str(os.path.join(data_path, "test_assets/objects/sphere"))
    )[0]

    # add a sphere to the scene, returns the object
    sphere_obj = rigid_obj_mgr.add_object_by_template_id(sphere_template_id)
    # move sphere
    sphere_obj.translation = [2.50, 0.0, 0.2]

    # simulate
    observations = simulate(sim, dt=1.5, get_frames=make_video)

    if make_video:
        vut.make_video(
            observations,
            "rgba_camera_1stperson",
            "color",
            output_path + "sim_basics",
            open_vid=show_video,
        )

User-defined configuration values can be set and retrieved for each object independently. These values can be specified in the configuration files used to build the ObjectAttributes template for the object, or they can be modified directly.

    # modify an object's user-defined configurations

    # load some object templates from configuration files
    sphere_template_id = obj_templates_mgr.load_configs(
        str(os.path.join(data_path, "test_assets/objects/sphere"))
    )[0]

    # add a sphere to the scene, returns the object
    sphere_obj = rigid_obj_mgr.add_object_by_template_id(sphere_template_id)

    # set user-defined configuration values
    user_attributes_dict = {
        "obj_val_0": "This is a sphere object named " + sphere_obj.handle,
        "obj_val_1": 17,
        "obj_val_2": False,
        "obj_val_3": 19.7,
        "obj_val_4": [2.50, 0.0, 0.2],
        "obj_val_5": mn.Quaternion.rotation(mn.Deg(90.0), [-1.0, 0.0, 0.0]),
    }
    for k, v in user_attributes_dict.items():
        sphere_obj.user_attributes.set(k, v)

    for k, _ in user_attributes_dict.items():
        print(
            "Sphere Object user attribute : {} : {}".format(
                k, sphere_obj.user_attributes.get_as_string(k)
            )
        )

Forces and torques can be applied directly to the object using habitat_sim.physics.ManagedRigidObject.apply_force() and habitat_sim.physics.ManagedRigidObject.apply_torque(). Instantaneous initial velocities can also be set using the object’s properties, habitat_sim.physics.ManagedRigidObject.linear_velocity and habitat_sim.physics.ManagedRigidObject.angular_velocity.

In the example below, a constant anti-gravity force is applied to the boxes’ centers of mass (COM) causing them to float in the air. A constant torque is also applied which gradually increases the angular velocity of the boxes. A sphere is then thrown at the boxes by applying an initial velocity.

Note that forces and torques are treated as constant within each call to Simulator.step_physics() and are cleared afterward.

    observations = []
    obj_templates_mgr.load_configs(
        str(os.path.join(data_path, "objects/example_objects/"))
    )
    # search for an object template by key sub-string
    cheezit_template_handle = obj_templates_mgr.get_template_handles(
        "data/objects/example_objects/cheezit"
    )[0]
    # build multiple object initial positions
    box_positions = [
        [2.39, -0.37, 0.0],
        [2.39, -0.64, 0.0],
        [2.39, -0.91, 0.0],
        [2.39, -0.64, -0.22],
        [2.39, -0.64, 0.22],
    ]
    box_orientation = mn.Quaternion.rotation(mn.Deg(90.0), [-1.0, 0.0, 0.0])
    # instance and place the boxes
    boxes = []
    for b in range(len(box_positions)):
        boxes.append(
            rigid_obj_mgr.add_object_by_template_handle(cheezit_template_handle)
        )
        boxes[b].translation = box_positions[b]
        boxes[b].rotation = box_orientation

    # anti-gravity force f=m(-g) using first object's mass (all objects have the same mass)
    anti_grav_force = -1.0 * sim.get_gravity() * boxes[0].mass

    # throw a sphere at the boxes from the agent position
    sphere_template = obj_templates_mgr.get_template_by_id(sphere_template_id)
    sphere_template.scale = [0.5, 0.5, 0.5]

    obj_templates_mgr.register_template(sphere_template)

    # create sphere
    sphere_obj = rigid_obj_mgr.add_object_by_template_id(sphere_template_id)

    sphere_obj.translation = sim.agents[0].get_state().position + [0.0, 1.0, 0.0]
    # get the vector from the sphere to a box
    target_direction = boxes[0].translation - sphere_obj.translation
    # apply an initial velocity for one step
    sphere_obj.linear_velocity = target_direction * 5
    sphere_obj.angular_velocity = [0.0, -1.0, 0.0]

    start_time = sim.get_world_time()
    dt = 3.0
    while sim.get_world_time() < start_time + dt:
        # set forces/torques before stepping the world
        for box in boxes:
            box.apply_force(anti_grav_force, [0.0, 0.0, 0.0])
            box.apply_torque([0.0, 0.01, 0.0])
        sim.step_physics(1.0 / 60.0)
        observations.append(sim.get_sensor_observations())

    if make_video:
        vut.make_video(
            observations,
            "rgba_camera_1stperson",
            "color",
            output_path + "dynamic_control",
            open_vid=show_video,
        )

Kinematic Object Placement

Often it is enough to set the desired state of an object directly. In these cases the computational overhead of running full dynamic simulation may not be necessary to achieve a desired result. An object’s motion type can be set to habitat_sim.physics.MotionType.KINEMATIC directly through its habitat_sim.physics.ManagedRigidObject.motion_type property, which specifies that the object’s state will be directly controlled.

In the example below, a kinematic can is placed in the scene. It will not react to physical events such as collisions with dynamically simulated objects, but it still acts as a collision object for the dynamic objects dropped onto it in the following example.

    chefcan_template_handle = obj_templates_mgr.get_template_handles(
        "data/objects/example_objects/chefcan"
    )[0]
    chefcan_obj = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle)
    chefcan_obj.translation = [2.4, -0.64, 0.0]
    # set object to kinematic
    chefcan_obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC

    # drop some dynamic objects
    chefcan_obj_2 = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle)
    chefcan_obj_2.translation = [2.4, -0.64, 0.28]
    chefcan_obj_3 = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle)
    chefcan_obj_3.translation = [2.4, -0.64, -0.28]
    chefcan_obj_4 = rigid_obj_mgr.add_object_by_template_handle(chefcan_template_handle)
    chefcan_obj_4.translation = [2.4, -0.3, 0.0]

    # simulate
    observations = simulate(sim, dt=1.5, get_frames=True)

    if make_video:
        vut.make_video(
            observations,
            "rgba_camera_1stperson",
            "color",
            output_path + "kinematic_interactions",
            open_vid=show_video,
        )

Kinematic Velocity Control

To move a kinematic object, the state can be set directly before each simulation step. This is useful for synchronizing the simulation state of objects to a known state such as a dataset trajectory, input device, or motion capture.

    observations = []

    clamp_template_handle = obj_templates_mgr.get_template_handles(
        "data/objects/example_objects/largeclamp"
    )[0]
    clamp_obj = rigid_obj_mgr.add_object_by_template_handle(clamp_template_handle)
    clamp_obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC
    clamp_obj.translation = [0.8, 0.2, 0.5]

    start_time = sim.get_world_time()
    dt = 1.0
    while sim.get_world_time() < start_time + dt:
        # manually control the object's kinematic state
        clamp_obj.translation += [0.0, 0.0, 0.01]
        clamp_obj.rotation = (
            mn.Quaternion.rotation(mn.Rad(0.05), [-1.0, 0.0, 0.0]) * clamp_obj.rotation
        )
        sim.step_physics(1.0 / 60.0)
        observations.append(sim.get_sensor_observations())

    if make_video:
        vut.make_video(
            observations,
            "rgba_camera_1stperson",
            "color",
            output_path + "kinematic_update",
            open_vid=show_video,
        )

However, when applying model or algorithmic control, it is more convenient to specify constant linear and angular velocities for the object and let the simulator integrate them, rather than integrating manually. The object’s habitat_sim.physics.VelocityControl structure provides this functionality and can be acquired directly from the object via the read-only property habitat_sim.physics.ManagedRigidObject.velocity_control. Once its parameters are set, control takes effect on the next simulation step, as shown in the following example.

    # get object VelocityControl structure and setup control
    vel_control = clamp_obj.velocity_control
    vel_control.linear_velocity = [0.0, 0.0, -1.0]
    vel_control.angular_velocity = [4.0, 0.0, 0.0]
    vel_control.controlling_lin_vel = True
    vel_control.controlling_ang_vel = True

    observations = simulate(sim, dt=1.0, get_frames=True)

    # reverse linear direction
    vel_control.linear_velocity = [0.0, 0.0, 1.0]

    observations += simulate(sim, dt=1.0, get_frames=True)

    if make_video:
        vut.make_video(
            observations,
            "rgba_camera_1stperson",
            "color",
            output_path + "velocity_control",
            open_vid=show_video,
        )

Velocities can also be specified in the local space of the object to easily apply velocity control for continuous agent actions.

    vel_control.linear_velocity = [0.0, 0.0, 2.3]
    vel_control.angular_velocity = [-4.3, 0.0, 0.0]
    vel_control.lin_vel_is_local = True
    vel_control.ang_vel_is_local = True

    observations = simulate(sim, dt=1.5, get_frames=True)

    # video rendering
    if make_video:
        vut.make_video(
            observations,
            "rgba_camera_1stperson",
            "color",
            output_path + "local_velocity_control",
            open_vid=show_video,
        )

Embodied Agents

Previous stages of this tutorial have covered adding objects to the world and manipulating them by setting positions, velocities, forces, and torques. In all of these examples, the agent has been a passive onlooker observing the scene. However, the agent can also be attached to a simulated object for embodiment and control. This can be done by passing the Agent’s scene node to the habitat_sim.physics.RigidObjectManager.add_object_by_template_handle() or habitat_sim.physics.RigidObjectManager.add_object_by_template_id() functions.

In this example, the agent is embodied by a rigid robot asset and the habitat_sim.physics.VelocityControl structure is used to control the robot’s actions.

    # load the locobot_merged asset
    locobot_template_id = obj_templates_mgr.load_configs(
        str(os.path.join(data_path, "objects/locobot_merged"))
    )[0]

    # add robot object to the scene with the agent/camera SceneNode attached
    locobot = rigid_obj_mgr.add_object_by_template_id(
        locobot_template_id, sim.agents[0].scene_node
    )
    locobot.translation = [1.75, -1.02, 0.4]

    vel_control = locobot.velocity_control
    vel_control.linear_velocity = [0.0, 0.0, -1.0]
    vel_control.angular_velocity = [0.0, 2.0, 0.0]

    # simulate robot dropping into place
    observations = simulate(sim, dt=1.5, get_frames=make_video)

    vel_control.controlling_lin_vel = True
    vel_control.controlling_ang_vel = True
    vel_control.lin_vel_is_local = True
    vel_control.ang_vel_is_local = True

    # simulate forward and turn
    observations += simulate(sim, dt=1.0, get_frames=make_video)

    vel_control.controlling_lin_vel = False
    vel_control.angular_velocity = [0.0, 1.0, 0.0]

    # simulate turn only
    observations += simulate(sim, dt=1.5, get_frames=make_video)

    vel_control.angular_velocity = [0.0, 0.0, 0.0]
    vel_control.controlling_lin_vel = True
    vel_control.controlling_ang_vel = True

    # simulate forward only with damped angular velocity (reset angular velocity to 0 after each step)
    observations += simulate(sim, dt=1.0, get_frames=make_video)

    vel_control.angular_velocity = [0.0, -1.25, 0.0]

    # simulate forward and turn
    observations += simulate(sim, dt=2.0, get_frames=make_video)

    vel_control.controlling_ang_vel = False
    vel_control.controlling_lin_vel = False

    # simulate settling
    observations += simulate(sim, dt=3.0, get_frames=make_video)

    # remove the agent's body while preserving the SceneNode
    rigid_obj_mgr.remove_object_by_id(locobot.object_id, delete_object_node=False)

    # demonstrate that the locobot object no longer exists
    print("Locobot is still alive : {}".format(locobot.is_alive))

    # video rendering with embedded 1st person view
    if make_video:
        vut.make_video(
            observations,
            "rgba_camera_1stperson",
            "color",
            output_path + "robot_control",
            open_vid=show_video,
        )

Continuous Control on NavMesh

Instead of full dynamic simulation, kinematic state setting and the NavMesh can be used to simulate constrained, continuous navigation tasks. In this example the agent is embodied by a robot asset with habitat_sim.physics.MotionType.KINEMATIC. We use a habitat_sim.physics.VelocityControl structure to manually integrate a control velocity and snap the resulting state to the NavMesh before running dynamic simulation. We run this scenario twice: in the first iteration the NavMesh is configured to allow sliding, while in the second sliding is disallowed.

    # load the locobot_merged asset
    locobot_template_id = obj_templates_mgr.load_configs(
        str(os.path.join(data_path, "objects/locobot_merged"))
    )[0]
    # add robot object to the scene with the agent/camera SceneNode attached
    locobot = rigid_obj_mgr.add_object_by_template_id(
        locobot_template_id, sim.agents[0].scene_node
    )
    initial_rotation = locobot.rotation

    # set the agent's body to kinematic since we will be updating position manually
    locobot.motion_type = habitat_sim.physics.MotionType.KINEMATIC

    # create and configure a new VelocityControl structure
    # Note: this is NOT the object's VelocityControl, so it will not be consumed automatically in sim.step_physics
    vel_control = habitat_sim.physics.VelocityControl()
    vel_control.controlling_lin_vel = True
    vel_control.lin_vel_is_local = True
    vel_control.controlling_ang_vel = True
    vel_control.ang_vel_is_local = True
    vel_control.linear_velocity = [0.0, 0.0, -1.0]

    # try 2 variations of the control experiment
    for iteration in range(2):
        # reset observations and robot state
        observations = []
        locobot.translation = [1.75, -1.02, 0.4]
        locobot.rotation = initial_rotation
        vel_control.angular_velocity = [0.0, 0.0, 0.0]

        video_prefix = "robot_control_sliding"
        # turn sliding off for the 2nd pass
        if iteration == 1:
            sim.config.sim_cfg.allow_sliding = False
            video_prefix = "robot_control_no_sliding"

        # manually control the object's kinematic state via velocity integration
        start_time = sim.get_world_time()
        last_velocity_set = 0
        dt = 6.0
        time_step = 1.0 / 60.0
        while sim.get_world_time() < start_time + dt:
            previous_rigid_state = locobot.rigid_state

            # manually integrate the rigid state
            target_rigid_state = vel_control.integrate_transform(
                time_step, previous_rigid_state
            )

            # snap rigid state to navmesh and set state to object/agent
            end_pos = sim.step_filter(
                previous_rigid_state.translation, target_rigid_state.translation
            )
            locobot.translation = end_pos
            locobot.rotation = target_rigid_state.rotation

            # Check if a collision occurred
            dist_moved_before_filter = (
                target_rigid_state.translation - previous_rigid_state.translation
            ).dot()
            dist_moved_after_filter = (end_pos - previous_rigid_state.translation).dot()

            # NB: There are some cases where ||filter_end - end_pos|| > 0 when a
            # collision _didn't_ happen. One such case is going up stairs. Instead,
            # we check whether the amount moved after applying the filter
            # is _less_ than the amount moved before applying the filter.
            EPS = 1e-5
            collided = (dist_moved_after_filter + EPS) < dist_moved_before_filter

            # run any dynamics simulation
            sim.step_physics(time_step)

            # render observation
            observations.append(sim.get_sensor_observations())

            # randomize angular velocity
            last_velocity_set += time_step
            if last_velocity_set >= 1.0:
                vel_control.angular_velocity = [0.0, (random.random() - 0.5) * 2.0, 0.0]
                last_velocity_set = 0

        # video rendering with embedded 1st person views
        if make_video:
            sensor_dims = (
                sim.get_agent(0).agent_config.sensor_specifications[0].resolution
            )
            overlay_dims = (int(sensor_dims[1] / 4), int(sensor_dims[0] / 4))
            overlay_settings = [
                {
                    "obs": "rgba_camera_1stperson",
                    "type": "color",
                    "dims": overlay_dims,
                    "pos": (10, 10),
                    "border": 2,
                },
                {
                    "obs": "depth_camera_1stperson",
                    "type": "depth",
                    "dims": overlay_dims,
                    "pos": (10, 30 + overlay_dims[1]),
                    "border": 2,
                },
            ]

            vut.make_video(
                observations=observations,
                primary_obs="rgba_camera_3rdperson",
                primary_obs_type="color",
                video_file=output_path + video_prefix,
                fps=60,
                open_vid=show_video,
                overlay_settings=overlay_settings,
                depth_clip=10.0,
            )

With NavMesh sliding allowed:

With NavMesh sliding disallowed:

Feature Detail Review

Adding/Removing Objects

Objects can be instanced from templates (i.e. ObjectAttributes) into the scene by template id with habitat_sim.physics.RigidObjectManager.add_object_by_template_id() or by template string key with habitat_sim.physics.RigidObjectManager.add_object_by_template_handle(). These functions return a reference to the added object instance. In the case of errors in construction, an invalid reference is returned.

By default, a new SceneNode will be created when an object is instanced. However, the object can be attached to an existing SceneNode (e.g. that of the Agent) if provided. This is demonstrated in Embodied Agents.

Object instances can be removed by id with habitat_sim.physics.RigidObjectManager.remove_object_by_id() or by handle with habitat_sim.physics.RigidObjectManager.remove_object_by_handle(). Optionally, the object’s SceneNode can be left behind in the SceneGraph when it is removed (e.g. to prevent deletion of an embodied Agent’s SceneNode).

habitat_sim.physics.RigidObjectManager.get_object_handles() will return a list of the string handles for all the existing objects in the scene.
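
As a compact reference, the following sketch (not part of the original tutorial code) puts these calls together. It assumes sim is the initialized Simulator from the setup above and that the sphere template from the quickstart has already been loaded:

    # sketch: instance, list, and remove an object via the rigid object manager
    rigid_obj_mgr = sim.get_rigid_object_manager()
    obj_templates_mgr = sim.get_object_template_manager()

    # look up a previously loaded template by key sub-string (assumes "sphere" was loaded)
    sphere_handle = obj_templates_mgr.get_template_handles("sphere")[0]

    # instance the object; the returned wrapper references the new instance
    obj = rigid_obj_mgr.add_object_by_template_handle(sphere_handle)
    print("Added object with handle: {}".format(obj.handle))

    # list the string handles of all existing objects
    print(rigid_obj_mgr.get_object_handles())

    # remove the instance by handle (see the Embodied Agents example for
    # remove_object_by_id with delete_object_node=False)
    rigid_obj_mgr.remove_object_by_handle(obj.handle)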

MotionType

Objects can be configured to fill different roles in a simulated scene by assigning a habitat_sim.physics.MotionType (see the sketch after this list):

- habitat_sim.physics.MotionType.STATIC -- the object is not expected to move and is treated as part of the static collision world.
- habitat_sim.physics.MotionType.KINEMATIC -- the object's state is set directly (e.g. via its translation, rotation, or velocity control) and it does not respond to dynamics, but it still acts as a collision object for dynamic objects.
- habitat_sim.physics.MotionType.DYNAMIC -- the object is simulated under Newtonian dynamics, responding to forces, torques, collisions, and gravity.
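
A minimal sketch of switching an object's role (assuming obj is a managed rigid object returned by one of the add_object_* calls above and sim is the Simulator):

    # sketch: switch an existing object's role via its motion_type property
    obj.motion_type = habitat_sim.physics.MotionType.KINEMATIC
    obj.translation = [2.4, -0.5, 0.0]  # kinematic state is set directly

    obj.motion_type = habitat_sim.physics.MotionType.DYNAMIC
    sim.step_physics(1.0 / 60.0)  # gravity and collisions now apply to the object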

VelocityControl

Each object’s habitat_sim.physics.VelocityControl structure provides a simple interface for setting up continuous velocity control of the object in either the global or local coordinate frame. This can be queried from the simulator with habitat_sim.physics.ManagedRigidObject.velocity_control.

For habitat_sim.physics.MotionType.KINEMATIC objects, velocity control will directly modify the object’s rigid state.

For habitat_sim.physics.MotionType.DYNAMIC objects, velocity control sets the initial velocity of the object before each simulation step. In this case, velocity will be more accurate with smaller timestep requests to Simulator.step_physics(). Note that dynamics such as forces, collisions, and gravity still affect these objects, but expect heavy damping because velocities are re-set before each timestep while control is active.
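
For reference, a minimal usage sketch (assuming obj is an existing managed object and sim the Simulator from above); the same pattern appears in the velocity control examples earlier in this tutorial:

    # sketch: configure continuous velocity control on a managed object
    vel_control = obj.velocity_control  # read-only property returning the VelocityControl
    vel_control.linear_velocity = [0.0, 0.0, -0.5]
    vel_control.angular_velocity = [0.0, 1.0, 0.0]
    vel_control.lin_vel_is_local = True  # interpret velocities in the object's local frame
    vel_control.ang_vel_is_local = True
    vel_control.controlling_lin_vel = True  # control takes effect on the next step
    vel_control.controlling_ang_vel = True
    sim.step_physics(1.0 / 60.0)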