class Benchmark

Benchmark for evaluating agents in environments.
Methods
- def evaluate(self, agent: Agent, num_episodes: typing.Optional[int] = None) -> typing.Dict[str, float]
- def local_evaluate(self, agent: Agent, num_episodes: typing.Optional[int] = None)
- def remote_evaluate(self, agent: Agent, num_episodes: typing.Optional[int] = None)
Special methods
- def __init__(self, config_paths: typing.Optional[str] = None, eval_remote = False) -> None
Method documentation
def habitat.Benchmark.evaluate(self,
    agent: Agent,
    num_episodes: typing.Optional[int] = None) -> typing.Dict[str, float]
Parameters | |
---|---|
agent | agent to be evaluated in the environment. |
num_episodes | number of episodes to run the evaluation for. |
Returns | dict of metrics tracked by the environment. |
def habitat.Benchmark.__init__(self,
    config_paths: typing.Optional[str] = None,
    eval_remote = False) -> None
Parameters | |
---|---|
config_paths | path to the config file used to create the environment. |
eval_remote | whether to run the evaluation remotely or locally. |
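The evaluation loop behind these methods can be sketched with a self-contained toy. This is an illustrative assumption, not the habitat implementation: `ToyEnv`, `ToyAgent`, and the `reset`/`act`/`episode_over`/`get_metrics` names are stand-ins for the agent and environment interfaces, and the sketch only shows the shape of the contract, i.e. run `num_episodes` episodes and return a dict of metrics averaged across them.

```python
from typing import Dict, Optional


class ToyEnv:
    """Stand-in environment: each episode lasts 3 steps, then reports metrics."""

    def __init__(self) -> None:
        self._steps = 0

    def reset(self) -> int:
        self._steps = 0
        return 0  # dummy observation

    def step(self, action: int) -> int:
        self._steps += 1
        return self._steps  # dummy observation

    @property
    def episode_over(self) -> bool:
        return self._steps >= 3

    def get_metrics(self) -> Dict[str, float]:
        return {"success": 1.0, "steps": float(self._steps)}


class ToyAgent:
    """Stand-in agent exposing the reset()/act() pair the loop expects."""

    def reset(self) -> None:
        pass

    def act(self, observations: int) -> int:
        return 0  # always take the same dummy action


def local_evaluate(
    env: ToyEnv, agent: ToyAgent, num_episodes: Optional[int] = None
) -> Dict[str, float]:
    """Run num_episodes episodes and average the per-episode metrics."""
    num_episodes = num_episodes if num_episodes is not None else 2
    totals: Dict[str, float] = {}
    for _ in range(num_episodes):
        agent.reset()
        observations = env.reset()
        while not env.episode_over:
            action = agent.act(observations)
            observations = env.step(action)
        for name, value in env.get_metrics().items():
            totals[name] = totals.get(name, 0.0) + value
    return {name: total / num_episodes for name, total in totals.items()}


metrics = local_evaluate(ToyEnv(), ToyAgent(), num_episodes=4)
print(metrics)  # {'success': 1.0, 'steps': 3.0}
```

With the real class, the analogous call would be constructing `Benchmark` with a config path and passing an `Agent` to `evaluate`, which dispatches to `local_evaluate` or `remote_evaluate` depending on the `eval_remote` flag given at construction.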