Camera API Fix Notes

Problem 1: Wrong API Method

The original implementation hit an error while recording video:

AttributeError: 'RasterizerCameraSensor' object has no attribute 'render'

Cause

Genesis has two camera APIs:

  1. Regular Camera (created via scene.add_camera())

    • Method: camera.render(rgb=True, depth=False, ...)

    • Returns: a tuple (rgb, depth, segmentation, normal)

    • Used for: static cameras

  2. Sensor Camera (created via scene.add_sensor())

    • Method: camera.read()

    • Returns: a data object with attributes such as .rgb and .depth

    • Used for: entity-attached cameras

When the camera is attached to an entity, we create it with scene.add_sensor(gs.sensors.RasterizerCameraOptions(...)), which yields a Sensor rather than a Regular Camera. It must therefore be read with read(), not render().
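The two interfaces can also be told apart defensively with duck typing. This is only a sketch with stand-in objects, not Genesis classes; in practice recording a flag at creation time (as the fix below does with _camera_is_sensor) is more reliable:

```python
def camera_is_sensor(camera) -> bool:
    """Heuristic check: sensor cameras expose read(), regular cameras expose render().

    Sketch only: the real fix records the camera kind when it is created,
    rather than probing the object afterwards.
    """
    return hasattr(camera, "read") and not hasattr(camera, "render")
```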

Fix 1: Use the Correct API

Before:

# ❌ Wrong: the sensor has no render() method
data_func=lambda: camera.render(rgb=True, ...)[0]

After:

# ✅ Correct: pick the API based on the camera type
if self._camera_is_sensor:
    def data_func():
        data = camera.read()
        return data.rgb
else:
    def data_func():
        return camera.render(rgb=True, ...)[0]

Problem 2: Wrong Data Shape

The second error:

GenesisException: [VideoFileWriter] Data must be either grayscale [H, W] or color [H, W, RGB]

Cause

For multi-environment scenes (num_envs > 1), camera.read().rgb returns data shaped [n_envs, H, W, 3], while VideoFileWriter expects [H, W, 3].
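The shape mismatch is easy to reproduce with synthetic NumPy data (no actual camera involved):

```python
import numpy as np

# Simulated multi-env frame batch: [n_envs, H, W, 3]
batch = np.zeros((4, 48, 64, 3), dtype=np.uint8)

# The video writer wants a single [H, W, 3] frame, so slice out one env.
frame = batch[0] if batch.ndim == 4 else batch
print(frame.shape)  # (48, 64, 3)
```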

Fix 2: Handle the Batch Dimension

def data_func():
    data = camera.read()
    rgb = data.rgb
    # Handle both single env [H, W, 3] and multi-env [n_envs, H, W, 3]
    if rgb.ndim == 4:
        return rgb[0]  # Take first environment
    else:
        return rgb

The render_camera() Method

After:

def render_camera(self, rgb=True, depth=False, segmentation=False, normal=False):
    if getattr(self, "_camera_is_sensor", False):
        # Sensor API: read() returns data object
        # Note: for multi-env, data.rgb will be [n_envs, H, W, 3]
        return self.camera.read()
    else:
        # Regular camera API: render() returns tuple
        return self.camera.render(rgb=rgb, depth=depth, ...)
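Because render_camera() now returns a data object for sensors but a tuple for regular cameras, callers still have to unpack the result accordingly. A hedged sketch of a caller-side helper (extract_rgb is illustrative, not part of the original code):

```python
def extract_rgb(result, is_sensor: bool):
    """Pull the RGB payload out of either render_camera() return shape."""
    if is_sensor:
        return result.rgb   # sensor read() -> data object with .rgb
    return result[0]        # regular render() -> (rgb, depth, segmentation, normal)
```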

Verification

After the fixes, the following commands should work:

# Chase mode tracking the robot (uses the sensor API)
python third_party/genPiHub/scripts/amo/genesislab/play_amo_mesh_terrain.py \
    --headless \
    --record-video \
    --camera-track chase \
    --max-steps 300

# Static camera (uses the regular camera API)
python third_party/genPiHub/scripts/amo/genesislab/play_amo_mesh_terrain.py \
    --headless \
    --record-video \
    --camera-track static \
    --max-steps 300

API Usage Matrix

Camera Type                 | Created via                            | Method   | Returns
----------------------------|----------------------------------------|----------|---------
Static (entity_name=None)   | add_camera()                           | render() | tuple
Attached (entity_name set)  | add_sensor(RasterizerCameraOptions)    | read()   | data.rgb
Raytracer                   | add_sensor(RaytracerCameraOptions)     | read()   | data.rgb
BatchRenderer               | add_sensor(BatchRendererCameraOptions) | read()   | data.rgb
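The matrix collapses into a single frame-grabbing helper combining both fixes. This is a sketch; grab_frame is illustrative and not a function from the codebase:

```python
def grab_frame(camera, is_sensor: bool):
    """Return one [H, W, 3] RGB frame regardless of camera type (sketch)."""
    if is_sensor:
        rgb = camera.read().rgb                   # sensor API: data object
        return rgb[0] if rgb.ndim == 4 else rgb   # drop multi-env batch dim
    return camera.render(rgb=True)[0]             # regular API: first tuple item
```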

Everything Works Now!

All track modes should now work:

  • ✅ chase (attached sensor)

  • ✅ follow (attached sensor)

  • ✅ side (attached sensor)

  • ✅ top (attached sensor)

  • ✅ first_person (attached sensor)

  • ✅ static (regular camera)