# Camera and Video Recording in GenesisLab
GenesisLab now supports flexible camera configuration and video recording in both headless and viewer modes.
## Features

- ✅ **Headless rendering**: record videos without opening a viewer window
- ✅ **Flexible camera setup**: configure position, resolution, FOV, and attachment
- ✅ **Optional configuration**: the camera can be `None` (disabled)
- ✅ **Multiple backends**: rasterizer (fast), raytracer (quality), batch renderer (RL)
- ✅ **Easy integration**: simple config-based API
## Quick Start

### 1. Basic Camera Setup (No Recording)

```python
from genesislab.engine.scene import SceneCfg, CameraCfg

scene_cfg = SceneCfg(
    viewer=False,  # Headless mode
    camera=CameraCfg(
        res=(1920, 1080),
        pos=(5.0, 0.0, 3.0),
        lookat=(0.0, 0.0, 0.5),
        fov=45.0,
    ),
)
```
### 2. Camera + Video Recording

```python
from genesislab.engine.scene import SceneCfg, CameraCfg, RecordingCfg

scene_cfg = SceneCfg(
    viewer=False,  # Headless mode
    camera=CameraCfg(
        res=(1920, 1080),
        pos=(5.0, 0.0, 3.0),
        lookat=(0.0, 0.0, 0.5),
        fov=45.0,
        backend="rasterizer",  # Fast rendering
    ),
    recording=RecordingCfg(
        enabled=True,
        save_path="output/my_video.mp4",
        fps=60,
        codec="libx264",
        codec_preset="veryfast",
    ),
)
```
### 3. Track Robot with Camera

```python
camera=CameraCfg(
    res=(1280, 720),
    pos=(2.0, 0.0, 1.5),    # Offset from robot
    lookat=(1.0, 0.0, 0.5), # Look ahead
    entity_name="robot",    # Attach to robot
    link_name="pelvis",     # Specific link (optional)
)
```
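To build intuition for attachment: when the camera is attached to an entity, `pos` and `lookat` are interpreted in that entity's local frame, so the camera follows the robot. The sketch below shows the world-frame pose you would expect for the offsets above, assuming (for simplicity) that the robot's orientation is a pure yaw rotation. This is illustrative math only, not GenesisLab code; the actual transform is handled internally.

```python
import math


def attached_camera_world_pose(robot_pos, robot_yaw, local_pos, local_lookat):
    """Map entity-local camera pos/lookat into the world frame, assuming
    the robot's orientation is a pure yaw rotation about the z axis."""
    c, s = math.cos(robot_yaw), math.sin(robot_yaw)

    def to_world(p):
        x, y, z = p
        return (robot_pos[0] + c * x - s * y,
                robot_pos[1] + s * x + c * y,
                robot_pos[2] + z)

    return to_world(local_pos), to_world(local_lookat)


# Robot at (1, 2, 0) facing +x (yaw = 0), with the offsets from the config above
pos, lookat = attached_camera_world_pose(
    (1.0, 2.0, 0.0), 0.0, (2.0, 0.0, 1.5), (1.0, 0.0, 0.5)
)
print(pos)     # (3.0, 2.0, 1.5)
print(lookat)  # (2.0, 2.0, 0.5)
```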
### 4. No Camera (Default)

```python
scene_cfg = SceneCfg(
    viewer=False,
    camera=None,     # No camera
    recording=None,  # No recording
)
```
## Command Line Usage (AMO Example)

### Record video in headless mode

```bash
python third_party/genPiHub/scripts/amo/genesislab/play_amo_mesh_terrain.py \
    --headless \
    --record-video \
    --video-path output/amo_demo.mp4 \
    --video-fps 60 \
    --camera-res 1920 1080 \
    --camera-pos 5.0 0.0 3.0 \
    --max-steps 1000
```
### Headless without recording (for testing)

```bash
python third_party/genPiHub/scripts/amo/genesislab/play_amo_mesh_terrain.py \
    --headless \
    --max-steps 1000
```
### With viewer (no recording needed)

```bash
python third_party/genPiHub/scripts/amo/genesislab/play_amo_mesh_terrain.py \
    --viewer \
    --max-steps 1000
```
## Configuration Reference

### CameraCfg

| Parameter | Type | Default | Description |
|---|---|---|---|
| `res` | `tuple[int, int]` | `(1280, 720)` | Camera resolution (width, height) |
| `pos` | `tuple[float, float, float]` | `(3.5, 0.0, 2.5)` | Camera position (x, y, z) |
| `lookat` | `tuple[float, float, float]` | `(0.0, 0.0, 0.5)` | Look-at target (x, y, z) |
| `up` | `tuple[float, float, float]` | `(0.0, 0.0, 1.0)` | Up direction |
| `fov` | `float` | `40.0` | Vertical field of view (degrees) |
| `entity_name` | `str \| None` | `None` | Entity to attach the camera to |
| `link_name` | `str \| None` | `None` | Link to attach the camera to |
| `backend` | `str` | `"rasterizer"` | Camera backend (`rasterizer` / `raytracer` / `batch_renderer`) |
| `GUI` | `bool` | `False` | Show the camera view in the viewer GUI |
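To clarify how `pos`, `lookat`, and `up` interact, here is the standard computer-graphics derivation of a look-at camera's orthonormal basis. This is generic math for intuition, not GenesisLab code; GenesisLab computes this internally from the three config values.

```python
import math


def look_at_basis(pos, lookat, up=(0.0, 0.0, 1.0)):
    """Derive the camera's forward/right/up unit vectors from pos, lookat, up."""
    def sub(a, b):
        return tuple(x - y for x, y in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def norm(v):
        n = math.sqrt(sum(x * x for x in v))
        return tuple(x / n for x in v)

    forward = norm(sub(lookat, pos))  # viewing direction
    right = norm(cross(forward, up))  # perpendicular to forward and world up
    cam_up = cross(right, forward)    # true camera up, orthogonal to both
    return forward, right, cam_up


# Using the defaults from the table above:
forward, right, cam_up = look_at_basis((3.5, 0.0, 2.5), (0.0, 0.0, 0.5))
```

Note that `up` is only a hint for roll: the true camera up is re-orthogonalized against the viewing direction, so `up` need not be exactly perpendicular to it.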
### RecordingCfg

| Parameter | Type | Default | Description |
|---|---|---|---|
| `enabled` | `bool` | `False` | Enable video recording |
| `save_path` | `str` | `"output/recording.mp4"` | Output video path |
| `fps` | `int` | `60` | Video frame rate |
| `codec` | `str` | `"libx264"` | Video codec |
| `codec_preset` | `str` | `"veryfast"` | Encoding speed preset |
| `codec_tune` | `str` | `"zerolatency"` | Codec tuning option |
| `rgb` | `bool` | `True` | Render RGB |
| `depth` | `bool` | `False` | Render depth |
| `segmentation` | `bool` | `False` | Render segmentation |
| `normal` | `bool` | `False` | Render normals |
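For example, a config fragment requesting depth alongside RGB might look like the sketch below. The field names follow the table above; verify them against your GenesisLab version before relying on them.

```python
from genesislab.engine.scene import RecordingCfg

recording = RecordingCfg(
    enabled=True,
    save_path="output/rgbd_demo.mp4",
    fps=30,
    rgb=True,            # color stream (default)
    depth=True,          # also render depth
    segmentation=False,
    normal=False,
)
```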
## Programmatic Usage in Tasks

```python
from genesislab.envs import ManagerBasedRlEnvCfg
from genesislab.engine.scene import SceneCfg, CameraCfg, RecordingCfg


@configclass  # import the configclass decorator from your GenesisLab utilities
class MyTaskEnvCfg(ManagerBasedRlEnvCfg):
    scene: SceneCfg = SceneCfg(
        num_envs=1,
        viewer=False,
        camera=CameraCfg(
            res=(1920, 1080),
            pos=(5.0, 0.0, 3.0),
            lookat=(0.0, 0.0, 0.5),
        ),
        recording=RecordingCfg(
            enabled=True,
            save_path="output/task_demo.mp4",
            fps=60,
        ),
    )
```
## Manual Camera Rendering

If you need to render camera frames manually (e.g., for custom processing):

```python
# After env.build()
if env.scene.camera is not None:
    rgb, depth, seg, normal = env.scene.render_camera(
        rgb=True,
        depth=False,
        segmentation=False,
        normal=False,
    )
    # Process rgb...
```
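As one example of "processing rgb", a quick brightness check catches the common black-frame problem early. The sketch below uses a plain nested list as a stand-in for whatever array type `render_camera` actually returns, so it runs without a scene.

```python
def mean_brightness(rgb_frame):
    """Average channel value over an H x W frame of (R, G, B) pixels (0-255)."""
    total, count = 0, 0
    for row in rgb_frame:
        for pixel in row:
            total += sum(pixel)
            count += 3
    return total / count


# A tiny 2x2 stand-in frame: one white pixel, three black pixels
frame = [[(255, 255, 255), (0, 0, 0)],
         [(0, 0, 0), (0, 0, 0)]]
print(mean_brightness(frame))  # 63.75
if mean_brightness(frame) < 5:
    print("Frame is nearly black -- check lighting and camera placement")
```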
## Backend Comparison

| Backend | Speed | Quality | Multi-Env | Use Case |
|---|---|---|---|---|
| `rasterizer` | ⚡ Fast | Medium | ✅ Yes | Default, debugging, demos |
| `raytracer` | 🐌 Slow | High | ❌ No | High-quality visualization |
| `batch_renderer` | ⚡⚡ Very fast | Medium | ✅ Yes | Vision-based RL (thousands of envs) |
## Notes

- Recording happens automatically during `scene.step()` when enabled
- The camera can be `None` to disable it entirely
- Recording requires a camera to be configured
- The video is saved automatically when recording stops
- Works in both headless and viewer modes
- Camera coordinates are in the world frame (or the entity's local frame if attached)
## Troubleshooting

**Q: Video file not created?**

- Ensure `recording.enabled=True`
- Check that a camera is configured
- Verify the output directory exists (it is created automatically)
- Run enough steps for the video to be saved
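The checks above can be automated with a small stdlib helper; the path is just the example `save_path` from earlier and should be replaced with your own.

```python
from pathlib import Path


def recording_ok(path):
    """True if the recorded video exists on disk and is non-empty."""
    p = Path(path)
    return p.exists() and p.stat().st_size > 0


# e.g. after a headless run:
# if not recording_ok("output/my_video.mp4"):
#     print("check recording.enabled, the camera config, and the step count")
```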
**Q: Black/dark video?**

- Add lights to the scene (use `VisOptions` for HDRI)
- Adjust the camera position
- Check that the robot/scene is visible from the camera viewpoint

**Q: Recording in headless mode not working?**

- It should work: recording is designed for headless mode
- Set `viewer=False` and `recording.enabled=True`
- The camera renders offscreen without a viewer window