Frequently Asked Questions#
General Questions#
What is GenesisLab?#
GenesisLab is a unified framework for robot reinforcement learning built on top of Genesis physics simulation. It provides a manager-based architecture for defining robot learning tasks with support for multiple robots, sensors, and terrain types.
How is GenesisLab different from Isaac Lab?#
Similarities:
Both use manager-based architecture
Both support multi-robot parallel training
Both integrate with Gymnasium
Differences:
GenesisLab uses Genesis (faster than Isaac Sim)
Simpler configuration system (@configclass vs Omni Config)
Focus on ease of use and extensibility
Lighter weight and easier to get started
How is GenesisLab different from Legged Gym?#
GenesisLab is inspired by Legged Gym but offers:
Support for more robot types (not just legged)
Manager-based architecture for better modularity
Native Gymnasium integration
Built-in sensor support (cameras, LiDAR)
More extensible configuration system
Can I use GenesisLab for non-legged robots?#
Yes! While many examples focus on legged robots, GenesisLab supports:
Wheeled robots
Manipulators
Humanoids
Flying robots
Custom URDF/MJCF robots
Installation and Setup#
Do I need a GPU?#
A GPU is strongly recommended. Genesis uses CUDA for GPU-accelerated physics simulation; a CPU-only mode exists but is much slower and impractical for large-scale training.
What are the system requirements?#
OS: Linux (Ubuntu 20.04+), macOS, Windows
GPU: NVIDIA GPU with CUDA support (RTX 2070 or better recommended)
RAM: 16GB minimum, 32GB+ recommended for large-scale training
Python: 3.8+
How do I install GenesisLab?#
```bash
# Install Genesis
pip install genesis-world

# Install PyTorch
pip install torch

# Install GenesisLab
cd genesislab/source/genesislab
pip install -e .
```
See installation guide for details.
Configuration#
Why @configclass instead of @dataclass?#
@configclass is required for GenesisLab’s configuration system. Using @dataclass will cause errors. This is enforced throughout the codebase and documented in our memory system.
Correct:
```python
from genesislab.utils.configclass import configclass

@configclass
class MyConfig:
    value: float = 1.0
```
Incorrect:
```python
from dataclasses import dataclass

@dataclass  # ❌ Don't use this!
class MyConfig:
    value: float = 1.0
```
How do I modify task configurations?#
You can:
Override in code:
```python
from genesislab.tasks.go2_flat import Go2FlatEnvCfg

cfg = Go2FlatEnvCfg()
cfg.scene.num_envs = 8192
cfg.rewards.forward_vel.weight = 2.0
```
Create custom config class:
```python
@configclass
class MyCustomCfg(Go2FlatEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        self.scene.num_envs = 8192
        self.rewards.forward_vel.weight = 2.0
```
Observations and Actions#
How do I add a custom observation?#
Define observation function:
```python
import torch

def my_observation(scene: LabScene) -> torch.Tensor:
    # Compute and return a (num_envs, obs_dim) tensor from the scene state
    return observation_tensor
```
Add to configuration:
```python
@configclass
class MyObservationsCfg(ObservationManagerCfg):
    my_obs = ObservationTermCfg(
        func=my_observation,
        noise=UniformNoiseCfg(min=-0.1, max=0.1),
    )
```
What’s the difference between obs and obs_critic?#
obs: Policy observations (actor network input)
obs_critic: Privileged observations (critic network input)
Privileged observations may include information not available in the real world (e.g., terrain height map, object positions) but useful for training.
How do I use image observations?#
```python
@configclass
class VisionObservationsCfg(ObservationManagerCfg):
    # Camera images
    front_camera = ObservationTermCfg(
        func=sensors.camera_rgb,
        params={"sensor_name": "front_camera"},
    )
```
See sensor guide for details.
Rewards#
How do I design reward functions?#
Follow these principles:
Decompose into terms: Break reward into interpretable components
Normalize scale: Each term should be roughly [-1, 1] or [0, 1]
Use weights: Balance terms through weights, not by scaling inside functions
Log everything: Track individual reward terms for debugging
Example:
```python
@configclass
class MyRewardsCfg(RewardManagerCfg):
    # Goal achievement (main objective)
    forward_vel = RewardTermCfg(
        func=rewards.forward_velocity,
        weight=1.0,
    )
    # Regularization (prevent bad behavior)
    energy = RewardTermCfg(
        func=rewards.energy_penalty,
        weight=-0.001,
    )
    smooth_actions = RewardTermCfg(
        func=rewards.action_smoothness,
        weight=-0.01,
    )
```
Why are my rewards not changing?#
Common issues:
Weight is too small: Increase weight
Term is constant: Check if function returns varying values
Term not registered: Ensure term is in configuration
Numerical issues: Check for NaN/Inf values
Debug with:
```python
# Enable reward logging
env = gym.make("GenesisLab-Go2-Flat-v0", log_rewards=True)

# Check individual terms
print(env.unwrapped.scene.reward_manager.term_rewards)
```
Sensors#
What’s the difference between fake and Genesis sensors?#
Fake Sensors:
Computed from robot state (no rendering)
Very fast
Examples: IMU, joint encoders
Use for training
Genesis Sensors:
Native Genesis sensors (ray-tracing, rendering)
More realistic but slower
Examples: cameras, LiDAR
Use for evaluation or vision-based tasks
Should I use fake or Genesis sensors for training?#
For proprioceptive sensing (IMU, encoders): Use fake sensors for speed.
For exteroceptive sensing (cameras, LiDAR):
Training: Consider fake sensors or low-resolution Genesis sensors
Evaluation: Use high-fidelity Genesis sensors
Sim-to-real: Use Genesis sensors to match real sensor characteristics
Training#
How many environments should I use?#
General guideline:
Minimum: 1024 for stable training
Recommended: 4096-8192 for fast training
Maximum: Limited by GPU memory
Factors:
More envs = faster training (more samples/second)
More envs = more GPU memory
Diminishing returns after ~8192
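To make the samples-per-second trade-off concrete, here is some back-of-the-envelope arithmetic; the rollout length and FPS figure below are hypothetical inputs, not benchmarks:

```python
# Rough throughput arithmetic (illustrative numbers, not benchmarks):
# samples collected per learning iteration = num_envs * rollout_length.
num_envs = 4096
rollout_length = 24          # steps collected per env before each policy update
samples_per_iter = num_envs * rollout_length

sim_fps = 30_000             # hypothetical total environment steps per second
seconds_per_iter = samples_per_iter / sim_fps

print(samples_per_iter)          # 98304
print(round(seconds_per_iter, 2))  # 3.28
```

Doubling `num_envs` doubles the samples collected per wall-clock second only while the GPU is not yet saturated, which is why returns diminish at very high environment counts.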
How do I speed up training?#
Increase num_envs: More parallel environments
Reduce simulation step: Use larger dt (if stable)
Use fake sensors: Avoid expensive rendering
Optimize observation terms: Remove unnecessary computations
Profile code: Find bottlenecks with cProfile
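As a starting point, Python's built-in cProfile can wrap a rollout loop. In this sketch, `rollout()` is a stand-in for your own `env.step()` loop:

```python
import cProfile
import io
import pstats

def rollout():
    # Stand-in for your env.step() loop; replace with real environment calls.
    total = 0.0
    for _ in range(1000):
        total += sum(i * i for i in range(100))
    return total

profiler = cProfile.Profile()
profiler.enable()
rollout()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)  # top 10 entries by cumulative time
print(stream.getvalue())
```

Sorting by cumulative time surfaces which part of the step (physics, sensors, observation terms) dominates, which tells you which of the optimizations above to apply first.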
How do I handle GPU memory issues?#
Reduce num_envs: Use fewer environments
Reduce model size: Smaller policy network
Reduce sensor resolution: Lower camera resolution
Use gradient accumulation: In your RL library
Terrains#
How do I create custom terrains?#
```python
import numpy as np

from genesislab.components.terrains import TerrainGenerator

class MyTerrainGenerator(TerrainGenerator):
    def generate(self, difficulty: float) -> np.ndarray:
        # Generate and return a 2D height map scaled by difficulty
        return height_map
```
See terrain tutorial for details.
How does terrain curriculum work?#
Terrains are arranged in difficulty levels. As training progresses:
Robots start on easy terrains (flat)
Performance is tracked per-robot
Robots graduate to harder terrains based on performance
Eventually all robots train on the hardest terrains
This allows gradual learning from easy to hard.
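The promotion/demotion logic above can be sketched as a vectorized per-robot update; the array names and thresholds here are illustrative, not GenesisLab's actual implementation:

```python
import numpy as np

# Hypothetical per-robot curriculum update: `levels` holds each robot's
# current terrain difficulty, `tracking_error` its recent performance.
# Threshold values are illustrative, not GenesisLab defaults.
def update_terrain_levels(levels, tracking_error, max_level,
                          promote_below=0.2, demote_above=0.8):
    levels = levels.copy()
    levels[tracking_error < promote_below] += 1   # performing well: harder terrain
    levels[tracking_error > demote_above] -= 1    # struggling: easier terrain
    return np.clip(levels, 0, max_level)

levels = np.array([0, 2, 5, 5])
error = np.array([0.1, 0.5, 0.9, 0.15])
new_levels = update_terrain_levels(levels, error, max_level=5)
print(new_levels)  # [1 2 4 5]
```

Because the update is per-robot, fast learners reach hard terrains early while slower ones keep practicing on easier ones, rather than the whole population moving in lockstep.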
Debugging#
How do I visualize what’s happening?#
```python
# Create env with viewer
env = gym.make("GenesisLab-Go2-Flat-v0", num_envs=1, headless=False)

# Run and watch
obs, _ = env.reset()
for _ in range(1000):
    action = policy(obs)
    obs, rew, term, trunc, info = env.step(action)
```
How do I debug termination issues?#
```python
# Check which terminations are triggered
term_manager = env.unwrapped.scene.termination_manager

# See individual termination terms
for name, term in term_manager.terms.items():
    triggered = term.compute()
    print(f"{name}: {triggered.sum()} envs terminated")
```
Why is my simulation unstable?#
Common causes:
Time step too large: Reduce dt in scene config
Gains too high: Lower PD controller gains
Actions too large: Reduce action scale or clip range
Solver settings: Adjust Genesis solver parameters
Sim-to-Real#
How do I prepare for sim-to-real transfer?#
Domain randomization: Randomize dynamics, observations
Realistic sensing: Use Genesis sensors to match real sensors
Noise injection: Add realistic noise to observations
Action delay: Simulate communication latency
Conservative policies: Penalize aggressive behaviors
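Action delay (item 4) can be simulated with a small FIFO buffer of past actions; this is a generic sketch, not a GenesisLab API:

```python
from collections import deque

# Minimal action-delay buffer, assuming a fixed latency measured in
# control steps. Names are illustrative, not GenesisLab API.
class ActionDelayBuffer:
    def __init__(self, delay_steps, initial_action):
        # Pre-fill so the first `delay_steps` steps replay the initial action.
        self.buffer = deque([initial_action] * (delay_steps + 1),
                            maxlen=delay_steps + 1)

    def step(self, new_action):
        self.buffer.append(new_action)   # newest in, oldest out
        return self.buffer[0]            # action issued `delay_steps` ago

buf = ActionDelayBuffer(delay_steps=2, initial_action=0.0)
print(buf.step(1.0))  # 0.0
print(buf.step(2.0))  # 0.0
print(buf.step(3.0))  # 1.0
```

Randomizing `delay_steps` per episode (e.g. 0-3 steps) makes the policy robust to the jitter of a real control stack rather than to one fixed latency.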
What should I randomize?#
Dynamics:
Mass (±20%)
Friction (±30%)
Motor strength (±10%)
Joint damping (±20%)
Observations:
Sensor noise (Gaussian or uniform)
Sensor bias
Latency/delays
Environment:
Terrain variations
External forces (wind, pushes)
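The ranges above can be sampled per environment with NumPy; how the resulting scales are applied to simulator parameters is omitted here, since that part is specific to your setup:

```python
import numpy as np

# Sample per-env randomization scales matching the ranges above.
# Applying them to the simulator is left out (setup-specific).
rng = np.random.default_rng(seed=0)
num_envs = 4096

mass_scale = rng.uniform(0.8, 1.2, num_envs)       # mass ±20%
friction_scale = rng.uniform(0.7, 1.3, num_envs)   # friction ±30%
motor_scale = rng.uniform(0.9, 1.1, num_envs)      # motor strength ±10%
damping_scale = rng.uniform(0.8, 1.2, num_envs)    # joint damping ±20%

# Observation noise (resampled per step) and bias (resampled per episode);
# the obs dimension of 48 is a hypothetical example.
obs_noise = rng.normal(0.0, 0.05, (num_envs, 48))
obs_bias = rng.uniform(-0.02, 0.02, (num_envs, 48))
```

Dynamics scales are typically resampled on reset so each episode sees a different "robot", while observation noise changes every step.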
Performance#
How many FPS should I expect?#
Typical performance (NVIDIA RTX 3090):
4096 envs, fake sensors: 10,000-50,000 FPS
4096 envs, cameras: 1,000-5,000 FPS
16,384 envs, fake sensors: 20,000-80,000 FPS
Factors:
Scene complexity
Number of sensors
Sensor types and resolution
Observation/reward computation complexity
Why is my simulation slow?#
Check:
Sensor bottleneck: Are you using high-res cameras?
Observation bottleneck: Expensive observation computations?
Number of envs: Too few envs (use more for better parallelization)
CPU bottleneck: Move computations to GPU
Profile with:
```python
from genesislab.utils.timing import Timer

with Timer("step"):
    env.step(actions)
```
Next Steps#
Check getting started for tutorials
See API reference for detailed docs
Join our community for more help!