Replacing DCL footage with clean synthetic data

Synthetic LED-lit gate dataset.

Footage from DCL The Game produces noisy labels and the wrong visual signature for the AIGP simulator due in May 2026. This generator instead renders LED-lit square gates against procedural venue backgrounds, with pixel-perfect labels (bbox + 4 corners) and AIGP-spec camera parameters.

Generator   synth_aigp_gates.py · pure CV2 + numpy, no GPU needed
Camera      640×360 · fx=fy=320 · 20° upward tilt · VADR-TS-002 §3.8 · HFoV=90° / VFoV≈58.7° (derivation below)
Gate        1.5 m square · LED-lit, multi-color · white core + colored halo
Labels      YOLO bbox + YOLO-pose 4-corner · drop-in for Phase 1 + 2
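
The intrinsics follow directly from the camera spec; a quick arithmetic check (all values from the Camera row above):

import math

W, H = 640, 360
fx = (W / 2) / math.tan(math.radians(90.0) / 2)   # HFoV=90° -> fx = 320.0
vfov = 2 * math.degrees(math.atan((H / 2) / fx))  # -> ≈ 58.7°
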
What changed. The renderer now draws the gate in 5 layers (dark structural frame · wide outer halo · mid bloom · inner glow · white-tinted hot core) so frames read as actual emitters, not flat outlines. The distance distribution is beta-skewed toward 4–15 m, the typical racing band (sampler sketch below). Backgrounds are simplified to dark venue gradients with pinpoint distant LEDs, with no random rectangles cluttering the frame.
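
A minimal sketch of such a skewed sampler; the Beta(2, 4.4) shape parameters are an assumption chosen so the mode lands near 8 m, not necessarily the generator's exact values:

import numpy as np

rng = np.random.default_rng(42)
D_MIN, D_MAX = 3.0, 25.0

def sample_distance():
    # Beta(2, 4.4) has its mode at (2-1)/(2+4.4-2) ≈ 0.227, i.e. ≈ 8 m
    # after rescaling to [3, 25] m, concentrating mass in the 4–15 m band.
    return D_MIN + (D_MAX - D_MIN) * rng.beta(2.0, 4.4)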

§ 01 Preview gallery

50 sample images at deploy/documents/synth-samples/preview/. A representative subset below — full set is shipped with the docs site.

[Thumbnail grid: previews 000, 005, 012, 018, 022, 028, 034, 041, 001, 007, 015, 025]

§ 02 How the renderer works

Stage by stage:

1. Random pose: beta-skewed distance (3–25 m, mode ≈ 8 m), azimuth/elevation within the FOV, 6-DoF rotation that tightens at close range.
2. Background: dark vertical gradient (15% chance bright/daylight), faint horizon line, a sprinkling of distant pinpoint LEDs with their own halos.
3. Project corners: pinhole projection of the 4 gate corners (1.5 m square in the gate-local frame) to image pixels via fx, fy derived from the FOV; see the sketch after this list.
4. Frame body: dark polyline (8,8,10) at 1.6× LED thickness, the matte structural part of the gate.
5. LED bloom (3 layers): big halo (×0.55, σ≈big), mid bloom (×0.95, σ≈med), inner glow (×1.4, σ≈small), composited additively onto the scene.
6. Hot core: 70% white + 30% LED color polyline, slightly thinner than the bloom; this is the actual emitter line.
7. Augmentations: motion blur (60% chance), exposure jitter, hue shift, sensor noise, JPEG compression at random quality 45–95.
8. Labels: YOLO bbox (axis-aligned over the corners) + YOLO-pose (4 corners with visibility flag).
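
A minimal sketch of stage 3, assuming a camera-from-gate rotation R and translation t in camera coordinates; the axis conventions here are illustrative, not necessarily the generator's:

import numpy as np

FX = FY = 320.0            # from HFoV=90° at 640 px width
CX, CY = 320.0, 180.0      # principal point at image center
HALF = 0.75                # half-side of the 1.5 m square gate

# Gate corners in the gate-local frame: x right, y up, gate plane at z = 0
GATE_CORNERS = np.array([[-HALF,  HALF, 0.0],
                         [ HALF,  HALF, 0.0],
                         [ HALF, -HALF, 0.0],
                         [-HALF, -HALF, 0.0]])

def project_corners(R, t):
    """R: 3x3 camera-from-gate rotation; t: gate origin in the camera frame (m).
    Returns (4, 2) pixel coordinates, image y growing downward."""
    pc = GATE_CORNERS @ R.T + t           # corners in camera coordinates
    u = FX * pc[:, 0] / pc[:, 2] + CX
    v = CY - FY * pc[:, 1] / pc[:, 2]     # flip: camera y-up -> image y-down
    return np.stack([u, v], axis=1)
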
Why white core + colored halo. Real LEDs are color-saturated in the surrounding glow and near-white at the source; that contrast is what makes the eye read "emitter" instead of "drawn outline". The two-layer trick: the bloom supplies the colored glow, the core is white-tinted, and both are composited additively over the dark frame, as in the sketch below.
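
A compact sketch of that composite; layer widths and blur sigmas are illustrative, the generator's actual multipliers are in the table above:

import cv2
import numpy as np

def draw_led_edges(frame, pts, led_color, thickness=4):
    """Additive colored-halo + near-white-core composite over a frame that
    already contains the dark structural polyline."""
    pts = pts.reshape(-1, 1, 2).astype(np.int32)
    halo = np.zeros_like(frame)
    cv2.polylines(halo, [pts], isClosed=True, color=led_color,
                  thickness=thickness * 2)
    halo = cv2.GaussianBlur(halo, (0, 0), sigmaX=6.0)       # soft colored glow
    core = np.zeros_like(frame)
    core_color = tuple(int(0.7 * 255 + 0.3 * c) for c in led_color)  # 70% white
    cv2.polylines(core, [pts], isClosed=True, color=core_color,
                  thickness=max(1, thickness - 1))
    core = cv2.GaussianBlur(core, (0, 0), sigmaX=1.5)       # slight softening
    frame = cv2.add(frame, halo)   # additive: the glow only brightens pixels
    return cv2.add(frame, core)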

§ 03 Generate the dataset

Quick preview (50 images, ~10 s)

python synth_aigp_gates.py --output dataset_gates_synthetic --preview 50

Outputs JPEGs under dataset_gates_synthetic/preview/. Inspect, tweak, re-run.

Full dataset (10K images, ~5–10 min)

python synth_aigp_gates.py --output dataset_gates_synthetic \
  --n-train 8000 --n-val 2000 --imgsz 640 360 --seed 42

Outputs the YOLO-format tree:

dataset_gates_synthetic/
  images/{train,val}/*.jpg
  labels/{train,val}/*.txt           # YOLO-pose: bbox + 4 keypoints (Phase 2; line format below)
  labels_bbox/{train,val}/*.txt      # plain YOLO bbox (Phase 1)
  data.yaml                          # for detector training
  data_pose.yaml                     # for keypoint training
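
For reference, one line of labels/{train,val}/*.txt in YOLO-pose format: class, normalized bbox (cx cy w h), then x y visibility for each of the 4 corners (2 = labeled and visible). Numbers and corner order here are illustrative:

0 0.513 0.431 0.207 0.302 0.410 0.280 2 0.617 0.280 2 0.617 0.582 2 0.410 0.582 2

The labels_bbox variant keeps only the first five fields.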

Hook into existing training

Phase 1 (detector):

python train_apex.py detector \
  --dataset dataset_gates_synthetic \
  --epochs 200 \
  --name apex_yolo11n_synth \
  --wandb-project aigp-gate-detector-synth

Phase 2 (keypoints) reads data_pose.yaml automatically when the dataset has a kpt_shape field.
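
A representative data_pose.yaml, assuming the standard Ultralytics layout; paths mirror the tree above, and kpt_shape declares 4 keypoints × (x, y, visibility):

path: dataset_gates_synthetic
train: images/train
val: images/val
names:
  0: gate
kpt_shape: [4, 3]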

§ 04 Known limitations & next steps


Cross-refs: W&B uploads · training runbook · related code: synth_aigp_gates.py, train_apex.py:790 (camera constants).