AI Grand Prix — Code Submission & Model Validation Guide
How to validate, benchmark, and submit your AI for the Virtual Qualifier • VADR-TS-001 compliant
How Submission Works
Per VADR-TS-001 (Section 8): Round One verifies that contestant software can successfully navigate the racecourse. You submit Python code that connects to the DCL simulator via MAVLink. The simulator runs your AI autonomously — start gate, intermediate gates, finish gate. Max 8 minutes. Zero human interaction. Scored on gates passed + total time.
What the simulator provides
| Input | Message | Rate |
| --- | --- | --- |
| Vehicle attitude (roll/pitch/yaw) | ATTITUDE | Streamed |
| IMU (accel + gyro) | HIGHRES_IMU | Streamed |
| Position + velocity (NED) | ODOMETRY | Streamed |
| Connection heartbeat | HEARTBEAT | Streamed |
| Time sync | TIMESYNC | Streamed |
| Forward-facing camera | Vision stream | TBD (separate spec) |
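ODOMETRY reports velocity in NED, while commands (next section) can be sent in either NED or body frame. Converting between the two is a yaw rotation; here is a minimal sketch of the horizontal case — a helper we wrote for illustration, not part of the contest codebase (yaw follows the MAVLink convention: 0 = north, positive clockwise from above):

```python
import math

def ned_to_body_xy(vn: float, ve: float, yaw_rad: float) -> tuple:
    """Rotate a horizontal NED velocity (north, east) into body frame.

    Returns (forward, right); the down component is unchanged by yaw.
    """
    cy, sy = math.cos(yaw_rad), math.sin(yaw_rad)
    forward = cy * vn + sy * ve   # component along the nose
    right = -sy * vn + cy * ve    # component out the right side
    return forward, right
```

With yaw = 0 the NED and body axes coincide; facing east (yaw = π/2), a pure-east NED velocity becomes pure-forward in body frame.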
What you send back
| Command | Message | Rate |
| --- | --- | --- |
| Attitude (roll/pitch/yaw/thrust) | SET_ATTITUDE_TARGET | 50–120 Hz |
| Velocity (NED or body frame) | SET_POSITION_TARGET_LOCAL_NED | 50–120 Hz |
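SET_POSITION_TARGET_LOCAL_NED carries a `type_mask` in which a set bit tells the autopilot to *ignore* that field. A sketch of building a velocity-only mask from the POSITION_TARGET_TYPEMASK bit values defined in the MAVLink common dialect (the constants are from the MAVLink spec; the helper name is ours):

```python
# POSITION_TARGET_TYPEMASK bits (MAVLink common dialect):
# a set bit means "ignore this field".
IGNORE_PX, IGNORE_PY, IGNORE_PZ = 1, 2, 4
IGNORE_VX, IGNORE_VY, IGNORE_VZ = 8, 16, 32
IGNORE_AX, IGNORE_AY, IGNORE_AZ = 64, 128, 256
FORCE_SET = 512
IGNORE_YAW, IGNORE_YAW_RATE = 1024, 2048

def velocity_only_mask(use_yaw_rate: bool = False) -> int:
    """type_mask for a velocity-only setpoint: ignore position and
    acceleration, keep vx/vy/vz, and keep either yaw or yaw_rate
    depending on how you steer."""
    mask = (IGNORE_PX | IGNORE_PY | IGNORE_PZ
            | IGNORE_AX | IGNORE_AY | IGNORE_AZ)
    mask |= IGNORE_YAW if use_yaw_rate else IGNORE_YAW_RATE
    return mask
```

Whichever mask you choose, the message must be streamed continuously at the 50–120 Hz rate above, or the autopilot will fall back to a failsafe.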
Scoring
| Criterion | Weight | Notes |
| --- | --- | --- |
| Gates passed | Primary | Must pass start, intermediate, and finish gates in sequence |
| Total time | Secondary | Faster is better. Max 8 minutes. |
| Human interaction | Disqualification | Any human input during a timed run = immediate DQ |
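The scoring rules reduce to a sort key: gates passed descending, then total time ascending, with human-input runs disqualified. A hypothetical sketch (the dict field names are illustrative, not from the spec):

```python
def rank_runs(runs):
    """Order runs per the scoring table: gates passed is primary
    (more is better), total time is the tiebreaker (less is better).
    Runs flagged with human interaction are dropped (immediate DQ)."""
    eligible = [r for r in runs if not r.get("human_input", False)]
    return sorted(eligible, key=lambda r: (-r["gates"], r["time_s"]))
```

Note that a slow run that clears all gates always beats a fast run that misses one.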
The submission portal details (URL, format, deadline) will be provided by Anduril/DCL; the spec does not define the upload mechanism. Watch theaigrandprix.com for announcements. What you can do now: validate and benchmark locally.
Step 1: Validate Your Code
Run the submission validator
```bash
python submit_check.py
```
This checks:
| Check | What it tests |
| --- | --- |
| Module imports | All 10 core modules + 2 optional (gate_segmentation, rl_controller) import correctly |
| Vision smoke test | Creates synthetic gate frame, runs detection + PnP, verifies distance estimate |
| Controller smoke test | Creates mock gate, verifies pursuit commands are generated |
| Camera adapter | Creates synthetic camera, verifies 640x480 frame output |
| Config validation | RaceConfig loads, command rate valid, 8-min timeout set |
| Code quality | No hardcoded paths, valid Python syntax, entry point exists |
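The code-quality step can be approximated with the standard-library `ast` module. This is only an illustration of the idea — `submit_check.py` is the authoritative check, and its actual rules may differ:

```python
import ast

def code_quality_report(source: str) -> list:
    """Rough sketch of a code-quality pass: confirm the file parses,
    flag input() calls (human interaction = DQ, checklist item 8),
    and flag string literals that look like hardcoded absolute paths."""
    problems = []
    try:
        tree = ast.parse(source)
    except SyntaxError as exc:
        return [f"syntax error: {exc}"]
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "input"):
            problems.append("input() call (human interaction = DQ)")
        if (isinstance(node, ast.Constant) and isinstance(node.value, str)
                and (node.value.startswith("/home/")
                     or node.value.startswith("C:\\"))):
            problems.append(f"hardcoded path: {node.value!r}")
    return problems
```

Running it over your own modules before submission catches the two easiest ways to fail review.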
Expected output: all checks pass (green). Warnings for uncalibrated hover_thrust and default gate dimensions are OK.
Run the standalone race test
```bash
python test_race_standalone.py
```
Full race with synthetic camera + 6DOF physics. Should complete 22 gates (2 laps x 11 gates) in under 480s.
Step 2: Benchmark Your Models
The benchmark script tests each detector mode against the same simulated course and compares performance.
```bash
# Full benchmark (2 laps, ~2 min)
python benchmark_models.py

# Quick benchmark (1 lap, ~1 min)
python benchmark_models.py --quick
```
What it measures
| Metric | What it means | Good value |
| --- | --- | --- |
| Gates passed | How many gates the drone flew through | 22/22 (all) |
| Total time | How fast the course was completed | <60s (competitive) |
| Avg gate time | Average time between gate passages | <3s |
| Vision latency | Per-frame inference time | <5ms (U-Net), <12ms (YOLO) |
| Detection rate | % of frames where a gate was found | >80% |
| Vision FPS | How many frames per second the detector processes | >100 (target 120Hz) |
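The vision columns are simple derivations from per-frame logs: FPS is the inverse of mean per-frame latency, and detection rate is the fraction of frames with a gate found. A sketch (function and argument names are ours):

```python
def vision_metrics(frame_latencies_ms, detections):
    """Derive the benchmark's vision columns from per-frame logs.

    frame_latencies_ms: per-frame inference times in milliseconds.
    detections: per-frame booleans (True = a gate was found).
    Returns (avg latency ms, FPS, detection rate %).
    """
    avg_ms = sum(frame_latencies_ms) / len(frame_latencies_ms)
    fps = 1000.0 / avg_ms
    det_rate = 100.0 * sum(1 for d in detections if d) / len(detections)
    return avg_ms, fps, det_rate
```

For example, a detector averaging 5 ms per frame yields 200 FPS — comfortably above the 120 Hz target.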
Example output
```text
Mode     Gates   Time    Avg Gate   Vision   Det %   FPS
--------------------------------------------------------
color    22      45.2s   2.05s      0.3ms    92%     3333
unet     22      42.8s   1.95s      4.8ms    96%     208
yolo     22      44.1s   2.01s      11.2ms   88%     89

RECOMMENDED: unet
Set vision.mode: "unet" in race_config.py
```
The benchmark recommends the best mode based on gates passed (primary) and total time (tiebreaker). It saves results to benchmark_results.json for tracking across runs.
Model comparison guide
| Detector | Best for | Corner accuracy | Speed | Training needed |
| --- | --- | --- | --- | --- |
| U-Net + RANSAC | Best PnP depth (sub-pixel corners) | Excellent | ~5ms GPU | Yes (100 epochs) |
| YOLO | Complex backgrounds (VQ2) | Poor (bbox only) | ~12ms GPU | Yes (150 epochs) |
| Color (HSV) | Highlighted gates (VQ1) | Good | ~0.5ms | No |
| RL Policy | Learned racing (replaces controller) | N/A (end-to-end) | ~0.2ms | Yes (2M+ steps) |
Recommendation: For Round 1 (VQ1, highlighted gates), start with Color mode — it works out of the box, needs no training, and is extremely fast. If you've trained models, U-Net gives the best PnP accuracy, which translates into better distance estimates and smoother approaches. Use the benchmark to confirm which is fastest on your hardware.
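In Color mode, detection reduces to thresholding in HSV and taking the centroid of in-range pixels as a pursuit target. A toy sketch over sampled pixels — the default thresholds here are placeholders, not the competition highlight color, which belongs in race_config.py:

```python
def detect_gate_hsv(frame, lo=(35, 80, 80), hi=(85, 255, 255),
                    min_frac=0.02):
    """frame: list of ((x, y), (h, s, v)) pixel samples for one image,
    with H in OpenCV's [0, 180) convention.

    Returns the (x, y) centroid of in-range pixels, or None if fewer
    than min_frac of the sampled pixels match (no gate in view)."""
    hits = [(x, y) for (x, y), hsv in frame
            if all(l <= c <= h for c, l, h in zip(hsv, lo, hi))]
    if len(hits) < min_frac * len(frame):
        return None
    cx = sum(x for x, _ in hits) / len(hits)
    cy = sum(y for _, y in hits) / len(hits)
    return cx, cy
```

The real pipeline works on full frames (e.g. via NumPy/OpenCV masks) and feeds the centroid or PnP pose to the controller; this per-pixel version just makes the logic explicit.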
Step 3: Package for Submission
Run the packager
```bash
python submit_check.py package
```
Creates submission.zip containing all required files. Only runs if validation passes (0 failures).
Verify the package contents
```bash
unzip -l submission.zip
```
Should contain:
| File | Required | Purpose |
| --- | --- | --- |
| race_pipeline.py | Yes | Main orchestrator + entry point |
| vision_pipeline.py | Yes | Gate detection + PnP |
| mavsdk_bridge.py | Yes | MAVLink communication |
| race_config.py | Yes | Configuration |
| camera_adapter.py | Yes | Camera sources |
| gate_segmentation.py | If mode=unet | U-Net model + RANSAC |
| rl_controller.py | If using RL | Neural controller |
| drone_mpc_foundation.py | Yes | Schemas + MPC |
| race_logger.py | Yes | JSONL logging |
| sim_drone.py | Yes | Physics (standalone mode) |
| dashboard_server.py | Yes | Web dashboard |
| fpv_renderer.py | Yes | FPV rendering |
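A quick way to confirm the archive is complete is to diff its member list against the required files. A sketch using the standard zipfile module (conditional modules like gate_segmentation.py are deliberately not checked here):

```python
import zipfile

# The always-required files from the table above.
REQUIRED = [
    "race_pipeline.py", "vision_pipeline.py", "mavsdk_bridge.py",
    "race_config.py", "camera_adapter.py", "drone_mpc_foundation.py",
    "race_logger.py", "sim_drone.py", "dashboard_server.py",
    "fpv_renderer.py",
]

def missing_files(zip_src):
    """Return required files absent from the archive. zip_src is a
    path or file-like object; members are matched by basename so a
    top-level folder inside the zip doesn't cause false negatives."""
    with zipfile.ZipFile(zip_src) as zf:
        names = {n.split("/")[-1] for n in zf.namelist()}
    return [f for f in REQUIRED if f not in names]
```

An empty return means every always-required file is present; anything listed must be added before re-zipping.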
Add model weights (if trained)
```bash
# U-Net weights
cp gate_seg_best.pt submission/
cp gate_seg.onnx submission/

# YOLO weights
cp yolo_runs/gates/weights/best.pt submission/

# RL policy
cp policy.onnx submission/

# Re-zip with weights
cd submission && zip -r ../submission.zip . && cd ..
```
Final config check
Open race_config.py and verify these settings match the competition environment:
| Setting | Value | Why |
| --- | --- | --- |
| vision.mode | Your best mode (benchmark result) | Best detector for this course |
| connection.sim_url | "udpin://0.0.0.0:14540" | Standard MAVLink port |
| connection.camera_source | Match DCL spec (TBD) | DCL camera interface |
| race.max_time_s | 480.0 | 8-minute limit per spec |
| control.hover_thrust | CALIBRATED | Must match simulator physics |
| gate.width / height | Match actual gates | PnP accuracy depends on this |
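These checks are easy to automate. A hypothetical sketch over a flat dict of settings — the key names are illustrative, so adapt them to your actual RaceConfig object:

```python
def config_warnings(cfg: dict) -> list:
    """Sanity-check the settings from the table above.

    cfg maps dotted setting names (illustrative, not the real
    RaceConfig attribute names) to their values."""
    warn = []
    if cfg.get("race.max_time_s") != 480.0:
        warn.append("race.max_time_s must be 480.0 (8-minute limit)")
    if cfg.get("control.hover_thrust") == 0.50:
        warn.append("control.hover_thrust is the 0.50 default; calibrate it")
    if cfg.get("connection.sim_url") != "udpin://0.0.0.0:14540":
        warn.append("connection.sim_url is not the standard MAVLink port")
    return warn
```

An empty list means the three machine-checkable rows pass; gate dimensions and camera source still need a manual check against the DCL spec.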
Submission Checklist
| # | Check | Command | Expected |
| --- | --- | --- | --- |
| 1 | All modules import | python submit_check.py | 0 failures |
| 2 | Vision detects gates | python submit_check.py | Detection + PnP pass |
| 3 | Controller responds | python submit_check.py | Pursuit + seek commands |
| 4 | Standalone race completes | python test_race_standalone.py | 22 gates in <480s |
| 5 | Best model identified | python benchmark_models.py | Recommendation printed |
| 6 | Package created | python submit_check.py package | submission.zip exists |
| 7 | hover_thrust calibrated | Manual check | Not 0.50 (default) |
| 8 | No human interaction in code | Manual review | No input() calls |
When all 8 checks pass, your submission is ready. Upload submission.zip to the competition portal when it opens (watch theaigrandprix.com).