AI Grand Prix — Code Submission & Model Validation Guide

How to validate, benchmark, and submit your AI for the Virtual Qualifier • VADR-TS-001 compliant

How Submission Works

Per VADR-TS-001 (Section 8): Round One verifies that contestant software can successfully navigate the racecourse. You submit Python code that connects to the DCL simulator via MAVLink. The simulator runs your AI autonomously — start gate, intermediate gates, finish gate. Max 8 minutes. Zero human interaction. Scored on gates passed + total time.

What the simulator provides

| Input | Message | Rate |
|---|---|---|
| Vehicle attitude (roll/pitch/yaw) | ATTITUDE | Streamed |
| IMU (accel + gyro) | HIGHRES_IMU | Streamed |
| Position + velocity (NED) | ODOMETRY | Streamed |
| Connection heartbeat | HEARTBEAT | Streamed |
| Time sync | TIMESYNC | Streamed |
| Forward-facing camera | Vision stream | TBD (separate spec) |

What you send back

| Command | Message | Rate |
|---|---|---|
| Attitude (roll/pitch/yaw/thrust) | SET_ATTITUDE_TARGET | 50–120 Hz |
| Velocity (NED or body frame) | SET_POSITION_TARGET_LOCAL_NED | 50–120 Hz |
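One practical detail of the attitude path: SET_ATTITUDE_TARGET carries attitude as a quaternion, not as Euler angles, so a bridge must convert the controller's roll/pitch/yaw output before sending. A minimal sketch of that conversion (the `attitude_target_fields` helper is hypothetical, not part of mavsdk_bridge.py):

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians) to a [w, x, y, z] quaternion.

    SET_ATTITUDE_TARGET expects attitude as a quaternion, so the
    bridge converts the controller's Euler output before sending."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return [
        cr * cp * cy + sr * sp * sy,  # w
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
    ]

def attitude_target_fields(roll, pitch, yaw, thrust):
    """Field dict for a SET_ATTITUDE_TARGET message (illustrative helper;
    the real bridge API may differ). Thrust is normalized to [0, 1]."""
    return {
        "q": euler_to_quaternion(roll, pitch, yaw),
        "thrust": max(0.0, min(1.0, thrust)),
    }
```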

Scoring

| Criterion | Weight | Notes |
|---|---|---|
| Gates passed | Primary | Must pass start, intermediate, and finish gates in sequence |
| Total time | Secondary | Faster is better. Max 8 minutes. |
| Human interaction | Disqualification | Any human input during a timed run = immediate DQ |

The submission portal details (URL, format, deadline) will be provided by Anduril/DCL. The spec does not define the upload mechanism. Watch theaigrandprix.com for announcements. What we CAN do now: validate and benchmark locally.

Step 1: Validate Your Code

Run the submission validator

python submit_check.py

This checks:

| Check | What it tests |
|---|---|
| Module imports | All 10 core modules + 2 optional (gate_segmentation, rl_controller) import correctly |
| Vision smoke test | Creates synthetic gate frame, runs detection + PnP, verifies distance estimate |
| Controller smoke test | Creates mock gate, verifies pursuit commands are generated |
| Camera adapter | Creates synthetic camera, verifies 640x480 frame output |
| Config validation | RaceConfig loads, command rate valid, 8-min timeout set |
| Code quality | No hardcoded paths, valid Python syntax, entry point exists |

Expected output: all checks pass (green). Warnings for uncalibrated hover_thrust and default gate dimensions are OK.
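To illustrate the kind of check the validator runs, here is a sketch of the code-quality pass: parse the source for syntax errors and flag hardcoded absolute paths. This is an illustrative stand-in, not the real submit_check.py logic, and the path heuristic is an assumption:

```python
import ast
import re

def code_quality_check(source: str) -> list:
    """Flag the issues the validator looks for: invalid Python syntax and
    hardcoded absolute paths. (Sketch only; heuristics are illustrative.)"""
    problems = []
    try:
        ast.parse(source)  # valid Python syntax?
    except SyntaxError as exc:
        problems.append(f"syntax error: {exc}")
    # Naive hardcoded-path heuristic: absolute home paths in string literals.
    if re.search(r'["\'](/home/|/Users/)', source):
        problems.append("hardcoded absolute path")
    return problems
```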

Run the standalone race test

python test_race_standalone.py

Full race with synthetic camera + 6DOF physics. Should complete 22 gates (2 laps x 11 gates) in under 480s.

Step 2: Benchmark Your Models

The benchmark script tests each detector mode against the same simulated course and compares performance.

# Full benchmark (2 laps, ~2 min)
python benchmark_models.py

# Quick benchmark (1 lap, ~1 min)
python benchmark_models.py --quick

What it measures

| Metric | What it means | Good value |
|---|---|---|
| Gates passed | How many gates the drone flew through | 22/22 (all) |
| Total time | How fast the course was completed | <60s (competitive) |
| Avg gate time | Average time between gate passages | <3s |
| Vision latency | Per-frame inference time | <5ms (U-Net), <12ms (YOLO) |
| Detection rate | % of frames where a gate was found | >80% |
| Vision FPS | How many frames per second the detector processes | >100 (target 120Hz) |
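The latency, FPS, and detection-rate metrics above can be gathered with a simple timing harness. A minimal sketch, assuming any callable detector that returns a detection or None (not the actual benchmark_models.py internals):

```python
import time

def benchmark_detector(detect, frames):
    """Time a detector over a batch of frames.

    Returns average per-frame latency (ms), derived FPS, and the
    fraction of frames where a detection was returned."""
    latencies = []
    hits = 0
    for frame in frames:
        t0 = time.perf_counter()
        result = detect(frame)
        latencies.append(time.perf_counter() - t0)
        hits += result is not None
    avg_s = sum(latencies) / len(latencies)
    return {
        "avg_latency_ms": avg_s * 1e3,
        "fps": 1.0 / avg_s if avg_s > 0 else float("inf"),
        "detection_rate": hits / len(frames),
    }
```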

Example output

  Mode       Gates     Time   Avg Gate   Vision  Det %    FPS
  --------------------------------------------------------
  color         22    45.2s      2.05s     0.3ms   92%  3333
  unet          22    42.8s      1.95s     4.8ms   96%   208
  yolo          22    44.1s      2.01s    11.2ms   88%    89

  RECOMMENDED: unet
  Set vision.mode: "unet" in race_config.py

The benchmark recommends the best mode based on gates passed (primary) and total time (tiebreaker). It saves results to benchmark_results.json for tracking across runs.
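That ranking rule reduces to a single sort key: maximize gates passed, then minimize total time. A sketch of the selection logic (result-dict keys are illustrative, not the benchmark's actual schema):

```python
def recommend_mode(results):
    """Pick the best detector mode: most gates passed first,
    fastest total time as the tiebreaker."""
    best = max(results, key=lambda r: (r["gates"], -r["time_s"]))
    return best["mode"]
```

Applied to the example output above, all three modes tie on gates (22), so the tiebreaker selects unet at 42.8s.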

Model comparison guide

| Detector | Best for | Corner accuracy | Speed | Training needed |
|---|---|---|---|---|
| U-Net + RANSAC | Best PnP depth (sub-pixel corners) | Excellent | ~5ms GPU | Yes (100 epochs) |
| YOLO | Complex backgrounds (VQ2) | Poor (bbox only) | ~12ms GPU | Yes (150 epochs) |
| Color (HSV) | Highlighted gates (VQ1) | Good | ~0.5ms | No |
| RL Policy | Learned racing (replaces controller) | N/A (end-to-end) | ~0.2ms | Yes (2M+ steps) |

Recommendation: For Round 1 (VQ1, highlighted gates), start with Color mode — it works out of the box, needs no training, and is extremely fast. If you've trained models, U-Net gives the best PnP accuracy, which translates to better distance estimates and smoother approaches. Use the benchmark to confirm which is fastest on your hardware.
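The reason color mode needs no training is that it reduces to a per-pixel HSV threshold. A sketch of that test using only the standard library (the hue band here assumes blue-ish highlighted gates and is purely illustrative; the real pipeline's thresholds live in its own config):

```python
import colorsys

def is_gate_pixel(r, g, b, hue_lo=0.55, hue_hi=0.75, s_min=0.5, v_min=0.3):
    """HSV threshold test for a 'highlighted gate' pixel.

    The hue band [0.55, 0.75] (blue-ish) and the saturation/value floors
    are illustrative assumptions, not the pipeline's real tuning."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return hue_lo <= h <= hue_hi and s >= s_min and v >= v_min
```

In the real detector the mask produced by this test is cleaned up with morphology and contour fitting before PnP, but the speed advantage comes from this step being pure arithmetic per pixel.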

Step 3: Package for Submission

Run the packager

python submit_check.py package

Creates submission.zip containing all required files. Only runs if validation passes (0 failures).
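The packaging step amounts to verifying the required files exist, then zipping them. A sketch of that behavior with an abridged file list (illustrative; the real packager bundles the full table below and only runs after validation):

```python
import pathlib
import zipfile

# Abridged subset of the required modules, for illustration.
REQUIRED = [
    "race_pipeline.py", "vision_pipeline.py", "mavsdk_bridge.py",
    "race_config.py", "camera_adapter.py",
]

def package(src_dir, out_zip="submission.zip"):
    """Zip the required modules, failing loudly if any are missing."""
    src = pathlib.Path(src_dir)
    missing = [f for f in REQUIRED if not (src / f).exists()]
    if missing:
        raise FileNotFoundError(f"cannot package, missing: {missing}")
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in REQUIRED:
            zf.write(src / f, arcname=f)
    return out_zip
```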

Verify the package contents

unzip -l submission.zip

Should contain:

| File | Required | Purpose |
|---|---|---|
| race_pipeline.py | Yes | Main orchestrator + entry point |
| vision_pipeline.py | Yes | Gate detection + PnP |
| mavsdk_bridge.py | Yes | MAVLink communication |
| race_config.py | Yes | Configuration |
| camera_adapter.py | Yes | Camera sources |
| gate_segmentation.py | If mode=unet | U-Net model + RANSAC |
| rl_controller.py | If using RL | Neural controller |
| drone_mpc_foundation.py | Yes | Schemas + MPC |
| race_logger.py | Yes | JSONL logging |
| sim_drone.py | Yes | Physics (standalone mode) |
| dashboard_server.py | Yes | Web dashboard |
| fpv_renderer.py | Yes | FPV rendering |

Add model weights (if trained)

# U-Net weights
cp gate_seg_best.pt submission/
cp gate_seg.onnx submission/

# YOLO weights
cp yolo_runs/gates/weights/best.pt submission/

# RL policy
cp policy.onnx submission/

# Re-zip with weights
cd submission && zip -r ../submission.zip . && cd ..

Final config check

Open race_config.py and verify these settings match the competition environment:

| Setting | Value | Why |
|---|---|---|
| vision.mode | Your best mode (benchmark result) | Best detector for this course |
| connection.sim_url | "udpin://0.0.0.0:14540" | Standard MAVLink port |
| connection.camera_source | Match DCL spec (TBD) | DCL camera interface |
| race.max_time_s | 480.0 | 8 minute limit per spec |
| control.hover_thrust | CALIBRATED | Must match simulator physics |
| gate.width / height | Match actual gates | PnP accuracy depends on this |
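These checks are easy to automate. A sketch of a pre-submission config sanity check (the flat dotted key names and the command-rate key are illustrative; the real RaceConfig layout may differ):

```python
def check_config(cfg: dict) -> list:
    """Sanity-check the settings from the table above.

    Key names are illustrative stand-ins for the real RaceConfig fields."""
    issues = []
    if cfg.get("race.max_time_s") != 480.0:
        issues.append("race.max_time_s must be 480.0 (8-minute limit)")
    if cfg.get("control.hover_thrust") == 0.50:
        issues.append("hover_thrust still at default 0.50 - calibrate it")
    if not 50 <= cfg.get("control.command_rate_hz", 0) <= 120:
        issues.append("command rate outside 50-120 Hz")
    return issues
```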

Submission Checklist

| # | Check | Command | Expected |
|---|---|---|---|
| 1 | All modules import | python submit_check.py | 0 failures |
| 2 | Vision detects gates | python submit_check.py | Detection + PnP pass |
| 3 | Controller responds | python submit_check.py | Pursuit + seek commands |
| 4 | Standalone race completes | python test_race_standalone.py | 22 gates in <480s |
| 5 | Best model identified | python benchmark_models.py | Recommendation printed |
| 6 | Package created | python submit_check.py package | submission.zip exists |
| 7 | hover_thrust calibrated | Manual check | Not 0.50 (default) |
| 8 | No human interaction in code | Manual review | No input() calls |

When all 8 checks pass, your submission is ready. Upload submission.zip to the competition portal when it opens (watch theaigrandprix.com).
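Check #8 (no input() calls) can be partly automated with an AST scan instead of eyeballing the source. A sketch, assuming a hypothetical helper rather than the official review process:

```python
import ast

def find_human_input_calls(source: str) -> list:
    """Return the line numbers of input() calls in the given source.

    Supports the manual no-human-interaction review (check #8);
    this is an illustrative helper, not the official check."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "input"
    ]
```

This only catches direct input() calls; indirect human interaction (e.g. reading stdin or a GUI) still needs manual review.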

Back to Documentation Hub