AI Grand Prix — Getting Started

From zero to racing in 30 minutes • No hardware needed • Just Python + this repo

This is the overall quickstart. It covers the full software stack: installing dependencies, running your first race in simulation, understanding the pipeline, training vision models, tuning the controller, and preparing for submission. No drone, no Jetson, no soldering. Just code.

Who is this for? Anyone who just cloned the repo and wants to get the AI flying as fast as possible.

What you're building

An autonomous drone AI that flies through gates using only a forward-facing camera. No GPS, no LiDAR, no depth sensor, no human control. Your software is the only variable.

The pipeline:

Camera Frame (640x480)
  → Gate Detection (U-Net / YOLO / Color)
  → Corner Extraction (RANSAC)
  → Depth Estimation (PnP)
  → State Machine (SEEK → APPROACH → TRANSIT)
  → Controller (attitude / velocity commands)
  → MAVLink → PX4 → Motors
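
The SEEK → APPROACH → TRANSIT state machine at the heart of the pipeline can be sketched roughly like this (the state names and the `transit_distance` default come from this doc; the exact trigger conditions in `race_pipeline.py` may differ — treat this as an illustrative sketch):

```python
from enum import Enum, auto

class RaceState(Enum):
    SEEK = auto()      # spin in place until a gate is detected
    APPROACH = auto()  # fly toward the detected gate
    TRANSIT = auto()   # committed pass-through of the gate

def next_state(state, gate_visible, gate_distance_m, transit_distance_m=1.5):
    """Illustrative transition logic; thresholds are assumptions."""
    if state is RaceState.SEEK:
        return RaceState.APPROACH if gate_visible else RaceState.SEEK
    if state is RaceState.APPROACH:
        if not gate_visible:
            return RaceState.SEEK  # lost the gate: spin and reacquire
        if gate_distance_m <= transit_distance_m:
            return RaceState.TRANSIT
        return RaceState.APPROACH
    # TRANSIT: after passing through, go back to seeking the next gate
    return RaceState.SEEK

state = RaceState.SEEK
state = next_state(state, gate_visible=True, gate_distance_m=8.0)  # APPROACH
state = next_state(state, gate_visible=True, gate_distance_m=1.2)  # TRANSIT
```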

Competition timeline

May–Jul 2026: Virtual qualifier — submit your Python AI to the simulator. No hardware.
Sep 2026: Physical qualifier, Southern California. Drone provided.
Nov 2026: Finals in Columbus, OH. $500K prize pool.

Spec: VADR-TS-001 — MAVLink v2 over UDP, 120 Hz physics, 50–120 Hz commands, 8-minute max, Python 3.14.2 runtime.

Step-by-step: your first race (~30 min total)

Clone the repo (1 min)

git clone https://github.com/blakefarabi/grandprix.git
cd grandprix

Install Python dependencies (5 min)

# Create virtual environment (Python 3.10+ required, 3.14.2 recommended)
python3 -m venv .venv
source .venv/bin/activate

# Core dependencies
pip install mavsdk opencv-python numpy scipy pyyaml

# Vision / ML (optional for first run — needed for training)
pip install torch ultralytics onnxruntime stable-baselines3 gymnasium

# MPC (optional — for trajectory optimization)
pip install casadi

Minimum install: mavsdk, opencv-python, numpy, scipy, and pyyaml are enough to run the standalone test with the synthetic camera and color detector. Add the ML packages when you're ready to train.

Run your first race, no simulator needed (2 min)

python test_race_standalone.py

This runs a complete race using the synthetic camera, the color detector, and the built-in 6DOF physics sim (sim_drone.py); no external simulator is required.

You should see gates being passed and timing printed:

  GATE 1 passed! (4.82s, best: inf)
  GATE 2 passed! (3.14s, best: 3.14s)
  ...
  RACE COMPLETE
  Gates passed:  22
  Total time:    68.4s
If you get import errors: make sure your venv is activated and all core deps are installed. torch is only needed for U-Net/RL modes.

Open the web dashboard (1 min)

While the race is running, open http://localhost:8090 in your browser to follow the race live.

Understand the file structure (5 min)

| File | What it does | You'll modify |
| --- | --- | --- |
| race_pipeline.py | Main orchestrator: state machine + race loop | Rarely |
| race_config.py | All tunable parameters (gains, thresholds, timing) | Often |
| vision_pipeline.py | 3 detector modes + PnP depth estimation | Sometimes |
| gate_segmentation.py | U-Net model + RANSAC corners + trainer | For training |
| mavsdk_bridge.py | MAVLink communication layer | Rarely |
| trajectory_optimizer.py | Quintic polynomial path planning | Advanced |
| rl_controller.py | Gym environment + RL inference | Advanced |
| camera_adapter.py | Camera sources (synthetic, video, Gazebo) | Rarely |
| sim_drone.py | 6DOF physics for standalone mode | Never |
| test_race_standalone.py | Full race without simulator | For testing |

Tune the controller (10 min)

Edit race_config.py or create a YAML override file. The most impactful parameters:

| Parameter | Default | What to try |
| --- | --- | --- |
| hover_thrust | 0.50 | Calibrate first. If the drone descends, raise to 0.55; if it climbs, lower to 0.45. |
| kp_yaw | 50.0 | Lower to 35 if oscillating; raise to 65 if sluggish. |
| cruise_pitch | -25.0 | Less negative = slower but safer. -15 is conservative. |
| seek_yaw_rate | 180.0 | If vision misses gates during the spin, lower to 90. |
| transit_distance | 1.5 | If gates aren't registering, raise to 2.5. |

# Run with a custom YAML config
python race_pipeline.py my_config.yaml attitude

See Tuning & Configuration Reference for all 30+ parameters with safe ranges.
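
Conceptually, a YAML override just layers a few keys on top of the defaults. The merge below is a generic sketch (the key names come from the table above; `race_config.py` defines the real mechanism, which may differ):

```python
# Defaults mirroring the tuning table above (illustrative subset)
DEFAULTS = {
    "hover_thrust": 0.50,
    "kp_yaw": 50.0,
    "cruise_pitch": -25.0,
    "seek_yaw_rate": 180.0,
    "transit_distance": 1.5,
}

def load_config(overrides=None):
    """Start from defaults, apply only the keys the user overrides.

    In practice `overrides` would come from yaml.safe_load() on my_config.yaml.
    """
    cfg = dict(DEFAULTS)
    for key, value in (overrides or {}).items():
        if key not in cfg:
            raise KeyError(f"unknown parameter: {key}")
        cfg[key] = value
    return cfg

# Equivalent of a my_config.yaml containing just:
#   hover_thrust: 0.55
#   kp_yaw: 35.0
cfg = load_config({"hover_thrust": 0.55, "kp_yaw": 35.0})
```

Rejecting unknown keys (rather than silently ignoring them) catches typos in override files early, which matters when a mistyped gain would otherwise fall back to the default mid-race.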

Train a vision model (time varies)

Option A: U-Net (recommended)

# Train gate segmentation
python gate_segmentation.py train \
  --data dataset_gates_seg

# Export to ONNX
python gate_segmentation.py export \
  --weights best.pt

Set vision.mode: "unet" in config.
Requires: training images + binary masks.

Option B: YOLO

# Auto-label from simulator
python yolo-auto-label.py

# Train YOLOv8
python yolo-train.py train

# Export TensorRT
python yolo-train.py export

Set vision.mode: "yolo" in config.
Requires: simulator frames for auto-labeling.

For the virtual qualifier (VQ1): Gates are highlighted with distinctive colors. The color detector (vision.mode: "color") works out of the box. U-Net/YOLO give better corner accuracy for faster PnP depth.
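
vision_pipeline.py recovers depth with PnP from the extracted corners; the underlying geometry can be shown with a simpler pinhole-width approximation (the gate width and focal length below are assumed values for illustration, not the spec's):

```python
def gate_distance_m(pixel_width, gate_width_m=1.4, focal_px=600.0):
    """Pinhole approximation: distance = f * W / w.

    pixel_width: apparent gate width in the image (px)
    gate_width_m: real gate width in meters (assumed; check the spec)
    focal_px: camera focal length in pixels (assumed for a 640x480 frame)
    """
    if pixel_width <= 0:
        raise ValueError("gate not visible")
    return focal_px * gate_width_m / pixel_width

# A gate spanning 140 px of the 640 px frame is about 6 m away
print(gate_distance_m(140))  # -> 6.0
```

PnP on four corners gives the full 3D pose (including lateral offset and gate tilt), which is why the doc recommends U-Net/YOLO corners over a plain bounding-box width once you need accurate depth.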

Train an RL policy (advanced, 2–8 hours)

# Train PPO policy (privileged mode — ground truth gates)
python rl_train.py train --steps 5000000

# Export to ONNX for deployment
python rl_train.py export

The RL policy replaces the classical controller. It learns to fly faster by optimizing gate passage speed. See AI Integration Plan for the 3-phase training strategy.
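
rl_controller.py wraps the race in a Gym-style environment; the reset/step contract such an environment satisfies looks like this (a toy 1-D stand-in with an assumed observation and reward — the real observation space and reward shaping live in rl_train.py):

```python
class MiniRaceEnv:
    """Toy 1-D stand-in for the race env: fly toward a gate at x = 10 m."""

    def __init__(self, gate_x=10.0, dt=0.1):
        self.gate_x, self.dt = gate_x, dt
        self.x = 0.0

    def reset(self):
        self.x = 0.0
        return [self.gate_x - self.x]  # observation: distance to the gate

    def step(self, action):
        # action: forward speed command in m/s, clipped to [0, 5]
        speed = max(0.0, min(5.0, action))
        self.x += speed * self.dt
        dist = self.gate_x - self.x
        reward = speed * self.dt - 0.01  # progress minus a small time penalty
        done = dist <= 0.0               # gate passed
        return [dist], reward, done, {}

env = MiniRaceEnv()
obs = env.reset()
done, steps = False, 0
while not done:
    obs, reward, done, info = env.step(5.0)  # a trained policy chooses this
    steps += 1
```

The per-step time penalty is what pushes a PPO policy toward faster gate passage rather than merely reaching the gate.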

Validate & submit (5 min)

# Run all pre-submission checks
python submit_check.py

# Package for submission
python submit_check.py package

Creates submission.zip with all required files + model weights. See Submission Pipeline for the full checklist.

Three paths to winning

| Approach | Difficulty | Expected lap time | What to do |
| --- | --- | --- | --- |
| Classical controller (SAFE) | Easy | 60–90s | Tune race_config.py gains. Color detector. Proportional pursuit. Passes Round 1. |
| Trained vision + tuned gains (COMPETITIVE) | Medium | 30–45s | Train U-Net for sub-pixel corners. Aggressive gains. Late braking. Top 50%. |
| RL policy + trajectory optimizer (PODIUM) | Hard | <25s | PPO-trained policy with vision-in-loop. Racing line optimization. Top tier. |
Start with the classical controller. Get it passing gates reliably. Then layer on trained vision for better PnP accuracy. Then try RL if you have time. Never skip the baseline — it's your safety net.
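
The "proportional pursuit" baseline amounts to a few lines: steer so the gate center drifts toward the middle of the frame. The gain and clip values below echo the kp_yaw and seek_yaw_rate defaults from the tuning table; the exact control law in race_pipeline.py may differ:

```python
def yaw_rate_cmd(gate_cx, frame_width=640, kp_yaw=50.0, max_rate=180.0):
    """Yaw rate proportional to the gate's horizontal pixel error.

    gate_cx: detected gate center x in pixels; error normalized to [-1, 1].
    Returns a yaw rate command in deg/s, clipped to +/- max_rate.
    """
    error = (gate_cx - frame_width / 2) / (frame_width / 2)
    return max(-max_rate, min(max_rate, kp_yaw * error))

print(yaw_rate_cmd(480))  # gate right of center -> positive yaw, 25.0 deg/s
```

If the drone oscillates around the gate center, this is exactly where lowering kp_yaw (per the tuning table) damps it.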

Where to go next

| I want to... | Read this |
| --- | --- |
| Understand the full system architecture | System Architecture |
| Deep-dive into the state machine & control | Race Pipeline Deep Dive |
| Learn about gate detection & corner extraction | Vision & Detection Guide |
| Tune every parameter with safe ranges | Tuning & Configuration Reference |
| Debug an issue or fix a crash | Troubleshooting & FAQ |
| Plan training, submission, and competition prep | AI Integration Plan |
| Build a physical practice drone | Hardware Quick Start + DIY Build Guide |
| Pick an RC controller | RF Controller Picker |

Back to Documentation Hub