From zero to racing in 30 minutes • No hardware needed • Just Python + this repo
An autonomous drone AI that flies through gates using only a forward-facing camera. No GPS, no LiDAR, no depth sensor, no human control. Your software is the only variable.
The pipeline:
Camera Frame (640x480)
→ Gate Detection (U-Net / YOLO / Color)
→ Corner Extraction (RANSAC)
→ Depth Estimation (PnP)
→ State Machine (SEEK → APPROACH → TRANSIT)
→ Controller (attitude / velocity commands)
→ MAVLink → PX4 → Motors
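The SEEK → APPROACH → TRANSIT stage of the pipeline can be sketched as a small state machine. This is an illustrative skeleton only — the state names come from the diagram above, but the transition logic and the `transit_distance` default are placeholders, not the repo's actual implementation:

```python
from enum import Enum, auto

class RaceState(Enum):
    SEEK = auto()      # spin in place until a gate is detected
    APPROACH = auto()  # fly toward the detected gate
    TRANSIT = auto()   # close enough: commit and fly through

def next_state(state, gate_visible, distance_m, transit_distance=1.5):
    """One tick of the SEEK -> APPROACH -> TRANSIT machine.

    `transit_distance` mirrors the config parameter of the same name:
    inside this range the drone commits to flying through the gate.
    """
    if state is RaceState.SEEK:
        return RaceState.APPROACH if gate_visible else RaceState.SEEK
    if state is RaceState.APPROACH:
        if not gate_visible:
            return RaceState.SEEK            # lost the gate: spin again
        if distance_m <= transit_distance:
            return RaceState.TRANSIT
        return RaceState.APPROACH
    # TRANSIT: after passing the gate, start seeking the next one
    return RaceState.SEEK
```

The real loop in race_pipeline.py runs this alongside vision and control at every frame.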
| When | What |
|---|---|
| May–Jul 2026 | Virtual qualifier — submit Python AI to simulator. No hardware. |
| Sep 2026 | Physical qualifier — Southern California. Drone provided. |
| Nov 2026 | Finals in Columbus, OH. $500K prize pool. |
Spec: VADR-TS-001 — MAVLink v2 over UDP, 120 Hz physics, 50–120 Hz commands, 8-minute max, Python 3.14.2 runtime.
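The 50–120 Hz command rate in the spec implies a fixed-rate control loop. A minimal sketch of one way to hold a rate without drift (generic Python, not the repo's actual loop; `send_command` is a stand-in for whatever setpoint call you use):

```python
import time

def run_control_loop(send_command, rate_hz=60.0, duration_s=0.1):
    """Send commands at a fixed rate using absolute deadlines,
    so per-tick timing error does not accumulate over the race."""
    period = 1.0 / rate_hz
    deadline = time.monotonic()
    end = deadline + duration_s
    ticks = 0
    while deadline < end:
        send_command()           # e.g. attitude/velocity setpoint over MAVLink
        ticks += 1
        deadline += period       # next absolute deadline, not "now + period"
        sleep = deadline - time.monotonic()
        if sleep > 0:
            time.sleep(sleep)
    return ticks
```

Scheduling off absolute deadlines (rather than sleeping a fixed period after each send) keeps the average rate correct even when individual ticks run long.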
git clone https://github.com/blakefarabi/grandprix.git
cd grandprix
# Create virtual environment (Python 3.10+ required, 3.14.2 recommended)
python3 -m venv .venv
source .venv/bin/activate
# Core dependencies
pip install mavsdk opencv-python numpy scipy pyyaml
# Vision / ML (optional for first run — needed for training)
pip install torch ultralytics onnxruntime stable-baselines3 gymnasium
# MPC (optional — for trajectory optimization)
pip install casadi
The core set (mavsdk opencv-python numpy scipy pyyaml) is enough to run the standalone test with the synthetic camera and color detector. Add the ML packages when you're ready to train.

python test_race_standalone.py
This runs a complete race with:
- Built-in 6DOF physics (sim_drone.py), no external simulator needed
- A live dashboard at http://localhost:8090

You should see gates being passed and timing printed:
GATE 1 passed! (4.82s, best: inf)
GATE 2 passed! (3.14s, best: 3.14s)
...
RACE COMPLETE
Gates passed: 22
Total time: 68.4s
torch is only needed for U-Net/RL modes.

While the race is running, open http://localhost:8090 in your browser to watch the live dashboard.
| File | What it does | You'll modify |
|---|---|---|
| race_pipeline.py | Main orchestrator: state machine + race loop | Rarely |
| race_config.py | All tunable parameters (gains, thresholds, timing) | Often |
| vision_pipeline.py | 3 detector modes + PnP depth estimation | Sometimes |
| gate_segmentation.py | U-Net model + RANSAC corners + trainer | For training |
| mavsdk_bridge.py | MAVLink communication layer | Rarely |
| trajectory_optimizer.py | Quintic polynomial path planning | Advanced |
| rl_controller.py | Gym environment + RL inference | Advanced |
| camera_adapter.py | Camera sources (synthetic, video, Gazebo) | Rarely |
| sim_drone.py | 6DOF physics for standalone mode | Never |
| test_race_standalone.py | Full race without simulator | For testing |
Edit race_config.py or create a YAML override file. The most impactful parameters:
| Parameter | Default | What to try |
|---|---|---|
| hover_thrust | 0.50 | Calibrate first. If the drone descends, raise to 0.55. If it climbs, lower to 0.45. |
| kp_yaw | 50.0 | Lower to 35 if oscillating. Raise to 65 if sluggish. |
| cruise_pitch | -25.0 | Less negative = slower but safer. -15 is conservative. |
| seek_yaw_rate | 180.0 | If vision misses gates during spin, lower to 90. |
| transit_distance | 1.5 | If gates aren't registering, raise to 2.5. |
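For example, a conservative first-flight override might look like this (parameter names taken from the table above; the flat key layout is an assumption — check race_config.py for the exact schema):

```yaml
# my_config.yaml — conservative first-flight overrides
hover_thrust: 0.55      # drone was descending at the 0.50 default
kp_yaw: 35.0            # damp the yaw oscillation
cruise_pitch: -15.0     # slower, safer approach
seek_yaw_rate: 90.0     # give the detector time to lock on during the spin
transit_distance: 2.5   # register gate passes more reliably
```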
# Run with a custom YAML config
python race_pipeline.py my_config.yaml attitude
See Tuning & Configuration Reference for all 30+ parameters with safe ranges.
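To see how a gain like kp_yaw actually gets used, here is a minimal proportional-pursuit sketch: map the gate center's horizontal pixel error to a yaw-rate command while holding a constant forward pitch. Illustrative only — the repo's controller lives in race_pipeline.py and will differ; the function name and return convention here are made up:

```python
def pursuit_command(gate_cx, frame_width=640, kp_yaw=50.0, cruise_pitch=-25.0):
    """Map the gate's horizontal pixel position to attitude commands.

    Normalized error is in [-1, 1]: negative means the gate is left of
    center. Yaw rate (deg/s) turns toward the gate; pitch (deg) holds
    forward speed while pursuing.
    """
    error = (gate_cx - frame_width / 2) / (frame_width / 2)
    yaw_rate = kp_yaw * error       # deg/s, positive = yaw right
    return yaw_rate, cruise_pitch
```

This is why lowering kp_yaw tames oscillation: the same pixel error produces a smaller corrective yaw rate.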
# Train gate segmentation
python gate_segmentation.py train \
--data dataset_gates_seg
# Export to ONNX
python gate_segmentation.py export \
--weights best.pt
Set vision.mode: "unet" in config.
Requires: training images + binary masks.
# Auto-label from simulator
python yolo-auto-label.py
# Train YOLOv8
python yolo-train.py train
# Export TensorRT
python yolo-train.py export
Set vision.mode: "yolo" in config.
Requires: simulator frames for auto-labeling.
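Why corner accuracy matters for depth: monocular range comes from projective geometry. Full PnP (as in vision_pipeline.py) solves for the complete camera pose from the four corners, but the intuition is the pinhole relation Z = f·W/w. A back-of-the-envelope sketch — the gate width and focal length below are made-up numbers, not competition values:

```python
def depth_from_gate_width(pixel_width, gate_width_m=1.5, focal_px=554.0):
    """Pinhole approximation of gate distance: Z = f * W / w.

    pixel_width:  apparent gate width in pixels (from detected corners)
    gate_width_m: physical gate width (assumed; check the spec)
    focal_px:     focal length in pixels (assumed for a 640x480 camera)
    """
    return focal_px * gate_width_m / pixel_width
```

A one-pixel corner error on a distant (small) gate shifts this estimate far more than on a near one, which is why sub-pixel corners from U-Net/YOLO pay off.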
The color detector (vision.mode: "color") works out of the box. U-Net/YOLO give better corner accuracy for faster PnP depth.

# Train PPO policy (privileged mode — ground truth gates)
python rl_train.py train --steps 5000000
# Export to ONNX for deployment
python rl_train.py export
The RL policy replaces the classical controller. It learns to fly faster by optimizing gate passage speed. See AI Integration Plan for the 3-phase training strategy.
# Run all pre-submission checks
python submit_check.py
# Package for submission
python submit_check.py package
Creates submission.zip with all required files + model weights. See Submission Pipeline for the full checklist.
| Approach | Difficulty | Expected Lap Time | What to do |
|---|---|---|---|
| Classical controller (SAFE) | Easy | 60–90s | Tune race_config.py gains. Color detector. Proportional pursuit. Passes Round 1. |
| Trained vision + tuned gains (COMPETITIVE) | Medium | 30–45s | Train U-Net for sub-pixel corners. Aggressive gains. Late braking. Top 50%. |
| RL policy + trajectory optimizer (PODIUM) | Hard | <25s | PPO-trained policy with vision-in-loop. Racing line optimization. Top tier. |
| I want to... | Read this |
|---|---|
| Understand the full system architecture | System Architecture |
| Deep-dive into the state machine & control | Race Pipeline Deep Dive |
| Learn about gate detection & corner extraction | Vision & Detection Guide |
| Tune every parameter with safe ranges | Tuning & Configuration Reference |
| Debug an issue or fix a crash | Troubleshooting & FAQ |
| Plan training, submission, and competition prep | AI Integration Plan |
| Build a physical practice drone | Hardware Quick Start + DIY Build Guide |
| Pick an RC controller | RF Controller Picker |