Get your DIY practice drone from zero to autonomous flight
Your code runs in sim first, but sim-to-real transfer is the hardest part. A practice drone lets you close that gap on real hardware.
Your DIY build matches the Archer's specs: 5" frame, ~1kg, 12MP wide-angle camera, Jetson compute, IMU-only (no GPS), vision-based navigation.
| Component | Recommended | Price |
|---|---|---|
| Frame | Holybro QAV250 kit (frame+motors+ESCs) or TBS Source One V5 | $25-200 |
| Flight controller | Pixhawk 6C Mini (PX4 firmware, MAVSDK offboard) | $90 |
| Companion computer | Jetson Orin Nano Super ($249) or Orin NX ($500) | $249-500 |
| Camera | RPi Camera Module 3 Wide (12MP, 120° FOV) — exact Archer match | $35 |
| Battery | 4S 1300-1550mAh LiPo (XT60) | $20-25 |
| Power | PM02 V3 (→Pixhawk) + 5V 5A UBEC (→Jetson) | $30 |
| RC safety | RadioMaster Pocket + ELRS receiver (manual override + kill switch) | $75 |
| Data link | USB-to-UART adapter (Pixhawk TELEM2 ↔ Jetson) | $8 |
| Storage | 128GB NVMe SSD (M.2 for Jetson) | $20 |
| **Total** | | **$650-900** |
This section gets you from a fresh Jetson Orin to a working autonomous drone pipeline. Everything runs on the Jetson — no cloud required.
On a host Ubuntu PC (not the Jetson itself):
```bash
# Install NVIDIA SDK Manager on the host PC (download from developer.nvidia.com)
# Connect the Jetson via USB-C while holding the recovery button
# In SDK Manager: select JetPack 6.2+, flash to the NVMe SSD
# Boot the Jetson, complete Ubuntu setup, connect to WiFi
```
After boot, enable SSH for headless access:
```bash
sudo systemctl enable ssh && sudo systemctl start ssh
```
```bash
# Update everything
sudo apt update && sudo apt upgrade -y

# Core dev tools
sudo apt install -y python3-pip python3-venv python3-dev git cmake \
    build-essential pkg-config libopencv-dev python3-opencv

# GStreamer (for the camera pipeline)
sudo apt install -y gstreamer1.0-tools gstreamer1.0-plugins-base \
    gstreamer1.0-plugins-good gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly libgstreamer1.0-dev

# Serial port access (for the Pixhawk UART link)
sudo usermod -aG dialout $USER

# Reboot for the group change to take effect
sudo reboot
```
MAVSDK is the official PX4 SDK for sending MAVLink commands. This is the same interface the sim uses.
```bash
# Create a virtual environment for your project
python3 -m venv ~/aigp
source ~/aigp/bin/activate

# Install MAVSDK
pip install mavsdk aioconsole numpy

# Also install pymavlink + MAVProxy for low-level debugging
pip install pymavlink mavproxy
```
TensorRT is pre-installed with JetPack. Add PyTorch for training/converting models:
```bash
# PyTorch for Jetson (JetPack 6.x ships with CUDA 12)
# Check NVIDIA's PyTorch for Jetson page for the latest wheel URL:
#   https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048
# NOTE: a plain `pip install torch` may pull a CPU-only wheel on aarch64 —
# prefer the NVIDIA-built wheel from the page above, matched to your JetPack:
pip install torch torchvision

# Verify CUDA works
python3 -c "import torch; print(torch.cuda.is_available())"
# Should print: True

# TensorRT Python bindings (usually pre-installed with JetPack)
pip install tensorrt
```
Plug RPi Camera Module 3 Wide into the Jetson's CSI-0 port (the one closest to the power jack). Lock the FPC ribbon cable tab.
```bash
# Quick test — should open a live video window
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! \
    'video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1' ! \
    nvvidconv ! 'video/x-raw,width=640,height=480' ! \
    nvvidconv ! nveglglessink

# If no display is attached (headless), save a test frame instead:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! \
    'video/x-raw(memory:NVMM),width=1920,height=1080' ! \
    nvjpegenc ! filesink location=test.jpg
```
OpenCV capture (use this in your Python code):
```python
import cv2

gst = ("nvarguscamerasrc sensor-id=0 ! "
       "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
       "nvvidconv ! video/x-raw,format=BGRx,width=640,height=480 ! "
       "videoconvert ! video/x-raw,format=BGR ! appsink drop=1")

cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)
ret, frame = cap.read()
if ret:
    print(f"Frame shape: {frame.shape}")  # (480, 640, 3)
else:
    print("No frame — check the ribbon cable and sensor-id")
cap.release()
```
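If you reuse that pipeline across scripts, a small builder keeps the resolution and frame rate in one place. This is a sketch of my own (`csi_pipeline` is not a library function); it is pure string formatting, so it can be unit-tested off the drone:

```python
def csi_pipeline(sensor_id=0, capture=(1920, 1080), output=(640, 480), fps=30):
    """Build the nvarguscamerasrc → BGR appsink pipeline string used above."""
    cw, ch = capture
    ow, oh = output
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM),width={cw},height={ch},framerate={fps}/1 ! "
        f"nvvidconv ! video/x-raw,format=BGRx,width={ow},height={oh} ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
    )
```

Usage: `cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)`.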
Connect Pixhawk to a PC via USB-C. Open QGroundControl (qgroundcontrol.com).
```
# In QGroundControl:
# 1. Flash PX4 firmware
#    Vehicle Setup → Firmware → PX4 Pro Stable → Flash
# 2. Select airframe
#    Vehicle Setup → Airframe → Generic Quadrotor X → Apply + Reboot
# 3. Calibrate sensors
#    Vehicle Setup → Sensors → Compass, Gyro, Accel, Level Horizon
# 4. Configure TELEM2 for the companion computer
#    Parameters tab — set these three:
MAV_1_CONFIG  = TELEM2
MAV_1_MODE    = Onboard
SER_TEL2_BAUD = 921600
# 5. Reboot the Pixhawk
# 6. Configure RC + flight modes
#    Radio → Calibrate transmitter
#    Flight Modes → assign switches:
#    - Stabilized (manual safety)
#    - Offboard (Jetson takes control)
#    - Kill Switch (emergency motor cutoff — CRITICAL)
```
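As a sanity check after step 4, you can diff the three TELEM2 parameters against what this guide expects. A pure-Python sketch (names are my own); pass it a dict of values read back via QGroundControl or pymavlink:

```python
# Expected TELEM2 settings from the steps above; values are compared as
# strings so enum names and plain numbers can both be passed in.
EXPECTED = {
    "MAV_1_CONFIG": "TELEM2",
    "MAV_1_MODE": "Onboard",
    "SER_TEL2_BAUD": "921600",
}

def mismatched_params(actual):
    """Return the parameter names whose value differs from EXPECTED."""
    return sorted(k for k, v in EXPECTED.items() if str(actual.get(k)) != v)
```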
Plug USB-UART adapter into Jetson USB port. Wire adapter TX→Pixhawk TELEM2 RX, adapter RX→Pixhawk TELEM2 TX, adapter GND→Pixhawk GND. Do NOT connect VCC.
```bash
# Verify the serial device exists
ls /dev/ttyUSB0

# Quick MAVProxy test (should show heartbeat messages)
source ~/aigp/bin/activate
mavproxy.py --master=/dev/ttyUSB0 --baudrate=921600
# You should see HEARTBEAT messages scrolling.
# Type "status" to see telemetry; Ctrl+C to exit.
```
MAVSDK connection test:
```python
#!/usr/bin/env python3
"""test_connection.py — verify the Jetson↔Pixhawk MAVLink link."""
import asyncio
from mavsdk import System

async def main():
    drone = System()
    await drone.connect(system_address="serial:///dev/ttyUSB0:921600")
    print("Waiting for drone...")
    async for state in drone.core.connection_state():
        if state.is_connected:
            print("✓ Connected to Pixhawk!")
            break
    print("Streaming attitude...")
    async for att in drone.telemetry.attitude_euler():
        print(f"  Roll:{att.roll_deg:6.1f}°"
              f"  Pitch:{att.pitch_deg:6.1f}°"
              f"  Yaw:{att.yaw_deg:6.1f}°")

asyncio.run(main())
```
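The same script can run against the simulator or the real link; a tiny helper keeps that switch in one place. A sketch under stated assumptions: `AIGP_SIM` is a hypothetical environment variable I made up, `udp://:14540` is PX4 SITL's default MAVSDK endpoint, and the serial path/baud are the ones used throughout this guide:

```python
import os

def connection_url() -> str:
    """Pick the MAVSDK connection string for sim vs. hardware."""
    # AIGP_SIM is a made-up env var for this sketch, not a PX4/MAVSDK setting.
    if os.environ.get("AIGP_SIM"):
        return "udp://:14540"                    # PX4 SITL default
    return "serial:///dev/ttyUSB0:921600"        # Jetson↔Pixhawk UART
```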
Note the connection string: `serial:///dev/ttyUSB0:921600` instead of the simulator's `udp://:14540`.

This is the minimal vision → control loop. Replace the gate detector with your trained model.
```python
#!/usr/bin/env python3
"""autonomous_flight.py — minimal vision+control loop"""
import asyncio
import cv2
import numpy as np
from mavsdk import System
from mavsdk.offboard import VelocityBodyYawspeed, OffboardError

# ── Camera setup ──
gst = ("nvarguscamerasrc sensor-id=0 ! "
       "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
       "nvvidconv ! video/x-raw,format=BGRx,width=640,height=480 ! "
       "videoconvert ! video/x-raw,format=BGR ! appsink drop=1")
cap = cv2.VideoCapture(gst, cv2.CAP_GSTREAMER)

# ── Gate detection (replace with your neural net) ──
def detect_gate(frame):
    """Returns (error_x, error_y, area) or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 100, 100), (15, 255, 255))  # tune for gate color
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if cnts:
        c = max(cnts, key=cv2.contourArea)
        if cv2.contourArea(c) > 500:
            M = cv2.moments(c)
            cx, cy = int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
            err_x = (cx - 320) / 320  # normalized [-1, 1]
            err_y = (cy - 240) / 240
            return err_x, err_y, cv2.contourArea(c)
    return None

# ── Control loop ──
async def main():
    drone = System()
    await drone.connect(system_address="serial:///dev/ttyUSB0:921600")
    async for s in drone.core.connection_state():
        if s.is_connected:
            break
    print("Connected. Waiting for position estimate...")
    async for h in drone.telemetry.health():
        if h.is_local_position_ok:
            break
    await drone.action.arm()
    # PX4 requires a setpoint stream before offboard can start
    await drone.offboard.set_velocity_body(VelocityBodyYawspeed(0, 0, 0, 0))
    try:
        await drone.offboard.start()
    except OffboardError as e:
        print(f"Offboard start failed: {e._result.result}")
        await drone.action.disarm()
        return
    print("Offboard started — flying autonomously")
    try:
        while True:
            ret, frame = cap.read()
            if not ret:
                continue
            det = detect_gate(frame)
            if det:
                err_x, err_y, area = det
                fwd = float(np.clip(3.0 - area / 10000, 0.5, 4.0))  # slow as gate grows
                yaw = float(err_x * 40.0)  # deg/s; gate right of center → clockwise yaw
                vrt = float(err_y * 2.0)   # m/s; body-frame down is positive
            else:
                fwd, yaw, vrt = 0.0, 15.0, 0.0  # search spin
            await drone.offboard.set_velocity_body(
                VelocityBodyYawspeed(fwd, 0.0, vrt, yaw))
            await asyncio.sleep(0.01)
    except KeyboardInterrupt:
        await drone.offboard.stop()
        await drone.action.land()
    finally:
        cap.release()

asyncio.run(main())
```
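The error→velocity mapping is worth pulling out into a pure function so it can be unit-tested off the drone. A sketch with the same gains as the loop above; the yaw sign assumes MAVSDK's positive-clockwise convention for `VelocityBodyYawspeed`, so verify it against your setup before flying:

```python
import numpy as np

def gate_to_velocity(err_x, err_y, area):
    """Map normalized gate error and apparent pixel area to body-frame commands.

    Returns (forward m/s, yaw deg/s, vertical m/s down-positive)."""
    fwd = float(np.clip(3.0 - area / 10000, 0.5, 4.0))  # slow down as the gate grows
    yaw = float(err_x * 40.0)   # gate right of center → clockwise yaw (check sign!)
    vrt = float(err_y * 2.0)    # gate below center → descend
    return fwd, yaw, vrt
```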
Two things change when you move from sim to the real drone: the MAVSDK connection string (`udp://:14540` in SITL becomes `serial:///dev/ttyUSB0:921600` on hardware), and the camera: calibrate the real lens with `cv2.calibrateCamera()` before trusting pixel geometry.

The full onboard pipeline:

```
┌─────────────────────────────────────────────────────────────┐
│                   JETSON ORIN (your code)                   │
│                                                             │
│ Camera ──→ GStreamer ──→ OpenCV frame (640×480 @ 30-60fps)  │
│                      │                                      │
│             ┌────────▼────────┐                             │
│             │  GATE DETECTOR  │  ← Your neural net          │
│             │   (TensorRT)    │    (train in sim,           │
│             └────────┬────────┘     deploy here)            │
│                      │                                      │
│             ┌────────▼────────┐                             │
│             │     PLANNER     │  ← Pure pursuit /           │
│             │  (path+speed)   │    MPC / RL policy          │
│             └────────┬────────┘                             │
│                      │                                      │
│             ┌────────▼────────┐                             │
│             │     MAVSDK      │  ← Velocity or              │
│             │ (offboard cmd)  │    attitude cmds            │
│             └────────┬────────┘    @ 50-120 Hz              │
│                      │                                      │
└──────────────────────┼──────────────────────────────────────┘
                       │ UART 921600 baud
            ┌──────────▼──────────┐
            │  PIXHAWK 6C (PX4)   │
            │   Motor mixing      │
            │   Attitude control  │
            │   ESC PWM output    │
            └──────────┬──────────┘
                       │
            ┌──────────▼──────────┐
            │  4x Motors + Props  │
            │      → FLIGHT       │
            └─────────────────────┘
```
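The offboard command stream runs at 50-120 Hz, and PX4 will drop out of Offboard mode if setpoints stall. A stdlib-only sketch of a fixed-rate loop skeleton (my own helper, not a library API): sleep only for the remainder of each period so a slow step doesn't pile up drift:

```python
import time

def run_at_rate(step, hz=50.0, steps=100):
    """Call step() at a fixed rate, sleeping only the leftover of each period."""
    period = 1.0 / hz
    for _ in range(steps):
        t0 = time.monotonic()
        step()
        elapsed = time.monotonic() - t0
        time.sleep(max(0.0, period - elapsed))
```

In the asyncio loop above the same idea would replace the fixed `await asyncio.sleep(0.01)` with a sleep for the measured remainder of the period.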
Checkpoint: `test_connection.py` shows live attitude telemetry.