World of ML v2.0 (Hack Club)

Phase 2: Local Model on Device

This path takes your trained model from your laptop to low-cost hardware. Follow the hardware setup, optimize your model artifacts, run inference locally on the device, and submit proof of a working deployment for review.

Hardware Setup Guide

  1. Flash Raspberry Pi OS Lite to SD card with Raspberry Pi Imager.
  2. Enable SSH by placing an empty `ssh` file on the boot partition.
  3. Create `wpa_supplicant.conf` on boot for Wi-Fi provisioning.
  4. Boot device and SSH in: `ssh pi@raspberrypi.local`.
  5. Install deps: `sudo apt update && sudo apt install -y python3-venv python3-pip`.
  6. Clone your project and create a dedicated virtual environment.
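For step 3, a minimal `wpa_supplicant.conf` looks like the sketch below. Replace the placeholder SSID and passphrase with your own, and set `country` to your regulatory domain. (On recent Raspberry Pi OS releases, Raspberry Pi Imager can also preconfigure Wi-Fi and SSH directly from its advanced options, which avoids this file entirely.)

```conf
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourPassphrase"
}
```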

Alternative TinyML path: ESP32-S3 is supported for lower-power deployments. See `DEVICE_DEPLOYMENT/setup_hardware.md` for firmware flow.

Wiring Diagram (ASCII)

Raspberry Pi Zero 2 W + I2S Mic (INMP441)

Pi Pin         Sensor Pin
------------------------------
3.3V (Pin 1) -> VDD
GND  (Pin 6) -> GND
GPIO18 (12)  -> SCK
GPIO19 (35)  -> WS
GPIO20 (38)  -> SD
GND          -> L/R (selects left channel)

Keep wire lengths short to reduce noise. Always power off before rewiring GPIO.
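Before the mic shows up as a capture device, I2S must be enabled in the Pi's boot config. The fragment below is a common setup for INMP441-class I2S MEMS mics; the `googlevoicehat-soundcard` overlay is an assumption that works on many OS releases, so check which overlay your release ships:

```conf
# /boot/config.txt
dtparam=i2s=on
dtoverlay=googlevoicehat-soundcard
```

After a reboot, `arecord -l` should list the new capture device.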

Model Optimization

  • Train model on laptop and export baseline checkpoint.
  • Run pruning pass (`ml-pipeline/common/prune.py`) if model is over target size.
  • Apply post-training quantization for TFLite/ONNX.
  • Benchmark latency + memory before and after optimization.
  • Pick fastest model that still beats minimum metric threshold.
Target: keep median inference under 250 ms on Pi Zero 2 W.
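The before/after benchmarking step can be sketched as a small harness. This is a minimal illustration, not the project's `benchmark.py`; `run_inference` stands in for whatever callable wraps your TFLite/ONNX model:

```python
import statistics
import time

def benchmark(run_inference, sample, runs=50, warmup=5):
    """Time repeated inference calls and report latency stats in ms."""
    for _ in range(warmup):          # warm caches before measuring
        run_inference(sample)
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        run_inference(sample)
        latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "median_ms": statistics.median(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * (runs - 1))],
        "runs": runs,
    }

# Example with a dummy model; on-device, pass your real inference callable.
report = benchmark(lambda x: sum(x), sample=[0.0] * 1024, runs=20)
print(report["median_ms"] < 250)  # compare against the 250 ms target
```

Run the same harness on the unoptimized and quantized models so the comparison uses identical inputs and run counts.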

Test Harness Commands

  • `python ml-pipeline/export/to_tflite.py --input models/best.pt --output models/best.tflite`
  • `python ml-pipeline/export/to_onnx.py --input models/best.pt --output models/best.onnx`
  • `python ml-pipeline/device-harness/benchmark.py --model models/best.tflite --runs 50`
  • `python ml-pipeline/device-harness/verify.py --model models/best.tflite --sample data/device/sample.json`

Verification output should report `latency_ms` and `peak_memory_mb` and confirm that predictions are deterministic across repeated runs on the same sample.
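The shape of a verification report can be checked in a few lines of Python. The `latency_ms` and `peak_memory_mb` fields come from the expected output above; the `prediction` field and the JSON layout are assumptions, since the exact format `verify.py` emits may differ:

```python
import json

# Field names: latency_ms / peak_memory_mb per the harness docs;
# "prediction" is an assumed field for the determinism check.
REQUIRED = {"latency_ms", "peak_memory_mb", "prediction"}

def check_report(raw: str) -> dict:
    """Parse a JSON verification report and enforce required fields."""
    report = json.loads(raw)
    missing = REQUIRED - report.keys()
    if missing:
        raise ValueError(f"report missing fields: {sorted(missing)}")
    if report["latency_ms"] < 0 or report["peak_memory_mb"] < 0:
        raise ValueError("latency and memory must be non-negative")
    return report

# Deterministic check: two runs on the same sample must agree.
run_a = check_report('{"latency_ms": 180.2, "peak_memory_mb": 41.0, "prediction": [0, 1]}')
run_b = check_report('{"latency_ms": 176.9, "peak_memory_mb": 41.0, "prediction": [0, 1]}')
assert run_a["prediction"] == run_b["prediction"]
```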

Completion Checklist

  • Model exported to ONNX and/or TFLite.
  • On-device inference command succeeds with sample input.
  • Benchmark report saved to `ml-pipeline/device-harness/reports/`.
  • Demo video or serial logs uploaded in final project submission.