Academic Year 2026–27

PREKSHA Projects: Build. Ship. Present.

42 detailed projects across 3 tracks — 2 per class, Class 6 through 12. Every student builds something real, every single year.

42
Total Projects
7
Class Levels
3
Tracks
8
Live Deployments
14
Coding Projects
14
IoT Projects
14
AI/ML Projects
8
Live Deployments
10
Team Projects
6
Grand Capstones
💻
Track 1 — Coding & Web Development
HTML → CSS → JavaScript → Python → Django → Node.js → React/MERN · Class 6–12
14 Projects · 7 Classes
Class 6
Coding / Web Dev · Class 6 · Beginner
School Digital Notice Board
HTML5 + CSS3 · Individual · 2 weeks
2 weeks · Individual · VS Code + Chrome · Static HTML/CSS

A fully responsive static website displaying school announcements, events, the timetable, and teacher info, built with HTML5 and CSS3 plus a single line of JavaScript for the dark/light theme toggle (CSS custom properties handle the theming itself). Students sketch wireframes on paper first, then code from scratch without any framework.

What you build
  • Landing page: hero banner, scrolling announcements ticker, and event cards
  • Teacher directory with profile cards and subject colour tags
  • Class timetable as a responsive HTML table with alternating row styles
  • Dark / Light theme toggle with CSS variables and one JS line
  • Photo gallery with hover zoom effect in pure CSS
HTML5 Semantics · CSS Grid · Flexbox · CSS Variables · Media Queries · CSS Transitions
Learning Outcomes
Structure a multi-page site using semantic HTML5 elements
Build responsive layouts without any CSS framework
Apply CSS custom properties for dynamic theme switching
Debug layout issues using Chrome DevTools
Coding / Web Dev · Class 6 · Beginner
Animated CSS Art Portfolio
HTML5 + CSS Animations · Individual · 2 weeks
2 weeks · Individual · VS Code + Browser · Pure CSS art

Students create a personal portfolio page showcasing three pieces of pure CSS art — geometric illustrations drawn entirely with div elements, borders, and transforms. No images, no canvas, no JavaScript. A sidebar lists CSS properties learned. This project deepens understanding of the box model, transforms, and keyframe animations in a creative, non-intimidating context.

What you build
  • Three CSS-only illustrations: a sunset scene, a robot face, and a flag of their choice
  • Each piece uses at least 5 different CSS properties not used in the others
  • Hover animations on each artwork: colour shifts, spins, scale bounces
  • Sticky sidebar listing every CSS property used across all three pieces
  • Responsive layout — all three pieces reflow on mobile without breaking
  • Page deployed to GitHub Pages (first GitHub account setup)
CSS Transforms · @keyframes · border-radius · CSS Grid · :hover · GitHub Pages
Learning Outcomes
Use CSS transforms (rotate, scale, translate) for visual effects
Write multi-step @keyframes animations with timing functions
Deploy a static site to GitHub Pages for the first time
Think spatially about the CSS box model through creative constraints
Class 7
Coding / Web Dev · Class 7 · Beginner
Interactive Quiz Platform
JavaScript DOM · Individual · 3 weeks
3 weeks · Individual · VS Code · localStorage

A 20-question multiple-choice quiz app on any school subject. Features a per-question countdown timer, live score bar, animated result screen with pass/fail fanfare, and a "review wrong answers" mode. Questions are stored in a JSON array the student authors themselves — making the subject personal.

What you build
  • Start screen with subject selector and difficulty dropdown
  • 15-second per-question countdown, colour shifting green → yellow → red
  • Live progress bar and cumulative score throughout quiz
  • Animated results page — confetti on perfect score
  • Wrong-answers review mode re-presents only missed questions
  • High-score hall of fame persisted via localStorage
Vanilla JS · DOM API · JSON · localStorage · CSS Animations · setInterval
Learning Outcomes
Dynamically update the DOM without page reloads
Drive application flow entirely through event listeners
Persist data across browser sessions with localStorage
Manage app state in a single plain JavaScript object
Coding / Web Dev · Class 7 · Beginner
Live Weather Dashboard (API Fetch)
JavaScript + OpenWeatherMap API · Individual · 2 weeks
2 weeks · Individual · OpenWeatherMap · Async/Await

Students build a weather app that fetches real-time data from the OpenWeatherMap API using the Fetch API and async/await. Search any city, see current conditions, a 5-day forecast in cards, and an animated background that changes with weather type (sunny, rainy, cloudy, snowy). First introduction to real-world APIs, JSON parsing, and asynchronous JavaScript patterns.

What you build
  • Search bar: type any city name → fetch current weather JSON → render card
  • Current weather card: temperature (°C / °F toggle), humidity, wind speed, icon
  • 5-day forecast strip: daily min/max with animated weather icon per day
  • Dynamic CSS background gradient that matches weather (orange for sunny, dark grey for storm)
  • Error handling: graceful "City not found" message with retry prompt
  • Geolocation fallback: load weather for user's current location on page open
Fetch API · async/await · JSON Parsing · REST API · try/catch · Geolocation API
Learning Outcomes
Consume a real-world REST API using async/await and Fetch
Parse and traverse nested JSON response objects
Handle network errors and bad user input gracefully with try/catch
Use the browser Geolocation API to get user coordinates
Class 8
Coding / Web Dev · Class 8 · Intermediate
Student Grade Manager (Python CLI)
Python OOP + CSV · Individual · 2 weeks
2 weeks · Individual · Python IDLE / Thonny · CSV persistence

A colour-coded CLI application that manages student records stored in a CSV file. Menu-driven: add, search, update, delete students. Auto-computes grade letter, percentage, and class rank. Combines file I/O, OOP, and exception handling in one practical tool.

What you build
  • ANSI colour-coded main menu loop with numbered options
  • Add student: name, roll number, marks in 5 subjects → auto grade and rank
  • Search and update any field by roll number with full validation
  • Delete record with yes/no confirmation to prevent accidents
  • Class report: full table sorted by rank, top-3 and bottom-3 highlighted
  • All data persisted to grades.csv between program runs
Python · CSV Module · OOP Classes · File I/O · Exception Handling · Sorting / Ranking
Learning Outcomes
Design a class with CRUD methods and proper encapsulation
Read and write structured data to CSV files safely
Handle FileNotFoundError and invalid-input exceptions gracefully
Sort records by computed field and display class rankings
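The grading and ranking logic at the heart of this project fits in a few lines. A minimal Python sketch, assuming five subjects out of 100 each; the letter cut-offs below are hypothetical, not the project's official scale:

```python
def percentage(marks):
    """Average of subject marks, each out of 100."""
    return sum(marks) / len(marks)

def grade_letter(pct):
    # Hypothetical cut-offs for illustration; adjust to the school's scale.
    for cutoff, letter in [(90, "A"), (75, "B"), (60, "C"), (40, "D")]:
        if pct >= cutoff:
            return letter
    return "F"

def ranked(students):
    """students: list of (name, [marks]).
    Returns (rank, name, percentage) tuples sorted best-first."""
    ordered = sorted(students, key=lambda s: percentage(s[1]), reverse=True)
    return [(i + 1, name, percentage(marks))
            for i, (name, marks) in enumerate(ordered)]
```

The same `ranked()` output drives the class report: slice the first and last three entries for the top-3/bottom-3 highlights.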
Coding / Web Dev · Class 8 · Intermediate
Snake Game in Python Turtle
Python Turtle + OOP · Individual · 2 weeks
2 weeks · Individual · Python Turtle / Pygame · OOP game loop

Students implement the classic Snake game using Python's Turtle module — fully object-oriented with separate classes for Snake, Food, and Scoreboard. The project cements OOP principles (encapsulation, composition) in a context students are already familiar with. Optional: port to Pygame for smoother rendering and add levels with increasing speed.

What you build
  • Snake class: segment list, move(), extend(), detect_collision() methods
  • Food class: random_spawn(), detect_eat(snake_head) using coordinate proximity
  • Scoreboard class: update_score(), high_score persistence using a txt file
  • Game loop with configurable speed — speed increments every 5 points scored
  • Wall collision and self-collision end-game detection
  • Optional Pygame version: smooth pixel movement, sound effects on eat/die
Python OOP · turtle Module · Encapsulation · Composition · Game Loop · File Persistence
Learning Outcomes
Design multiple cooperating classes with clear responsibilities
Implement a real-time game loop with state management
Persist high-score data to disk between sessions
Apply collision detection logic using coordinate comparison
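Both end-game checks reduce to coordinate comparisons. A minimal sketch, assuming a 600 × 600 turtle window and a 20-pixel grid step (both assumptions; adjust to your screen setup):

```python
def hit_wall(head, half_width=290):
    """True when the head crosses the screen edge.
    Assumes the window spans -half_width..+half_width on both axes."""
    x, y = head
    return abs(x) > half_width or abs(y) > half_width

def hit_self(head, body_segments, min_dist=10):
    """True when the head overlaps any body segment.
    With a 20 px grid step, anything closer than 10 px counts as overlap."""
    hx, hy = head
    return any(abs(hx - x) < min_dist and abs(hy - y) < min_dist
               for x, y in body_segments)
```

In the Snake class, `detect_collision()` would call both, passing the head turtle's position and the positions of all segments except the head.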
Class 9
Coding / Web Dev · Class 9 · Intermediate
PREKSHA Academy Blog CMS
Django · Pair Project · 4 weeks · Live on PythonAnywhere
4 weeks · Pair · PythonAnywhere · Live Deployed

A real CMS for the school blog. Teachers log in, create and edit posts with a rich-text editor, upload cover images, and toggle publish status. Students browse, search by tag, and comment. Deployed on PythonAnywhere — every pair gets a live public URL they can share with parents.

What you build
  • Django project: apps for accounts, blog, and comments
  • Custom user model with Teacher / Student role separation
  • Post model: title, rich body, cover image, tags, publish flag, created_at
  • Tag-based sidebar filter and full-text search bar
  • Comment system with teacher moderation queue
  • Bootstrap 5 responsive templates with PREKSHA dark branding
  • Static files and media correctly served on PythonAnywhere
Django MVT · ORM + Migrations · Custom Auth · Bootstrap 5 · Pillow · PythonAnywhere Deploy
Learning Outcomes
Understand Django's MVT architecture end to end
Implement role-based access control with login_required decorators
Handle image uploads securely with Pillow
Deploy a Django project live for the first time
Coding / Web Dev · Class 9 · Intermediate
PREKSHA Student Portal (Django REST)
Django REST Framework + React Frontend · Pair · 5 weeks
5 weeks · Pair · DRF + React · Deployed

Students build a functional student portal: a Django REST Framework API backend paired with a React frontend. Students can view their enrolled courses, submit assignments as file uploads, and see grades. Teachers have a separate dashboard to grade submissions and post announcements. Introduces the pattern of decoupled backend + frontend that powers modern web products.

What you build
  • DRF serializers for Student, Course, Assignment, Submission, Grade models
  • Token-based authentication (DRF TokenAuth) with role-based permission classes
  • File upload endpoint for assignment submissions (PDF/ZIP) with size validation
  • React frontend: student dashboard, course list, submission form, grade viewer
  • Teacher dashboard: pending submissions list, inline grading with feedback field
  • Swagger/OpenAPI auto-generated API docs using drf-spectacular
  • Both services deployed: DRF on Railway, React on Vercel
Django REST Framework · Serializers · Token Auth · React · File Upload · drf-spectacular · Axios
Learning Outcomes
Build a decoupled API with DRF and consume it from a React frontend
Implement permission classes for role-based access control
Handle file uploads with size and type validation
Auto-generate API documentation with drf-spectacular / Swagger UI
Class 10
Coding / Web Dev · Class 10 · Intermediate
RESTful Task Management API
Node.js + Express + MongoDB · Individual · 3 weeks · Deployed on Render
3 weeks · Individual · Render.com · JWT Auth

A production-style REST API for task management — like a simplified Trello backend. Supports user registration, JWT login, and full CRUD on tasks with deadlines and priority tags. Tested with an 18-case Postman collection and deployed live on Render with MongoDB Atlas.

What you build
  • Express server with route separation: /api/auth, /api/tasks, /api/users
  • Register + login with bcrypt hashing and JWT token issuance
  • Task CRUD: create with tags/priority/deadline, update, delete, complete toggle
  • Query filter by status, priority, and due-date range via query params
  • Helmet, CORS, and express-rate-limit security middleware stack
  • Postman collection with 18 test cases — all success and error paths covered
  • MongoDB Atlas free cluster + Render free web service deployment
Node.js · Express.js · MongoDB + Mongoose · JWT · bcrypt · Postman Testing · Render Deploy
Learning Outcomes
Design REST endpoints following HTTP method and status code conventions
Implement stateless JWT authentication from first principles
Write Mongoose schemas with validators and virtual fields
Test every endpoint systematically with a Postman collection
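The project builds its tokens with a Node library, but "JWT from first principles" is worth seeing spelled out once. A Python sketch of HS256 signing and verification, illustrative only — in the real API, use a vetted library rather than hand-rolled crypto:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    """Base64url without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = (b64url(json.dumps(header, separators=(",", ":")).encode())
                     + "."
                     + b64url(json.dumps(payload, separators=(",", ":")).encode()))
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

def verify_jwt(token: str, secret: str):
    """Returns the payload dict, or None if the signature does not match."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), signing_input.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig):
        return None
    payload_b64 = signing_input.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

The key point the project teaches: the token is stateless — the server trusts any payload whose HMAC it can reproduce, so the secret must never leave the server.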
Coding / Web Dev · Class 10 · Intermediate
Real-Time Class Chatroom (Socket.io)
Node.js + Socket.io + React · Individual · 3 weeks
3 weeks · Individual · WebSockets · Railway Deploy

A real-time multi-room chat application used by the actual PREKSHA class for project coordination. Node/Express backend with Socket.io handles room join/leave, message broadcast, typing indicators, and user presence. React frontend shows message history, active users sidebar, and emoji reactions. Room history persisted to MongoDB so messages survive page refresh.

What you build
  • Socket.io server: events for join-room, leave-room, send-message, typing, reaction
  • Room management: multiple named rooms (General, Track-1, Track-2, Track-3)
  • Typing indicator: "Ravi is typing..." disappears after 3 s of inactivity
  • Active users sidebar: live list of users in current room with join/leave toasts
  • Emoji reactions: click-to-react on any message, counts update for all users live
  • Message history: last 50 messages per room fetched from MongoDB on join
  • Deployed on Railway — classmates use it for real project communication
Socket.io · WebSockets · Node.js Events · React useEffect · MongoDB · Rooms / Namespaces
Learning Outcomes
Implement bidirectional real-time communication with Socket.io
Manage room-scoped events and user presence state on the server
Synchronise React UI state with server-pushed events via useEffect
Persist and hydrate chat history using MongoDB on connection
Class 11
Coding / Web Dev · Class 11 · Advanced
Full-Stack E-Commerce Platform
MERN Stack · Team of 2 · 6 weeks · Deployed on Vercel + Railway
6 weeks · Team of 2 · Vercel + Railway · Stripe Test Mode

A complete, live-deployed online store. React SPA + Node/Express API + MongoDB Atlas. Ships product catalog with search and filter, Redux cart, order management with email confirmation, and a minimal admin panel. Stripe test-mode checkout included. Both partners push feature branches and merge via pull requests.

What you build
  • React SPA: product list and detail, Redux cart, checkout, order history
  • Admin dashboard: add/edit products, view all orders, update delivery status
  • Node/Express API: products, orders, users, Stripe payment intent endpoint
  • JWT auth with admin / customer role guards on protected routes
  • Stripe payment intent + webhook → Nodemailer order confirmation email
  • Team workflow: GitHub feature branches, PR reviews, merge to main CI
  • Frontend on Vercel, API on Railway, DB on MongoDB Atlas
React 18 · Redux Toolkit · React Router v6 · Node.js · Express · MongoDB · Stripe API · Nodemailer
Learning Outcomes
Architect a full MERN app with separate frontend and backend deployments
Manage complex client state with Redux Toolkit slices
Integrate Stripe payment intent with webhook confirmation
Collaborate on a shared codebase using Git feature-branch workflow
Coding / Web Dev · Class 11 · Advanced
Developer Portfolio + Headless CMS
Next.js + Contentful CMS + Framer Motion · Individual · 4 weeks
4 weeks · Individual · Vercel + Contentful · ISR + SSG

Students build their personal developer portfolio using Next.js with static generation and Incremental Static Regeneration, pulling project content from a Contentful headless CMS. Polished Framer Motion page transitions and scroll animations. The portfolio showcases all their PREKSHA projects and is submitted as part of their college/internship applications from Class 11 onward.

What you build
  • Next.js 14 with SSG (getStaticProps) for home, about, and project pages
  • Contentful CMS: Project content model with title, description, tags, demo URL, GitHub URL, thumbnail
  • Dynamic project detail pages with ISR (revalidate: 60) — update without redeploy
  • Framer Motion: page transitions, stagger-in project cards, scroll-reveal sections
  • Dark / light mode with system preference detection and localStorage preference
  • Contact form using Resend API — sends email to student directly from the form
  • Core Web Vitals optimisation: Lighthouse score > 95 on all four metrics
Next.js SSG/ISR · Contentful CMS · Framer Motion · Resend API · Dark Mode · Core Web Vitals
Learning Outcomes
Choose between SSG, SSR, and ISR based on data freshness requirements
Pull content from a headless CMS using the Content Delivery API
Implement page transitions and scroll animations with Framer Motion
Optimise Core Web Vitals to achieve a Lighthouse score above 95
Class 12
Coding / Web Dev · Class 12 · Expert
SaaS Starter Kit — Next.js + TypeScript
Next.js · TypeScript · Prisma · NextAuth.js · Team of 2 · 8 weeks
8 weeks · Team of 2 · Vercel Edge · NextAuth.js

A production-ready multi-tenant SaaS starter. Next.js 14 App Router, TypeScript end-to-end, Prisma ORM, NextAuth.js (Google + GitHub OAuth), Stripe subscription gating, GitHub Actions CI/CD, and Playwright E2E tests. Presented as a real product pitch to faculty and invited industry mentors at year end.

What you build
  • Next.js 14 App Router: server components, server actions, parallel routes
  • Prisma schema: users, teams, workspaces, subscriptions, audit log
  • NextAuth.js: Google + GitHub OAuth and credentials provider
  • Stripe billing: subscription tiers, checkout session, webhook → DB sync
  • Settings dashboard: profile, team invite, billing portal, API key management
  • GitHub Actions pipeline: lint → type-check → test → preview deploy on each PR
  • Playwright E2E: auth flow, workspace creation, billing page, API key generation
Next.js 14 · TypeScript · Prisma ORM · NextAuth.js · Stripe · Playwright · GitHub Actions · Vercel
Learning Outcomes
Build a type-safe full-stack app with TypeScript end to end
Set up CI/CD with automated lint, test, and deploy on every PR
Design a multi-tenant data model with Prisma and run migrations
Deliver a live product pitch with technical and business substance
Coding / Web Dev · Class 12 · Expert
Microservices Platform with Docker + API Gateway
Docker · API Gateway · Microservices · Team of 3 · 8 weeks
8 weeks · Team of 3 · Docker + Railway · API Gateway

Students decompose a monolith into three microservices (Auth, Products, Orders) each in its own Docker container, communicating via REST. An Express API Gateway handles routing, JWT verification, and rate limiting. Docker Compose orchestrates the full stack locally. Each service has its own database schema. Demonstrates industry patterns: service isolation, container orchestration, and distributed tracing basics.

What you build
  • Auth Service: register, login, JWT issue, token refresh — Dockerized Node.js + PostgreSQL
  • Product Service: CRUD for products, stock management — Dockerized Node.js + MongoDB
  • Order Service: place order, order history, stock deduction via internal HTTP call
  • API Gateway: Express router, JWT middleware, per-route rate limiting, request logging
  • Docker Compose: all 4 containers + PostgreSQL + MongoDB with health checks
  • Distributed logging with Winston + correlation ID header threaded across services
  • GitHub Actions: build + push images to Docker Hub on each merge to main
Docker · Docker Compose · API Gateway · Microservices · PostgreSQL · Inter-service HTTP · Distributed Logging
Learning Outcomes
Decompose a monolith into isolated, independently deployable services
Containerize Node.js apps with Docker and orchestrate with Compose
Build an API gateway with JWT verification and rate limiting
Implement distributed request tracing with correlation IDs across services
Track 2 — IoT & Robotics
Electronics → Arduino → Sensors → RPi + ESP32 → MQTT → Computer Vision → Robotics · Class 6–12
14 Projects · 7 Classes
Class 6
IoT & Robotics · Class 6 · Beginner
5-LED Chasing Circuit
NE555 Timer + Breadboard · Individual · 1 week
1 week · Individual · Breadboard + kit · Tinkercad sim first

Students simulate a sequentially blinking LED chase circuit in Tinkercad, then build it physically on a breadboard: an NE555 timer IC acts as an oscillator clocking a CD4017 decade counter, which drives the 5 LEDs in sequence. Students hand-draw the schematic, calculate resistor values, and submit a one-page lab report explaining the NE555 clock signal.

What you build
  • Tinkercad simulation: adjust R and C values to change chase speed
  • Physical breadboard: NE555 + CD4017 + 5 LEDs + current-limiting resistors
  • Hand-drawn schematic with all component values labelled
  • Lab report: circuit description, speed calculation, troubleshooting log
Breadboard Wiring · NE555 Timer IC · CD4017 Counter · Tinkercad · Schematic Reading · Ohm's Law
Learning Outcomes
Read and draw basic electronic circuit schematics correctly
Calculate correct resistor values using Ohm's Law
Use Tinkercad to verify a design before physical construction
Troubleshoot a non-working breadboard circuit systematically
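Both calculations the lab report asks for fit in two standard formulas. A Python sketch using Ohm's law for the current-limiting resistors and the usual NE555 astable approximation; the component values in the test below are examples, not the kit's:

```python
def led_resistor(v_supply, v_forward, i_led):
    """Ohm's law: R = (Vs - Vf) / I.
    e.g. 5 V supply, 2 V red LED, 20 mA -> 150 ohms."""
    return (v_supply - v_forward) / i_led

def ne555_astable_freq(r1, r2, c):
    """Standard NE555 astable-mode approximation:
    f ~= 1.44 / ((R1 + 2*R2) * C), with R in ohms, C in farads, f in Hz."""
    return 1.44 / ((r1 + 2 * r2) * c)
```

Changing R or C in Tinkercad and re-running this formula lets students predict the chase speed before touching the breadboard.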
IoT & Robotics · Class 6 · Beginner
Clap-Activated LED Matrix
Sound Sensor + Arduino + LED Matrix · Individual · 1 week
1 week · Individual · Arduino Nano · Sound sensor

Students wire an analogue sound sensor to an Arduino Nano to detect claps, then use the output to cycle through three LED patterns on an 8×8 LED matrix. Double-clap toggles the entire matrix on/off. Triple-clap cycles display mode. This project introduces analogue-to-digital conversion, threshold comparators, and simple pattern storage as 2D arrays in PROGMEM.

What you build
  • Analogue sound sensor wired to A0 — threshold calibration via Serial plotter
  • Clap detection logic: peak within 100 ms window counts as one clap
  • Double-clap (two peaks within 500 ms) toggles matrix on/off
  • Triple-clap cycles through 3 stored patterns: heart, smiley, arrow
  • 8×8 LED matrix driven via MAX7219 SPI chip (LedControl library)
  • Patterns stored as byte arrays in PROGMEM to save SRAM
analogRead · ADC / Threshold · MAX7219 · SPI Protocol · LedControl.h · PROGMEM
Learning Outcomes
Understand how analogue sensors produce a voltage that Arduino reads via ADC
Calibrate a detection threshold using the Serial Plotter tool
Communicate with peripherals via SPI protocol using a library
Store constant data in PROGMEM to free up precious SRAM on Nano
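The clap-grouping rules above (one peak per 100 ms window, claps grouped within 500 ms) can be prototyped off-device before writing the Arduino version. A Python sketch over recorded peak timestamps — the function name and signature are illustrative, not from the project kit:

```python
def claps_in_last_burst(peak_times_ms, debounce_ms=100, burst_gap_ms=500):
    """Count distinct claps in the most recent burst.

    Peaks within debounce_ms of the last counted clap are treated as the
    same clap still ringing; a gap longer than burst_gap_ms starts a new
    burst (so only the latest burst's count is returned)."""
    claps = 0
    last = None
    for t in sorted(peak_times_ms):
        if last is not None and t - last <= debounce_ms:
            continue                    # same clap's ringing: ignore
        if last is None or t - last > burst_gap_ms:
            claps = 1                   # gap too long: start a new burst
        else:
            claps += 1                  # within the window: another clap
        last = t
    return claps
```

On the Arduino, the same logic runs incrementally inside `loop()`; a result of 2 toggles the matrix, 3 cycles the pattern.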
Class 7
IoT & Robotics · Class 7 · Beginner
Arduino Traffic Light System
Arduino + LEDs + Buzzer · Individual · 1 week
1 week · Individual · Arduino Uno · Buzzer tone()

Students code their first Arduino sketch: a proper traffic light cycling Red → Amber → Green → Amber with correct timing. A pedestrian push button interrupts the cycle to show a "walk" signal with a buzzer chirp. The key teaching moment is millis()-based non-blocking timing, replacing the bad habit of delay().

What you build
  • 3-LED traffic light with correct timing (Red 4 s, Amber 1 s, Green 4 s)
  • Pedestrian button triggers walk signal overriding the main cycle
  • Non-blocking timing using millis() instead of delay()
  • Buzzer chirp on pedestrian crossing using tone() with PWM
  • State machine diagram drawn on paper before writing any code
Arduino C++ · digitalWrite · digitalRead · millis() timing · tone() · State Machine
Learning Outcomes
Write non-blocking timing code with millis() instead of delay()
Use digital I/O pins for both input (button) and output (LED, buzzer)
Model system behaviour as a finite state machine before coding
Understand the Arduino event-loop execution model
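The millis() pattern translates into any language: record when the current phase began and advance only once enough time has elapsed, never blocking the loop. A Python sketch of the same non-blocking state machine, using the spec's Red 4 s / Amber 1 s / Green 4 s timings (times in ms; class and method names are illustrative):

```python
# Phase table mirrors the Red -> Amber -> Green -> Amber cycle.
PHASES = [("RED", 4000), ("AMBER", 1000), ("GREEN", 4000), ("AMBER", 1000)]

class TrafficLight:
    def __init__(self, now_ms=0):
        self.index = 0
        self.phase_start = now_ms   # analogue of storing millis() at entry

    def update(self, now_ms):
        """Call on every loop pass; advances the phase when its time is up
        and returns the currently lit colour."""
        _, duration = PHASES[self.index]
        if now_ms - self.phase_start >= duration:
            self.index = (self.index + 1) % len(PHASES)
            self.phase_start = now_ms
        return PHASES[self.index][0]
```

Because `update()` returns immediately, the same loop can also poll the pedestrian button — exactly why the project bans `delay()`.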
IoT & Robotics · Class 8 · Intermediate
Autonomous Line-Following Robot
Arduino + IR Sensors + L298N · Pair · 3 weeks
3 weeks · Pair · Arduino + L298N · IR sensor array

Students build a two-wheeled robot that autonomously follows a black line on white cardboard using an array of 5 IR sensors. Starts with simple if/else bang-bang control, then students improve it to a proportional controller (P-control) that reduces oscillation. Race day at the end: whose robot completes the PREKSHA track layout fastest?

What you build
  • 5-sensor IR array wired to Arduino digital pins — threshold calibration on track surface
  • Bang-bang controller v1: if leftmost sensor → turn hard right, etc.
  • Proportional controller v2: weighted average of sensor readings → smooth motor correction
  • L298N motor driver: separate speed and direction control per wheel
  • Cardboard track layout: straight, S-bend, 90° turn, T-junction
  • Race scoring: time trial over standard PREKSHA track, logged on whiteboard
IR Sensors · Bang-Bang Control · P-Control · L298N · Sensor Calibration · PWM Motor Speed
Learning Outcomes
Calibrate an IR sensor array for a specific surface and lighting condition
Understand why proportional control outperforms bang-bang switching
Tune a P-gain parameter empirically by observing robot behaviour
Implement independent speed control on two motors via PWM
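The v1-to-v2 jump is easiest to see in code: bang-bang reacts to a single sensor, while P-control turns the whole array into one signed error. A Python sketch of the v2 controller; the base speed and gain are placeholder values to tune empirically on the real robot:

```python
# Sensor positions across the array: centre = 0, rightmost = +2.
WEIGHTS = [-2, -1, 0, 1, 2]

def line_position(readings):
    """readings: 1 where a sensor sees the line, else 0.
    Returns the weighted-average line position, or None if the line is lost."""
    active = sum(readings)
    if active == 0:
        return None
    return sum(w * r for w, r in zip(WEIGHTS, readings)) / active

def motor_speeds(position, base=150, kp=60):
    """P-control: line to the right (positive position) -> speed up the
    left wheel and slow the right wheel to steer back onto the line.
    base and kp are hypothetical tuning values."""
    correction = kp * position
    return base + correction, base - correction   # (left PWM, right PWM)
```

Raising `kp` makes corrections sharper but reintroduces oscillation — tuning that trade-off by observation is the learning outcome above.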
Class 8
IoT & Robotics · Class 7–8 · Intermediate
Automated Smart Parking System
Arduino + HC-SR04 + Servo + I2C LCD · Pair · 3 weeks
3 weeks · Pair · Arduino Uno · I2C LCD 16×2

An Arduino parking lot model: two HC-SR04 ultrasonic sensors detect car entry and exit, a servo motor controls a barrier gate, and a 16×2 I2C LCD shows available slot count. Students build the physical model from cardboard with a toy car for the live demo. A manual override button lets an attendant bypass the sensors.

What you build
  • Entry sensor triggers gate open when object detected within 10 cm
  • Exit sensor decrements slot counter, opens gate on exit
  • Servo arm sweeps 0°–90° smoothly on trigger (Servo.h library)
  • LCD: "Slots: 3 / 5" real-time; switches to "FULL" at capacity
  • Manual override button bypasses sensor logic for attendant use
  • Cardboard model with LED indicators per parking slot
Arduino C++ · HC-SR04 · Servo.h · I2C Protocol · LiquidCrystal_I2C · Counter Logic
Learning Outcomes
Interface multiple sensors and actuators on one Arduino simultaneously
Use I2C protocol to drive an LCD display with only 2 wires
Implement a bounded counter with upper and lower guard checks
Build and demo a physical cardboard prototype to an audience
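The bounded counter behind the LCD readout is worth getting right before wiring anything: it must never go below zero or above capacity, even if a sensor double-triggers. A Python sketch (class name and the 5-slot capacity mirror the demo display, but are otherwise illustrative):

```python
class SlotCounter:
    """Car-park slot counter with upper and lower guard checks."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.free = capacity

    def car_enters(self):
        """Returns True if the gate may open; False keeps it closed on FULL."""
        if self.free == 0:
            return False
        self.free -= 1
        return True

    def car_exits(self):
        # Guard against over-counting exits (e.g. sensor double-trigger).
        if self.free < self.capacity:
            self.free += 1

    def display(self):
        return "FULL" if self.free == 0 else f"Slots: {self.free} / {self.capacity}"
```

On the Arduino the same guards wrap the two ultrasonic triggers, and `display()` becomes the string written to the I2C LCD.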
IoT & Robotics · Class 8 · Intermediate
Automatic Plant Watering System
Arduino + Soil Sensor + Water Pump · Individual · 2 weeks
2 weeks · Individual · Arduino + Relay · Real plant demo

Students build an automatic plant watering system using a capacitive soil moisture sensor (more accurate than resistive) and a 5V mini submersible pump controlled via a relay module. The system checks soil moisture every 10 minutes; if below threshold it waters for 3 seconds then checks again. An OLED shows real-time moisture percentage and last-watered timestamp. Demonstrated with a real plant on demo day.

What you build
  • Capacitive soil sensor calibrated with dry soil and water-saturated soil readings
  • Moisture percentage mapped from raw ADC value using calibration curve
  • Relay module controlling 5 V mini submersible pump via optocoupled driver
  • OLED display: moisture %, soil status (Dry/OK/Wet), last-watered time
  • RTC DS3231 module: accurate timestamps persisted across power cycles
  • Data logged to SD card: timestamp, moisture reading, pump-on duration
  • Configurable threshold and watering duration via Serial commands
Capacitive Soil Sensor · Relay Module · OLED + I2C · DS3231 RTC · SD Card Logging · Sensor Calibration
Learning Outcomes
Calibrate a sensor against known physical reference points
Control mains-level or high-current devices safely via relay + optocoupler
Log time-stamped sensor data to an SD card for later analysis
Design an autonomous feedback-control system for a real use case
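The calibration-curve mapping is a two-point linear interpolation, clamped to 0–100 %. A Python sketch; the dry/wet raw ADC values below are placeholders — substitute the readings taken from your own sensor in dry and saturated soil:

```python
def moisture_percent(raw, dry_raw=3200, wet_raw=1300):
    """Map a raw ADC reading onto 0-100 % using two calibration points.

    Capacitive soil sensors read HIGHER when dry, so the scale is
    inverted: dry_raw -> 0 %, wet_raw -> 100 %. Readings outside the
    calibrated range are clamped."""
    pct = (dry_raw - raw) / (dry_raw - wet_raw) * 100
    return max(0.0, min(100.0, pct))
```

The watering rule then compares this percentage against the configurable threshold; the same formula, ported to Arduino C++, feeds both the OLED readout and the SD-card log.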
Class 9
IoT & Robotics · Class 9 · Intermediate
Wi-Fi Environment Monitor
ESP32 + MQTT + Node-RED · Individual · 3 weeks
3 weeks · Individual · ESP32 + MQTT · Node-RED dashboard

An ESP32 reads DHT22 (temperature + humidity), BMP280 (air pressure), and MQ-135 (air quality) every 30 seconds, publishing JSON to an MQTT broker. A Node-RED flow on the school RPi subscribes and renders live gauges, line charts, and sends a Telegram alert if temperature exceeds 38 °C. Dashboard accessible from any browser on the LAN.

What you build
  • ESP32 firmware: Wi-Fi connect → sensor read loop → JSON → MQTT publish
  • Self-hosted Mosquitto broker on the school Raspberry Pi
  • Node-RED flow: MQTT in → JSON parse → gauge + chart + alert nodes
  • Telegram bot alert when temperature > 38 °C or AQI is poor
  • Dashboard on LAN: http://[RPi-IP]:1880/ui — viewable by whole class
  • Optional: deep-sleep between readings for a battery-powered demo
ESP32 · MQTT (Mosquitto) · DHT22 · BMP280 · MQ-135 · Node-RED · Raspberry Pi
Learning Outcomes
Implement the MQTT publish/subscribe pattern on a microcontroller
Build a real-time sensor dashboard with Node-RED
Configure and run a self-hosted MQTT broker on Raspberry Pi
Use deep-sleep modes to reduce power consumption by ~90%
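Whatever firmware builds it, the contract between the ESP32 and the Node-RED flow is just a JSON document published every 30 seconds. A Python sketch of one reading; the field names here are an assumption — they only need to match what the flow's JSON-parse node expects:

```python
import json, time

def reading_payload(temp_c, humidity, pressure_hpa, air_quality_raw, ts=None):
    """One sensor reading as the JSON string to publish on the MQTT topic.
    Field names are hypothetical; align them with the Node-RED flow."""
    return json.dumps({
        "ts": ts if ts is not None else int(time.time()),
        "temp_c": round(temp_c, 1),
        "humidity": round(humidity, 1),
        "pressure_hpa": round(pressure_hpa, 1),
        "aqi_raw": air_quality_raw,
        "alert": temp_c > 38.0,        # mirrors the Telegram alert threshold
    })
```

Keeping the alert flag in the payload lets the Node-RED flow stay dumb: the Telegram node just routes on `msg.payload.alert`.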
IoT & Robotics · Class 9 · Intermediate
GPS Asset Tracker with Live Map
ESP32 + NEO-6M GPS + Google Maps · Individual · 4 weeks
4 weeks · Individual · NEO-6M GPS · Live Google Maps

Students build a real GPS tracker using an ESP32 and NEO-6M GPS module. The device reads NMEA sentences, parses latitude/longitude/speed, and publishes to a Firebase Realtime Database via Wi-Fi every 15 seconds. A simple HTML dashboard with the Google Maps JavaScript API displays the live position as a moving marker with a breadcrumb trail. Students attach it to a bag and watch their position update as they walk the school campus.

What you build
  • ESP32 firmware: parse NMEA sentences from NEO-6M via hardware UART
  • TinyGPSPlus library: extract latitude, longitude, speed, HDOP accuracy estimate
  • Wi-Fi + Firebase REST: push {lat, lng, speed, timestamp} to Realtime DB
  • HTML dashboard: Google Maps JS API with real-time marker via Firebase onValue listener
  • Breadcrumb trail: last 50 positions drawn as polyline on map
  • Speed display: "Walking / Cycling / Stationary" classification from speed threshold
  • OLED on device: shows fix status, satellite count, current speed
NEO-6M GPS · NMEA Parsing · TinyGPSPlus · Firebase RTDB · Google Maps JS API · UART Comms
Learning Outcomes
Parse UART serial protocols (NMEA) from a real GPS receiver
Push IoT data to a cloud database using REST from an ESP32
Render a real-time moving marker on Google Maps with live DB listeners
Interpret HDOP accuracy estimates to assess GPS fix quality
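TinyGPSPlus hides the parsing, but decoding one $GPGGA sentence by hand shows what the library does. A minimal Python sketch — illustrative only, with no checksum validation (a real parser must verify the `*hh` trailer):

```python
def nmea_to_degrees(value, hemisphere):
    """Convert NMEA ddmm.mmmm / dddmm.mmmm into signed decimal degrees."""
    raw = float(value)
    degrees = int(raw // 100)
    minutes = raw - degrees * 100
    decimal = degrees + minutes / 60
    return -decimal if hemisphere in ("S", "W") else decimal

def parse_gpgga(sentence):
    """Extract position, satellite count, and HDOP from a $GPGGA sentence.
    Returns None when the fix-quality field reports no fix."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA") or fields[6] == "0":
        return None
    return {
        "lat": nmea_to_degrees(fields[2], fields[3]),
        "lng": nmea_to_degrees(fields[4], fields[5]),
        "sats": int(fields[7]),
        "hdop": float(fields[8]),
    }
```

The `{lat, lng}` pair is exactly what gets pushed to Firebase; the HDOP value feeds the fix-quality learning outcome above (lower is better, roughly < 2 is a good fix).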
Class 10
IoT & Robotics · Class 10 · Intermediate
Smart Home Control Dashboard
ESP32 + ThingSpeak + Web UI · Team of 2 · 4 weeks
4 weeks · Team of 2 · ThingSpeak cloud · Web + Mobile

A full smart-home simulation: ESP32 controls an LED strip (PWM brightness), a DC fan via L298N motor driver, and reads DHT22. All devices are controllable from a custom HTML/JS dashboard via the ThingSpeak API. Dashboard shows live temperature charts, lets the user set an auto-fan threshold, and includes a time-based light scheduler.

What you build
  • ESP32 firmware: reads DHT22, drives LED (PWM) and fan (L298N), polls ThingSpeak control field every 5 s
  • ThingSpeak channel: fields for temperature, humidity, LED state, fan speed, control
  • Web dashboard: LED toggle + brightness slider, fan slider, Chart.js temperature graph
  • Auto mode: fan turns on when temp exceeds user-set threshold
  • Light scheduler: set on/off times stored in localStorage, sent to ESP32
ESP32 · ThingSpeak API · L298N Motor Driver · PWM Control · REST API (fetch) · Chart.js · HTML / CSS / JS
Learning Outcomes
Implement bidirectional cloud-to-device IoT communication
Build a web UI that controls physical hardware through a cloud API
Control DC motor speed and direction using an L298N driver
Design a time-based scheduling system with persistent rules
IoT & Robotics · Class 10 · Intermediate
FPV Drone Controller Simulator
RPi + Pygame + Flight Controller Protocol · Team of 2 · 4 weeks
4 weeks · Team of 2 · Pygame Sim · MSP Protocol

Students build a Python-based drone controller simulator using Pygame for visualisation and the MSP (MultiWii Serial Protocol) to send commands to a Betaflight-flashed flight controller board. Phase 1 simulates a top-down 2D drone responding to keyboard commands. Phase 2 connects to an actual flight controller via USB serial and tests roll/pitch/yaw responses with motors disconnected (safe bench testing).

What you build
  • Pygame simulation: 2D overhead drone sprite responding to WASD + QE yaw keys
  • Physics model: simplified drag, inertia, and altitude hold simulation
  • MSP protocol encoder: pack RC channel values into MSP SET_RAW_RC payload
  • Serial connection to Betaflight FC: send 8 RC channels at 50 Hz update rate
  • Betaflight Configurator overlay: show live sensor data (gyro, accelerometer) alongside sim
  • Failsafe: emergency stop binding — single key returns all channels to safe values
  • Latency logger: measure serial round-trip time and plot distribution histogram
PygameMSP ProtocolpyserialBetaflight FCRC Channel EncodingReal-Time Control Loop
Learning Outcomes
Implement a real-time control loop with strict timing requirements (50 Hz)
Encode and send a binary serial protocol (MSP) from Python
Understand RC channel mapping: throttle, roll, pitch, yaw, aux channels
Design and implement a safe failsafe mechanism for physical systems
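The MSP v1 framing used for SET_RAW_RC is simple enough to pack by hand: a `$M<` header, payload size, command id, payload, then an XOR checksum over size, command, and payload bytes. A sketch of the encoder (the channel order shown assumes a roll/pitch/throttle/yaw mapping; match it to your Betaflight channel map before bench testing):

```python
import struct

MSP_SET_RAW_RC = 200  # MSP v1 command id for injecting raw RC channels

def msp_frame(cmd: int, payload: bytes) -> bytes:
    """Wrap a payload in an MSP v1 request frame: '$M<' size cmd payload checksum."""
    size = len(payload)
    checksum = size ^ cmd
    for b in payload:
        checksum ^= b
    return b"$M<" + bytes([size, cmd]) + payload + bytes([checksum])

def set_raw_rc(channels: list[int]) -> bytes:
    """Pack 8 RC channel values (1000-2000 us) as little-endian uint16s."""
    assert len(channels) == 8
    return msp_frame(MSP_SET_RAW_RC, struct.pack("<8H", *channels))

# Failsafe frame: throttle low, sticks centred, aux channels low
# (assumed AETR order: roll, pitch, throttle, yaw, aux1-4).
FAILSAFE = [1500, 1500, 1000, 1500, 1000, 1000, 1000, 1000]
frame = set_raw_rc(FAILSAFE)
```

In the real loop this frame is written to the FC with `pyserial` at the 50 Hz rate the spec requires.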
Class 11
IoT & RoboticsClass 11Advanced
AI Intruder Alert System
OpenCV + RPi + Flask + Telegram · Team of 2 · 4 weeks
4 weeks Team of 2 Pi Camera + OpenCV Telegram alerts

Raspberry Pi with Pi Camera monitors a room. OpenCV detects motion via MOG2 background subtraction; a Haar cascade then confirms a human face before triggering. On detection a Telegram bot sends an alert with a timestamped snapshot. A lightweight Flask app serves a live MJPEG stream viewable from any browser on the LAN.

What you build
  • MOG2 background subtraction loop → contour detection → motion trigger
  • Haar cascade face verification on triggered frames only (CPU efficient)
  • Telegram Bot API: photo + timestamp on confirmed detection
  • Flask MJPEG stream endpoint + JSON status endpoint
  • 30-second alert cooldown to prevent spam
  • Detection log file: timestamp, confidence, snapshot filename
  • GPIO buzzer alarm on high-confidence detection (optional extension)
OpenCVPython picamera2MOG2 SubtractionHaar CascadesFlaskTelegram Bot APIThreading
Learning Outcomes
Apply real-time computer vision to a practical embedded system
Serve a live video stream from a Raspberry Pi over HTTP
Integrate a third-party messaging API for autonomous notifications
Optimise CPU usage with event-driven vs polling architecture
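The 30-second cooldown is a timestamp guard around the Telegram send. A small sketch with an injectable clock so the logic can be tested without waiting:

```python
import time

class AlertCooldown:
    """Suppress repeat alerts inside a cooldown window (30 s in this project)."""

    def __init__(self, cooldown_s: float = 30.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock          # injectable for testing
        self.last_alert = None

    def should_alert(self) -> bool:
        """Return True (and arm the cooldown) only if the window has elapsed."""
        now = self.clock()
        if self.last_alert is None or now - self.last_alert >= self.cooldown_s:
            self.last_alert = now
            return True
        return False
```

The detection loop gates the Telegram photo upload on `should_alert()`, so a person lingering in frame produces one message instead of thirty per second.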
IoT & RoboticsClass 11Advanced
3-DOF Robot Arm with Inverse Kinematics
Arduino + 3 Servos + Python IK Solver + GUI · Team of 2 · 5 weeks
5 weeks Team of 2 Arduino + Servos IK Maths

Students build a 3-DOF robot arm (shoulder, elbow, wrist) using 3D-printed links and MG996R servos, then write a Python inverse kinematics solver that calculates joint angles from a desired (x, y, z) end-effector position. A Tkinter GUI lets users click a target point; the arm moves there smoothly via interpolated servo commands sent over serial. The project bridges the mathematics and the physical hardware.

What you build
  • 3D-printed arm links: shoulder, elbow, wrist, and end-effector gripper (designed in Tinkercad)
  • Arduino firmware: receive joint angle commands over serial, smooth servo interpolation (10° steps)
  • Python IK solver: geometric IK for 3-DOF planar arm — shoulder, elbow, wrist angles from (x, y, z)
  • Workspace boundary check: reject (x, y, z) targets outside arm reach without crashing
  • Tkinter GUI: 2D canvas — click target point → IK solves → arm moves in real time
  • Trajectory interpolation: moves arm through N waypoints smoothly over T seconds
  • Pick-and-place demo: arm picks a small block and deposits it in a target zone
Inverse KinematicsMG996R ServoPython IK SolverTkinter GUIpyserialTrajectory Planning3D Printing
Learning Outcomes
Derive and implement geometric inverse kinematics for a 3-DOF planar arm
Design and 3D-print robot arm links with correct servo mounting geometry
Implement smooth trajectory interpolation between waypoints
Bridge Python control software and Arduino hardware via serial protocol
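The geometric IK fits in a dozen lines. This sketch uses one common decomposition for a 3-DOF arm: yaw the base toward the target, then solve the remaining two joints with the law of cosines in the vertical plane (the exact joint assignment depends on how the arm is mounted). Link lengths `L1` and `L2` are placeholders for the printed links, and the workspace boundary check from the spec falls out naturally:

```python
import math

L1, L2 = 120.0, 100.0  # link lengths in mm (placeholders for the printed links)

def solve_ik(x: float, y: float, z: float):
    """Geometric IK: base yaw faces the target, then a two-link planar solve.

    Returns (base, shoulder, elbow) angles in radians, or None when the
    target lies outside the reachable workspace.
    """
    base = math.atan2(y, x)             # rotate the arm plane toward the target
    r = math.hypot(x, y)                # horizontal reach in that plane
    d = math.hypot(r, z)                # straight-line distance to the target
    if d > L1 + L2 or d < abs(L1 - L2):
        return None                     # workspace boundary check: reject, don't crash
    # Law of cosines gives the elbow angle directly.
    cos_elbow = (d * d - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder = angle to target minus the inner-triangle correction.
    shoulder = math.atan2(z, r) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return base, shoulder, elbow
```

A quick sanity check is to run forward kinematics on the returned angles and confirm the end effector lands back on the requested (x, y, z).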
Class 12
IoT & RoboticsClass 12Expert
Autonomous Warehouse Robot
RPi + Arduino + Vision + MQTT + Cloud Dashboard · Team of 2 · 10 weeks
10 weeks Team of 2 RPi + Arduino Grand Capstone

The Class 12 IoT grand capstone: a robot navigates a 5×5 grid, identifies coloured packages via camera, and delivers to the matching shelf slot. RPi handles high-level navigation and computer vision; Arduino manages low-level motor and servo control. A Next.js cloud dashboard shows live robot position, task queue, and delivery history via MQTT.

What you build
  • A* pathfinding on a 5×5 tile map with dynamic obstacle avoidance
  • OpenCV colour classifier: identifies red / green / blue packages at pickup
  • RPi ↔ Arduino serial protocol: drive commands and encoder odometry feedback
  • PID motor control loop for straight driving and accurate 90° turns
  • MQTT telemetry: robot publishes position and status every 500 ms
  • Next.js cloud dashboard: live grid map, task queue, delivery history log
  • Safety: emergency stop when ultrasonic detects obstacle within 5 cm
  • 3D-printed chassis components designed in Tinkercad
Raspberry PiArduinoA* PathfindingOpenCVPID ControlMQTTNext.js Dashboard3D Printing
Learning Outcomes
Design a complete embedded + cloud system architecture end to end
Implement A* pathfinding and PID closed-loop control on a real robot
Integrate computer vision with physical robot actuation
Present a live autonomous demo to faculty and industry guests
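On a 5×5 map the A* planner needs only the standard priority-queue loop with a Manhattan heuristic. A self-contained sketch (4-connected moves; obstacle cells given as a set of (x, y) tuples):

```python
import heapq

def astar(start, goal, obstacles, size=5):
    """A* on a size x size grid: 4-connected moves, Manhattan heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, g, position, path)
    seen = set()
    while open_heap:
        f, g, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        if pos in seen:
            continue
        seen.add(pos)
        x, y = pos
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in obstacles:
                heapq.heappush(open_heap, (g + 1 + h((nx, ny)), g + 1,
                                           (nx, ny), path + [(nx, ny)]))
    return None  # no route: replan or report the shelf as blocked
```

Dynamic obstacle avoidance then amounts to re-running `astar` whenever the ultrasonic sensor adds a cell to `obstacles`.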
IoT & RoboticsClass 12Expert
Multi-Robot Swarm Coordination
3× RPi Robots + MQTT Swarm Controller + Webots Sim · Team of 3 · 10 weeks
10 weeks Team of 3 3× RPi Robots Swarm MQTT

Three robots work together to cover a grid area, divide pick-and-place tasks without collision, and communicate task completion over MQTT. A central Python swarm controller assigns tasks using a greedy nearest-robot algorithm. Webots simulation validates the algorithm before physical deployment. Each robot reports position and status; the cloud dashboard shows all three in real time. Demonstrates fleet management, task allocation, and collision avoidance.

What you build
  • 3 identical RPi robots each with encoders, ultrasonic sensor, and colour camera
  • Swarm MQTT topics: /robot/[id]/position, /robot/[id]/status, /swarm/task
  • Greedy task allocator: assigns new tasks to the nearest idle robot using Euclidean distance
  • Collision avoidance: each robot publishes its planned path; swarm controller detects and resolves conflicts
  • Webots simulation: validate algorithm with 10 tasks before physical deployment
  • Cloud dashboard: three robots shown as icons on grid, task queue, completion rate KPI
  • Performance report: area covered per hour, task completion rate, average idle time
Swarm RoboticsTask AllocationMulti-Agent MQTTCollision AvoidanceWebots SimFleet DashboardEncoder Odometry
Learning Outcomes
Design a decentralised multi-agent communication architecture via MQTT
Implement greedy task allocation and measure its performance vs optimal
Detect and resolve path conflicts in a multi-robot system
Validate a multi-robot algorithm in simulation before physical deployment
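The greedy allocator is the heart of the swarm controller. A sketch of the nearest-idle-robot rule (the robot state shape is illustrative; ties break deterministically by robot id):

```python
import math

def assign_task(task_pos, robots):
    """Greedy allocation: give the task to the nearest idle robot.

    `robots` maps robot id -> {"pos": (x, y), "idle": bool}.
    Returns the chosen robot id, or None if every robot is busy.
    """
    idle = [(math.dist(state["pos"], task_pos), rid)
            for rid, state in robots.items() if state["idle"]]
    if not idle:
        return None          # queue the task until a robot frees up
    _, rid = min(idle)       # min on (distance, id) also breaks ties by id
    return rid
```

Measuring this policy against an optimal assignment (as the learning outcomes ask) just means comparing total travel distance over the same task sequence.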
🧠
Track 3 — AI, ML & Automation
Data Literacy → Python DS → Scikit-Learn → GitHub/N8N → TensorFlow → LLM APIs & RAG · Class 6–12
14 Projects · 7 Classes
Class 6
AI / MLClass 6Beginner
Rock-Paper-Scissors AI (Teachable Machine)
Google Teachable Machine + TensorFlow.js · Individual · 1 week
1 week Individual Webcam + browser Zero-code ML

Students train their very first machine learning model with Google Teachable Machine — capturing 50+ webcam images per hand sign (Rock, Paper, Scissors, Background) and training an image classifier. The exported TF.js model is embedded in a web page that plays a live game, showing class prediction and confidence score in real time.

What you build
  • Teachable Machine project: 4 classes, 50+ images each, trained and tested
  • Model exported to TensorFlow.js hosted format
  • HTML page: loads model, reads webcam, shows live class + confidence bar
  • Game logic: computer picks random move, evaluates winner, updates score
  • Reflection writeup: what happens when training data is noisy or imbalanced?
Teachable MachineImage ClassificationTensorFlow.jsHTML + JSModel Export
Learning Outcomes
Train and test an image classifier without writing model code
Understand why training data quality and balance matter
Deploy a TF.js model in a plain HTML page
Reason about model confidence scores and their limitations
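The game logic itself is only a lookup table. The project page implements it in JavaScript; the same rule in Python for clarity:

```python
import random

# Each move beats exactly one other move.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(player: str, computer: str) -> str:
    """Return 'win', 'lose', or 'draw' from the player's point of view."""
    if player == computer:
        return "draw"
    return "win" if BEATS[player] == computer else "lose"

def computer_move(rng=random) -> str:
    """The computer picks a uniformly random move."""
    return rng.choice(list(BEATS))
```

The model's predicted class feeds in as `player`, and the scoreboard updates from `judge`'s result.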
AI / MLClass 6Beginner
FAQ Chatbot in Scratch
Scratch + Decision Trees · Individual · 1 week
1 week Individual Scratch 3.0 Rule-based AI

Students build a rule-based chatbot in Scratch that answers frequently asked questions about PREKSHA Academy — admission fees, tracks, timings, and contact info. Sprites respond to typed keywords using if/else chains and nested conditions. Students learn this is how early AI assistants worked before machine learning: explicit rules written by a human. They then reflect on what limitations this creates versus a trained model.

What you build
  • Scratch chatbot sprite: typed input → keyword detection → canned response
  • At least 15 distinct question keywords with appropriate answers each
  • Fallback handler: "I don't understand, try asking about fees or timings"
  • Memory variable: bot remembers the user's name from first message
  • Visualisation: decision tree drawn on paper showing all branches
  • Reflection writeup: 3 limitations of rule-based AI vs a trained language model
Scratch BlocksConditional LogicString MatchingVariablesDecision TreesRule-Based AI
Learning Outcomes
Implement a rule-based decision system using nested conditional logic
Draw and reason about a decision tree before coding it
Articulate the limitations of rule-based AI compared to ML models
Design a conversational flow with appropriate fallback handling
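The Scratch if/else chain is equivalent to a keyword table with a fallback. An illustrative Python version (the keywords and answers here are samples, not the full set of 15):

```python
# Sample rules only; the real bot covers 15+ keywords.
RULES = {
    "fees": "Admission fee details are available at the academy office.",
    "timings": "Classes run in the late afternoon on weekdays.",
    "tracks": "We offer Coding, IoT & Robotics, and AI/ML tracks.",
}
FALLBACK = "I don't understand, try asking about fees or timings"

def reply(message: str) -> str:
    """First matching keyword wins; anything else gets the fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return FALLBACK
```

Drawing this as a decision tree first, as the spec asks, makes the keyword-priority order explicit before any blocks are placed.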
Class 7
AI / MLClass 7Beginner
Class Data Story Dashboard
Google Sheets + Forms + Canva · Individual · 2 weeks
2 weeks Individual Google Sheets Canva presentation

Students design a Google Form survey, collect 30+ responses from classmates, clean the data in Google Sheets, and build an analysis dashboard with pivot tables and four chart types. The final deliverable is a five-slide Canva "data story" presenting one headline insight and a recommendation to the class.

What you build
  • Google Form: 10+ questions covering MCQ, Likert scale, and short answer
  • Sheets cleaning: remove blanks, fix inconsistencies, add computed columns
  • COUNTIF / AVERAGEIF analysis: 5+ insights extracted
  • 4 chart types: bar, pie, scatter plot, and a conditional-formatting heatmap
  • 5-slide Canva deck: hook → data → insight → recommendation → conclusion
Google FormsGoogle SheetsCOUNTIF / AVERAGEIFPivot TablesData VisualisationCanva
Learning Outcomes
Design a survey that produces clean, analysable data
Use Sheets formulas to extract meaningful statistics
Choose the right chart type for each kind of data
Present findings as a coherent visual narrative
AI / MLClass 7Beginner
Sales Dashboard from Raw CSV
Excel / Google Sheets · Individual · 2 weeks
2 weeks Individual Excel or Sheets Dynamic dashboard

Students receive a deliberately messy 500-row retail sales CSV file (duplicate rows, inconsistent product names, missing values, mixed date formats) and must clean, model, and visualise it into a professional interactive Excel/Sheets dashboard. Slicers allow filtering by Region, Product Category, and Month. This project builds the "data wrangling" muscle that underpins all future data science work.

What you build
  • Raw data cleaning sheet: de-duplicate, fix product names with SUBSTITUTE, parse dates
  • Data model: normalised Product, Region, and Date dimension tables linked by VLOOKUP
  • Pivot tables: revenue by region, top 10 products, monthly trend, category breakdown
  • Dynamic dashboard sheet: 4 charts tied to pivots with Region and Month slicers
  • KPI cards: Total Revenue, Best Month, Top Product — computed with MAXIFS/SUMIFS
  • Conditional formatting: highlight negative growth cells red on month-over-month comparison
Data CleaningVLOOKUP / XLOOKUPPivot TablesSUMIFS / MAXIFSSlicersConditional Formatting
Learning Outcomes
Identify and fix common data quality issues in a real messy dataset
Build a normalised data model with lookup relationships in Sheets
Create pivot-driven interactive charts with slicer controls
Use SUMIFS / MAXIFS / AVERAGEIFS for multi-criteria aggregation
Class 8
AI / MLClass 8Intermediate
IPL Season Analyser (Python + Pandas)
Pandas + Matplotlib + Seaborn · Individual · 3 weeks
3 weeks Individual Google Colab IPL 2008–2023 dataset

Students load a real IPL dataset (2008–2023) into a Google Colab notebook and perform guided exploratory data analysis. They clean the data, answer 8 specific analytical questions, create 8 visualisations, and export a one-page PDF summary. Topics include top run-scorers, winning teams by venue, and toss-effect analysis.

What you build
  • Data cleaning: handle nulls, standardise team names across 16 seasons
  • EDA: top-10 batsmen by total runs, most wickets, highest-scoring venues
  • Groupby aggregation: wins by team per season, toss decision analysis
  • 8 visualisations: bar, horizontal bar, heatmap, line chart, scatter plot
  • Correlation analysis: does winning the toss correlate with winning?
  • One-page PDF summary with insights and recommendations exported from Colab
PythonpandasmatplotlibseabornJupyter / Colabgroupby / aggData Cleaning
Learning Outcomes
Clean and standardise a real-world messy CSV dataset
Use groupby and aggregation to answer precise analytical questions
Select appropriate chart types for categorical vs numeric data
Interpret correlations without overstating causation
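The groupby patterns behind the wins-per-season and toss-effect questions look like this in pandas (toy rows standing in for the real dataset):

```python
import pandas as pd

# Toy match table in the shape of the IPL dataset (illustrative rows only).
matches = pd.DataFrame({
    "season":      [2019, 2019, 2020, 2020, 2020],
    "winner":      ["MI", "CSK", "MI", "MI", "CSK"],
    "toss_winner": ["MI", "MI", "CSK", "MI", "CSK"],
})

# Wins by team per season: the same groupby/unstack pattern as the notebook.
wins = matches.groupby(["season", "winner"]).size().unstack(fill_value=0)

# Toss effect: fraction of matches where the toss winner also won the match.
toss_effect = (matches["winner"] == matches["toss_winner"]).mean()
```

On the real data, comparing `toss_effect` against 0.5 is what drives the correlation-versus-causation discussion in the final outcome.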
AI / MLClass 8Intermediate
Twitter/X Sentiment Analyser
Python + VADER + Tweepy + Matplotlib · Individual · 3 weeks
3 weeks Individual Google Colab Real tweet data

Students analyse the sentiment of real tweets on a topic they choose (a Bollywood film, a cricket match, a tech launch) using VADER (Valence Aware Dictionary and sEntiment Reasoner) — a lexicon-based sentiment analyser designed for social media. They pre-process tweets (remove stopwords, URLs, mentions), compute sentiment scores, and create visualisations showing sentiment distribution, time-series sentiment shifts, and top positive/negative words.

What you build
  • Tweet collection: Twitter API v2 via Tweepy, or a pre-collected CSV dataset (500+ tweets on chosen topic)
  • Text pre-processing: remove URLs, @mentions, #hashtags, and non-ASCII characters
  • VADER sentiment scoring: positive, negative, neutral, compound score per tweet
  • Classification: compound > 0.05 = Positive, < -0.05 = Negative, else Neutral
  • Visualisations: sentiment donut chart, compound score histogram, time-series line chart
  • Word frequency: top 20 positive words and top 20 negative words as coloured bar charts
  • Insight writeup: 3 observations about public sentiment and what drove it
VADER SentimentText Pre-processingTweepy / CSVpandasmatplotlibNLP BasicsWord Frequency
Learning Outcomes
Apply lexicon-based sentiment analysis without training any model
Pre-process noisy social media text for analysis
Identify limitations of VADER compared to transformer-based sentiment models
Derive actionable insights from sentiment distributions in real data
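Both the pre-processing and the classification rule are a few lines each. A sketch using the exact thresholds from the spec (the compound score itself would come from VADER's `SentimentIntensityAnalyzer`):

```python
import re

def clean_tweet(text: str) -> str:
    """Strip URLs, @mentions, #hashtags, and non-ASCII before scoring."""
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"[@#]\w+", "", text)
    text = text.encode("ascii", "ignore").decode()
    return " ".join(text.split())

def label(compound: float) -> str:
    """Classify a VADER compound score with the project's thresholds."""
    if compound > 0.05:
        return "Positive"
    if compound < -0.05:
        return "Negative"
    return "Neutral"
```

Keeping the thresholds in one function makes the later limitation discussion concrete: the entire positive/negative boundary is two hand-picked numbers.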
Class 9
AI / MLClass 9Intermediate
Student Score Predictor (Scikit-Learn)
Regression + Random Forest + Gradio · Individual · 3 weeks
3 weeks Individual Google Colab Gradio deploy

Using the Student Performance dataset (UCI ML Repo), students build and compare Linear Regression and Random Forest models to predict final exam scores from study hours, attendance, and past grades. Models are evaluated with MSE and R², feature importance is visualised, and the best model is served as a live Gradio web app anyone can use.

What you build
  • EDA: distribution plots, correlation heatmap, box plots for outlier detection
  • Feature engineering: one-hot encoding, StandardScaler normalisation
  • Linear Regression: coefficients table, R² score, residual plot
  • Random Forest: GridSearchCV hyperparameter tuning, cross-validation
  • Comparison table: MAE, MSE, R² for both models side-by-side
  • Feature importance bar chart: which factor predicts score most?
  • Gradio app: enter student details → get predicted score live
Scikit-LearnLinear RegressionRandom ForestGridSearchCVFeature EngineeringGradioModel Evaluation
Learning Outcomes
Build and evaluate regression models using Scikit-Learn pipelines
Tune hyperparameters systematically with GridSearchCV
Interpret feature importance to explain model decisions
Deploy an ML model as an interactive web app with Gradio
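The model-comparison loop is the core of the notebook. A sketch on synthetic stand-in data (the real project uses the UCI Student Performance dataset; the feature names and coefficients below are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: score driven by study hours, attendance, past grade.
rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(300, 3))
y = 5 * X[:, 0] + 2 * X[:, 1] + 3 * X[:, 2] + rng.normal(0, 2, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for name, model in [("linear", LinearRegression()),
                    ("forest", RandomForestRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, model.predict(X_te))
```

Because this synthetic target is genuinely linear, LinearRegression wins here; on the real dataset the comparison table decides which model backs the Gradio app.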
AI / MLClass 9Intermediate
Fruit & Vegetable Image Classifier
CNN from Scratch + Keras · Individual · 4 weeks
4 weeks Individual Google Colab (GPU) Fruits-360 dataset

Students build a Convolutional Neural Network from scratch using Keras (no transfer learning) to classify 36 fruit and vegetable categories from the Fruits-360 dataset. They manually design the architecture — experimenting with the number of Conv layers, filter sizes, and dropout rates — and document each experiment in a structured "experiment log" showing how each change affected validation accuracy. Teaches model architecture intuition before transfer learning shortcuts are introduced in Class 11.

What you build
  • Data loading: ImageDataGenerator with train/val/test split from Fruits-360 directory
  • Baseline CNN: Conv2D(32) → MaxPool → Conv2D(64) → MaxPool → Flatten → Dense(128) → Softmax(36)
  • Experiment log: 5 architecture variations documented — filter count, depth, dropout, batch norm
  • Best model: confusion matrix, per-class precision/recall, top-5 misclassified examples
  • Data augmentation: random flip, rotation, zoom — measure accuracy improvement
  • Gradio app: upload photo of fruit/vegetable → get top-3 predictions with confidence
  • Model card: intended use, training data, performance metrics, known failure modes
Keras CNNConv2D + MaxPoolDropout / BatchNormImageDataGeneratorConfusion MatrixGradioModel Card
Learning Outcomes
Design and build a CNN architecture from scratch without pre-trained weights
Run structured experiments and document each change's effect on accuracy
Interpret a confusion matrix to find systematic model errors
Write a model card explaining limitations and appropriate use cases
Class 10
AI / MLClass 10Intermediate
Assignment Reminder Bot (N8N Automation)
N8N + Google Sheets + Twilio/Gmail · Individual · 2 weeks
2 weeks Individual N8N self-hosted Runs every morning

A no-code automation built in N8N (self-hosted on the school Raspberry Pi) that reads a Google Sheets deadline tracker every morning at 08:00, finds assignments due within 48 hours, and sends personalised WhatsApp or email reminders — then marks them "Reminded" in the sheet to prevent duplicates. Real students use this from Day 1.

What you build
  • Google Sheet: Student / Subject / Assignment / Deadline / Reminded columns
  • N8N Schedule Trigger: fires every day at 08:00
  • Google Sheets node: read all rows, filter deadlines within 48 hours
  • IF node: skip rows where Reminded = Yes
  • Send node: WhatsApp (Twilio) or Gmail with personalised message
  • Update node: set Reminded = Yes after successful send
  • Bonus: HTML form webhook to add new deadlines to Sheets via N8N
N8N WorkflowsGoogle Sheets APISchedule TriggerTwilio / GmailWebhookJSON MappingGit / GitHub
Learning Outcomes
Design an automation from trigger through to final action
Connect SaaS tools via APIs without writing custom code
Build conditional filtering and state management visually
Prevent duplicate messages using a Reminded flag pattern
AI / MLClass 10Intermediate
GitHub Portfolio Auto-Update Bot
Python + GitHub API + N8N · Individual · 2 weeks
2 weeks Individual GitHub API Automated commits

Students build an automation that keeps their GitHub profile README dynamically up to date: it fetches their latest public repos, star counts, and language breakdown via the GitHub GraphQL API, generates a rich markdown file with charts and stats using Python, and commits + pushes the updated README automatically via a GitHub Action that runs every Sunday. Teaches API authentication, GraphQL queries, and automated Git workflows.

What you build
  • GitHub GraphQL query: fetch top-10 repos by stars, language stats, contribution streak
  • Python generator: render a Markdown README with emoji stats, language bar chart (ASCII), and project table
  • GitHub Action workflow: runs every Sunday at 00:00 UTC — fetch, generate, commit, push
  • Dynamic SVG badge generator: "Top Language: Python 🐍 45%" rendered as inline SVG
  • Contribution graph embed: GitHub's own contribution graph embedded in README via img tag
  • N8N alternate pipeline: same output built without Python, using N8N HTTP + Code nodes
GitHub GraphQL APIPython requestsMarkdown GenerationGitHub ActionsAutomated Git CommitSVG GenerationN8N
Learning Outcomes
Write GraphQL queries to fetch structured data from the GitHub API
Generate dynamic content files programmatically from API data
Automate Git commit and push from a GitHub Actions workflow
Manage API authentication securely using GitHub Secrets
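The fetch step is a single GraphQL POST. A sketch of the query and request body (field names follow the public GitHub GraphQL schema as documented; the token belongs in the `Authorization` header, never in the body):

```python
import json

# Top-10 repos by stars plus their primary language, per the project spec.
TOP_REPOS_QUERY = """
query {
  viewer {
    repositories(first: 10, orderBy: {field: STARGAZER_COUNT, direction: DESC}) {
      nodes {
        name
        stargazerCount
        primaryLanguage { name }
      }
    }
  }
}
"""

def graphql_payload(query: str) -> dict:
    """POST body for https://api.github.com/graphql."""
    return {"query": query}

body = json.dumps(graphql_payload(TOP_REPOS_QUERY))
```

In the GitHub Action, the token is read from a repository secret and sent as `Authorization: bearer ${{ secrets.GH_TOKEN }}`, keeping credentials out of the committed code.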
Class 11
AI / MLClass 11Advanced
Crop Disease Detector (CNN + Streamlit)
TensorFlow + MobileNetV2 + Streamlit Cloud · Team of 2 · 5 weeks
5 weeks Team of 2 Streamlit Cloud PlantVillage dataset

Students fine-tune MobileNetV2 on the PlantVillage dataset (54,000 leaf images, 38 disease classes) and deploy it as a Streamlit app where farmers upload a leaf photo and receive an instant disease diagnosis, confidence score, and treatment recommendation. Model is also exported to TFLite for potential on-device deployment.

What you build
  • Data pipeline: TF Datasets → augmentation (flip, rotate, colour jitter) → batching
  • MobileNetV2 base (frozen) + custom head: GAP → Dense(256) → Softmax(38)
  • Training: 10-epoch fine-tune, EarlyStopping + ReduceLROnPlateau callbacks
  • Evaluation: accuracy/loss curves, confusion matrix for top 10 disease classes
  • Export: Keras .h5 model file + quantised TFLite for embedded deployment
  • Streamlit app: upload → preprocess → predict → disease name + confidence + remedy
  • Deployed to Streamlit Community Cloud via GitHub integration
TensorFlow / KerasMobileNetV2Transfer LearningData AugmentationTFLiteStreamlitStreamlit Cloud
Learning Outcomes
Fine-tune a pre-trained CNN for a domain-specific classification task
Build an efficient training pipeline with augmentation and callbacks
Export a model to TFLite for lightweight embedded deployment
Create and deploy a user-facing ML app with Streamlit Cloud
AI / MLClass 11Advanced
Real-Time Object Detection with YOLOv8
YOLOv8 + OpenCV + Streamlit + Custom Training · Team of 2 · 5 weeks
5 weeks Team of 2 Webcam + YOLOv8 Streamlit + Roboflow

Students run YOLOv8 inference on live webcam feed to detect COCO objects in real time — then fine-tune it on a custom 5-class dataset of school lab equipment (Arduino, breadboard, multimeter, RPi, soldering iron) using 200 images annotated in Roboflow. Final model detects lab equipment in real time and triggers an inventory alert when an item is removed from the bench. Deployed as a Streamlit app using CUDA acceleration on the school Master PC.

What you build
  • Phase 1: YOLOv8n inference on webcam — display bounding boxes, class labels, FPS counter
  • Dataset creation: 200 images of lab equipment annotated in Roboflow (40 per class)
  • Fine-tuning: YOLOv8 custom training for 50 epochs on 5 lab equipment classes
  • Evaluation: mAP@50, precision-recall curves, per-class AP comparison before/after fine-tune
  • Inventory alert: beep and log when a registered item disappears from frame for >5 s
  • Streamlit app: upload image or use webcam — show detections with confidence and inventory status
  • Model comparison report: YOLOv8n vs YOLOv8s — accuracy vs speed tradeoff analysis
YOLOv8UltralyticsRoboflow AnnotationTransfer LearningmAP EvaluationOpenCVStreamlit
Learning Outcomes
Fine-tune a YOLO model on a custom annotated dataset using Roboflow
Evaluate object detection models using mAP, precision, and recall
Deploy real-time inference on a webcam feed with OpenCV
Make informed model size vs speed tradeoff decisions with benchmark data
Class 12
AI / MLClass 12Expert
AI Study Assistant with RAG
LangChain + GPT-4o + Pinecone + FastAPI + Next.js · Team of 2 · 8 weeks
8 weeks Team of 2 GPT-4o + RAG Grand Capstone

The Class 12 AI grand capstone: a full RAG chatbot grounded in PREKSHA curriculum PDFs and textbook chapters. LangChain RetrievalQA + GPT-4o + Pinecone vector database, a FastAPI streaming backend using Server-Sent Events, and a polished Next.js chat UI with source citations. Evaluated with RAGAS metrics and pitched as a real school product to academy management.

What you build
  • Document ingestion: PDF loader → recursive text splitter → OpenAI embeddings → Pinecone upsert
  • LangChain RetrievalQA chain with custom system prompt and cited source output
  • Conversation memory: last 6 message pairs passed as rolling context window
  • FastAPI backend: /chat endpoint with Server-Sent Events streaming
  • Next.js chat UI: message bubbles, source citation drawer, typing indicator, dark theme
  • RAGAS evaluation: faithfulness, answer relevancy, context recall scores
  • Deployed: FastAPI on Railway, Next.js on Vercel, Pinecone free tier
  • Product pitch deck: problem, solution, live demo, metrics, cost, roadmap
LangChainOpenAI GPT-4oPineconeRAG PipelineFastAPIServer-Sent EventsNext.jsRAGAS Eval
Learning Outcomes
Build a complete RAG pipeline from ingestion to cited answers
Evaluate LLM app quality with automated RAGAS metrics
Stream LLM responses to React frontend using Server-Sent Events
Pitch a full AI product with compelling technical and business content
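The rolling context window is a bounded queue of message pairs. A sketch that flattens to the role/content message format used by OpenAI-style chat APIs:

```python
from collections import deque

class RollingMemory:
    """Keep the last N user/assistant message pairs as LLM context."""

    def __init__(self, max_pairs: int = 6):
        self.pairs = deque(maxlen=max_pairs)  # oldest pair drops automatically

    def add(self, user_msg: str, assistant_msg: str) -> None:
        self.pairs.append((user_msg, assistant_msg))

    def as_messages(self) -> list:
        """Flatten to the chat-message list sent alongside retrieved context."""
        messages = []
        for user_msg, assistant_msg in self.pairs:
            messages.append({"role": "user", "content": user_msg})
            messages.append({"role": "assistant", "content": assistant_msg})
        return messages
```

Bounding the window keeps the prompt budget predictable, which matters once retrieved Pinecone chunks are also competing for context space.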
AI / MLClass 12Expert
AI Content Moderation System
DistilBERT + FastAPI + React Moderation Dashboard · Team of 2 · 8 weeks
8 weeks Team of 2 DistilBERT Extended Capstone

Students fine-tune a DistilBERT transformer model on a multi-label toxicity classification dataset (hate speech, profanity, threats, spam — 4 classes) and deploy it as a production-ready FastAPI microservice. A React moderation dashboard lets a human reviewer see queued content and model predictions with confidence scores, then approve or override each decision; the human verdicts are fed back to retrain the model weekly — closing the MLOps feedback loop.

What you build
  • DistilBERT fine-tuned on Jigsaw Toxic Comment dataset (4-class multi-label)
  • Training: Hugging Face Trainer API, class-weighted loss for imbalanced labels
  • Evaluation: F1-score per class, ROC-AUC curve, threshold calibration per class
  • FastAPI service: /classify endpoint returning label probabilities + threshold decision
  • React dashboard: content queue, model prediction display, approve/override/escalate actions
  • Human feedback loop: overrides logged to PostgreSQL → weekly re-training pipeline in MLflow
  • Bias audit: measure false-positive rates across demographic word groups, document findings
  • Ethics report: known failure modes, misuse risks, and deployment recommendations
DistilBERTHugging Face TrainerMulti-label ClassificationFastAPIReact DashboardMLflowBias AuditMLOps Loop
Learning Outcomes
Fine-tune a transformer for multi-label classification with class-weighted loss
Calibrate per-class thresholds using ROC-AUC analysis
Build a human-in-the-loop MLOps feedback cycle with retraining
Audit a model for demographic bias and document ethical deployment guidelines
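Per-class threshold calibration is what separates this from a single-label softmax classifier: each label fires independently against its own tuned cutoff. A sketch (the threshold values are placeholders for the ROC-calibrated ones):

```python
# Per-class thresholds from ROC analysis (values here are placeholders).
THRESHOLDS = {"hate": 0.40, "profanity": 0.55, "threat": 0.30, "spam": 0.60}

def decide(probs: dict) -> dict:
    """Multi-label decision: every class is judged independently,
    unlike a single-label argmax over softmax outputs."""
    return {label: probs[label] >= t for label, t in THRESHOLDS.items()}

def needs_review(probs: dict, margin: float = 0.10) -> bool:
    """Escalate to the human queue when any score sits near its threshold."""
    return any(abs(probs[label] - t) < margin
               for label, t in THRESHOLDS.items())
```

Routing near-threshold content to the dashboard is where the human-in-the-loop data comes from: those are exactly the cases the weekly retraining run learns the most from.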
Learning Path

Project Journey by Class

Two projects per class, every year — each pair builds on the previous year's skills and introduces one major new concept.

Track 1 — Coding
Class 6
School Notice BoardCSS Art Portfolio
HTML5CSS3GitHub Pages
Class 7
Interactive Quiz AppWeather Dashboard
JS DOMFetch APIasync/await
Class 8
Grade Manager CLISnake Game (OOP)
PythonOOPGame Loop
Class 9
Blog CMS (Django)Student Portal (DRF)
DjangoDRFReact
Class 10
Task Management APIReal-Time Chatroom
REST APISocket.ioJWT
Class 11
E-Commerce PlatformDev Portfolio + CMS
MERNNext.js ISRFramer
Class 12 🏆
SaaS Starter KitMicroservices Platform
TypeScriptDockerCI/CD
Track 2 — IoT
Class 6
5-LED Chasing CircuitClap-Activated LED Matrix
NE555SPIADC
Class 7
Traffic Light SystemLine-Following Robot
Arduinomillis()P-Control
Class 8
Smart Parking SystemAuto Plant Watering
I2CRelayRTC
Class 9
Wi-Fi Env MonitorGPS Asset Tracker
ESP32MQTTGPS
Class 10
Smart Home DashboardDrone Controller Sim
ThingSpeakMSP Protocol
Class 11
AI Intruder Alert3-DOF Robot Arm + IK
OpenCVIK Maths
Class 12 🏆
Autonomous Warehouse RobotMulti-Robot Swarm
A*PIDSwarm AI
Track 3 — AI/ML
Class 6
RPS Teachable MachineFAQ Chatbot (Scratch)
TF.jsRule-Based AI
Class 7
Class Data StorySales Dashboard
SheetsPivot Tables
Class 8
IPL Season AnalyserTwitter Sentiment
pandasVADER NLP
Class 9
Score PredictorFruit CNN Classifier
Scikit-LearnKeras CNN
Class 10
Reminder Bot (N8N)GitHub Portfolio Bot
N8NGitHub GraphQL
Class 11
Crop Disease DetectorYOLOv8 Detection
TensorFlowYOLORoboflow
Class 12 🏆
AI Study Assistant RAGContent Moderation AI
LangChainDistilBERTMLOps
Skills Gained

Technology Skills Matrix

Proficiency each track builds by Class 12 — measured by complexity of work produced across all 42 projects.

Track 1 — Coding
HTML / CSS
98%
JavaScript
92%
Python
87%
React / Next.js
84%
Node + Express
82%
Databases
78%
TypeScript
72%
DevOps / Docker
65%
Real-Time (WS)
70%
Track 2 — IoT
Electronics
93%
Arduino (C++)
90%
Sensors + Actuators
88%
Raspberry Pi
84%
ESP32 / Wi-Fi IoT
82%
Control Theory
76%
Computer Vision
74%
Robot Kinematics
68%
Swarm / Multi-Robot
60%
Track 3 — AI/ML
Python DS Stack
96%
pandas / EDA
93%
Scikit-Learn
87%
TensorFlow / Keras
84%
Computer Vision (CV)
78%
NLP / Transformers
75%
LLM APIs / RAG
70%
MLOps / Feedback Loop
62%
Automation (N8N)
80%