EdgeBrain: Building a Free, Open-Source AI-Powered Edge Intelligence Platform
By Rudra Sarker • Published March 31, 2026
Why I Built EdgeBrain
IoT platforms are everywhere. But almost all of them have the same problems: they require cloud subscriptions, send your data to someone else's servers, charge you monthly fees, and lock you into their ecosystem. If you're a student learning IoT, a researcher prototyping a system, or a developer who just wants to simulate smart devices — the existing options are either too expensive or too closed.
I wanted a platform where I could simulate IoT devices, process real-time data, run AI inference, make autonomous decisions, and control actuators — all on my laptop, without any cloud service, without any paid API, and without anyone else touching my data.
So I built EdgeBrain.
What EdgeBrain Does
EdgeBrain is an AI-powered edge intelligence platform designed to run locally. Here's what it includes:
- 11 virtual IoT devices across 3 rooms (Living Room, Bedroom, Kitchen) — temperature, motion, energy, humidity, and light sensors with realistic circadian patterns and Gaussian noise
- Multi-agent AI system with three specialized agents: Data Agent (validates and stores), Decision Agent (evaluates strategies), and Action Agent (executes commands)
- Statistical anomaly detection using three independent methods — Z-Score, IQR, and Gradient — with multi-vote consensus to reduce false positives by ~70%
- Prediction engine with linear regression, Simple Moving Average, and Exponential Moving Average forecasting
- Real-time React dashboard with WebSocket-powered live charts, 5 navigable pages, and agent pipeline visualization
- 20+ REST API endpoints with Pydantic validation, Swagger docs, CSV/JSON export, and bidirectional WebSocket
- ESP32 firmware (C++/Arduino) for real DHT11 temperature and PIR motion sensors with LED and buzzer actuators
- Plugin architecture — add custom decision strategies with just 10 lines of Python
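The prediction engine's forecasts are simple enough to sketch in a few lines. The functions below are illustrative, not EdgeBrain's actual API; they assume readings arrive as a plain list of floats:

```python
def sma_forecast(values, window=5):
    """Simple Moving Average: forecast the next value as the mean of the last `window` readings."""
    recent = values[-window:]
    return sum(recent) / len(recent)

def ema_forecast(values, alpha=0.3):
    """Exponential Moving Average: recent readings weigh more than old ones."""
    ema = values[0]
    for v in values[1:]:
        ema = alpha * v + (1 - alpha) * ema
    return ema

def linear_forecast(values):
    """Fit y = slope*x + intercept by least squares, extrapolate one step ahead."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den
    return y_mean + slope * (n - x_mean)
```

For a steadily rising series like `[1, 2, 3, 4, 5]`, the linear fit extrapolates to 6, while the SMA lags at 3 — which is exactly why having multiple forecasters available is useful.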
The Architecture
EdgeBrain follows an event-driven architecture where data flows through a clear pipeline:
- IoT Sensors — 11 virtual devices stream data via MQTT topics
- MQTT Broker — Mosquitto handles topic-based pub/sub routing
- Data Agent — validates value ranges, stores readings in PostgreSQL
- Decision Agent — evaluates multiple strategies (threshold rules, anomaly detection, predictions) simultaneously
- Action Agent — creates alerts and sends MQTT commands to actuators
- Actuators — lights, fans, and alarms respond in real time
- Dashboard — React frontend receives live updates via WebSocket
Every component is modular, independently testable, and can be replaced or extended. The system uses Redis for caching and inter-agent communication, PostgreSQL for persistent storage, and Mosquitto MQTT for device messaging.
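At its heart, the pipeline is plain topic-based publish/subscribe. As a toy illustration of the routing idea (EdgeBrain's real broker is Mosquitto, and the topic name here is made up), the whole pattern fits in a few lines of Python:

```python
from collections import defaultdict

class MiniBus:
    """Toy topic-based pub/sub router, standing in for an MQTT broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the payload to every handler subscribed to this topic.
        for handler in self.subscribers[topic]:
            handler(payload)

bus = MiniBus()
readings = []
bus.subscribe("sensors/livingroom/temperature", readings.append)
bus.publish("sensors/livingroom/temperature", {"value": 22.5})
```

Each agent is just a subscriber on one topic and a publisher on another, which is what makes the components independently replaceable.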
Anomaly Detection: Three Methods, One Consensus
One of the most interesting parts of EdgeBrain is how it detects anomalies. Instead of relying on a single algorithm, I implemented three independent statistical methods:
- Z-Score Analysis — measures how many standard deviations a reading is from the rolling mean. Effective for detecting sudden spikes in stable environments.
- IQR (Interquartile Range) — identifies outliers based on the 25th and 75th percentiles. Robust against extreme values that would skew standard deviation.
- Gradient Detection — monitors the rate of change between consecutive readings. Catches rapid transitions even if absolute values are within normal range.
The key insight: no single method is perfect. Z-Score fails with skewed distributions. IQR is slow to respond to gradual drifts. Gradient detection can be noisy. So I made them vote — an anomaly is only flagged when at least 2 out of 3 methods agree. This reduces false positives by approximately 70% compared to any single method alone.
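A minimal sketch of that 2-of-3 vote is below. The thresholds are illustrative defaults, not EdgeBrain's tuned values, and the real implementation lives in its strategy modules:

```python
import statistics

def zscore_flag(history, value, threshold=3.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return False
    return abs(value - mean) / stdev > threshold

def iqr_flag(history, value, k=1.5):
    """Flag readings outside the Tukey fences around the interquartile range."""
    q1, _, q3 = statistics.quantiles(history, n=4)
    iqr = q3 - q1
    return value < q1 - k * iqr or value > q3 + k * iqr

def gradient_flag(history, value, max_step=5.0):
    """Flag a jump larger than `max_step` relative to the previous reading."""
    return abs(value - history[-1]) > max_step

def is_anomaly(history, value):
    """Consensus vote: anomaly only if at least 2 of the 3 methods agree."""
    votes = sum([zscore_flag(history, value),
                 iqr_flag(history, value),
                 gradient_flag(history, value)])
    return votes >= 2
```

Against a stable history around 20.5°C, a reading of 45°C trips all three detectors, while 20.7°C trips none — the voting step is what keeps a single noisy detector from raising an alert on its own.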
Multi-Agent AI: Specialized by Design
Rather than one monolithic AI component, EdgeBrain uses a multi-agent architecture where each agent has a single, clear responsibility:
- Data Agent — receives raw MQTT messages, validates that values are within realistic ranges (e.g., temperature between -40°C and 80°C), and stores them in PostgreSQL with timestamps
- Decision Agent — subscribes to stored data, runs all registered decision strategies (threshold rules, anomaly detection, prediction analysis), and publishes decision messages
- Action Agent — consumes decisions and executes actions: turning on lights when motion is detected, activating fans when temperature exceeds thresholds, sending alerts for detected anomalies
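The Data Agent's validation step boils down to a per-sensor plausibility check. A sketch of that idea (the temperature range comes from the description above; the humidity range and the dict-based lookup are my own placeholders, not EdgeBrain's schema):

```python
# Plausible physical ranges per sensor type.
VALID_RANGES = {
    "temperature": (-40.0, 80.0),  # °C, per the Data Agent description
    "humidity": (0.0, 100.0),      # %, illustrative placeholder
}

def validate_reading(sensor_type, value):
    """Return True if the value is within the sensor's plausible range.

    Unknown sensor types pass through unchecked in this sketch.
    """
    low, high = VALID_RANGES.get(sensor_type, (float("-inf"), float("inf")))
    return low <= value <= high
```

Readings that fail the check never reach PostgreSQL, so a glitchy sensor reporting 120°C can't poison the rolling statistics that the anomaly detectors depend on.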
Agents communicate through an internal message bus. Each agent tracks its own performance (processing time, decisions made, errors), which is visible on the dashboard. The system also supports a plugin architecture — you can write custom strategies in Python, implement two methods (evaluate() and name), and they'll be evaluated alongside the built-in ones.
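In that spirit, a custom strategy might look like the sketch below. The exact interface — method names, the shape of the reading, and the return value — is an assumption based on the description above, not EdgeBrain's published plugin API:

```python
class NightLightStrategy:
    """Hypothetical plugin: dim the lights late at night when no motion is seen."""

    @property
    def name(self):
        return "night_light"

    def evaluate(self, reading):
        # `reading` is assumed to be a dict carrying sensor type, value,
        # the room it came from, and the hour of day.
        if (reading["sensor"] == "motion"
                and reading["hour"] >= 23
                and not reading["value"]):
            return {"action": "dim_lights", "room": reading["room"]}
        return None  # no decision for this reading
```

Registered alongside the built-in strategies, its decisions would flow through the same Action Agent as everything else — the agent doesn't care whether a decision came from a built-in or a plugin.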
Hysteresis: Solving the Flapping Problem
A common problem in IoT automation is actuator flapping — imagine a temperature hovering right at 30°C. Without hysteresis, the fan turns on, the temperature drops to 29.9°C and it turns off, then it climbs back to 30.1°C and the fan turns on again. The result is rapid cycling that wastes energy and wears out hardware.
EdgeBrain implements a hysteresis model: the fan turns on at 30°C, but doesn't turn off until the temperature drops below 25°C. Similarly, lights turn on with motion detection but only turn off after 5 minutes of no motion. This simple pattern makes the system behave like a real, practical home automation setup.
The Dashboard: Real-Time and Informative
The React 18 frontend provides 5 main pages:
- Dashboard — live sensor readings, device status grid, and recent alerts
- Analytics — historical charts, anomaly timeline, and prediction trends
- Devices — per-device details with current values, room placement, and configuration
- Agents — visual pipeline showing how data flows through each agent with real-time performance metrics
- Settings — system configuration for thresholds, simulation parameters, and notification preferences
Everything updates in real time via WebSocket — no page refreshes needed. The design uses a dark theme that's easy on the eyes for monitoring dashboards.
ESP32 Hardware Support
While the default setup uses virtual devices, EdgeBrain includes ready-to-flash ESP32 firmware written in C++ for the Arduino framework. It supports:
- DHT11 temperature and humidity sensor reading
- PIR motion detection
- LED (light) and buzzer (alarm) actuator control
- WiFi connectivity and MQTT messaging
Just flash the firmware, connect your sensors, and the ESP32 will stream real sensor data into the same MQTT topics that virtual devices use. The rest of the pipeline — agents, AI, dashboard — works identically whether data comes from simulated or real hardware.
Tech Stack
Every component is free and open-source:
- Backend: Python 3.11, FastAPI, SQLAlchemy, Pydantic
- Frontend: React 18, Recharts, WebSocket client
- Database: PostgreSQL 16 (data), Redis 7 (cache + messaging)
- Message Bus: Eclipse Mosquitto MQTT broker
- AI/ML: NumPy, SciPy — all inference on CPU
- Hardware: ESP32 / Arduino (C++)
- DevOps: Docker Compose, GitHub Actions CI (37 tests)
Setup: 2 Minutes to Running
One of my design goals was that EdgeBrain should be trivially easy to start. No manual database setup, no environment variable configuration, no build steps. Just:
```shell
git clone https://github.com/rudra496/EdgeBrain.git
cd EdgeBrain
docker compose up --build -d
```

Then open http://localhost:3000.
Docker Compose spins up all 5 services (backend, frontend, PostgreSQL, Redis, Mosquitto), initializes the database schema, and starts generating simulated device data. Within seconds, the dashboard is alive with moving charts and real-time data.
Testing and CI
EdgeBrain has 37 automated tests covering the core components: AI strategies (threshold, anomaly, prediction), agent pipeline, API endpoints, and data validation. GitHub Actions CI runs the full test suite on every push to ensure nothing breaks. All 37 tests pass on the current codebase.
What's Next
EdgeBrain is at v1.0.0 and the core platform is stable. Planned improvements include:
- More decision strategies and a strategy marketplace
- Support for additional sensor types (air quality, water level, gas detection)
- Mobile companion app for remote monitoring
- Federated learning for distributed anomaly detection
- Integration with Home Assistant and other smart home platforms
Get Started
EdgeBrain is fully open-source under the MIT License. You can find the code, documentation, and Docker setup guide on GitHub:
- GitHub: github.com/rudra496/EdgeBrain
- Live Demo: rudra496.github.io/EdgeBrain
- Release: v1.0.0 on GitHub
If you're working on IoT, edge computing, or AI — or just curious about how autonomous decision systems work — I'd love to hear your thoughts. Feel free to open an issue, submit a PR, or reach out to me on any of my social channels.
🧠 EdgeBrain v1.0.0
11 IoT devices · 3 AI agents · 37 tests · $0 cost · MIT License