FireSentinel – AI-Powered Early Fire Detection
Computer Vision for Real-Time Fire Alerts
Presenter: [Your Name] • [Course / Startup / Date]
See the fire before it spreads.
The Problem: Fires Are Fast and Expensive
Fires cause billions of dollars in property damage every year.
Traditional systems (smoke detectors, human monitoring) are:
  • Slow – react only after smoke is dense or flames are large.
  • Local – each detector covers a tiny area.
  • Blind – no visual context or remote monitoring.
Growing risk from:
  • Dense urban housing, aging buildings.
  • Hotter, drier climates increasing wildfire risk.
We need faster, smarter, camera-based detection before fires get out of control.
Impact of the Problem
Human Impact
Injuries, deaths, long-term health issues from smoke.
Economic Impact
Businesses shut down, supply chains disrupted. Communities spend years rebuilding homes and infrastructure.
Environmental Impact
Forests and habitats destroyed. Massive CO₂ and air pollution from large fires.
Even a few minutes of earlier detection can dramatically reduce damage.
Our Solution: FireSentinel
FireSentinel: A computer-vision system that detects fire from camera feeds in real time and immediately alerts responders.
Connects to:
  • Existing CCTV / IP cameras.
  • Standalone low-cost camera units.
Uses deep learning (AlexNet-based CNN) to classify frames as:
  • "Fire" or "No Fire"
When fire is detected:
  • Sends alerts to building managers / security / authorities.
  • Can integrate with existing safety systems (alarms, sprinklers, dashboards).
Turning every camera into a potential early fire warning sensor.
Why Now / Market Relevance
Urbanization & infrastructure strain → higher fire risk.
Camera infrastructure is already everywhere:
  • Buildings, factories, malls, parking lots, cities.
AI models and edge hardware are now fast enough to run CNNs in real time.
Potential customers:
  • Commercial buildings, warehouses, data centers.
  • Cities and municipalities.
  • Industrial plants, energy facilities, transportation hubs.
Key message: "FireSentinel rides the intersection of safety, AI, and existing camera networks."
How FireSentinel Works (System Overview)
01
Video Input
Continuous frames from an existing camera feed.
02
Preprocessing
Resize and normalize images to match AlexNet input (e.g., 224×224 RGB).
03
CNN Inference (AlexNet)
The model analyzes each frame and outputs a probability of Fire vs. No Fire.
04
Decision & Alerts
If probability of fire exceeds a threshold: Trigger alert (notification, SMS, API call, control room dashboard). Optionally cross-check consecutive frames to reduce false alarms.
05
Result
Automated, always-on monitoring without needing a human to stare at screens.
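The pipeline above can be summarized in a short decision loop. Below is a minimal sketch, assuming a trained 2-class PyTorch model and an OpenCV camera feed; the threshold, the consecutive-frame count, and the send_alert hook are illustrative placeholders, not a finished API.

```python
# Per-frame inference loop (sketch): preprocess -> CNN -> threshold -> alert.
import cv2
import torch
from torchvision import transforms

FIRE_THRESHOLD = 0.8      # alert if P(fire) exceeds this (illustrative value)
CONSECUTIVE_FRAMES = 3    # require N fire frames in a row to reduce false alarms

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

def monitor(model, camera_index=0):
    model.eval()
    cap = cv2.VideoCapture(camera_index)
    streak = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        x = preprocess(rgb).unsqueeze(0)            # shape: (1, 3, 224, 224)
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)
        p_fire = probs[0, 1].item()                 # assumes class index 1 = "fire"
        streak = streak + 1 if p_fire > FIRE_THRESHOLD else 0
        if streak >= CONSECUTIVE_FRAMES:
            send_alert(p_fire)                      # hypothetical notification hook
            streak = 0
    cap.release()
```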
Technical Deep Dive: AlexNet Architecture (High-Level)
What is AlexNet?
  • A Convolutional Neural Network (CNN) designed for image classification.
  • We adapt it to a 2-class problem: fire vs no fire.
Architecture Overview:
  • Input: 3-channel RGB image, 224×224.
  • Feature extraction: Stacked convolution + ReLU + pooling layers.
  • Classification: Fully connected layers ending in a 2-logit output (fire / no fire).
Why AlexNet for FireSentinel?
  • Proven architecture, easy to understand and explain.
  • Works well with transfer learning from ImageNet.
  • Lightweight enough for deployment on many GPUs/edge devices.
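Adapting AlexNet to the 2-class problem is a two-line change on top of the pretrained model. A minimal sketch, assuming a recent torchvision:

```python
# Load ImageNet-pretrained AlexNet and swap the 1000-class head for 2 classes.
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)   # new output layer: fire / no fire
```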
AlexNet Layer-by-Layer (Feature Extraction)
01
Input Layer
Takes normalized RGB image of shape 3 × 224 × 224.
02
Convolutional Layers + ReLU
  • Conv1: Large receptive field to capture low-level patterns (colors, edges, blobs).
  • Conv2–Conv5: Stacked conv layers learn more complex features: flame shapes, edges, textures, and high-contrast regions.
  • Each layer applies ReLU (Rectified Linear Unit) for non-linearity.
03
Pooling Layers (Max Pooling)
  • Inserted after some conv layers.
  • Reduce spatial resolution (downsampling) while keeping the strongest activations.
  • Provide translation invariance so the model still detects fire even if it moves in the frame.
Key idea: These layers turn raw pixels into rich feature maps that highlight patterns associated with fire.
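You can see this pixel-to-feature-map reduction by tracing shapes through torchvision's AlexNet convolutional stack. A small sketch (the input size and print format are just for illustration):

```python
# Trace feature-map shapes through AlexNet's Conv/ReLU/MaxPool layers.
import torch
from torchvision import models

net = models.alexnet(weights=None)
x = torch.randn(1, 3, 224, 224)           # one normalized RGB frame
for layer in net.features:
    x = layer(x)
    print(type(layer).__name__, tuple(x.shape))
# Ends at (1, 256, 6, 6): raw pixels reduced to 256 small feature maps.
```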
AlexNet Layer-by-Layer (Classification Head)
After the final convolution and pooling:
01
4. Flatten + Fully Connected Layers
Feature maps are flattened into a long feature vector.
  • FC1 & FC2 (Fully Connected):
      • High-capacity layers that combine features learned across the whole image.
      • Dropout is used to reduce overfitting.
02
5. Output Layer
  • Final fully connected layer outputs 2 logits:
      • Logit for "Fire"
      • Logit for "No Fire"
  • During training:
      • These logits go into a Cross-Entropy Loss, which compares predictions to the true label.
The network converts pixels → features → a final decision: 'Is there fire in this frame?'
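The classification head and its training loss can be written in a few lines. A sketch that mirrors AlexNet's head with a 2-logit output (the batch size and random tensors are placeholders):

```python
# Flattened feature vector -> FC layers with dropout -> 2 logits -> cross-entropy.
import torch
import torch.nn as nn

feature_vector = torch.randn(8, 256 * 6 * 6)    # batch of 8 flattened feature maps
head = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 4096), nn.ReLU(inplace=True),
    nn.Linear(4096, 2),                         # logits: [no fire, fire]
)
logits = head(feature_vector)                   # shape: (8, 2)
labels = torch.randint(0, 2, (8,))              # ground-truth class per image
loss = nn.CrossEntropyLoss()(logits, labels)    # compares logits to true labels
```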
Training Process (How We Teach FireSentinel)
1
1. Dataset Setup
To teach FireSentinel, we need to provide it with a large and diverse dataset.
  • Two main classes:
      • onfire: images containing visible flames.
      • notfire: scenes without fire (indoor, outdoor, day/night).
  • Split into train / validation / test sets to evaluate performance objectively.
2
2. Preprocessing & Augmentation
Preparing the images for the AlexNet model and making the model robust.
  • Resize/crop all images to 224×224 pixels, the required input size for AlexNet.
  • Normalize pixel values using ImageNet mean/std statistics (crucial for transfer learning).
  • Data augmentation (optional but recommended):
      • Random horizontal flips, slight rotations or crops, small color shifts.
      • Helps the model generalize to different lighting conditions and environments, reducing overfitting.
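The dataset setup, augmentation, and split above fit naturally into torchvision's ImageFolder pipeline. A sketch, assuming an illustrative folder layout data/onfire and data/notfire:

```python
# Dataset loading, augmentation, normalization, and train/val split (sketch).
import torch
from torchvision import datasets, transforms

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # resize/crop to 224x224
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),  # small color shifts
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],       # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("data", transform=train_tf)
print(dataset.class_to_idx)      # e.g. {'notfire': 0, 'onfire': 1}

n_val = int(0.2 * len(dataset))
train_set, val_set = torch.utils.data.random_split(
    dataset, [len(dataset) - n_val, n_val])   # in practice, use an
                                              # augmentation-free transform for val

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=32)
```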
Training Process (Optimization Details)
1
Transfer Learning
  • Start from AlexNet pretrained on ImageNet.
  • Replace final layer with a 2-output layer (fire / no fire).
  • Either:
      • Freeze early layers and train only the last layers, or
      • Fine-tune more layers for better performance.
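Both options start from the same pretrained model; only the trainable parameters differ. A minimal sketch:

```python
# Transfer learning setup: pretrained AlexNet + new 2-class head.
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)        # new fire / no-fire output layer

# Option A: freeze the convolutional feature extractor, train only the head.
for p in model.features.parameters():
    p.requires_grad = False
# Option B: skip the freezing loop and fine-tune all layers
# (typically with a smaller learning rate for the pretrained layers).
```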
2
Optimization
  • Loss function: CrossEntropyLoss (for classification).
  • Optimizer: SGD with momentum or Adam.
  • Hyperparameters:
      • Batch size (e.g., 32).
      • Learning rate (e.g., 1e-3, tuned via experiments).
      • Number of epochs (e.g., 10–30 depending on convergence).
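One possible training loop with these settings, as a sketch; `model`, `train_loader`, and the device handling reuse names from the earlier sketches, and the epoch count and learning rate are example values:

```python
# Training loop: CrossEntropyLoss + SGD with momentum (or Adam), batch size 32.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()),
    lr=1e-3, momentum=0.9)                 # or torch.optim.Adam(..., lr=1e-3)

for epoch in range(20):                    # 10–30 epochs depending on convergence
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```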
3
Evaluation
  • Track:
      • Training & validation accuracy.
      • Confusion matrix (false positives / false negatives).
  • Adjust learning rate, batch size, and augmentation to improve results.
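A sketch of the evaluation pass, continuing the names above: rows of the 2×2 matrix are true labels, columns are predictions, so the off-diagonal cells are exactly the false negatives and false positives we track.

```python
# Validation accuracy and a 2x2 confusion matrix (sketch).
import torch

model.eval()
confusion = torch.zeros(2, 2, dtype=torch.long)
correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
        for t, p in zip(labels, preds):
            confusion[t, p] += 1
print(f"val accuracy: {correct / total:.3f}")
print(confusion)
```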
How Our Data Fits into AlexNet
1
1. Input Compatibility
  • All images (onfire and notfire) are converted to the exact input format AlexNet expects:
      • 3 channels (RGB)
      • 224×224 resolution
      • Normalized pixel values
2
2. Fire vs No-Fire Representation
  • onfire images teach the network to recognize:
      • Orange/yellow flame patterns.
      • Flickering edges, high-contrast bright regions.
  • notfire images show:
      • Everyday scenes without flames (indoors, streets, forests without fire, etc.).
      • Edge cases: sunsets, bright lights, etc., so the model learns not to overreact.
3
3. Generalization
  • Because of augmentation and diverse scenes, FireSentinel can:
      • Detect fire in different locations, sizes, and lighting conditions.
      • Reduce false alarms from "fire-like" visuals.