🧱 Early Detection of Structural Damage Using Deep Learning & Computer Vision
📌 Project Overview
This project focuses on early detection of structural damage (cracks, spalling, collapse risk) in walls and concrete structures using a combination of Deep Learning (MobileNet CNN) and Computer Vision (OpenCV + image processing) techniques.
The system provides:
- Crack / Non‑crack classification
- Damage severity analysis
- Crack visualization (minor & major cracks)
- Training performance metrics (loss & accuracy)
- Interactive Streamlit web application with upload & camera support
This project is suitable for academic projects, research prototypes, and real‑world inspection assistance systems.
🎯 Key Objectives
- Detect cracks and damage in wall/structure images
- Ignore irrelevant objects (people, windows, background clutter)
- Highlight minor cracks (green) and major cracks (red)
- Provide interpretable metrics (edge density, texture variance, confidence)
- Visualize model training performance
🧠 Why This Approach?
Why CNN (MobileNet)?
- Lightweight and fast
- Works well on limited hardware
- Pretrained on ImageNet → strong feature extraction
- Suitable for real‑time and deployment
Why OpenCV + Image Processing?
- Crack geometry is thin and irregular → classical CV works well
- Edge detection & skeletonization help localize cracks
- Enhances explainability beyond black‑box ML
Why Hybrid (ML + CV)?
- CNN → classification (cracked / non‑cracked)
- OpenCV → localization & severity analysis
- Produces both decision + visualization
🗂️ Datasets Used
1️⃣ Crack500 Dataset
Used for training the CNN model.
- Contains cropped crack images
- High‑quality labeled crack patterns
2️⃣ SDNET2018 Dataset
Used for realistic structure damage analysis.
📁 Dataset Structure (Required)
datasets/
├── train/
│ ├── Cracked/
│ └── Non-cracked/
├── val/
│ ├── Cracked/
│ └── Non-cracked/
⚠️ PyTorch's `ImageFolder` requires exactly this layout — one subfolder per class under each split.
🤖 Models Used
🔹 MobileNetV2 (PyTorch)
- Pretrained weights: ImageNet
- Modified classifier layer for binary classification
- Saved model: `mobilenet_crack.pth`
🔹 Grad‑CAM
- Used for visual explanation of CNN focus regions
🔬 Training Process
Steps:
- Load dataset using `ImageFolder`
- Apply data augmentation
- Load pretrained MobileNetV2
- Replace classifier head
- Train using CrossEntropyLoss
- Track loss & accuracy per epoch
- Save:
  - Model weights
  - Training loss graph
  - Training accuracy graph
Output:
outputs/
├── loss.png
├── accuracy.png
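The training steps above can be condensed into a single-epoch helper; the optimizer choice and metric bookkeeping here are illustrative, not necessarily what `train.py` does:

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, optimizer, device="cpu"):
    """Run one epoch with CrossEntropyLoss; return (mean loss, accuracy)."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    total_loss, correct, seen = 0.0, 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        # Accumulate sample-weighted loss and correct predictions
        total_loss += loss.item() * images.size(0)
        correct += (logits.argmax(1) == labels).sum().item()
        seen += images.size(0)
    return total_loss / seen, correct / seen
```

Calling this once per epoch and appending the returned values yields the per-epoch loss and accuracy curves saved to `outputs/`.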
🖼️ Image Analysis Process (Inference)
- Image upload / camera capture
- Object filtering (ignore people/windows)
- CNN prediction:
  - Cracked / Non-cracked label
  - Confidence score
- OpenCV analysis:
  - Edge density
  - Texture variance (GLCM)
  - Damage severity
- Crack highlighting:
  - Minor cracks → Green
  - Major cracks → Red (longest skeleton path)
🌐 Streamlit Web Application
Features
- Upload image
- Camera capture
- Training metrics display (front page)
- Real‑time crack analysis
- Highlighted crack visualization
Main File: `app.py`
▶️ How to Run the Project
1️⃣ Clone / Open Project
2️⃣ Create Virtual Environment
python -m venv venv
.\venv\Scripts\activate
3️⃣ Install Dependencies
pip install -r requirements.txt
4️⃣ Train Model (Optional)
python train.py
5️⃣ Run Web App
streamlit run app.py
🧾 Software Requirements
- Windows / Linux
- Python 3.9+
- VS Code (recommended)
- Streamlit
Libraries Used
- torch, torchvision
- opencv‑python
- numpy
- matplotlib
- scikit‑image
- streamlit
- pillow
💻 Hardware Requirements
Minimum:
- CPU: Intel Core i5 / AMD Ryzen 5
- 8 GB RAM

Recommended:
- CUDA-capable GPU
- 16 GB RAM
🌲 Project Structure
project c/
├── app.py
├── train.py
├── evaluate.py
├── requirements.txt
├── README.md
├── datasets/
│ ├── crack500/
│ └── sdnet/
├── models/
│ ├── mobilenet_model.py
│ └── gradcam.py
├── utils/
│ ├── classifier.py
│ ├── cv_analysis.py
│ ├── image_utils.py
│ └── object_filter.py
├── outputs/
│ ├── loss.png
│ └── accuracy.png
└── venv/
⚠️ Things to Keep in Mind
✅ Do
- Activate venv before running
- Maintain dataset folder structure
- Run `train.py` before first inference
- Use clear wall/structure images
❌ Don’t
- Rename dataset folders randomly
- Mix cracked & non‑cracked images
- Delete `outputs/` after training
- Upload unrelated images (people, objects)
🚀 Future Improvements
- Add YOLO‑based object filtering
- Multi‑class damage classification
- Crack width & length measurement
- Video / live camera stream analysis
- Cloud deployment
- Automated report generation (PDF)
- Improved major crack confidence scoring
🏁 Conclusion
This project demonstrates a practical, explainable, and deployable approach for structural damage detection using modern AI and classical vision techniques. It balances accuracy, performance, and interpretability, making it suitable for real‑world inspection systems.
📌 Developed as an academic and research‑oriented structural inspection system.