How the Weapon Detection System Was Made

This is a weapon detection backend system that uses artificial intelligence to identify weapons in images and videos. Think of it as a smart security system that can automatically spot dangerous objects like guns or knives in camera footage.
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│    Frontend     │    │     Backend     │    │    Database     │
│   (React/Web)   │◄──►│   (Flask API)   │◄──►│  (PostgreSQL)   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌─────────────────┐
                       │    AI Models    │
                       │  (YOLO + LLM)   │
                       └─────────────────┘
```
```
crime-server/
├── 📄 app.py                             # Main application file
├── 📄 models.py                          # Database table definitions
├── 📄 config.py                          # Configuration settings
├── 📄 ollama_service.py                  # AI chat service
├── 📄 requirements.txt                   # Python dependencies
├── 📄 schema.sql                         # Database setup
├── 📁 model/
│   └── 📄 best.pt                        # Trained YOLO model
├── 📁 .venv/                             # Virtual environment
├── 📄 .env                               # Environment variables
├── 📄 README.md                          # Project documentation
├── 📄 COMPLETE_CODE_EXPLANATION.md       # Detailed code explanation
├── 📄 BOUNDING_BOX_IMPLEMENTATION.md     # Bounding box guide
└── 📄 test_api.py                        # API testing script
```
The core files are:

- `app.py` (The Heart)
- `models.py` (Database Structure)
- `config.py` (Settings)
- `ollama_service.py` (AI Chat)

On startup, `app.py` loads the trained model, connects to the database, and starts the Flask application:

```python
# Load the pre-trained YOLO model
model = YOLO('model/best.pt')

# Connect to database
# Start Flask application
```
Image flow:

```
User uploads image → Backend receives it → YOLO processes it →
Detects weapons → Returns coordinates → Stores in database →
Sends response to frontend
```
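The last step of this flow wraps the raw detections in the JSON shape shown later in this guide. A minimal sketch of that step (the helper name `build_detection_response` is illustrative, not from the codebase):

```python
def build_detection_response(detections):
    """Wrap raw YOLO detections in the JSON shape the frontend expects."""
    return {
        'success': True,
        'weapons_detected': len(detections) > 0,
        'detections': detections,
        'detection_count': len(detections),
    }

# Example with a single detection
resp = build_detection_response(
    [{'class': 'gun', 'confidence': 0.85, 'bbox': [100, 50, 200, 150]}]
)
print(resp['weapons_detected'], resp['detection_count'])  # True 1
```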
Video flow:

```
User uploads video → Backend splits into frames →
Process each frame → Combine results →
Return detection summary with timestamps
```
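Running YOLO on every frame is expensive, so videos are usually sampled at a fixed interval and each sampled frame is tagged with its timestamp. A pure-Python sketch of that sampling logic (an assumption about the approach, not code from `app.py`):

```python
def frames_to_sample(total_frames, fps, every_n_seconds=1.0):
    """Return (frame_index, timestamp_seconds) pairs to run detection on."""
    step = max(1, int(fps * every_n_seconds))  # frames between samples
    return [(i, round(i / fps, 2)) for i in range(0, total_frames, step)]

# A 5-second clip at 30 fps, sampled once per second
print(frames_to_sample(150, 30))
# [(0, 0.0), (30, 1.0), (60, 2.0), (90, 3.0), (120, 4.0)]
```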
```json
{
  "success": true,
  "weapons_detected": true,
  "detections": [
    {
      "class": "gun",
      "confidence": 0.85,
      "bbox": [100, 50, 200, 150]
    }
  ],
  "detection_count": 1
}
```

The `bbox` array is `[x1, y1, x2, y2]`.
- `POST /detect-weapons-image`: detect weapons in an uploaded image
- `POST /detect-weapons-video`: detect weapons in an uploaded video
- `GET /analytics`: summary statistics
- `GET /health`: health check
- `POST /chat`: AI chat service
- `GET /detections`: detection history
`DetectionResult` stores every weapon detection:
- id: Unique identifier
- filename: Name of processed file
- file_type: 'image' or 'video'
- weapons_detected: True/False
- detection_count: Number of weapons found
- confidence_scores: AI confidence levels
- timestamp: When detection occurred
- bounding_boxes: Weapon locations (JSON)
`Analytics` stores summary statistics:
- id: Unique identifier
- date: Date of analytics
- total_detections: Daily detection count
- weapon_types: Types of weapons found
- average_confidence: Average AI confidence
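The actual tables are defined as SQLAlchemy models in `models.py`. As a plain-Python sketch, a `DetectionResult` row (field names taken from the list above; the dataclass itself is only illustrative):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime

@dataclass
class DetectionResult:
    id: int
    filename: str
    file_type: str            # 'image' or 'video'
    weapons_detected: bool
    detection_count: int
    confidence_scores: list   # AI confidence levels
    bounding_boxes: list      # weapon locations, [x1, y1, x2, y2] per weapon
    timestamp: datetime = field(default_factory=datetime.utcnow)

row = DetectionResult(
    id=1, filename='cam1.jpg', file_type='image',
    weapons_detected=True, detection_count=1,
    confidence_scores=[0.85], bounding_boxes=[[100, 50, 200, 150]],
)
print(asdict(row)['detection_count'])  # 1
```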
1. Clone the repository:

```shell
git clone <repository-url>
cd crime-server
```

2. Create and activate a virtual environment:

```shell
python -m venv .venv
.venv\Scripts\activate       # Windows
source .venv/bin/activate    # Linux/Mac
```

3. Install dependencies:

```shell
pip install -r requirements.txt
```

4. Create a `.env` file:

```
DATABASE_URL=postgresql://username:password@localhost/crime_db
SECRET_KEY=your-secret-key
OLLAMA_BASE_URL=http://localhost:11434
```

5. Create the database tables:

```shell
python -c "from app import app, db; app.app_context().push(); db.create_all()"
```

6. Start the server:

```shell
python app.py
```

The server will start at http://localhost:5000.
```python
import cv2
from ultralytics import YOLO

model = YOLO('model/best.pt')

def detect_weapons_in_image(image_path):
    """
    Takes an image and finds weapons in it.

    Steps:
    1. Load the image using OpenCV
    2. Run the YOLO model on the image
    3. Filter results for weapons only
    4. Extract bounding box coordinates
    5. Return results with confidence scores
    """
    # Load and process image
    image = cv2.imread(image_path)

    # Run AI detection
    results = model(image)

    # Process results
    detections = []
    for result in results:
        for box in result.boxes:
            # Extract weapon information
            class_name = model.names[int(box.cls)]
            confidence = float(box.conf)
            bbox = box.xyxy[0].tolist()  # [x1, y1, x2, y2]
            detections.append({
                'class': class_name,
                'confidence': confidence,
                'bbox': bbox
            })
    return detections
```
What are bounding boxes?

A bounding box is `[x1, y1, x2, y2]`, where `(x1, y1)` is the top-left corner and `(x2, y2)` is the bottom-right corner. Example:

```python
bbox = [100, 50, 300, 200]
# This means:
# - Box starts at pixel (100, 50) from the top-left
# - Box ends at pixel (300, 200)
# - Width:  300 - 100 = 200 pixels
# - Height: 200 - 50  = 150 pixels
```
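The width/height arithmetic above generalizes to small helpers (the function names here are illustrative, not from the codebase):

```python
def bbox_size(bbox):
    """Width and height of an [x1, y1, x2, y2] box."""
    x1, y1, x2, y2 = bbox
    return x2 - x1, y2 - y1

def bbox_area(bbox):
    """Area in pixels of an [x1, y1, x2, y2] box."""
    w, h = bbox_size(bbox)
    return w * h

print(bbox_size([100, 50, 300, 200]))  # (200, 150)
print(bbox_area([100, 50, 300, 200]))  # 30000
```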
All Python dependencies are listed in `requirements.txt` and installed with `pip install`. The detection pipeline itself has three stages:

```python
# 1. Preprocessing
image = cv2.imread(image_path)
image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# 2. Inference
results = model(image_rgb)

# 3. Post-processing
for result in results:
    boxes = result.boxes
    if boxes is not None:
        # Non-maximum suppression already applied
        confidences = boxes.conf.cpu().numpy()
        class_ids = boxes.cls.cpu().numpy()
        bboxes = boxes.xyxy.cpu().numpy()
```
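After post-processing, detections are typically filtered by a confidence threshold before being stored. A minimal pure-Python sketch (the 0.5 threshold and helper name are assumptions, not values from `config.py`):

```python
def filter_detections(detections, min_confidence=0.5):
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d['confidence'] >= min_confidence]

raw = [
    {'class': 'gun',   'confidence': 0.85, 'bbox': [100, 50, 200, 150]},
    {'class': 'knife', 'confidence': 0.30, 'bbox': [10, 10, 40, 80]},
]
print(len(filter_detections(raw)))  # 1
```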
The trained model file lives at `model/best.pt`.

Common errors and fixes:

Error: "No module named 'ultralytics'"
Solution: pip install ultralytics
Error: "could not connect to server"
Solution: Check DATABASE_URL in .env file
Error: "File too large"
Solution: Check UPLOAD_FOLDER and MAX_CONTENT_LENGTH settings
Error: "Model file not found"
Solution: Ensure model/best.pt exists
Error: "CUDA out of memory"
Solution: Reduce batch size or use CPU inference
Run `test_api.py` to verify functionality.

This weapon detection backend is a complete AI-powered system that:
✅ Detects weapons in images and videos using YOLO
✅ Returns bounding box coordinates for frontend visualization
✅ Stores detection results in a PostgreSQL database
✅ Provides analytics and detection history
✅ Includes AI chat for intelligent responses
✅ Offers comprehensive APIs for frontend integration
Whether you are a beginner, an ML/CV engineer, or a full-stack developer, this guide covers the complete codebase structure and functionality. For specific implementation details, refer to the individual markdown files and code comments throughout the project.


