AI Waste Sorting

Dual-Model Detection

Snap a photo and the app runs both YOLOv11 and Mask R-CNN (Detectron2) to determine whether an item is Recyclable or Trash.
Note: Browser camera access required.

Launch Camera  

How It Works

Waste mismanagement is a global crisis. This project aims to simplify recycling by using Artificial Intelligence to instantly classify waste items via a smartphone camera.

The system now employs two distinct neural network architectures:

  • YOLOv11: Optimized for speed and real-time detection.
  • Mask R-CNN (Detectron2): Optimized for high-accuracy segmentation.
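The two models produce different raw outputs (YOLO-style detectors return bounding boxes, while Mask R-CNN adds per-pixel segmentation masks), so comparing them side by side implies reducing both to a common detection schema. A minimal sketch in plain Python; the field names and the stub detections are illustrative assumptions, not the project's actual API:

```python
# Hypothetical adapter reducing both model outputs to one common schema.
# Field names ("label", "score", "box", "mask") are illustrative assumptions.

def normalize_yolo(raw):
    """Map a YOLO-style detection (label, confidence, xyxy box) to a dict."""
    label, conf, box = raw
    return {"label": label, "score": conf, "box": box, "mask": None}

def normalize_maskrcnn(raw):
    """Map a Mask R-CNN-style detection (adds a binary mask) to the same dict."""
    label, conf, box, mask = raw
    return {"label": label, "score": conf, "box": box, "mask": mask}

# Stub detections standing in for real model output on the same photo:
yolo_det = normalize_yolo(("bottle", 0.91, (10, 20, 110, 220)))
rcnn_det = normalize_maskrcnn(("bottle", 0.95, (12, 18, 108, 223), [[0, 1], [1, 1]]))

print(yolo_det["label"], rcnn_det["label"])  # both detect "bottle"
```

With both outputs in one shape, the frontend can render either result with the same drawing code, treating the mask as optional.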

System Architecture

1. Model Training: Both models were trained on the TACO (Trash Annotations in Context) Dataset using an NVIDIA A100 GPU. The models map specific waste items (bottles, cans, bags, etc.) to two broad categories: Recyclable or Trash.
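The item-to-category step described above amounts to a simple lookup from detected class to bin. A sketch in plain Python; the class names below are illustrative examples in the spirit of TACO's labels, not the dataset's exact taxonomy:

```python
# Hypothetical mapping from detected item classes to the app's two broad
# categories. Class names are illustrative, not the exact TACO label set.
CATEGORY_MAP = {
    "plastic_bottle": "Recyclable",
    "aluminium_can": "Recyclable",
    "glass_bottle": "Recyclable",
    "plastic_bag": "Trash",
    "styrofoam": "Trash",
    "cigarette": "Trash",
}

def classify(item: str) -> str:
    # Unknown items default to Trash: mis-sorting into recycling
    # contaminates the whole batch, so Trash is the safer fallback.
    return CATEGORY_MAP.get(item, "Trash")

print(classify("aluminium_can"))  # → Recyclable
print(classify("mystery_item"))  # → Trash
```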

2. The Backend: The backend runs on Hugging Face Spaces using a Docker container equipped with PyTorch and Detectron2. When you snap a photo, the server runs both models simultaneously and returns two segmented results.
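Running both models on one request can be sketched with a standard-library thread pool: submit both inferences, then wait for both results before responding. The predictor functions below are stubs standing in for the real YOLOv11 and Detectron2 calls (their names and return shapes are assumptions, not the project's actual backend code):

```python
from concurrent.futures import ThreadPoolExecutor

# Stub predictors standing in for real YOLOv11 / Detectron2 inference;
# each would normally return segmented results for the uploaded photo.
def run_yolo(image):
    return {"model": "yolov11", "detections": [("bottle", 0.91)]}

def run_maskrcnn(image):
    return {"model": "mask_rcnn", "detections": [("bottle", 0.95)]}

def predict_both(image):
    # Launch both inferences concurrently and block until both finish,
    # so the response contains the two segmented results together.
    with ThreadPoolExecutor(max_workers=2) as pool:
        yolo_future = pool.submit(run_yolo, image)
        rcnn_future = pool.submit(run_maskrcnn, image)
        return yolo_future.result(), rcnn_future.result()

yolo_out, rcnn_out = predict_both(image=b"...jpeg bytes...")
print(yolo_out["model"], rcnn_out["model"])  # yolov11 mask_rcnn
```

In a real deployment the total latency is dominated by the slower model rather than the sum of both, which is what makes the dual-result response practical.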

3. The Frontend: The interface lets you toggle between the two model outputs to compare their performance in real time.
