pedbrgs / fire-detection

Fire and smoke detection using spatial and temporal patterns.

License: MIT License

Python 97.78% Shell 1.81% Dockerfile 0.41%
fire-detection convolutional-neural-networks smoke-detection yolo dfire


Fire Detection

Author: Pedro Vinícius A. B. Venâncio¹

¹ Graduate Program in Electrical Engineering (PPGEE/UFMG)


About

This repository contains the models and source code of the hybrid fire detection systems implemented during my master's degree, as well as some baseline models for comparison. The proposed hybrid systems are composed of two sequential stages: (i) spatial detection, which identifies and locates fire and smoke events in the scene based on spatial patterns, and (ii) temporal analysis of the events detected in the previous stage, which makes the final decision on whether a fire is actually taking place. The baseline models are simple convolutional neural networks for fire classification proposed in the literature.

How to run fire and smoke detection on a video

Tutorial

  1. After cloning the repository, copy the videos you want to run the algorithms on into the examples/ folder.

  2. Build the fire-detection image from the available Dockerfile.

     docker build -t fire-detection .

  3. Create and run a new container from the fire-detection image.

     docker run -it --rm fire-detection /bin/bash

  4. Choose which model you want to run and follow the steps in its respective subsection.

Detection using a hybrid system

The first stage of the hybrid system is a YOLOv5 network (small or large), and the second stage can be either an area variation technique (AVT) or a temporal persistence technique (TPT). We recommend AVT for outdoor scenes and TPT for indoor scenes.

After running the system, the videos with the detections are saved in runs/detect/exp/.

YOLOv5+AVT

If you want to use the hybrid system YOLOv5+AVT, run the following command inside the container:

python detect.py --source <video_file> --weights ./weights/<weights_file> --temporal tracker

where <video_file> is the video in which fire will be detected and <weights_file> is the file with the network weights (either yolov5s.pt or yolov5l.pt). You can change the parameters of the area variation technique with the additional flags --area-thresh and --window-size.
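This README does not spell out how the area variation technique works internally. As a rough, hypothetical sketch only (the function name, window semantics, and default values below are assumptions, not the repository's API), a sliding-window check on the relative variation of a tracked detection's bounding-box area could look like:

```python
from collections import deque

def area_variation_alarm(areas, window_size=12, area_thresh=0.15):
    """Hypothetical sketch of an area variation check: confirm a fire when
    the relative variation of a tracked fire/smoke bounding-box area over a
    sliding window exceeds a threshold. The intuition is that real flames
    tend to change size over time, while static false positives do not."""
    window = deque(maxlen=window_size)
    for area in areas:
        window.append(area)
        if len(window) == window.maxlen:
            mean_area = sum(window) / len(window)
            variation = (max(window) - min(window)) / mean_area
            if variation > area_thresh:
                return True  # temporal stage confirms the spatial detection
    return False

# A steadily growing area triggers the alarm; a constant area does not.
growing = [100 * (1.1 ** i) for i in range(20)]
static = [100.0] * 20
```

The `--area-thresh` and `--window-size` flags would map onto the two parameters of such a check.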

YOLOv5+TPT

If you want to use the hybrid system YOLOv5+TPT, run the following command inside the container:

python detect.py --source <video_file> --weights ./weights/<weights_file> --temporal persistence

where <video_file> is the video in which fire will be detected and <weights_file> is the file with the network weights (either yolov5s.pt or yolov5l.pt). You can change the parameters of the temporal persistence technique with the additional flags --persistence-thresh and --window-size.
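Again as a hypothetical sketch only (names and defaults are assumptions, not the repository's API), a temporal persistence check could confirm a fire only when detections persist across enough of the recent frames, filtering out one-off spatial false positives:

```python
from collections import deque

def persistence_alarm(frame_has_detection, window_size=12, persistence_thresh=0.8):
    """Hypothetical sketch of a temporal persistence check: confirm a fire
    only when at least a fraction `persistence_thresh` of the last
    `window_size` frames contain a fire/smoke detection."""
    window = deque(maxlen=window_size)
    for detected in frame_has_detection:
        window.append(bool(detected))
        if len(window) == window.maxlen:
            if sum(window) / len(window) >= persistence_thresh:
                return True  # detections persisted long enough to confirm
    return False

# Sustained detections confirm a fire; alternating or absent ones do not.
sustained = [True] * 12
flickering = [True, False] * 6
```

The `--persistence-thresh` and `--window-size` flags would map onto the two parameters of such a check.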

Detection using YOLOv5

YOLOv5

If you want to use only the YOLOv5 network, run the following command inside the container:

python detect.py --source <video_file> --imgsz 640 --weights ./weights/<weights_file>

where <video_file> is the video in which fire will be detected and <weights_file> is the file with the network weights (either yolov5s.pt or yolov5l.pt). You can change the parameters of the YOLOv5 network with the additional flags --imgsz, --conf-thres, and --iou-thres.
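For context on --iou-thres: during non-maximum suppression, candidate boxes that overlap a higher-confidence box by more than this intersection-over-union value are suppressed. IoU between two boxes in (x1, y1, x2, y2) format is computed as:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2) corner coordinates."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # Union = sum of the two areas minus the shared intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and partial overlaps fall in between; lowering --iou-thres suppresses more overlapping boxes.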

Detection using baseline models

If you want to use a baseline model, run the following command inside the container:

python baseline.py --video <video_file> --model <model_name>

where <video_file> is the video in which you will detect fire and <model_name> is the name of the model to be used (can be 'firenet' or 'mobilenet').
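The baseline models classify the video frame by frame and then print a per-video summary (see the issue output further below, with columns detected, first_frame, and time_avg). As a minimal sketch of how such per-frame predictions could be aggregated (the aggregation logic here is an assumption, not the repository's code):

```python
def summarize(predictions, times):
    """Hypothetical sketch: collapse per-frame classifier outputs
    ('fire' / 'non_fire') and per-frame inference times into a
    video-level summary like the one baseline.py prints."""
    fire_frames = [i for i, p in enumerate(predictions) if p == "fire"]
    return {
        "detected": bool(fire_frames),                      # any fire frame at all
        "first_frame": fire_frames[0] if fire_frames else None,  # earliest fire frame
        "time_avg": sum(times) / len(times),                # mean inference time (s)
    }

# Fire first appears at frame index 10 out of 11 frames.
summary = summarize(["non_fire"] * 10 + ["fire"], [0.05] * 11)
```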

Models

From the root of this repository, download the model weights by running the ./scripts/download_models.sh script, or manually via the links below.

Citation

Please cite the following paper if you use our proposed hybrid systems for fire and smoke detection:

If you use our YOLOv4 models for fire and smoke detection, please cite the following paper:

References


fire-detection's Issues

Show fire/smoke detections?

I am running the model with python3 baseline.py --video ex1.mp4 --model mobilenet, but it only shows some bash output and no output video. Does it save the output file somewhere?

1/1 [==============================] - 0s 25ms/step
Time taken =  0.04462027549743652
Prediction: non_fire
1/1 [==============================] - 0s 25ms/step
Time taken =  0.043842315673828125
Prediction: non_fire
1/1 [==============================] - 0s 26ms/step
Time taken =  0.04700660705566406
Prediction: non_fire
1/1 [==============================] - 0s 27ms/step
Time taken =  0.050565481185913086
Prediction: non_fire
Results
     video    network  detected  first_frame  time_avg
0  ex1.mp4  mobilenet      True           10  0.049584

Are the baseline models for fire only?

I was trying your baseline models (FireNet and MobileNet). Are they for fire only, and not smoke?
python baseline.py --video <video_file> --model firenet

Runtime error on CUDA

I have a hard time running it, I guess because the code is from a few years ago, and I get issues with the newer API, an A100 GPU, etc.
Are there any plans to test against, or address issues with, newer library versions, newer GPUs, etc.?

RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.

Help with some configuration values

Thank you for your work "A hybrid method for fire detection based on spatial and temporal patterns (2023)".

My question is:

  • About the test videos: are the videos whose names start with "FP" the positive samples (containing fire/smoke), and those starting with "VP" the negative samples?

  • About result Table 3: did you use the following values, as provided in the code, to produce that table, or different ones?

    conf_thres=0.25,  # confidence threshold
    iou_thres=0.45,  # NMS IOU threshold

Thank you for your clarification!

How can I convert YOLOv5 to YOLOv8?

Hi, I want to ask how I can convert this code to the YOLOv8 format so that the temporal features can still be used after detection. Could you tell me how to do this?
