Optical flow estimation for flame detection in videos

Martin Mueller et al. IEEE Trans Image Process. 2013 Jul;22(7):2786-97. doi: 10.1109/TIP.2013.2258353. Epub 2013 Apr 16.
Abstract

Computational vision-based flame detection has drawn significant attention in the past decade as camera surveillance systems have become ubiquitous. Whereas many discriminating features, such as color, shape, and texture, have been employed in the literature, this paper proposes a set of motion features based on motion estimators. The key idea is to exploit the difference between the turbulent, fast motion of fire and the structured, rigid motion of other objects. Since classical optical flow methods do not model the characteristics of fire motion (e.g., non-smoothness of motion, non-constancy of intensity), two optical flow methods are specifically designed for the fire detection task: optimal mass transport models fire with dynamic texture, while a data-driven optical flow scheme models saturated flames. Characteristic features related to the flow magnitudes and directions are then computed from the flow fields to discriminate between fire and non-fire motion. The proposed features are tested on a large video database to demonstrate their practical usefulness. Moreover, a novel evaluation method based on fire simulations is proposed, which allows for a controlled environment in which to analyze parameter influences, such as flame saturation, spatial resolution, frame rate, and random noise.
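The paper does not reproduce its feature definitions in this abstract, but the core idea, discriminating turbulent, multi-directional fire motion from coherent rigid motion using flow directions, can be illustrated with a simple circular-statistics stand-in. The sketch below is an assumption-laden illustration, not the paper's actual features: `direction_spread` is a hypothetical name, and the magnitude-weighted mean resultant length is a common circular-dispersion measure, not the authors' Eq. definitions.

```python
import numpy as np

def direction_spread(u, v, mag_thresh=1e-3):
    """Circular spread of flow directions, magnitude-weighted.

    Returns a value near 0 for coherent, rigid motion (one dominant
    direction) and near 1 for dispersed, fire-like motion. `u` and `v`
    are the horizontal and vertical flow components (2-D arrays).
    Illustrative only; not the paper's feature definition.
    """
    mag = np.hypot(u, v)
    mask = mag > mag_thresh
    if not np.any(mask):
        return 0.0
    ang = np.arctan2(v[mask], u[mask])
    w = mag[mask]
    # Magnitude-weighted mean resultant length of the direction angles:
    # 1 when all vectors agree, near 0 when directions are dispersed.
    r = np.hypot(np.sum(w * np.cos(ang)), np.sum(w * np.sin(ang))) / np.sum(w)
    return 1.0 - r

# Rigid motion: uniform rightward flow.
u1, v1 = np.ones((8, 8)), np.zeros((8, 8))
print(direction_spread(u1, v1))  # 0.0 (perfectly coherent flow)

# Turbulent-like motion: random directions (seeded for reproducibility).
rng = np.random.default_rng(0)
ang = rng.uniform(-np.pi, np.pi, (8, 8))
u2, v2 = np.cos(ang), np.sin(ang)
print(direction_spread(u2, v2))  # high spread for dispersed directions
```

A measure of this kind captures the same intuition as the motion histograms in Fig. 8: fire spreads flow energy across many directions, while a moving rigid object concentrates it.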


Figures

Fig. 1
Visual outline of the scope of this paper. The underlying goal of the research is improved robustness to the rigid motion of fire-colored objects and to unfavorable backgrounds, which tend to cause many false detections in current systems.
Fig. 2
The proposed fire detection algorithm. The paper's focus is the feature extraction block, where two optical flow fields (OMT and NSD) are computed in parallel, from which the 4-D feature vector is built.
Fig. 3
Hue term f(min{|H − c_H|, 1 − |H − c_H|}) in Eq. (9).
Fig. 4
Two examples of the generalized mass transformation in Eq. (9). (a) and (c): original images. (b) and (d): respective generalized mass (black = 0, white = 1). Fire texture is preserved, while saturated regions are assigned low mass.
Fig. 5
OMT flow fields: fire with dynamic texture (left) and a white hat (right) moving up/right. The red box indicates the area for which the flow field is shown.
Fig. 6
NSD flow fields: saturated fire (left) and a white hat (right) moving up/right. The red box indicates the area for which the flow field is shown.
Fig. 7
(a) Ideal source flow template and (b) OMT flow field for the fire image in Fig. 5. The source matching feature is obtained by the maximum absolute value of the convolution between (a) and (b).
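The source matching operation described in the caption, taking the maximum absolute value of the correlation between an ideal outward-pointing "source" template and the flow field, can be sketched as follows. This is a minimal stand-in under stated assumptions: the template construction, patch size, and function names are hypothetical, not taken from the paper.

```python
import numpy as np

def source_template(size):
    """Hypothetical ideal source template: unit vectors pointing
    outward from the patch center (cf. Fig. 7(a))."""
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    dx, dy = x - c, y - c
    r = np.hypot(dx, dy)
    r[r == 0] = 1.0  # avoid division by zero at the center (vector stays 0)
    return dx / r, dy / r

def source_matching(u, v, size=5):
    """Maximum absolute correlation between the source template and the
    flow field (u, v). At each offset, the score accumulates the dot
    products between template vectors and the local flow vectors."""
    tu, tv = source_template(size)
    h, w = u.shape
    best = 0.0
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            s = (np.sum(u[i:i+size, j:j+size] * tu)
                 + np.sum(v[i:i+size, j:j+size] * tv))
            best = max(best, abs(s))
    return best

# A diverging (source-like) field scores high; uniform rigid motion
# scores ~0, since the template's components sum to zero by symmetry.
du, dv = source_template(11)              # reuse the template as a diverging field
ru, rv = np.ones((11, 11)), np.zeros((11, 11))
print(source_matching(du, dv) > source_matching(ru, rv))  # True
```

The key property is that a spatially uniform flow (a rigid translation) is nearly orthogonal to the radially symmetric template, so only genuinely diverging motion, as produced by an expanding flame, yields a large response.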
Fig. 8
Motion histograms for the flows in Fig. 6. The fire in (a) has a multi-directional distribution, whereas the hat in (b) moves dominantly up and to the right.
Fig. 9
Examples of detected fire scenes with the resulting probabilities.
Fig. 10
Examples of potential false positives correctly classified as non-fire scenes with the resulting probabilities.
Fig. 11
Examples of false negative detections (fire scenes falsely classified as non-fire) with the resulting probabilities.
Fig. 12
Examples of false positive detections (non-fire scenes falsely classified as fire) with the resulting probabilities.
Fig. 13
Sample frames from the fire simulation experiment.
Fig. 14
Directional features f3 and f4 for frames from the simulation experiment. Each marker corresponds to one frame. Different markers indicate different backgrounds: asterisk - black, circle - city, cross - tree, diamond - valley, square - sky. The separation line L best separates the two clusters according to Eq. (25).
Fig. 15
Influence of parameter changes on the error in Eq. (25) and on the slope of the separation line L in Fig. 14.
Fig. 16
From the OpenCV Horn-Schunck implementation (CVHS), computed on the simulation experiment in Section V-C, the probability of characteristic direction ϕ[v] and the probability of characteristic magnitude Λ[v], as defined in [13], are extracted. This figure is a direct comparison to Fig. 14 (asterisk - black, circle - city, cross - tree, diamond - valley, square - sky). CVHS works well only for the black background; for non-trivial backgrounds, the features become unreliable.

References

    du Bartas G. La Sepmaine ou Creation du Monde. Michel Gadoulleau et Jean Febvrier; Paris, France: 1578.
    Çelik T, Demirel H. Fire detection in video sequences using a generic color model. Fire Safety J. 2009;44(2):147–158.
    Borges P, Izquierdo E. A probabilistic approach for vision-based fire detection in videos. IEEE Trans. Circuits Syst. Video Technol. 2010 May;20(5):721–731.
    Ho C. Machine vision-based real-time early flame and smoke detection. Meas. Sci. Technol. 2009;20(4):045502.
    Marbach G, Loepfe M, Brupbacher T. An image processing technique for fire detection in video images. Fire Safety J. 2006;41(4):285–289.
