Spot That Fire V3.0

Recent wildfires worldwide have demonstrated the importance of rapid wildfire detection, mitigation, and community impact assessment analysis. Your challenge is to develop and/or augment an existing application to detect, predict, and assess the economic impacts from actual or potential wildfires by leveraging high-frequency data from a new generation of geostationary satellites, data from polar-orbiting environmental satellites, and other open-source datasets.

Rapid Responders

Summary

Our project aims to identify and respond to fire hazards rapidly by initiating recovery protocols based on the parameters of the fire. We gather these parameters using satellites that spot the fire and return geographical data, which we use to determine the spread, intensity, and threat the fire poses to nearby life. Based on the collected data, the model grades fires into five levels of threat. The model then uses this grading to reach nearby QRTs (quick response teams) with suggested protocols and SOS alerts appropriate to the level of fire.

How We Addressed This Challenge

In 2019 alone, America spent $5 billion fighting major wildfires, and wildfires cost Australia around $110 billion in the same year. Wildfires have increased fivefold since 1970 and are not slowing down. The damage is not just human lives lost but also wildlife and collateral damage: 26 people died in the Australian bushfires, and some 25,000 koalas died on a single island due to the raging flames. All these incidents have a serious impact on our biosphere, which is why it is of the utmost importance to use technology to save lives.


We developed a model which addresses the various issues once a fire is detected. The model is built on a three-pillar analytics structure for a fire breakout: Descriptive Analytics, Predictive Analytics, and Prescriptive Analytics.


Descriptive analytics uses statistical methods such as statistical modeling and data mining to provide insight into what happened in the past. This includes real-time and post hoc monitoring of suppression operations to gauge effectiveness and to develop performance measures related to resource use, productivity, and effectiveness. Our model uses parameters such as fire weather, fire size and shape, burn severity, daily perimeter growth, fire duration, and suppression expenditures.
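As a rough illustration of this descriptive layer, the sketch below summarizes historical fire records with pandas; the file name and column names (fire_size_ha, burn_severity, etc.) are hypothetical stand-ins for our actual knowledge base.

```python
# Minimal descriptive-analytics sketch (hypothetical file and column names):
# summarize historical fire records to describe what happened in the past.
import pandas as pd

fires = pd.read_csv("fires.csv")  # assumed: one row per historical fire event

# Overall spread of the descriptive parameters (size, duration, cost, ...)
summary = fires[["fire_size_ha", "duration_days", "suppression_cost_usd"]].describe()

# Mean daily perimeter growth per burn-severity class, as a simple
# performance measure for suppression operations
growth_by_severity = (
    fires.groupby("burn_severity")["daily_perimeter_growth_km"].mean().sort_values()
)

print(summary)
print(growth_by_severity)
```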


Predictive analytics uses techniques such as forecasting and machine learning to assess what might happen in the future. This includes predictions of fire weather, fire behavior, and potential control locations. The model is designed to predict, based on weather forecasts, burn probabilities, fire intensity probabilities, fire arrival times, and estimated containment time, the Fire Risk Index (FRI). The FRI of a fire dictates the response necessary by predicting potential control locations, suppression difficulty, safety zones, and escape routes.
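The exact FRI formula is not reproduced here; the sketch below only illustrates, under assumed weights and inputs, how forecast-derived probabilities and an estimated containment time could be folded into a 1-10 index.

```python
# Illustrative Fire Risk Index (FRI) sketch. The inputs and weights are
# assumptions for demonstration, not the project's published formula.
def fire_risk_index(burn_prob, intensity_prob, hours_to_containment, max_hours=72):
    """Combine predictive outputs into a 1-10 risk index."""
    containment_factor = min(hours_to_containment, max_hours) / max_hours
    score = 0.4 * burn_prob + 0.4 * intensity_prob + 0.2 * containment_factor
    return round(1 + 9 * score)  # map [0, 1] onto the 1-10 scale


print(fire_risk_index(burn_prob=0.8, intensity_prob=0.6, hours_to_containment=48))  # -> 7
```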


Prescriptive analytics uses operations research methods such as optimization and simulation to recommend efficient solutions. This includes assigning suppression resources to tasks such as asset protection or line construction. Our website is equipped with SOS alerts that are broadcast to concerned citizens in the area, and we have mapped relief points for QRT (quick response team) deployment to control the fire line. The idea is to optimize line construction, holding, mop-up, and point protection (location, type, length, timing), safety-zone creation, and burnout operations on the basis of live data points provided by the dashboard.
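As one possible prescriptive building block, the sketch below assigns QRTs to control/relief points by minimizing total travel distance with SciPy's Hungarian-algorithm solver; the coordinates are invented, and a real deployment would use road networks and live resource data.

```python
# Sketch: assign quick response teams (QRTs) to relief/control points so that
# total travel distance is minimized. Team and point coordinates are made up.
import numpy as np
from scipy.optimize import linear_sum_assignment

qrt_locations = np.array([[34.05, -118.25], [34.10, -118.40], [33.95, -118.10]])
control_points = np.array([[34.00, -118.30], [34.12, -118.20], [33.98, -118.45]])

# Cost matrix: straight-line (degree-space) distance from each QRT to each point.
cost = np.linalg.norm(qrt_locations[:, None, :] - control_points[None, :, :], axis=2)

teams, points = linear_sum_assignment(cost)  # Hungarian algorithm
for t, p in zip(teams, points):
    print(f"QRT {t} -> control point {p} (distance {cost[t, p]:.3f} deg)")
```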


All of the above techniques fetch their data from NASA FIRMS, NASA SRTM, Meteomatics, Google Places, etc. to predict the damage and the recovery steps needed to tackle the fire.

How We Developed This Project

We analysed the satellite and ground data collected by NASA to find correlations between different data points. Our goal was to find relations between weather and topographical data to predict the spread of wildfires.

 

In this process we first established parameters for our multilayer perceptron to learn relations between the collected data and potential wildfires. We then designed a model that incorporates this data to grade fires by their risk. Once the FRI (Fire Risk Index) is identified, our prescriptive model uses live resource data to suggest an action plan and critical points from which to respond to the fire. In addition, we map the live status of relief activities. We relay all this data, with action items, through a live, dynamic geolocational dashboard which predicts and describes the current situation in a locality.


To achieve the final solution we used tools and technology ranging from cloud services to data analytics, including Google Cloud Platform, Google Colab, Python (Django), scikit-learn, and Matplotlib. The final service is accessible to the local fire department and integrates right into their systems to notify firefighters, citizens, and fellow relief stations. A section of the dashboard is live and open source for everyone to view.
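As a hint of how that integration could look, below is a minimal Django view sketch exposing live fire events as JSON for the dashboard; the FireEvent model and its fields are hypothetical, not the project's actual schema.

```python
# Minimal Django view sketch for a live dashboard API. The FireEvent model
# and its fields are hypothetical stand-ins for the project's actual schema.
from django.http import JsonResponse

from .models import FireEvent  # assumed model: latitude, longitude, fri, status


def live_fires(request):
    """Return currently active fire events with their Fire Risk Index."""
    events = FireEvent.objects.filter(status="active").values(
        "latitude", "longitude", "fri"
    )
    return JsonResponse({"fires": list(events)})
```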


The biggest challenge we faced as a team was working from remote locations. Effective communication was strenuous and collaborating online was a new challenge. Despite being thousands of miles apart and coming from different specializations, we feel our biggest achievement was coming up with such a comprehensive solution, one that dived deep into a vast problem with several complexities.

How We Used Space Agency Data in This Project

Data delivers value only when it is used to solve problems or answer important questions. In our project we tried to achieve this by extracting as many insights as possible from the data to help us infer a solution to the problem we were trying to address.

The process by which we infer meaningful outcomes from the data can be divided into 4 major steps, namely:


  1. Producing Data
  2. Data Preprocessing
  3. Exploratory Data Analysis
  4. Model/Probability


The step-wise inference process is described below:


1.   Producing Data:

We mostly relied on the satellite data provided by NASA. The data was gathered under the following sections:


  1. Coordinates of the Fire events
  2. Climate conditions in and around the coordinate region
  3. Topography and Vegetation of the coordinate region


The data parameters extracted for each of the above sections are listed category-wise below:

Fire event coordinates:

a.   Latitude

b.   Longitude

c.   Date

d.   Time

e.   Confidence

f.   Day/Night


Climate Conditions:

a.   Temperature

b.   Precipitation probability

c.   Wind Speed

d.   Wind Direction


Topography and Vegetation:

a.   Soil Type

b.   Altitude

c.   Surface Type

d.   Vegetation Type

e.   Area Type
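
As a concrete example of the first category, the sketch below loads these fire-event fields from a FIRMS CSV export with pandas; the file name is a placeholder and the column names follow the usual FIRMS CSV convention.

```python
# Sketch: load a NASA FIRMS active-fire CSV export and keep the fire-event
# fields listed above. The file name is a placeholder.
import pandas as pd

firms = pd.read_csv("firms_active_fires.csv")
fire_events = firms[
    ["latitude", "longitude", "acq_date", "acq_time", "confidence", "daynight"]
]
print(fire_events.head())
```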

 

2.   Data Preprocessing:

Having accumulated data for the above-mentioned 15 parameters under 3 major categories, the next immediate step was to create a mapping between the different parameters for the identified fire events (coordinates), termed data points.

Mapping: For each identified fire event, the coordinates of the place where the fire broke out and the date and time of the event were extracted from the NASA FIRMS dataset, along with the confidence percentage that the event was a fire.

For each of the coordinates extracted above, the climatic parameters, namely temperature, wind speed, wind direction, and precipitation probability, were extracted from the meteomatics-weather-parameters dataset.

Similarly, for each coordinate, topography and vegetation parameters, namely soil type (humid or not), surface type, vegetation category, etc., were extracted from the meteomatics-topography-and-land-usage dataset, as these parameters directly or indirectly influence the growth and spread of fire.

All the above extracted parameters were tied together with the coordinates of the event into a single data frame and appended to an Excel sheet. This sheet became the knowledge base for our mathematical model to draw inferences from.
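A minimal sketch of this mapping step is shown below; the Meteomatics extract files and their column names are hypothetical stand-ins for the actual exports.

```python
# Sketch of the mapping step: join weather and topography/vegetation parameters
# onto each FIRMS fire event by coordinate, then write the Excel knowledge base.
# File and column names are hypothetical.
import pandas as pd

fire_events = pd.read_csv("firms_active_fires.csv")[
    ["latitude", "longitude", "acq_date", "acq_time", "confidence", "daynight"]
]
weather = pd.read_csv("meteomatics_weather.csv")    # latitude, longitude, temperature, wind_speed, ...
topo = pd.read_csv("meteomatics_topography.csv")    # latitude, longitude, soil_type, altitude, ...

dataset = (
    fire_events
    .merge(weather, on=["latitude", "longitude"], how="left")
    .merge(topo, on=["latitude", "longitude"], how="left")
)

# The resulting sheet is the knowledge base for the model (requires openpyxl).
dataset.to_excel("knowledge_base.xlsx", index=False)
```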


3.   Exploratory Data Analysis:

Having created the knowledge base for our mathematical model, we performed some statistical operations, such as examining the spread of the data and the cross-correlation between various parameters, to make sure that the gathered data features provide sufficient insight to draw conclusions on our hypothesis about the risk of the fire spreading.

Visualizing the data plots and correlation matrices helped us identify the top 10 parameters which directly influence the risk of fire spread. These 10 parameters became the driving parameters fed into the ML model.
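The sketch below illustrates this EDA step with pandas and Matplotlib: it plots the correlation matrix and shortlists the 10 features most correlated with a (hypothetical) spread-risk column; the actual selection also relied on visual inspection of the data plots.

```python
# EDA sketch: inspect cross-correlations in the knowledge base and shortlist
# the 10 features most correlated with a hypothetical spread-risk column.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_excel("knowledge_base.xlsx")
numeric = data.select_dtypes("number")

corr = numeric.corr()
plt.imshow(corr, cmap="coolwarm")
plt.xticks(range(len(corr)), corr.columns, rotation=90)
plt.yticks(range(len(corr)), corr.columns)
plt.colorbar()
plt.tight_layout()
plt.show()

# Top 10 parameters by absolute correlation with the spread-risk column.
top10 = corr["spread_risk"].drop("spread_risk").abs().nlargest(10).index.tolist()
print(top10)
```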

Data Creation for the Model:

To train the ML model to identify potential high-spread-risk fire events, our next step was to create the training data for the model, i.e. a labelled dataset for supervised learning. Our pre-processing script identified past fire events which turned into major wildfire outbreaks; we labelled such data points as 1, indicating that data points with such parameters lead to major fire outbreaks, and the others as 0.
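A sketch of this labelling rule is shown below; the burned-area column and the threshold used to call an event "major" are assumptions made for illustration.

```python
# Labelling sketch for supervised learning: past events that grew into major
# wildfires get label 1, the rest 0. The column name and threshold are assumed.
import pandas as pd

data = pd.read_excel("knowledge_base.xlsx")
MAJOR_FIRE_THRESHOLD_HA = 1000  # assumed cutoff for a "major" fire

data["label"] = (data["final_burned_area_ha"] >= MAJOR_FIRE_THRESHOLD_HA).astype(int)
data.to_excel("labelled_knowledge_base.xlsx", index=False)
```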


4.   Model:

We developed a multilayer perceptron classification neural network trained on the labelled dataset created above. The model provides a risk index indicating whether a given fire event falls into the major or small-scale category, based on its learning from past similar fire events labelled 0 or 1 for small-scale and large-scale events respectively.
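A minimal sketch of such a classifier with scikit-learn's MLPClassifier is shown below; the hidden-layer sizes, feature subset, and the mapping from predicted probability to the 1-10 index are assumptions, not the exact configuration we used.

```python
# Sketch of the multilayer perceptron classifier using scikit-learn. Hidden
# layer sizes, features, and the probability-to-index mapping are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_excel("labelled_knowledge_base.xlsx")
features = ["temperature", "wind_speed", "precip_prob", "altitude"]  # placeholder subset of the top 10
X_train, X_test, y_train, y_test = train_test_split(
    data[features], data["label"], test_size=0.2, random_state=42
)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=42),
)
model.fit(X_train, y_train)

# Probability of a "major fire" mapped onto the 1-10 risk index.
proba = model.predict_proba(X_test)[:, 1]
risk_index = (1 + 9 * proba).round().astype(int)
print("accuracy:", model.score(X_test, y_test))
print("sample risk indices:", risk_index[:5])
```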

Apart from giving the risk index (on a scale of 1-10) for the fire-spread probability, our model was also able to provide the most probable direction of spread of the fire, inferred from the wind direction and geographical features around the fire spot. This inference can directly help reduce the impact and spread of the fire by enabling the necessary actions in and around the area of the fire spot.
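The sketch below shows one simplified way to derive a spread direction from the wind direction (and, optionally, an uphill bearing); the blending rule is an assumption, not the project's exact inference.

```python
# Sketch: estimate the most probable spread direction from wind direction.
# Meteorological wind direction is the bearing the wind blows FROM, so the
# fire tends to spread toward the opposite (downwind) bearing. The uphill
# bias term is a simplified assumption, not the project's exact rule.
def spread_direction(wind_from_deg, uphill_bearing_deg=None, uphill_weight=0.3):
    downwind = (wind_from_deg + 180) % 360
    if uphill_bearing_deg is None:
        return downwind
    # Blend the downwind bearing with the uphill bearing (fire runs faster upslope).
    diff = ((uphill_bearing_deg - downwind + 180) % 360) - 180
    return (downwind + uphill_weight * diff) % 360


print(spread_direction(wind_from_deg=270))                       # westerly wind -> spreads east (90 deg)
print(spread_direction(wind_from_deg=270, uphill_bearing_deg=45))  # biased toward the upslope bearing
```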

Project Demo
Tags
#airquality, #fire, #wildfire, #wildlife
Judging
This project was submitted for consideration during the Space Apps Judging process.