We've created a machine learning model designed to detect the presence of cloud streets (horizontal convective rolls) in satellite images. We leveraged NASA's Global Imagery Browse Services (GIBS) to collect these high-resolution images, and used the dataset provided in this challenge's resources tab to train the model.
We also successfully created a clone of NASA's Worldview on a local server (https://spaceboys.xyz), which we call Nimbus Worldview, and added a new layer there showing which of the ~12,800 images our model detected cloud streets in. Our "Nimbus Horizontal Convective Rolls" layer provides an accessible, user-friendly interface for viewing these results.
Google's TensorFlow platform was the perfect starting point for our machine learning (ML) system, since it supports convolutional neural networks (ideal for analyzing visual imagery) out of the box. After configuration, we found the performance quite impressive - on a moderate gaming desktop (with an RTX 2060 Super graphics card), our model trained on the cloud streets data to 80% accuracy in just under 30 minutes! In addition, the model takes less than 100 microseconds (1/10th of a millisecond) to evaluate an image!
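For context, a binary classifier of this kind can be assembled in only a few lines of Keras. The sketch below is an illustration of the approach rather than our exact architecture - the layer sizes, the 64x64 tile resolution, and the `data/` folder layout are assumptions, not what we actually used.

```python
# Minimal sketch of a "cloud streets: yes/no" CNN in TensorFlow/Keras.
# Layer sizes, the 64x64 input resolution, and paths are illustrative only.
import tensorflow as tf

def build_model(tile_size=64):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(tile_size, tile_size, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # "yes"/"no" output
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Assumes labelled tiles organised as data/yes/*.jpg and data/no/*.jpg.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data", image_size=(64, 64), batch_size=32, label_mode="binary")
    model = build_model()
    model.fit(train_ds, epochs=10)
```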
NASA's Worldview application displays layers by first requesting satellite imagery from GIBS, rendering those image tiles, and then adding layers from separate information also obtained from GIBS. In Nimbus Worldview, all the same layers are available; however, Nimbus also calls upon our own server (spaceboys.xyz) to overlay red pixels where cloud streets have been detected.
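Our real detection server is written in Java, but the shape of the exchange is simple: the client asks about a tile, and the server answers with the model's verdict. Here is a rough Python/Flask sketch of that idea - the route and the `detections.json` format are hypothetical placeholders, not our actual implementation.

```python
# Illustrative only (the real server is Java): a tiny HTTP endpoint that
# reports whether cloud streets were detected in a given tile.
import json
from flask import Flask, jsonify

app = Flask(__name__)

# Pre-computed model output, keyed by "date/zoom/row/col" (hypothetical format).
with open("detections.json") as f:
    DETECTIONS = set(json.load(f))

@app.route("/detections/<date>/<int:zoom>/<int:row>/<int:col>")
def lookup(date, zoom, row, col):
    key = f"{date}/{zoom}/{row}/{col}"
    return jsonify({"cloud_streets": key in DETECTIONS})

if __name__ == "__main__":
    app.run(port=8080)
```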
In summary, our ML model classifies each image tile it evaluates as "yes" or "no" for the presence of cloud streets. It then takes the "yes" tiles, translates their row/column values into a set of coordinates compatible with Nimbus Worldview, and feeds that information to our spaceboys.xyz server. When using Nimbus Worldview, each image tile is rendered before any layers, and when you zoom in, more tiles are displayed for a given 'piece' of the Earth's surface. With the Nimbus layer applied, each newly displayed tile triggers a call to our server to check whether our ML model detected cloud streets in that tile - if it did, an overlay is drawn (in real time) on that tile. This means we are not pre-rendering images for the overlay; instead, each part of our layer is requested, drawn, and rendered with no buffering every time the magnification increases or the map is moved!
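The row/column-to-coordinate step is just arithmetic over the tile grid. The sketch below assumes a simple geographic (EPSG:4326) grid with 2^(z+1) columns and 2^z rows at zoom level z; the exact grid dimensions depend on the GIBS tile matrix set, so treat the numbers as illustrative.

```python
def tile_to_bbox(zoom, row, col):
    """Convert a tile's (row, col) at a given zoom level into a lon/lat
    bounding box (west, south, east, north), assuming a geographic grid
    2^(zoom+1) tiles wide and 2^zoom tiles tall (the real GIBS tile
    matrix set may differ)."""
    cols = 2 ** (zoom + 1)
    rows = 2 ** zoom
    tile_width = 360.0 / cols   # degrees of longitude per tile
    tile_height = 180.0 / rows  # degrees of latitude per tile
    west = -180.0 + col * tile_width
    north = 90.0 - row * tile_height
    return (west, north - tile_height, west + tile_width, north)

# Example: bounding box for tile (row=3, col=5) at zoom level 3.
print(tile_to_bbox(3, 3, 5))  # (-67.5, 0.0, -45.0, 22.5)
```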
While our model has been trained to detect cloud streets for this project, it could be given other data in order to detect (or even predict) different natural phenomena.
We were motivated to pursue this challenge because we had an interest in machine learning, but not much experience - we wanted something difficult enough to prove engaging, while being within the realm of our capabilities! We delegated different tasks to each group member, forming 'teams' of 2 when necessary, and touched base intermittently to share our progress.
Our server was built in Java, our machine learning model uses Google's TensorFlow platform and Python, and Worldview is built using Node (JavaScript) and Python. We downloaded images from GIBS using a C# script. All of this is available in our GitHub repository!
First, we used Worldview (https://worldview.earthdata.nasa.gov/) to navigate the available resources and as a frame of reference to begin building our script for downloading the "tiles" of satellite imagery that we ran through our ML model. That script downloaded all the necessary tiles for evaluation from GIBS.
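Our actual downloader is the C# script in the repository, but the idea fits in a few lines of Python. The request pattern below follows the GIBS WMTS REST style described in the GIBS API documentation; the layer name, date, zoom level, tile-grid size, and output paths are example values only.

```python
# Sketch of the tile download step (our real script is written in C#).
# Layer, date, zoom, tile counts, and tile matrix set are example values.
import os
import requests

GIBS = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
        "{layer}/default/{date}/{matrix_set}/{zoom}/{row}/{col}.jpg")

def download_tiles(layer, date, zoom, rows, cols,
                   matrix_set="250m", out_dir="tiles"):
    os.makedirs(out_dir, exist_ok=True)
    for row in range(rows):
        for col in range(cols):
            url = GIBS.format(layer=layer, date=date, matrix_set=matrix_set,
                              zoom=zoom, row=row, col=col)
            resp = requests.get(url, timeout=30)
            if resp.ok:
                path = os.path.join(out_dir, f"{date}_{zoom}_{row}_{col}.jpg")
                with open(path, "wb") as f:
                    f.write(resp.content)

download_tiles("MODIS_Terra_CorrectedReflectance_TrueColor",
               "2020-10-01", zoom=5, rows=32, cols=64)
```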
Following that, we cloned Worldview in order to add our own layer, resulting in what we call Nimbus Worldview.
Watch our project video on YouTube! (click here)
Test the live prototype! (click here)
**When testing the prototype, the Nimbus layer is available only for Oct 1, 2, and 3 of 2020**
**Click the settings icon beside the "Nimbus" layer to change its opacity (see the video for reference)**
https://worldview.earthdata.nasa.gov/
https://wiki.earthdata.nasa.gov/display/GIBS/GIBS+API+for+Developers
https://www.tensorflow.org/