We have data provided by NASA about carbon concentrations in every part of the globe, Landsat-8 images that show how much land and ocean each place contains, and daily temperature records. Why not cross-reference this data to provide a general perspective of what a place is like and how climate change will impact it in the next few years?
To achieve this, our solution was divided into three parts: collecting and connecting data to improve the quality of information about greenhouse gas (GHG) emissions, predicting future emissions and their consequences, and, finally, informing and helping governments make policies to control emissions.
Our aim is to make all of this information more accessible, easier to read and more complete, so that governments and the general population become more aware of how critical our environmental situation is and how urgently we need to change, and can make decisions with more accuracy.
A Data Studio dashboard is available at: http://bit.ly/wherescarbon.
The first step in development was deciding which data is the most important to show. After some quick research, we settled on the amount of vegetation, the amount of water, air quality data and the temperature for a given area on a specified date. With the objective set, we moved on to creating a tool that makes this information easily accessible to anyone: a very simple server exposing HTTP endpoints to fetch said data.
With the objective defined, we started developing. The first feature was the estimation of vegetation in an area, which we achieved using Landsat-8 images. Our HTTP endpoint receives a coordinate (X, Y) and a date, then searches for available images for that point and date. If one is found, it calculates the NDVI for the area and estimates the amount of vegetation. For the amount of water in the vicinity, we used another widely used index, NDWI, computed with the same logic: we get the imagery for the area, calculate the index and return an estimate of the amount of water.
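The two indices can be sketched as follows. This is a simplified per-pixel version assuming the band values have already been read into equal-length lists; the actual service computes them over Landsat-8 raster windows, and the thresholds used here (NDVI > 0.3 for vegetation, NDWI > 0 for water) are common rules of thumb, not necessarily the exact values in our code.

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return [(n - r) / (n + r) if (n + r) != 0 else 0.0 for r, n in zip(red, nir)]

def ndwi(green, nir):
    """Normalized Difference Water Index (McFeeters): (Green - NIR) / (Green + NIR)."""
    return [(g - n) / (g + n) if (g + n) != 0 else 0.0 for g, n in zip(green, nir)]

def coverage(index_values, threshold):
    """Fraction of pixels whose index exceeds the threshold."""
    hits = sum(1 for v in index_values if v > threshold)
    return hits / len(index_values)

# Toy band values for four pixels (reflectance in [0, 1]).
red   = [0.10, 0.30, 0.05, 0.20]
nir   = [0.50, 0.35, 0.40, 0.25]
green = [0.20, 0.25, 0.45, 0.15]

veg = coverage(ndvi(red, nir), 0.3)    # NDVI > 0.3 -> counted as vegetation
wat = coverage(ndwi(green, nir), 0.0)  # NDWI > 0   -> counted as water
```

The endpoint's percentages are exactly this kind of coverage fraction, computed over all pixels inside a circle around the requested point.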
The air quality data was one of the most interesting parts to develop. We had to search through all of NASA's publicly available datasets, and eventually stumbled upon a very nice catalog (https://neo.sci.gsfc.nasa.gov/dataset_index.php). With it, we created a new endpoint to show that we can, indeed, read NASA datasets and extract useful information. We used the datasets that could be exported in TIFF format, chosen for its simplicity; any other format, such as the NetCDF used by OCO-2 and OCO-3, could be supported as well. With more time, we could easily learn about those datasets and use them too.
Last, but not least, we used an already available API to fetch historical temperature data for a coordinate on a specified date. With that, we had gathered all the data we needed to start our analysis.
With all the data sources in hand, we wrapped them in a simple HTTP REST API.
After we had a way to fetch data, we created a Jupyter Notebook that makes use of it (https://colab.research.google.com/drive/1-L1a1qk2ys1DB6lN_895P5HgFFXd964A). The notebook demonstrates what someone can do with the data we provide: using machine learning techniques, one can create algorithms capable of predicting future carbon emissions based on historical data fetched through our prototype.
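As a minimal illustration of the kind of prediction the notebook performs, the sketch below fits a linear trend with ordinary least squares to a yearly series and extrapolates it forward. The series here is made up for illustration only; the notebook works on real data fetched from our endpoints.

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

# Hypothetical yearly carbon monoxide readings for one coordinate.
years = [2015, 2016, 2017, 2018, 2019, 2020]
co    = [55.0, 57.1, 58.9, 61.2, 62.8, 65.0]

a, b = fit_line(years, co)
forecast_2025 = a * 2025 + b  # extrapolate the trend five years ahead
```

A linear trend is only the simplest possible model; the same pipeline works with any regressor once the historical series is in hand.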
Our current prototype runs on the Heroku cloud service. It is a Python-based web server exposing the endpoints explained below. The URL of the server is https://spaceapp2020.herokuapp.com. The links below can be opened in a browser; they only show the resulting JSON values, but that is good enough for demonstration purposes. Note that, because of the way Heroku works, the server may be asleep and is only started when a request arrives, so the first request after a period of inactivity may take noticeably longer to respond.
Returns the percentage of vegetation and water near the specified point. If there is no Landsat image for the given day, it returns a 404. Note that this endpoint downloads some raster data for its calculations, so it can take up to 20 seconds to complete. Click the link below to see the result.
https://spaceapp2020.herokuapp.com/green?x=-48.422&y=-27.573&day=2020-09-11
Return value:
{"vegetation":0.29378858024691357,"water":0.3333333333333333}
These values are fractions of a circle around the given point. In the example above, 33.33% of the area is water (coast/river) and 29.38% is covered in vegetation.
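Consuming the endpoint from code is straightforward. The sketch below builds the request URL with the standard library and parses the JSON reply; to keep it independent of the Heroku dyno being awake, it parses the sample response body shown above rather than making a live call.

```python
import json
from urllib.parse import urlencode

BASE = "https://spaceapp2020.herokuapp.com"

def green_url(x, y, day):
    """Build the /green endpoint URL for a coordinate and a date."""
    return f"{BASE}/green?" + urlencode({"x": x, "y": y, "day": day})

url = green_url(-48.422, -27.573, "2020-09-11")

# Sample response body from the endpoint (see above); in a live call you
# would fetch it with urllib.request.urlopen(url).read() instead.
body = '{"vegetation":0.29378858024691357,"water":0.3333333333333333}'
data = json.loads(body)

vegetation_pct = round(data["vegetation"] * 100, 2)  # 29.38
water_pct = round(data["water"] * 100, 2)            # 33.33
```

Remember that a live request may return a 404 when no Landsat image exists for the requested day, so a real client should handle that status.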
Returns historical temperature information from World Weather Online. We could have used NASA's own temperature datasets but, because of time constraints, we chose a commercial API with a free trial that serves the same purpose.
https://spaceapp2020.herokuapp.com/temperature?x=-48.422&y=-27.573&day=2020-09-11
Return value:
{"date":"2020-09-11","maxtempC":"23","maxtempF":"74","mintempC":"17","mintempF":"63","avgtempC":"21","avgtempF":"70","totalSnow_cm":"0.0","sunHour":"11.6","uvIndex":"5"}
This endpoint is the most interesting one. It returns the data available for the given point. All data served by this endpoint was fetched from the NASA datasets available at the link (https://neo.sci.gsfc.nasa.gov/dataset_index.php).
https://spaceapp2020.herokuapp.com/datasets?x=-48.422&y=-27.573
Return values:
{"carbon_monoxide":61.400001525878906,"aerosol_particles":99999.0,"nitrogen_dioxide":106.0,"uvindex":14.430000305175781}
The HTTP server code is freely available in the following GitHub repository:
https://github.com/meyer1994/spaceapp2020
The Jupyter Notebook is made available by Google Collab:
https://colab.research.google.com/drive/1-L1a1qk2ys1DB6lN_895P5HgFFXd964A
We used Landsat-8 data for most of our analysis, specifically the green, red and near-infrared (NIR) bands. Red and NIR are used in the NDVI calculation, while green and NIR are used in the NDWI calculation.
We also used some of the datasets available on NASA's Earth Observations website (https://neo.sci.gsfc.nasa.gov/dataset_index.php): Aerosol, Carbon Monoxide, Nitrogen Dioxide and UV Index. We could have used many more but, because of time constraints, we limited ourselves to these.
Now to how the data was read. The datasets were all exported as TIFF files, a format easily read by Python libraries such as rasterio and GDAL. Because the data covers the whole world with a resolution of 0.25 degrees, we can fetch data for any coordinate on the globe. Yet not even NASA's datasets are perfect: some places have no data and contain placeholder values, such as 99999.
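The lookup itself is simple arithmetic. The sketch below maps a (longitude, latitude) coordinate to a pixel in a 0.25°-per-pixel global grid and treats the 99999 placeholder as missing data. It assumes the grid spans longitude -180..180 and latitude 90..-90 with row 0 at the north, which is the usual layout of global GeoTIFFs; a library like rasterio performs the same mapping via the raster's affine transform.

```python
NODATA = 99999.0
RES = 0.25                   # degrees per pixel
WIDTH = int(360 / RES)       # 1440 columns
HEIGHT = int(180 / RES)      # 720 rows

def to_pixel(lon, lat):
    """Map a (lon, lat) coordinate to (row, col) in the global grid."""
    col = int((lon + 180.0) / RES)
    row = int((90.0 - lat) / RES)  # row 0 is the northernmost band
    # Clamp the boundary case (lon = 180 or lat = -90) into the last pixel.
    return min(row, HEIGHT - 1), min(col, WIDTH - 1)

def sample(grid, lon, lat):
    """Read a value from a row-major grid, treating 99999 as missing."""
    row, col = to_pixel(lon, lat)
    value = grid[row][col]
    return None if value == NODATA else value
```

With this, the /datasets endpoint amounts to sampling each of the exported rasters at the same pixel and collecting the values into one JSON object.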
Our demo is available at: https://youtu.be/LimwNQW4mmg
BBC News: Six urgent changes to contain the climate emergency, according to 11 thousand scientists (https://www.bbc.com/portuguese/geral-50321928)
EPA - United States Environmental Protection Agency: Global Greenhouse Gas Emissions Data (https://www.epa.gov/ghgemissions/global-greenhouse-gas-emissions-data)
Department of Motor Vehicles: estimated vehicles registered by county (https://www.dmv.ca.gov/portal/uploads/2020/06/2019-Estimated-Vehicles-Registered-by-County-1.pdf)
Global Carbon Atlas: CO2 emissions (http://www.globalcarbonatlas.org/en/CO2-emissions)
Global Carbon Project: Global Carbon Budget (https://www.globalcarbonproject.org/carbonbudget/19/files/GCP_CarbonBudget_2019.pdf)
Google Environmental Insights Explorer (https://insights.sustainability.google/)
Google Forms - Forms for validation of the lack of knowledge of carbon footprint by population (https://docs.google.com/forms/d/e/1FAIpQLSdeK8snnPbgxMPwBSyLQ0DKq4IDXu4-aVDZQINGHVd6_n8ZvQ/viewform?usp=sf_link)
Heroku (https://www.heroku.com/)
IPCC: Greenhouse Gases: Sources and Sinks (https://www.ipcc.ch/site/assets/uploads/2018/05/ipcc_wg_I_1992_suppl_report_section_a1.pdf)
IPCC: Summary for Policymakers (https://www.ipcc.ch/site/assets/uploads/2018/02/ipcc_wg3_ar5_summary-for-policymakers.pdf)
Kurzgesagt: Climate Responsibility (https://www.youtube.com/watch?v=ipVxxxqwBQw&list=PLFs4vir_WsTyXrrpFstD64Qj95vpy-yo1&index=1&ab_channel=Kurzgesagt%E2%80%93InaNutshell)
Kurzgesagt: Is it too late to stop climate change? Well, it's complicated (https://www.youtube.com/watch?v=wbR-5mHI6bo&ab_channel=Kurzgesagt%E2%80%93InaNutshell)
NASA - Global Climate Change: The Causes of Climate Change (https://climate.nasa.gov/causes/)
NASA: NASA Earth Observations (https://neo.sci.gsfc.nasa.gov/dataset_index.php)
NASA: OCO2_L2_Lite_FP - OCO-2 Level 2 bias-corrected XCO2 and other select fields from the full-physics retrieval aggregated as daily files, Retrospective processing V9r (https://disc.gsfc.nasa.gov/datasets/OCO2_L2_Lite_FP_9r/summary?keywords=OCO-2)
San Diego County Government: Greenhouse Gas Reduction Strategies and Measures (https://www.sandiegocounty.gov/content/dam/sdc/pds/advance/cap/publicreviewdocuments/CAPfilespublicreview/Chapter%203%20Greenhouse%20Gas%20Reduction%20Strategies%20and%20)
Wikipedia: Landsat 8 (https://en.wikipedia.org/wiki/Landsat_8)
Wikipedia: Normalized difference vegetation index (https://en.wikipedia.org/wiki/Normalized_difference_vegetation_index)
Wikipedia: Normalized difference water index (https://en.wikipedia.org/wiki/Normalized_difference_water_index)