We have developed an app that helps users become aware of nature, specifically the trees around us, and in doing so helps sustain the planet for future generations. Roughly 7 billion trees are cut down every single year. We want to make people aware of how much a single tree can benefit them. When we look at the big picture (7 billion trees per year), our brains simply cannot process a number that large, so the information gets skipped and ignored. So we want to raise awareness starting from the smallest, most minuscule step: a single tree. We present the benefits of a single tree (scraped from NASA's climate portal), and those benefits are enough to make any young person think twice before letting someone cut one down. The app uses a pre-trained machine learning model that identifies a tree and displays that information. Our vision is to make people aware of environmental benefits starting from a single tree, comparing them to material things that the present generation can relate to, and future generations will too. In further development we plan to implement Augmented Reality and a more robust user interface and user experience. The 48-hour time frame was a little hectic for us, as we are still in the learning phase, but it didn't stop us from building the app.
Our whole team lives in villages, where environmental degradation is not really an issue and it's green everywhere; city dwellers, on the other hand, seldom get to see a tree around them. People boast about how important trees are, yet we realized we were ignoring the benefits of a single tree in the picture. That's when we found that most people lack knowledge of what a single tree provides. Imagine if everyone knew how much one tree could do: people would hesitate to cut down the few trees remaining.
We first ideated the project and brainstormed, then designed the wireframes, researched the tech stack to be used, prototyped, and brainstormed again.
Then we started development, debugging, and testing until the app was ready. We were a bit inexperienced with UI/UX, and we couldn't develop the AR part properly due to technical issues, so the app lost some of its aesthetic appeal. However, it serves its main purpose and is still an aesthetic as well as useful app.
Flutter and Dart
Google Firebase
API integration
ML Kit
Image import from the gallery or camera
GitHub for collaboration
Figma and Adobe Suite for wireframes, various designs, and the logo
ProtoPie for prototyping
We used Flutter as our development platform along with Dart. We linked Google Firebase to our app for real-time responses and integrated ML Kit, which helped us identify and label the image provided by the user. Given more time, we would have been able to implement cloud labeling with ML Kit, which gives more precise answers. With the help of the labeler, we labeled the image and stored the label in a variable, which let us show different data for different images. We also implemented a splash screen, which made the app more aesthetic.
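Below is a minimal sketch of that labeling step. We wired ML Kit through Firebase in the actual build; this sketch assumes the standalone google_mlkit_image_labeling package instead, and the helper name labelImage is ours for illustration:

```dart
import 'package:google_mlkit_image_labeling/google_mlkit_image_labeling.dart';

/// Label an image from a file path and return the most confident label.
/// Sketch only: a production build would reuse a single ImageLabeler.
Future<String?> labelImage(String imagePath) async {
  final labeler = ImageLabeler(
    options: ImageLabelerOptions(confidenceThreshold: 0.6),
  );
  final labels =
      await labeler.processImage(InputImage.fromFilePath(imagePath));
  await labeler.close();
  if (labels.isEmpty) return null;
  // Sort by confidence so we can pick the strongest match (e.g. "Tree").
  labels.sort((a, b) => b.confidence.compareTo(a.confidence));
  return labels.first.label;
}
```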
We used the camera and image picker dependencies to capture or select images to feed into the app.
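As a rough sketch, selecting an image with the image_picker plugin (using its 0.8+ API) looks something like this; pickImagePath is a hypothetical helper name:

```dart
import 'package:image_picker/image_picker.dart';

/// Let the user take a photo or pick one from the gallery.
/// Returns the file path, or null if the user cancelled.
Future<String?> pickImagePath({bool fromCamera = true}) async {
  final XFile? file = await ImagePicker().pickImage(
    source: fromCamera ? ImageSource.camera : ImageSource.gallery,
  );
  return file?.path;
}
```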
We used GitHub for collaboration, since our team members live far from one another.
The most time-consuming part was implementing ML Kit through Google Firebase, which left us less time for developing the UI.
We also used Figma to create our wireframes, Adobe Suite to make our own logo, and ProtoPie for prototyping.
We used the resources from our challenge to display information to the user once the image has been correctly labeled.
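For illustration only, the label-to-information step can be a simple map lookup; the keys and text below are placeholders, not the actual data scraped from NASA:

```dart
// Hypothetical lookup table; the real entries hold the scraped NASA data.
const Map<String, String> treeInfo = {
  'Tree': 'A single mature tree can absorb dozens of pounds of CO2 a year.',
  'Plant': 'Plants release oxygen and help cool their surroundings.',
};

String infoFor(String label) =>
    treeInfo[label] ?? 'No information available for "$label" yet.';
```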
We used data scraped from NASA CLIMATE and NASA CITIZEN for the various information we show.
We were going to use the NASA IMAGE GALLERY and JAXA if we had a little more time, and
G-Portal if we had been able to implement AR correctly.
PPT Demo: Please Click Here