call-for-code / frida

Frida, AI and IoT come to the aid of teachers, students, and first responders. A Call for Code project from IBM.

Home Page: http://frida-ai.org

License: Apache License 2.0

call-for-code tech-for-good iot earthquakes code-and-response mobile

Frida's Introduction

Frida, AI and IoT come to the aid of teachers, students, and first responders

Frida

An end-to-end solution with a mobile AI-enabled application called Frida and an IoT device called fridaSOS, which can be installed in schools and universities. The solution uses built-in AI functions to provide earthquake preparation, guide people through drills, predict the magnitude of earthquakes based on sensor data and an existing algorithm, identify the best escape routes during earthquakes, and detect people trapped in damaged classrooms. It is a first-of-a-kind complete solution for providing data collection, monitoring, notifications, and guidance before, during, and after a disaster.

Blog: When an earthquake hits, Watson solution helps schools cope


Vision Statement

To provide a lifesaving solution for natural disasters

Mission Statement

To make emergency data accessible and actionable.

Solution Name: Frida

We named the solution after Frida, a rescue dog belonging to the Mexican Navy who, with her handler Israel Arauz Salinas, took part in the effort to find people trapped at the Rebsamen school in Mexico City on September 22, 2017. She has detected the bodies of 52 people over her career and has become a symbolic hero of her country's rescue efforts.

Solution in Detail

In 2018, the Call for Code Global Challenge asked developers to create solutions that significantly improve preparedness for natural disasters and relief when they hit. We are committed to this cause! We are pushing for change, answering the call, and developing a lifesaving solution for natural disasters, called Frida. We chose to focus on earthquakes because they tend to happen unexpectedly and affect everyone in the affected area. Specifically, our solution targets schools and earthquakes. Because of the vulnerability of students and the cost of damage to school buildings, helping schools prepare for an earthquake, respond during one, and recover afterward are extremely critical goals.

We have developed an end-to-end solution with a mobile AI-enabled application called Frida and an IoT device called fridaSOS that can be installed in schools and universities. The solution uses built-in AI functions to provide earthquake preparation, guide people through drills, predict the magnitude of earthquakes based on sensor data, identify the best escape routes during earthquakes, and detect people trapped in damaged classrooms. It is a first-of-a-kind complete solution for providing data collection, monitoring, notifications, and guidance before, during, and after a disaster.

We assembled the fridaSOS IoT device by integrating a device that has a gyroscope (earthquake) sensor, a heat sensor, and a camera with the IBM IoT Platform. fridaSOS registers the sensor devices, monitors the near-real-time IoT data, and stores it. By building the AI-enabled mobile application in IBM Watson Studio, we can use a combination of AI technologies: a pre-trained earthquake deep learning model predicts the magnitude of the earthquake from data streamed from the IoT platform, a visual recognition model identifies damaged buildings that people should avoid, and combining the information about damaged buildings with heat-detection data identifies where people are trapped.
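As an illustration of how these pieces could fit together (not the project's actual code), the sketch below combines a per-room damage score from a visual recognition model with a heat-sensor reading to flag rooms that may contain trapped people. The field names, damage scale, and thresholds are assumptions.

```python
# Hedged sketch: combine visual-recognition damage scores with heat-sensor
# readings to flag rooms where people may be trapped.  Field names and
# thresholds are illustrative assumptions, not fridaSOS's actual format.
from dataclasses import dataclass


@dataclass
class RoomStatus:
    room_id: str
    damage_score: float   # 0.0-1.0 from the visual recognition model (assumed scale)
    heat_detected: bool   # True if the heat sensor sees a person-sized heat source


def rooms_with_trapped_people(statuses, damage_threshold=0.7):
    """Return rooms that look badly damaged AND still show a heat signature."""
    return [
        s.room_id
        for s in statuses
        if s.damage_score >= damage_threshold and s.heat_detected
    ]


# Example: room 203 is flagged; room 101 is damaged but appears empty.
statuses = [
    RoomStatus("room101", damage_score=0.85, heat_detected=False),
    RoomStatus("room203", damage_score=0.91, heat_detected=True),
    RoomStatus("room305", damage_score=0.20, heat_detected=True),
]
print(rooms_with_trapped_people(statuses))  # ['room203']
```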

Solution Roadmap

We have developed the IoT integration tool kit and an AI-enabled mobile application containing the key features. The Frida mobile application is part of an end-to-end solution that integrates with the IBM IoT Platform and IBM Watson Studio. The mobile application's current features include letting students sign up, submit a location, and select a school from a stored school list; sending notifications of the earthquake magnitude; guiding drills; managing student lists; and texting entire classes. Our future roadmap adds more functionality to Frida, including applying a visual recognition model to show damaged buildings, using a Node-RED application to analyze school location data to guide disaster drills, finding trapped victims, showing escape routes on a map through AI functions, and applying a blockchain for fundraising transparency in a web application. For this solution we are focusing on earthquakes and schools; however, our kit can be expanded to handle other types of natural disasters, such as volcanic eruptions, floods, landslides, hurricanes, and tornadoes. The IoT device can be manufactured at large scale and deployed in institutions, and Frida can become one of the most powerful and widely used mobile applications for disaster management across institutions globally.

How to contribute

Please fork the code, make a contribution, and create a pull request. If you have an idea or requirement and want us to address a topic not already covered, please open an issue and label it as type-requirement. If you would like to take on an issue to work on, please label it as type-claimed. Let's make a big impact on this lifesaving solution together!

Frida will consist of four repos including this one, which serves as the entry point and contains umbrella documentation:

Frida Solution Development

Frida's People

Contributors: imgbotapp, isimar, kant, krook, linju


Frida's Issues

Agenda for the Demo

  1. User Registration process described in #20
  2. Create a project using the template (https://github.com/IBM/Frida/blob/master/samples/AutoAI/YORK-PROJECT---26th-June.zip)
  3. Create, deploy, and score a model on Watson Machine Learning (WML) using notebooks in Watson Studio (a hedged scoring sketch follows this list)
  4. AutoAI demo to achieve the above step without coding
  5. Create a Visual Recognition (VR) model
  6. Create a Node-RED app on IBM Cloud using Starter Kits and import the flow from https://github.com/IBM/Frida/blob/master/samples/AutoAI/NodeRed-flows.json. This Node-RED flow demonstrates how easily applications can be set up to use VR models in Watson Studio.
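For step 3, here is a minimal sketch of scoring a model deployed on WML over its REST API. The API key, region, deployment ID, and input field names are placeholders, and the exact payload depends on how the model was stored, so treat this as an illustration rather than the project's actual scoring code.

```python
# Hedged sketch: score a Watson Machine Learning (v4) online deployment over REST.
# The API key, region, deployment ID, and input fields below are placeholders.
import requests

API_KEY = "<your IBM Cloud API key>"          # placeholder
DEPLOYMENT_URL = (                             # placeholder deployment endpoint
    "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/"
    "<deployment_id>/predictions?version=2020-09-01"
)

# Exchange the API key for an IAM bearer token.
token = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
).json()["access_token"]

# Send one row of (hypothetical) sensor features to the deployed model.
payload = {"input_data": [{"fields": ["peak_acceleration", "duration_s"],
                           "values": [[0.42, 11.5]]}]}
response = requests.post(
    DEPLOYMENT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {token}"},
)
print(response.json())  # model output is returned under "predictions"
```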

[Code] Simulate Sensor data flow into application

Requirement:
In order to simulate sensor data streaming into the Frida application, we can develop a Node-RED application on the Watson IoT Platform to show the workflow (a simulation sketch follows the setup steps below).
Implementation:
We are using Node.RED in IBM Cloud.

  1. Log in or sign up for an account at bluemix.net

  2. Navigate to the catalog and search for 'Node-RED'

  3. This will present you with two options:

     • Node-RED Starter: a vanilla Node-RED instance

     • Internet of Things Platform Starter: everything you need to start quickly using Node-RED with the Watson IoT Platform, including some default flows that show how things work, a Cloudant database instance to store your flow configuration, and a collection of nodes that make it easy to access various Bluemix services, including both the Watson IoT Platform and the Watson Cognitive services

  4. Click the starter application you want to use, give it a name, and click Create

  5. A couple of minutes later, you'll be able to access your instance of Node-RED at https://<your-app-name>.mybluemix.net
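As a complement to the Node-RED flow, the sketch below simulates a fridaSOS device by publishing fake gyroscope and temperature readings to the Watson IoT Platform over MQTT. The organization ID, device type, device ID, auth token, and JSON field names are placeholders, not the actual fridaSOS payload format.

```python
# Hedged sketch: publish simulated fridaSOS sensor readings to the Watson IoT
# Platform over MQTT.  Organization, device type, device ID, token, and payload
# fields are placeholders.  Uses the paho-mqtt 1.x constructor; for paho-mqtt 2.x
# pass mqtt.CallbackAPIVersion.VERSION1 as the first Client() argument.
import json
import random
import time

import paho.mqtt.client as mqtt

ORG = "abc123"             # placeholder Watson IoT organization ID
DEVICE_TYPE = "fridaSOS"   # placeholder device type
DEVICE_ID = "classroom-101"
AUTH_TOKEN = "<device auth token>"

client = mqtt.Client(f"d:{ORG}:{DEVICE_TYPE}:{DEVICE_ID}")
client.username_pw_set("use-token-auth", AUTH_TOKEN)
client.connect(f"{ORG}.messaging.internetofthings.ibmcloud.com", 1883)
client.loop_start()

# Publish one simulated reading per second on the standard device event topic.
topic = "iot-2/evt/sensors/fmt/json"
for _ in range(60):
    reading = {
        "gyro_x": random.gauss(0, 0.02),   # rad/s, idle noise
        "gyro_y": random.gauss(0, 0.02),
        "gyro_z": random.gauss(0, 0.02),
        "temperature_c": 22 + random.random(),
    }
    client.publish(topic, json.dumps(reading))
    time.sleep(1)

client.loop_stop()
client.disconnect()
```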

[Code] Frida Application backend API

The Frida mobile application comprises two major code bases: one for the application itself and one for the backend server, which is Node.js-based. This issue is dedicated to the server API.

[Feature] Predict Earthquake's magnitude and send out alert to Frida Mobile App

Requirement: When an earthquake above a certain threshold is about to hit, we would like to send users an alert notification through the Frida Mobile App within 30 seconds so that they can prepare and escape to seek shelter.
Approach:

  1. Earthquake detection and location have been studied by seismologists for many years. In our solution, we referenced the recent ConvNetQuake algorithm, a highly scalable convolutional neural network for earthquake detection and location from a single waveform, and further enhanced it with a Keras deep learning model trained on existing data (a minimal model sketch follows this list).
    Training data: 2.5 years of monthly streams from GSOK029.mseed and earthquake catalogs from the OGS (years 2014 to 2016).
  2. We use a Jupyter Notebook in Watson Studio with a GPU-enabled environment (under closed beta) to train this model.
  3. The endpoint of the trained model is used by the Frida Mobile App.
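The sketch below outlines a ConvNetQuake-style 1-D convolutional network in Keras, operating on fixed-length windows of 3-component waveform data. The window length, layer sizes, and the regression head that outputs a magnitude estimate are illustrative assumptions; the actual notebook in Watson Studio may differ.

```python
# Hedged sketch: a ConvNetQuake-style 1-D CNN in Keras for fixed-length,
# 3-component waveform windows.  Window length, layer widths, and the
# magnitude-regression head are illustrative assumptions.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 1000   # samples per window (e.g. 10 s at 100 Hz) -- assumption
CHANNELS = 3    # N/E/Z waveform components

model = keras.Sequential([layers.Input(shape=(WINDOW, CHANNELS))])
# Stack of strided 1-D convolutions, as in ConvNetQuake.
for _ in range(8):
    model.add(layers.Conv1D(32, kernel_size=3, strides=2,
                            padding="same", activation="relu"))
model.add(layers.Flatten())
model.add(layers.Dense(128, activation="relu"))
model.add(layers.Dense(1))  # magnitude estimate (regression head)

model.compile(optimizer="adam", loss="mse", metrics=["mae"])

# Train on placeholder data; in practice, windows would come from the
# GSOK029.mseed streams with labels from the OGS earthquake catalog.
x_train = np.random.randn(256, WINDOW, CHANNELS).astype("float32")
y_train = np.random.uniform(1.0, 5.0, size=(256, 1)).astype("float32")
model.fit(x_train, y_train, epochs=2, batch_size=32)
```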

[Feature] Frida official website Requirements

Requirement: Provide a means to support and communicate with communities.
The initial MVP should include the following:

  • Home
    1. Vision statement: Provide a lifesaving solution for natural disasters
    2. Get Started (download): Link to the Frida GitHub repo
  • Frida (About)
  • Download
  • Roadmap (solution roadmap/features)
  • Projects (who have adopted Frida)
  • Contact
  • Footer: Share this page with Facebook, Twitter, Google+, LinkedIn
    Legal notices, privacy policy

Watson Studio Registration Steps

Steps to register a user on Watson Studio and IBM Cloud:

  1. Go to https://dataplatform.cloud.ibm.com/ and click Sign Up

  2. Under the Create an IBM Cloud Account title, enter your email address. If you already have an IBM Cloud account, click Log in to activate Watson; you will be redirected to the IBM Cloud login page and, upon successful login, can skip to step 6 of this process

  3. If you do not have an IBM Cloud account, fill in the registration form and click Create Account

  4. You have registered successfully. Check your inbox for the confirmation email

  5. Open the confirmation email and click Confirm account

  6. You will be automatically redirected and registered on Watson Studio with all the required services you need to get started

  7. Click Get Started to start exploring Watson Studio

  8. Learn more about Watson Studio at https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html?audience=wdp&context=wdp. You can also access your IBM Cloud account at https://cloud.ibm.com

Unable to simulate Frida APP

Hi Team,

I tried simulating the Frida app. I have pushed the Frida backend to IBM Cloud as per the steps. It looks like some DB setup is needed. Can you please share the updated Frida backend repo? The error is below:

[screenshot of the error]

[Feature] Show escape routes on a map

Each school building has a map showing the locations of classrooms, entries, exits, stairs, etc.
In order to show escape routes that take damaged rooms into account, we need a feature that shows the following routes:

  1. The shortest path from a room to an exit
  2. If a room is damaged (beyond a certain level), a new path needs to be calculated

How you can help

  1. Implement a simple school map
  2. Identify the shortest routes from each room to an exit (a minimal sketch follows below)
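As a starting point, here is a minimal sketch of that routing logic: the school map is a hand-made adjacency graph with hypothetical room names, rooms whose damage level exceeds a threshold are removed, and a breadth-first search finds the shortest remaining path to an exit. It illustrates the idea rather than the planned map format.

```python
# Hedged sketch: shortest escape route over a hand-made school graph.
# Room names, the damage scale, and the graph itself are hypothetical.
from collections import deque

# Adjacency list for a tiny school floor; "exit" nodes are building exits.
SCHOOL_MAP = {
    "room101": ["hall_a"],
    "room102": ["hall_a"],
    "hall_a": ["room101", "room102", "stairs", "exit_north"],
    "stairs": ["hall_a", "exit_south"],
    "exit_north": ["hall_a"],
    "exit_south": ["stairs"],
}
EXITS = {"exit_north", "exit_south"}


def escape_route(start, damage_levels, max_damage=2):
    """Return the shortest path from `start` to any exit, skipping nodes
    whose damage level (0-5, hypothetical scale) exceeds `max_damage`."""
    blocked = {node for node, level in damage_levels.items() if level > max_damage}
    if start in blocked:
        return None
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in EXITS:
            return path
        for neighbor in SCHOOL_MAP.get(node, []):
            if neighbor not in visited and neighbor not in blocked:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no safe route found


# The stairs are badly damaged, so the route goes out the north exit instead.
print(escape_route("room101", {"stairs": 4}))   # ['room101', 'hall_a', 'exit_north']
print(escape_route("room101", {"hall_a": 5}))   # None -> room101 is cut off
```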

Contributing to Community

  1. Make sure this issue is labeled up-for-grabs and not labeled claimed, to verify no one else is working on it.
  2. Comment in this issue that you would like to do it.
  3. Once you've made sure all your changes work correctly and have committed them, push your local changes back to GitHub with $ git push -u origin master
  4. Create a pull request for your changes.
  5. Make sure your pull request describes exactly what you changed and references this issue.

[Feature] Classify Classroom Damage Level in order to determine escape route and rescue priority

In order to determine the escape route and set the right rescue priority, we will train a Visual Recognition model in Watson Studio and save the endpoint of the classroom damage level model.
The following steps train Watson Visual Recognition without code in Watson Studio:

  1. Sign up in IBM Watson Studio
  2. Create a new project: from the Projects page in Watson Studio, add a new project.
    When prompted, select the Visual Recognition project type.
    If you have no instances of the Visual Recognition service, a new instance will be created. If you have existing instances, you have a choice:
    Click the Existing option to associate an existing instance with the project, or
    Click the New option to create another new instance.
  3. Prepare your images: Collect a minimum of 10 images (damaged classroom and normal classroom) for each class in .zip files, and then upload them to your project.
  4. Add .zip files to your model from the data panel. (If the data panel isn't open, you can open it by clicking the Find and add data icon)
  5. Click Train Model.
  6. Test your custom model: in the Test area of the Visual Recognition model builder:
    Test individual image files by dragging them from your computer onto the test area.
  7. Use the trained VR model in your application:
  • API key
    You can find your API key in the Visual Recognition service credentials:
    Select "Watson Services" from the Services drop-down list in Watson Studio. This shows all your Visual Recognition service instances.
    Click the Visual Recognition service instance you want to use.
    View the credentials in the Credentials tab, and then copy the API key.
  • Model ID
    After creating a custom model in Watson Studio, you can find the model ID for your custom model in the Visual Recognition model builder:
    From the Assets page of your project, click a Visual Recognition custom model to open that model in the model builder. In the Overview area of the model builder, you can find the model ID.
  • Sample code
    After creating a custom model in Watson Studio, you can find sample code for using your custom model in the Visual Recognition model builder:
    From the Assets page of your project, click a Visual Recognition custom model to open that model in the model builder. In the Implementation area of the model builder, you can find sample code.

If you would like detailed instructions, please refer to VR in Watson Studio; a minimal Python sketch of calling a trained custom model follows below.
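As an illustration of how the API key and model ID above can be used, the sketch below classifies a classroom photo against a custom model with the Watson Visual Recognition V3 Python SDK (the ibm-watson package). The API key, service URL, classifier ID, and image path are placeholders.

```python
# Hedged sketch: classify a classroom photo against a custom Watson Visual
# Recognition model using the ibm-watson Python SDK.  The API key, service
# URL, classifier ID, and image path are placeholders.
import json

from ibm_watson import VisualRecognitionV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("<your VR API key>")
visual_recognition = VisualRecognitionV3(
    version="2018-03-19",
    authenticator=authenticator,
)
visual_recognition.set_service_url(
    "https://api.us-south.visual-recognition.watson.cloud.ibm.com"
)

# Classify one image with the custom classroom-damage classifier.
with open("classroom.jpg", "rb") as image:
    result = visual_recognition.classify(
        images_file=image,
        threshold=0.6,
        classifier_ids=["ClassroomDamage_123456789"],  # placeholder model ID
    ).get_result()

print(json.dumps(result, indent=2))
```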
