
iVision

An assistive system for the blind based on augmented reality and machine learning

SYSTEM DESIGN

The iVision prototype is developed for iPhone on iOS, using the Vision, Core ML, and ARKit frameworks, among others.

  1. First, the Vision and Core ML frameworks provided by iOS are used to process frames captured from the camera, and a YOLOv3 deep learning model returns the name of each detected object together with the position and size of its bounding box in the two-dimensional image. You Only Look Once (YOLO) is a real-time object detector that is extremely fast and accurate (Redmon & Farhadi, 2018). With YOLOv3, the system obtains each detected object's name, bounding box, confidence, and the time the detection took. The model's training set covers many common items such as cups, mice, and bananas (see the step-1 sketch after this list).
  2. The augmented reality framework (ARKit) is used to find feature points in space. Hit-testing methods find real-world surfaces corresponding to a point in the camera image, so two-dimensional screen coordinates can be converted into points in three-dimensional space. Once enough recognition points have been accumulated, the object's real spatial position is obtained (see the step-2 sketch below).
  3. The object's spatial position is then converted into 3D positional audio. The human ear judges the direction of a sound source from the time and intensity differences between the left and right ears, and judges the distance to the source from its loudness (Dunai, Fajarnes, Praderas, Garcia, & Lengua, 2010; Ribeiro, Florencio, Chou, & Zhang, 2012). Through this 3D positional audio, the user can perceive the spatial position of a specific object and finally locate it (see the step-3 sketch below).
  4. Because blind users operate touch screens inefficiently, the system builds its interaction flow around a voice interface. This speech-based interface combines the real-time object detection model and augmented reality into an interactive, speech-driven positioning and navigation device (see the step-4 sketch below).
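
A minimal Swift sketch of step 1, using Vision to run a Core ML object detector on a camera frame. The generated model class name `YOLOv3` and the frame orientation are assumptions, not taken from the project's source.

```swift
import Vision
import CoreML

// Step 1 (sketch): run a YOLOv3 Core ML model on a camera frame via Vision.
// The class name `YOLOv3` is an assumption; it depends on the bundled .mlmodel.
func makeDetectionRequest() throws -> VNCoreMLRequest {
    let coreMLModel = try YOLOv3(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            let name = observation.labels.first?.identifier ?? "unknown"  // object name
            let confidence = observation.confidence                       // detection confidence
            let box = observation.boundingBox                             // normalized 2D bounding box
            print("\(name) \(confidence) \(box)")
        }
    }
    request.imageCropAndScaleOption = .scaleFill
    return request
}

// Feed each camera frame (a CVPixelBuffer from ARKit/AVFoundation) to the request.
func detect(in pixelBuffer: CVPixelBuffer, using request: VNCoreMLRequest) {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}
```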
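
Step 2 can be sketched with ARKit's hit-testing API; the `ARSCNView` parameter and the chosen hit-test types below are assumptions about how the project does this.

```swift
import ARKit
import SceneKit

// Step 2 (sketch): convert a 2D point in the camera image (e.g. the center of a detected
// bounding box, in view coordinates) into a 3D world position via hit-testing.
func worldPosition(of screenPoint: CGPoint, in sceneView: ARSCNView) -> SCNVector3? {
    // Prefer hits on detected planes; fall back to individual feature points.
    let results = sceneView.hitTest(screenPoint, types: [.existingPlaneUsingExtent, .featurePoint])
    guard let transform = results.first?.worldTransform else { return nil }
    let translation = transform.columns.3
    return SCNVector3(translation.x, translation.y, translation.z)
}
```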
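
Step 3 maps naturally onto SceneKit's positional audio. This is only a sketch: the sound file name `beacon.wav` is a placeholder, and the real project may configure audio differently.

```swift
import SceneKit

// Step 3 (sketch): attach a positional (3D spatialized) audio source to a node placed at the
// object's world position, so direction and loudness cues tell the user where the object is.
func playBeacon(at position: SCNVector3, in scene: SCNScene) {
    // "beacon.wav" is a placeholder; positional audio generally requires a mono source file.
    guard let source = SCNAudioSource(fileNamed: "beacon.wav") else { return }
    source.isPositional = true
    source.loops = true
    source.load()

    let beaconNode = SCNNode()
    beaconNode.position = position
    scene.rootNode.addChildNode(beaconNode)
    beaconNode.addAudioPlayer(SCNAudioPlayer(source: source))
}
```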
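
For step 4, a minimal sketch of the spoken output using AVSpeechSynthesizer; the announcement text and voice settings are illustrative, and the full interaction flow would also include speech input.

```swift
import AVFoundation

// Step 4 (sketch): speak announcements instead of relying on the touch screen.
let synthesizer = AVSpeechSynthesizer()

func announce(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
    utterance.rate = AVSpeechUtteranceDefaultSpeechRate
    synthesizer.speak(utterance)
}

// Example (illustrative): announce("Mouse detected. Follow the sound to find it.")
```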

The picture shows the process by which the iVision app recognizes a mouse, a keyboard, and a smartphone and marks their spatial locations.

Todo List

  1. Screen adaptation for different devices
  2. Node double-playback issue
  3. TTS channel congestion
  4. Screen rotation
  5. Filter low-quality results out of AR recognition
  6. Color breathing effect
  7. Implementation of more features
