
iot-engine's Issues

REST API for 3rd party communication drivers

3rd party comm modules (BACnet, Modbus, Haystack) need a REST API that the configuration dashboard can communicate with.
Instead of building a separate API for each module, this will standardise communication and save resources by building it into one Dynamic API.

  • The API will define the standard outside endpoints (all common ones, plus individual ones for each connector)
  • When one of these modules is installed, it will add its event bus endpoints to this API, so the API knows where to pass requests to (see the sketch below)
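A minimal sketch of that registration step, assuming plain Vert.x; the registry address and payload keys are hypothetical, not existing code:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;

// Sketch: a connector module announcing its event bus endpoint to the Dynamic API.
public class ModbusRegistration extends AbstractVerticle {

    private static final String API_REGISTRY = "nubeio.api.registry"; // assumed registry address

    @Override
    public void start() {
        JsonObject registration = new JsonObject()
            .put("module", "modbus")
            .put("address", "nubeio.edge.connector.modbus")           // this module's endpoint
            .put("paths", new JsonArray().add("/modbus/points"));
        // The Dynamic API keeps this mapping and forwards matching REST requests to it.
        vertx.eventBus().publish(API_REGISTRY, registration);
    }
}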

Initialization and Synchronization of Ditto Thing

The BoneScript micro-service will start on the edge device and includes an H2 database for storing the Ditto Thing. To sync that database content with the edge device, we need to take care of the following two things:

  1. First-time initialization: it should be able to create a skeleton Ditto object and start the process.
  2. If already initialized: we need to read the points from the DB and set those values (see the sketch below).
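A rough sketch of that decision, assuming a single-row H2 table (table, column, and thing names are hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import io.vertx.core.json.JsonObject;

// Sketch of the init-or-load decision for the Ditto Thing.
public class DittoThingBootstrap {

    public JsonObject loadOrInit(String jdbcUrl) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS ditto_thing (id VARCHAR PRIMARY KEY, body VARCHAR)");
            try (ResultSet rs = st.executeQuery("SELECT body FROM ditto_thing LIMIT 1")) {
                if (rs.next()) {
                    // Already initialized: read the stored thing, then set the point values.
                    return new JsonObject(rs.getString("body"));
                }
            }
            // First start: create a skeleton Ditto thing and persist it.
            JsonObject skeleton = new JsonObject()
                .put("thingId", "nubeio:edge-device") // assumed namespace
                .put("features", new JsonObject().put("points", new JsonObject()));
            st.execute("INSERT INTO ditto_thing VALUES ('edge', '" + skeleton.encode() + "')");
            return skeleton;
        }
    }
}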

Parent ticket: #42

NubeConfig can be overridden by system properties and environment variables

Brief
When using Docker, instead of mounting a config file, Config should be able to read from Java system properties and environment variables, then override the default config when starting NubeLauncher.
Foreseen problems:

  • Casting data types
  • The Unit verticle environment may conflict with the Container verticle

Acceptable

  • Java system properties are lowercase and separated by dot .
  • Environment variables are uppercase and separated by underscore _
  • On ClassCastException, fall back to the default config.
  • The override order: default config file < provided config file < Java system properties < environment variables

For example:

{
  "__app__": {
    "__http__": {
      "host": "0.0.0.0"
    }
  }
}
  • Java system property: nubeio.app.http.host=0.0.0.0
  • Environment variable: NUBEIO_APP_HTTP_HOST=0.0.0.0
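A minimal sketch of that lookup order for a single key; the method is illustrative, not the actual NubeConfig API:

// Sketch: resolve one config key following
// config file < Java system properties < environment variables.
static String lookup(String dottedKey, String fileValue) {
    String result = fileValue;
    String sysProp = System.getProperty(dottedKey);            // e.g. nubeio.app.http.host
    if (sysProp != null) {
        result = sysProp;
    }
    String envKey = dottedKey.toUpperCase().replace('.', '_'); // e.g. NUBEIO_APP_HTTP_HOST
    String envVal = System.getenv(envKey);
    if (envVal != null) {
        result = envVal;
    }
    // The caller casts to the target type, falling back to the default on ClassCastException.
    return result;
}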

Refers:

Haystack Module

A Java jar file/module that can be deployed on the edge device, making the edge a Haystack-compatible device: it can discover other controllers/points on the network and can also be discovered by other Haystack devices (details need to be hashed out further).

Edge status

Show IoT device resource status (used/available) on demand:

  • RAM | CPU | Storage | OS name | OS version | Kernel version | Local IP | Public IP

Need to introduce:

  • A new project in :edge:module:monitor
  • A MonitorEventModel class in :eventbus:edge for sharing the eventbus address between the dashboard connector and the edge device
  • A RestEventAPI in :dashboard:connector:edge for receiving requests from users
  • Auto-install when starting BIOS, similar to the mechanism in :edge:module:installer

MUST READ #35 BEFORE STARTING
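For the resource fields above, a minimal sketch of what standard JMX beans can provide (project wiring omitted; the remaining fields need OS-specific calls):

import java.lang.management.ManagementFactory;
import com.sun.management.OperatingSystemMXBean;
import io.vertx.core.json.JsonObject;

// Sketch: collect a subset of the status fields with standard JDK beans.
public class EdgeStatusCollector {

    public JsonObject collect() {
        OperatingSystemMXBean os =
            (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        return new JsonObject()
            .put("os_name", System.getProperty("os.name"))
            .put("os_version", System.getProperty("os.version"))
            .put("cpu_load", os.getSystemCpuLoad())
            .put("ram_free_bytes", os.getFreePhysicalMemorySize())
            .put("ram_total_bytes", os.getTotalPhysicalMemorySize());
        // Kernel version, storage and IP addresses need extra calls,
        // e.g. `uname -r`, java.nio.file.FileStore, java.net.NetworkInterface.
    }
}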

Replace HALT event with PATCH event in BIOS

  • Obsolete the HALT event and replace it with the PATCH event in BIOS and BIOS Installer
  • Affected sub-projects:
    • :edge:bios
    • :edge:core
    • :edge:module:installer
    • :dashboard:connector:edge

Clean up unused media files on the system

We need a scheduler running in the background. It will run at a certain interval (daily/weekly/bi-weekly, should be configurable), and its job is to detect unused files and remove them from the system.

The basic idea is:

  • All uploaded file locations are kept in one collection, let's say collection a.
  • Those collection IDs are referenced by other collections, let's say collection x, collection y, and collection z.
  • A scheduler will compute the diff (collection a) - (collection x + collection y + collection z) and remove the resulting files from the system, as in the sketch below.
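A minimal sketch of that diff step on plain ID sets:

import java.util.HashSet;
import java.util.Set;

// Sketch: (collection a) - (collection x + collection y + collection z).
static Set<String> findUnusedFileIds(Set<String> collectionA,  // all uploaded file ids
                                     Set<String> collectionX,
                                     Set<String> collectionY,
                                     Set<String> collectionZ) {
    Set<String> referenced = new HashSet<>(collectionX);
    referenced.addAll(collectionY);
    referenced.addAll(collectionZ);
    Set<String> unused = new HashSet<>(collectionA);
    unused.removeAll(referenced);
    return unused; // these files can be removed from the system
}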

Modbus module

A Java jar file/module that can be deployed on the edge device, making the edge a Modbus-compatible device: it can discover other controllers/points on the network and can also be discovered by other Modbus devices.

This module would be an "app" within the iot engine:

  • Communication over the event bus
  • Allows communication with other existing Modbus devices
  • Reads/updates points from the bonescript API

Features to be added currently:

  • discover devices
  • read/write device objects (points)
  • subscribe to object COVs (change of values)
  • add/remove device objects

Bonescript API java initialization

First step for Bonescript API

Acceptance Criteria

  • Project: :edge:connector:bonescript
  • Verticle: BonescriptService extends ContainerVerticle
  • Websocket server with 4 endpoints: points, history, command, schedule. Use :core:httpserver (see the sketch below)
  • Define a list of EventModel for each kind of data. Be aware that the event models must be local; other services will consume them to distribute the data to the cloud
  • No authentication, because it is a local service
  • Unit Test
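A bare-bones sketch of the websocket endpoints in plain Vert.x; the real verticle should extend ContainerVerticle and go through :core:httpserver, and the port and eventbus addresses here are assumptions:

import io.vertx.core.AbstractVerticle;

// Sketch: websocket server exposing the four local endpoints.
public class BonescriptService extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
            .websocketHandler(ws -> {
                switch (ws.path()) {
                    case "/points":
                    case "/history":
                    case "/command":
                    case "/schedule":
                        // Publish locally only; other services distribute the data to the cloud.
                        ws.handler(buf -> vertx.eventBus()
                            .publish("bonescript" + ws.path(), buf.toJsonObject()));
                        break;
                    default:
                        ws.reject(); // unknown path; no authentication since it is a local service
                }
            })
            .listen(8888); // assumed port
    }
}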

Unifying group_id

There are two types of group_id currently.

  1. com.nubeiot: our package naming convention
  2. com.nubeio: mainly on ModuleTypeRule related packages

This causes a problem when installing on edge devices via BIOS, throwing the message:
{ "code": "INVALID_ARGUMENT", "message": "Artifact is not valid" }

We need to unify this package naming convention everywhere.

Integrate Dashboard server and Edge connector

One client will have one dashboard and manage some edge clusters; one cluster has many devices, and one cluster can contain a different set of edge device services (from other clusters).
So one dashboard will have multiple REST services, one for each cluster.
The tasks are:

  1. Dashboard web must have a model to manage many edge REST clusters
  2. REST for each cluster must be able to manage multiple devices
  3. REST to manage device services
    Ref: #12

Integrate Ditto in Bonescript

Precondition

Acceptance Criteria

  • Project: :edge:connector:bonescript:ditto
  • Verticle: BonescriptDittoService extends BonescriptService
  • Ditto client. Use :core:ditto #57
  • Consume the EventModel defined in #58 for each kind of data, then distribute the data to Ditto
  • Unit Test

Dashboard Edge connector handle cluster edge devices

One client will have one dashboard and manage some edge clusters; one cluster has many devices, and one cluster can contain a different set of edge device services (from other clusters).
So one dashboard will have multiple REST services, one for each cluster.
The tasks are:

  1. Dashboard web must have a model to manage many edge REST clusters
  2. REST for each cluster must be able to manage multiple devices
  3. REST to manage device services (done)
    Ref: #15

Current Situation

  • REST service is project :dashboard:connector:edge
  • Edge device is project :edge:bios
  • Using hazelcast members to join all edge devices and the REST service into one cluster
  • Bad things:
    • a hazelcast member uses too much RAM as the cluster grows
    • the REST service is not treated as the leader; when it goes away, one of the edge devices is elected leader instead

Overview Solution

  • Use a hazelcast client in the edge device
  • Make the REST service the leader, running a hazelcast member
  • Maybe consider Kafka as an alternative to hazelcast

Useful resource

Acceptance criteria

  • Project :core:base
    • Add a listener interface ClientListener, fired when a client joins or leaves, similar to ClusterNodeListener (see the sketch after this list)
    • In IClusterDelegate, add registerClientListener, findClientById, getAllClients
    • Modify ClusterConfig to add listener addresses for the member listener and the client listener. Remember to modify the JSON config file. See here and here
  • Project :core:cluster:hazelcast:client (new project)
    • Add hazelcast-client dependencies
    • Some shared code
  • Project :core:cluster:hazelcast
    • Create HazelcastClientListener implements ClientListener
    • Implement HazelcastClusterDelegate based on changes in IClusterDelegate
    • Add some configuration for Hazelcast client
  • Project :dashboard:connector:edge
    • Design a database to keep information about the list of devices. Use :core:sql (test in H2, production in PostgreSQL)
    • Update REST endpoint to
  • Project :edge:bios
    • Add :core:cluster:hazelcast:client to the dependencies
    • Use the hazelcast client to connect to :dashboard:connector:edge
    • Check that :edge:bios can send/receive data from :dashboard:connector:edge
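A sketch of the proposed ClientListener from the first criterion; the method names are only a suggestion, mirroring the existing ClusterNodeListener:

// Sketch for :core:base: fired by the cluster delegate when a hazelcast
// client joins or leaves; HazelcastClientListener would implement this.
public interface ClientListener {

    /** Called when a client joins the cluster. */
    void clientJoined(String clientId);

    /** Called when a client disconnects from the cluster. */
    void clientLeft(String clientId);
}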

Health Checks and Resource Monitor System

Ditto client

Project should be :core:ditto

Provide:

  • Ditto connection configuration
  • Http client
  • Websocket client
  • Ditto policy for authentication

Will be integrated in :dashboard and :edge

Dashboard Kafka connector and Event bus

The Dashboard Kafka connector plays the consumer role in the edge Kafka streaming. After receiving data from a specific Kafka address, the connector will do 2 tasks:

  • Push the data to PostgreSQL
  • Push the data to a specific event bus address so the dashboard frontend can consume it via websocket

To achieve that:

  • Create a socket server in the Kafka connector (see the consumer sketch below)
  • Create a PostgreSQL core module
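A minimal sketch of the consumer side with the Vert.x Kafka client; the topic name and event bus address are assumptions:

import java.util.HashMap;
import java.util.Map;
import io.vertx.core.AbstractVerticle;
import io.vertx.kafka.client.consumer.KafkaConsumer;

// Sketch: consume edge records, persist them, and forward them on the event bus.
public class DashboardKafkaConnector extends AbstractVerticle {

    @Override
    public void start() {
        Map<String, String> config = new HashMap<>();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("group.id", "dashboard-connector");
        config.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        config.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = KafkaConsumer.create(vertx, config);
        consumer.handler(record -> {
            // Task 1: push to PostgreSQL (via the PostgreSQL core module, omitted here).
            // Task 2: forward to the event bus; the websocket bridge pushes it to the frontend.
            vertx.eventBus().publish("dashboard.data." + record.topic(), record.value());
        });
        consumer.subscribe("edge-data"); // assumed topic
    }
}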

Distinguish PATCH and PUT event in handling of BIOS and Installer

Current situation

  • PATCH ~ HTTP PATCH
  • UPDATE ~ HTTP PUT
    But the business logic is not separated clearly when handling these events.

Expectation
In the BIOS and Installer event handlers:

  • PATCH: partially update database entity properties, touching only those present in the request payload
  • UPDATE: update all database entity properties. Normally this requires the full request payload ~ entity model

These entity properties are updatable (a merge sketch follows the list):

  • version
  • state
  • deploy_config
  • published_by
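A minimal sketch of the two behaviours on an entity represented as JSON; keeping immutable columns such as created_at on UPDATE is what avoids the NULL error shown in the log below:

import io.vertx.core.json.JsonObject;

// Sketch: PATCH merges only the payload's properties; UPDATE replaces the
// whole entity but must carry over immutable columns like created_at.
static JsonObject applyEvent(JsonObject existing, JsonObject payload, boolean isPatch) {
    if (isPatch) {
        JsonObject merged = existing.copy();
        payload.forEach(e -> merged.put(e.getKey(), e.getValue()));
        return merged;
    }
    return payload.copy().put("created_at", existing.getValue("created_at"));
}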

Affected sub-projects

  • :edge:bios
  • :edge:core
  • :edge:module:installer

More information

  • Some error logs when updating
org.jooq.exception.DataAccessException: SQL [update "PUBLIC"."TBL_MODULE" set "PUBLIC"."TBL_MODULE"."SERVICE_NAME" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."MODIFIED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."CREATED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."DEPLOY_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_CONFIG_JSON" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."VERSION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."STATE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_LOCATION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_TYPE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."PUBLISHED_BY" = cast(? as varchar) where (1 = 1 and "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar))]; NULL not allowed for column "CREATED_AT"; SQL statement:
update "PUBLIC"."TBL_MODULE" set "PUBLIC"."TBL_MODULE"."SERVICE_NAME" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."MODIFIED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."CREATED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."DEPLOY_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_CONFIG_JSON" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."VERSION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."STATE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_LOCATION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_TYPE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."PUBLISHED_BY" = cast(? as varchar) where (1 = 1 and "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar)) [23502-197]
        at org.jooq_3.11.8.H2.debug(Unknown Source)
        at org.jooq.impl.Tools.translate(Tools.java:2384)
        at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:822)
        at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:364)
        at org.jooq.impl.AbstractDelegatingQuery.execute(AbstractDelegatingQuery.java:127)
        at io.github.jklingsporn.vertx.jooq.rx.jdbc.JDBCRXGenericQueryExecutor.lambda$execute$1(JDBCRXGenericQueryExecutor.java:46)
        at io.vertx.reactivex.core.Vertx$3.handle(Vertx.java:625)
        at io.vertx.reactivex.core.Vertx$3.handle(Vertx.java:623)
        at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:272)
        at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.h2.jdbc.JdbcSQLException: NULL not allowed for column "CREATED_AT"; SQL statement:
update "PUBLIC"."TBL_MODULE" set "PUBLIC"."TBL_MODULE"."SERVICE_NAME" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."MODIFIED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."CREATED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."DEPLOY_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_CONFIG_JSON" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."VERSION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."STATE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_LOCATION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_TYPE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."PUBLISHED_BY" = cast(? as varchar) where (1 = 1 and "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar)) [23502-197]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
        at org.h2.message.DbException.get(DbException.java:179)
        at org.h2.message.DbException.get(DbException.java:155)
        at org.h2.table.Column.validateConvertUpdateSequence(Column.java:374)
        at org.h2.table.Table.validateConvertUpdateSequence(Table.java:798)
        at org.h2.command.dml.Update.update(Update.java:157)
        at org.h2.command.CommandContainer.update(CommandContainer.java:102)
        at org.h2.command.Command.executeUpdate(Command.java:261)
        at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:249)
        at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
        at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
        at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:209)
        at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:432)
        at org.jooq.impl.AbstractDMLQuery.execute(AbstractDMLQuery.java:613)
        at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:350)
        ... 10 common frames omitted

Ditto response failure on empty body

The Ditto Dashboard Connector is unable to send a response to the client when the body is empty. It shows the error: java.lang.IllegalStateException: You must set the Content-Length header to be the total size of the message body BEFORE sending any data if you are not using HTTP chunked encoding.
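A sketch of two safe ways to finish a Vert.x response without a body:

import io.vertx.core.http.HttpServerResponse;

// Sketch: finish a response that has nothing to return.
static void endWithoutBody(HttpServerResponse response) {
    // Declare a zero Content-Length explicitly before ending...
    response.putHeader("Content-Length", "0").end();
    // ...or prefer 204 No Content when there is genuinely no body:
    // response.setStatusCode(204).end();
}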

Integrate GPIO functionalities

GPIO functionality will be written in C/C++, and we need to be able to call those functions from Java to perform GPIO operations.

Parent ticket: #42

BACnet service

  1. BACnet simulator

    • BACnet simulator
    • Integration test and docker
  2. BACnet service

    • BACnet Verticle
    • BACnet Instance
    • BACnet Config
    • Mock Junit test
    • IP support:
      • by subnet
      • by network interface name
    • BACnet Listener for requests from 3rd party BACnet devices; publish BACnet data to the external NubeIO service
      • COV notification
      • write Object Request
    • NubeIO Listener for external NubeIO service events/requests
      • NubeIO events (global publishes, i.e. point value changes)
      • single BACnet Instance. This is the business function of BACnetService
        • discover / read / write
      • multiple BACnet Instances ~ number of networks

Create JNI core lib

NOTE:
Currently, I have 2 solutions:

  1. C++ <== communicate via JNI interface ==> Java
  2. C++ <== communicate via eventbus port ==> Java

Some drawbacks for each solution:

  1. JNI

    • Hard to maintain the JNI interface and share it between the C++ lib and the Java service, because it requires many steps, which makes the work more complex. For example: generating the C++ header, maintaining the header in both the Java and C++ projects, exception handling, etc.
    • The syntax is not well formatted in C++
    • Java memory when loading a native lib is painful to monitor, especially native memory: OOM can be raised at any time and make the BIOS unresponsive unexpectedly
  2. eventbus

Via the eventbus, there are some advantages if we can port the Java eventbus lib to C++:

  • Decoupled modules: Java and C++ are maintained in 2 different processes, which is easier to maintain in both the development and deployment phases
  • Dynamic registration of many C++ libraries: we can add more C++ libs with minimal effort.

========================================================

Criteria

  • Project :core:jni
  • Core lib provides an interface to load a C/C++ library (*.dll on Windows, *.so on Unixes) by a path given in IConfig, with a fallback to a classpath resource
  • Add tests with some mandatory cases:
    • Load the lib, with an error handler in case the lib is not found (a loader sketch follows this list)
    • C/C++ library must expose some simple methods:
      • void: no args, no output
      • void: has one or more args, no output
      • primitive data type output: (no args, primitive output)
      • primitive data type output: (has one or more args, primitive output)
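A sketch of the Java side covering the four method shapes; the library name and config key are assumptions:

// Sketch for :core:jni: load by explicit path from IConfig, else fall back.
// System.load/loadLibrary throw UnsatisfiedLinkError when the lib is not found,
// which is the error case the tests must cover.
public final class NativeLib {

    static {
        String path = System.getProperty("nubeio.jni.path"); // stands in for IConfig here
        if (path != null) {
            System.load(path);                  // explicit *.so / *.dll path
        } else {
            System.loadLibrary("nubeio_core");  // fallback lookup, assumed lib name
        }
    }

    public static native void ping();           // no args, no output
    public static native void setValue(int v);  // one or more args, no output
    public static native int getValue();        // no args, primitive output
    public static native int add(int a, int b); // one or more args, primitive output
}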

Exception Handler
https://www.developer.com/java/data/exception-handling-in-jni.html
https://www.ibm.com/developerworks/library/j-jni/index.html#exceptions
https://www.angelhernandezm.com/recipe-extract-exception-details-in-java-jvm-from-a-jni-c-solution/
https://www.codeproject.com/Articles/17558/Exception-handling-in-JNI

JNI interesting lib
https://github.com/spotify/JniHelpers
https://github.com/bytedeco/javacpp
https://github.com/mapbox/jni.hpp

Nexus remote service

Make a call to the Nexus server to check the available modules/services from BIOS and BIOS-Installer

  • Configuration: nexus url, nexus username, nexus password
  • Security issue: storing a reusable password
  • Event bus point-to-point mode: only send the message to one BIOS in the cluster (one cluster can have many BIOSes)

Date time response in iso format

Create a generic solution to render date-time responses in ISO-8601 format.
Mock solution: JsonObject convert(JsonObject, String... fieldName)
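A sketch of the mocked method, assuming the named fields hold epoch milliseconds:

import java.time.Instant;
import io.vertx.core.json.JsonObject;

// Sketch: rewrite the given fields as ISO-8601 strings, e.g. 2019-03-01T10:15:30Z.
static JsonObject convert(JsonObject source, String... fieldNames) {
    JsonObject result = source.copy();
    for (String field : fieldNames) {
        Long epochMillis = result.getLong(field);
        if (epochMillis != null) {
            result.put(field, Instant.ofEpochMilli(epochMillis).toString());
        }
    }
    return result;
}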

Create validation base

Basic idea of the validation base is extracted from joi.

Our validation base should be able to validate Java primitives and objects as well as JSON. It should also be able to assign default values.
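A tiny sketch of one joi-style rule with a default value; the API shape is only a suggestion:

import java.util.function.Predicate;

// Sketch: validate a value against a predicate, or fall back to a default.
public class Rule<T> {

    private final Predicate<T> check;
    private final T defaultValue;

    public Rule(Predicate<T> check, T defaultValue) {
        this.check = check;
        this.defaultValue = defaultValue;
    }

    public T validate(T value) {
        if (value == null) {
            return defaultValue;            // assign the default on absence
        }
        if (!check.test(value)) {
            throw new IllegalArgumentException("Invalid value: " + value);
        }
        return value;
    }
}

// Usage: new Rule<Integer>(p -> p > 0 && p < 65536, 8080).validate(null) returns 8080.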

Integrate Kafka in Bonescript

Precondition

  • Finish #59
  • Analyse the data structure of the bonescript data in Ditto, to reuse as much as possible in both dashboard and edge

Acceptance Criteria

  • Project: :edge:connector:bonescript:kafka
  • Verticle: BonescriptKafkaService extends BonescriptService
  • Convert point, command, history, schedule into Kafka records
  • Kafka client. Use :core:kafka
  • Consume the EventModel defined in #58 for each kind of data, then distribute the data to Kafka
  • Unit Test

Standardize request module/service payload in BIOS

Current

{
  "group_id": "com.nubeiot.edge.connector.sample",
  "artifact_id": "kafka",
  "service_name": "edge-kafka-demo",
  "version": "1.0.0-SNAPSHOT",
  "deploy_config": {
    "__kafka__": {
      "__client__": {
        "bootstrap.servers": [
          "localhost:9092"
        ]
      },
      "__security__": {
        "security.protocol": "PLAINTEXT"
      }
    }
  }
}

Expectation

{
  "metadata": {
    "group_id": "com.nubeiot.edge.connector.sample",
    "artifact_id": "kafka",
    "service_name": "edge-kafka-demo",
    "version": "1.0.0-SNAPSHOT"
  },
  "appConfig": {
    "__kafka__": {
      "__client__": {
        "bootstrap.servers": [
          "localhost:9092"
        ]
      },
      "__security__": {
        "security.protocol": "PLAINTEXT"
      }
    }
  }
}

Note: Use RequestedServiceData.java

Startup BIOS module

Current context
When the BIOS/installer starts up, it installs the modules/services that have state ENABLED. However, if the BIOS/installer shuts down or an installation fails unexpectedly, modules/services are left marked as PENDING and the last transaction is left in WIP.

Acceptance criteria
When startup bios/installer:

  • Install the modules/services that satisfy one of these conditions (see the predicate sketch below):
    • State ENABLED
    • State PENDING, with the last transaction in WIP and a prev_state whose action is INIT/CREATE/UPDATE/PATCH (a previous UPDATE or PATCH action should not push the module/service to the DISABLED state)
  • Mark the remaining modules/services in PENDING state as DISABLED
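The install condition as a predicate sketch; plain strings stand in for the real state/action enums:

// Sketch: decide at startup whether a module/service should be (re)installed.
static boolean shouldInstall(String state, String lastTxStatus, String prevAction) {
    if ("ENABLED".equals(state)) {
        return true;
    }
    if ("PENDING".equals(state) && "WIP".equals(lastTxStatus)) {
        // Resume interrupted INIT/CREATE/UPDATE/PATCH; UPDATE and PATCH
        // must not push the module/service to DISABLED.
        return "INIT".equals(prevAction) || "CREATE".equals(prevAction)
            || "UPDATE".equals(prevAction) || "PATCH".equals(prevAction);
    }
    return false; // remaining PENDING modules/services are marked DISABLED
}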

SVG Image Upload REST API

As discussed with @RaiBnod in Hangouts.

Summary

The task is to provide REST API endpoints for the dashboard frontend to upload and manage SVG images and their metadata.

Background

The device-status-visualizer will be implemented as a widget into the dashboard frontend. We want to be able to dynamically add SVG graphics as symbols by uploading and retrieving them from the backend.

Description

The backend must provide endpoints to manage

  • the SVG image files themselves
  • metadata associated with an image file, such as a "title" attribute.

Since it is somewhat clumsy and complicated to upload or download both the image and its metadata in one request, I suggest separate API endpoints: one set for managing the metadata, and another for managing the image files themselves.

Metadata

Each image file will be associated with the following attributes:

  • title [string]
  • category [string]

Entry

Such a metadata entry will usually be represented as JSON, for example:

{
	"id": ...,
	"title": "Light Bulb",
	"category": "Symbols",
	"contentUrl": "http://.../path/to/image.svg"
}

Endpoint overview

<prefix>/media, GET: returns all existing media file entries (see above) as an array of JSON instances
<prefix>/media, POST: adds a new file entry from the provided JSON, and creates a new public file (contentUrl) that can later be replaced with an uploaded SVG via the other API endpoints.
<prefix>/media/{id}, DELETE: deletes the metadata entry of the given id AND its associated file entry (contentUrl)
<prefix>/media/{id}, POST: updates a media file entry of the given id in JSON format (to change title or category)
<prefix>/media-files/{id}, GET: returns the (SVG) file of the given ID in the response body
<prefix>/media-files/{id}, PUT: store/replace the file accessible under this URL with the file submitted in the body of the request

Uploading a new SVG image is a two-step process.

  1. First, a new entry is submitted in JSON format via POST to <prefix>/media. The response from the server (also JSON) includes a contentUrl.
  2. Second, the SVG image is submitted via PUT to the contentUrl returned by step 1. This makes the image accessible under the given URL (see the sketch below).
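A sketch of the two steps with the Vert.x WebClient; host, port and field names are placeholders:

import io.vertx.core.Vertx;
import io.vertx.core.buffer.Buffer;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.client.WebClient;

// Sketch: create the metadata entry, then PUT the SVG to the returned contentUrl.
public class SvgUploadExample {

    public static void upload(Vertx vertx, Buffer svg) {
        WebClient client = WebClient.create(vertx);
        JsonObject entry = new JsonObject()
            .put("title", "Light Bulb")
            .put("category", "Symbols");
        // Step 1: POST the metadata; the response carries the contentUrl.
        client.post(8080, "localhost", "/media").sendJsonObject(entry, ar -> {
            if (ar.succeeded()) {
                String contentUrl = ar.result().bodyAsJsonObject().getString("contentUrl");
                // Step 2: PUT the SVG bytes to the contentUrl.
                client.putAbs(contentUrl)
                      .putHeader("Content-Type", "image/svg+xml")
                      .sendBuffer(svg, done -> client.close());
            }
        });
    }
}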

Endpoints

Same as above in more detail, inspired by the RAML format:

/media:
  get:
    description: |
      returns all existing media file entries
    responses:
      200:
        example: |
          [
            {
              id: "abc",
              title: "Symbol Title",
              category: "Something",
              contentUrl: "http://.../files/123"
            },
            {
              id: "xyz",
              title: "Other Symbol",
              category: "Something",
              contentUrl: "http://.../files/130"
            }
          ]
  post:
    description: |
      Add a new media file entry
    body:
      example: |
        {
          title: "Symbol Title",
          category: "Something",
        }
    responses:
      200:
        body:
          example: |
            {
              success: true,
              contentUrl: "http://.../files/123"
            }
  /{id}:
    delete:
      description: |
        deletes the media metadata entry AND its associated file entry (contentUrl)
    post:
      description: |
        updates the entry metadata
      example: |
        {
          title: "Symbol Title",
          category: "Something",
        }

/files:
  /{id}:
    get:
      description: |
        returns the file in the response body
    put:
      description: |
        store/replace the file accessible under this URL with the file submitted in the body of the request
