IoT Engine
3rd party comm modules (BACnet, Modbus, Haystack) need a REST API to communicate with from the configuration dashboard.
Instead of building a separate API for each module, this will standardise communication and save on resources by building it all into one dynamic API.
Add more test coverage for UnitVerticle and ContainerVerticle
The BoneScript micro-service will start on the edge device and includes an H2 database for storing the Ditto Thing. To sync that database content with the edge device, we need to take care of the following two things:
First-time initialization: it should be able to create a skeleton Ditto object and start the process.
Already initialized: we need to read the points from the DB and set those values.
Parent ticket: #42
Brief
Using Docker, instead of mounting a config file, Config should be able to read from Java System Properties and Environment Variables, then override the default config when starting NubeLauncher.
Foreseen problem: the Unit verticle environment may conflict with the Container verticle.
Acceptable:
Java system properties are lowercase and separated by dot .
Environment variables are uppercase and separated by underscore _
On ClassCastException, fall back to the default config.
Override precedence (lowest to highest): default config file < provided config file < Java system properties < Environment variables
For example:
{
  "__app__": {
    "__http__": {
      "host": "0.0.0.0"
    }
  }
}
nubeio.app.http.host=0.0.0.0
NUBEIO_APP_HTTP_HOST=0.0.0.0
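The key mapping and override precedence above can be sketched in plain Java. This is a minimal illustration, not the NubeLauncher implementation; the key names follow the example above.

```java
import java.util.Optional;

public class ConfigResolver {
    // Convert a dotted property key to the equivalent environment variable name:
    // "nubeio.app.http.host" -> "NUBEIO_APP_HTTP_HOST"
    public static String toEnvKey(String propertyKey) {
        return propertyKey.replace('.', '_').toUpperCase();
    }

    // Resolve a value with the precedence from the ticket (lowest to highest):
    // default value < Java system property < environment variable.
    public static String resolve(String propertyKey, String defaultValue) {
        String fromProp = System.getProperty(propertyKey);
        String fromEnv = System.getenv(toEnvKey(propertyKey));
        return Optional.ofNullable(fromEnv)
                       .orElse(Optional.ofNullable(fromProp).orElse(defaultValue));
    }
}
```

For instance, `resolve("nubeio.app.http.host", "0.0.0.0")` returns the env var `NUBEIO_APP_HTTP_HOST` when set, otherwise the system property, otherwise the default.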
Refers:
a Java jar file/module that can be deployed on the Edge device that makes the edge a Haystack compatible device, allowing it to discover other controllers/points on the network and also enabling it to be discovered by other Haystack devices (details need to be hashed out further).
Show IoT device resource status (usage/available) on demand:
RAM | CPU | Storage | OS name | OS version | Kernel version | Local IP | Public IP
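Most of these values are readable from the JDK alone. The sketch below gathers what the stdlib exposes; public IP and kernel version need an external lookup (e.g. `uname -r`), which is out of scope here.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.net.InetAddress;

public class ResourceStatus {
    public static void main(String[] args) throws Exception {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        Runtime rt = Runtime.getRuntime();

        System.out.println("OS name:       " + os.getName());
        System.out.println("OS version:    " + os.getVersion());
        System.out.println("CPU cores:     " + rt.availableProcessors());
        System.out.println("JVM free RAM:  " + rt.freeMemory() / (1024 * 1024) + " MB");
        System.out.println("JVM total RAM: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("Storage free:  " + new java.io.File("/").getFreeSpace() / (1024 * 1024) + " MB");
        System.out.println("Local IP:      " + InetAddress.getLocalHost().getHostAddress());
        // Public IP and kernel version require an external service / shell call.
    }
}
```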
Need to introduce:
:edge:module:monitor
MonitorEventModel class in :eventbus:edge for sharing the eventbus address between the dashboard connector and the edge device
RestEventAPI in :dashboard:connector:edge for getting requests from users
BIOS: a mechanism similar to :edge:module:installer
MUST READ #35 BEFORE STARTING
Move the explicit configuration for each executable connector from bios.db to a JSON config file in the demo folder.
Add a HALT event and replace the PATCH event in BIOS and BIOS Installer:
:edge:bios
:edge:core
:edge:module:installer
:dashboard:connector:edge
We need to have a scheduler running in the background. It will run at a certain interval (daily/weekly/bi-weekly, should be configurable) and its job is to detect unused files and remove them from the system.
The basic idea is: collection a holds all known files, while collection x, collection y, and collection z hold the files in use. Compute (collection a) - (collection x + collection y + collection z
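The cleanup rule — remove (collection a) minus (collection x + collection y + collection z) — can be sketched with plain sets. The collection names come from the ticket; actual file-system deletion is omitted.

```java
import java.util.HashSet;
import java.util.Set;

public class UnusedFileDetector {
    // Unused files = (all known files) minus (files referenced by any collection).
    public static Set<String> unused(Set<String> allFiles,
                                     Set<String> x, Set<String> y, Set<String> z) {
        Set<String> used = new HashSet<>(x);
        used.addAll(y);
        used.addAll(z);
        Set<String> result = new HashSet<>(allFiles);
        result.removeAll(used);
        return result;                // candidates for removal from the system
    }
}
```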
a Java jar file/module that can be deployed on the Edge device that makes the edge a Modbus compatible device, allowing it to discover other controllers/points on the network and also enabling it to be discovered by other Modbus devices.
This module would be an "app" within the iot engine.
Communication over event bus
Allows communication between other existing Modbus devices
Reads/updates points from the bonescript api
Features to be added currently:
First step for Bonescript API
Acceptance Criteria
:edge:connector:bonescript
BonescriptService extends ContainerVerticle
points, history, command, schedule. Use :core:httpserver
EventModel for each kind of data. Be aware that the event model must be local; it will be consumed by other services to distribute the data to the cloud
There are two types of group_id currently:
com.nubeiot: our package naming convention
com.nubeio: mainly on ModuleTypeRule related packages
It's causing a problem when installing edge devices on bios, throwing the message:
{ "code": "INVALID_ARGUMENT", "message": "Artifact is not valid" }
Need to unify this package name convention everywhere.
To support the frontend ticket: https://github.com/NubeIO/dashboard/issues/1, we need certain extra fields on the Site model so that different sites can save their layouts dynamically.
Fields needed:
1 client will have one dashboard and manage some edge cluster devices; 1 cluster has many devices; 1 cluster can contain a different set of edge device services (from other clusters).
So 1 dashboard will have multiple REST services for each cluster.
So these tasks will be:
Current Situation
REST service is project :dashboard:connector:edge
Edge device is project :edge:bios
A hazelcast member is used to join all edge devices and the REST service in one cluster
The hazelcast member uses too much RAM when the cluster grows
The REST service is not considered as leader; when it is gone, one of the edge devices is selected as leader
Overview Solution
Use a hazelcast client in the edge device
Keep the REST service as leader with a hazelcast member
Kafka as an alternative for hazelcast
Useful resource
Acceptance criteria
:core:base
ClientListener: notified when a client joins or leaves the cluster. Similar to ClusterNodeListener
IClusterDelegate: add registerClientListener, findClientById, getAllClients
ClusterConfig: add listener addresses for the member listener and the client listener. Remember to modify the JSON config file. See here and here
:core:cluster:hazelcast:client (new project)
hazelcast-client dependencies
:core:cluster:hazelcast
HazelcastClientListener implements ClientListener
HazelcastClusterDelegate based on changes in IClusterDelegate
Hazelcast client
:dashboard:connector:edge
:core:sql (test in h2, production in postgres)
REST endpoint
:edge:bios
:core:cluster:hazelcast:client in dependencies
hazelcast client to connect to :dashboard:connector:edge
:edge:bios can send/receive data from :dashboard:connector:edge
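A minimal sketch of the proposed ClientListener / IClusterDelegate contract. The method names (registerClientListener, findClientById, getAllClients) come from the ticket; the signatures and the in-memory registry are assumptions, standing in for the Hazelcast-backed implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical listener contract: fired when a client joins or leaves the cluster.
interface ClientListener {
    void onClientJoined(String clientId);
    void onClientLeft(String clientId);
}

// In-memory stand-in for the Hazelcast-backed delegate described in the ticket.
class InMemoryClusterDelegate {
    private final Map<String, String> clients = new ConcurrentHashMap<>();
    private ClientListener listener = new ClientListener() {
        public void onClientJoined(String clientId) { }
        public void onClientLeft(String clientId) { }
    };

    void registerClientListener(ClientListener l) { this.listener = l; }

    void join(String clientId, String address) {
        clients.put(clientId, address);
        listener.onClientJoined(clientId);   // notify e.g. the REST service leader
    }

    String findClientById(String clientId) { return clients.get(clientId); }

    Map<String, String> getAllClients() { return Map.copyOf(clients); }
}
```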
Java resource monitor system features:
Monitor CPU, Memory, Storage
Alert levels: CRITICAL, HIGH, LOW
Publish via eventbus
Must understand memory concepts and garbage collection in Java
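The ticket names the alert levels but not their thresholds, so the cut-off percentages below are purely illustrative assumptions:

```java
public class AlertLevel {
    // Hypothetical thresholds: CRITICAL/HIGH/LOW come from the ticket,
    // the percentages are assumptions and should be configurable.
    public static String classify(double usedPercent) {
        if (usedPercent >= 90.0) return "CRITICAL";
        if (usedPercent >= 75.0) return "HIGH";
        return "LOW";
    }
}
```

The resulting level could then be published on the eventbus alongside the raw CPU/Memory/Storage readings.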
Some useful resources to get started:
Project should be :core:ditto
Provide:
Will be integrated in :dashboard
and :edge
Need to convert our existing Node.js bonescript-api on our Vert.x Java MicroService base as an edge component.
The Dashboard Kafka connector plays the consumer role in edge kafka streaming. After receiving data from a specific kafka address, the connector will do 2 tasks:
persist the data into postgresql
publish it to an event bus address so the dashboard frontend can consume it via websocket
To achieve that:
socket server in the kafka connector
postgreSQL core module
Need event bus functionality for all other local edge verticles to access data instead of using HTTP and needing to sign in and pass a token
REST server
Current situation
PATCH ~ HTTP PATCH
UPDATE ~ HTTP PUT
Expectation
In Bios and Installer event handlers:
PATCH: update database entity properties partially, only for the properties present in the request payload
UPDATE: update all database entity properties. Normally the full request payload ~ entity model is required
These entity properties are update-able:
version
state
deploy_config
published_by
Affected sub-projects
:edge:bios
:edge:core
:edge:module:installer
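The PATCH-vs-UPDATE distinction can be sketched as a map merge (plain maps stand in for the jOOQ entities; field names are from the ticket). Note that PATCH must skip null payload fields — otherwise a NOT NULL column like CREATED_AT gets overwritten with NULL, which is exactly the failure in the stack trace below.

```java
import java.util.HashMap;
import java.util.Map;

public class EntityMerge {
    // PATCH semantics: only non-null keys present in the payload overwrite the entity.
    public static Map<String, Object> patch(Map<String, Object> entity,
                                            Map<String, Object> payload) {
        Map<String, Object> merged = new HashMap<>(entity);
        payload.forEach((key, value) -> {
            if (value != null) {   // skip nulls so columns like CREATED_AT survive
                merged.put(key, value);
            }
        });
        return merged;
    }

    // UPDATE (PUT) semantics: the payload replaces the whole entity.
    public static Map<String, Object> update(Map<String, Object> payload) {
        return new HashMap<>(payload);
    }
}
```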
More information
org.jooq.exception.DataAccessException: SQL [update "PUBLIC"."TBL_MODULE" set "PUBLIC"."TBL_MODULE"."SERVICE_NAME" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."MODIFIED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."CREATED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."DEPLOY_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_CONFIG_JSON" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."VERSION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."STATE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_LOCATION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_TYPE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."PUBLISHED_BY" = cast(? as varchar) where (1 = 1 and "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar))]; NULL not allowed for column "CREATED_AT"; SQL statement:
update "PUBLIC"."TBL_MODULE" set "PUBLIC"."TBL_MODULE"."SERVICE_NAME" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."MODIFIED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."CREATED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."DEPLOY_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_CONFIG_JSON" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."VERSION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."STATE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_LOCATION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_TYPE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."PUBLISHED_BY" = cast(? as varchar) where (1 = 1 and "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar)) [23502-197]
at org.jooq_3.11.8.H2.debug(Unknown Source)
at org.jooq.impl.Tools.translate(Tools.java:2384)
at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:822)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:364)
at org.jooq.impl.AbstractDelegatingQuery.execute(AbstractDelegatingQuery.java:127)
at io.github.jklingsporn.vertx.jooq.rx.jdbc.JDBCRXGenericQueryExecutor.lambda$execute$1(JDBCRXGenericQueryExecutor.java:46)
at io.vertx.reactivex.core.Vertx$3.handle(Vertx.java:625)
at io.vertx.reactivex.core.Vertx$3.handle(Vertx.java:623)
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:272)
at io.vertx.core.impl.TaskQueue.run(TaskQueue.java:76)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.h2.jdbc.JdbcSQLException: NULL not allowed for column "CREATED_AT"; SQL statement:
update "PUBLIC"."TBL_MODULE" set "PUBLIC"."TBL_MODULE"."SERVICE_NAME" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."MODIFIED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."CREATED_AT" = cast(? as timestamp), "PUBLIC"."TBL_MODULE"."DEPLOY_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_CONFIG_JSON" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."VERSION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."STATE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."DEPLOY_LOCATION" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."SERVICE_TYPE" = cast(? as varchar), "PUBLIC"."TBL_MODULE"."PUBLISHED_BY" = cast(? as varchar) where (1 = 1 and "PUBLIC"."TBL_MODULE"."SERVICE_ID" = cast(? as varchar)) [23502-197]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:357)
at org.h2.message.DbException.get(DbException.java:179)
at org.h2.message.DbException.get(DbException.java:155)
at org.h2.table.Column.validateConvertUpdateSequence(Column.java:374)
at org.h2.table.Table.validateConvertUpdateSequence(Table.java:798)
at org.h2.command.dml.Update.update(Update.java:157)
at org.h2.command.CommandContainer.update(CommandContainer.java:102)
at org.h2.command.Command.executeUpdate(Command.java:261)
at org.h2.jdbc.JdbcPreparedStatement.execute(JdbcPreparedStatement.java:249)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.execute(ProxyPreparedStatement.java:44)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.execute(HikariProxyPreparedStatement.java)
at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:209)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:432)
at org.jooq.impl.AbstractDMLQuery.execute(AbstractDMLQuery.java:613)
at org.jooq.impl.AbstractQuery.execute(AbstractQuery.java:350)
... 10 common frames omitted
The Ditto Dashboard Connector is unable to send the response to the client when we have an empty body to send. It shows the error as: java.lang.IllegalStateException: You must set the Content-Length header to be the total size of the message body BEFORE sending any data if you are not using HTTP chunked encoding.
The GPIO functionality will be written in C/C++, and we need to be able to call those functions from Java to perform GPIO operations.
Parent ticket: #42
BACnet simulator
docker
BACnet service
IP support: subnet, network interface name
3rd party BACnet device: publish BACnet data to the external NubeIO service
NubeIO service events/requests
BACnet Instance: the business function of BACnetService. The number of BACnet Instances ~ the number of networks
NOTE:
Currently, I have 2 solutions:
C++ <== communicate via JNI interface ==> Java
C++ <== communicate via eventbus port ==> Java
Some drawbacks for each solution:
JNI: tight coupling between the C++ lib and the Java service, because it requires many steps and makes the work more complex. For example: generating C++ headers, maintaining headers in both the Java and C++ projects, exception handling, etc. C++ uses native memory, so an OOM can be raised at any time and make bios unresponsive unexpectedly.
eventbus: need to port the Vertx eventbus. Via eventbus, there are some advantages if we can port a C++ eventbus lib from Java:
java and c++ run in 2 different processes, which is easy to maintain in both the development and deployment phases
reuse the C++ lib with minimum effort
========================================================
Criteria
:core:jni
Load the C/C++ library (*.dll on Windows or *.so on Unixes) by a given path from IConfig, with fallback to a classpath resource
The C/C++ library must expose some simple methods:
void: no args, no output
void: one or more args, no output
primitive data type output: no args, primitive output
primitive data type output: one or more args, primitive output
Exception Handler
https://www.developer.com/java/data/exception-handling-in-jni.html
https://www.ibm.com/developerworks/library/j-jni/index.html#exceptions
https://www.angelhernandezm.com/recipe-extract-exception-details-in-java-jvm-from-a-jni-c-solution/
https://www.codeproject.com/Articles/17558/Exception-handling-in-JNI
JNI interesting lib
https://github.com/spotify/JniHelpers
https://github.com/bytedeco/javacpp
https://github.com/mapbox/jni.hpp
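The "path from IConfig, then classpath resource" loading rule above can be sketched as follows. This is an assumption-laden illustration: the method and parameter names are invented, and a real version would pick the `.dll`/`.so` suffix per platform.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NativeLoader {
    // Hypothetical loader: try the absolute path from config first,
    // then fall back to a library bundled as a classpath resource.
    public static boolean load(String configuredPath, String classpathResource) {
        try {
            System.load(configuredPath);          // absolute path from IConfig
            return true;
        } catch (Throwable fromPath) {
            try (InputStream in = NativeLoader.class.getResourceAsStream(classpathResource)) {
                if (in == null) return false;     // resource not bundled
                Path tmp = Files.createTempFile("native-", ".so");
                Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
                System.load(tmp.toAbsolutePath().toString());
                return true;
            } catch (Throwable fromResource) {
                return false;
            }
        }
    }
}
```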
:core:common
System.out
We have a list of REST operations and need to convert those API endpoints.
jvm options to run java on Beaglebone Black
hazelcast client instead of hazelcast member on edge
REST API generated path improvement
Make a call to the Nexus server to check available modules/services on BIOS and BIOS-Installer
BIOS in cluster (one cluster can have many BIOS)
Swagger for bonescript API endpoints.
Parent ticket: #42
Create a generic solution to apply the date-time response in iso8601 format.
Mock solution: JsonObject convert(JsonObject, String... fieldName)
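The mocked `convert(JsonObject, String... fieldName)` can be sketched with a plain Map standing in for the Vert.x JsonObject. The assumption that the named fields hold epoch milliseconds is mine, not the ticket's:

```java
import java.time.Instant;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;

public class Iso8601Converter {
    // For each named field, rewrite an epoch-millis number as an ISO-8601 string.
    public static Map<String, Object> convert(Map<String, Object> json, String... fieldNames) {
        Map<String, Object> out = new HashMap<>(json);
        for (String field : fieldNames) {
            Object value = out.get(field);
            if (value instanceof Number) {
                Instant instant = Instant.ofEpochMilli(((Number) value).longValue());
                out.put(field, DateTimeFormatter.ISO_INSTANT.format(instant));
            }
        }
        return out;
    }
}
```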
These Java/Maven groups are acceptable for installing service:
BIOS
: com.nubeiot.edge.module
BIOS-installer:
com.nubeio.edge.connector
com.nubeio.edge.rule
The basic idea of the validation base is extracted from joi.
Our validation base should be able to validate Java primitives and objects as well as JSON, and it should also be able to assign some default values.
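A tiny joi-style sketch of what that could look like — a rule plus a message plus an optional default value. All names here are hypothetical, not the project's actual API:

```java
import java.util.function.Predicate;

public class Validator<T> {
    private final Predicate<T> rule;
    private final String message;
    private final T defaultValue;

    public Validator(Predicate<T> rule, String message, T defaultValue) {
        this.rule = rule;
        this.message = message;
        this.defaultValue = defaultValue;
    }

    // Apply the default when input is null, then enforce the rule.
    public T validate(T input) {
        T value = (input == null) ? defaultValue : input;
        if (!rule.test(value)) {
            throw new IllegalArgumentException(message);
        }
        return value;
    }
}
```

For example, a port validator: `new Validator<Integer>(p -> p > 0 && p < 65536, "invalid port", 8080)` accepts 443, rejects -1, and returns 8080 for null input.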
Precondition
Ditto: reuse as much as possible in both dashboard and edge
Acceptance Criteria
:edge:connector:bonescript:kafka
BonescriptKafkaService extends BonescriptService
Wrap point, command, history, schedule in Kafka records
:core:kafka
Use the EventModel defined in #58 for each kind of data, then distribute the data to kafka
Current
{
"group_id": "com.nubeiot.edge.connector.sample",
"artifact_id": "kafka",
"service_name": "edge-kafka-demo",
"version": "1.0.0-SNAPSHOT",
"deploy_config": {
"__kafka__": {
"__client__": {
"bootstrap.servers": [
"localhost:9092"
]
},
"__security__": {
"security.protocol": "PLAINTEXT"
}
}
}
}
Expectation
{
"metadata": {
"group_id": "com.nubeiot.edge.connector.sample",
"artifact_id": "kafka",
"service_name": "edge-kafka-demo",
"version": "1.0.0-SNAPSHOT"
},
"appConfig": {
"__kafka__": {
"__client__": {
"bootstrap.servers": [
"localhost:9092"
]
},
"__security__": {
"security.protocol": "PLAINTEXT"
}
}
  }
}
Note: Use RequestedServiceData.java
Current context
When bios/installer starts up, it will install the modules/services that have state ENABLED. However, if bios/installer shuts down or fails installing unexpectedly, modules/services are marked as PENDING and the last transaction is left in WIP.
Acceptance criteria
When bios/installer starts up:
Re-install modules/services that satisfy one of these conditions:
state is ENABLED
state is PENDING and the last transaction is WIP with a prev_state whose action is INIT/CREATE/UPDATE/PATCH (an UPDATE or PATCH prev action should not mark the module/service to DISABLED state)
Mark the remaining modules/services in PENDING state to DISABLED
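The recovery decision from the acceptance criteria above can be sketched as a pure function (enum and method names are my own; the states and actions come from the ticket):

```java
public class StartupRecovery {
    enum State { ENABLED, PENDING, DISABLED }
    enum Action { INIT, CREATE, UPDATE, PATCH, REMOVE }

    // Reinstall ENABLED services, and PENDING services whose last WIP
    // transaction came from INIT/CREATE/UPDATE/PATCH; anything else
    // left in PENDING should be marked DISABLED instead.
    public static boolean shouldReinstall(State state, boolean lastTxWip, Action prevAction) {
        if (state == State.ENABLED) return true;
        if (state == State.PENDING && lastTxWip) {
            return prevAction == Action.INIT || prevAction == Action.CREATE
                || prevAction == Action.UPDATE || prevAction == Action.PATCH;
        }
        return false;
    }
}
```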
As discussed with @RaiBnod in Hangouts.
The task is to provide REST API endpoints for the dashboard frontend to upload and manage SVG images and their metadata.
The device-status-visualizer will be implemented as a widget into the dashboard frontend. We want to be able to dynamically add SVG graphics as symbols by uploading and retrieving them from the backend.
The backend must provide endpoints to manage
Since it is somewhat clumsy and complicated to upload or download both image + metadata in one request, I suggest different API endpoints for the management of the metadata, and on the other hand, for the management of the image files themselves.
Each image file will be associated with the following attributes:
Such a metadata entry will usually be represented in the form of JSON, for example:
{
"id": ...,
"title": "Light Bulb",
"category": "Symbols",
"contentUrl": "http://.../path/to/image.svg"
}
<prefix>/media
, GET
: returns all existing media file entries (see above) as an array of JSON instances
<prefix>/media
, POST
: Adds a new file entry from the provided JSON, and creates a new public file (contentUrl
) that can be changed to an uploaded SVG via other API endpoints.
<prefix>/media/{id}
, DELETE
: deletes the metadata entry of the given id
AND its associated file entry (contentUrl
)
<prefix>/media/{id}
, POST
: update a media file entry of the given id
in JSON format (to change title
or category
)
<prefix>/media-files/{id}
, GET
: returns the (SVG) file of the given ID in the response body
<prefix>/media-files/{id}
, PUT
: store/replace the file accessible under this URL with the file submitted in the body of the request
Uploading a new SVG image is a two-step process:
1. POST the metadata JSON to <prefix>/media. The response from the server (also JSON) includes a contentUrl.
2. Upload the SVG file to the contentUrl returned by step 1. This makes the image accessible under the given URL.
Same as above in more detail, inspired by the raml format:
/media:
get:
description: |
returns all existing media file entries
responses:
200:
example: |
[
{
id: "abc",
title: "Symbol Title",
category: "Something",
contentUrl: "http://.../files/123"
},
{
id: "xyz",
title: "Other Symbol",
category: "Something",
contentUrl: "http://.../files/130"
}
]
post:
description: |
Add a new media file entry
body:
example: |
{
title: "Symbol Title",
category: "Something",
}
responses:
200:
body:
example: |
{
success: true,
contentUrl: "http://.../files/123"
}
/{id}:
delete:
description: |
deletes the media metadata entry AND its associated file entry (contentUrl)
post:
description: |
updates the entry metadata
example: |
{
title: "Symbol Title",
category: "Something",
}
/files:
/{id}:
get:
description: |
returns the file in the response body
put:
description: |
store/replace the file accessible under this URL with the file submitted in the body of the request
This should cover:
JWT
OAuth2 flow
httpclient to connect to an external auth service (e.g. KeyCloak)
httpserver and websocket
We have changed the media_files life cycle on ticket: #61. To support that ticket, we need to create a migration file for MongoDB.
Instead of the prefix group com.nubeiot for all sub projects, the gradle build script auto-generates com.nubeiot.iot-engine.
It will affect Nexus deployment and the installer rule.