annetteplatform / annette
Platform to build distributed, scalable, enterprise-wide business applications
Home Page: https://annetteplatform.github.io/
License: Apache License 2.0
There are many use cases where the mapping and analysis parameters of an Elastic index need to be customised according to user requirements.
To provide this functionality, configuration of Elastic index parameters should be implemented. When the Elastic event processor starts, it should perform the following activities:
The index configuration for the person category could be defined as follows:
elastic {
  connection {
    url = "https://localhost:9200"
    url = ${?ELASTIC_URL}
    username = "admin"
    username = ${?ELASTIC_USERNAME}
    password = "admin"
    password = ${?ELASTIC_PASSWORD}
    allowInsecure = true
    allowInsecure = ${?ELASTIC_ALLOW_INSECURE}
  }
  personCategoryIndex {
    index = ${?ELASTIC_PREFIX}person-category
    index = ${?PERSON_CATEGORY_INDEX}
    mappings {
      id {
        field = id
        type = keyword
      }
      name {
        field = name
        type = text
        fielddata = true
        analyzer = name_analyzer
        searchAnalyzer = standard
        fields {
          english {
            field = english
            type = text
            analyzer = english
          }
          keyword {
            type = keyword
          }
        }
      }
      updatedAt {
        field = updatedAt
        type = date
      }
    }
  }
}
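As a sketch of how the event processor might consume this configuration at startup, the connection block can be loaded with the Typesafe Config API that Lagom already ships with. The case class and method names below are assumptions for illustration, not existing Annette code:

import com.typesafe.config.{Config, ConfigFactory}

// Hypothetical model of the elastic.connection block above.
case class ElasticConnection(
  url: String,
  username: String,
  password: String,
  allowInsecure: Boolean
)

object ElasticConnection {
  // Reads the connection settings; the ${?ENV_VAR} substitutions in the HOCON
  // override the defaults when the corresponding variables are set.
  def fromConfig(root: Config = ConfigFactory.load()): ElasticConnection = {
    val c = root.getConfig("elastic.connection")
    ElasticConnection(
      url           = c.getString("url"),
      username      = c.getString("username"),
      password      = c.getString("password"),
      allowInsecure = c.getBoolean("allowInsecure")
    )
  }
}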
To provide authenticated and authorized access to media files (images, video, audio content) and documents (PDF and office files), we cannot use a JWT token, as it would require complicated development effort on the frontend. To simplify the solution, it is better to use a cookie to authenticate access to media files and documents. Cookie usage should be restricted to file access purposes only.
A session cookie should be issued after successful JWT authentication. It should contain the primary principal and the token expiration time.
To provide cookie authentication, a CookieAuthenticatedAction should be implemented. It should perform authentication using the cookie: extract the principal from the cookie and validate the expiration time. A sketch of such an action follows.
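A minimal sketch, assuming Play Framework (which the Annette API gateway is built on). The session keys, the CookieAuthenticatedRequest wrapper, and the error response are assumptions for illustration:

import java.time.Instant
import javax.inject.Inject
import scala.concurrent.{ExecutionContext, Future}
import scala.util.Try
import play.api.mvc._

// Hypothetical request type carrying the principal extracted from the session cookie.
class CookieAuthenticatedRequest[A](val principal: String, request: Request[A])
    extends WrappedRequest[A](request)

class CookieAuthenticatedAction @Inject() (val parser: BodyParsers.Default)(implicit
    val executionContext: ExecutionContext
) extends ActionBuilder[CookieAuthenticatedRequest, AnyContent] {

  override def invokeBlock[A](
      request: Request[A],
      block: CookieAuthenticatedRequest[A] => Future[Result]
  ): Future[Result] = {
    // The session cookie is expected to hold the primary principal and an
    // expiration timestamp (epoch seconds), set after JWT authentication.
    val principalOpt = for {
      principal <- request.session.get("principal")
      expiredAt <- request.session.get("expiredAt")
      expiresAtSec <- Try(expiredAt.toLong).toOption
      if Instant.now.isBefore(Instant.ofEpochSecond(expiresAtSec))
    } yield principal

    principalOpt match {
      case Some(principal) =>
        block(new CookieAuthenticatedRequest(principal, request))
      case None =>
        Future.successful(Results.Unauthorized("Missing or expired session cookie"))
    }
  }
}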
The project structure should be refactored. There are several options.
Both options assume that ignition functionality specific to a microservice is extracted to a separate subproject (an sbt sketch using the Option B layout follows the trees below).
Option A - by microservice
api-gateway
  api-gateway
  api-gateway-core
application
  application
  application-api
  application-api-gateway
  application-ignition
authorization
  authorization
  authorization-api
  authorization-api-gateway
  authorization-ignition
cms
  cms
  cms-api
  cms-api-gateway
  cms-ignition
core
  core
  microservice-core
ignition
  ignition-core
  ignition-demo
org-structure
  org-structure
  org-structure-api
  org-structure-api-gateway
  org-structure-ignition
persons
  persons
  persons-api
  persons-api-gateway
  persons-ignition
principal-groups
  principal-groups
  principal-groups-api
  principal-groups-api-gateway
  principal-groups-ignition
subscriptions
  subscriptions
  subscriptions-api
  subscriptions-ignition
Option B - by functional area
api-gateway
  api-gateway
  application-api-gateway
  authorization-api-gateway
  cms-api-gateway
  org-structure-api-gateway
  persons-api-gateway
  principal-groups-api-gateway
application
  application
  application-api
  application-ignition
authorization
  authorization
  authorization-api
  authorization-ignition
cms
  cms-blogs
  cms-blogs-api
  cms-blogs-ignition
  cms-pages
  cms-pages-api
  cms-pages-ignition
  subscriptions
  subscriptions-api
  subscriptions-ignition
core
  api-gateway-core
  core
  microservice-core
ignition
  ignition-core
  ignition-demo
principals
  org-structure
  org-structure-api
  org-structure-ignition
  persons
  persons-api
  persons-ignition
  principal-groups
  principal-groups-api
  principal-groups-ignition
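For illustration, a fragment of how the Option B layout might be wired in build.sbt. The directory paths follow the tree above, while the inter-module dependencies are assumptions, not the actual Annette build:

// build.sbt fragment (a sketch, not the actual Annette build)
lazy val `microservice-core` = project in file("core/microservice-core")

lazy val `ignition-core` = project in file("ignition/ignition-core")

lazy val `persons-api` = (project in file("principals/persons-api"))
  .dependsOn(`microservice-core`)

lazy val persons = (project in file("principals/persons"))
  .dependsOn(`persons-api`)

lazy val `persons-ignition` = (project in file("principals/persons-ignition"))
  .dependsOn(`persons-api`, `ignition-core`)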
The org-structure microservice can log the exception `Batch too large`:
07:49:33.882 [warn] akka.stream.scaladsl.RestartWithBackoffSource [akkaAddress=akka://[email protected]:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=RestartWithBackoffSource(akka://application), sourceActorSystem=application, akkaTimestamp=07:49:33.882UTC] - Restarting graph due to failure. stack_trace:
java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
  at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:553)
  at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:514)
  at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2.$anonfun$run$2(package.scala:50)
  at scala.util.Try$.apply(Try.scala:210)
  at akka.persistence.cassandra.package$ListenableFutureConverter$$anon$2.run(package.scala:50)
  at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
  at com.datastax.driver.core.Responses$Error.asException(Responses.java:181)
  at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:215)
  at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:236)
  at com.datastax.driver.core.RequestHandler.access$2600(RequestHandler.java:62)
  at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:1005)
  at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:808)
  at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1240)
  at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1158)
  at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
  at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
  at com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:38)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
  at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
  at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
  at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
  at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
  at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
  at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
  ... 1 more
One event processed by HierarchyDbEventProcessor and HierarchyCassandraDbDao can produce a lot of `BatchStatement`s, which together can exceed the Cassandra batch size limit.
This issue is described in lagom/lagom#1715
Since it is not yet fixed in the Lagom repository, it should be worked around in Annette. The solution is to call session.executeWrite directly in the HierarchyCassandraDbDao event processor methods and have these methods return an empty Seq[BatchStatement]. As this issue can arise in other microservices, it is a good idea to apply the same solution in the other *CassandraDbDao classes. A sketch of the change follows.
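A hedged sketch of the proposed change, assuming Lagom's scaladsl CassandraSession; the method name and signature are assumptions that mirror the description above, not the actual Annette code:

import com.datastax.driver.core.BatchStatement
import com.lightbend.lagom.scaladsl.persistence.cassandra.CassandraSession
import scala.collection.immutable.Seq
import scala.concurrent.{ExecutionContext, Future}

class HierarchyCassandraDbDao(session: CassandraSession)(implicit ec: ExecutionContext) {

  // Hypothetical event processor method. Instead of returning the statements
  // to be combined into a single oversized Cassandra batch, each statement is
  // written directly via session.executeWrite, and an empty Seq is returned
  // so the read-side processor has nothing left to batch.
  def updateHierarchy(statements: Seq[BatchStatement]): Future[Seq[BatchStatement]] =
    Future
      .sequence(statements.map(stmt => session.executeWrite(stmt)))
      .map(_ => Seq.empty)
}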
Currently, custom attributes of entities are defined using an attribute schema in the attribute microservice. Attribute schema updates and attribute values are transferred to the target microservice via Apache Kafka. Schema changes and attribute assignment are not synchronized, so this approach can cause inconsistency, especially for indexed data.
To solve this issue the following changes are proposed:
To illustrate this approach, let's use the Person entity as an example. In this example we define the following attributes: birthDate, gender, isMarried, salary, and education. The custom attribute schema is defined as follows:
attributes {
  person-schema {
    birthDate {
      # type specifies attribute datatype: string, boolean, int, double, decimal, local-date, local-time,
      # offset-datetime, json
      type = local-date
      # caption-text specifies attribute caption
      # caption-text = Birth Date
      # caption-code specifies attribute caption code for translation. caption-code has priority over caption-text
      # caption-code = annette.person.attribute.birthDate
      # index defines a reference to an index alias. If index is not defined the attribute will not be indexed
      index = birthDate
      # read-side-persistence specifies an attribute with persistence on the read side only. Default value is false
      # read-side-persistence = false
    }
    gender {
      type = string
      # subtype defines detailed type information
      subtype = gender
      # allowed-values specifies values that can be assigned to the attribute
      allowed-values = [ "M", "F" ]
      caption-text = Gender
      index = gender
    }
    isMarried {
      type = boolean
      caption-text = Is Married
      index = isMarried
    }
    salary {
      type = decimal
      caption-text = Salary
    }
    education {
      type = json
      subtype = education
      caption-text = Education
      read-side-persistence = true
    }
  }
}
The method that provides the attribute schema:
def getPersonMeta: ServiceCall[NotUsed, Seq[AttributeMetadata]]
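A possible shape for AttributeMetadata, derived from the schema keys above; the field names and defaults are assumptions for illustration:

// Hypothetical metadata record mirroring one entry of person-schema above.
case class AttributeMetadata(
  name: String,                            // e.g. "birthDate"
  `type`: String,                          // string, boolean, int, double, decimal, local-date, ...
  subtype: Option[String] = None,          // detailed type information, e.g. "gender"
  allowedValues: Seq[String] = Seq.empty,  // allowed-values from the schema
  captionText: Option[String] = None,
  captionCode: Option[String] = None,      // has priority over captionText
  index: Option[String] = None,            // index alias; None means not indexed
  readSidePersistence: Boolean = false
)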
Get methods that return entities should return them with the specified attributes:
def getPersonById(id: PersonId, fromReadSide: Boolean = true, withAttributes: Option[String]): ServiceCall[NotUsed, Person]
pathCall("/api/persons/v1/getPersonById/:id/:fromReadSide?withAttributes", getPersonById _)
To get a person with particular attributes, use the following URL:
GET http://localhost:9000/api/persons/v1/getPersonById/P0001/false?withAttributes=birthDate,gender,salary
To get all attributes, use:
GET http://localhost:9000/api/persons/v1/getPersonById/P0001/false?withAttributes=all
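A sketch of how the service implementation might interpret the withAttributes parameter; the helper name and the handling of unknown attribute names are assumptions:

// Resolve the requested attribute names against the schema.
def parseAttributes(withAttributes: Option[String], schemaAttributes: Set[String]): Set[String] =
  withAttributes.map(_.trim) match {
    case None | Some("") => Set.empty          // no attributes requested
    case Some("all")     => schemaAttributes   // return every attribute in the schema
    case Some(list)      => list.split(",").map(_.trim).toSet.intersect(schemaAttributes)
  }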
The CMS microservice and API gateway should be rewritten to implement the following ideas: