annetteplatform / annette

Platform to build distributed, scalable, enterprise-wide business applications

Home Page: https://annetteplatform.github.io/

License: Apache License 2.0

Languages: Scala 97.13%, Dockerfile 0.01%, Shell 0.20%, FreeMarker 2.66%

Topics: annette-platform, enterprise, ecosystem, lagom-framework, distributed-systems, microservices, reactive, scala, akka, headless

annette's People

Contributors: valerylobachev

annette's Issues

Define Elastic index parameters (mapping & analysis) in config

There are many use cases where the mapping and analysis parameters of an Elastic index need to be customised according to user requirements.

To provide this functionality, configuration of Elastic index parameters should be implemented. When the Elastic event processor starts, it should perform the following activities:

  1. Load index configuration
  2. Load Elastic index parameters from Elastic Cluster
  3. Compare configuration with Elastic index parameters and apply index changes in Elastic Cluster (if possible)

Index configuration for the person category could be defined as follows (a sketch of the startup sequence is shown after the example):

elastic {
  connection {
    url = "https://localhost:9200"
    url = ${?ELASTIC_URL}
    username = "admin"
    username = ${?ELASTIC_USERNAME}
    password = "admin"
    password = ${?ELASTIC_PASSWORD}
    allowInsecure = true
    allowInsecure = ${?ELASTIC_ALLOW_INSECURE}
  }
  personCategoryIndex {
    index = ${?ELASTIC_PREFIX}person-category
    index = ${?PERSON_CATEGORY_INDEX}
    mappings {
      id {
        field = id
        type = keyword
      }
      name {
        field = name
        type = text
        fielddata = true
        analyzer = name_analyzer
        searchAnalyzer = standard
        fields {
          english {
            field = english
            type = text
            analyzer = english
          }
          keyword {
            type = keyword
          }
        }
      }
      updatedAt {
        field = updatedAt
        type = date
      }
    }
  }
}
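
A minimal sketch of this startup sequence, assuming a hypothetical IndexDao abstraction over the Elastic cluster (the Typesafe Config calls are real; the dao methods, their names, and the index-diff logic are assumptions):

import com.typesafe.config.{Config, ConfigFactory}
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical abstraction over the Elastic cluster; method names are assumptions.
trait IndexDao {
  def currentParams(index: String): Future[Config]               // mapping & analysis currently in Elastic
  def applyChanges(index: String, desired: Config): Future[Unit] // apply the difference, if possible
}

class ElasticIndexInitializer(dao: IndexDao)(implicit ec: ExecutionContext) {

  def init(): Future[Unit] = {
    // 1. Load index configuration
    val conf      = ConfigFactory.load().getConfig("elastic.personCategoryIndex")
    val indexName = conf.getString("index")
    for {
      // 2. Load Elastic index parameters from the Elastic cluster
      current <- dao.currentParams(indexName)
      // 3. Compare configuration with index parameters and apply changes (if possible)
      _       <- if (current == conf) Future.unit else dao.applyChanges(indexName, conf)
    } yield ()
  }
}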

Implement authentication using cookie

To provide authenticated & authorized access to media files (images, video, audio content) and documents (PDF and office files), we cannot use a JWT token, as it would require complicated development effort on the frontend. To simplify the solution, it would be better to use a cookie to authenticate access to media files and documents. Cookie usage should be restricted to file access purposes only.

A session cookie should be issued after successful JWT authentication. It should contain the primary principal and the token expiration time.

To provide cookie authentication, a CookieAuthenticatedAction should be implemented. It should authenticate the request using the cookie, extract the principal from it, and validate the expiration time. A sketch is shown below.
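
A minimal sketch of such an action, assuming Play Framework and a cookie that stores the principal and the expiration time as plain values (the cookie name and layout are assumptions, and a real implementation would at least sign or encrypt the cookie):

import java.time.Instant
import javax.inject.Inject
import play.api.mvc._
import scala.concurrent.{ExecutionContext, Future}

// Hypothetical cookie layout: "<primary principal>:<expiration as epoch seconds>".
class CookieAuthenticatedAction @Inject() (val parser: BodyParsers.Default)(
    implicit val executionContext: ExecutionContext
) extends ActionBuilder[Request, AnyContent] {

  private val CookieName = "annette-file-session" // assumption

  override def invokeBlock[A](request: Request[A], block: Request[A] => Future[Result]): Future[Result] =
    request.cookies.get(CookieName).map(_.value.split(":", 2)) match {
      case Some(Array(principal, expiration)) if notExpired(expiration) =>
        // Principal is available here; a real implementation would pass it on in a wrapped request.
        block(request)
      case _ =>
        Future.successful(Results.Unauthorized("Session cookie is missing, malformed or expired"))
    }

  private def notExpired(expiration: String): Boolean =
    expiration.toLongOption.exists(e => Instant.ofEpochSecond(e).isAfter(Instant.now()))
}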

Refactor project structure

The project structure should be refactored. There are several options:

  • Option A - organize subprojects by microservice
  • Option B - organize subprojects according to functional area

Both options assume that ignition functionality specific to a microservice should be extracted into a separate subproject.

Option A - by microservice

  • api-gateway: api-gateway, api-gateway-core
  • application: application, application-api, application-api-gateway, application-ignition
  • authorization: authorization, authorization-api, authorization-api-gateway, authorization-ignition
  • cms: cms, cms-api, cms-api-gateway, cms-ignition
  • core: core, microservice-core
  • ignition: ignition-core, ignition-demo
  • org-structure: org-structure, org-structure-api, org-structure-api-gateway, org-structure-ignition
  • persons: persons, persons-api, persons-api-gateway, persons-ignition
  • principal-groups: principal-groups, principal-groups-api, principal-groups-api-gateway, principal-groups-ignition
  • subscriptions: subscriptions, subscriptions-api, subscriptions-ignition

Option B - by functional area

  • api-gateway: api-gateway, application-api-gateway, authorization-api-gateway, cms-api-gateway, org-structure-api-gateway, persons-api-gateway, principal-groups-api-gateway
  • application: application, application-api, application-ignition
  • authorization: authorization, authorization-api, authorization-ignition
  • cms: cms-blogs, cms-blogs-api, cms-blogs-ignition, cms-pages, cms-pages-api, cms-pages-ignition, subscriptions, subscriptions-api, subscriptions-ignition
  • core: api-gateway-core, core, microservice-core
  • ignition: ignition-core, ignition-demo
  • principals: org-structure, org-structure-api, org-structure-ignition, persons, persons-api, persons-ignition, principal-groups, principal-groups-api, principal-groups-ignition

Exception "Batch too large" in org-structure microservice

The org-structure microservice can log the exception Batch too large:

07:49:33.882 [warn] akka.stream.scaladsl.RestartWithBackoffSource [akkaAddress=akka://[email protected]:25520, sourceThread=application-akka.actor.default-dispatcher-19, akkaSource=RestartWithBackoffSource(akka://application), sourceActorSystem=application, akkaTimestamp=07:49:33.882UTC] - Restarting graph due to failure. stack_trace:
--
  | java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
  | at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:553)
  | at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:514)
  | at akka.persistence.cassandra.package$ListenableFutureConverter$anon$2.$anonfun$run$2(package.scala:50)
  | at scala.util.Try$.apply(Try.scala:210)
  | at akka.persistence.cassandra.package$ListenableFutureConverter$anon$2.run(package.scala:50)
  | at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)
  | at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  | at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  | at java.base/java.lang.Thread.run(Thread.java:834)
  | Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Batch too large
  | at com.datastax.driver.core.Responses$Error.asException(Responses.java:181)
  | at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:215)
  | at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:236)
  | at com.datastax.driver.core.RequestHandler.access$2600(RequestHandler.java:62)
  | at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:1005)
  | at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:808)
  | at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1240)
  | at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1158)
  | at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  | at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  | at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  | at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
  | at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  | at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
  | at com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:38)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  | at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
  | at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
  | at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
  | at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
  | at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
  | at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714)
  | at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:650)
  | at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:576)
  | at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
  | at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
  | at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
  | at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
  | ... 1 more

One event processed by HierarchyDbEventProcessor & HierarchyCassandraDbDao can produce a lot of `BatchStatement`s, which can exceed the Cassandra batch size limit.

This issue is described in lagom/lagom#1715
As long as it is not solved in the Lagom repo, it should be solved in Annette. The solution is to use session.executeWrite in the HierarchyCassandraDbDao event processor methods; these methods should then return an empty Seq[BatchStatement].

As this issue can arise in other microservices, it is a good idea to apply the same solution in the other *CassandraDbDao classes. A sketch is shown below.
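
A minimal sketch of the proposed change, assuming Lagom's scaladsl CassandraSession and the DataStax 3.x driver seen in the stack trace (the method name and the statement-building helper are assumptions):

import akka.Done
import com.datastax.driver.core.BatchStatement
import com.lightbend.lagom.scaladsl.persistence.cassandra.CassandraSession
import scala.collection.immutable.Seq
import scala.concurrent.{ExecutionContext, Future}

class HierarchyCassandraDbDao(session: CassandraSession)(implicit ec: ExecutionContext) {

  // Hypothetical event-processor method called by HierarchyDbEventProcessor.
  // Instead of returning all generated statements (which Lagom wraps into a single
  // batch that can exceed the Cassandra batch size limit), execute each statement
  // directly with session.executeWrite and return an empty Seq[BatchStatement].
  def updateHierarchy(/* event: HierarchyEvent */): Future[Seq[BatchStatement]] = {
    val statements: Seq[BatchStatement] = buildStatements() // existing statement-building logic
    Future
      .traverse(statements)(stmt => session.executeWrite(stmt))
      .map((_: Seq[Done]) => Seq.empty[BatchStatement])
  }

  private def buildStatements(): Seq[BatchStatement] = Seq.empty // placeholder for illustration
}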

Provide custom entity attributes in microservices where entity is located

Currently, custom entity attributes are defined using an attribute schema in the attribute microservice. Attribute schema updates and attribute values are transferred to the target microservice using Apache Kafka. Attribute schema changes and attribute assignments aren't synchronized, so this approach can cause inconsistency, especially for indexed data.

To solve this issue, the following changes are proposed:

  1. Remove the attribute microservice
  2. Define the entity's custom attribute schema directly in the configuration of the microservice where the entity is located.
  3. Implement methods that provide the attribute schema (attribute metadata) in the microservice API.
  4. Store attribute values in the entity (write-side persistence) and in the entity's table (read-side persistence). To decrease the size of the entity, some attributes can be stored on the read side only.
  5. Get methods that return entities should return them with the specified attributes
  6. Find queries should include attribute selection criteria (a sketch is shown at the end of this section)

To illustrate this approach, let's use the Person entity as an example. In this example we define the following attributes:

  1. birthDate - date
  2. gender - string with the following values: M/F
  3. isMarried - boolean
  4. salary - decimal
  5. education - json

The custom attribute schema is defined as follows:

attributes {
  person-schema {

    birthDate {
      # type specifies attribute datatype: string, boolean, int, double, decimal, local-date, local-time,
      # offset-datetime, json
      type = date

      # caption-text specifies attribute caption 
      # caption-text = Birth Date
      # caption-code specifies attribute caption code for translation. caption-code has priority over caption-text
      # caption-code = annette.person.attribute.birthDate

      # index defines reference to index alias. If index is not defined, the attribute will not be indexed
      index = birthDate

      # read-side-persistence specifies attribute with persistence on read-side only. Default value is false
      # read-side-persistence = false  
    }

    gender {
      type = string
      # subtype defines detailed type information
      subtype = gender
      
      # allowed-values specifies values that can be assigned to attribute
      allowed-values = [ "M", "F"]
     
      caption-text = Gender     
      index = gender
    }
    
    isMarried {
      type = boolean     
      caption-text = Is Married     
      index = isMarried
    }
    
    salary {
      type = decimal     
      caption-text = Salary     
    }

    education {
      type = json
      subtype = education     
      caption-text = Education  
      read-side-persistence = true
    }
  }
}

Method that provides the attribute schema:

def getPersonMeta: ServiceCall[NotUsed, Seq[AttributeMetadata]]
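
A possible shape of AttributeMetadata, mirroring the configuration fields above (the exact field names and types are assumptions):

// Hypothetical metadata item returned by getPersonMeta; fields mirror the configuration
// keys (type, subtype, allowed-values, caption-text, caption-code, index, read-side-persistence).
case class AttributeMetadata(
  name: String,                          // attribute name, e.g. "birthDate"
  `type`: String,                        // string, boolean, int, double, decimal, local-date, ...
  subtype: Option[String] = None,        // detailed type information, e.g. "gender"
  allowedValues: Seq[String] = Seq.empty,
  captionText: Option[String] = None,
  captionCode: Option[String] = None,    // has priority over captionText
  index: Option[String] = None,          // reference to index alias; None means not indexed
  readSidePersistence: Boolean = false
)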

Get methods that return entities should return them with the specified attributes:

def getPersonById(id: PersonId, fromReadSide: Boolean = true, withAttributes: Option[String]): ServiceCall[NotUsed, Person]

pathCall("/api/persons/v1/getPersonById/:id/:fromReadSide?withAttributes",  getPersonById _)

To get a person with attributes, use the following URL:

GET http://localhost:9000/api/persons/v1/getPersonById/P0001/false?withAttributes=birthDate,gender,salary

If you want to get all attributes, use:

GET http://localhost:9000/api/persons/v1/getPersonById/P0001/false?withAttributes=all
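
For item 6 of the proposal, a find query could carry attribute selection criteria alongside the usual paging and filter fields; a minimal sketch (all names are assumptions, not the existing Annette API):

// Hypothetical attribute criterion: match an attribute against one or more values.
case class AttributeFilter(
  attribute: String,        // attribute name, e.g. "gender"
  values: Seq[String]       // values to match, e.g. Seq("F")
)

// Hypothetical find query that combines a free-text filter with attribute criteria.
case class PersonFindQuery(
  filter: Option[String] = None,
  attributeFilters: Seq[AttributeFilter] = Seq.empty,
  withAttributes: Option[String] = None,   // attributes to return, e.g. "birthDate,gender" or "all"
  offset: Int = 0,
  size: Int = 10
)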

Rewrite CMS

The CMS microservice and API gateway should be rewritten to implement the following ideas:

  • The blog entity should be extracted from the space entity
  • Wiki functionality is not a main priority at this stage and should be removed. It could be implemented later, after a redesign.
  • A page entity should be implemented. A page is an entity that implements a landing page with user content
  • Content should be represented as an ordered sequence of widgets that implement certain functionality.
  • A widget is the basic building block of content design. It could be simple HTML or Markdown content, a more complicated design block like Froala Design Blocks (see https://froala.com/design-blocks/) or Tilda Blocks (https://tilda.cc/), or even a small application that provides certain functionality
  • Widgets should be implemented on the frontend, and the CMS should provide a flexible interface to store widget data and make its content searchable (a possible content model is sketched below)
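
A possible content model for the widget-based approach, assuming widget payloads are stored opaquely and a plain-text projection is kept for search indexing (all names are assumptions):

// Hypothetical widget: the payload is produced by the frontend and stored as-is,
// while indexData carries the plain text that the CMS indexes for search.
case class Widget(
  id: String,
  widgetType: String,            // e.g. "html", "markdown", "froala-block", or a custom application id
  data: String,                  // widget payload, e.g. serialized JSON
  indexData: Option[String]      // plain text extracted for search indexing
)

// Content of a page or post: an ordered sequence of widgets.
case class Content(widgets: Seq[Widget])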
