localega-deploy-swarm's Introduction

Docker Swarm Deployment

Prerequisites

The deployment tool of LocalEGA for Docker Swarm is based on Gradle, so you will need Gradle 5 installed on your machine in order to use it. On macOS with Homebrew this can be done by executing brew install gradle. Please refer to the official documentation for instructions on other platforms.
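
To check which Gradle version is on your PATH:

gradle --version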

Make sure the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy files are set up.
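
As a quick sanity check (assuming a JDK with jrunscript on the PATH), the following one-liner prints the maximum allowed AES key length; with the unlimited policy in place it should print 2147483647 rather than 128:

jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'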

Structure

The Gradle project has the following groups of tasks:

  • cluster - code related to Docker Machine and Docker Swarm cluster provisioning
  • cega - "fake" CentralEGA bootstrapping and deployment code
  • lega - main LocalEGA microservices bootstrapping and deployment code
  • swarm - root project aggregating both cega and lega
  • test - sample test case: generating a file, encrypting it, uploading to the inbox and ingesting it

Cluster provisioning

A Docker Swarm cluster can be provisioned using the gradle provision command. Provisioning is done via Docker Machine. Two providers are supported at the moment: virtualbox (the default) and openstack.

To provision the cluster in OpenStack, you need an OpenStack configuration file with the required settings filled in (there's a sample file called openstack.properties.sample in the project folder). The command then looks like this: gradle provision -PopenStackConfig=/absolute/path/to/openstack.properties.
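
The exact keys are defined by openstack.properties.sample; purely as an illustration (the property names below are hypothetical and merely mirror the usual Docker Machine OpenStack driver options), such a file might look like:

# Hypothetical keys - consult openstack.properties.sample for the real ones
openstackAuthUrl=https://cloud.example.org:5000/v3
openstackUsername=myuser
openstackTenantName=myproject
openstackFlavorName=m1.medium
openstackImageName=Ubuntu-18.04
openstackNetworkName=private-net
openstackSecurityGroups=default
openstackSshUser=ubuntu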

Note that it may take a while to provision the cluster in OpenStack. To see how many nodes are ready, run gradle list. By default, the machine names are cega and lega.
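
Since provisioning goes through Docker Machine, the machines can also be inspected directly, e.g. with the default names:

docker-machine ls
docker-machine ip cega
docker-machine ip lega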

gradle destroy will remove all the virtual machines and destroy the cluster.

Bootstrapping

Here's an example of bootstrapping with local VMs (VirtualBox driver). Note that LEGA Private should be bootstrapped before LEGA Public, because LEGA Public depends on some LEGA Private configs:

gradle :cega:createConfiguration
gradle :lega-private:createConfiguration
gradle :lega-public:createConfiguration

This can be replaced with a single command gradle bootstrap.

If the Docker Machine VM names are not the defaults (i.e. not cega and lega), you will have to pass additional parameters:

gradle :cega:createConfiguration -Pmachine=<CEGA_MACHINE_NAME>
gradle :lega-private:createConfiguration -Pmachine=<LEGA_PRIVATE_MACHINE_NAME>
gradle :lega-public:createConfiguration -Pmachine=<LEGA_PUBLIC_MACHINE_NAME> -PcegaIP=$(docker-machine ip <CEGA_MACHINE_NAME>) -PlegaPrivateIP=$(docker-machine ip <LEGA_PRIVATE_MACHINE_NAME>)

During bootstrapping, two test users are generated: john and jane. Credentials, keys and other config information can be found under the .tmp folder of each subproject.
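
For example, assuming the subproject directories match the Gradle project names, the generated material can be listed with:

ls cega/.tmp lega-private/.tmp lega-public/.tmp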

Deploying

After successful bootstrapping, deploying should be as simple as:

gradle :cega:deployStack
gradle :lega-private:deployStack
gradle :lega-public:deployStack

This can be replaced with a single command gradle deploy.

You can also use the -Pmachine=<MACHINE_NAME> option with any of these commands to specify the machine name.

To make sure that the system is deployed you can execute gradle ls.
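
Alternatively, you can point a Docker client at one of the machines and inspect the deployed stacks directly (a sketch, assuming the default lega machine name; stack and service names depend on the bootstrapped configuration):

eval "$(docker-machine env lega)"
docker stack ls
docker service ls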

gradle :cega:removeStack, gradle :lega-private:removeStack and gradle :lega-public:removeStack will remove the deployed stacks (while preserving the bootstrapped configuration). To clean the configurations and remove the stacks, you can use a script like this:

gradle :cega:removeStack
gradle :cega:clearConfiguration
gradle :lega-private:removeStack
gradle :lega-private:clearConfiguration
gradle :lega-public:removeStack
gradle :lega-public:clearConfiguration

Testing

There's a built-in simple test to check that the basic scenario works fine. Try executing gradle ingest after a successful deployment to check that ingestion works. It will automatically generate a 10 MB file, encrypt it with Crypt4GH, upload it to the inbox of the test user john, ingest the file and check that it has successfully landed in the vault.

Note that in case of non-standard machine names, additional parameters will be required: gradle ingest -PcegaIP=$(docker-machine ip <CEGA_MACHINE_NAME>) -PlegaPublicIP=$(docker-machine ip <LEGA_PUBLIC_MACHINE_NAME>) -PlegaPrivateIP=$(docker-machine ip <LEGA_PRIVATE_MACHINE_NAME>)
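
Putting it all together, a full end-to-end run with default machine names looks roughly like this (the sleep gives the services time to start before ingestion):

gradle provision
gradle bootstrap
gradle deploy
sleep 160
gradle ingest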

Demo

There's a short recorded demo with explanations of the provisioning and deployment process: Demo

There's also an updated Asciinema recording: asciicast

localega-deploy-swarm's Issues

Network separation

Description

We want to create a production-like deployment using Docker Swarm, where each stack is deployed to its own Swarm.

Definition of Done

Green build on Jenkins.

How to test

gradle provision
gradle bootstrap
gradle deploy
sleep 160
gradle ingest

Speed up Jenkins pipeline by putting aside some services and switching to real CEGA connectivity

Description

Currently, a full Jenkins build takes approximately 15 minutes. This is due to creating 3 VMs in OpenStack (CEGA, LEGA Public, LEGA Private) and spinning up many components there. We spin up not only the LocalEGA microservices but also a PostgreSQL database, RabbitMQ brokers, S3 services, etc.

In the real setup, we will not have any machine for the CEGA part, because there will be no fake CEGA - we will talk to the real CEGA. Also, we will not spin up the PostgreSQL database and S3 services - we will use the services provided by TSD. So we want to omit spinning up these parts of the setup.

Definition of Done

  • VMs (both temp and staging) use the real CEGA servers for both MQ and user authentication (hellgate.crg.eu and egatest.crg.eu).
  • We deploy PostgreSQL database and S3 services "once and forever" and then just reuse them every build.

How to test

Make sure that pipeline on Jenkins is green (e2e test is passing).

Staging environment

Description

We would like to set up a so-called "staging environment": a place where LocalEGA will be running permanently in a production-like setup (Norwegian production-like setup, to be precise).

This should include:

  • Network separation
  • Machines' firewall configuration (to simulate TSD restrictions)
  • Linking it to the Jenkins pipeline so that Docker images in this environment are automatically updated on successful integration builds.

Definition of Done

Having a working environment in OpenStack.

How to test

Manual smoke-testing.

Verification fails because of some key-server issue

Expected Behavior

Verification is successful after ingestion happened.

Current Behavior

Verification fails with [lega.utils.db][CRITICAL] (L288) Keyserver error: Expected: ASCII-armored PGP data

Possible Solution

I guess it's some protocol issue between verify and keyserver.

Steps to Reproduce

  1. Perform ingestion.
  2. See the logs of verify.
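
To see the logs of the verify service, something like the following can be used (the stack prefix is a placeholder; check docker service ls for the actual service name):

docker service logs <LEGA_PRIVATE_STACK>_verify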

Context (Environment)

Swarm deployment.

Adapt Swarm deployment for use in TSD

Description

The deployment should be more loosely coupled to make it more applicable to the TSD environment.

Proposed solution

Decouple the deployment into two parts (public and private) and get rid of Docker Machine.

Definition of Done

Tests are green.

Support S3 backend for Inbox and Ingest

Description

After this PR is merged, we need to enable these changes in the Swarm deployment: EGA-archive/LocalEGA#22

Proposed solution

We enable the S3 backend for the Inbox and make the ingestion microservice ingest files from this staging S3.

Definition of Done

The end-to-end test in this repo should be passing on Jenkins.

Fix build after Mina SSL changes

Expected Behavior

The build on Jenkins is green.

Current Behavior

The build fails.

Possible Solution

INBOX_JKS_PASSWORD -> KEYSTORE_PASSWORD
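
A minimal sketch of the rename, assuming the variable only appears in files under the project directory:

grep -rl INBOX_JKS_PASSWORD . | xargs sed -i 's/INBOX_JKS_PASSWORD/KEYSTORE_PASSWORD/g'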

Steps to Reproduce

  1. Run build on Jenkins.
  2. See results.

Replace Mediator component with intermediate RabbitMQ broker

Description

Conclusions of the meeting with the USIT team (who will provide us with the environment in TSD for the LocalEGA deployment): they will enable the RabbitMQ protocol for us (AMQPS), so we don't need the Mediator anymore. Instead, they want to set up an intermediate message broker between the LocalEGA broker and the CentralEGA broker. It will also reside on their premises, but outside of the TSD project itself. They want to have it as an extra isolation layer to add more fine-tuned security rules (firewall rules, packet filters, etc.)
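
For illustration only (not necessarily how USIT will configure it): such an intermediate broker can relay messages transparently with, for example, a dynamic shovel; the URIs and queue names below are placeholders:

rabbitmqctl set_parameter shovel cega-to-lega '{"src-uri": "amqps://user:pass@cega.example.org/vhost", "src-queue": "from_cega", "dest-uri": "amqps://user:pass@lega.example.org/vhost", "dest-queue": "to_lega"}'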

Definition of Done

The Mediator should be replaced with another broker, and this should be transparent to all existing components (they should not know about this broker).

How to test

The end-to-end test in this repo should be passing on Jenkins.
