
Mobile Services Installer

This repo contains an Ansible playbook for installing Mobile Services into an existing OpenShift 3.11 instance. It also contains scripts for local development of Mobile Services (using Minishift or oc cluster up).

Prerequisites:

  • Ansible 2.7.6 or later
  • Running instance of OpenShift 3.11
    • If you are using Minishift, it is recommended to allocate at least 6 vCPUs and 6 GB of memory to it.
  • Cluster-admin access to targeted OpenShift instance
  • oc client v3.11
  • If you are using Minishift, or if the OpenShift cluster doesn't already have a secret for https://registry.redhat.io, a service account is required to access that registry.
    • This is because the IDM service uses productized images that are stored in this registry.
    • To get a service account, go to https://registry.redhat.io, click the Service Accounts tab in the top right corner, and log in with your Red Hat developer account. Click New Service Account to create one, and take note of the username and password.
    • For more information, please check Accessing and Configuring the Red Hat Registry.
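If you use Minishift, the recommended resources can be set before the first start. A minimal sketch (the unit-suffixed memory value assumes a Minishift version that accepts it; older versions take megabytes):

```shell
# Allocate the recommended resources to Minishift before its first start
minishift config set cpus 6
minishift config set memory 6GB
minishift start
```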

Installation

  1. Open a terminal and make sure you are logged in to the target OpenShift cluster as a cluster-admin using oc.

  2. Use git to clone https://github.com/aerogear/mobile-services-installer and cd into the repo.

  3. Run the installation playbook:

    ansible-playbook install-mobile-services.yml \
      -e registry_username="<registry_service_account_username>" \
      -e registry_password="<registry_service_account_password>" \
      -e openshift_master_url="<public_url_of_openshift_master>"
    

    If the cluster can already access the Red Hat container registry, you can skip the part that sets up the pull secrets:

    ansible-playbook install-mobile-services.yml \
      -e openshift_master_url="<public_url_of_openshift_master>" \
      --skip-tags "pullsecret"
    
  4. You will also need to update the CORS configuration of the OpenShift cluster to allow the mobile developer console to communicate with the OpenShift API server (you only need to do this once).

    • If you are using minishift, you should run this script.
    • Otherwise, you should run this playbook to update the master config of the OpenShift cluster. To run this playbook, you need to:
      1. Get the host names of the master nodes by running oc get nodes, and take note of them.
      2. Copy the sample hosts inventory file and update it with the correct host names for the master nodes.
      3. Make sure you can SSH into the master nodes from your workstation.
      4. Run the playbook and specify the inventory file:
        ansible-playbook -i ./inventories/hosts update-master-cors-config.yml
        
        Please note that this playbook will restart the API server and controllers of the OpenShift cluster.
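For reference, the CORS change amounts to adding the console's origin to corsAllowedOrigins in master-config.yaml on each master. The sketch below only illustrates the idea against a local stand-in file; the route host is hypothetical, and the playbook performs the real change for you:

```shell
# Illustration only: create a local stand-in for /etc/origin/master/master-config.yaml
CONFIG=master-config.yaml
cat > "$CONFIG" <<'EOF'
corsAllowedOrigins:
- (?i)//127\.0\.0\.1(:|\z)
- (?i)//localhost(:|\z)
EOF

# Hypothetical route host of the mobile developer console
ORIGIN='- (?i)//mdc-mobile-developer-services\.example\.com(:|\z)'

# Append the origin only if it is not already present (idempotent)
grep -qF -- "$ORIGIN" "$CONFIG" || printf '%s\n' "$ORIGIN" >> "$CONFIG"
```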

Set up services for a demo

If you also want to set up all the required services for a demo, run this playbook:

ansible-playbook setup-demo.yml

This playbook will:

  • Provision the showcase data sync server into a specified namespace.
  • Create a mobile client for the ionic-showcase app.
  • Bind all the available services to the showcase app (if no push information is provided, the push service won't be bound).
  • Make sure the showcase server app is protected by the IDM service and supports file upload.
  • Set up the following users in the IDM service:
    • User 1:
      • username: admin
      • password: admin
      • realm role: admin
      • client role for the showcase app: admin
    • User 2:
      • username: developer
      • password: developer
      • realm role: developer
      • client role for the showcase app: developer

You can then log in to the Mobile Developer Console, copy the configuration for the showcase app, and paste it into the mobile-services.json file of the showcase client app.
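The pasted configuration is essentially a list of service configurations. The exact contents come from the Mobile Developer Console; the fragment below only illustrates the general shape, and every value (and some field names) is a placeholder, not real output:

```json
{
  "version": 1,
  "clusterName": "https://openshift.example.com:8443",
  "namespace": "mobile-developer-services",
  "clientId": "ionic-showcase",
  "services": [
    {
      "id": "example-id",
      "name": "keycloak",
      "type": "keycloak",
      "url": "https://keycloak.example.com/auth",
      "config": {}
    }
  ]
}
```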

Local development

By following the steps below, you can spin up a local OpenShift instance with Mobile Services already installed.

๐Ÿง Linux

You may need to configure your firewall first:

sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --permanent --add-port=8053/tcp
sudo firewall-cmd --permanent --add-port=53/udp
sudo firewall-cmd --permanent --add-port=443/tcp
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --reload

Download the archive with the oc client binary, extract it, add it to your $PATH, and run:

export REGISTRY_USERNAME=<registry_service_account_username>
export REGISTRY_PASSWORD=<registry_service_account_password>
./scripts/oc-cluster-up.sh

See OpenShift documentation for more details.
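The download-and-extract step might look like this on Linux (the release URL and filename are assumptions; verify them against the openshift/origin GitHub releases page and pick the archive for your platform):

```shell
# Fetch the oc v3.11 client (URL/filename are assumptions; check the
# openshift/origin releases page for the archive matching your platform)
OC_URL=https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
curl -LO "$OC_URL"
tar -xzf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
# Put oc somewhere on your $PATH
sudo mv openshift-origin-client-tools-*/oc /usr/local/bin/
oc version
```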

๐ŸŽ Mac

Since oc cluster up has been causing problems on macOS (since OpenShift 3.10), it is advised to use Minishift as an alternative.

To spin up an OpenShift 3.11 cluster locally, run:

export REGISTRY_USERNAME=<registry_service_account_username>
export REGISTRY_PASSWORD=<registry_service_account_password>
./scripts/minishift.sh

Once the setup is complete, you can stop the cluster with minishift stop and start it again with minishift start.

See Minishift documentation for more details.

Contributors

b1zzu, danielpassos, evanshortiss, grdryn, jhellar, jstaffor, opuk, psturc, secondsun, stephencoady, stephinrachel, wtrocki, ziccardi


Issues

scripts/oc-cluster-up.sh not working on CentOS 7 and Ubuntu 18.04

Hello everyone,
I am currently trying to set up the AeroGear environment with oc cluster up. I have tried the installer in several ways: first as described in the README on this site using the master branch, and also the way described for the old 2.0.0 release here:

https://docs.aerogear.org/aerogear/Native/getting-started-installing.html

Unfortunately without success so far. I'd like to focus on the master branch in the following. Below is the output of the installer on a fresh CentOS minimal install (I also installed the ansible and net-tools packages).
I always see the same behavior: "Wait for IDM DB pod to be ready" times out. When I look into the mobile-developer-services namespace, I see that the keycloak pod was deployed successfully but sso-postgresql failed.

Can anyone point out what I am doing wrong, or how I can work around this issue?

[root@openshift mobile-services-installer]# ./scripts/oc-cluster-up.sh --public-ip 192.168.0.4 --registry-username $REGISTRY_USERNAME --registry-password $REGISTRY_PASSWORD
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I1124 16:55:46.127869    4241 config.go:40] Running "create-master-config"
I1124 16:55:50.974000    4241 config.go:46] Running "create-node-config"
Wrote config to: "/root/mobile-services-installer/scripts/../openshift.local.clusterup"
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Checking type of volume mount ...
Determining server IP ...
Checking if OpenShift is already running ...
Checking for supported Docker version (=>1.22) ...
Checking if insecured registry is configured properly in Docker ...
Checking if required ports are available ...
Checking if OpenShift client is configured properly ...
Checking if image openshift/origin-control-plane:v3.11 is available ...
Starting OpenShift using openshift/origin-control-plane:v3.11 ...
I1124 16:56:00.270175    4738 flags.go:30] Running "create-kubelet-flags"
I1124 16:56:02.516120    4738 run_kubelet.go:49] Running "start-kubelet"
I1124 16:56:03.937573    4738 run_self_hosted.go:181] Waiting for the kube-apiserver to be ready ...
I1124 16:56:35.975177    4738 interface.go:26] Installing "kube-proxy" ...
I1124 16:56:35.975245    4738 interface.go:26] Installing "kube-dns" ...
I1124 16:56:35.975256    4738 interface.go:26] Installing "openshift-service-cert-signer-operator" ...
I1124 16:56:35.975266    4738 interface.go:26] Installing "openshift-apiserver" ...
I1124 16:56:35.975316    4738 apply_template.go:81] Installing "openshift-apiserver"
I1124 16:56:35.976054    4738 apply_template.go:81] Installing "kube-dns"
I1124 16:56:35.978254    4738 apply_template.go:81] Installing "openshift-service-cert-signer-operator"
I1124 16:56:35.980007    4738 apply_template.go:81] Installing "kube-proxy"
I1124 16:56:49.001502    4738 interface.go:41] Finished installing "kube-proxy" "kube-dns" "openshift-service-cert-signer-operator" "openshift-apiserver"
I1124 16:58:37.124600    4738 run_self_hosted.go:242] openshift-apiserver available
I1124 16:58:37.124687    4738 interface.go:26] Installing "openshift-controller-manager" ...
I1124 16:58:37.124718    4738 apply_template.go:81] Installing "openshift-controller-manager"
I1124 16:58:43.039318    4738 interface.go:41] Finished installing "openshift-controller-manager"
Adding default OAuthClient redirect URIs ...
Adding router ...
Adding persistent-volumes ...
Adding registry ...
Adding sample-templates ...
Adding web-console ...
Adding centos-imagestreams ...
I1124 16:58:43.074975    4738 interface.go:26] Installing "openshift-router" ...
I1124 16:58:43.075011    4738 interface.go:26] Installing "persistent-volumes" ...
I1124 16:58:43.075023    4738 interface.go:26] Installing "openshift-image-registry" ...
I1124 16:58:43.075033    4738 interface.go:26] Installing "sample-templates" ...
I1124 16:58:43.075046    4738 interface.go:26] Installing "openshift-web-console-operator" ...
I1124 16:58:43.075058    4738 interface.go:26] Installing "centos-imagestreams" ...
I1124 16:58:43.075203    4738 apply_list.go:67] Installing "centos-imagestreams"
I1124 16:58:43.075958    4738 interface.go:26] Installing "sample-templates/mariadb" ...
I1124 16:58:43.075996    4738 interface.go:26] Installing "sample-templates/mysql" ...
I1124 16:58:43.076009    4738 interface.go:26] Installing "sample-templates/postgresql" ...
I1124 16:58:43.076020    4738 interface.go:26] Installing "sample-templates/cakephp quickstart" ...
I1124 16:58:43.076033    4738 interface.go:26] Installing "sample-templates/nodejs quickstart" ...
I1124 16:58:43.076045    4738 interface.go:26] Installing "sample-templates/rails quickstart" ...
I1124 16:58:43.076058    4738 interface.go:26] Installing "sample-templates/mongodb" ...
I1124 16:58:43.076069    4738 interface.go:26] Installing "sample-templates/dancer quickstart" ...
I1124 16:58:43.076081    4738 interface.go:26] Installing "sample-templates/django quickstart" ...
I1124 16:58:43.076092    4738 interface.go:26] Installing "sample-templates/jenkins pipeline ephemeral" ...
I1124 16:58:43.076104    4738 interface.go:26] Installing "sample-templates/sample pipeline" ...
I1124 16:58:43.076356    4738 apply_list.go:67] Installing "sample-templates/sample pipeline"
I1124 16:58:43.076799    4738 apply_template.go:81] Installing "openshift-web-console-operator"
I1124 16:58:43.077375    4738 apply_list.go:67] Installing "sample-templates/rails quickstart"
I1124 16:58:43.077726    4738 apply_list.go:67] Installing "sample-templates/mariadb"
I1124 16:58:43.077969    4738 apply_list.go:67] Installing "sample-templates/mongodb"
I1124 16:58:43.078018    4738 apply_list.go:67] Installing "sample-templates/postgresql"
I1124 16:58:43.078237    4738 apply_list.go:67] Installing "sample-templates/cakephp quickstart"
I1124 16:58:43.078399    4738 apply_list.go:67] Installing "sample-templates/dancer quickstart"
I1124 16:58:43.078445    4738 apply_list.go:67] Installing "sample-templates/nodejs quickstart"
I1124 16:58:43.078802    4738 apply_list.go:67] Installing "sample-templates/django quickstart"
I1124 16:58:43.078898    4738 apply_list.go:67] Installing "sample-templates/jenkins pipeline ephemeral"
I1124 16:58:43.077975    4738 apply_list.go:67] Installing "sample-templates/mysql"
I1124 16:59:12.104125    4738 interface.go:41] Finished installing "sample-templates/mariadb" "sample-templates/mysql" "sample-templates/postgresql" "sample-templates/cakephp quickstart" "sample-templates/nodejs quickstart" "sample-templates/rails quickstart" "sample-templates/mongodb" "sample-templates/dancer quickstart" "sample-templates/django quickstart" "sample-templates/jenkins pipeline ephemeral" "sample-templates/sample pipeline"
I1124 16:59:41.735103    4738 interface.go:41] Finished installing "openshift-router" "persistent-volumes" "openshift-image-registry" "sample-templates" "openshift-web-console-operator" "centos-imagestreams"
Login to server ...
Creating initial project "myproject" ...
Server Information ...
OpenShift server started.

The server is accessible via web console at:
    https://192.168.0.4.nip.io:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin

error: You are not a member of project "default".
You have one project on this server: My Project (myproject)
To see projects on another server, pass '--server=<server>'.
Error from server (Forbidden): secrets "router-certs" is forbidden: User "developer" cannot get secrets in the namespace "default": no RBAC policy matched
Error from server (NotFound): secrets "router-certs" not found
Error from server (NotFound): error when replacing "STDIN": secrets "router-certs" not found
Error from server (NotFound): services "router" not found
Error from server (NotFound): services "router" not found
Error from server (NotFound): deploymentconfigs.apps.openshift.io "router" not found

*******************
Cluster certificate is located in /tmp/oc-certs/localcluster.crt. Install it to your mobile device.
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    default
    kube-dns
    kube-proxy
    kube-public
    kube-system
  * myproject
    openshift
    openshift-apiserver
    openshift-controller-manager
    openshift-core-operators
    openshift-infra
    openshift-node
    openshift-service-cert-signer
    openshift-web-console

Using project "myproject".
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'


PLAY [Install Mobile Services to an OpenShift cluster] *****************************************************************************************************************

TASK [Gathering Facts] *************************************************************************************************************************************************
ok: [localhost]

TASK [prerequisites : fail] ********************************************************************************************************************************************
skipping: [localhost]

TASK [namespace : Check namespace doesn't already exist] ***************************************************************************************************************
changed: [localhost]

TASK [namespace : Creating namespace mobile-developer-services] ********************************************************************************************************
changed: [localhost]

TASK [pull-secrets : Ensure registry username and password are set] ****************************************************************************************************
skipping: [localhost]

TASK [pull-secrets : Create imagestream pull secret in the openshft namespace] *****************************************************************************************
changed: [localhost]

TASK [pull-secrets : Create image pull secret in the mobile-developer-services namespace] ******************************************************************************
changed: [localhost]

TASK [pull-secrets : Link the secret for pulling images] ***************************************************************************************************************
changed: [localhost]

TASK [idm : Setup RH-SSO Imagestreams] *********************************************************************************************************************************
included: /root/mobile-services-installer/roles/idm/tasks/imagestream.yml for localhost

TASK [idm : Ensure redhat-sso73-openshift:1.0 tag is present for redhat sso in openshift namespace] ********************************************************************
ok: [localhost]

TASK [idm : Ensure redhat-sso73-openshift:1.0 tag has an imported image in openshift namespace] ************************************************************************
FAILED - RETRYING: Ensure redhat-sso73-openshift:1.0 tag has an imported image in openshift namespace (50 retries left).
ok: [localhost]

TASK [idm : Install IDM] ***********************************************************************************************************************************************
included: /root/mobile-services-installer/roles/idm/tasks/install.yml for localhost

TASK [include_role : namespace] ****************************************************************************************************************************************

TASK [namespace : Check namespace doesn't already exist] ***************************************************************************************************************
changed: [localhost]

TASK [namespace : Creating namespace mobile-developer-services] ********************************************************************************************************
skipping: [localhost]

TASK [idm : Create required objects] ***********************************************************************************************************************************
changed: [localhost] => (item=https://raw.githubusercontent.com/integr8ly/keycloak-operator/v1.9.2/deploy/rbac.yaml)
changed: [localhost] => (item=https://raw.githubusercontent.com/integr8ly/keycloak-operator/v1.9.2/deploy/crds/Keycloak_crd.yaml)
changed: [localhost] => (item=https://raw.githubusercontent.com/integr8ly/keycloak-operator/v1.9.2/deploy/crds/KeycloakRealm_crd.yaml)
changed: [localhost] => (item=https://raw.githubusercontent.com/integr8ly/keycloak-operator/v1.9.2/deploy/operator.yaml)

TASK [idm : Create IDM resource template] ******************************************************************************************************************************
changed: [localhost]

TASK [idm : Create IDM resource] ***************************************************************************************************************************************
changed: [localhost]

TASK [idm : Remove IDM template file] **********************************************************************************************************************************
changed: [localhost]

TASK [idm : Wait for IDM operator pod to be ready] *********************************************************************************************************************
FAILED - RETRYING: Wait for IDM operator pod to be ready (50 retries left).
FAILED - RETRYING: Wait for IDM operator pod to be ready (49 retries left).
changed: [localhost]

TASK [idm : Wait for IDM DB pod to be ready] ***************************************************************************************************************************
FAILED - RETRYING: Wait for IDM DB pod to be ready (50 retries left).
FAILED - RETRYING: Wait for IDM DB pod to be ready (49 retries left).
...
fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "oc get pods --namespace=mobile-developer-services --selector=deploymentConfig=sso-postgresql -o jsonpath='{.items[*].status.phase}' | grep Running", "delta": "0:00:00.432069", "end": "2019-11-24 17:21:52.952178", "msg": "non-zero return code", "rc": 1, "start": "2019-11-24 17:21:52.520109", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}

PLAY RECAP *************************************************************************************************************************************************************
localhost                  : ok=15   changed=10   unreachable=0    failed=1    skipped=4    rescued=0    ignored=0

Resource requests and limits should be configurable

I think this is not configurable right now? I can provide custom images, but I have no influence over resource requests and limits, and in some cases they are simply not set. Requests and limits should be configurable and should always be set; some clusters may even require it.

Problems on OKD 3.11

I tested it on OKD 3.11, exactly this version: openshift v3.11.0+1cd89d4-542, kubernetes v1.11.0+d4cacc0. AeroGear was installed using Ansible and the installation was successful; the Mobile tab showed up in the Service Catalog. But there are problems: Provisioned Services are always stuck in the Pending state and cannot be installed successfully. I can see the following events on the ServiceInstance:

Events:
  Type     Reason                            Age                From                                Message
  ----     ------                            ---                ----                                -------
  Normal   Provisioning                      16m                service-catalog-controller-manager  The instance is being provisioned asynchronously
  Normal   Provisioning                      11m (x8 over 16m)  service-catalog-controller-manager  The instance is being provisioned asynchronously (action started)
  Warning  ProvisionCallFailed               7m                 service-catalog-controller-manager  Provision call failed: Error occurred during provision. Please contact administrator if the issue persists.
  Warning  StartingInstanceOrphanMitigation  7m                 service-catalog-controller-manager  The instance provision call failed with an ambiguous error; attempting to deprovision the instance in order to mitigate an orphaned resource
  Normal   Deprovisioning                    7m                 service-catalog-controller-manager  The instance is being deprovisioned asynchronously

and these errors in the service catalog controller manager:

W0204 17:22:33.579558 1 reflector.go:341] github.com/kubernetes-incubator/service-catalog/pkg/client/informers_generated/externalversions/factory.go:118: watch of *v1beta1.ServicePlan ended with: The resourceVersion for the provided watch is too old.
I0204 17:22:33.658776 1 controller_instance.go:1731] ServiceInstance "testdejw/ag-metrics-metrics-apb-6sp5d" v389663: Provision call failed: Error occurred during provision. Please contact administrator if the issue persists.
I0204 17:22:33.658933 1 controller_instance.go:1731] ServiceInstance "testdejw/ag-metrics-metrics-apb-6sp5d" v389663: Provision call failed: Error occurred during provision. Please contact administrator if the issue persists.
I0204 17:22:33.659116 1 controller_instance.go:1770] ServiceInstance "testdejw/ag-metrics-metrics-apb-6sp5d" v389663: Setting lastTransitionTime, condition "OrphanMitigation" to 2021-02-04 17:22:33.658922627 +0000 UTC m=+778.333140405
I0204 17:22:33.659204 1 controller_instance.go:1731] ServiceInstance "testdejw/ag-metrics-metrics-apb-6sp5d" v389663: The instance provision call failed with an ambiguous error; attempting to deprovision the instance in order to mitigate an orphaned resource
I0204 17:22:33.659947 1 event.go:221] Event(v1.ObjectReference{Kind:"ServiceInstance", Namespace:"testdejw", Name:"ag-metrics-metrics-apb-6sp5d", UID:"605feb4d-670c-11eb-9444-0a580a800009", APIVersion:"servicecatalog.k8s.io/v1beta1", ResourceVersion:"389663", FieldPath:""}): type: 'Warning' reason: 'ProvisionCallFailed' Provision call failed: Error occurred during provision. Please contact administrator if the issue persists.
I0204 17:22:33.660497 1 event.go:221] Event(v1.ObjectReference{Kind:"ServiceInstance", Namespace:"testdejw", Name:"ag-metrics-metrics-apb-6sp5d", UID:"605feb4d-670c-11eb-9444-0a580a800009", APIVersion:"servicecatalog.k8s.io/v1beta1", ResourceVersion:"389663", FieldPath:""}): type: 'Warning' reason: 'StartingInstanceOrphanMitigation' The instance provision call failed with an ambiguous error; attempting to deprovision the instance in order to mitigate an orphaned resource

So it cannot actually be used.

Very old Docker images provided

I can see you are using very old Docker images in your last 2.0.0 release, with quite old components inside (like Keycloak). Do you have any plans to refresh them and test them against the latest versions of Kubernetes and OpenShift 3.11 and 4? Is this project still alive? Are you working on any next major release and newer Docker image versions?
