A very flexible and customizable Operator in Go developed using the Operator Framework to package, install, configure and manage a PostgreSQL database.
The following prerequisite is only required if you would like to install the operator standalone (without OLM):
- Golang v1.11+
The following prerequisite is only required if you would like to contribute to the project.
ℹ️ The following steps will allow you to use this project standalone (without OLM).
Run the following command to clone this project into your GOPATH:
$ git clone [email protected]:dev4devs-com/postgresql-operator.git $GOPATH/src/github.com/dev4devs-com/postgresql-operator
Install Minishift, then enable Operators on it by running the following commands:
# create a new profile to test the operator
$ minishift profile set postgresql-operator
# enable the admin-user add-on
$ minishift addon enable admin-user
# add insecure registry to download the images from docker
$ minishift config set insecure-registry 172.30.0.0/16
# start the instance
$ minishift start
Use the following command to install the Operator and the Database:
ℹ️ To install, you need to be logged in as a user with cluster privileges (e.g. the system:admin user, via oc login -u system:admin).
$ make install
To verify that the installation completed successfully, check the Database Status field of the Postgresql CR in the cluster. When everything was installed successfully, the expected value is OK.
$ oc describe Postgresql
...
Status:
Database Status: OK
...
ℹ️ The following is an example of the expected result after running make install.
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/postgresql-6cf5f48c78-dp2cw 1/1 Running 0 1m
pod/postgresql-operator-7dd97d8885-v7rnf 1/1 Running 0 2m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/postgresql ClusterIP 172.30.227.3 <none> 5432/TCP 1m
service/postgresql-operator-metrics ClusterIP 172.30.223.235 <none> 8383/TCP,8686/TCP 1m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/postgresql 1 1 1 1 1m
deployment.apps/postgresql-operator 1 1 1 1 2m
NAME DESIRED CURRENT READY AGE
replicaset.apps/postgresql-6cf5f48c78 1 1 1 1m
replicaset.apps/postgresql-operator-7dd97d8885 1 1 1 2m
The specs in the Postgresql CR let you customize the setup of this operator. Note that the spec configMapName lets you inform the name of a ConfigMap holding the keys and values that PostgreSQL should use for its required environment variables. If only configMapName is informed, the operator will look for values stored under the same keys required by the image for its database version (databaseNameParam, databasePasswordParam, databaseUserParam). However, you can also customize the keys by using the optional specs configMapDatabaseNameParam, configMapDatabasePasswordParam and configMapDatabaseUserParam. In this way, the operator is able to look up values stored in a ConfigMap under keys that differ from the ones used to create the environment variables in the database deployment.
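As an illustration of these specs, the following sketch shows a ConfigMap with custom keys and the matching CR settings. This is a hypothetical example, not taken from the project's samples: the ConfigMap name my-db-config, its keys, and its values are made up, and the Postgresql CR fragment omits apiVersion/kind/metadata since only the spec fields named above are being illustrated.

```yaml
# Hypothetical ConfigMap holding the database values under custom keys
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-db-config
data:
  my-db-name: "postgres"
  my-db-user: "postgres"
  my-db-password: "postgres"
---
# Fragment of a Postgresql CR spec (apiVersion/kind/metadata omitted)
# mapping the custom keys defined in the ConfigMap above
spec:
  configMapName: "my-db-config"
  configMapDatabaseNameParam: "my-db-name"
  configMapDatabaseUserParam: "my-db-user"
  configMapDatabasePasswordParam: "my-db-password"
```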
By using the command make install
the default namespace postgresql
, defined in the Makefile will be created and the operator will be installed in this namespace. You are able to install the operator in another namespace if you wish, however, you need to set up its roles (RBAC) in order to apply them on the namespace where the operator will be installed. The namespace name needs to be changed in the Cluster Role Binding file. Note, that you also need to change the namespace in the Makefile in order to use the command make install
for another namespace.
# Replace this with the namespace where the operator will be deployed.
namespace: postgresql
The backup service is implemented using integr8ly/backup-container-image. It backs up the database so it can be restored in case of failures. Follow the steps below to enable it.
- Set up AWS in order to store the backup outside of the cluster. You need to add your AWS details to the Backup CR as follows, or add the name of a secret in the cluster which already has this data.

# ---------------------------------
# Stored Host - AWS
# ---------------------------------
awsS3BucketName: "example-awsS3BucketName"
awsAccessKeyId: "example-awsAccessKeyId"
awsSecretAccessKey: "example-awsSecretAccessKey"
❗ Also, you can add the name of a secret which is already created in the cluster.
- Run the command make backup/install in the same namespace where the database is installed, in order to apply the CronJob which performs this process.
ℹ️ To install, you need to be logged in as a user with cluster privileges (e.g. the system:admin user, via oc login -u system:admin).
To verify that the backup has been successfully created, run the following command in the namespace where the operator is installed.
$ oc get cronjob.batch/backup
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
backup 0 * * * * False 0 13s 12m
To check the executed jobs, run the command oc get jobs in the namespace where the operator is installed, as in the following example.
$ oc get jobs
NAME DESIRED SUCCESSFUL AGE
backup-1561588320 1 0 6m
backup-1561588380 1 0 5m
backup-1561588440 1 0 4m
backup-1561588500 1 0 3m
ℹ️ In the above example the schedule was set to run the job every minute (*/1 * * * *).
For logs and troubleshooting, run the command oc logs $podName -f in the namespace where the operator is installed, as in the following example.
$ oc logs job.batch/backup-1561589040 -f
dumping postgresql
dumping postgres
==> Component data dump completed
/tmp/intly/archives/postgresql.postgresql-22_46_06.pg_dump.gz
WARNING: postgresql.postgresql-22_46_06.pg_dump.gz: Owner username not known. Storing UID=1001 instead.
upload: '/tmp/intly/archives/postgresql.postgresql-22_46_06.pg_dump.gz' -> 's3://camilabkp/backups/postgresql/postgres/2019/06/26/postgresql.postgresql-22_46_06.pg_dump.gz' [1 of 1]
1213 of 1213 100% in 1s 955.54 B/s done
ERROR: S3 error: 403 (RequestTimeTooSkewed): The difference between the request time and the current time is too large.
The following steps are required to perform a restore based on the backup service.
- Install the PostgreSQL Operator by following the steps in Installing.
- Restore the database with the dump which was stored in the AWS S3 bucket.

ℹ️ To restore, run gunzip -c filename.gz | psql dbname
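The restore pipe can be tried locally before pointing it at a real dump. The following sketch simulates a compressed dump with gzip (the file name is made up) and shows where psql would consume it:

```shell
# Simulate a compressed dump locally (stand-in for the file fetched from S3)
printf 'SELECT 1;\n' | gzip > /tmp/example.pg_dump.gz

# In a real restore you would pipe the decompressed dump into psql:
#   gunzip -c /tmp/example.pg_dump.gz | psql <dbname>
gunzip -c /tmp/example.pg_dump.gz
```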
This operator is cluster-scoped. For further information, see the Operator Scope section in the Operator Framework documentation. Also, check its roles in the Deploy directory.
ℹ️ The operator and database will be installed in the namespace postgresql, which will be created by this project.
| CustomResourceDefinition | Description |
| --- | --- |
| Postgresql | Packages, manages, installs and configures the Database on the cluster. |
| Backup | Packages, manages, installs and configures the CronJob to do the backup using the image backup-container-image. |
| Resource | Description |
| --- | --- |
| Deployment | Defines the Deployment resource of the Database (e.g. container and resource definitions). |
| PersistentVolumeClaim | Defines the PersistentVolumeClaim resource used by its Database. |
| Service | Defines the Service resource of the Database. |
| Resource | Description |
| --- | --- |
| CronJob | Defines the CronJob resource in order to do the Backup. |
| Secret | Defines the database and AWS secret resources created. |
| Status | Description |
| --- | --- |
| databaseStatus | The expected value is OK, which means that all required objects were created. |
| deploymentStatus | Deployment Status from the k8s API (appsv1.DeploymentStatus). |
| serviceStatus | Service Status from the k8s API (v1core.ServiceStatus). |
| persistentVolumeClaimStatus | PersistentVolumeClaim Status from the k8s API (v1core.PersistentVolumeClaimStatus). |
| Status | Description |
| --- | --- |
| backupStatus | Should show OK when everything is created successfully. |
| cronJobName | Name of the CronJob resource created by it. |
| cronJobStatus | CronJob Status from the k8s API (v1beta1.CronJobStatus). |
| dbSecretName | Name of the database secret resource created in order to allow the integr8ly/backup-container-image to connect to the database. |
| dbSecretData | Data used in the secret to connect to the database. |
| awsSecretName | Name of the AWS S3 bucket secret resource used in order to allow the integr8ly/backup-container-image to connect to AWS and send the backup. |
| awsSecretData | Data used in the secret to send the backup files to the AWS S3 bucket. |
| awsSecretDataNamespace | Namespace where the backup image will look for the AWS secret used. |
| encryptKeySecretName | Name of the EncryptKey secret used. |
| encryptKeySecretNamespace | Namespace where the backup image will look for the EncryptKey secret used. |
| encryptKeySecretData | Data used in the EncryptKey secret. |
| hasEncryptionKey | Expected to be true when it was configured to use an EncryptKey secret. |
| isDatabasePodFound | The expected value is true, which shows that the database pod was found. |
| isDatabaseServiceFound | The expected value is true, which shows that the database service was found. |
Run the following command to set up this project locally.
$ make setup
ℹ️ This project uses Go modules to manage dependencies.
The following command installs the operator in the cluster and runs your local changes without the need to publish a dev tag. In this way, you can verify your code in the development environment.
$ make code/run/local
❗ The local changes are applied when the command operator-sdk up local --namespace=postgresql is executed. It is not a hot deploy; to pick up the latest changes you need to re-run the command.
The following commands let you connect to the database. You can run them from the database pod's terminal in the OpenShift UI.
# Log in to Postgres
psql -U postgres
# To connect to the database
\c <database-name>
# To list the tables
\dt
Follow the steps below to debug the project in your IDE.
ℹ️ The code needs to be compiled/built first.
$ make setup/debug
$ cd cmd/manager/
$ dlv debug --headless --listen=:2345 --api-version=2
Then, debug the project from the IDE by using its default Go Remote configuration.
$ make setup/debug
$ dlv --listen=:2345 --headless=true --api-version=2 exec ./build/_output/bin/postgresql-operator-local --
Debug the project using the following Visual Studio Code launch configuration.
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "test",
"type": "go",
"request": "launch",
"mode": "remote",
"remotePath": "${workspaceFolder}/cmd/manager/main.go",
"port": 2345,
"host": "127.0.0.1",
"program": "${workspaceFolder}",
"env": {},
"args": []
}
]
}
| Command | Description |
| --- | --- |
| `make install` | Creates the … |
| `make uninstall` | Uninstalls the operator and DB. Deletes the … |
| `make backup/install` | Installs the backup service in the operator's namespace. |
| `make backup/uninstall` | Uninstalls the backup service from the operator's namespace. |
| `make code/run/local` | Runs the operator locally for development purposes. |
| `make setup/debug` | Sets up the environment for debugging purposes. |
| | Examines source code and reports suspicious constructs using vet. |
| | Formats code using gofmt. |
| | Automatically generates/updates files by using the operator-sdk, based on the CR status and spec definitions. |
| | Runs the dev commands to check, fix and generate/update the files. |
| | Used by CI to build the operator image from … |
| | Used by CI to push the … |
| | Used by CI to build the operator image from a tagged commit and add … |
| | Used by CI to push the … |
| | Runs the test suite. |
| | Runs the coverage check. |
| | Compiles the image for tests. |
| | Runs e2e tests locally (requires a cluster installed locally). |
ℹ️ The Makefile defines the tasks you should use to work with the project.
Images are automatically built and pushed to our image repository in the following cases:
- For every change merged to master, a new image with the master tag is published.
- For every merged change that has a git tag, new images with the <operator-version> and latest tags are published.
If the image does not get built and pushed automatically the job may be re-run manually via the CI dashboard.
Follow these steps:
- Create a new version tag following semver, for example 0.1.0.
- Bump the version in the version.go file.
- Update the CHANGELOG.MD with the new release.
- Create a git tag with the version value, for example:
$ git tag -a 0.1.0 -m "version 0.1.0"
- Push the new tag to the upstream repository; this will trigger an automated release by the CI, for example:
$ git push upstream 0.1.0
ℹ️ The image with the tag will be created and pushed to the postgresql-operator image hosting repository by the CI.
❗ Do not use letters in the tag, such as v. It will not work.
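A quick local sanity check of a tag against the plain X.Y.Z shape expected above can be sketched as follows; the grep pattern is an illustration, not part of the project's tooling:

```shell
TAG="0.1.0"
# Accept only plain X.Y.Z tags; a "v" prefix (e.g. v0.1.0) is rejected
if printf '%s' "$TAG" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "ok: $TAG"
else
  echo "invalid tag: $TAG" >&2
fi
```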
This operator was developed using the Kubernetes APIs in order to be compatible with OpenShift and Kubernetes.
All contributions are hugely appreciated. Please see our Contribution Guide for guidelines on how to open issues and pull requests. Please check out our Code of Conduct too.