zalando / ghe-backup

Github Enterprise backup at ZalandoTech (Kubernetes, AWS, Docker)

License: Apache License 2.0

Deprecation Notice

This repository is deprecated. No further engineering work or support will happen. If you are interested in further development of the code please feel free to fork it.

Github Enterprise Backup


Zalando Tech's Github Enterprise backup approach.

Overview

Github Enterprise at Zalando Tech is a high availability setup running master and replica instances on AWS. The AWS account that runs the high availability setup also runs one backup host. Zalando Tech's Github Enterprise backup can also run as a POD inside a Kubernetes cluster.

We believe this backup approach provides reliable backup data even in case one AWS account or Kubernetes cluster is compromised.

(overview diagram)

Essentially, Zalando Tech's Github Enterprise backup wraps github's backup-utils in a Docker container.

If running on Kubernetes, a stateful set including volumes and volume claims stores the actual backup data; see the sample stateful set below. Zalando's Kubernetes setup is based on AWS, so volume claims are backed by EBS.

If running on AWS, an EBS volume stores the actual backup data. This way one can access the data even if the respective backup host is down.

Local docker development

create a ghe-backup docker image

docker build --rm -t [repo name]:[tag] .
e.g.
docker build --rm -t pierone.stups.zalan.do/machinery/ghe-backup:0.0.7 .

run the image

docker run -d --name [container name] [repo name]:[tag]
e.g.
docker run -d --name ghe-backup pierone.stups.zalan.do/machinery/ghe-backup:0.0.7

or with connected bash:
docker run -it --entrypoint /bin/bash --name [container name] [repo name]:[tag]
e.g.
docker run -it --entrypoint /bin/bash --name ghe-backup pierone.stups.zalan.do/machinery/ghe-backup:0.0.7

attach to the running local container

docker attach --sig-proxy=false [container name]

detach from the running local container (does not stop the container)

CTRL+C

run bash in running docker container

sudo docker exec -i -t [ContainerID] bash

exit bash

exit

IAM policy settings

Zalando Tech's Github Enterprise backup hosts contain private ssh keys that must match the public ssh keys registered on the Github Enterprise main instance. Private ssh keys should not be propagated unencrypted with deployments. AWS KMS can encrypt any kind of data, so this service is used to encrypt the private ssh key for both variants of Zalando Tech's Github Enterprise backup, running on AWS and on Kubernetes. KMS actions are managed by policies to make sure only the configured tasks can be performed.
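For illustration only (the key alias, region defaults, and file names below are placeholders, not the repository's actual configuration), encrypting and later decrypting a private ssh key with the AWS CLI could look like this:

```shell
# illustrative only -- key id and file names are placeholders
aws kms encrypt \
    --key-id alias/ghe-backup \
    --plaintext fileb://id_rsa \
    --query CiphertextBlob --output text | base64 --decode > id_rsa.enc

aws kms decrypt \
    --ciphertext-blob fileb://id_rsa.enc \
    --query Plaintext --output text | base64 --decode > id_rsa
```

The CiphertextBlob is returned base64-encoded, hence the decode step before storing the binary ciphertext.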

A KMS policy similar to the one shown below is needed to:

  • allow KMS decryption of the encrypted ssh key
  • access the s3 bucket
  • use the EBS volume
...
            "Resource": [  
                "arn:aws:s3:::[yourMintBucket]/[repo name]/*"  
            ]  
...
            "Effect": "Allow",  
            "Action": [  
                "ec2:DescribeVolumes",  
                "ec2:AttachVolume",  
                "ec2:DetachVolume"  
            ],  
            "Resource": "*"  
...

You can find a full policy sample in the gist "ghe-backup-kms-policy-sample".

Make sure you have an appropriate role that allows managing your policy.

Configure an EBS volume for backup data

Backup data is saved on an EBS volume so that backups persist even if the backup instance goes down. The creation of such an EBS volume is described in the creating-ebs-volume guide.
After creating an EBS volume, make sure you can use it as described in ebs-using-volumes.

Please note: you need to format the EBS volume before you use it, otherwise you may experience issues like:
You must specify the file type.
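For illustration (device name and mount point are examples and depend on your setup, assuming a Linux host with an attached but unformatted volume), the formatting step could look like:

```shell
# illustrative only: format a fresh EBS volume and mount it
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data
sudo mount /dev/xvdf /data
```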

Tests

There are two kinds of tests available:

  • python nose tests
  • bash tests

Both can be run with ./run-tests.sh.
Please note:

  • tests leveraging KMS require AWS logins, e.g. via the AWS CLI. That's why they do not run on CI environments out of the box. The run-tests.sh script uses zaws (a Zalando-internal tool that is the successor of the former open source tool mai)
  • make sure you run bashtest/cleanup-tests.sh in order to clean up afterwards.

Nosetest

decrypt test

  • precondition: you are logged in with AWS e.g. using mai
    mai login [awsaccount-role]
  • test run:
    nosetests -w python -v --nocapture test_extract_decrypt_kms.py

delete stuck in-progress files

nosetests -w python -v --nocapture test_delete_instuck_progress.py

run all tests with minimal output

nosetests -w python

Bash tests

Please change to the bashtest directory (cd bashtest) and run the tests:
./test-convert-kms-private-ssh-key.sh

Running in an additional AWS account

Please adapt the crontab definitions when running in another AWS account, e.g. to the values in cron-ghe-backup-alternative. This lowers the load on the Github Enterprise master with respect to backup attempts.
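A sketch of what staggered crontab entries could look like (the minute and paths mirror the log output elsewhere in this README; the even/odd hour split is an assumption, not the repository's actual cron files):

```
# account A: even hours
13 0-22/2 * * * /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
# account B: odd hours
13 1-23/2 * * * /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
```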

Restore

Restoring backups is based on github's [using the backup and restore commands](https://github.com/github/backup-utils#using-the-backup-and-restore-commands). The actual ghe-restore command is issued from the backup host. Please note: the restore can run for several hours. [Nohup](https://en.wikipedia.org/wiki/Nohup) is recommended to keep the restore process running even if the shell connection is lost.

Sample steps include:

# put the GHE instance to restore into maintenance mode
# ssh into your ec2 instance and exec into your container
# docker exec -it [container label or ID] bash/sh
# or
# exec into your pod
# kubectl exec -it [your pod e.g. statefulset-ghe-backup-0] bash/sh
nohup /backup/backup-utils/bin/ghe-restore -f [IP address of the ghe master to restore] &
# monitor the backup progress
tail -f nohup.out

Contribution

Please refer to CONTRIBUTING.md

Zalando specifics

The Taupage AMI is mandatory for backup hosts of Zalando Tech's Github Enterprise for compliance reasons. As the Taupage AMI is part of Stups, other Stups technologies like Senza are also used for local development.

Uploading Docker images to pierone (a Zalando docker registry):

docker push [repo name]:[tag]
e.g.
docker push pierone.stups.zalan.do/machinery/ghe-backup:cdp-master-38

Senza yaml file

Stups requires a senza yaml file to deploy an artefact to AWS. Such a yaml file is essentially translated into AWS CloudFormation templates that cause a stack to be deployed.

A sample senza yaml file would be:

# basic information for generating and executing this definition   
SenzaInfo:  
  StackName: hello-world  
  Parameters:  
    - ImageVersion:
        Description: "Docker image version of hello-world."
# a list of senza components to apply to the definition
SenzaComponents:
  # this basic configuration is required for the other components
  - Configuration:
      Type: Senza::StupsAutoConfiguration # auto-detect network setup
      AvailabilityZones: [myAZ] # use EBS volume's AZ
  # will create a launch configuration and auto scaling group with scaling triggers
  - AppServer:
      Type: Senza::TaupageAutoScalingGroup
      InstanceType: t2.micro
      SecurityGroups:
        - app-{{Arguments.ApplicationId}}
      IamRoles:
        - app-{{Arguments.ApplicationId}}
      AssociatePublicIpAddress: false # change for standalone deployment in default VPC
      TaupageConfig:
        application_version: "{{Arguments.ImageVersion}}"
        runtime: Docker
        source: "stups/hello-world:{{Arguments.ImageVersion}}"
        mint_bucket: "{{Arguments.MintBucket}}"
        kms_private_ssh_key: "aws:kms:myAWSregion:123456789:key/myrandomstringwithnumbers123456567890"
        volumes:
          ebs:
            /dev/sdf: my-volume
        mounts:
          /data:
            partition: /dev/xvdf  

If you copy/paste the template above, make sure to replace the dummy values with your own details.

EBS volumes with Senza

Please follow the instructions in senza's storage guide to create an EBS volume the Stups way.

Kubernetes stateful set, volume, volume claim

The statefulset resource definition is the main kubernetes configuration file:

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
      name: statefulset-ghe-backup
spec:
  serviceName: deploy-ghe-backup
  replicas: 1
  template:
    metadata:
      labels:
        app: ghe-backup
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: container-ghe-backup
        image: pierone.stups.zalan.do/machinery/ghe-backup:cdp-master-38
        resources:
          requests:
            cpu: 100m
            memory: 1Gi
          limits:
            cpu: 400m
            memory: 4Gi
        volumeMounts:
        - name: data-ghe-backup
          mountPath: /data
        - name: ghe-backup-secret
          mountPath: /meta/ghe-backup-secret
          readOnly: true
        - name: podinfo
          mountPath: /details
          readOnly: false
      volumes:
      - name: ghe-backup-secret
        secret:
          secretName: ghe-backup-secret
      - name: podinfo
        downwardAPI:
          items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
  volumeClaimTemplates:
  - metadata:
      name: data-ghe-backup
      annotations:
        volume.beta.kubernetes.io/storage-class: standard
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1000Gi


License

Copyright © 2015 Zalando SE

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

ghe-backup's People

Contributors

hjacobs, jmcs, kgalli, lars-zalando, lotharschulz, m4ntr4, mkempson, rashamalek, scherniavsky, tuxlife


ghe-backup's Issues

deploy to kubernetes

  • adapt internal delivery.yaml to pick latest docker images created with this repo's delivery.yaml

delivery.yaml in k8s-master-like and master branch

@lotharschulz
Here is my understanding of the "k8s-master-like" and "master" branches:
"k8s-master-like" branch: its delivery.yaml should build and push a k8s-compatible image to pierone,

"master" branch: its delivery.yaml should build and push a Taupage-compatible image to pierone,

are these definitions mixed somehow?

Currently
k8s-master-like/delivery.yaml#L18 and k8s-master-like/delivery.yaml#L27 seem to be creating Taupage compatible ones, and
master/delivery.yaml#L22
is creating a k8s compatible one.

run on Zalando Kubernetes setup

run ghe-backup on Zalando Kubernetes cluster:

  • create docker images for Kubernetes cluster: #66
  • deploy to Kubernetes cluster: #74
  • gather experience running it in production

replace-convert-properties.sh is not added in dockerfile

Hi @lotharschulz
https://github.com/zalando/ghe-backup/blob/master/replace-convert-properties.sh is not added to
https://github.com/zalando/ghe-backup/blob/master/Dockerfile

# docker ps -a
CONTAINER ID        IMAGE                                                       COMMAND                   CREATED             STATUS                        PORTS               NAMES
acc4c1a12aa3        pierone.stups.zalan.do/machinery/ghe-backup:cdp-master-16   "/bin/sh -c \"/backup/"   22 minutes ago      Exited (127) 20 minutes ago                       taupageapp

# cat /var/log/application.log
May 30 09:55:21 ip-172-31-131-253 docker/acc4c1a12aa3[833]: /backup/final-docker-cmd.sh: line 13: ./replace-convert-properties.sh: No such file or directory
May 30 09:56:12 ip-172-31-131-253 docker/acc4c1a12aa3[833]:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
May 30 09:56:12 ip-172-31-131-253 docker/acc4c1a12aa3[833]:                                  Dload  Upload   Total   Spent    Left  Speed
May 30 09:56:12 ip-172-31-131-253 docker/acc4c1a12aa3[833]: #015  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0#015100   469  100   469    0     0   114k      0 --:--:-- --:--:-- --:--:--  114k

Thanks,
Rasha

docker image build broken

Removing intermediate container 3c89e8b17b65
Step 16 : "/KMS/CONVERT-KMS-PRIVATE-SSH-KEY.SH", 
Unknown instruction: "/KMS/CONVERT-KMS-PRIVATE-SSH-KEY.SH",

adapt backups

  • let's do backups on Sundays as there is activity there from time to time
  • we see again behavior like in #27 - let's reduce the backup attempts until we have a bigger instance (again)

reduce the number of backup attempts

there are too many (zombie) backup processes running at the same time:

  • bus instance:
root     12081  0.0  0.0  45796  1000 ?        S    Aug24   0:00      |       |   \_ CRON
root     12082  0.0  0.0   4500   620 ?        Ss   Aug24   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     12083  0.0  0.0   9656   852 ?        S    Aug24   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     12102  0.0  0.0  11276   560 ?        S    Aug24   0:00      |       |   |           \_ grep ghe-backup
root     12143  0.0  0.0  45796  1000 ?        S    Aug24   0:00      |       |   \_ CRON
root     12144  0.0  0.0   4500   624 ?        Ss   Aug24   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     12145  0.0  0.0   9656   852 ?        S    Aug24   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     12164  0.0  0.0  11276   560 ?        S    Aug24   0:00      |       |   |           \_ grep ghe-backup
root     12216  0.0  0.0  45796  1000 ?        S    Aug24   0:00      |       |   \_ CRON
root     12217  0.0  0.0   4500   624 ?        Ss   Aug24   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     12218  0.0  0.0   9656   848 ?        S    Aug24   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     12237  0.0  0.0  11276   564 ?        S    Aug24   0:00      |       |   |           \_ grep ghe-backup
root     13226  0.0  0.1  45796  1364 ?        S    07:26   0:00      |       |   \_ CRON
root     13227  0.0  0.0   4500   664 ?        Ss   07:26   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     13228  0.0  0.1   9656  1512 ?        S    07:26   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     13247  0.0  0.0  11276   720 ?        S    07:26   0:00      |       |   |           \_ grep ghe-backup
root     13288  0.0  0.1  45796  1364 ?        S    08:26   0:00      |       |   \_ CRON
root     13289  0.0  0.0   4500   660 ?        Ss   08:26   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     13290  0.0  0.1   9656  1520 ?        S    08:26   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     13309  0.0  0.0  11276   724 ?        S    08:26   0:00      |       |   |           \_ grep ghe-backup
root     13350  0.0  0.1  45796  1364 ?        S    09:26   0:00      |       |   \_ CRON
root     13351  0.0  0.0   4500   664 ?        Ss   09:26   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     13352  0.0  0.1   9656  1516 ?        S    09:26   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     13371  0.0  0.0  11276   728 ?        S    09:26   0:00      |       |   |           \_ grep ghe-backup
root     13412  0.0  0.1  45796  1364 ?        S    10:26   0:00      |       |   \_ CRON
root     13413  0.0  0.0   4500   664 ?        Ss   10:26   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     13414  0.0  0.1   9656  1516 ?        S    10:26   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     13433  0.0  0.0  11276   728 ?        S    10:26   0:00      |       |   |           \_ grep ghe-backup
root     13485  0.0  0.1  45796  1364 ?        S    11:26   0:00      |       |   \_ CRON
root     13486  0.0  0.0   4500   664 ?        Ss   11:26   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     13487  0.0  0.1   9656  1512 ?        S    11:26   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     13506  0.0  0.0  11276   724 ?        S    11:26   0:00      |       |   |           \_ grep ghe-backup
root     13547  0.0  0.1  45796  1364 ?        S    12:26   0:00      |       |   \_ CRON
root     13548  0.0  0.0   4500   664 ?        Ss   12:26   0:00      |       |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     13549  0.0  0.1   9656  1516 ?        S    12:26   0:00      |       |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     13568  0.0  0.0  11276   724 ?        S    12:26   0:00      |       |   |           \_ grep ghe-backup
root     13609  0.0  0.1  45796  1364 ?        S    13:26   0:00      |       |   \_ CRON
root     13610  0.0  0.0   4500   660 ?        Ss   13:26   0:00      |       |       \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     13611  0.0  0.1   9656  1516 ?        S    13:26   0:00      |       |           \_ bash /backup/backup-utils/bin/ghe-backup -v
root     13630  0.0  0.0  11276   728 ?        S    13:26   0:00      |       |               \_ grep ghe-backup
  • automata instance:
root     11015  0.0  0.0  11276   124 ?        S    Aug22   0:00              |   |           \_ grep ghe-backup
root     11132  0.0  0.0  45796   380 ?        S    Aug22   0:00              |   \_ CRON
root     11133  0.0  0.0   4500    96 ?        Ss   Aug22   0:00              |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     11134  0.0  0.0   9656   296 ?        S    Aug22   0:00              |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     11153  0.0  0.0  11276   124 ?        S    Aug22   0:00              |   |           \_ grep ghe-backup
root     11255  0.0  0.0  45796   380 ?        S    Aug22   0:00              |   \_ CRON
root     11256  0.0  0.0   4500    92 ?        Ss   Aug22   0:00              |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     11257  0.0  0.0   9656   292 ?        S    Aug22   0:00              |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     11276  0.0  0.0  11276   124 ?        S    Aug22   0:00              |   |           \_ grep ghe-backup
root     11379  0.0  0.0  45796   380 ?        S    Aug22   0:00              |   \_ CRON
root     11380  0.0  0.0   4500   100 ?        Ss   Aug22   0:00              |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     11381  0.0  0.0   9656   292 ?        S    Aug22   0:00              |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     11400  0.0  0.0  11276   128 ?        S    Aug22   0:00              |   |           \_ grep ghe-backup
root     11505  0.0  0.0  45796   380 ?        S    Aug22   0:00              |   \_ CRON
root     11506  0.0  0.0   4500    96 ?        Ss   Aug22   0:00              |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     11507  0.0  0.0   9656   300 ?        S    Aug22   0:00              |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     11526  0.0  0.0  11276   128 ?        S    Aug22   0:00              |   |           \_ grep ghe-backup
root     11641  0.0  0.0  45796   380 ?        S    Aug22   0:00              |   \_ CRON
root     11642  0.0  0.0   4500    96 ?        Ss   Aug22   0:00              |   |   \_ /bin/sh -c /backup/backup-utils/bin/ghe-backup -v 1>> /var/log/ghe-prod-backup.log 2>&1
root     11643  0.0  0.0   9656   296 ?        S    Aug22   0:00              |   |       \_ bash /backup/backup-utils/bin/ghe-backup -v
root     11662  0.0  0.0  11276   128 ?        S    Aug22   0:00              |   |           \_ grep ghe-backup

Backup process hangs after a couple of tries due to docker fifo issue

When the backup process starts, it creates a file named in-progress (with the intention of preventing other backup processes from starting). But when the process becomes unresponsive (stuck for some reason), the backup does not finish, the process remains in the process list, and the in-progress file stays there until the next day, when /delete-instuck-backups/delete_instuck_progress.py deletes it (only after one day).

The issue is that this does not take care of the running (stuck) process.

On the other hand, /start_backup.sh only checks for the PID's existence in the process list:

pidof -o $$ -x "$0" >/dev/null 2>&1 && exit 1

In this case no other backup will be executed until someone manually kills the old stuck process or restarts the Docker machine.
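One possible remedy (a sketch, not the repository's code) is to replace the pidof check with flock(1): a kernel-held lock is released automatically when the holding process dies, stuck or not. The function name and lock file path below are made up:

```shell
# run_backup_once: sketch of a flock-based guard. The lock is released
# automatically when the subshell (and thus the backup) exits or dies.
run_backup_once() {
    lock_file="${1:-/tmp/ghe-backup.lock}"   # hypothetical path
    (
        # fail fast if another backup already holds the lock
        flock -n 9 || exit 1
        echo "running backup"   # stand-in for bin/ghe-backup -v
    ) 9>"$lock_file"
}
```

Because the lock follows the process lifetime, a crashed or killed backup never blocks the next run, unlike a stale pid or in-progress file.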

full backup disk

root@....:/data/ghe-production-data# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvdf       985G  985G     0 100% /data

let's reduce the number of backups

prune in stuck backups

An in-progress file is left in the backup data folder in case a backup is aborted.
The next backup attempt fails with

Error: backup process 1468 of [myhost] already in progress in snapshot 20160219T112301. Aborting.

For now, prune the in-progress file on the EBS volume, if it exists in the backup data, only on container startup, as this file otherwise indicates a backup is currently running.
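The pruning idea could be sketched as follows (hypothetical helper name and one-day threshold; the repository's actual logic lives in delete_instuck_progress.py):

```shell
# prune_in_progress: delete an in-progress marker older than one day
# from the given backup data directory (sketch; directory passed as $1)
prune_in_progress() {
    # -mtime +0 matches files whose age is at least one full day
    find "$1" -maxdepth 1 -name in-progress -mtime +0 -delete
}
```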

different docker files/images/containers per AWS account

Current situation: backups in both AWS accounts are triggered via cron at the same time (minute 13 of the hour).
Goals:

  • backups should be triggered in one account on odd hours in the other account in even hours
    • approach: different dockerfiles, docker images, docker container per AWS account
  • trigger the backup process only if no other backup process is in the process list

correct permissions for /kms/convert-kms-private-ssh-key.sh

Permission issues on /kms/convert-kms-private-ssh-key.sh
May 30 13:02:46 ip-172-31-142-237 docker/d13c786d96fd[825]: % Total % Received % Xferd Average Speed Time Time Time Current
May 30 13:02:46 ip-172-31-142-237 docker/d13c786d96fd[825]: Dload Upload Total Spent Left Speed
May 30 13:02:46 ip-172-31-142-237 docker/d13c786d96fd[825]: #15 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0#015100 469 100 469 0 0 114k 0 --:--:-- --:--:-- --:--:-- 114k
May 30 13:02:46 ip-172-31-142-237 docker/d13c786d96fd[825]: /backup/final-docker-cmd.sh: line 14: /kms/convert-kms-private-ssh-key.sh: Permission denied

fixing ssh private key

id_rsa is written to a file in the wrong path because of wrong expansion of "~".

# find /backup -name id_rsa
/backup/~/.ssh/id_rsa

backup clean up script

a script should implement a backup clean-up strategy, e.g.:

  • delete all backups older than 4 weeks, but keep
  • one backup per calendar month for the last 12 months
  • one backup per calendar year
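The monthly part of such a strategy could be sketched as a small shell filter (hypothetical helper, assuming snapshot directory names like 20160219T112301): it keeps the first snapshot of each calendar month.

```shell
# keep_monthly: read snapshot names (YYYYMMDDTHHMMSS) on stdin and print
# only the first snapshot of each calendar month (sketch, not repo code)
keep_monthly() {
    sort | awk '{
        month = substr($0, 1, 6)          # YYYYMM prefix
        if (month != last) { print; last = month }
    }'
}
```

Everything the filter does not print would be a candidate for deletion, subject to the 4-week and yearly rules above.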

current variable expansion throws exception in some cases

the current variable expansion produces an unexpected exception in some edge cases:

# /kms/convert-kms-ghe-mcpassword.sh
/kms/convert-kms-ghe-mcpassword.sh: line 18: $2: unbound variable

more details about parameter expansion in shell scripts:
https://www.quora.com/What-is-the-best-way-to-check-if-an-argument-exists-in-Bash
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_05_02

similar issues:
https://groups.google.com/forum/#!topic/comp.unix.shell/qklDGBv0Sdk
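A sketch of the usual fix (the function name below is made up): use default-value expansion so optional positional parameters never trip `set -u`.

```shell
set -u   # abort on unbound variables, as the failing script apparently does

# describe_arg: sketch showing ${2:-} expanding to empty instead of
# aborting when the second argument is missing
describe_arg() {
    second="${2:-}"
    if [ -n "$second" ]; then
        echo "second argument: $second"
    else
        echo "no second argument"
    fi
}
```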

backup schedule overlap

Hi,
Currently, cron-ghe-backup-automata and cron-ghe-backup-bus are each configured to run every two hours.
The first automata backup took more than 1 hour; the next one took around 13 minutes.
This makes it hard to predict a normal backup duration and can lead to an overlap between the automata and bus backup instances.
Suggestion: change the cron to every 3-4 hours to prevent the overlap and also give the GHE job queue some time to be cleaned and completed.
@lotharschulz Please check if applicable.

$ du -hc --max-depth=1 /data/ghe-production-data
124G /data/ghe-production-data/20170321T121301
12G  /data/ghe-production-data/20170321T101301
