pivotal-cf / bbr-pcf-pipeline-tasks
Collection of Concourse tasks for backing up a Tanzu Application Service (TAS) installation using BOSH Backup and Restore (BBR)
License: Apache License 2.0
Hi,
I am getting the error "tar: director-backup.tar: Cannot write: No space left on device, tar: Error is not recoverable: exiting now" when running the pipeline.
I have increased the disk space for the BOSH Director in the PAS tile, but I am still getting the same error.
Could anyone help me understand where exactly the temporary ".tar" files are staged before they are moved to the storage account?
Thanks in advance.
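For context (a hedged sketch; the target and job names below are placeholders): the backup is staged in the task container's output directory on the Concourse worker's disk before the `put` step uploads it, so it is usually the worker volume that fills up, not the Director. One way to confirm is to intercept the running task container and check free space:

```
fly -t my-target intercept -j my-pipeline/bbr-backup-director
# inside the container, the build dir holds the artifact outputs:
df -h /tmp/build
```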
Hi
Running bbr pipelines in a PKS environment. Because of Docker pull rate limits, I moved the pivnet-resource image to an internal registry. The backups run on a schedule using a time resource. When I look at my internal registry logs, I see that pivnet-resource:latest is pulled once every minute, all day.
This is the pivnet resource:
resource_types:
- name: pivnet
  type: docker-image
  source:
    repository: $internal_registry:443/pks/pivnet-resource
    tag: latest-final
resources for bbr repo and bbr-release:
resources:
- name: bbr-pipeline-tasks-repo
  type: git
  source:
    uri: http://$internal_git/bbr-pcf-pipeline-tasks-2.1.0.git
    #private_key: ((git-private-key))
    branch: test
    #tag_filter: ((bbr-pipeline-tasks-repo-version))
- name: bbr-release
  type: pivnet
  source:
    api_token: ((pivnet-api-token))
    product_slug: p-bosh-backup-and-restore
My time resource:
So I am wondering what triggers a pull from the internal registry every minute. Any ideas?
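By default Concourse checks every resource roughly once a minute, and each check of a `pivnet`-typed resource spins up a container from the pivnet-resource image — that is most likely the once-a-minute pull, independent of the time-resource schedule. A `check_every` on the resource should slow this down (a sketch, reusing the names from the snippet above):

```yaml
resources:
- name: bbr-release
  type: pivnet
  check_every: 24h  # default is ~1m; reduces registry pulls accordingly
  source:
    api_token: ((pivnet-api-token))
    product_slug: p-bosh-backup-and-restore
```

Newer Concourse versions also accept `check_every` on `resource_types` entries, which may matter if the resource type image itself is being re-checked.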
Please help me; I want to back up two environments, say Prod and Dev. I have two worker instances: Dev will use one worker VM and Prod will use the other. How do I achieve this setup? I don't have any experience with Concourse worker tags. Is there any good documentation for adding worker tags?
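Roughly how worker tags work (a hedged sketch): a worker registers with one or more tags (e.g. `concourse worker --tag=dev`, or the `tags` property on a BOSH-deployed worker), and any step with a matching `tags:` list is placed only on such workers:

```yaml
jobs:
- name: backup-dev
  plan:
  - task: bbr-backup-director
    tags: [dev]   # only runs on workers registered with the "dev" tag
    file: bbr-pipeline-tasks-repo/tasks/bbr-backup-director/task.yml
- name: backup-prod
  plan:
  - task: bbr-backup-director
    tags: [prod]  # only runs on workers registered with the "prod" tag
    file: bbr-pipeline-tasks-repo/tasks/bbr-backup-director/task.yml
```

Note that `get` and `put` steps in the same job need the tag as well, or they may be scheduled on untagged workers.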
In order to allow consumers to consume this repository directly as a Concourse resource without being broken when non-passive changes are made, have you considered versioning these tasks?
Or if the tasks aren't going to be versioned, should these tasks be treated strictly as "reference tasks" not designed for direct consumption?
Surely there is going to come a day when these tasks will no longer be backwards compatible with every version of PAS/Opsmgr/PKS/etc. Perhaps it would be too much management overhead to maintain all different task compatibilities for different versions of products; case in point, the pcf-pipelines
project has to maintain all these compatibility matrices and such. But then again, without version pinning, people using different product combinations will eventually be broken; naming conventions will eventually change (case in point, ERT is now PAS), etc.
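Until the tasks are versioned, consumers can at least pin on their side via the git resource (a sketch; the tag pattern is a placeholder and assumes releases are tagged):

```yaml
resources:
- name: bbr-pipeline-tasks-repo
  type: git
  source:
    uri: https://github.com/pivotal-cf/bbr-pcf-pipeline-tasks.git
    tag_filter: v1.*  # track a release line instead of the default branch
```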
Hi, our Ops Manager is SAML-authenticated.
Can you provide some information on how we would go about exporting the Ops Manager settings in such a scenario?
We initiate CF backups via a bastion host and are starting to see [WARNING] BOSH_ALL_PROXY was defined in pipeline but missing from task file.
As of Concourse 4.1 this is a warning but the 4.1 release notes indicate this will become an error in the future.
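The warning means the pipeline passes a param that the task file does not declare; declaring it (possibly with an empty default) in the task.yml `params` block should address it. A sketch of the task.yml excerpt:

```yaml
# task.yml (excerpt) — declare the param so Concourse passes it through
params:
  BOSH_ALL_PROXY:   # empty default; overridden from the pipeline when set
```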
The bbr-pcf-pipeline-tasks can successfully back up the director and PAS tiles; can I use this to back up a CredHub or Concourse deployment successfully? If so, are there any modifications I need to make?
It doesn't look like bbr-version in secrets.yml is being used by the pipeline. Should the version be controlled by a parameter, or hardcoded using tag: latest-final?
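If the intent is for secrets.yml's bbr-version to drive the download, the pivnet resource's `product_version` source field could be wired to it (a sketch; `product_version` is a regex per the pivnet resource docs):

```yaml
resources:
- name: bbr-release
  type: pivnet
  source:
    api_token: ((pivnet-api-token))
    product_slug: p-bosh-backup-and-restore
    product_version: ((bbr-version))  # e.g. 1\.5\..* (regex)
```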
Hi,
I have a query about adding a compression option in the tasks under bbr-backup-pas and bbr-backup-director. Is there any logic behind the current scripts only archiving the files, or would it be possible to add a compression option?
For example:
tar -czvf "director-backup_${current_date}.tar.gz" --remove-files -- /
Thanks,
Meraj
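Absent changes to the task scripts themselves, one option is a small follow-on step that gzips the artifact after bbr finishes. A self-contained sketch (the file and directory names are placeholders mirroring the pipeline's artifact layout):

```shell
# Simulate a backup artifact, then compress it as a separate step.
mkdir -p director-backup-artifact
echo "fake backup data" > director-backup-artifact/director-backup.tar
gzip -9 director-backup-artifact/director-backup.tar  # yields director-backup.tar.gz
ls -lh director-backup-artifact/
```

The s3 `put` step's `file:` glob would then need to reference the `.tar.gz` name instead.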
How do we backup OM installation settings if Ops Manager is configured with SAML Authentication?
Is there a way to get credentials that can be used in a Concourse automation pipeline to target Ops Manager in a fully automated way?
Thank you!
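One commonly suggested approach (hedged; the client name below is hypothetical and the client must be created in the Ops Manager UAA with the `opsman.admin` authority) is to authenticate om with client credentials instead of a SAML username/password:

```
# "bbr-automation" is a hypothetical UAA client created via uaac on the Ops Manager VM
om --target https://opsman.example.com \
   --client-id bbr-automation \
   --client-secret some-secret \
   export-installation --output-file installation.zip
```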
We were unable to call all parameters, like SA details instead of calling in pipeline.yml file.
Is there a way to call, all of them in params.yml file?
Thanks in advance.
Logs:
Using Ops Manager credentials
exporting installation
could not execute "export-installation": failed to export installation: could not make api request to installation_asset_collection endpoint: could not send api request to GET /api/v0/installation_asset_collection: token could not be retrieved from target url: Post https://xxxx-opsman-02.ims.cnp.local/uaa/oauth/token: x509: certificate is valid for Tempest, not xxxx-opsman-02.xxx.xxx.xxx
Just need to know if anyone has considered DELETE-ing all current sessions before starting the om export installation.
My Ops Manager export installations fail when someone else is logged into Ops Manager.
A simple "om curl DELETE /api/v0/sessions" should help avoid failed installation exports.
I may be able to submit a PR for this if there is interest.
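The suggestion above would look roughly like this ahead of the export (a sketch; the target URL is a placeholder, flags per the om CLI):

```
# Clear existing UI sessions first, then export.
om --target https://opsman.example.com \
   --username "$OPSMAN_USERNAME" --password "$OPSMAN_PASSWORD" \
   curl --path /api/v0/sessions --request DELETE
om --target https://opsman.example.com \
   --username "$OPSMAN_USERNAME" --password "$OPSMAN_PASSWORD" \
   export-installation --output-file installation.zip
```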
In the sample pipeline, the default value for om-backup-artifact is installation.zip. It wouldn't hurt to parameterize that out into secrets.yml (it's not a secret, but in lieu of a params.yml, I figure it can just go there?).
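A minimal sketch of that parameterization (the param name om-backup-artifact-name is made up here):

```yaml
# secrets.yml (not actually secret, just co-located)
om-backup-artifact-name: installation.zip

# pipeline.yml — put step referencing the param
- put: om-backup-artifact
  params:
    file: om-installation/((om-backup-artifact-name))
```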
3.14.0
binary
VMware
Chrome / Firefox
Initial Test
I am using the #14 pipeline to automate PCF ecosystem backup. Out of three jobs running in parallel, job #1 (bbr-backup-ert) and job #2 (export-om-installation) finish successfully. However, job #3 (bbr-backup-director) errors out with the following message:
failed to stream in to volume.
This is it. No more information. Won't tell me why it failed.
How can I troubleshoot this? Any ideas?
Thanks a lot.
Alex
Hi
When the task bbr-backup-director/task.sh tries to run
../binary/bbr director --host "${BOSH_ENVIRONMENT}" \
  --username "$BOSH_USERNAME" \
  --private-key-path <(echo "${BOSH_PRIVATE_KEY}") \
  backup
it fails with
socks connect tcp localhost:5000-> : dial tcp 127.0.0.1:5000: connect: connection refused
fly -t rnd i -j bbr_backup/bbr_backup -b 2
1: build #2, step: bbr-backup-director, type: task
2: build #2, step: bbr-pipeline-tasks-repo, type: get
3: build #2, step: bbr-release, type: get
4: build #2, step: bbr_image, type: get
5: build #2, step: extract-bbr, type: task
choose a container: 1
root@79297396-9b90-472e-72f1-9aedc3d84d7d:/tmp/build/cdcc260a# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/loop2 967G 1.3G 964G 1% /
tmpfs 16G 0 16G 0% /dev/shm
/dev/sdb2 977G 4.0G 924G 1% /tmp/garden-init
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/loop2 967G 1.3G 964G 1% /scratch
/dev/loop2 967G 1.3G 964G 1% /tmp/build/cdcc260a
/dev/loop2 967G 1.3G 964G 1% /tmp/build/cdcc260a/bbr-pipeline-tasks-repo
/dev/loop2 967G 1.3G 964G 1% /tmp/build/cdcc260a/binary
/dev/loop2 967G 1.3G 964G 1% /tmp/build/cdcc260a/director-backup-artifact
devtmpfs 16G 0 16G 0% /dev/tty
tmpfs 16G 0 16G 0% /proc/scsi
tmpfs 16G 0 16G 0% /sys/firmware
root@79297396-9b90-472e-72f1-9aedc3d84d7d:/tmp/build/cdcc260a#
And still the error:
root@79297396-9b90-472e-72f1-9aedc3d84d7d:/tmp/build/cdcc260a# cat director-backup-artifact/bbr-2020-02-26T04:54:08Z.err.log
1 error occurred:
error 1:
socks connect tcp localhost:5000->10.100.8.2:22: dial tcp 127.0.0.1:5000: connect: connection refused
ssh.Dial failed
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.runInSession
/tmp/build/80754af9/bosh-backup-and-restore/ssh/connection.go:148
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.Stream
/tmp/build/80754af9/bosh-backup-and-restore/ssh/connection.go:85
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.Run
/tmp/build/80754af9/bosh-backup-and-restore/ssh/connection.go:77
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.SshRemoteRunner.FindFiles
/tmp/build/80754af9/bosh-backup-and-restore/ssh/remote_runner.go:123
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).findBBRScripts
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:95
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).FindJobs
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:52
github.com/cloudfoundry-incubator/bosh-backup-and-restore/standalone.DeploymentManager.Find
/tmp/build/80754af9/bosh-backup-and-restore/standalone/deployment_manager.go:53
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*FindDeploymentStep).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/find_deployment_step.go:14
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*Workflow).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/workflow.go:17
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.Backuper.Backup
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/backuper.go:55
github.com/cloudfoundry-incubator/bosh-backup-and-restore/cli/command.DirectorBackupCommand.Action
/tmp/build/80754af9/bosh-backup-and-restore/cli/command/director_backup.go:47
github.com/urfave/cli.HandleAction
/go/pkg/mod/github.com/urfave/[email protected]/app.go:490
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:210
github.com/urfave/cli.(*App).RunAsSubcommand
/go/pkg/mod/github.com/urfave/[email protected]/app.go:379
github.com/urfave/cli.Command.startApp
/go/pkg/mod/github.com/urfave/[email protected]/command.go:298
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:98
github.com/urfave/cli.(*App).Run
/go/pkg/mod/github.com/urfave/[email protected]/app.go:255
main.main
/tmp/build/80754af9/bosh-backup-and-restore/cmd/bbr/main.go:73
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
ssh.Stream failed
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.Stream
/tmp/build/80754af9/bosh-backup-and-restore/ssh/connection.go:87
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.Run
/tmp/build/80754af9/bosh-backup-and-restore/ssh/connection.go:77
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.SshRemoteRunner.FindFiles
/tmp/build/80754af9/bosh-backup-and-restore/ssh/remote_runner.go:123
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).findBBRScripts
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:95
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).FindJobs
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:52
github.com/cloudfoundry-incubator/bosh-backup-and-restore/standalone.DeploymentManager.Find
/tmp/build/80754af9/bosh-backup-and-restore/standalone/deployment_manager.go:53
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*FindDeploymentStep).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/find_deployment_step.go:14
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*Workflow).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/workflow.go:17
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.Backuper.Backup
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/backuper.go:55
github.com/cloudfoundry-incubator/bosh-backup-and-restore/cli/command.DirectorBackupCommand.Action
/tmp/build/80754af9/bosh-backup-and-restore/cli/command/director_backup.go:47
github.com/urfave/cli.HandleAction
/go/pkg/mod/github.com/urfave/[email protected]/app.go:490
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:210
github.com/urfave/cli.(*App).RunAsSubcommand
/go/pkg/mod/github.com/urfave/[email protected]/app.go:379
github.com/urfave/cli.Command.startApp
/go/pkg/mod/github.com/urfave/[email protected]/command.go:298
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:98
github.com/urfave/cli.(*App).Run
/go/pkg/mod/github.com/urfave/[email protected]/app.go:255
main.main
/tmp/build/80754af9/bosh-backup-and-restore/cmd/bbr/main.go:73
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
ssh.Run failed
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.Run
/tmp/build/80754af9/bosh-backup-and-restore/ssh/connection.go:79
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.SshRemoteRunner.FindFiles
/tmp/build/80754af9/bosh-backup-and-restore/ssh/remote_runner.go:123
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).findBBRScripts
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:95
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).FindJobs
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:52
github.com/cloudfoundry-incubator/bosh-backup-and-restore/standalone.DeploymentManager.Find
/tmp/build/80754af9/bosh-backup-and-restore/standalone/deployment_manager.go:53
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*FindDeploymentStep).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/find_deployment_step.go:14
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*Workflow).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/workflow.go:17
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.Backuper.Backup
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/backuper.go:55
github.com/cloudfoundry-incubator/bosh-backup-and-restore/cli/command.DirectorBackupCommand.Action
/tmp/build/80754af9/bosh-backup-and-restore/cli/command/director_backup.go:47
github.com/urfave/cli.HandleAction
/go/pkg/mod/github.com/urfave/[email protected]/app.go:490
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:210
github.com/urfave/cli.(*App).RunAsSubcommand
/go/pkg/mod/github.com/urfave/[email protected]/app.go:379
github.com/urfave/cli.Command.startApp
/go/pkg/mod/github.com/urfave/[email protected]/command.go:298
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:98
github.com/urfave/cli.(*App).Run
/go/pkg/mod/github.com/urfave/[email protected]/app.go:255
main.main
/tmp/build/80754af9/bosh-backup-and-restore/cmd/bbr/main.go:73
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357
finding scripts failed on bosh/0
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).findBBRScripts
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:97
github.com/cloudfoundry-incubator/bosh-backup-and-restore/instance.(*JobFinderFromScripts).FindJobs
/tmp/build/80754af9/bosh-backup-and-restore/instance/job_finder.go:52
github.com/cloudfoundry-incubator/bosh-backup-and-restore/standalone.DeploymentManager.Find
/tmp/build/80754af9/bosh-backup-and-restore/standalone/deployment_manager.go:53
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*FindDeploymentStep).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/find_deployment_step.go:14
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.(*Workflow).Run
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/workflow.go:17
github.com/cloudfoundry-incubator/bosh-backup-and-restore/orchestrator.Backuper.Backup
/tmp/build/80754af9/bosh-backup-and-restore/orchestrator/backuper.go:55
github.com/cloudfoundry-incubator/bosh-backup-and-restore/cli/command.DirectorBackupCommand.Action
/tmp/build/80754af9/bosh-backup-and-restore/cli/command/director_backup.go:47
github.com/urfave/cli.HandleAction
/go/pkg/mod/github.com/urfave/[email protected]/app.go:490
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:210
github.com/urfave/cli.(*App).RunAsSubcommand
/go/pkg/mod/github.com/urfave/[email protected]/app.go:379
github.com/urfave/cli.Command.startApp
/go/pkg/mod/github.com/urfave/[email protected]/command.go:298
github.com/urfave/cli.Command.Run
/go/pkg/mod/github.com/urfave/[email protected]/command.go:98
github.com/urfave/cli.(*App).Run
/go/pkg/mod/github.com/urfave/[email protected]/app.go:255
main.main
/tmp/build/80754af9/bosh-backup-and-restore/cmd/bbr/main.go:73
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
root@f550bbef-d4b8-48f0-4db8-5803088aeaad:/tmp/build/cdcc260a# ps -auwx
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 1120 4 ? Ss 05:34 0:00 /tmp/garden-init
root 565 0.0 0.0 18288 3384 pts/0 Ss 05:48 0:00 bash
root 665 99.7 0.0 47000 5308 pts/0 R 05:48 7:20 ssh -4 -D 5000 -NC [email protected] -i /tmp/build/cdcc260a/ssh.key -o ServerAliveInterval=60 -o StrictHostKeyC
root 670 0.0 0.0 34428 2868 pts/0 R+ 05:56 0:00 ps -auwx
root@f550bbef-d4b8-48f0-4db8-5803088aeaad:/tmp/build/cdcc260a# env | grep -i bosh
BOSH_CA_CERT_PATH=/tmp/build/cdcc260a/bosh.crt
BOSH_CLIENT=bbr_client
BOSH_USERNAME=bbr
BOSH_ENVIRONMENT=asdasd
BOSH_PRIVATE_KEY=-----BEGIN RSA PRIVATE KEY-----
BOSH_CLIENT_SECRET=asdasdasdasd
BOSH_ALL_PROXY=socks5://localhost:5000
BOSH_CA_CERT=/tmp/build/cdcc260a/bosh.crt
I'm working on a concourse pipeline using this for an s3-compatible filestore but one that doesn't support versioned files. Is there a way to set up the s3 resource part of the pipeline without versioned files?
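One possibility, assuming the standard Concourse s3-resource: it also supports unversioned buckets by extracting a version from the file name with `regexp` instead of `versioned_file`. A sketch (endpoint and bucket names are placeholders):

```yaml
resources:
- name: director-backup-bucket
  type: s3
  source:
    bucket: ((backup-bucket))
    endpoint: ((s3-endpoint))          # s3-compatible store
    access_key_id: ((aws-access-key-id))
    secret_access_key: ((aws-secret-access-key))
    regexp: director-backup-(.*).tar   # version comes from the filename
```

The task producing the artifact would then need to emit a file name matching the regexp (e.g. with a timestamp).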
The Docker container created by the Dockerfile at bbr-pcf-pipeline-tasks/docker does NOT contain any valid om commands. The same goes for the Docker image pcfplatformrecovery/bbr-pcf-pipeline-tasks on Docker Hub, which seems to have been updated recently. I think this is because the download URL list at https://api.github.com/repos/pivotal-cf/om/releases/latest has changed.
I hope this problem can be resolved.
Can you please add an option in the pipeline/secrets YAML to not specify any worker tag, so that a worker from the default pool is selected?
This might be just my view, but here it goes.
It is common for CF teams to have a single bucket for all backups.
Given the current pipeline if you run the backups for say export-om-installation for two sites keeping the s3 bucket the same, the installation.zip of one site will be overwritten by the installation.zip of the other site since there is no uniqueness in the name. This can happen for director and ert backups too. It works amazingly well as long as it is for one site and one site only.
My suggestion is to add a site-id parameter that consumers can add to the secrets, and the site-id will be prepended to the versioned file as ((site-id))-installation.zip. This requires modification of the pipeline.yml only.
As much as consumers can customize the existing pipeline to their needs, this might be one that ensures files are named meaningfully when the pipeline is run.
If there is interest, I can submit a PR for the same.
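A minimal sketch of what that could look like in pipeline.yml (bucket and param names are placeholders):

```yaml
resources:
- name: om-backup-artifact
  type: s3
  source:
    bucket: ((backup-bucket))
    access_key_id: ((aws-access-key-id))
    secret_access_key: ((aws-secret-access-key))
    versioned_file: ((site-id))-installation.zip  # e.g. prod-installation.zip
```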
The sample pipeline does not include param values for client-id or client-secret to be passed in for om auth; however, the sample secrets.yml file has those params defined. Maybe I'm missing something on this, though?
EDIT: it also appears some of the tasks don't allow for client-id and client-secret either.
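Wiring them through would presumably look something like this (the param names OPSMAN_CLIENT_ID/OPSMAN_CLIENT_SECRET are assumptions; the task.yml would have to declare them too):

```yaml
- task: export-om-installation
  file: bbr-pipeline-tasks-repo/tasks/export-om-installation/task.yml
  params:
    OPSMAN_URL: ((opsman-url))
    OPSMAN_CLIENT_ID: ((client-id))          # assumed param name
    OPSMAN_CLIENT_SECRET: ((client-secret))  # assumed param name
```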
In the v1.1.0 release notes we announced that the ERT tasks are deprecated and that we will remove them in a future version. Removing bbr-backup-ert and bbr-cleanup-ert will be a breaking change, and we will cut a major release.
If you use these tasks to back up ERT, then you should consider pinning to version 1.0.0; see Pinning to a version.
If you use these tasks to back up PAS, then you should switch to bbr-backup-pas and bbr-cleanup-pas.
Please comment in this issue if you have any objections.
@terminatingcode and Josh
[bbr] 2019/03/26 16:07:08 ERROR - Error unlocking bbr-usage-servicedb on backup_restore/78a03107-5bab-4fb4-8308-0a4d5c60bb8f.
[bbr] 2019/03/26 16:07:08 INFO - Finished running post-backup-unlock scripts.
2 errors occurred:
error 1:
1 error occurred:
error 1:
Error attempting to run pre-backup-lock for job bbr-usage-servicedb on backup_restore/78a03107-5bab-4fb4-8308-0a4d5c60bb8f: + exec /var/vcap/jobs/bbr-usage-servicedb/bin/lock
Run bbr backup-cleanup to ensure that any temp files are cleaned up and all jobs are unlocked. Shouldn't the backup in the container always be cleaned up (on failure and on success)? The ensure task item runs in both scenarios. Also, using cached folders avoids copying the backup from one container task to another. If you hijack the containers you can see it is indeed copied.
I can create a pull request for this: master...patrickhuber:master but never received customer feedback that it was working. My local tests showed that it worked as expected.
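The ensure idea sketched against this repo's task naming (hedged; verify the cleanup task path exists in your checkout, and adjust inputs to your pipeline):

```yaml
- task: bbr-backup-director
  file: bbr-pipeline-tasks-repo/tasks/bbr-backup-director/task.yml
  ensure:   # runs on success and on failure
    task: bbr-cleanup-director
    file: bbr-pipeline-tasks-repo/tasks/bbr-cleanup-director/task.yml
```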
Greetings,
I was running bbr-pcf-pipeline-tasks quite successfully over the past month, no issues. Today, the pipeline errored out with the following message:
Using Ops Manager credentials
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Thu, 13 Sep 2018 14:33:29 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 64f50273-1875-479e-8e52-36f05f808f5b
X-Runtime: 1.028000
X-Xss-Protection: 1; mode=block
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Thu, 13 Sep 2018 14:33:31 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 47844370-555f-4e4e-9e04-da7a511995e4
X-Runtime: 1.094104
X-Xss-Protection: 1; mode=block
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Thu, 13 Sep 2018 14:33:33 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: e75cfd0a-231a-4f35-ab7d-6b656d5aea65
X-Runtime: 0.967371
X-Xss-Protection: 1; mode=block
bbr-pipeline-tasks-repo/tasks/bbr-backup-director/../../scripts/export-director-metadata: line 25: OPSMAN_PRIVATE_KEY: unbound variable
Not sure what to make of it. This pipeline was running fine, and now this. I don't think I ever had to provide the OPSMAN_PRIVATE_KEY variable. Is this new?
Does bbr-pcf-pipeline support uploading backups to EMC ECS?
Greetings,
This is just a question: is it even possible to schedule a pipeline run? For instance, I would like to do a periodic backup of PCF, say every Saturday morning at 02:00. Is that possible, or should I be resorting to unpause and then pause using fly, driven by a crontab?
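Yes — Concourse's time resource can express exactly this schedule, so no fly/crontab workaround is needed. A sketch (the timezone is a placeholder):

```yaml
resources:
- name: saturday-2am
  type: time
  source:
    start: 2:00 AM
    stop: 3:00 AM
    days: [Saturday]
    location: America/New_York  # placeholder timezone

jobs:
- name: bbr-backup-director
  plan:
  - get: saturday-2am
    trigger: true  # fires once within the window every Saturday
  # ... backup steps ...
```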
Hello,
We have an issue where export-director-metadata is not working with ERT when CredHub is enabled.
Error below:
bosh director unreachable or unhealthy: Fetching info: Performing request GET 'https://<redacted_ip>:25555/info': Performing GET request: Retry: Get https://<redacted_ip>:25555/info: Forbidden
Greetings,
Having very strange behavior with Concourse. I've deployed Concourse through BOSH, as a deployment inside the BOSH Director, and I was able to launch the BBR job on it as well. So far so good. Now, out of the three components of the Concourse BBR pipeline, two finish without a problem:
export-om-installation
bbr-backup-director
However, bbr-backup-ert fails with a very curious error:
Using Ops Manager credentials
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 21 Dec 2018 21:48:16 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 29966c32-bac4-4c24-a014-f568cbad8821
X-Runtime: 1.019136
X-Xss-Protection: 1; mode=block
Retreving BBR client credentials for BOSH Director
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 21 Dec 2018 21:48:17 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 49f3c396-e933-498a-bff7-d530a147e5fd
X-Runtime: 1.341792
X-Xss-Protection: 1; mode=block
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 21 Dec 2018 21:48:20 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: fcddd37e-d160-406b-9201-fed33bfb2df2
X-Runtime: 2.298233
X-Xss-Protection: 1; mode=block
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 21 Dec 2018 21:48:22 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 69c49dfb-c877-45dc-8bfa-b4b5593850c1
X-Runtime: 1.836832
X-Xss-Protection: 1; mode=block
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 21 Dec 2018 21:48:23 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 60474c65-effb-4f6c-bb1a-27b545e92f37
X-Runtime: 0.077863
X-Xss-Protection: 1; mode=block
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 21 Dec 2018 21:48:25 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 9d8e6205-82d7-44be-9803-b9c320961b61
X-Runtime: 1.924620
X-Xss-Protection: 1; mode=block
Status: 200 OK
Cache-Control: no-cache, no-store
Connection: keep-alive
Content-Type: application/json; charset=utf-8
Date: Fri, 21 Dec 2018 21:48:28 GMT
Expires: Fri, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: nginx/1.4.6 (Ubuntu)
Strict-Transport-Security: max-age=15552000
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Request-Id: 3c4120c1-9e82-4d40-9208-a8dff6f7051e
X-Runtime: 2.847638
X-Xss-Protection: 1; mode=block
/tmp/build/e4b91406/ert-backup-artifact /tmp/build/e4b91406
[bbr] 2018/12/21 21:48:28 INFO - Looking for scripts
1 error occurred:
error 1:
failed to find instances for deployment cf-bd9479cdb9dee3d26d39: failed to check os: ssh.Run failed: ssh.Stream failed: ssh.Dial failed: dial tcp 192.168.1.59:22: connect: connection timed out
I don't know where this IP 192.168.1.59 comes from. But with every backup it changes, while staying within the same RFC 1918 address space (192.168.1.x). I hijacked into a container and couldn't find anything but this little snippet:
$ fly42 -t boshcc hijack -j bbr-backup-CFLAB01-CG/bbr-backup-ert
1: build #1, step: bbr-backup-ert, type: task
2: build #1, step: bbr-release, type: get
3: build #1, step: extract-binary, type: task
4: build #1, step: slack-alert, type: get
5: build #1, step: slack-alert, type: put
choose a container: 1
root@6a7867a6-08b8-4447-746c-90ab5b6e025c:/tmp/build/e4b91406# ls
bbr-pipeline-tasks-repo bbr_client.json bbr_keys.json binary bosh.crt deployed_products.json ert-backup-artifact
root@6a7867a6-08b8-4447-746c-90ab5b6e025c:/tmp/build/e4b91406# cd ert-backup-artifact/
root@6a7867a6-08b8-4447-746c-90ab5b6e025c:/tmp/build/e4b91406/ert-backup-artifact# ls
bbr-2018-12-21T21:50:45Z.err.log
root@6a7867a6-08b8-4447-746c-90ab5b6e025c:/tmp/build/e4b91406/ert-backup-artifact# cat bbr-2018-12-21T21\:50\:45Z.err.log
1 error occurred:
error 1:
dial tcp 192.168.1.59:22: connect: connection timed out
ssh.Dial failed
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.runInSession
/tmp/build/80754af9/src/github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh/connection.go:148
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.Stream
/tmp/build/80754af9/src/github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh/connection.go:85
github.com/cloudfoundry-incubator/bosh-backup-and-restore/ssh.Connection.Run
I don't know what it's trying to do. It does not happen on any other foundations; however, it is important to note that the other foundation's Concourse ecosystem is not deployed using BOSH. That's the difference. Other than that, all the pipeline settings are identical.
Here is my pipeline:
jobs:
- name: export-om-installation
  serial: true
  plan:
  - aggregate:
    - get: bbr-pipeline-tasks-repo
      tags:
      - ((concourse-worker-tag))
    - get: run-schedule
      tags:
      - ((concourse-worker-tag))
      trigger: true
  - task: export-om-installation
    tags:
    - ((concourse-worker-tag))
    file: bbr-pipeline-tasks-repo/tasks/export-om-installation/task.yml
    params:
      SKIP_SSL_VALIDATION: ((skip-ssl-validation))
      OPSMAN_URL: ((opsman-url))
      OPSMAN_USERNAME: {{opsman-username}}
      OPSMAN_PASSWORD: {{opsman-password}}
  - put: om-backup-artifact
    tags:
    - ((concourse-worker-tag))
    params:
      file: om-installation/installation.zip
    inputs:
    - name: export-om-installation
    outputs:
    - name: notify_message
  on_success:
    put: slack-alert
    tags:
    - ((concourse-worker-tag))
    params:
      channel: '#cloudeng'
      text: |
        The `BOSH BBR Export OM Installation` pipeline has Succeeded. Check it out at:
        $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      silent: false
  on_failure:
    put: slack-alert
    tags:
    - ((concourse-worker-tag))
    params:
      channel: '#cloudeng'
      text: |
        The `BOSH BBR Export OM Installation` pipeline has FAILED. Please resolve any issues and ensure the pipeline lock was released. Check it out at:
        $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      silent: false
- name: bbr-backup-ert
  serial: true
  plan:
  - get: run-schedule
    tags:
    - ((concourse-worker-tag))
    trigger: true
  - aggregate:
    - get: bbr-pipeline-tasks-repo
      tags:
      - ((concourse-worker-tag))
      trigger: false
    - get: bbr-release
      tags:
      - ((concourse-worker-tag))
      trigger: false
  - task: extract-binary
    tags:
    - ((concourse-worker-tag))
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cloudfoundrylondon/bbr-pipeline
          tag: release-candidate
      inputs:
      - name: bbr-release
      outputs:
      - name: binary
      run:
        path: sh
        args:
        - -c
        - |
          tar -xvf bbr-release/bbr*.tar
          cp releases/bbr binary/
  - task: bbr-backup-ert
    tags:
    - ((concourse-worker-tag))
    file: bbr-pipeline-tasks-repo/tasks/bbr-backup-ert/task.yml
    params:
      SKIP_SSL_VALIDATION: ((skip-ssl-validation))
      OPSMAN_URL: ((opsman-url))
      OPSMAN_USERNAME: ((opsman-username))
      OPSMAN_PASSWORD: ((opsman-password))
  - put: ert-backup-bucket
    tags:
    - ((concourse-worker-tag))
    params:
      file: ert-backup-artifact/ert-backup.tar
  on_success:
    put: slack-alert
    tags:
    - ((concourse-worker-tag))
    params:
      channel: '#cloudeng'
      text: |
        The `BOSH BBR ERT/PAS Backup` pipeline has Succeeded. Check it out at:
        $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      silent: false
  on_failure:
    put: slack-alert
    tags:
    - ((concourse-worker-tag))
    params:
      channel: '#cloudeng'
      text: |
        The `BOSH BBR ERT/PAS Backup` pipeline has FAILED. Please resolve any issues and ensure the pipeline lock was released. Check it out at:
        $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      silent: false
- name: bbr-backup-director
  serial: true
  plan:
  - get: run-schedule
    tags:
    - ((concourse-worker-tag))
    trigger: true
  - aggregate:
    - get: bbr-pipeline-tasks-repo
      tags:
      - ((concourse-worker-tag))
      trigger: false
    - get: bbr-release
      tags:
      - ((concourse-worker-tag))
      trigger: true
  - task: extract-binary
    tags:
    - ((concourse-worker-tag))
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: cloudfoundrylondon/bbr-pipeline
          tag: release-candidate
      inputs:
      - name: bbr-release
      outputs:
      - name: binary
      run:
        path: sh
        args:
        - -c
        - |
          tar -xvf bbr-release/bbr*.tar
          cp releases/bbr binary/
  - task: bbr-backup-director
    tags:
    - ((concourse-worker-tag))
    file: bbr-pipeline-tasks-repo/tasks/bbr-backup-director/task.yml
    params:
      SKIP_SSL_VALIDATION: ((skip-ssl-validation))
      OPSMAN_URL: ((opsman-url))
      OPSMAN_USERNAME: ((opsman-username))
      OPSMAN_PASSWORD: ((opsman-password))
  - put: director-backup-bucket
    tags:
    - ((concourse-worker-tag))
    params:
      file: director-backup-artifact/director-backup.tar
  on_success:
    put: slack-alert
    tags:
    - ((concourse-worker-tag))
    params:
      channel: '#cloudeng'
      text: |
        The `BOSH BBR Director Backup` pipeline has Succeeded. Check it out at:
        $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      silent: false
  on_failure:
    put: slack-alert
    tags:
    - ((concourse-worker-tag))
    params:
      channel: '#cloudeng'
      text: |
        The `BOSH BBR Director Backup` pipeline has FAILED. Please resolve any issues and ensure the pipeline lock was released. Check it out at:
        $ATC_EXTERNAL_URL/teams/$BUILD_TEAM_NAME/pipelines/$BUILD_PIPELINE_NAME/jobs/$BUILD_JOB_NAME/builds/$BUILD_NAME
      silent: false
resource_types:
- name: pivnet
  tags:
  - ((concourse-worker-tag))
  type: docker-image
  source:
    repository: pivotalcf/pivnet-resource
    tag: latest-final
- name: slack-notification
  tags:
  - ((concourse-worker-tag))
  type: docker-image
  source:
    repository: cfcommunity/slack-notification-resource
resources:
- name: slack-alert
  tags:
  - ((concourse-worker-tag))
  type: slack-notification
  source:
    url: ((slack-webhook))
- name: bbr-pipeline-tasks-repo
  type: git
  tags:
  - ((concourse-worker-tag))
  source:
    uri: ssh://[email protected]/cf/bbr-pcf-pipeline-tasks.git
    private_key: ((private_key))
    branch: master
- name: om-backup-artifact
  tags:
  - ((concourse-worker-tag))
  type: s3
  source:
    bucket: ((backup-artifact-bucket))
    region_name: ((storage-region))
    endpoint: ((storage-endpoint))
    access_key_id: ((storage-access-key-id))
    secret_access_key: ((storage-secret-access-key))
    versioned_file: installation.zip
    use_v2_signing: ((storage-use-v2-signing))
    disable_ssl: ((disable_ssl))
- name: ert-backup-bucket
  type: s3
  tags:
  - ((concourse-worker-tag))
  source:
    bucket: ((backup-artifact-bucket))
    region_name: ((storage-region))
    endpoint: ((storage-endpoint))
    access_key_id: ((storage-access-key-id))
    secret_access_key: ((storage-secret-access-key))
    versioned_file: ert-backup.tar
    use_v2_signing: ((storage-use-v2-signing))
    disable_ssl: ((disable_ssl))
- name: director-backup-bucket
  tags:
  - ((concourse-worker-tag))
  type: s3
  source:
    bucket: ((backup-artifact-bucket))
    region_name: ((storage-region))
    endpoint: ((storage-endpoint))
    access_key_id: ((storage-access-key-id))
    secret_access_key: ((storage-secret-access-key))
    versioned_file: director-backup.tar
    use_v2_signing: ((storage-use-v2-signing))
    disable_ssl: ((disable_ssl))
- name: run-schedule
  type: time
  tags:
  - ((concourse-worker-tag))
  source:
    location: America/New_York
    days: [Friday]
    start: 4:45 PM
    stop: 5:45 PM
- name: bbr-release
  tags:
  - ((concourse-worker-tag))
  type: pivnet
  source:
    api_token: ((pivnet-api-token))
    product_slug: p-bosh-backup-and-restore
    product_version: ((BBR_major_minor_version))
Any thoughts?
Thanks!!
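On the worker-tag question raised earlier: Concourse workers advertise tags when they register with the web node, and any step carrying `tags:` is only scheduled onto a worker with a matching tag. A minimal sketch using the standalone `concourse worker` command (the paths, host name, and tag values below are illustrative assumptions, not part of this repo):

```shell
# On the Dev worker VM: register with the "dev" tag.
# (Equivalently, set the CONCOURSE_TAG environment variable.)
concourse worker \
  --work-dir /opt/concourse/worker \
  --tsa-host ci.example.com:2222 \
  --tsa-public-key host_key.pub \
  --tsa-worker-private-key worker_key \
  --tag dev

# On the Prod worker VM, register with --tag prod instead.
```

With that in place, set `((concourse-worker-tag))` to `dev` or `prod` when setting each pipeline, and every tagged `get`, `put`, and `task` step will run only on the matching worker.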
Although restoring from a BBR backup can be a rather complicated process, I have not seen any examples of automation workflows which restore any of these artifacts that are being backed up (director, PAS, PKS, or any BBR artifact really).
Are there existing examples of using Concourse to restore from a BBR backup?
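This repo does not ship restore tasks, but a restore job can be sketched along the same lines as the backup jobs above. The fragment below is illustrative only: the task name, input wiring, and variable names are assumptions patterned on the backup tasks, and the `bbr director ... restore` invocation follows the BBR CLI's documented shape.

```yaml
# Hypothetical Concourse task restoring a director from a previously
# fetched BBR artifact (all names here are illustrative).
- task: bbr-restore-director
  tags:
  - ((concourse-worker-tag))
  config:
    platform: linux
    image_resource:
      type: docker-image
      source:
        repository: cloudfoundrylondon/bbr-pipeline
        tag: release-candidate
    inputs:
    - name: binary                   # bbr CLI from the extract-binary task
    - name: director-backup-bucket   # artifact fetched from the S3 resource
    run:
      path: sh
      args:
      - -c
      - |
        tar -xf director-backup-bucket/director-backup.tar
        binary/bbr director \
          --host ((bosh-director-address)) \
          --username bbr \
          --private-key-path bbr_key.pem \
          restore --artifact-path ./director-backup
```

Restores are deliberately more manual than backups (they usually require stopping traffic and validating the foundation afterwards), which may be why no automated examples exist; a sketch like this would normally be gated behind a manually triggered job.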
Thanks for your consideration
Add support for using a client ID and secret to access Ops Manager instead of a username and password.
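For context, Ops Manager's UAA can issue tokens from client credentials, so a task supporting this would presumably accept parameters along the following lines. The `OPSMAN_CLIENT_ID`/`OPSMAN_CLIENT_SECRET` names are assumptions for illustration, not the repo's current interface:

```yaml
# Hypothetical params block for client-credential auth (names illustrative).
- task: bbr-backup-director
  file: bbr-pipeline-tasks-repo/tasks/bbr-backup-director/task.yml
  params:
    SKIP_SSL_VALIDATION: ((skip-ssl-validation))
    OPSMAN_URL: ((opsman-url))
    OPSMAN_CLIENT_ID: ((opsman-client-id))
    OPSMAN_CLIENT_SECRET: ((opsman-client-secret))
```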
In the documentation for backing up PCF (https://docs.pivotal.io/pivotalcf/2-6/customizing/backup-restore/backup-pcf-bbr.html#check-director), the "Check Your BOSH Director" section says to call `bbr backup-pre-check`.
A customer has this as part of their cron-based bbr backup process, which we are moving to Concourse. We created the tasks and scripts to support it and added it to the director backup job. The idea is that the pre-check will fail and kill the backup job before any of the other backup steps run.
Creating this issue to see if this is a desirable contribution for a pull request.
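As a sketch of what such a pre-check step could run (the subcommand follows the BBR CLI, but the environment variable names and task wiring are assumptions):

```shell
# Illustrative director pre-backup check; a non-zero exit fails the
# Concourse job before any actual backup step runs.
bbr director \
  --host "${BOSH_DIRECTOR_ADDRESS}" \
  --username bbr \
  --private-key-path "${BBR_SSH_KEY_PATH}" \
  pre-backup-check
```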
This has been raised a few times, including in #35.
The current ERT tasks are suitable for either PAS or ERT. However, this is confusing. We would like tasks that have PAS in their name, so that it is clear the correct tasks are being used to back up PAS.
Please note: we want to avoid breaking changes to the tasks with ERT in their name.
Pivotal uses GITBOT to synchronize Github issues and pull requests with Pivotal Tracker.
Please add your new repo to the GITBOT config-production.yml
in the Gitbot configuration repo.
If you don't have access you can send an ask ticket to the CF admins. We prefer teams to submit their changes via a pull request.
Steps: add your new repo to the config-production.yml file.
If there are any questions, please reach out to [email protected].
The current pipeline assumes the Concourse worker can talk to Ops Manager, the Director, and ERT in order to run BBR backups. Are any pipelines available for doing a BBR backup through a jumpbox, with either Ops Manager or some other non-Concourse-worker VM acting as the jumpbox? We have firewall restrictions: only Ops Manager and one other utility VM are allowed to talk to the Director and deployments in our PCF foundations, and we cannot place a Concourse worker in each foundation for network and security reasons.
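One possible workaround, not something this repo ships, is to tunnel bbr's traffic through the allowed jumpbox using the `BOSH_ALL_PROXY` environment variable that the BBR CLI honors. The worker then only needs SSH access to the jumpbox. Host names, addresses, and key paths below are illustrative:

```shell
# Route bbr's director traffic through an SSH SOCKS5 tunnel via the
# jumpbox (here Ops Manager; all values are illustrative).
export BOSH_ALL_PROXY="ssh+socks5://ubuntu@opsman.example.com:22?private-key=/tmp/opsman_ssh_key"
bbr director \
  --host 10.0.0.5 \
  --username bbr \
  --private-key-path /tmp/bbr_ssh_key \
  backup
```

The same variable would be exported inside a task script before invoking the existing backup tasks, so the pipeline structure itself would not need to change.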