maxisam / mgob

MongoDB dockerized backup agent. Runs scheduled backups with retention, S3 & SFTP upload, notifications, instrumentation with Prometheus and more.

Home Page: https://maxisam.github.io/mgob/

License: MIT License

Topics: backup, mongodb, kubernetes, docker, s3-storage

mgob's Introduction

mgob


MGOB is a MongoDB backup automation tool written in Go. This fork introduces a variety of enhancements over the original repository by stefanprodan, which is set to be archived. Check out the archival discussion here.

Note: New features are being added to this fork exclusively.

Enhancements in This Fork

  • Backup validation
  • Retry mechanism for backups
  • MS Teams notification support
  • Updated email notification mechanism addressing the certificate issue
  • Go updated to 1.21
  • Dependencies updated
  • Introduced warnOnly option for all notification channels
  • Integrated GitHub Actions for CI/CD
  • New Helm Chart with enhanced metrics, liveness probe, and other features
  • Multiple Docker image releases catering to different backup solutions
  • Option to skip local backup when retention is set to 0 (#42, Credit: @aneagoe)
  • On-demand restore API
  • Load config from environment variables to override the config file. Syntax: PLAN_ID__KEY_PROPERTY (e.g. mongo_test__SMTP_SERVER=smtp.company.com); see the sketch below this list
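
A minimal sketch of that override syntax expressed as container environment variables (Kubernetes style, matching the manifests quoted in the issues further down this page). The plan ID mongo_test and the SMTP_SERVER key come from the example above; the second entry assumes target.host maps to TARGET_HOST under the same naming scheme, which is an assumption, not something documented here:

env:
  - name: mongo_test__SMTP_SERVER       # documented example: overrides smtp.server for plan mongo_test
    value: "smtp.company.com"
  - name: mongo_test__TARGET_HOST       # assumption: target.host under the same SECTION_PROPERTY mapping
    value: "mongodb-0.mongodb"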

Helm Chart

helm pull oci://registry-1.docker.io/maxisam/mgob --version 1.8.3
helm upgrade --install mgob maxisam/mgob --namespace mgob --create-namespace
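
If the image tag needs pinning, a minimal values override sketch; this assumes the chart exposes image.tag as shown in the "Issues with latest helm chart 1.8.3" report further down this page, and notes that published image tags carry a suffix such as -all:

# values.yaml (sketch)
image:
  tag: "1.12.17-all"

Pass it with -f values.yaml (or --set image.tag=...) on the helm upgrade command above.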

Breaking Changes

  • v2: in config, sftp.private_key -> sftp.privateKey
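
A before/after sketch of that rename, assuming an otherwise unchanged sftp block; the key file path is a placeholder:

# v1 config (old key)
sftp:
  private_key: /etc/ssh/mgob_id_rsa

# v2 config (new key)
sftp:
  privateKey: /etc/ssh/mgob_id_rsa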

Original Features

  • schedule backups
  • local backups retention
  • upload to S3 Object Storage (Minio, AWS, Google Cloud, Azure)
  • upload to gcloud storage
  • upload to SFTP
  • upload to any Rclone supported storage
  • notifications (Email, Slack)
  • instrumentation with Prometheus
  • http file server for local backups and logs
  • distributed as an Alpine Docker image

Installation

MGOB is available on Docker Hub at maxisam/mgob.

Supported tags:

  • maxisam/mgob:latest (latest stable release)

Compatibility matrix:

MGOB MongoDB
stefanprodan/mgob:0.9 3.4
stefanprodan/mgob:0.10 3.6
stefanprodan/mgob:1.0 4.0
stefanprodan/mgob:1.1 4.2
maxisam/mgob:1.10 5.0
maxisam/mgob:1.12 7.0

Docker:

docker run -dp 8090:8090 --name mgob \
    -v "/mgob/config:/config" \
    -v "/mgob/storage:/storage" \
    -v "/mgob/tmp:/tmp" \
    -v "/mgob/data:/data" \
    maxisam/mgob \
    -LogLevel=info

Configuration

Define a backup plan (YAML format) for each database you want to back up inside the config dir. The YAML file name is used as the backup plan ID; no white spaces or special characters are allowed.

READ MORE
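
For orientation, a minimal plan sketch assembled from the config examples quoted in the issues below; save it as e.g. config/mongo-test.yml so the plan ID becomes mongo-test. Host, credentials and schedule are placeholders:

scheduler:
  # daily at 02:00 UTC
  cron: "0 2 * * *"
  retention: 14
  timeout: 60
target:
  host: "mongodb"
  port: 27017
  # leave blank to back up all databases
  database: ""
  username: "backup"
  password: "secret"
  # add custom params to mongodump (eg. Auth or SSL support), leave blank if not needed
  # params: "--authenticationDatabase admin"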

On-Demand Operations

MGOB exposes a set of HTTP endpoints for on-demand operations like backup, restore, status, metrics, and version.

READ MORE: On-Demand Operations

Logs

READ MORE: Logs

Metrics

READ MORE: Metrics

Restore

READ MORE: Restore

Special Thanks

  • stefanprodan for the original repository
  • First awesome sponsor!!


mgob's People

Contributors

azhelev, chris-scentregroup, ehlertjd, flaccid, floweb, freyert, h0x91b, harsh-2711, jaben, jussikuosa, kevinsandow, log2, mathewpeterson, maulal, maxisam, paic, priyesh2609, rebrendov, seanson, skbly7, solidnerd, stefanprodan, v0ctor, vissarionas, wonko


mgob's Issues

feature request: "Purge backups which are older than..."

Hello Maxisam,
first: thank you very much for maintaining this repo! (I already used mgob from stefanprodan).
I was searching today for a possibility to purge all backups which are older than 30 days, but sadly I did not find any information about it. Is it already possible? If not, it would be awesome to support the purge command in the configuration yml.
Thank you very much and best regards,
Ferdinand

Error running backup

Hi,

I get an error when the backup is running (stack trace below).

I don't know Go, but I believe the issue is the evaluation of this regex.

The output of running the mongodump command inside the container contains the following lines before the error is thrown:

mongodump --gzip --host *** --port 27017 -u *** -p *** --db minds-q --authenticationDatabase admin
2023-05-26T10:03:36.578+0000    writing minds-q.fs.files to dump/minds-q/fs.files.bson.gz
2023-05-26T10:03:36.579+0000    writing minds-q.fs.chunks to dump/minds-q/fs.chunks.bson.gz
2023-05-26T10:03:36.586+0000    done dumping minds-q.fs.files (1 document)
2023-05-26T10:03:36.595+0000    done dumping minds-q.fs.chunks (1 document)

The collections are MongoDB collections to store files. Maybe the regex can't handle two dots in the collection name?

Help is much appreciated. This tool is really useful; it'd be very nice to use it!

Best

Chris

Stack trace:

2023/05/26 09:59:31 http: panic serving 172.24.0.2:39012: runtime error: slice bounds out of range [-1:]
goroutine 68 [running]:
net/http.(*conn).serve.func1()
        /usr/local/go/src/net/http/server.go:1850 +0xbf
panic({0xc05c20, 0xc0002cc1c8})
        /usr/local/go/src/runtime/panic.go:890 +0x262
github.com/go-chi/chi/middleware.prettyStack.decorateFuncCallLine({}, {0xc000410178, 0x1f}, 0xd9?, 0x6)
        /go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:130 +0x545
github.com/go-chi/chi/middleware.prettyStack.decorateLine({}, {0xc000410178?, 0xc0002c2c00?}, 0x8?, 0x1?)
        /go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:106 +0x15d
github.com/go-chi/chi/middleware.prettyStack.parse({}, {0xc0002ff000, 0xec4, 0xad23bf?}, {0xc05c20, 0xc0002cc198})
        /go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:89 +0x4d5
github.com/go-chi/chi/middleware.PrintPrettyStack({0xc05c20, 0xc0002cc198})
        /go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:46 +0x45
github.com/go-chi/chi/middleware.Recoverer.func1.1()
        /go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:30 +0xdd
panic({0xc05c20, 0xc0002cc198})
        /usr/local/go/src/runtime/panic.go:884 +0x212
github.com/stefanprodan/mgob/pkg/backup.getDumpedDocMap({0xc000282800, 0x3ef})
        /go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:98 +0x245
github.com/stefanprodan/mgob/pkg/backup.dump({{0xc0002cc038, 0x5}, {{0xc0002922b8, 0x7}, {0xc000292290, 0xc}, {0x0, 0x0}, {0xc0002cc0a8, 0x18}, ...}, ...}, ...)
        /go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:70 +0x630
github.com/stefanprodan/mgob/pkg/backup.Run({{0xc0002cc038, 0x5}, {{0xc0002922b8, 0x7}, {0xc000292290, 0xc}, {0x0, 0x0}, {0xc0002cc0a8, 0x18}, ...}, ...}, ...)
        /go/src/github.com/stefanprodan/mgob/pkg/backup/backup.go:21 +0x12d
github.com/stefanprodan/mgob/pkg/api.postBackup({0x7fac3927a9f8, 0xc0002abf00}, 0xc0002c6400)
        /go/src/github.com/stefanprodan/mgob/pkg/api/backup.go:42 +0x428
net/http.HandlerFunc.ServeHTTP(0xb85fe0?, {0x7fac3927a9f8?, 0xc0002abf00?}, 0xc0002921c6?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/go-chi/chi.(*Mux).routeHTTP(0xc000284600, {0x7fac3927a9f8, 0xc0002abf00}, 0xc0002c6400)
        /go/pkg/mod/github.com/go-chi/[email protected]/mux.go:436 +0x1f9
net/http.HandlerFunc.ServeHTTP(0xe19ef8?, {0x7fac3927a9f8?, 0xc0002abf00?}, 0xe11fc0?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/stefanprodan/mgob/pkg/api.configCtx.func1.1({0x7fac3927a9f8, 0xc0002abf00}, 0xc0002c6200)
        /go/src/github.com/stefanprodan/mgob/pkg/api/backup.go:24 +0x2f6
net/http.HandlerFunc.ServeHTTP(0xc000403798?, {0x7fac3927a9f8?, 0xc0002abf00?}, 0xc000403778?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/go-chi/chi.(*Mux).ServeHTTP(0xc000284600, {0x7fac3927a9f8, 0xc0002abf00}, 0xc0002c6200)
        /go/pkg/mod/github.com/go-chi/[email protected]/mux.go:70 +0x371
github.com/go-chi/chi.(*Mux).Mount.func1({0x7fac3927a9f8, 0xc0002abf00}, 0xc0002c6200)
        /go/pkg/mod/github.com/go-chi/[email protected]/mux.go:311 +0x19c
net/http.HandlerFunc.ServeHTTP(0xb85fe0?, {0x7fac3927a9f8?, 0xc0002abf00?}, 0xc0002a0045?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/go-chi/chi.(*Mux).routeHTTP(0xc000284120, {0x7fac3927a9f8, 0xc0002abf00}, 0xc0002c6200)
        /go/pkg/mod/github.com/go-chi/[email protected]/mux.go:436 +0x1f9
net/http.HandlerFunc.ServeHTTP(0xc0002c6100?, {0x7fac3927a9f8?, 0xc0002abf00?}, 0x8?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/go-chi/chi/middleware.RequestLogger.func1.1({0xe19338, 0xc0002d0000}, 0xc0002c6100)
        /go/pkg/mod/github.com/go-chi/[email protected]/middleware/logger.go:57 +0x3aa
net/http.HandlerFunc.ServeHTTP(0x0?, {0xe19338?, 0xc0002d0000?}, 0xf8?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/go-chi/chi/middleware.Recoverer.func1({0xe19338?, 0xc0002d0000?}, 0x129e701?)
        /go/pkg/mod/github.com/go-chi/[email protected]/middleware/recoverer.go:37 +0x83
net/http.HandlerFunc.ServeHTTP(0xe19e50?, {0xe19338?, 0xc0002d0000?}, 0x129e7d0?)
        /usr/local/go/src/net/http/server.go:2109 +0x2f
github.com/go-chi/chi.(*Mux).ServeHTTP(0xc000284120, {0xe19338, 0xc0002d0000}, 0xc0002c6000)
        /go/pkg/mod/github.com/go-chi/[email protected]/mux.go:87 +0x32a
net/http.serverHandler.ServeHTTP({0xc000288d80?}, {0xe19338, 0xc0002d0000}, 0xc0002c6000)
        /usr/local/go/src/net/http/server.go:2947 +0x30c
net/http.(*conn).serve(0xc0002879a0, {0xe19ef8, 0xc000288c90})
        /usr/local/go/src/net/http/server.go:1991 +0x607
created by net/http.(*Server).Serve
        /usr/local/go/src/net/http/server.go:3102 +0x4db

The `/status` endpoint provides incorrect information.

I force a backup via the API:

# curl -XPOST http://localhost:8090/backup/coremedia
{"plan":"coremedia","file":"coremedia-1681481854.gz","duration":"73.335066ms","size":"1.9 kB","timestamp":"2023-04-14T14:17:34.175514639Z"}

The dump was - for me - created correctly:

# ls -l /opt/mgob/storage/coremedia/coremedia-1681481854*
-rw-r--r-- 1 root root 1876 Apr 14 16:17 /opt/mgob/storage/coremedia/coremedia-1681481854.gz
-rw-r--r-- 1 root root 1134 Apr 14 16:17 /opt/mgob/storage/coremedia/coremedia-1681481854.log

But the call to /status returns a 500:

# curl -s http://localhost:8090/status/ | jq
[
  {
    "plan": "coremedia",
    "next_run": "2023-04-14T18:10:00Z",
    "last_run": "2023-04-12T18:10:00.071630282Z",
    "last_run_status": "500",
    "last_run_log": "BACKUP FAILED: after 0 retries, mongodump log : exit status 1"
  }
]

The log file shows no problems:

# cat /opt/mgob/storage/coremedia/coremedia-1681481854.log 
2023-04-14T14:17:34.202+0000    writing admin.system.users to archive '/tmp/coremedia-1681481854.gz'
2023-04-14T14:17:34.206+0000    done dumping admin.system.users (3 documents)
2023-04-14T14:17:34.206+0000    writing admin.system.version to archive '/tmp/coremedia-1681481854.gz'
2023-04-14T14:17:34.209+0000    done dumping admin.system.version (2 documents)
2023-04-14T14:17:34.209+0000    writing cms_internal_models.processes to archive '/tmp/coremedia-1681481854.gz'
2023-04-14T14:17:34.216+0000    writing cms_internal_models.processes_deleted to archive '/tmp/coremedia-1681481854.gz'
2023-04-14T14:17:34.221+0000    writing cms_internal_tasks.counter to archive '/tmp/coremedia-1681481854.gz'
2023-04-14T14:17:34.230+0000    done dumping cms_internal_models.processes (0 documents)
2023-04-14T14:17:34.231+0000    writing cms_internal_tasks.default to archive '/tmp/coremedia-1681481854.gz'
2023-04-14T14:17:34.232+0000    done dumping cms_internal_models.processes_deleted (0 documents)
2023-04-14T14:17:34.233+0000    done dumping cms_internal_tasks.counter (0 documents)
2023-04-14T14:17:34.236+0000    done dumping cms_internal_tasks.default (0 documents)

Docker output also looks good so far:

{"level":"info","msg":"Clean up temp finished Temp folder cleanup finished, `/tmp/coremedia-1681481718.gz` is removed.","plan":"coremedia","time":"2023-04-14T14:15:18Z"}
{"level":"info","msg":"On demand backup finished in 81.955669ms archive coremedia-1681481718.gz size 1.9 kB","plan":"coremedia","time":"2023-04-14T14:15:18Z"}
2023/04/14 14:15:18 "POST http://localhost:8090/backup/coremedia HTTP/1.1" from 172.17.0.1:35980 - 200 140B in 83.319544ms
2023/04/14 14:15:34 "GET http://localhost:8090/status/ HTTP/1.1" from 172.17.0.1:49964 - 200 204B in 78.22µs
{"level":"info","msg":"On demand backup started","plan":"coremedia","time":"2023-04-14T14:17:34Z"}
{"level":"debug","msg":"dump cmd: mongodump --archive=/tmp/coremedia-1681481854.gz --gzip --host mongodb --port 27017 -u \"xxx\" -p xxxx --authenticationDatabase admin ","time":"2023-04-14T14:17:34Z"}
{"archive":"/tmp/coremedia-1681481854.gz","err":null,"level":"info","mlog":"/tmp/coremedia-1681481854.log","msg":"new dump","time":"2023-04-14T14:17:34Z"}
ls: *.gz.encrypted: No such file or directory
{"level":"debug","msg":"apply retention","time":"2023-04-14T14:17:34Z"}
{"level":"info","msg":"Local backup finished Local backup finished `/tmp/coremedia-1681481854.gz` -\u003e `/storage/coremedia/coremedia-1681481854.gz` Duration: 7.381566ms","plan":"coremedia","time":"2023-04-14T14:17:34Z"}
{"level":"info","msg":"Clean up temp finished Temp folder cleanup finished, `/tmp/coremedia-1681481854.gz` is removed.","plan":"coremedia","time":"2023-04-14T14:17:34Z"}
{"level":"info","msg":"On demand backup finished in 73.335066ms archive coremedia-1681481854.gz size 1.9 kB","plan":"coremedia","time":"2023-04-14T14:17:34Z"}
2023/04/14 14:17:34 "POST http://localhost:8090/backup/coremedia HTTP/1.1" from 172.17.0.1:33498 - 200 140B in 73.68587ms
2023/04/14 14:17:40 "GET http://localhost:8090/status/ HTTP/1.1" from 172.17.0.1:33512 - 200 204B in 65.491µs

how to store passwords securely

The helm chart does not support mounting secrets as env, which makes it hard to manage secrets securely:

  • Let env allow the valueFrom option
  • Allow more custom detail in the secret option

This will allow secrets to be pulled cross-namespace using a replicator and then mounted as env, without having to put the secret in the ConfigMap.

Happy to do the PR; what's the process?
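
A hedged sketch of what the request seems to amount to in the chart's values; the env entry and its valueFrom support are exactly the hypothetical pieces the chart would need to add (secret and key names are illustrative):

env:
  - name: BACKUP_PWD                  # hypothetical: needs valueFrom support in the chart
    valueFrom:
      secretKeyRef:
        name: mongodb-backup-creds    # e.g. a secret replicated cross-namespace
        key: password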

Error: unable to install chart

I'm getting this error:

Error: looks like "https://maxisam.github.io/mgob/" is not a valid chart repository or cannot be reached: failed to fetch https://maxisam.github.io/mgob/index.yaml : 404 Not Found

which can be reproduced manually by:

helm repo add maxisam https://maxisam.github.io/mgob/ 
Error: looks like "https://maxisam.github.io/mgob/" is not a valid chart repository or cannot be reached: failed to fetch https://maxisam.github.io/mgob/index.yaml : 404 Not Found

The file was deleted 5 days ago. Please restore it.

Thanks in advance,
Tom

feature request: option to stepdown after backup

Background

I have a [email protected] three-member replica set running on k8s and automated backups using mgob. Its memory limit (400M) is set to the maximum operational amount based on the working set, to save $. This works great until the nightly backup is executed. When a backup completes, the primary's memory spikes and will eventually do one of two things:

  1. terminate with OOMKilled
  2. even worse, succumb to disk paging with huge query times until the memory limit is reached

My efforts to figure out how to limit mongodb memory usage have been fruitless... I've tried changing the WiredTiger cache limits, increasing memory limits, etc. The cluster is maxed to the point where I would have to spin up more nodes ($) to increase memory further.

I'd like to propose an option to step down the primary after a successful dump. This is what I end up doing after seeing the primary getting close to OOMKilled or performance going to hell.

What do you think?

Thank you for taking on the maintenance of this critical piece of infra!

Environment variables injection

Hi,
would it be possible to inject environment variables into config files via Kubernetes configuration?

As an example, Kubernetes StatefulSet with attached secret-password from mongodb-backup-creds:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-backup
  namespace: mongodb
spec:
  serviceName: "mgob"
  replicas: 1
  template:
    spec:
      containers:
        - name: mongodb-backup
          image: maxisam/mgob:2.0.18-all
          env:
            - name: BACKUP_PWD
              valueFrom:
                secretKeyRef:
                  name: mongodb-backup-creds
                  key: password

Kubernetes config-map with BACKUP_PWD injection:

kind: ConfigMap
apiVersion: v1
metadata:
  name: mongodb-backup
  namespace: mongodb
data:
  mongodb-backup.yml: |
    target:
      host: "mongodb-0.mongodb,mongodb-1.mongodb,mongodb-2.mongodb"
      port: 27017
      username: "backup"
      password: "$BACKUP_PWD"
    scheduler:
      cron: "0 2 * * *"
      retention: 14
      timeout: 60

Metrics page doesn't return any information about the actual backups

The /metrics page only returns data about the Go runtime and promhttp, but nothing about the actual backups themselves, meaning mgob_scheduler_backup_total, mgob_scheduler_backup_latency and others are missing. I'm running it in a docker instance using the latest version. Here's the version info:

{"arch":"amd64","cpu_count":"16","go_version":"go1.19.3","goroutines":"7","max_procs":"16","mgob_version":"1.9.13","os":"linux","repository":"github.com/maxisam/mgob"}

Any help is appreciated.

add restore support as alternate mode

Hello @maxisam,

first of all, thank you for your work on this project.

Currently, the "Restore" paragraph inside the README states that you have to exec into a running container and run the mongorestore command with the needed parameters.

It would be nice to support some switch to indicate that we actually want to start the container, restore an existing backup inside /storage/, and then exit the container.

Example

If an archive option exists inside the configuration file, this option will override the default behaviour and run the mongorestore command.

mongorestore.yaml file

scheduler:
  cron: "0 3 * * *"
  retention: 14
  timeout: 60
archive: "/storage/last_mongodb_backup.gz"
target:
  host: "mongodb"
  port: 27017
  database: ""
  username: "root"
  password: "mySecretPassw0rd"
  params: "--authenticationDatabase admin --ssl --sslCAFile=/certs/mongodb/mongodb-ca.pem --sslPEMKeyFile=/certs/mongodb/mongodb-client.pem --sslAllowInvalidCertificates"

This configuration file allows mongorestore to use the following parameters:

  • archive
  • target.host
  • target.port
  • target.database (that is actually skipped because it's empty)
  • target.username
  • target.password
  • target.params

and will be equivalent to connecting to a running container and running the following command:

mongorestore \
--host=mongodb \
-u root --password=mySecretPassw0rd --authenticationDatabase admin \
--ssl --sslCAFile=/certs/mongodb/mongodb-ca.pem --sslPEMKeyFile=/certs/mongodb/mongodb-client.pem --sslAllowInvalidCertificates \
--gzip --archive=/storage/last_backup.gz

Here is the relevant part of the docker-compose.yml for the example

...
  mongorestore:
    hostname: mongorestore
    image: maxisam/mgob:1.10.14-all
    volumes:
      - ./mongodb_backup:/tmp
      - ./mongodb_backup:/data
      - ./mongodb_backup/mongorestore.yaml:/config/mongorestore.yaml:ro
      - ./certs:/certs:ro
      - ./mongodb_backup/backup:/storage
    environment:
      - -LogLevel=info
...

Maybe adding support for a connection string as an option for restore could also be very useful and simplify it.

Thank you in advance

Feature Request: Restore Collections

Hi,
it would be great to restore only some configurable collections to an existing database. If the collection is not available in the target database, it should be created; otherwise it is overridden. Is this possible?

Thanks in advance.

Office365 mail auth error

Office365 login error. With this same configuration I can send email from a Spring Boot application.

time="2023-03-23T18:00:44Z" level=error msg="Notifier failed sending email notification failed: 504 5.7.4 Unrecognized authentication type [FR2P281CA0035.DEUP281.PROD.OUTLOOK.COM 2023-03-23T18:00:39.250Z 08DB2B43918359DA]" plan=xxxxxxxxxxxxx

smtp:
  server: smtp.office365.com
  port: 587
  username: no-reply@xxxxxx
  password: yyyyyyyyyy
  from: no-reply@xxxxxx
  to:
    - [email protected]
    - [email protected]

Upload path changes on 1.10.14

Hello,

We use mgob image 1.9.12-all and tried to upgrade to version 1.10.14-all, using Azure as the storage backend.
With version 1.9, backups were uploaded to the storage/ path, but after upgrading to 1.10, backups end up in tmp/.

Is this normal behaviour?

Thanks

"cron: panic running job: runtime error: index out of range [2] with length 0" when running planned config

Hello!

Thanks for the awesome project! Unfortunately I'm getting a strange issue when running my plan in mgob. Below is the full stack trace I am getting:

2022-12-13T15:30:00.678233506Z 2022/12/13 15:30:00 cron: panic running job: runtime error: index out of range [2] with length 0
2022-12-13T15:30:00.678252737Z goroutine 55 [running]:
2022-12-13T15:30:00.678256416Z github.com/robfig/cron.(*Cron).runWithRecovery.func1()
2022-12-13T15:30:00.678259563Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:161 +0x69
2022-12-13T15:30:00.678262770Z panic({0xc05be0, 0xc000298210})
2022-12-13T15:30:00.678265786Z 	/usr/local/go/src/runtime/panic.go:884 +0x212
2022-12-13T15:30:00.678268855Z github.com/stefanprodan/mgob/pkg/backup.getDumpedDocMap({0xc000480000, 0x2edf})
2022-12-13T15:30:00.678271933Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:63 +0x245
2022-12-13T15:30:00.678275185Z github.com/stefanprodan/mgob/pkg/backup.dump({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:30:00.678278469Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:35 +0x630
2022-12-13T15:30:00.678281517Z github.com/stefanprodan/mgob/pkg/backup.Run({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:30:00.678293129Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/backup.go:22 +0x1f8
2022-12-13T15:30:00.678296566Z github.com/stefanprodan/mgob/pkg/scheduler.backupJob.Run({{0xc000298050, 0x6}, {{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, ...}, ...}, ...})
2022-12-13T15:30:00.678299714Z 	/go/src/github.com/stefanprodan/mgob/pkg/scheduler/scheduler.go:92 +0x13b
2022-12-13T15:30:00.678302752Z github.com/robfig/cron.(*Cron).runWithRecovery(0xc0000607d0?, {0xe15420?, 0xc0002d0000?})
2022-12-13T15:30:00.678305785Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:165 +0x62
2022-12-13T15:30:00.678308848Z created by github.com/robfig/cron.(*Cron).run
2022-12-13T15:30:00.678311862Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:199 +0x7b2
2022-12-13T15:35:00.047428329Z time="2022-12-13T15:35:00Z" level=info msg="Backup started" plan=config
2022-12-13T15:35:00.599743667Z 2022/12/13 15:35:00 cron: panic running job: runtime error: index out of range [2] with length 0
2022-12-13T15:35:00.599766494Z goroutine 59 [running]:
2022-12-13T15:35:00.599770908Z github.com/robfig/cron.(*Cron).runWithRecovery.func1()
2022-12-13T15:35:00.599774493Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:161 +0x69
2022-12-13T15:35:00.599777971Z panic({0xc05be0, 0xc000298288})
2022-12-13T15:35:00.599781180Z 	/usr/local/go/src/runtime/panic.go:884 +0x212
2022-12-13T15:35:00.599784459Z github.com/stefanprodan/mgob/pkg/backup.getDumpedDocMap({0xc000483000, 0x2edf})
2022-12-13T15:35:00.599787726Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:63 +0x245
2022-12-13T15:35:00.599791004Z github.com/stefanprodan/mgob/pkg/backup.dump({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:35:00.599794399Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:35 +0x630
2022-12-13T15:35:00.599797609Z github.com/stefanprodan/mgob/pkg/backup.Run({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:35:00.599801026Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/backup.go:22 +0x1f8
2022-12-13T15:35:00.599816853Z github.com/stefanprodan/mgob/pkg/scheduler.backupJob.Run({{0xc000298050, 0x6}, {{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, ...}, ...}, ...})
2022-12-13T15:35:00.599819484Z 	/go/src/github.com/stefanprodan/mgob/pkg/scheduler/scheduler.go:92 +0x13b
2022-12-13T15:35:00.599821970Z github.com/robfig/cron.(*Cron).runWithRecovery(0xc000064fd0?, {0xe15420?, 0xc0002d0000?})
2022-12-13T15:35:00.599824489Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:165 +0x62
2022-12-13T15:35:00.599826958Z created by github.com/robfig/cron.(*Cron).run
2022-12-13T15:35:00.599829386Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:199 +0x7b2
2022-12-13T15:40:00.067217384Z time="2022-12-13T15:40:00Z" level=info msg="Backup started" plan=config
2022-12-13T15:40:00.653024694Z 2022/12/13 15:40:00 cron: panic running job: runtime error: index out of range [2] with length 0
2022-12-13T15:40:00.653054720Z goroutine 63 [running]:
2022-12-13T15:40:00.653061929Z github.com/robfig/cron.(*Cron).runWithRecovery.func1()
2022-12-13T15:40:00.653066290Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:161 +0x69
2022-12-13T15:40:00.653070674Z panic({0xc05be0, 0xc00003a708})
2022-12-13T15:40:00.653074772Z 	/usr/local/go/src/runtime/panic.go:884 +0x212
2022-12-13T15:40:00.653079045Z github.com/stefanprodan/mgob/pkg/backup.getDumpedDocMap({0xc000396000, 0x2edf})
2022-12-13T15:40:00.653083237Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:63 +0x245
2022-12-13T15:40:00.653087520Z github.com/stefanprodan/mgob/pkg/backup.dump({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:40:00.653091872Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:35 +0x630
2022-12-13T15:40:00.653096064Z github.com/stefanprodan/mgob/pkg/backup.Run({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:40:00.653100411Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/backup.go:22 +0x1f8
2022-12-13T15:40:00.653104654Z github.com/stefanprodan/mgob/pkg/scheduler.backupJob.Run({{0xc000298050, 0x6}, {{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, ...}, ...}, ...})
2022-12-13T15:40:00.653108971Z 	/go/src/github.com/stefanprodan/mgob/pkg/scheduler/scheduler.go:92 +0x13b
2022-12-13T15:40:00.653113197Z github.com/robfig/cron.(*Cron).runWithRecovery(0xc000064fd0?, {0xe15420?, 0xc0002d0000?})
2022-12-13T15:40:00.653117434Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:165 +0x62
2022-12-13T15:40:00.653121624Z created by github.com/robfig/cron.(*Cron).run
2022-12-13T15:40:00.653125759Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:199 +0x7b2
2022-12-13T15:45:00.068879257Z time="2022-12-13T15:45:00Z" level=info msg="Backup started" plan=config
2022-12-13T15:45:00.647122041Z 2022/12/13 15:45:00 cron: panic running job: runtime error: index out of range [2] with length 0
2022-12-13T15:45:00.647141503Z goroutine 36 [running]:
2022-12-13T15:45:00.647145794Z github.com/robfig/cron.(*Cron).runWithRecovery.func1()
2022-12-13T15:45:00.647149450Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:161 +0x69
2022-12-13T15:45:00.647153686Z panic({0xc05be0, 0xc00003a780})
2022-12-13T15:45:00.647158551Z 	/usr/local/go/src/runtime/panic.go:884 +0x212
2022-12-13T15:45:00.647163503Z github.com/stefanprodan/mgob/pkg/backup.getDumpedDocMap({0xc000399000, 0x2edf})
2022-12-13T15:45:00.647168385Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:63 +0x245
2022-12-13T15:45:00.647172055Z github.com/stefanprodan/mgob/pkg/backup.dump({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:45:00.647184427Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/local.go:35 +0x630
2022-12-13T15:45:00.647188333Z github.com/stefanprodan/mgob/pkg/backup.Run({{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, {0xc00028e140, 0xa}, ...}, ...}, ...)
2022-12-13T15:45:00.647191942Z 	/go/src/github.com/stefanprodan/mgob/pkg/backup/backup.go:22 +0x1f8
2022-12-13T15:45:00.647195428Z github.com/stefanprodan/mgob/pkg/scheduler.backupJob.Run({{0xc000298050, 0x6}, {{0xc000298050, 0x6}, {{0x0, 0x0}, {0xc00028e100, 0xd}, {0x0, 0x0}, ...}, ...}, ...})
2022-12-13T15:45:00.647199061Z 	/go/src/github.com/stefanprodan/mgob/pkg/scheduler/scheduler.go:92 +0x13b
2022-12-13T15:45:00.647202831Z github.com/robfig/cron.(*Cron).runWithRecovery(0xc0000617d0?, {0xe15420?, 0xc0002d0000?})
2022-12-13T15:45:00.647206330Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:165 +0x62
2022-12-13T15:45:00.647209779Z created by github.com/robfig/cron.(*Cron).run
2022-12-13T15:45:00.647213229Z 	/go/pkg/mod/github.com/robfig/[email protected]/cron.go:199 +0x7b2

In case it helps, here is my config (with some data redacted):

scheduler:
  cron: "*/5 * * * *"
  retention: 12
  timeout: 15
target:
  host: "192.168.86.68"
  port: 27017
  database: ""
  username: "[REDACTED"
  password: "[REDACTED]"
  # add custom params to mongodump (eg. Auth or SSL support), leave blank if not needed
  # params: "--authenticationDatabase admin"
  # disable gzip compression
  noGzip: false
retry:
  # number of retries
  attempts: 3
  # backoff factor  * (2 ^ attemptCount) seconds
  backoffFactor: 60
validation:
  database:
    host: "srv-captain--mongodb"
    port: 27017
    username: "[REDACTED]"
    password: "[REDACTED]"
    # params: "--authenticationDatabase admin"
    noGzip: false
    database: test_restore # database name for restore

I'm not sure what's wrong here (or what's up with the stack trace) so asking for some help here! Thanks! 🙌

Configuration via environment variables

The helm chart didn't fit my exact needs in a recent project, so I ended up deploying mgob with my own Kubernetes manifests instead.

One thing I was missing was the ability to configure the image via environment variables. I ended up doing a custom setup, like this - with secrets for username and password pulled from the secrets automatically created by the mongodb community operator:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb-backup
spec:
  serviceName: mongodb-backup
  replicas: 1
  selector:
    matchLabels:
      app: mongodb-backup
  template:
    metadata:
      labels:
        app: mongodb-backup
    spec:
      containers:
      - name: mongodb-backup
        image: maxisam/mgob:1.9.15-all
        env:
        - name: DB1_HOST
          value: mongodb-svc
        - name: DB1_PORT
          value: "27017"
        - name: DB1_DATABASE
          value: db1
        - name: DB1_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-db1-user
              key: username
        - name: DB1_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-db1-user
              key: password
        - name: DB2_HOST
          value: mongodb-svc
        - name: DB2_PORT
          value: "27017"
        - name: DB2_DATABASE
          value: db2
        - name: DB2_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-db2-user
              key: username
        - name: DB2_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-db2-user
              key: password
        command:
        - /bin/bash
        - -c
        args:
          - |
            cat /config.yml.template |
            sed s/HOST/$DB1_HOST/ |
            sed s/PORT/$DB1_PORT/ |
            sed s/USERNAME/$DB1_USERNAME/ |
            sed s/PASSWORD/$DB1_PASSWORD/ |
            sed s/DATABASE/$DB1_DATABASE/ > /config/api.yml &&
            cat /config.yml.template |
            sed s/HOST/$DB2_HOST/ |
            sed s/PORT/$DB2_PORT/ |
            sed s/USERNAME/$DB2_USERNAME/ |
            sed s/PASSWORD/$DB2_PASSWORD/ |
            sed s/DATABASE/$DB2_DATABASE/ > /config/crawler.yml &&
            ./mgob
        imagePullPolicy: Always
        ports:
        - containerPort: 8090
        volumeMounts:
        # - ...
        - name: mongodb-backup-configmap
          mountPath: /config.yml.template
          subPath: config.yml
      volumes:
      - name: mongodb-backup-configmap
        configMap:
          name: mongodb-backup-configmap
          items:
          - key: config.yml
            path: config.yml
  volumeClaimTemplates:
    # ...
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    role: backup
  name: mongodb-backup-configmap
data:
  config.yml: |
    target:
      host: HOST
      port: PORT
      username: USERNAME
      password: PASSWORD
      database: DATABASE
    scheduler:
      cron: "0 1 * * *"
      retention: 5
      timeout: 60

The complex setup of running sed on a config.yml template for each file is clearly not optimal. I was wondering if you might consider better support for configuration of plans through environment variables for the docker image itself?

I'm not sure what the optimal format would be, but am open to working on it together if you want input/think this might be a valuable feature :-)
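
For reference, the enhancement list at the top of this page now mentions env-based overrides using the PLAN_ID__KEY_PROPERTY syntax. A hedged sketch of how that could replace the sed templating above, assuming the plan file is config/api.yml and that target.host and target.password map to TARGET_HOST and TARGET_PASSWORD under the same scheme as the documented SMTP_SERVER example (that mapping is an assumption):

env:
  - name: api__TARGET_HOST            # assumption: overrides target.host for the "api" plan
    value: mongodb-svc
  - name: api__TARGET_PASSWORD        # assumption: overrides target.password for the "api" plan
    valueFrom:
      secretKeyRef:
        name: mongodb-db1-user
        key: password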

Azure cli no more in image

Hello
thx for this fork !

It seems the Azure CLI is no longer in your mgob image; after the dump finishes I get this error:

{"error":"Azure uploading /storage/xxxxx-1666269967.gz to mgob failed /bin/sh: az: not found : exit status 127"}

Configure target with connection string URI

In the README examples for configuring a target mongod, the format is something like this:

target:
  # mongod IP or host name
  host: "172.18.7.21"
  # mongodb port
  port: 27017
  # mongodb database name, leave blank to backup all databases
  database: "test"
  # leave blank if auth is not enabled
  username: "admin"
  password: "secret"

The Kubernetes MongoDB Community operator automatically creates secrets with a connection string, which means it would be nice to be able to set that directly in the config:

target:
  connectionString: mongodb://admin:[email protected]/test

See the mongodb community operator docs here: https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md#deploy-a-replica-set

Is this something that could be considered as an additional option?

error ls: *.gz.encrypted: No such file or directory

mgob-config

target:
  host: "db-mongodb.default.svc.cluster.local"
  port: 27017
  # add custom params to mongodump (eg. Auth or SSL support), leave blank if not needed
  params: ""
scheduler:
  # Daily at 5am UCT https://crontab.guru/#0_5_*_*_*
  cron: "0 5 */1 * *"
  retention: 1
  timeout: 60
gcloud:
  bucket: "myapp-test_mongodbdump"
  keyFilePath: /etc/mgob/service-account.json
slack:
  url: [redacted]
  channel: myapp-devops
  username: mgob
  # 'true' to notify only on failures 
  warnOnly: true
time="2023-03-25T19:11:20Z" level=info msg="new dump" archive=/tmp/myapp-test-1679771441.gz err="<nil>" mlog=/tmp/myapp-test-1679771441.log
ls: *.gz.encrypted: No such file or directory
time="2023-03-25T19:11:22Z" level=info msg="Local backup finished Local backup finished `/tmp/myapp-test-1679771441.gz` -> `/storage/myapp-test/myapp-test-1679771441.gz` Duration: 1.723790023s" plan=myapp-test

Cloned and built using https://github.com/maxisam/mgob/tree/366207a0954fbfdf684cf0726c8b14e71f94ee08

I verified that the gz backup file and log file were created in /storage/myapp-test

I am migrating from stefanprodan on GCP k8s... please let me know if any other repro steps are necessary.

feature request: skip local storage for backups

It would be great to have an option to back up straight to cloud storage, as otherwise, with larger datasets, the local storage requirements get quite big. Alternatively, an option to not keep local backups would also be fine (currently, setting retention: 0 does nothing).
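
For reference, the enhancement list at the top of this page mentions an option to skip the local copy when retention is set to 0 (#42). A hedged sketch of a plan relying on that behaviour, with the upload block elided:

scheduler:
  cron: "0 5 * * *"
  retention: 0        # with the fork's skip-local-backup option (#42), nothing is kept locally
  timeout: 60
# plus a cloud upload block (e.g. the gcloud block shown in the "error ls: *.gz.encrypted" issue above)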

Error sending notification email

I have a problem with sending notifications. Right after running the backup I always come across the error below:

mgob-backup-manager | time="2023-06-09T07:23:53Z" level=info msg="Backup finished in 4h23m53.110717758s archive mydatabase-1686279600.gz size 2.1 GB" plan=mydatabase
mgob-backup-manager | time="2023-06-09T07:28:13Z" level=error msg="Notifier failed sending email notification failed: dial tcp 191.6.216.100:587: connect: connection timed out" plan=mydatabase
mgob-backup-manager | time="2023-06-09T07:28:13Z" level=info msg="Next run at 2023-06-10 03:00:00 +0000 UTC" plan=mydatabase

Detail: Before making the settings for production, I created an example for validation and testing. During the tests the database had almost no data and right after the backup I always received the email smoothly.

What could be happening?

Issues with latest helm chart 1.8.3

This is a follow-up to #86.

No past versions anymore

As already mentioned in #86, the latest index.yaml only contains the current version, no past ones. This breaks backwards compatibility with existing deployments in production environments where versions are pinned to specific tested versions. You should include past versions again, as you did with earlier versions of the chart.

versioning scheme inconsistent with git version tags

The current version of mgob is 1.12.17, and there's also a matching git tag. However, if I specify this version in my values.yaml as:

image:
  tag: "1.12.17"

then the helm install fails with this error:

Failed to pull image "docker.io/maxisam/mgob:1.12.17": rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/maxisam/mgob:1.12.17": failed to resolve reference "docker.io/maxisam/mgob:1.12.17": docker.io/maxisam/mgob:1.12.17: not found

It finally worked with 1.12.17-all, but IMHO you should provide an image tagged just 1.12.17 as well.

Thanks a lot, in any case, for the work you put into mgob and the helm chart; this is much appreciated!

best regards,
Tom
