torodb / stampede

The ToroDB solution to provide better analytics on top of MongoDB and make it easier to migrate from MongoDB to SQL

Home Page: https://www.torodb.com/stampede/

License: GNU Affero General Public License v3.0

Java 96.92% Shell 3.08%
nosql sql nosql-database analytics sql-database

stampede's Introduction

ToroDB Stampede

Transform your NoSQL data from a MongoDB replica set into a relational database in PostgreSQL.

Other solutions can store JSON documents in a relational table using PostgreSQL's JSON support, but that does not solve the real problem of how to actually use that data. ToroDB Stampede replicates the document structure into separate relational tables and stores the document data as rows (tuples) in those tables.
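For illustration, a document with a nested subdocument ends up spread across a root table and a child table. The layout below is only a sketch with made-up table and column names; it is not the exact set of identifiers Stampede generates:

    -- Source document in MongoDB (hypothetical):
    --   { "_id": ..., "name": "USS Enterprise", "detail": { "active": true, "engines": 99 } }
    -- Sketch of the kind of relational layout produced for it:
    CREATE TABLE ships (
        did  integer PRIMARY KEY,  -- internal document id
        name varchar,
        _id  bytea
    );
    CREATE TABLE ships_detail (
        did     integer REFERENCES ships (did),  -- links the subdocument to its root document
        active  boolean,
        engines integer
    );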

Installation

Because different external systems like MongoDB and PostgreSQL are involved, the installation requires some preliminary steps. Take a look at our quickstart in the documentation.

Usage example

MongoDB is a great tool, but sooner or later some kind of business intelligence or complex aggregated query is required. At that point MongoDB is not as powerful, and ToroDB Stampede was born to solve that problem (see our post about it).

The kind of replication done by ToroDB Stampede allows aggregated queries to be executed in a relational backend (PostgreSQL) with a noticeable performance improvement.
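For instance, against the hypothetical layout sketched above, an analytical query becomes plain SQL executed by PostgreSQL (table and column names are again illustrative, not the identifiers Stampede actually generates):

    -- Number of ships and average engine count per activity status,
    -- computed directly in the relational backend
    SELECT d.active, count(*) AS ships, avg(d.engines) AS avg_engines
    FROM ships s
    JOIN ships_detail d ON d.did = s.did
    GROUP BY d.active;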

A deeper explanation is available in our how to use section in the documentation.

Development setup

As noted in the installation section, the external system requirements make it hard to explain briefly here how to set up the development environment. To prepare your development environment, take a look at our documentation.

Release History

  • 1.0.0
    • Released on October 24th 2018
  • 1.0.0-beta3
    • Released on June 30th 2017
  • 1.0.0-beta2
    • Released on April 6th 2017
  • 1.0.0-beta1
    • Released on December 30th 2016

Meta

ToroDB – @nosqlonsql – [email protected]

Distributed under the GNU AGPL v3 license. See LICENSE for more information.

stampede's People

Contributors

adescoms, ahachete, aymandf, dovaleac, ergo70, fatalmind, germandz, gortiz, maxamel, sergio-alonso, teoincontatto


stampede's Issues

How does it work?

As already asked by various people on Hacker News: "how is the transformation designed, to go from a structured document to a flat set of tables, akin to what an object-relational mapper would do?"

I could of course dig myself into the source. On the other hand, you could add a section to the readme explaining it yourself.

did sequence is not reset after dropping a collection

If you drop a collection and then insert a new document in this collection, the did of the new document is the next available did from the previous collection "instance", and is not reset to 0.
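As a point of comparison only (this is not necessarily how Stampede allocates dids internally; other logs on this page show ids being reserved through reserve_doc_ids), plain PostgreSQL sequences show the same behaviour: the counter keeps its value when the table that uses it is dropped and recreated, unless it is explicitly restarted:

    CREATE SEQUENCE demo_did_seq;
    SELECT nextval('demo_did_seq');  -- 1
    SELECT nextval('demo_did_seq');  -- 2
    -- Dropping and recreating the table that consumes the ids does not touch the sequence,
    -- so the next value is 3, not 1, unless the sequence is reset explicitly:
    ALTER SEQUENCE demo_did_seq RESTART WITH 1;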

Keyfile Replica Auth

You folks seem to work well with X.509, however, there isn't any love for keyfile auth. Any chance that'll change?

Unable to find docs

Hi,

I can't find the docs anywhere, and I'm getting this exception:

Exception in thread "main" java.lang.IllegalArgumentException /protocol/mongo/replication: instance type (null)

When I tried to change it from null to an empty array I got another exception.
I don't want to use replication yet; where can I see some docs?

Be able to prioritize replication nodes

At the moment, ToroDB Stampede chooses the node to replicate from based on the ping and replication state of each one. But, as explained on the devel mailing list, this can be a point of failure if Stampede starts up while that sync source is down, and the user may want to prioritize some of the nodes. For example, it can be interesting to prioritize secondary nodes to reduce the pressure on the primary node.

Procedures created and accessed in different schemas

In a clean database, when ToroDB starts, it creates its procedures in the torodb schema,

[screenshot]

but when it instantiates a FirstFreeDocId class, the schema used is PUBLIC:

    public FirstFreeDocId() {
        super("first_free_doc_id", com.torodb.torod.db.postgresql.meta.PublicSchema.PUBLIC, org.jooq.impl.SQLDataType.INTEGER);

        setReturnParameter(RETURN_VALUE);
        addInParameter(COL_SCHEMA);
    }

The log error is:

Caused by: java.lang.RuntimeException: org.jooq.exception.DataAccessException: SQL [{ ? = call "public"."reserve_doc_ids"(?, ?) }]; ERROR: no existe la función public.reserve_doc_ids(character varying, integer)
  Hint: Ninguna función coincide en el nombre y tipos de argumentos. Puede ser necesario agregar conversión explícita de tipos.
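A quick way to check which schema actually holds the procedure (a plain PostgreSQL catalog query, not something ToroDB provides) is:

    -- Lists every schema that defines a function named reserve_doc_ids
    SELECT n.nspname AS schema, p.proname AS function
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE p.proname = 'reserve_doc_ids';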

Have an issue using the `docker-compose` distribution of Stampede

Hello!

I am trying to run your docker-compose distribution, but get the following in the terminal:

mongodb_1          | 2017-05-15T05:44:21.280+0000 I REPL     [rsSync] transition to primary complete; database writes are now permitted
postgres_1         | FATAL:  role "torodb" does not exist
torodb-stampede_1  | 
torodb-stampede_1  | Connection to PostgreSQL Fatal error while ToroDB was starting: Was passed main parameter 'root' but no main parameter was defined:Fatal error while ToroDB was starting: Was passed main parameter 'root' but no main parameter was defined database Fatal error while ToroDB was starting: Was passed main parameter 'root' but no main parameter was defined with user postgres has failed!
torodb-stampede_1  | 
torodb-stampede_1  | Please, check PostgreSQL is running and, if connecting with TCP, the password for user postgres is correctly configured in /root/.pgpass
torodb-stampede_1  | Remember to set file permission correctly to 0600:
torodb-stampede_1  | chmod 0600 /root/.pgpass
torodb-stampede_1  | 
torodb-stampede_1  | To specify a user different than postgres to setup ToroDB Stampede specify it with environment variable ADMIN_USER:
torodb-stampede_1  | export ADMIN_USER=<PostgreSQL's administrator user name>
wiki_torodb-stampede_1 exited with code 1

As a consequence, I cannot connect to Postgres using the torodb user and the torod database.
All I did was:

  1. downloaded docker-compose.yml (as described in https://www.torodb.com/stampede/docs/1.0.0-beta2/installation/docker/)
  2. executed docker-compose up
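The FATAL line from the postgres container indicates the torodb role was never created. Assuming a stock PostgreSQL instance and the names the reporter tries to connect with, one manual workaround is to create the role and database by hand (the password below is a placeholder); the ADMIN_USER hint printed by the container is the other route suggested by the output itself:

    -- Run as the PostgreSQL superuser, e.g. via psql -U postgres
    CREATE USER torodb WITH PASSWORD 'changeme';
    CREATE DATABASE torod OWNER torodb;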

Issues with mongorestore while evaluating ToroDB

I am running the latest ToroDB against Postgres 9.4.

I am trying to use mongorestore to reimport some project data through ToroDB.

No success with mongorestore 2.6:

connected to: 127.0.0.1
2015-04-27T18:20:10.812+0200 monitoring/sw-duisburg2.einkaeufe.bson
2015-04-27T18:20:10.812+0200 going into namespace [monitoring.sw-duisburg2.einkaeufe]
6 objects found
2015-04-27T18:20:10.814+0200 Creating index: { key: { _id: 1 }, ns: "monitoring.sw-duisburg2.einkaeufe", name: "id" }
2015-04-27T18:20:11.064+0200 Assertion: 16441:Error calling getLastError: EOO
2015-04-27T18:20:11.068+0200 0x11f6f21 0x119a3e9 0x117ef96 0x117f4ec 0x7702fb 0x774755 0x778957 0x778121 0x77df5a 0x116fb6d 0x1172d57 0x11737f2 0x7fd256be4d5d 0x7639c9
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN5mongo15printStackTraceERSo+0x21) [0x11f6f21]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN5mongo10logContextEPKc+0x159) [0x119a3e9]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN5mongo11msgassertedEiPKc+0xe6) [0x117ef96]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore() [0x117f4ec]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN7Restore11createIndexEN5mongo7BSONObjEb+0x107b) [0x7702fb]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN7Restore22processFileAndMetadataERKN5boost11filesystem34pathERKSs+0x1b45) [0x774755]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN7Restore9drillDownEN5boost11filesystem34pathEbbbb+0xe47) [0x778957]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN7Restore9drillDownEN5boost11filesystem34pathEbbbb+0x611) [0x778121]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN7Restore5doRunEv+0x55a) [0x77df5a]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(_ZN5mongo8BSONTool3runEv+0x10d) [0x116fb6d]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(ZN5mongo4Tool4mainEiPPcS2+0xb7) [0x1172d57]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore(main+0x42) [0x11737f2]
/lib64/libc.so.6(__libc_start_main+0xfd) [0x7fd256be4d5d]
/opt/mongodb-linux-x86_64-2.6.7/bin/mongorestore() [0x7639c9]

No errors on the ToroDB side (running with --debug)

Import with mongorestore 3.0:

[ajung@dev1 vik.monitoring_buildout]$ /opt/mongodb-linux-x86_64-3.0.0/bin/mongorestore -d monitoring monitoring
2015-04-27T18:21:21.914+0200 Failed: error connecting to db server: no reachable servers

On the ToroDB side:

56494 [nioEventLoopGroup-3-8] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=53693, requestId=3, database='admin', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "ismaster" : 1}, returnFieldsSelector=null}
256995 [nioEventLoopGroup-3-1] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=53694, requestId=1, database='admin', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "getnonce" : 1}, returnFieldsSelector=null}
256996 [nioEventLoopGroup-3-1] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=53694, requestId=3, database='admin', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "ismaster" : 1}, returnFieldsSelector=null}
257998 [nioEventLoopGroup-3-2] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=53695, requestId=1, database='admin', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "getnonce" : 1}, returnFieldsSelector=null}
257999 [nioEventLoopGroup-3-2] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=53695, requestId=3, database='admin', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "ping" : 1}, returnFieldsSelector=null}
257999 [nioEventLoopGroup-3-3] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=53696, requestId=1, database='admin', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "getnonce" : 1}, returnFieldsSelector=null}
258000 [nioEventLoopGroup-3-3] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=53696, requestId=3, database='admin', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "ismaster" : 1}, returnFieldsSelector=null}
258001 [nioEventLoopGroup-3-3] ERROR c.e.m.m.RequestMessageObjectHandler - Error while processing request
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method) ~[na:1.7.0_25]
at sun.nio.ch.SocketDispatcher.read(Unknown Source) ~[na:1.7.0_25]
at sun.nio.ch.IOUtil.readIntoNativeBuffer(Unknown Source) ~[na:1.7.0_25]
at sun.nio.ch.IOUtil.read(Unknown Source) ~[na:1.7.0_25]
at sun.nio.ch.SocketChannelImpl.read(Unknown Source) ~[na:1.7.0_25]
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:447) ~[torodb.jar:na]
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881) ~[torodb.jar:na]
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:225) ~[torodb.jar:na]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119) ~[torodb.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) ~[torodb.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) ~[torodb.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) ~[torodb.jar:na]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) ~[torodb.jar:na]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[torodb.jar:na]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[torodb.jar:na]
at java.lang.Thread.run(Unknown Source) ~[na:1.7.0_25]

ToroDB Stampede gives no output

On an Ubuntu 16.04 server running Mongo 3.4.18 and PostgreSQL 10.6, with all the steps of the quickstart guide successfully completed, running the torodb-stampede --ask-for-password command inside the torodb-stampede-1.0.0/bin folder gives no output. Any suggestions on this?

Compatibility with old versions?

I know you support MongoDB version 3.2 only, but is there a way to use Stampede with a MongoDB 2.4.x version (even a hacky one)? I'm testing with this setup:

  • mongo 2.4.9
  • postgresql 9.6.2
  • stampede 1.0.0-beta1

And I'm getting this error:

2017-02-10T12:15:53.777 INFO 'StampedeService STARTING' c.t.s.StampedeService Starting up ToroDB Stampede
2017-02-10T12:15:53.804 INFO 'StampedeService STARTING' c.t.b.p.PostgreSqlDbBackend Configured PostgreSQL backend at localhost:5432
2017-02-10T12:15:54.845 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool session with size 28 and level TRANSACTION_REPEATABLE_READ
2017-02-10T12:15:54.978 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool system with size 1 and level TRANSACTION_REPEATABLE_READ
2017-02-10T12:15:55.019 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool cursors with size 1 and level TRANSACTION_REPEATABLE_READ
2017-02-10T12:15:56.518 INFO 'BackendBundleImpl STARTING' c.t.b.m.AbstractSchemaUpdater Schema 'torodb' not found. Creating it...
2017-02-10T12:15:56.649 INFO 'BackendBundleImpl STARTING' c.t.b.m.AbstractSchemaUpdater Schema 'torodb' created
2017-02-10T12:15:57.092 INFO 'StampedeService STARTING' c.t.s.StampedeService Database is not consistent. Cleaning it up
2017-02-10T12:15:57.326 INFO 'StampedeService STARTING' c.t.s.StampedeService Replicating from seeds: localhost:27017
2017-02-10T12:15:58.290 INFO 'MongodbReplBundle STARTING' c.t.m.r.MongodbReplBundle Starting replication service
2017-02-10T12:15:58.670 WARN 'topology-network-0' c.t.m.r.t.TopologyHeartbeatHandler Heartbeat start failed (sync source: localhost:27017): com.mongodb.MongoCommandException: Command failed with error -1: 'no such cmd: replSetGetConfig' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "no such cmd: replSetGetConfig", "bad cmd" : { "replSetGetConfig" : 1.0 } }
2017-02-10T12:15:59.682 WARN 'topology-network-0' c.t.m.r.t.TopologyHeartbeatHandler Heartbeat start failed (sync source: localhost:27017): com.mongodb.MongoCommandException: Command failed with error -1: 'no such cmd: replSetGetConfig' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "no such cmd: replSetGetConfig", "bad cmd" : { "replSetGetConfig" : 1.0 } }
2017-02-10T12:16:00.689 WARN 'topology-network-0' c.t.m.r.t.TopologyHeartbeatHandler Heartbeat start failed (sync source: localhost:27017): com.mongodb.MongoCommandException: Command failed with error -1: 'no such cmd: replSetGetConfig' on server localhost:27017. The full response is { "ok" : 0.0, "errmsg" : "no such cmd: replSetGetConfig", "bad cmd" : { "replSetGetConfig" : 1.0 } }

Maybe a replica set with two members, where member 0 runs mongo 2.4.9 as master and member 1 runs mongo 3.2 as slave, and connecting Stampede to member 1? (Keeping in mind the risk of doing that.)

Oplog replication stream finished exceptionally: null

We've started to experience the following error after a few hours of use. Any help would be much appreciated...

2018-03-05T09:06:26.952 INFO LIFECYCLE - Starting up ToroDB Stampede
2018-03-05T09:06:27.156 INFO BACKEND - Configured PostgreSQL backend at localhost:5432
2018-03-05T09:06:28.113 INFO BACKEND - Created pool session with size 28 and level TRANSACTION_REPEATABLE_READ
2018-03-05T09:06:28.182 INFO BACKEND - Created pool system with size 1 and level TRANSACTION_REPEATABLE_READ
2018-03-05T09:06:28.200 INFO BACKEND - Created pool cursors with size 1 and level TRANSACTION_REPEATABLE_READ
2018-03-05T09:06:29.453 INFO BACKEND - Schema 'torodb' found. Checking it...
2018-03-05T09:06:29.644 INFO BACKEND - Schema 'torodb' checked
2018-03-05T09:06:29.651 INFO BACKEND - Database metadata has been validated
2018-03-05T09:06:29.867 INFO LIFECYCLE - All replication shards are consistent
2018-03-05T09:07:26.127 INFO LIFECYCLE - Starting replication from replica set named rs0
2018-03-05T09:07:27.429 INFO REPL - Starting replication service
2018-03-05T09:07:29.118 INFO REPL - Waiting for 2 pings from other members before syncing
2018-03-05T09:07:29.143 INFO REPL - Member mongodb-warehouse-server-1:27017 is now in state RS_PRIMARY
2018-03-05T09:07:30.119 INFO REPL - Waiting for 1 pings from other members before syncing
2018-03-05T09:07:31.120 INFO REPL - Waiting for 1 pings from other members before syncing
2018-03-05T09:07:32.128 INFO REPL - syncing from: mongodb-warehouse-server-1:27017
2018-03-05T09:07:32.133 INFO REPL - Topology service started
2018-03-05T09:07:32.250 INFO REPL - Database is consistent.
2018-03-05T09:07:32.252 INFO REPL - Replication service started
2018-03-05T09:07:32.252 INFO LIFECYCLE - ToroDB Stampede is now running
2018-03-05T09:07:32.254 INFO REPL - Starting SECONDARY mode
2018-03-05T09:07:32.486 INFO REPL - Reading from mongodb-warehouse-server-1:27017
2018-03-05T09:07:37.320 WARN REPL - Oplog replication stream finished exceptionally: null
2018-03-05T09:07:37.387 ERROR REPL - Catched an error on the replication layer. Escalating it
2018-03-05T09:07:37.387 ERROR LIFECYCLE - Error reported by replication supervisor. Stopping ToroDB Stampede
2018-03-05T09:07:37.414 INFO LIFECYCLE - Shutting down ToroDB Stampede
2018-03-05T09:07:37.442 INFO REPL - Shutting down replication service
2018-03-05T09:07:37.562 INFO REPL - Topology service shutted down
2018-03-05T09:07:37.583 INFO REPL - Replication service shutted down
2018-03-05T09:07:38.605 INFO LIFECYCLE - ExecutorService java.util.concurrent.ScheduledThreadPoolExecutor@7387c99f[Shutting down, pool size = 1, active threads = 0, queued tasks = 1, completed tasks = 18] did not finished in PT1S
2018-03-05T09:07:39.447 INFO LIFECYCLE - ToroDB Stampede has been shutted down

Method may fail to close Connection

Hi,
I'm trying to add a small feature that detects the PostgreSQL version on start-up and issues a warning if the version is < 9.4. I added this code (surrounded by try-catch) to class Config, method Initialize:

commonDataSource.getConnection().getMetaData().getDatabaseProductVersion();

I got a build error saying:

[screenshot]

As far as I can see, the build error comes from the find-bugs XML file.
Can you tell me what that means and how I should go about handling it?
I can work around it by using suppressFBWarnings, but I assume that's not what you want.

Extract nested field as an attribute not a relation

@ahachete, thanks for the great open source effort, it's impressive!
I am using Stampede in a project, but since our schema has many nested fields, it creates a lot of tables. Since, in theory, Stampede could generate nested fields as attributes rather than as new relations, I am wondering what the design decision behind this implementation was.
And more importantly, is there any configuration that can switch the mode to create attributes instead of relations for nested fields?

Thanks a lot.
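To make the two layouts being contrasted concrete (table and column names are purely illustrative):

    -- Current behaviour: each nested subdocument becomes its own child table
    CREATE TABLE orders         (did integer PRIMARY KEY, order_no varchar);
    CREATE TABLE orders_address (did integer REFERENCES orders (did), city varchar, zip varchar);
    -- Requested alternative: the nested field flattened into prefixed attributes of the
    -- parent table (named orders_flat here only so both variants can coexist in one sketch)
    CREATE TABLE orders_flat (
        did          integer PRIMARY KEY,
        order_no     varchar,
        address_city varchar,
        address_zip  varchar
    );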

Improve shown error when Stampede is trying to replicate from a MongoDB whose version is not 3.2

The issue #130 reported by @p1nox indicates that when Stampede is trying to replicate from a MongoDB 3.4, the error shown is (synthesized) the following:

2016-12-22T04:01:30.851 ERROR 'RecoveryService' c.t.m.r.ReplCoordinator Fatal error while starting recovery mode: Not supported yet
2016-12-22T04:01:30.854 ERROR 'RecoveryService' c.t.m.r.MongodbReplBundle Catched an error on the replication layer. Escalating it
2016-12-22T04:01:30.855 ERROR 'RecoveryService' c.t.s.StampedeService Error reported by replication supervisor. Stopping ToroDB Stampede

It should say something like "It is not supported to replicate from MongoDB version whatever"

Build fail after running 'mvn package'

Hi all,
After getting the latest version of the project from git, running 'mvn package' from the root yields this:

[screenshot]

Digging a bit into the code revealed it's connected to something added in a recent commit a week ago.
Up until then, by the way, everything was perfect. Do you know where this is coming from?

ToroDB shuts down unexpectedly

Hi @teoincontatto ,

When Stampede has been running for some time, it shuts down suddenly. Taking a look at the logs, you can see the following messages:

MongoDB Logs

Jan  3 21:22:21 10.255.0.7 docker[mongodb][25132]: 2019-01-03T21:22:21.596+0000 I -        [conn332518] Assertion: 16089:Cannot kill pinned cursor: 13122723805
Jan  3 21:22:21 10.255.0.7 docker[mongodb][25132]: 2019-01-03T21:22:21.596+0000 I COMMAND  [conn332518] killcursors local.oplog.rs keyUpdates:0 writeConflicts:0 exception: Cannot kill pinned cursor: 13122723805 code:16089 numYields:0 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, oplog: { acquireCount: { r: 1 } } } 0ms

Stampede logs

torodb-stampede_1_39129f78021c | 2019-01-03T09:20:50.846 INFO  REPL       - syncing from: mongodb:27017
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:50.846 INFO  REPL       - Topology service started
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:50.903 INFO  REPL       - Database is consistent.
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:50.904 INFO  REPL       - Replication service started
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:50.906 INFO  LIFECYCLE  - ToroDB Stampede is now running
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:50.907 INFO  REPL       - Starting SECONDARY mode
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:51.139 INFO  REPL       - Reading from mongodb:27017
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:53.576 WARN  REPL       - Oplog replication stream finished exceptionally: null
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:53.592 ERROR REPL       - Catched an error on the replication layer. Escalating it
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:53.592 ERROR LIFECYCLE  - Error reported by replication supervisor. Stopping ToroDB Stampede
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:53.599 INFO  LIFECYCLE  - Shutting down ToroDB Stampede
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:53.612 INFO  REPL       - Shutting down replication service
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:55.003 INFO  REPL       - Topology service shutted down
torodb-stampede_1_39129f78021c | 2019-01-03T09:20:55.017 INFO  REPL       - Replication service shutted down

More info

MongoDB

root@b889778dc962:/# mongod --version
db version v3.2.21
git version: 1ab1010737145ba3761318508ff65ba74dfe8155
OpenSSL version: OpenSSL 1.0.1t  3 May 2016
allocator: tcmalloc
modules: none
build environment:
    distmod: debian81
    distarch: x86_64
    target_arch: x86_64
  • Using user and password auth.

ToroDB

  • Version 1.0.0 (latest).

What can I do to get more information about this error and fix it?

Thanks in advance.

Default configuration ("torodb -l") generates null values

"torodb -l" generates a default config file that can be used as a template for your own configuration. However, the current version generates "null" values on some fields that will prevent ToroDB from starting if not edited correctly. Some of these fields may be left unconfigured by the user, so it's best not to generate them with a null value as a default.

Sample execution:


$ ./torodb/bin/torodb -l

---
generic:
  logLevel: "INFO"
  logFile: null
  connectionPoolSize: 30
  reservedReadPoolSize: 10
  logPackages: null
  logbackFile: null
protocol:
  mongo:
    net:
      bindIp: "localhost"
      port: 27018
    replication: null
backend:
  postgres:
    host: "localhost"
    port: 5432
    database: "torod"
    user: "torodb"
    toropassFile: "/home/aht/.toropass"
    applicationName: "toro"

System collection support (and more)

ToroDB throws errors when accessing Mongo's system collections, whose identifiers are prefixed with system. (Mongoose ODM, used in some applications like Let's Chat, utilizes them.)

20489 [torod-session-3] DEBUG c.t.t.d.p.query.QueryEvaluator - Query (migrationId == 1421181022156-drop-sessions) or (migrationId exists ( == 1421181022156-drop-sessions)) fulfiled by [] 
20490 [nioEventLoopGroup-3-5] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_INSERT, data: InsertMessage{clientAddress=127.0.0.1, clientPort=57308, requestId=0, database='letschat', collection='system.indexes', documents=[{ "ns" : "letschat.rooms" , "key" : { "slug" : 1} , "name" : "slug_1" , "unique" : true , "background" : true , "safe" :  null }]} 
20490 [nioEventLoopGroup-3-4] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_INSERT, data: InsertMessage{clientAddress=127.0.0.1, clientPort=57307, requestId=0, database='letschat', collection='system.indexes', documents=[{ "ns" : "letschat.users" , "key" : { "email" : 1} , "name" : "email_1" , "unique" : true , "background" : true , "safe" :  null }]} 
20493 [nioEventLoopGroup-3-4] DEBUG c.t.t.db.metaInf.CollectionMetaInfo - Heuristic said 50001 new ids are needed. Difference between lastCachedId and lastUsedId is -1 
20493 [nioEventLoopGroup-3-5] DEBUG c.t.t.db.metaInf.CollectionMetaInfo - Heuristic said 50002 new ids are needed. Difference between lastCachedId and lastUsedId is -2 
20493 [torod-system-1] ERROR c.t.t.d.e.DefaultExecutorFactory - System executor exception 
com.torodb.torod.core.exceptions.ToroRuntimeException: java.lang.IllegalArgumentException: At the present time Torod doesn't support '.' as identifier character. Only a alphanumeric letters and '_' are supported
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.onFail(SystemDbCallable.java:64) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:22) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.DefaultSystemExecutor$SystemRunnable.call(DefaultSystemExecutor.java:153) ~[torodb.jar:na]
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) ~[na:1.7.0_25]
        at java.util.concurrent.FutureTask.run(FutureTask.java:166) ~[na:1.7.0_25]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_25]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_25]
        at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]
Caused by: java.lang.IllegalArgumentException: At the present time Torod doesn't support '.' as identifier character. Only a alphanumeric letters and '_' are supported
        at com.torodb.torod.db.postgresql.IdsFilter.filter(IdsFilter.java:42) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.IdsFilter.escapeSchemaName(IdsFilter.java:30) ~[torodb.jar:na]
        at com.torodb.torod.db.sql.AbstractSqlDbConnection.createCollection(AbstractSqlDbConnection.java:152) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.CreateCollectionCallable.call(CreateCollectionCallable.java:61) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.CreateCollectionCallable.call(CreateCollectionCallable.java:34) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.failableCall(SystemDbCallable.java:72) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:20) ~[torodb.jar:na]
        ... 6 common frames omitted

ToroDB also complains about upper case characters:

9230 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Executing query          : insert into "torodb"."collections" ("name", "schema", "capped", "max_size", "max_elementes", "other", "storage_engine") values (?, ?, ?, ?, ?, ?, ?) 
9230 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - -> with bind values      : insert into "torodb"."collections" ("name", "schema", "capped", "max_size", "max_elementes", "other", "storage_engine") values ('migrootions', 'migrootions', false, 0, 0, null, 'torodb-dev') 
9233 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Affected row(s)          : 1 
9233 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Query executed           : Total: 3.934ms 
9233 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Finishing                : Total: 4.001ms, +0.067ms 
9272 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Executing query          : select count(*) from information_schema.tables where (table_schema = ? and table_name = ?) 
9273 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - -> with bind values      : select count(*) from information_schema.tables where (table_schema = 'migrootions' and table_name = 'indexes') 
9276 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Query executed           : Total: 7.332ms 
9277 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Fetched result           : +-----+ 
9277 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : |count| 
9277 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : +-----+ 
9277 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : |    0| 
9277 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : +-----+ 
9277 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Finishing                : Total: 7.864ms, +0.531ms 
9278 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Executing query          : CREATE TABLE migrootions.indexes (name VARCHAR PRIMARY KEY, index JSONB NOT NULL) 
9279 [nioEventLoopGroup-3-4] DEBUG c.t.t.db.metaInf.CollectionMetaInfo - migrootions.(_id : TWELVE_BYTES, __v : INTEGER, migrationId : STRING, dateMigrated : DATETIME) table was not created 
9279 [nioEventLoopGroup-3-4] DEBUG c.t.t.db.metaInf.CollectionMetaInfo - I will schedule creation of migrootions.(_id : TWELVE_BYTES, __v : INTEGER, migrationId : STRING, dateMigrated : DATETIME) table 
9279 [nioEventLoopGroup-3-4] DEBUG c.t.t.db.metaInf.CollectionMetaInfo - migrootions.(_id : TWELVE_BYTES, __v : INTEGER, migrationId : STRING, dateMigrated : DATETIME) table creation has been scheduled 
9364 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Affected row(s)          : 0 
9364 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Query executed           : Total: 86.247ms 
9364 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Finishing                : Total: 86.368ms, +0.121ms 
9368 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Executing query          : select "migrootions"."indexes"."index" from "migrootions"."indexes" 
9369 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Query executed           : Total: 1.862ms 
9369 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Fetched result           : +-----+ 
9369 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : |index| 
9369 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : +-----+ 
9370 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Finishing                : Total: 2.328ms, +0.466ms 
9395 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Calling routine          : { ? = call "public"."reserve_doc_ids"(?, ?) } 
9395 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - -> with bind values      : { ? = call "public"."reserve_doc_ids"('migrootions', 50001) } 
9404 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Query executed           : Total: 8.953ms 
9404 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Fetching out values      : Total: 9.064ms, +0.111ms 
9405 [torod-system-1] DEBUG org.jooq.tools.LoggerListener - Fetched OUT parameters   : +------------+ 
9406 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : |RETURN_VALUE| 
9406 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : +------------+ 
9406 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : |       50000| 
9406 [torod-system-1] DEBUG org.jooq.tools.LoggerListener -                          : +------------+ 
9406 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Out values fetched       : Total: 10.933ms, +1.869ms 
9406 [torod-system-1] DEBUG org.jooq.tools.StopWatch - Finishing                : Total: 11.031ms, +0.097ms 
9423 [torod-system-1] ERROR c.t.t.d.e.DefaultExecutorFactory - System executor exception 
com.torodb.torod.core.exceptions.ToroRuntimeException: java.lang.IllegalArgumentException: At the present time Torod doesn't support 'I' as identifier character. Only a alphanumeric letters and '_' are supported
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.onFail(SystemDbCallable.java:64) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:22) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.DefaultSystemExecutor$SystemRunnable.call(DefaultSystemExecutor.java:153) ~[torodb.jar:na]
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) ~[na:1.7.0_25]
        at java.util.concurrent.FutureTask.run(FutureTask.java:166) ~[na:1.7.0_25]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_25]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_25]
        at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]
Caused by: java.lang.IllegalArgumentException: At the present time Torod doesn't support 'I' as identifier character. Only a alphanumeric letters and '_' are supported
        at com.torodb.torod.db.postgresql.IdsFilter.filter(IdsFilter.java:42) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.IdsFilter.escapeAttributeName(IdsFilter.java:35) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.toColumnName(SubDocTable.java:244) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.<init>(SubDocTable.java:109) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.<init>(SubDocTable.java:100) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.<init>(SubDocTable.java:96) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.CollectionSchema.prepareSubDocTable(CollectionSchema.java:230) ~[torodb.jar:na]
        at com.torodb.torod.db.sql.AbstractSqlDbConnection.createSubDocTypeTable(AbstractSqlDbConnection.java:190) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.CreateSubDocTableCallable.call(CreateSubDocTableCallable.java:58) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.CreateSubDocTableCallable.call(CreateSubDocTableCallable.java:34) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.failableCall(SystemDbCallable.java:72) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:20) ~[torodb.jar:na]
        ... 6 common frames omitted
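For context, PostgreSQL itself can store such identifiers as long as they are double-quoted; the restriction seen in the traces above comes from ToroDB's IdsFilter escaping rules rather than from the backend:

    -- Plain PostgreSQL accepts dotted and upper-case identifiers when they are quoted:
    CREATE SCHEMA "letschat";
    CREATE TABLE "letschat"."system.indexes" (ns varchar, name varchar);
    -- Unquoted identifiers, however, fold to lower case and cannot contain '.',
    -- which is roughly the subset that IdsFilter enforces.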

RecoveryService fatal error

I have three containers: one for Postgres and two for Mongo, the mongo1 and mongo2 replica set members:

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS                      NAMES
00aa057a85c5        mongo               "/entrypoint.sh mongo"   13 minutes ago      Up 13 minutes              0.0.0.0:27018->27017/tcp   mongo2
735ed126f61e        mongo               "/entrypoint.sh mongo"   13 minutes ago      Up 13 minutes              0.0.0.0:27017->27017/tcp   mongo1
e4ec4ea1dd1d        postgres:9.5        "/docker-entrypoint.s"   50 minutes ago      Up 50 minutes              0.0.0.0:5432->5432/tcp     postgres_test

Also, I already have the Postgres database configured with the requirements: the torodb and torod databases and the torodb user.

I start mongo containers like:

# create a custom network
docker network create my-mongo-cluster

# start first replica using this network
docker run -d \
-p 27017:27017 \
--name mongo1 \
--net my-mongo-cluster \
mongo mongod --replSet rs1

# start second replica using same network
docker run -d \
-p 27018:27017 \
--name mongo2 \
--net my-mongo-cluster \
mongo mongod --replSet rs1

I start stampede container like:

docker run -ti -v ~/.toropass:/root/.toropass --net my-mongo-cluster torodb/stampede

So when the Stampede container starts, everything seems fine and all services connect, but then I receive an error message and the container shuts down:

2016-12-22T04:01:25.114 INFO 'StampedeService STARTING' c.t.s.StampedeService Starting up ToroDB Stampede
2016-12-22T04:01:25.127 INFO 'StampedeService STARTING' c.t.b.p.PostgreSqlDbBackend Configured PostgreSQL backend at 172.22.0.1:5432
2016-12-22T04:01:25.701 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool session with size 28 and level TRANSACTION_REPEATABLE_READ
2016-12-22T04:01:25.743 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool system with size 1 and level TRANSACTION_REPEATABLE_READ
2016-12-22T04:01:25.756 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool cursors with size 1 and level TRANSACTION_REPEATABLE_READ
2016-12-22T04:01:26.329 INFO 'BackendBundleImpl STARTING' c.t.b.m.AbstractSchemaUpdater Schema 'torodb' found. Checking it...
2016-12-22T04:01:26.400 INFO 'BackendBundleImpl STARTING' c.t.b.m.AbstractSchemaUpdater Schema 'torodb' checked
2016-12-22T04:01:26.563 INFO 'StampedeService STARTING' c.t.s.StampedeService Database is not consistent. Cleaning it up
2016-12-22T04:01:26.646 INFO 'StampedeService STARTING' c.t.s.StampedeService Replicating from seeds: 172.22.0.1:27017
2016-12-22T04:01:27.039 INFO 'MongodbReplBundle STARTING' c.t.m.r.MongodbReplBundle Starting replication service
2016-12-22T04:01:27.271 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator Waiting for 4  pings from other members before syncing
2016-12-22T04:01:27.295 INFO 'topology-executor-0' c.t.m.c.p.MemberHeartbeatData Member mongo2:27017 is now in state RS_SECONDARY
2016-12-22T04:01:27.297 INFO 'topology-executor-0' c.t.m.c.p.MemberHeartbeatData Member mongo1:27017 is now in state RS_PRIMARY
2016-12-22T04:01:28.273 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator Waiting for 2  pings from other members before syncing
2016-12-22T04:01:29.274 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator Waiting for 2  pings from other members before syncing
2016-12-22T04:01:30.282 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator syncing from: mongo2:27017
2016-12-22T04:01:30.283 INFO 'TopologyService STARTING' c.t.m.r.t.TopologyService Topology service started
2016-12-22T04:01:30.331 WARN 'ReplCoordinator STARTING' c.t.m.r.ReplCoordinator loadStoredConfig() is not implemented yet
2016-12-22T04:01:30.331 INFO 'ReplCoordinator STARTING' c.t.m.r.ReplCoordinator Database is not consistent.
2016-12-22T04:01:30.334 INFO 'MongodbReplBundle STARTING' c.t.m.r.MongodbReplBundle Replication service started
2016-12-22T04:01:30.334 INFO 'StampedeService STARTING' c.t.s.StampedeService ToroDB Stampede is now running
2016-12-22T04:01:30.336 INFO 'repl-coord-starting-recovery' c.t.m.r.ReplCoordinatorStateMachine Starting RECOVERY mode
2016-12-22T04:01:30.364 INFO 'RecoveryService' c.t.m.r.RecoveryService Starting RECOVERY service
2016-12-22T04:01:30.364 INFO 'RecoveryService' c.t.m.r.RecoveryService Starting initial sync
2016-12-22T04:01:30.400 INFO 'RecoveryService' c.t.s.DefaultConsistencyHandler Consistent state has been set to 'false'
2016-12-22T04:01:30.405 INFO 'RecoveryService' c.t.m.r.RecoveryService Using node mongo2:27017 to replicate from
2016-12-22T04:01:30.434 INFO 'RecoveryService' c.t.m.r.RecoveryService Remote database cloning started
2016-12-22T04:01:30.591 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index did_pkey for table oplog_replication
2016-12-22T04:01:30.635 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index rid_pkey for table oplog_replication_lastappliedoplogentry
2016-12-22T04:01:30.668 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index did_idx for table oplog_replication_lastappliedoplogentry
2016-12-22T04:01:30.698 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index did_seq_idx for table oplog_replication_lastappliedoplogentry
2016-12-22T04:01:30.730 INFO 'RecoveryService' c.t.m.r.RecoveryService Local databases dropping started
2016-12-22T04:01:30.756 INFO 'RecoveryService' c.t.m.r.RecoveryService Local databases dropping finished
2016-12-22T04:01:30.756 INFO 'RecoveryService' c.t.m.r.RecoveryService Remote database cloning started
2016-12-22T04:01:30.851 ERROR 'RecoveryService' c.t.m.r.ReplCoordinator Fatal error while starting recovery mode: Not supported yet
2016-12-22T04:01:30.854 ERROR 'RecoveryService' c.t.m.r.MongodbReplBundle Catched an error on the replication layer. Escalating it
2016-12-22T04:01:30.855 ERROR 'RecoveryService' c.t.s.StampedeService Error reported by replication supervisor. Stopping ToroDB Stampede
2016-12-22T04:01:30.858 INFO 'StampedeService STOPPING' c.t.s.StampedeService Shutting down ToroDB Stampede
2016-12-22T04:01:30.858 INFO 'RecoveryService' c.t.m.r.RecoveryService Recived a request to stop the recovering service
2016-12-22T04:01:30.906 INFO 'MongodbReplBundle STOPPING' c.t.m.r.MongodbReplBundle Shutting down replication service
2016-12-22T04:01:30.924 INFO 'TopologyService STOPPING' c.t.m.r.t.TopologyService Topology service shutted down
2016-12-22T04:01:30.937 INFO 'MongodbReplBundle STOPPING' c.t.m.r.MongodbReplBundle Replication service shutted down
2016-12-22T04:01:31.357 INFO 'StampedeService STOPPING' c.t.s.StampedeService ToroDB Stampede has been shutted down

The specific errors seem related to the RecoveryService:

2016-12-22T04:01:30.851 ERROR 'RecoveryService' c.t.m.r.ReplCoordinator Fatal error while starting recovery mode: Not supported yet
2016-12-22T04:01:30.854 ERROR 'RecoveryService' c.t.m.r.MongodbReplBundle Catched an error on the replication layer. Escalating it
2016-12-22T04:01:30.855 ERROR 'RecoveryService' c.t.s.StampedeService Error reported by replication supervisor. Stopping ToroDB Stampede

Warning when arbiter is present

Hi,
first of all, thanks for your work! ToroDB is awesome!
I have an issue replicating from a cluster composed of 4 "Regular" hosts and an arbiter.
The replication works fine and I'm impressed with the speed. The only annoying part is a list of messages like:

WARN REPL 'topology-executor-0' m.r.t.TopologyHeartbeatHandler - Error in heartbeat request to arbiter:27019; {errCode :HOST_UNREACHABLE, errorMsg:com.mongodb.MongoTimeoutException: Timed out after 30000 ms while waiting for a server that matches ReadPreferenceServerSelector{readPreference=primary}. Client view of cluster state is {type=UNKNOWN, servers=[{address=arbiter:27019, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=null, userName='stampede', source='admin', password=<hidden>, mechanismProperties={}}}, caused by {com.mongodb.MongoCommandException: Command failed with error 18: 'Authentication failed.' on server arbiter:27019. The full response is { "ok" : 0.0, "errmsg" : "Authentication failed.", "code" : 18, "codeName" : "AuthenticationFailed" }}}]}

or

08:41:52.576 WARN REPL 'topology-executor-0' m.r.t.TopologyHeartbeatHandler - Error in heartbeat request to arbiter:27019; {errCode :UNAUTHORIZED, errorMsg:Command failed with error 13: 'not authorized on admin to execute command { replSetHeartbeat: "sensibill_beta_staging", pv: 1, v: 11, from: "" }' on server arbiter:27019. The full response is { "ok" : 0.0, "errmsg" : "not authorized on admin to execute command { replSetHeartbeat: \"sensibill_beta_staging\", pv: 1, v: 11, from: \"\" }", "code" : 13, "codeName" : "Unauthorized" }}

Is there a way to exclude the arbiter from the sync?
Thanks

Upsert updates are not supported

It is related to #18 and #19. After initialization (npm start) of Let's Chat succeeds, accessing http://localhost:5000/login produces the following errors:

684219 [nioEventLoopGroup-3-2] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=54056, requestId=7, database='letschat', collection='$cmd', numberToSkip=0, numberToReturn=-1, document={ "update" : "sessions" , "updates" : [ { "q" : { "_id" : "rY0RvMeYlwWHOy9o23zT_i8FfIkk1xZ5"} , "u" : { "_id" : "rY0RvMeYlwWHOy9o23zT_i8FfIkk1xZ5" , "session" : "{\"cookie\":{\"originalMaxAge\":null,\"expires\":null,\"secure\":false,\"httpOnly\":true,\"path\":\"/\"},\"passport\":{}}" , "expires" : { "$date" : 1436577724335}} , "upsert" : true}] , "ordered" : true , "writeConcern" : { }}, returnFieldsSelector=null} 
684221 [nioEventLoopGroup-3-2] ERROR c.e.m.m.RequestMessageObjectHandler - Error while processing request 
com.torodb.torod.core.exceptions.UserToroException: Upsert updates are not supported
    at com.toro.torod.connection.DefaultToroTransaction.update(DefaultToroTransaction.java:210) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.mongowp.mongoserver.api.toro.ToroQueryCommandProcessor.update(ToroQueryCommandProcessor.java:197) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.eightkdata.mongowp.mongoserver.api.QueryCommandProcessor$ProcessorCaller.update(QueryCommandProcessor.java:226) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.eightkdata.mongowp.mongoserver.api.commands.QueryAndWriteOperationsQueryCommand$4.doCall(QueryAndWriteOperationsQueryCommand.java:69) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    ...
684324 [nioEventLoopGroup-3-6] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=54052, requestId=8, database='letschat', collection='sessions', numberToSkip=0, numberToReturn=-1, document={ "_id" : "rY0RvMeYlwWHOy9o23zT_i8FfIkk1xZ5" , "$or" : [ { "expires" : { "$exists" : false}} , { "expires" : { "$gt" : { "$date" : 1435368124437}}}]}, returnFieldsSelector=null} 
684326 [nioEventLoopGroup-3-6] ERROR c.e.m.m.RequestMessageObjectHandler - Error while processing request 
com.torodb.torod.core.exceptions.ToroImplementationException: DocValue 2015-06-27T10:22:04.437 is not translatable to basic values
    at com.torodb.torod.core.subdocument.values.ValueFactory.fromDocValue(ValueFactory.java:91) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateGtOperand(BasicQueryTranslator.java:497) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateSubQuery(BasicQueryTranslator.java:428) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateSubQueries(BasicQueryTranslator.java:393) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    ...
684488 [nioEventLoopGroup-3-7] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=54053, requestId=9, database='letschat', collection='sessions', numberToSkip=0, numberToReturn=-1, document={ "_id" : "rY0RvMeYlwWHOy9o23zT_i8FfIkk1xZ5" , "$or" : [ { "expires" : { "$exists" : false}} , { "expires" : { "$gt" : { "$date" : 1435368124604}}}]}, returnFieldsSelector=null} 
684489 [nioEventLoopGroup-3-7] ERROR c.e.m.m.RequestMessageObjectHandler - Error while processing request 
com.torodb.torod.core.exceptions.ToroImplementationException: DocValue 2015-06-27T10:22:04.604 is not translatable to basic values
    at com.torodb.torod.core.subdocument.values.ValueFactory.fromDocValue(ValueFactory.java:91) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateGtOperand(BasicQueryTranslator.java:497) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateSubQuery(BasicQueryTranslator.java:428) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateSubQueries(BasicQueryTranslator.java:393) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    ...
684508 [nioEventLoopGroup-3-8] DEBUG c.e.m.m.RequestMessageObjectHandler - Received message type: OP_QUERY, data: QueryMessage{clientAddress=127.0.0.1, clientPort=54054, requestId=10, database='letschat', collection='sessions', numberToSkip=0, numberToReturn=-1, document={ "_id" : "rY0RvMeYlwWHOy9o23zT_i8FfIkk1xZ5" , "$or" : [ { "expires" : { "$exists" : false}} , { "expires" : { "$gt" : { "$date" : 1435368124626}}}]}, returnFieldsSelector=null} 
684509 [nioEventLoopGroup-3-8] ERROR c.e.m.m.RequestMessageObjectHandler - Error while processing request 
com.torodb.torod.core.exceptions.ToroImplementationException: DocValue 2015-06-27T10:22:04.626 is not translatable to basic values
    at com.torodb.torod.core.subdocument.values.ValueFactory.fromDocValue(ValueFactory.java:91) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateGtOperand(BasicQueryTranslator.java:497) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateSubQuery(BasicQueryTranslator.java:428) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    at com.torodb.translator.BasicQueryTranslator.translateSubQueries(BasicQueryTranslator.java:393) ~[torodb-0.23-SNAPSHOT-jar-with-dependencies.jar:na]
    ...

Error "There is no open cursor with id" when executing a find command

I am creating an empty PostgreSQL database, starting ToroDB and executing the following code:

Dependencies

    org.mongodb : mongo-java-driver : 2.13.1

Insert new data in an empty torodb database

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;

/**
 * Java MongoDB : Insert a bunch of documents
 */
public class MassiveInsert {
    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("db");

            DBCollection collection = db.getCollection("ships");

            for (int i = 0; i < 1500; i++) {
                insert(collection, i);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static void insert(DBCollection collection, int index) {
        System.out.println("BasicDBObject example..." + index);
        BasicDBObject document = new BasicDBObject();
        document.put("name", "USS Enterprise");
        document.put("ship_id", index);

        BasicDBObject documentDetail = new BasicDBObject();
        documentDetail.put("active", "true");
        documentDetail.put("engines", 99);
        document.put("detail", documentDetail);

        collection.insert(document);
    }
}

Retrieve the data

import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;

/**
 * Java MongoDB : Retrieve documents
 */
public class MassiveReader {
    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("db");

            DBCollection collection = db.getCollection("ships");

            DBCursor cursorDocJSON = collection.find();
            while (cursorDocJSON.hasNext()) {
                System.out.println(cursorDocJSON.next());
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Error output

62650 [torod-session-5] ERROR c.t.t.d.e.DefaultExecutorFactory - Session executor exception
com.torodb.torod.core.exceptions.ToroRuntimeException: com.torodb.torod.core.exceptions.UserToroException: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.onFail(ReadCursorCallable.java:89) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.onFail(ReadCursorCallable.java:39) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.Job.call(Job.java:22) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.DefaultSessionExecutor$SessionRunnable.call(DefaultSessionExecutor.java:196) ~[torodb-0.20-jar-with-dependencies.jar:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_25]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_25]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
Caused by: com.torodb.torod.core.exceptions.UserToroException: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.failableCall(ReadCursorCallable.java:73) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.failableCall(ReadCursorCallable.java:39) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.Job.call(Job.java:20) ~[torodb-0.20-jar-with-dependencies.jar:na]
... 5 common frames omitted
Caused by: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at com.torodb.torod.db.sql.AbstractSqlDbWrapper.getGlobalCursor(AbstractSqlDbWrapper.java:219) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.LazyDbWrapper.getGlobalCursor(LazyDbWrapper.java:68) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.failableCall(ReadCursorCallable.java:64) ~[torodb-0.20-jar-with-dependencies.jar:na]
... 7 common frames omitted
62655 [nioEventLoopGroup-3-4] ERROR c.e.m.m.RequestMessageObjectHandler - Error while processing request
java.lang.RuntimeException: java.util.concurrent.ExecutionException: com.torodb.torod.core.exceptions.ToroRuntimeException: com.torodb.torod.core.exceptions.UserToroException: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at com.toro.torod.connection.DefaultCursorManager.readCursor(DefaultCursorManager.java:137) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.mongowp.mongoserver.api.toro.ToroRequestProcessor.getMore(ToroRequestProcessor.java:203) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.eightkdata.mongowp.mongoserver.RequestMessageObjectHandler.channelRead(RequestMessageObjectHandler.java:65) ~[torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:182) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:182) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:308) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:294) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:130) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) [torodb-0.20-jar-with-dependencies.jar:na]
at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) [torodb-0.20-jar-with-dependencies.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
Caused by: java.util.concurrent.ExecutionException: com.torodb.torod.core.exceptions.ToroRuntimeException: com.torodb.torod.core.exceptions.UserToroException: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at java.util.concurrent.FutureTask.report(FutureTask.java:122) ~[na:1.8.0_25]
at java.util.concurrent.FutureTask.get(FutureTask.java:192) ~[na:1.8.0_25]
at com.toro.torod.connection.DefaultCursorManager.readCursor(DefaultCursorManager.java:115) ~[torodb-0.20-jar-with-dependencies.jar:na]
... 19 common frames omitted
Caused by: com.torodb.torod.core.exceptions.ToroRuntimeException: com.torodb.torod.core.exceptions.UserToroException: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.onFail(ReadCursorCallable.java:89) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.onFail(ReadCursorCallable.java:39) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.Job.call(Job.java:22) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.DefaultSessionExecutor$SessionRunnable.call(DefaultSessionExecutor.java:196) ~[torodb-0.20-jar-with-dependencies.jar:na]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_25]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_25]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) ~[na:1.8.0_25]
... 1 common frames omitted
Caused by: com.torodb.torod.core.exceptions.UserToroException: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.failableCall(ReadCursorCallable.java:73) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.failableCall(ReadCursorCallable.java:39) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.Job.call(Job.java:20) ~[torodb-0.20-jar-with-dependencies.jar:na]
... 5 common frames omitted
Caused by: java.lang.IllegalArgumentException: There is no open cursor with id com.torodb.torod.core.cursors.CursorId@150
at com.torodb.torod.db.sql.AbstractSqlDbWrapper.getGlobalCursor(AbstractSqlDbWrapper.java:219) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.LazyDbWrapper.getGlobalCursor(LazyDbWrapper.java:68) ~[torodb-0.20-jar-with-dependencies.jar:na]
at com.torodb.torod.db.executor.jobs.ReadCursorCallable.failableCall(ReadCursorCallable.java:64) ~[torodb-0.20-jar-with-dependencies.jar:na]
... 7 common frames omitted
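
For reference, a minimal sketch of the same read using the newer MongoClient API (my own rewrite, assuming the same localhost:27017 endpoint and the db/ships namespace from the snippet above; the class name is only illustrative). It goes through the same find/getMore path, so it is meant to make the reproduction easier to rerun, not to fix the error:

    import com.mongodb.MongoClient;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoCursor;
    import com.mongodb.client.MongoDatabase;
    import org.bson.Document;

    public class ReadAllShips {
        public static void main(String[] args) {
            // Assumes the server speaking the MongoDB wire protocol is on localhost:27017
            MongoClient mongo = new MongoClient("localhost", 27017);
            try {
                MongoDatabase db = mongo.getDatabase("db");
                MongoCollection<Document> ships = db.getCollection("ships");

                // Iterate the whole collection; batches beyond the first are fetched
                // via getMore, which is where the "no open cursor" error above is raised
                MongoCursor<Document> cursor = ships.find().iterator();
                try {
                    while (cursor.hasNext()) {
                        System.out.println(cursor.next().toJson());
                    }
                } finally {
                    cursor.close();
                }
            } finally {
                mongo.close();
            }
        }
    }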

Docker-Compose ExecutorService java.util.concurrent.ScheduledThreadPoolExecutor@2eac3d64[Shutting down

Hi,
I'm using the Docker Compose script from:
https://github.com/wekan/wekan-postgresql

The Dockerfile for that Wekan container is at:
https://github.com/wekan/wekan/blob/devel/Dockerfile

The only change to docker-compose.yml is that the Wekan address is changed from http://localhost to an IP address like http://192.168.1.5.

I used docker-compose.yml with:

docker-compose up

The ToroDB Stampede Docker container does not stay running; it crashes and exits. The other containers (wekan, mongo and postgres) keep running. It is not a lack of RAM: I tested on a computer with 32 GB of RAM in total, more than half of which was free before starting.

There were no errors about fibers when building the Docker Hub wekanteam:latest container, so I don't know why there is a fibers error below.

$ docker-compose up
Creating network "wekanpostgresql_wekan-tier" with driver "bridge"
Creating volume "wekanpostgresql_mongodb" with local driver
Creating volume "wekanpostgresql_mongodb-dump" with local driver
Pulling mongodb (mongo:3.2)...
3.2: Pulling from library/mongo
5233d9aed181: Pull complete
5bbfc055e8fb: Pull complete
03e4cc4b6057: Pull complete
8319d631fd37: Pull complete
797ca64b920a: Pull complete
4f57a996ba49: Pull complete
5778b19a1103: Pull complete
a763733f623a: Pull complete
0101d9086c98: Pull complete
8b0a7b12275b: Pull complete
bfe8dd06ccf2: Pull complete
Digest: sha256:b2c7025b69223fca43a2c7d60c30b2bffac4df20314f11d2b46f4d8d4eaf29e9
Status: Downloaded newer image for mongo:3.2
Pulling wekan (wekanteam/wekan:latest)...
latest: Pulling from wekanteam/wekan
9f0706ba7422: Pull complete
027fea8c066a: Pull complete
c261542db470: Pull complete
Digest: sha256:ead2dba16e0b80b4ff570d28de2d966be2508386f7b4632b8928de6ee401abb4
Status: Downloaded newer image for wekanteam/wekan:latest
Pulling postgres (postgres:9.6)...
9.6: Pulling from library/postgres
ad74af05f5a2: Pull complete
8996b4a29b2b: Pull complete
bea3311ef15b: Pull complete
b1b9eb0ac9c8: Pull complete
1d6d551d6af0: Pull complete
ba16377760f9: Pull complete
fd68bfa82d98: Pull complete
f49f2decd34d: Pull complete
6b1468749943: Pull complete
29d82d6e2d6c: Pull complete
ad849322ee0c: Pull complete
c5539863a39f: Pull complete
18cc2b50256c: Pull complete
Digest: sha256:586320aba4a40f7c4ffdb69534f93c844f01c0ff1211c4b9d9f05a8bddca186f
Status: Downloaded newer image for postgres:9.6
Pulling torodb-stampede (torodb/stampede:latest)...
latest: Pulling from torodb/stampede
5040bd298390: Pull complete
fce5728aad85: Pull complete
c42794440453: Pull complete
0c0da797ba48: Pull complete
7c9b17433752: Pull complete
114e02586e63: Pull complete
e4c663802e9a: Pull complete
0490ebe4175e: Pull complete
44f1d76d0958: Pull complete
ab29f21dee7e: Pull complete
c91455792d73: Pull complete
Digest: sha256:ff04de456602ecd01347bb836565da01e03d4f30d62078286951cea8242667ed
Status: Downloaded newer image for torodb/stampede:latest
Creating wekanpostgresql_mongodb_1 ... 
Creating wekanpostgresql_postgres_1 ... 
Creating wekanpostgresql_mongodb_1
Creating wekanpostgresql_mongodb_1 ... done
Creating wekan-app ... 
Creating wekanpostgresql_torodb-stampede_1 ... 
Creating wekan-app
Creating wekanpostgresql_torodb-stampede_1 ... done
Attaching to wekanpostgresql_postgres_1, wekanpostgresql_mongodb_1, wekan-app, wekanpostgresql_torodb-stampede_1
postgres_1         | The files belonging to this database system will be owned by user "postgres".
postgres_1         | This user must also own the server process.
postgres_1         | 
postgres_1         | The database cluster will be initialized with locale "en_US.utf8".
postgres_1         | The default database encoding has accordingly been set to "UTF8".
postgres_1         | The default text search configuration will be set to "english".
postgres_1         | 
postgres_1         | Data page checksums are disabled.
postgres_1         | 
postgres_1         | fixing permissions on existing directory /var/lib/postgresql/data ... ok
postgres_1         | creating subdirectories ... ok
postgres_1         | selecting default max_connections ... 100
postgres_1         | selecting default shared_buffers ... 128MB
postgres_1         | selecting dynamic shared memory implementation ... posix
postgres_1         | creating configuration files ... ok
mongodb_1          | 2017-08-16T22:50:23.726+0000 I CONTROL  [initandlisten] MongoDB starting : pid=6 port=27017 dbpath=/data/db 64-bit host=583fc0025955
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] db version v3.2.16
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] git version: 056bf45128114e44c5358c7a8776fb582363e094
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] allocator: tcmalloc
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] modules: none
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten] build environment:
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten]     distmod: debian81
mongodb_1          | 2017-08-16T22:50:23.727+0000 I CONTROL  [initandlisten]     distarch: x86_64
mongodb_1          | 2017-08-16T22:50:23.728+0000 I CONTROL  [initandlisten]     target_arch: x86_64
mongodb_1          | 2017-08-16T22:50:23.728+0000 I CONTROL  [initandlisten] options: { replication: { replSet: "rs1" } }
mongodb_1          | 2017-08-16T22:50:23.872+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=1G,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] 
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] 
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
mongodb_1          | 2017-08-16T22:50:24.445+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] 
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
mongodb_1          | 2017-08-16T22:50:24.446+0000 I CONTROL  [initandlisten] 
postgres_1         | running bootstrap script ... ok
mongodb_1          | 2017-08-16T22:50:24.955+0000 I REPL     [initandlisten] Did not find local voted for document at startup.
mongodb_1          | 2017-08-16T22:50:24.955+0000 I REPL     [initandlisten] Did not find local replica set configuration document at startup;  NoMatchingDocument: Did not find replica set configuration document in local.system.replset
mongodb_1          | 2017-08-16T22:50:24.955+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
mongodb_1          | 2017-08-16T22:50:24.956+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
mongodb_1          | 2017-08-16T22:50:24.956+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
postgres_1         | performing post-bootstrap initialization ... ok
postgres_1         | syncing data to disk ... ok
postgres_1         | 
postgres_1         | Success. You can now start the database server using:
postgres_1         | 
postgres_1         |     pg_ctl -D /var/lib/postgresql/data -l logfile start
postgres_1         | 
postgres_1         | 
postgres_1         | WARNING: enabling "trust" authentication for local connections
postgres_1         | You can change this by editing pg_hba.conf or using the option -A, or
postgres_1         | --auth-local and --auth-host, the next time you run initdb.
postgres_1         | ****************************************************
postgres_1         | WARNING: No password has been set for the database.
postgres_1         |          This will allow anyone with access to the
postgres_1         |          Postgres port to access your database. In
postgres_1         |          Docker's default configuration, this is
postgres_1         |          effectively any other container on the same
postgres_1         |          system.
postgres_1         | 
postgres_1         |          Use "-e POSTGRES_PASSWORD=password" to set
postgres_1         |          it in "docker run".
postgres_1         | ****************************************************
postgres_1         | waiting for server to start....LOG:  could not bind IPv6 socket: Cannot assign requested address
postgres_1         | HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
postgres_1         | LOG:  database system was shut down at 2017-08-16 22:50:26 UTC
postgres_1         | LOG:  MultiXact member wraparound protections are now enabled
postgres_1         | LOG:  database system is ready to accept connections
postgres_1         | LOG:  autovacuum launcher started
mongodb_1          | 2017-08-16T22:50:27.308+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.2:51956 #1 (1 connection now open)
mongodb_1          | 2017-08-16T22:50:27.480+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55280 #2 (2 connections now open)
wekan-app          | 
wekan-app          | /build/programs/server/node_modules/fibers/future.js:313
wekan-app          | 						throw(ex);
wekan-app          | 						^
wekan-app          | MongoError: not master and slaveOk=false
wekan-app          |     at Object.Future.wait (/build/programs/server/node_modules/fibers/future.js:449:15)
wekan-app          |     at [object Object].MongoConnection._ensureIndex (packages/mongo/mongo_driver.js:832:10)
wekan-app          |     at [object Object].Mongo.Collection._ensureIndex (packages/mongo/collection.js:677:20)
wekan-app          |     at setupUsersCollection (packages/accounts-base/accounts_server.js:1493:9)
wekan-app          |     at new AccountsServer (packages/accounts-base/accounts_server.js:51:5)
wekan-app          |     at meteorInstall.node_modules.meteor.accounts-base.server_main.js (packages/accounts-base/server_main.js:9:12)
wekan-app          |     at fileEvaluate (packages/modules-runtime.js:197:9)
wekan-app          |     at require (packages/modules-runtime.js:120:16)
wekan-app          |     at /build/programs/server/packages/accounts-base.js:2031:15
wekan-app          |     at /build/programs/server/packages/accounts-base.js:2042:3
wekan-app          |     - - - - -
wekan-app          |     at Function.MongoError.create (/build/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/error.js:31:11)
wekan-app          |     at queryCallback (/build/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/cursor.js:212:36)
wekan-app          |     at /build/programs/server/npm/node_modules/meteor/npm-mongo/node_modules/mongodb-core/lib/connection/pool.js:455:18
wekan-app          |     at nextTickCallbackWith0Args (node.js:489:9)
wekan-app          |     at process._tickCallback (node.js:418:13)
mongodb_1          | 2017-08-16T22:50:27.898+0000 I NETWORK  [conn2] end connection 172.18.0.4:55280 (1 connection now open)
postgres_1         |  done
postgres_1         | server started
postgres_1         | ALTER ROLE
postgres_1         | 
postgres_1         | 
postgres_1         | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
postgres_1         | 
postgres_1         | LOG:  received fast shutdown request
postgres_1         | LOG:  aborting any active transactions
postgres_1         | LOG:  autovacuum launcher shutting down
postgres_1         | LOG:  shutting down
postgres_1         | waiting for server to shut down....LOG:  database system is shut down
postgres_1         |  done
postgres_1         | server stopped
postgres_1         | 
postgres_1         | PostgreSQL init process complete; ready for start up.
postgres_1         | 
postgres_1         | LOG:  database system was shut down at 2017-08-16 22:50:28 UTC
postgres_1         | LOG:  MultiXact member wraparound protections are now enabled
postgres_1         | LOG:  database system is ready to accept connections
postgres_1         | LOG:  autovacuum launcher started
postgres_1         | FATAL:  role "wekan" does not exist
mongodb_1          | 2017-08-16T22:50:30.312+0000 I REPL     [conn1] replSetInitiate admin command received from client
mongodb_1          | 2017-08-16T22:50:30.314+0000 I REPL     [conn1] replSetInitiate config object with 1 members parses ok
mongodb_1          | 2017-08-16T22:50:30.314+0000 I REPL     [conn1] ******
mongodb_1          | 2017-08-16T22:50:30.314+0000 I REPL     [conn1] creating replication oplog of size: 22945MB...
mongodb_1          | 2017-08-16T22:50:30.321+0000 I STORAGE  [conn1] Starting WiredTigerRecordStoreThread local.oplog.rs
mongodb_1          | 2017-08-16T22:50:30.328+0000 I STORAGE  [conn1] The size storer reports that the oplog contains 0 records totaling to 0 bytes
mongodb_1          | 2017-08-16T22:50:30.328+0000 I STORAGE  [conn1] Scanning the oplog to determine where to place markers for truncation
mongodb_1          | 2017-08-16T22:50:30.363+0000 I REPL     [conn1] ******
mongodb_1          | 2017-08-16T22:50:30.427+0000 I REPL     [ReplicationExecutor] New replica set config in use: { _id: "rs1", version: 1, protocolVersion: 1, members: [ { _id: 0, host: "mongodb:27017", arbiterOnly: false, buildIndexes: true, hidden: false, priority: 1.0, tags: {}, slaveDelay: 0, votes: 1 } ], settings: { chainingAllowed: true, heartbeatIntervalMillis: 2000, heartbeatTimeoutSecs: 10, electionTimeoutMillis: 10000, getLastErrorModes: {}, getLastErrorDefaults: { w: 1, wtimeout: 0 }, replicaSetId: ObjectId('5994cc3613c864f461dfa9fb') } }
mongodb_1          | 2017-08-16T22:50:30.432+0000 I REPL     [ReplicationExecutor] This node is mongodb:27017 in the config
mongodb_1          | 2017-08-16T22:50:30.432+0000 I REPL     [ReplicationExecutor] transition to STARTUP2
mongodb_1          | 2017-08-16T22:50:30.432+0000 I REPL     [conn1] Starting replication applier threads
mongodb_1          | 2017-08-16T22:50:30.433+0000 I COMMAND  [conn1] command local.replset.minvalid command: replSetInitiate { replSetInitiate: { _id: "rs1", members: [ { _id: 0.0, host: "mongodb:27017" } ] } } keyUpdates:0 writeConflicts:0 numYields:0 reslen:22 locks:{ Global: { acquireCount: { r: 7, w: 5, W: 2 } }, Database: { acquireCount: { w: 2, W: 3 } }, Metadata: { acquireCount: { w: 1 } }, oplog: { acquireCount: { w: 2 } } } protocol:op_command 120ms
mongodb_1          | 2017-08-16T22:50:30.482+0000 I NETWORK  [conn1] end connection 172.18.0.2:51956 (0 connections now open)
mongodb_1          | 2017-08-16T22:50:30.482+0000 I REPL     [ReplicationExecutor] transition to RECOVERING
mongodb_1          | 2017-08-16T22:50:30.483+0000 I REPL     [ReplicationExecutor] transition to SECONDARY
mongodb_1          | 2017-08-16T22:50:30.502+0000 I REPL     [ReplicationExecutor] conducting a dry run election to see if we could be elected
mongodb_1          | 2017-08-16T22:50:30.503+0000 I REPL     [ReplicationExecutor] dry election run succeeded, running for election
mongodb_1          | 2017-08-16T22:50:30.598+0000 I REPL     [ReplicationExecutor] election succeeded, assuming primary role in term 1
mongodb_1          | 2017-08-16T22:50:30.598+0000 I REPL     [ReplicationExecutor] transition to PRIMARY
mongodb_1          | 2017-08-16T22:50:31.492+0000 I REPL     [rsSync] transition to primary complete; database writes are now permitted
mongodb_1          | 2017-08-16T22:50:32.717+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55286 #3 (1 connection now open)
mongodb_1          | 2017-08-16T22:50:32.907+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { username: 1 }, name: "username_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.907+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.913+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:32.938+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { emails.address: 1 }, name: "emails.address_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.939+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.940+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:32.966+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.resume.loginTokens.hashedToken: 1 }, name: "services.resume.loginTokens.hashedToken_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.967+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.973+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:32.995+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.resume.loginTokens.token: 1 }, name: "services.resume.loginTokens.token_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:32.995+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:32.996+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:33.095+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, key: { services.resume.haveLoginTokensToDelete: 1 }, name: "services.resume.haveLoginTokensToDelete_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:33.098+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:33.102+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:33.121+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, key: { services.resume.loginTokens.when: 1 }, name: "services.resume.loginTokens.when_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:33.126+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:33.142+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.821+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.email.verificationTokens.token: 1 }, name: "services.email.verificationTokens.token_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:34.822+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.832+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.856+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, unique: true, key: { services.password.reset.token: 1 }, name: "services.password.reset.token_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:34.856+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.858+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.885+0000 I INDEX    [conn3] build index on: wekan.users properties: { v: 1, key: { services.password.reset.when: 1 }, name: "services.password.reset.when_1", ns: "wekan.users", sparse: 1 }
mongodb_1          | 2017-08-16T22:50:34.885+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.886+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:34.942+0000 I INDEX    [conn3] build index on: wekan.meteor_accounts_loginServiceConfiguration properties: { v: 1, unique: true, key: { service: 1 }, name: "service_1", ns: "wekan.meteor_accounts_loginServiceConfiguration" }
mongodb_1          | 2017-08-16T22:50:34.942+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:34.944+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:37.743+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55288 #4 (2 connections now open)
mongodb_1          | 2017-08-16T22:50:37.814+0000 I NETWORK  [conn4] end connection 172.18.0.4:55288 (1 connection now open)
mongodb_1          | 2017-08-16T22:50:37.881+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55290 #5 (2 connections now open)
mongodb_1          | 2017-08-16T22:50:37.882+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55292 #6 (3 connections now open)
mongodb_1          | 2017-08-16T22:50:37.923+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55294 #7 (4 connections now open)
mongodb_1          | 2017-08-16T22:50:37.980+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55296 #8 (5 connections now open)
mongodb_1          | 2017-08-16T22:50:37.985+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55298 #9 (6 connections now open)
mongodb_1          | 2017-08-16T22:50:38.006+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55300 #10 (7 connections now open)
mongodb_1          | 2017-08-16T22:50:38.098+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55302 #11 (8 connections now open)
mongodb_1          | 2017-08-16T22:50:38.098+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55304 #12 (9 connections now open)
mongodb_1          | 2017-08-16T22:50:38.152+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55306 #13 (10 connections now open)
mongodb_1          | 2017-08-16T22:50:38.168+0000 I NETWORK  [conn13] end connection 172.18.0.4:55306 (9 connections now open)
mongodb_1          | 2017-08-16T22:50:38.192+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55308 #14 (10 connections now open)
mongodb_1          | 2017-08-16T22:50:38.195+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55310 #15 (11 connections now open)
mongodb_1          | 2017-08-16T22:50:38.203+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55312 #16 (12 connections now open)
mongodb_1          | 2017-08-16T22:50:38.226+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55314 #17 (13 connections now open)
mongodb_1          | 2017-08-16T22:50:38.227+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55316 #18 (14 connections now open)
mongodb_1          | 2017-08-16T22:50:38.282+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55318 #19 (15 connections now open)
mongodb_1          | 2017-08-16T22:50:38.283+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55320 #20 (16 connections now open)
mongodb_1          | 2017-08-16T22:50:38.977+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.4:55322 #21 (17 connections now open)
mongodb_1          | 2017-08-16T22:50:39.309+0000 I INDEX    [conn3] build index on: wekan.activities properties: { v: 1, key: { createdAt: -1 }, name: "createdAt_-1", ns: "wekan.activities" }
mongodb_1          | 2017-08-16T22:50:39.309+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.311+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.327+0000 I INDEX    [conn21] build index on: wekan.activities properties: { v: 1, key: { cardId: 1, createdAt: -1 }, name: "cardId_1_createdAt_-1", ns: "wekan.activities" }
mongodb_1          | 2017-08-16T22:50:39.328+0000 I INDEX    [conn21] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.329+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.352+0000 I INDEX    [conn5] build index on: wekan.activities properties: { v: 1, key: { boardId: 1, createdAt: -1 }, name: "boardId_1_createdAt_-1", ns: "wekan.activities" }
mongodb_1          | 2017-08-16T22:50:39.352+0000 I INDEX    [conn5] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.362+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.387+0000 I INDEX    [conn3] build index on: wekan.activities properties: { v: 1, key: { commentId: 1 }, name: "commentId_1", ns: "wekan.activities", partialFilterExpression: { commentId: { $exists: true } } }
mongodb_1          | 2017-08-16T22:50:39.388+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.394+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.415+0000 I INDEX    [conn21] build index on: wekan.activities properties: { v: 1, key: { attachmentId: 1 }, name: "attachmentId_1", ns: "wekan.activities", partialFilterExpression: { attachmentId: { $exists: true } } }
mongodb_1          | 2017-08-16T22:50:39.415+0000 I INDEX    [conn21] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.417+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.496+0000 I INDEX    [conn5] build index on: wekan.boards properties: { v: 1, unique: true, key: { _id: 1, members.userId: 1 }, name: "_id_1_members.userId_1", ns: "wekan.boards" }
mongodb_1          | 2017-08-16T22:50:39.496+0000 I INDEX    [conn5] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.498+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.525+0000 I INDEX    [conn3] build index on: wekan.boards properties: { v: 1, key: { members.userId: 1 }, name: "members.userId_1", ns: "wekan.boards" }
mongodb_1          | 2017-08-16T22:50:39.525+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.527+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.562+0000 I INDEX    [conn21] build index on: wekan.card_comments properties: { v: 1, key: { cardId: 1, createdAt: -1 }, name: "cardId_1_createdAt_-1", ns: "wekan.card_comments" }
mongodb_1          | 2017-08-16T22:50:39.565+0000 I INDEX    [conn21] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.567+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.610+0000 I INDEX    [conn5] build index on: wekan.cards properties: { v: 1, key: { boardId: 1, createdAt: -1 }, name: "boardId_1_createdAt_-1", ns: "wekan.cards" }
mongodb_1          | 2017-08-16T22:50:39.611+0000 I INDEX    [conn5] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.615+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.668+0000 I INDEX    [conn3] build index on: wekan.checklists properties: { v: 1, key: { cardId: 1, createdAt: 1 }, name: "cardId_1_createdAt_1", ns: "wekan.checklists" }
mongodb_1          | 2017-08-16T22:50:39.668+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.669+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.709+0000 I INDEX    [conn21] build index on: wekan.invitation_codes properties: { v: 1, unique: true, key: { email: 1 }, name: "c2_email", ns: "wekan.invitation_codes", background: true, sparse: false }
mongodb_1          | 2017-08-16T22:50:39.709+0000 I INDEX    [conn21] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.763+0000 I INDEX    [conn5] build index on: wekan.lists properties: { v: 1, key: { boardId: 1 }, name: "boardId_1", ns: "wekan.lists" }
mongodb_1          | 2017-08-16T22:50:39.764+0000 I INDEX    [conn5] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.766+0000 I INDEX    [conn5] build index done.  scanned 0 total records. 0 secs
mongodb_1          | 2017-08-16T22:50:39.893+0000 I INDEX    [conn3] build index on: wekan.unsaved-edits properties: { v: 1, key: { userId: 1 }, name: "userId_1", ns: "wekan.unsaved-edits" }
mongodb_1          | 2017-08-16T22:50:39.893+0000 I INDEX    [conn3] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
mongodb_1          | 2017-08-16T22:50:39.894+0000 I INDEX    [conn3] build index done.  scanned 0 total records. 0 secs
torodb-stampede_1  | Creating entry for user wekan in /root/.toropass
torodb-stampede_1  | Creating wekan user
torodb-stampede_1  | CREATE ROLE
torodb-stampede_1  | Creating wekan database
torodb-stampede_1  | CREATE DATABASE
torodb-stampede_1  | Writing configuration file to /maven/conf/torodb-stampede.yml
torodb-stampede_1  | 2017-08-16T10:50:51.452 INFO  LIFECYCLE  - Starting up ToroDB Stampede
torodb-stampede_1  | 2017-08-16T10:50:52.129 INFO  BACKEND    - Configured PostgreSQL backend at postgres:5432
torodb-stampede_1  | 2017-08-16T10:50:54.192 INFO  BACKEND    - Created pool session with size 28 and level TRANSACTION_REPEATABLE_READ
torodb-stampede_1  | 2017-08-16T10:50:54.348 INFO  BACKEND    - Created pool system with size 1 and level TRANSACTION_REPEATABLE_READ
torodb-stampede_1  | 2017-08-16T10:50:54.394 INFO  BACKEND    - Created pool cursors with size 1 and level TRANSACTION_REPEATABLE_READ
torodb-stampede_1  | 2017-08-16T10:50:57.585 INFO  BACKEND    - Schema 'torodb' not found. Creating it...
torodb-stampede_1  | 2017-08-16T10:50:57.784 INFO  BACKEND    - Schema 'torodb' created
torodb-stampede_1  | 2017-08-16T10:50:57.821 INFO  BACKEND    - Database metadata has been validated
torodb-stampede_1  | 2017-08-16T10:50:58.549 WARN  LIFECYCLE  - Found that replication shard unsharded is not consistent.
torodb-stampede_1  | 2017-08-16T10:50:58.549 WARN  LIFECYCLE  - Dropping user data.
torodb-stampede_1  | 2017-08-16T10:50:58.673 INFO  REPL-unsha - Consistent state has been set to 'false'
torodb-stampede_1  | 2017-08-16T10:50:59.196 INFO  LIFECYCLE  - Starting replication from replica set named rs1
torodb-stampede_1  | 2017-08-16T10:51:00.926 INFO  REPL       - Starting replication service
mongodb_1          | 2017-08-16T22:51:01.675+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.5:36708 #22 (18 connections now open)
mongodb_1          | 2017-08-16T22:51:01.682+0000 I NETWORK  [conn22] end connection 172.18.0.5:36708 (17 connections now open)
mongodb_1          | 2017-08-16T22:51:01.930+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.5:36710 #23 (18 connections now open)
mongodb_1          | 2017-08-16T22:51:02.097+0000 I NETWORK  [initandlisten] connection accepted from 172.18.0.5:36712 #24 (19 connections now open)
torodb-stampede_1  | 2017-08-16T10:51:02.334 INFO  REPL       - Waiting for 2  pings from other members before syncing
torodb-stampede_1  | 2017-08-16T10:51:02.344 INFO  REPL       - Member mongodb:27017 is now in state RS_PRIMARY
torodb-stampede_1  | 2017-08-16T10:51:03.340 INFO  REPL       - Waiting for 1  pings from other members before syncing
torodb-stampede_1  | 2017-08-16T10:51:04.341 INFO  REPL       - Waiting for 1  pings from other members before syncing
torodb-stampede_1  | 2017-08-16T10:51:05.352 INFO  REPL       - syncing from: mongodb:27017
torodb-stampede_1  | 2017-08-16T10:51:05.353 INFO  REPL       - Topology service started
torodb-stampede_1  | 2017-08-16T10:51:05.543 INFO  REPL       - Database is not consistent.
torodb-stampede_1  | 2017-08-16T10:51:05.545 INFO  REPL       - Replication service started
torodb-stampede_1  | 2017-08-16T10:51:05.547 INFO  LIFECYCLE  - ToroDB Stampede is now running
torodb-stampede_1  | 2017-08-16T10:51:05.552 INFO  REPL       - Starting RECOVERY mode
torodb-stampede_1  | 2017-08-16T10:51:05.562 INFO  REPL       - Starting RECOVERY service
torodb-stampede_1  | 2017-08-16T10:51:05.564 INFO  REPL       - Starting initial sync
torodb-stampede_1  | 2017-08-16T10:51:05.607 INFO  REPL       - Consistent state has been set to 'false'
torodb-stampede_1  | 2017-08-16T10:51:05.645 INFO  REPL       - Using node mongodb:27017 to replicate from
torodb-stampede_1  | 2017-08-16T10:51:05.733 INFO  REPL       - Remote database cloning started
torodb-stampede_1  | 2017-08-16T10:51:06.200 INFO  BACKEND    - Created internal index rid_pkey for table oplog_replication_lastappliedoplogentry
torodb-stampede_1  | 2017-08-16T10:51:06.210 INFO  BACKEND    - Created internal index did_seq_idx for table oplog_replication_lastappliedoplogentry
torodb-stampede_1  | 2017-08-16T10:51:06.315 INFO  REPL       - Local databases dropping started
torodb-stampede_1  | 2017-08-16T10:51:06.396 INFO  REPL       - Local databases dropping finished
torodb-stampede_1  | 2017-08-16T10:51:06.397 INFO  REPL       - Remote database cloning started
torodb-stampede_1  | 2017-08-16T10:51:06.429 INFO  REPL       - Collection wekan.card_comments will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.430 INFO  REPL       - Collection wekan.invitation_codes will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.452 INFO  REPL       - Collection wekan.checklists will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.459 INFO  REPL       - Collection wekan.cards will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.460 INFO  REPL       - Collection wekan.lists will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.460 INFO  REPL       - Collection wekan.meteor-migrations will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.461 INFO  REPL       - Collection wekan.boards will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.461 INFO  REPL       - Collection wekan.users will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.462 INFO  REPL       - Collection wekan.meteor_accounts_loginServiceConfiguration will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.accountSettings will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.activities will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.settings will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.463 INFO  REPL       - Collection wekan.unsaved-edits will be cloned
torodb-stampede_1  | 2017-08-16T10:51:06.476 INFO  MONGOD     - Drop collection wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:06.540 INFO  MONGOD     - Drop collection wekan.invitation_codes
torodb-stampede_1  | 2017-08-16T10:51:06.587 INFO  MONGOD     - Drop collection wekan.checklists
torodb-stampede_1  | 2017-08-16T10:51:06.641 INFO  MONGOD     - Drop collection wekan.cards
torodb-stampede_1  | 2017-08-16T10:51:06.692 INFO  MONGOD     - Drop collection wekan.lists
torodb-stampede_1  | 2017-08-16T10:51:06.754 INFO  MONGOD     - Drop collection wekan.meteor-migrations
torodb-stampede_1  | 2017-08-16T10:51:06.778 INFO  MONGOD     - Drop collection wekan.boards
torodb-stampede_1  | 2017-08-16T10:51:06.824 INFO  MONGOD     - Drop collection wekan.users
torodb-stampede_1  | 2017-08-16T10:51:06.906 INFO  MONGOD     - Drop collection wekan.meteor_accounts_loginServiceConfiguration
torodb-stampede_1  | 2017-08-16T10:51:06.935 INFO  MONGOD     - Drop collection wekan.accountSettings
torodb-stampede_1  | 2017-08-16T10:51:06.977 INFO  MONGOD     - Drop collection wekan.activities
torodb-stampede_1  | 2017-08-16T10:51:07.038 INFO  MONGOD     - Drop collection wekan.settings
torodb-stampede_1  | 2017-08-16T10:51:07.069 INFO  MONGOD     - Drop collection wekan.unsaved-edits
torodb-stampede_1  | 2017-08-16T10:51:07.108 INFO  REPL       - Cloning collection data wekan.card_comments into wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:07.471 INFO  REPL       - 0 documents have been cloned to wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:07.474 INFO  REPL       - Cloning collection data wekan.invitation_codes into wekan.invitation_codes
torodb-stampede_1  | 2017-08-16T10:51:07.499 INFO  REPL       - 0 documents have been cloned to wekan.invitation_codes
torodb-stampede_1  | 2017-08-16T10:51:07.502 INFO  REPL       - Cloning collection data wekan.checklists into wekan.checklists
torodb-stampede_1  | 2017-08-16T10:51:07.520 INFO  REPL       - 0 documents have been cloned to wekan.checklists
torodb-stampede_1  | 2017-08-16T10:51:07.524 INFO  REPL       - Cloning collection data wekan.cards into wekan.cards
torodb-stampede_1  | 2017-08-16T10:51:07.566 INFO  REPL       - 0 documents have been cloned to wekan.cards
torodb-stampede_1  | 2017-08-16T10:51:07.574 INFO  REPL       - Cloning collection data wekan.lists into wekan.lists
torodb-stampede_1  | 2017-08-16T10:51:07.621 INFO  REPL       - 0 documents have been cloned to wekan.lists
torodb-stampede_1  | 2017-08-16T10:51:07.623 INFO  REPL       - Cloning collection data wekan.meteor-migrations into wekan.meteor-migrations
torodb-stampede_1  | 2017-08-16T10:51:07.908 INFO  BACKEND    - Created index meteor_migrations__id_s_a_idx for table meteor_migrations associated to logical index wekan.meteor-migrations._id_
torodb-stampede_1  | 2017-08-16T10:51:07.992 INFO  REPL       - 7 documents have been cloned to wekan.meteor-migrations
torodb-stampede_1  | 2017-08-16T10:51:07.993 INFO  REPL       - Cloning collection data wekan.boards into wekan.boards
torodb-stampede_1  | 2017-08-16T10:51:08.045 INFO  REPL       - 0 documents have been cloned to wekan.boards
torodb-stampede_1  | 2017-08-16T10:51:08.046 INFO  REPL       - Cloning collection data wekan.users into wekan.users
torodb-stampede_1  | 2017-08-16T10:51:08.085 INFO  REPL       - 0 documents have been cloned to wekan.users
torodb-stampede_1  | 2017-08-16T10:51:08.086 INFO  REPL       - Cloning collection data wekan.meteor_accounts_loginServiceConfiguration into wekan.meteor_accounts_loginServiceConfiguration
torodb-stampede_1  | 2017-08-16T10:51:08.115 INFO  REPL       - 0 documents have been cloned to wekan.meteor_accounts_loginServiceConfiguration
torodb-stampede_1  | 2017-08-16T10:51:08.118 INFO  REPL       - Cloning collection data wekan.accountSettings into wekan.accountSettings
torodb-stampede_1  | 2017-08-16T10:51:08.239 INFO  BACKEND    - Created index accountsettings__id_s_a_idx for table accountsettings associated to logical index wekan.accountSettings._id_
torodb-stampede_1  | 2017-08-16T10:51:08.260 INFO  REPL       - 1 documents have been cloned to wekan.accountSettings
torodb-stampede_1  | 2017-08-16T10:51:08.261 INFO  REPL       - Cloning collection data wekan.activities into wekan.activities
torodb-stampede_1  | 2017-08-16T10:51:08.278 INFO  REPL       - 0 documents have been cloned to wekan.activities
torodb-stampede_1  | 2017-08-16T10:51:08.308 INFO  REPL       - Cloning collection data wekan.settings into wekan.settings
torodb-stampede_1  | 2017-08-16T10:51:08.403 INFO  BACKEND    - Created index settings__id_s_a_idx for table settings associated to logical index wekan.settings._id_
torodb-stampede_1  | 2017-08-16T10:51:08.448 INFO  BACKEND    - Created internal index rid_pkey for table settings_mailserver
torodb-stampede_1  | 2017-08-16T10:51:08.464 INFO  BACKEND    - Created internal index did_seq_idx for table settings_mailserver
torodb-stampede_1  | 2017-08-16T10:51:08.516 INFO  REPL       - 1 documents have been cloned to wekan.settings
torodb-stampede_1  | 2017-08-16T10:51:08.522 INFO  REPL       - Cloning collection data wekan.unsaved-edits into wekan.unsaved-edits
torodb-stampede_1  | 2017-08-16T10:51:08.543 INFO  REPL       - 0 documents have been cloned to wekan.unsaved-edits
torodb-stampede_1  | 2017-08-16T10:51:08.573 INFO  REPL       - Cloning collection indexes wekan.card_comments into wekan.card_comments
torodb-stampede_1  | 2017-08-16T10:51:08.589 INFO  REPL       - Index card_comments.wekan._id_ will be cloned
torodb-stampede_1  | 2017-08-16T10:51:08.607 INFO  REPL       - Index card_comments.wekan.cardId_1_createdAt_-1 will be cloned
torodb-stampede_1  | 2017-08-16T10:51:08.656 ERROR REPL       - Fatal error while starting recovery mode: Error while cloning indexes: null
torodb-stampede_1  | 2017-08-16T10:51:08.673 ERROR REPL       - Catched an error on the replication layer. Escalating it
torodb-stampede_1  | 2017-08-16T10:51:08.674 ERROR LIFECYCLE  - Error reported by replication supervisor. Stopping ToroDB Stampede
torodb-stampede_1  | 2017-08-16T10:51:08.684 INFO  REPL       - Recived a request to stop the recovering service
torodb-stampede_1  | 2017-08-16T10:51:08.685 INFO  LIFECYCLE  - Shutting down ToroDB Stampede
torodb-stampede_1  | 2017-08-16T10:51:08.734 INFO  REPL       - Shutting down replication service
torodb-stampede_1  | 2017-08-16T10:51:09.082 INFO  REPL       - Topology service shutted down
torodb-stampede_1  | 2017-08-16T10:51:09.100 INFO  REPL       - Replication service shutted down
mongodb_1          | 2017-08-16T22:51:09.096+0000 I NETWORK  [conn24] end connection 172.18.0.5:36712 (18 connections now open)
mongodb_1          | 2017-08-16T22:51:09.098+0000 I NETWORK  [conn23] end connection 172.18.0.5:36710 (17 connections now open)
torodb-stampede_1  | 2017-08-16T10:51:10.124 INFO  LIFECYCLE  - ExecutorService java.util.concurrent.ScheduledThreadPoolExecutor@2eac3d64[Shutting down, pool size = 1, active threads = 0, queued tasks = 1, completed tasks = 15] did not finished in PT1.001S
torodb-stampede_1  | 2017-08-16T10:51:10.449 INFO  LIFECYCLE  - ToroDB Stampede has been shutted down
wekanpostgresql_torodb-stampede_1 exited with code 0

When I tried the Docker Hub wekanteam:latestdevel tag, it complained that I should add the POSTGRES_PASSWORD environment variable. I added POSTGRES_PASSWORD=wekan to both the ToroDB and PostgreSQL containers, but I still got the same errors, just with a different hash after ScheduledThreadPoolExecutor and completed tasks = 13.

I have not yet tested with the ToroDB snap version whether this happens there too. The Wekan snap is at https://github.com/wekan/wekan-snap, and the Wekan snap edge channel has the newest Wekan.

ERROR c.t.t.d.e.DefaultExecutorFactory - System executor exception

I'm inserting a record into a new collection, and ToroDB throws a strange error.

> db.createCollection('notify');
> db.notify.insert({
...   "section": "nfs",
...   "sequence": "01",
...   "command": "2f6574632f696e69742e642f6e66732d636f6d6d6f6e2073746f70",
...   "on-fail-action": "00",
...   "on-success-action": "10",
...   "expected-exit-code": 0,
...   "exit-code": 0,
...   "output": "53746f7070696e67206e66732d636f6d6d6f6e20287669612073797374656d63746c293a206e66732d636f6d6d6f6e2e736572766963652e0a"
... });

ToroDB's error is:

Starting ToroDB v0.22.1 listening on port 27017
539800 [torod-system-1] ERROR c.t.t.d.e.DefaultExecutorFactory - System executor exception
com.torodb.torod.core.exceptions.ToroRuntimeException: java.lang.RuntimeException: org.jooq.exception.DataAccessException: SQL [{ ? = call "public"."reserve_doc_ids"(?, ?) }]; ERROR: function public.reserve_doc_ids(character varying, integer) does not exist
  Hint: No function matches the given name and argument types. You might need to add explicit type casts.
  Position: 15
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.onFail(SystemDbCallable.java:64) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:22) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.DefaultSystemExecutor$SystemRunnable.call(DefaultSystemExecutor.java:153) ~[torodb.jar:na]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.lang.RuntimeException: org.jooq.exception.DataAccessException: SQL [{ ? = call "public"."reserve_doc_ids"(?, ?) }]; ERROR: function public.reserve_doc_ids(character varying, integer) does not exist
  Hint: No function matches the given name and argument types. You might need to add explicit type casts.
  Position: 15
        at com.torodb.torod.db.sql.AbstractSqlDbConnection.reserveDocIds(AbstractSqlDbConnection.java:217) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.ReserveSubDocIdsCallable.call(ReserveSubDocIdsCallable.java:58) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.ReserveSubDocIdsCallable.call(ReserveSubDocIdsCallable.java:33) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.failableCall(SystemDbCallable.java:72) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:20) ~[torodb.jar:na]
        ... 5 common frames omitted
Caused by: org.jooq.exception.DataAccessException: SQL [{ ? = call "public"."reserve_doc_ids"(?, ?) }]; ERROR: function public.reserve_doc_ids(character varying, integer) does not exist
  Hint: No function matches the given name and argument types. You might need to add explicit type casts.
  Position: 15
        at org.jooq.impl.Utils.translate(Utils.java:1644) ~[torodb.jar:na]
        at org.jooq.impl.DefaultExecuteContext.sqlException(DefaultExecuteContext.java:661) ~[torodb.jar:na]
        at org.jooq.impl.AbstractRoutine.executeCallableStatement(AbstractRoutine.java:373) ~[torodb.jar:na]
        at org.jooq.impl.AbstractRoutine.execute(AbstractRoutine.java:304) ~[torodb.jar:na]
        at org.jooq.impl.AbstractRoutine.execute(AbstractRoutine.java:257) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.Routines.reserveDocIds(Routines.java:68) ~[torodb.jar:na]
        at com.torodb.torod.db.sql.AbstractSqlDbConnection.reserveDocIds(AbstractSqlDbConnection.java:211) ~[torodb.jar:na]
        ... 9 common frames omitted
Caused by: org.postgresql.util.PSQLException: ERROR: function public.reserve_doc_ids(character varying, integer) does not exist
  Hint: No function matches the given name and argument types. You might need to add explicit type casts.
  Position: 15
        at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2270) ~[torodb.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1998) ~[torodb.jar:na]
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) ~[torodb.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:570) ~[torodb.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:420) ~[torodb.jar:na]
        at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:413) ~[torodb.jar:na]
        at com.zaxxer.hikari.proxy.PreparedStatementProxy.execute(PreparedStatementProxy.java:44) ~[torodb.jar:na]
        at com.zaxxer.hikari.proxy.CallableStatementJavassistProxy.execute(CallableStatementJavassistProxy.java) ~[torodb.jar:na]
        at org.jooq.tools.jdbc.DefaultPreparedStatement.execute(DefaultPreparedStatement.java:194) ~[torodb.jar:na]
        at org.jooq.impl.AbstractRoutine.execute0(AbstractRoutine.java:386) ~[torodb.jar:na]
        at org.jooq.impl.AbstractRoutine.executeCallableStatement(AbstractRoutine.java:344) ~[torodb.jar:na]
        ... 13 common frames omitted
539808 [torod-system-1] ERROR c.t.t.d.e.DefaultExecutorFactory - System executor exception
com.torodb.torod.core.exceptions.ToroRuntimeException: java.lang.IllegalArgumentException: At the present time Torod doesn't support '-' as identifier character. Only a alphanumeric letters and '_' are supported
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.onFail(SystemDbCallable.java:64) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:22) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.DefaultSystemExecutor$SystemRunnable.call(DefaultSystemExecutor.java:153) ~[torodb.jar:na]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_60]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_60]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_60]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_60]
Caused by: java.lang.IllegalArgumentException: At the present time Torod doesn't support '-' as identifier character. Only a alphanumeric letters and '_' are supported
        at com.torodb.torod.db.postgresql.IdsFilter.filter(IdsFilter.java:42) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.IdsFilter.escapeAttributeName(IdsFilter.java:35) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.toColumnName(SubDocTable.java:244) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.<init>(SubDocTable.java:109) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.<init>(SubDocTable.java:100) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.tables.SubDocTable.<init>(SubDocTable.java:96) ~[torodb.jar:na]
        at com.torodb.torod.db.postgresql.meta.CollectionSchema.prepareSubDocTable(CollectionSchema.java:230) ~[torodb.jar:na]
        at com.torodb.torod.db.sql.AbstractSqlDbConnection.createSubDocTypeTable(AbstractSqlDbConnection.java:190) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.CreateSubDocTableCallable.call(CreateSubDocTableCallable.java:58) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.CreateSubDocTableCallable.call(CreateSubDocTableCallable.java:34) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.SystemDbCallable.failableCall(SystemDbCallable.java:72) ~[torodb.jar:na]
        at com.torodb.torod.db.executor.jobs.Job.call(Job.java:20) ~[torodb.jar:na]
        ... 5 common frames omitted
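For reference, the IllegalArgumentException above comes from the identifier check in the stack trace (IdsFilter), not from PostgreSQL itself: a document field name containing '-' is rejected while the column name is being prepared. A minimal sketch of that kind of validation, assuming a simple regex (illustrative only, not ToroDB's actual IdsFilter code):

import java.util.regex.Pattern;

public class IdentifierCheck {

    // Rule taken from the error message: only alphanumerics and '_' are accepted.
    private static final Pattern VALID = Pattern.compile("[a-zA-Z0-9_]+");

    public static void requireValid(String fieldName) {
        if (!VALID.matcher(fieldName).matches()) {
            throw new IllegalArgumentException(
                "Unsupported character in identifier: " + fieldName);
        }
    }

    public static void main(String[] args) {
        requireValid("user_name");   // passes
        requireValid("user-name");   // throws: '-' is not allowed
    }
}

So a collection whose documents contain a field such as "user-name" (a hypothetical example) will fail exactly as shown in the trace; renaming such fields before replication avoids the error.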

Many thanks,
Zarmack

Fatal error while starting recovery mode: Error while cloning indexes: null

Hello,

I've followed the installation and setup, and things appear to be running well. However, during the migration it seems to fail with the following stack trace (truncated for brevity):

2017-09-06T03:39:22.362 INFO REPL - Cloning collection indexes mulch.users into mulch.users
2017-09-06T03:39:22.363 INFO REPL - Index users.mulch._id_ will be cloned
2017-09-06T03:39:22.363 INFO REPL - Index users.mulch.firstname_1_lastname_1_email_1_uid_1 will be cloned
2017-09-06T03:39:22.363 INFO REPL - Index users.mulch.uid_1 will be cloned
2017-09-06T03:39:22.363 INFO REPL - Index users.mulch.email_1 will be cloned
2017-09-06T03:39:22.365 ERROR REPL - Fatal error while starting recovery mode: Error while cloning indexes: null
2017-09-06T03:39:22.368 ERROR REPL - Catched an error on the replication layer. Escalating it
2017-09-06T03:39:22.368 ERROR LIFECYCLE - Error reported by replication supervisor. Stopping ToroDB Stampede
2017-09-06T03:39:22.370 INFO LIFECYCLE - Shutting down ToroDB Stampede
2017-09-06T03:39:22.370 INFO REPL - Recived a request to stop the recovering service
2017-09-06T03:39:22.373 INFO REPL - Shutting down replication service
2017-09-06T03:39:22.420 INFO REPL - Topology service shutted down
2017-09-06T03:39:22.423 INFO REPL - Replication service shutted down
2017-09-06T03:39:23.392 INFO LIFECYCLE - ToroDB Stampede has been shutted down

It actually seems to work; by that I mean the PostgreSQL schema mulch contains ALL the tables (way more than there should be), but the core tables are migrated and the data is there.

Can you give me any information on this error? I googled it and didn't find anything of value.
Thanks!

Include/Exclude Filtering Not excluding field

Submitting what I believe is a bug in the latest version, 1.0.0-beta3. Below is my torodb-stampede.yml file:

---
logging: {}
metricsEnabled: false
replication:
  replSetName: "XXX"
  syncSource: "XXX"
  ssl:
    enabled: false
    allowInvalidHostnames: false
    fipsMode: false
  auth:
    mode: "negotiate"
    user: "XXX"
    source: "admin"
  mongopassFile: "/etc/torodb-stampede/.mongopass"
  include:
    database_name:
      firstCollection: "*"
      secondCollection: "*"
  exclude:
    database_name:
      firstCollection:
        - name: "_field"
      secondCollection:
        - name: "_field"
backend:
  pool:
    connectionPoolTimeout: 10000
    connectionPoolSize: 30
  postgres:
    host: "XXX"
    port: 5432
    database: "XXX"
    user: "XXX"
    toropassFile: "/etc/torodb-stampede/.toropass"
    applicationName: "toro"
    ssl: false

For two of my collections, I'm trying to include every field, except for "_field". This field begins with an underscore and is an object that I don't want ToroDB to read/parse.

Following the last example in https://www.torodb.com/stampede/docs/1.0.0-beta3/configuration/filtered-replication/, this appears to be the correct configuration, doesn't it?

From the logs:
Nov 03 01:09:51 XXX torodb-stampede[XXX]: 2017-11-03T01:09:51.883 INFO BACKEND - Created internal index did_pid_seq_idx for table firstcollection__field_role_xxx
And in postgres, I can see the data being inserted.

Any help is appreciated!

com.eightkdata jar

Hi all,
I've been reviewing this project for the past couple of days, and it looks very interesting and well-documented. I'm hoping to get it up and running and see if I can contribute somehow.
Unfortunately, I haven't been able to compile it since I'm missing the com.eightkdata jar, and I'm not able to find it online anywhere. Could you please point me in the right direction?

Thanks,
Max.

WARN REPL - Expected string type for field notEligibleForMBSBackups. Found STRING

I am trying to migrate data from a remote m-lab MongoDB database. The connection authenticates fine, but I am still unable to get any tables or data pulled; instead I keep getting the following warning: WARN REPL - Expected string type for field notEligibleForMBSBackups. Found STRING. Any leads on what might be causing this?

JVM crash while on Stampede replication

2016-12-16T10:12:48.988 INFO 'StampedeService STARTING' c.t.s.StampedeService Starting up ToroDB Stampede
2016-12-16T10:12:48.999 INFO 'StampedeService STARTING' c.t.b.p.PostgreSqlDbBackend Configured PostgreSQL backend at localhost:5432
2016-12-16T10:12:49.455 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool session with size 28 and level TRANSACTION_REPEATABLE_READ
2016-12-16T10:12:49.496 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool system with size 1 and level TRANSACTION_REPEATABLE_READ
2016-12-16T10:12:49.511 INFO 'PostgreSqlDbBackend STARTING' c.t.b.AbstractDbBackendService Created pool cursors with size 1 and level TRANSACTION_REPEATABLE_READ
2016-12-16T10:12:50.119 INFO 'BackendBundleImpl STARTING' c.t.b.m.AbstractSchemaUpdater Schema 'torodb' not found. Creating it...
2016-12-16T10:12:50.351 INFO 'BackendBundleImpl STARTING' c.t.b.m.AbstractSchemaUpdater Schema 'torodb' created
2016-12-16T10:12:50.509 INFO 'StampedeService STARTING' c.t.s.StampedeService Database is not consistent. Cleaning it up
2016-12-16T10:12:50.572 INFO 'StampedeService STARTING' c.t.s.StampedeService Replicating from seeds: localhost:27017
2016-12-16T10:12:50.901 INFO 'MongodbReplBundle STARTING' c.t.m.r.MongodbReplBundle Starting replication service
2016-12-16T10:12:51.073 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator Waiting for 2  pings from other members before syncing
2016-12-16T10:12:51.089 INFO 'topology-executor-0' c.t.m.c.p.MemberHeartbeatData Member ushuaia:27017 is now in state RS_PRIMARY
2016-12-16T10:12:52.074 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator Waiting for 1  pings from other members before syncing
2016-12-16T10:12:53.075 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator Waiting for 1  pings from other members before syncing
2016-12-16T10:12:54.096 INFO 'topology-executor-0' c.t.m.r.t.TopologyCoordinator syncing from: ushuaia:27017
2016-12-16T10:12:54.096 INFO 'TopologyService STARTING' c.t.m.r.t.TopologyService Topology service started
2016-12-16T10:12:54.145 WARN 'ReplCoordinator STARTING' c.t.m.r.ReplCoordinator loadStoredConfig() is not implemented yet
2016-12-16T10:12:54.145 INFO 'ReplCoordinator STARTING' c.t.m.r.ReplCoordinator Database is not consistent.
2016-12-16T10:12:54.146 INFO 'MongodbReplBundle STARTING' c.t.m.r.MongodbReplBundle Replication service started
2016-12-16T10:12:54.146 INFO 'StampedeService STARTING' c.t.s.StampedeService ToroDB Stampede is now running
2016-12-16T10:12:54.147 INFO 'repl-coord-starting-recovery' c.t.m.r.ReplCoordinatorStateMachine Starting RECOVERY mode
2016-12-16T10:12:54.164 INFO 'RecoveryService' c.t.m.r.RecoveryService Starting RECOVERY service
2016-12-16T10:12:54.165 INFO 'RecoveryService' c.t.m.r.RecoveryService Starting initial sync
2016-12-16T10:12:54.180 INFO 'RecoveryService' c.t.s.DefaultConsistencyHandler Consistent state has been set to 'false'
2016-12-16T10:12:54.182 INFO 'RecoveryService' c.t.m.r.RecoveryService Using node ushuaia:27017 to replicate from
2016-12-16T10:12:54.201 INFO 'RecoveryService' c.t.m.r.RecoveryService Remote database cloning started
2016-12-16T10:12:54.286 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index did_pkey for table oplog_replication
2016-12-16T10:12:54.320 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index rid_pkey for table oplog_replication_lastappliedoplogentry
2016-12-16T10:12:54.330 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index did_idx for table oplog_replication_lastappliedoplogentry
2016-12-16T10:12:54.340 INFO 'RecoveryService' c.t.b.SharedWriteBackendTransactionImpl Created internal index did_seq_idx for table oplog_replication_lastappliedoplogentry
2016-12-16T10:12:54.387 INFO 'RecoveryService' c.t.m.r.RecoveryService Local databases dropping started
2016-12-16T10:12:54.406 INFO 'RecoveryService' c.t.m.r.RecoveryService Local databases dropping finished
2016-12-16T10:12:54.406 INFO 'RecoveryService' c.t.m.r.RecoveryService Remote database cloning started
2016-12-16T10:12:54.414 INFO 'RecoveryService' c.t.m.u.c.AkkaDbCloner Collection aht.githubarchive will be cloned
2016-12-16T10:12:54.416 INFO 'RecoveryService' c.t.m.r.c.i.DropCollectionReplImpl Dropping collection aht.githubarchive
2016-12-16T10:12:54.416 INFO 'RecoveryService' c.t.m.r.c.i.DropCollectionReplImpl Trying to drop collection aht.githubarchive but it has not been found. This is normal when reapplying oplog during a recovery. Ignoring operation
2016-12-16T10:12:54.416 INFO 'RecoveryService' c.t.m.r.c.i.CreateCollectionReplImpl Creating collection aht.githubarchive
2016-12-16T10:12:54.426 INFO 'RecoveryService' c.t.m.u.c.AkkaDbCloner Cloning collection data aht.githubarchive into aht.githubarchive
2016-12-16T10:12:55.201 INFO 'db-cloner-1' c.t.b.SharedWriteBackendTransactionImpl Created index githubarchive__id_x_a_idx for table githubarchive associated to logical index aht.githubarchive._id_
2016-12-16T10:13:04.575 INFO 'db-cloner-3' c.t.m.u.c.AkkaDbCloner 11000 documents have been cloned to aht.githubarchive
2016-12-16T10:13:14.706 INFO 'db-cloner-2' c.t.m.u.c.AkkaDbCloner 114000 documents have been cloned to aht.githubarchive
2016-12-16T10:13:24.717 INFO 'db-cloner-2' c.t.m.u.c.AkkaDbCloner 236000 documents have been cloned to aht.githubarchive
2016-12-16T10:13:34.815 INFO 'db-cloner-1' c.t.m.u.c.AkkaDbCloner 352000 documents have been cloned to aht.githubarchive
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007f352d024ec2, pid=11621, tid=0x00007f34e1446700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_111-b14) (build 1.8.0_111-b14)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.111-b14 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# j  com.torodb.backend.postgresql.converters.sql.StringSqlBinding.set(Ljava/sql/PreparedStatement;ILjava/lang/Object;)V+4
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /tmp/kk/hs_err_pid11621.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#
[1]    11621 abort      /tmp/torodb-stampede-1.0.0-beta1/bin/torodb-stampede --backend-database torod


JVM trace file attached below:

hs_err_pid11621.log.txt

Change syncSource param to be a list of seeds

At the moment, ToroDB Stampede has a configuration parameter called syncSource whose value is the seed Stampede uses to discover the other nodes of the replica set. As explained on the devel email group, it may be a failure point if Stampede starts up while that sync source is down. To deal with that, the user needs to provide scripts that detect the situation and restart Stampede with another seed. It would be nice to change the syncSource parameter to a list of seeds, so that if the first one does not respond, Stampede tries the next one.
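As a rough illustration of the requested behaviour (a sketch only, not the actual Stampede topology code; host names are made up), seed selection could simply walk the list and keep the first entry that answers:

import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

public class SeedSelection {

    // Proposed behaviour: instead of a single syncSource, accept a list of seeds
    // and use the first one that is reachable.
    static Optional<String> firstReachable(List<String> seeds, Predicate<String> isReachable) {
        return seeds.stream().filter(isReachable).findFirst();
    }

    public static void main(String[] args) {
        List<String> seeds = Arrays.asList("mongo1:27017", "mongo2:27017", "mongo3:27017");
        // Stubbed reachability check; a real implementation would attempt a connection or ping.
        Optional<String> syncSource = firstReachable(seeds, host -> !host.startsWith("mongo1"));
        System.out.println(syncSource.orElse("no seed reachable"));
    }
}

With that behaviour, if the first seed is down the next one is tried automatically, removing the need for external scripts that restart Stampede with a different seed.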

Unable to start when using Amazon RDS

Hi there,

When I try to start torodb with an Amazon RDS backend (PostgreSQL 9.4) as the root user, I get the following error:

Exception in thread "Thread-3" java.lang.RuntimeException: org.postgresql.util.PSQLException: ERROR: must be owner of type character varying or type jsonb
    at com.torodb.torod.db.sql.AbstractSqlDbWrapper.initialize(AbstractSqlDbWrapper.java:116)
    at com.torodb.torod.db.executor.DefaultExecutorFactory.initialize(DefaultExecutorFactory.java:87)
    at com.toro.torod.connection.DefaultTorod.start(DefaultTorod.java:67)
    at com.torodb.Main.run(Main.java:155)
    at com.torodb.Main.access$100(Main.java:46)
    at com.torodb.Main$2.run(Main.java:134)
Caused by: org.postgresql.util.PSQLException: ERROR: must be owner of type character varying or type jsonb
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2270)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1998)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:570)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:406)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:398)
    at com.zaxxer.hikari.proxy.StatementProxy.execute(StatementProxy.java:83)
    at com.zaxxer.hikari.proxy.StatementJavassistProxy.execute(StatementJavassistProxy.java)
    at com.torodb.torod.db.postgresql.meta.TorodbMeta.executeSql(TorodbMeta.java:314)
    at com.torodb.torod.db.postgresql.meta.TorodbMeta.createCast(TorodbMeta.java:340)
    at com.torodb.torod.db.postgresql.meta.TorodbMeta.<init>(TorodbMeta.java:71)
    at com.torodb.torod.db.sql.AbstractSqlDbWrapper.initialize(AbstractSqlDbWrapper.java:107)

DateTimeParseException - from nodejs sails application

Hello,
I'm developing a web application using Node.js (specifically Sails.js). Whenever I create a new document with dates (Sails.js adds createdAt and updatedAt), I get a DateTimeParseException error, for example:
com.torodb.torod.core.exceptions.ToroRuntimeException: org.threeten.bp.format.DateTimeParseException: Text '2016-05-11T14:30:00' could not be parsed at index 19.

Everything works fine if I use a "regular" mongod.

Thank you
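For what it is worth, the text in that exception carries no zone offset, which is what makes the parse fail at index 19 (right after the seconds). A minimal sketch reproducing the behaviour with java.time, whose API the ThreeTen backport in the trace mirrors; attaching an explicit offset is shown only to illustrate the parsing rules, not how ToroDB handles dates internally:

import java.time.LocalDateTime;
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeParseException;

public class DateParseRepro {

    public static void main(String[] args) {
        String text = "2016-05-11T14:30:00"; // as produced by the Sails.js app: no zone offset

        try {
            OffsetDateTime.parse(text); // the default formatter expects an offset after the seconds
        } catch (DateTimeParseException e) {
            System.out.println(e.getMessage()); // "... could not be parsed at index 19"
        }

        // Parsing as a local date-time and attaching an explicit offset succeeds:
        OffsetDateTime fixed = LocalDateTime.parse(text).atOffset(ZoneOffset.UTC);
        System.out.println(fixed); // 2016-05-11T14:30Z
    }
}

If the application can send timestamps with an explicit offset (for example 2016-05-11T14:30:00Z), the value becomes parseable by that formatter; whether Stampede should also accept offset-less values is a separate question for the maintainers.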

Trying to build Stampede from source results in error

Hello,

I have another issue open (#205), and the stated fix is to build from source. I attempted to do so by following the instructions, and it fails at this step:

➜  stampede git:(master) mvn clean package -P assembler,prod
[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[FATAL] Non-resolvable parent POM for com.torodb.stampede:stampede-pom:1.0.0-SNAPSHOT: Could not find artifact com.torodb:parent-pom:pom:1.0.1-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 4, column 13
 @
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR]
[ERROR]   The project com.torodb.stampede:stampede-pom:1.0.0-SNAPSHOT (/private/tmp/stampede/pom.xml) has 1 error
[ERROR]     Non-resolvable parent POM for com.torodb.stampede:stampede-pom:1.0.0-SNAPSHOT: Could not find artifact com.torodb:parent-pom:pom:1.0.1-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 4, column 13 -> [Help 2]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException

Any help is appreciated. Thanks!
