pillar's Issues

Migration fails silently when no directory with datastore name is found

Pillar looks for a folder named after the datastore and, if none is found, terminates silently. The problem is that when migration scripts are placed in some custom directory and that directory's path is passed to Pillar's CLI, the migration is not executed and no warning or error message is shown. This is counterintuitive and obscure behavior.

Please either change this (make Pillar load files from the root of the given folder), or explicitly note the behavior in the README and emit a warning from the Registry class when the datastore directory does not exist.
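
A sketch of the requested warning (hypothetical code, not pillar's actual Registry internals; migrationFilesIn is an illustrative helper name):

  import java.io.File

  def migrationFilesIn(directory: File): Seq[File] =
    if (!directory.isDirectory) {
      // Surface the problem instead of silently applying no migrations.
      System.err.println(s"WARNING: migration directory ${directory.getAbsolutePath} does not exist; no migrations will be applied")
      Seq.empty
    } else {
      directory.listFiles().toSeq
    }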

Publish new release / project progress

Some changes are waiting to be released. It would be great if #8 could make it into the release as well.

What can we do to get changes into pillar more quickly, and to get them released?

I can offer to take care of releases to Sonatype if that's an issue (I should have commit rights for this). Is there anything else I could do to speed up progress?

Cheers,
Martin

Allow specifying consistency levels for reads/writes of applied migrations

We just ran into an issue that was (most probably) caused by the default consistency level (ONE): running migrate fails with an AlreadyExistsException, which is (most probably) caused by an incomplete read at the default consistency of ONE.

We fixed this issue (for us) by changing the default/session consistency level in the sbt-pillar-plugin to QUORUM.

Because we're running in a multi-DC environment, it would be ideal to be able to specify different consistency levels for reads and writes: writes should use EACH_QUORUM by default, which allows reads to run at LOCAL_QUORUM. Assuming the applied migrations are read more often than they are written, this combination gives the best overall performance (relatively fast reads).
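
A minimal sketch of what this could look like with the DataStax Java driver (the keyspace name and literal values are illustrative; the column names mirror the applied_migrations table shown elsewhere in these issues):

  import com.datastax.driver.core.{Cluster, ConsistencyLevel, SimpleStatement}

  val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
  val session = cluster.connect("migrations_keyspace")

  // Record an applied migration at EACH_QUORUM so that every data center acknowledges it.
  val write = new SimpleStatement(
    "INSERT INTO applied_migrations (authored_at, description, applied_at) VALUES (?, ?, ?)",
    new java.util.Date(1469630066000L), "example migration", new java.util.Date())
  write.setConsistencyLevel(ConsistencyLevel.EACH_QUORUM)
  session.execute(write)

  // After EACH_QUORUM writes, LOCAL_QUORUM reads are sufficient and stay within the local DC.
  val read = new SimpleStatement("SELECT * FROM applied_migrations")
  read.setConsistencyLevel(ConsistencyLevel.LOCAL_QUORUM)
  session.execute(read)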

Failing to install pillar using Maven

I've tried to install pillar using Maven, and I got this error:

› mvn com.chrisomeara:pillar_2.10:2.0.1
[INFO] Scanning for projects...
Downloading: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/maven-metadata.xml
Downloaded: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/maven-metadata.xml (395 B at 0.6 KB/sec)
Downloading: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.pom
Downloaded: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.pom (3 KB at 35.6 KB/sec)
Downloading: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.jar
Downloaded: http://repo.maven.apache.org/maven2/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.jar (92 KB at 345.8 KB/sec)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.578s
[INFO] Finished at: Tue Feb 03 16:58:53 PST 2015
[INFO] Final Memory: 4M/81M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to parse plugin descriptor for com.chrisomeara:pillar_2.10:2.0.1 (/Users/itay/.m2/repository/com/chrisomeara/pillar_2.10/2.0.1/pillar_2.10-2.0.1.jar): No plugin descriptor found at META-INF/maven/plugin.xml -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginDescriptorParsingException

Also, is it possible to install pillar as a binary?
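
For context on the failure above: the mvn groupId:artifactId:version form invokes the artifact as a Maven plugin, and the error shows pillar ships as a plain library jar with no plugin descriptor. In an sbt build (pillar's native ecosystem), the same artifact (coordinates taken from the log above) would be declared as an ordinary dependency:

  libraryDependencies += "com.chrisomeara" % "pillar_2.10" % "2.0.1"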

Migrations and Consistency

Hi,

I am wondering: given that migrations often involve schema changes followed by data inserts or updates (e.g. an update on a column just added by ALTER TABLE), what are your experiences with the fact that C* provides no way to ensure the schema change has actually taken effect on all nodes before the update hits the cluster?

Do you have any pointers to community discussions suggesting that such migrations are even possible in a predictable fashion?

Jan
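
Not part of the original issue, but for reference: the DataStax Java driver exposes a schema-agreement check that can be polled between a DDL statement and any subsequent DML. A sketch (the retry policy here is arbitrary):

  import com.datastax.driver.core.Cluster

  def awaitSchemaAgreement(cluster: Cluster, maxAttempts: Int = 20): Boolean = {
    // checkSchemaAgreement returns true once all live hosts report the same schema version.
    var attempts = 0
    while (!cluster.getMetadata.checkSchemaAgreement && attempts < maxAttempts) {
      Thread.sleep(500)
      attempts += 1
    }
    cluster.getMetadata.checkSchemaAgreement
  }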

support copy command

Support 'copy' command.

Actually, Cassandra doesn't support renaming a table, so to rename one we need to:

  • migrate the data out to a file (or similar) with the 'copy' command;
  • drop the table;
  • recreate it with the new name;
  • use 'copy' again to load the data from the file into the renamed table.

Example:
-- description: recreate foobar table
-- authoredAt: lucasoliveiracampos
-- up:

-- stage: 1
copy foobar to 'foobar_2017_08_19_11_17_00_data.csv';

-- stage: 2
drop table foobar;

-- stage: 3
create table foobar (
foo text,
bar bigint,
primary key ((foo))
);

-- stage: 4
copy foobar from 'foobar_2017_08_19_11_17_00_data.csv';

-- down:

Provide SLF4J binding when running command line application

When running the command line app, SLF4J outputs the following:

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

The command line app should bind SLF4J so that log messages are output correctly.

This should be done in a way that does not conflict with SLF4J bindings for applications that include Pillar as a library.
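
One way this could be done, assuming the CLI is built with sbt and the simple binding is acceptable (artifact and version are illustrative): scope an SLF4J binding to the command line module only, so applications embedding pillar remain free to choose their own.

  // In the CLI module's build definition only, never in the library's:
  libraryDependencies += "org.slf4j" % "slf4j-simple" % "1.7.25" % "runtime"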

missing EOF at 'CREATE' when running a multi-stage migration script

Trying to execute a migration script with multiple stages (from the documentation) and getting the following error:

com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
	at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:58)
	at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:24)
	at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
	at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
	at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:63)
	at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:39)
	at de.kaufhof.pillar.Migration$class.executeUpStatement(Migration.scala:38)
	at de.kaufhof.pillar.ReversibleMigration.executeUpStatement(Migration.scala:75)
	at de.kaufhof.pillar.CassandraMigrator$$anonfun$migrate$2.apply(CassandraMigrator.scala:12)
	at de.kaufhof.pillar.CassandraMigrator$$anonfun$migrate$2.apply(CassandraMigrator.scala:12)
	at scala.collection.immutable.List.foreach(List.scala:318)
	at de.kaufhof.pillar.CassandraMigrator.migrate(CassandraMigrator.scala:12)
	at io.ino.sbtpillar.Plugin$Pillar$.migrate(Plugin.scala:194)
	at io.ino.sbtpillar.Plugin$Pillar$$anonfun$migrate$1.apply(Plugin.scala:187)
	at io.ino.sbtpillar.Plugin$Pillar$$anonfun$migrate$1.apply(Plugin.scala:186)
	at io.ino.sbtpillar.Plugin$Pillar$.withSession(Plugin.scala:158)
	at io.ino.sbtpillar.Plugin$Pillar$.migrate(Plugin.scala:186)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3$$anonfun$apply$5.apply(Plugin.scala:59)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3$$anonfun$apply$5.apply(Plugin.scala:55)
	at io.ino.sbtpillar.Plugin$Pillar$.withCassandraUrl(Plugin.scala:131)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3.apply(Plugin.scala:55)
	at io.ino.sbtpillar.Plugin$$anonfun$taskSettings$3.apply(Plugin.scala:51)
	at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
	at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
	at sbt.std.Transform$$anon$4.work(System.scala:63)
	at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
	at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:228)
	at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
	at sbt.Execute.work(Execute.scala:237)
	at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
	at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:228)
	at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
	at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
	at com.datastax.driver.core.Responses$Error.asException(Responses.java:132)
	at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:179)
	at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:184)
	at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:43)
	at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:798)
	at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
	at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1005)
	at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:928)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
	at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
	at java.lang.Thread.run(Thread.java:745)
[error] An error occurred while performing task: com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
[error] (sharingAdapter/*:migrate) com.datastax.driver.core.exceptions.SyntaxError: line 8:0 missing EOF at 'CREATE' (... KEY (id))-- stage: 2[CREATE] TABLE...)
[error] Total time: 1 s, completed Jan 16, 2017 5:43:40 PM

Multiple DML or DDL Statements per migration?

Hi,

I wonder whether it is possible to place more than one statement in a single migration file, either as two migrations in one file (as a YAML multi-doc) or simply as separate statements in a single migration.

I tried but it did not work, and I am unsure whether it should.

Can anyone clarify?

Jan

Build for Scala 2.12

This looks like it might take some work. At the very least, Argot needs to be replaced with a maintained CLI argument parser that is published for 2.12.

authored_at timestamp is set to a long time in the past during migration

I have been playing with Pillar for Cassandra schema migrations and noticed that the applied_migrations.authored_at column is not populated correctly.

For instance, my migration CQL files have the following authoredAt markup:

rdawe@cstar:~/MO-3530/test-pillar$ grep -i authoredat conf/pillar/migrations/mydata/*
conf/pillar/migrations/mydata/1420779600_create_test.cql:-- authoredAt: 1420779600
conf/pillar/migrations/mydata/1420783200_add_column_test.cql:-- authoredAt: 1420783200
rdawe@cstar:~/MO-3530/test-pillar$ perl -e 'print gmtime(1420779600)."\n";'
Fri Jan  9 05:00:00 2015
rdawe@cstar:~/MO-3530/test-pillar$ perl -e 'print gmtime(1420783200)."\n";'
Fri Jan  9 06:00:00 2015

and this results in the following in the applied_migrations table:

rdawe@cstar:~/MO-3530/test-pillar$ cqlsh
Connected to Test Cluster at localhost:9160.
[cqlsh 4.1.1 | Cassandra 2.0.8 | CQL spec 3.1.1 | Thrift protocol 19.39.0]
Use HELP for help.
cqlsh> USE test;
cqlsh:test> SELECT * from applied_migrations ;

 authored_at              | description                 | applied_at
--------------------------+-----------------------------+--------------------------
 1970-01-17 05:39:43-0500 | add column to example table | 2015-01-13 08:30:45-0500
 1970-01-17 05:39:39-0500 |           create test table | 2015-01-13 08:30:44-0500

(2 rows)
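
Not stated in the report, but the numbers are consistent with a seconds-versus-milliseconds mix-up: the authoredAt values are Unix timestamps in seconds, and the column looks as if those values were stored as milliseconds. A quick check (times in the comments are UTC):

  new java.util.Date(1420779600L * 1000)  // Fri Jan 09 05:00:00 2015 -- the intended instant
  new java.util.Date(1420779600L)         // Sat Jan 17 10:39:39 1970 -- i.e. 05:39:39 -0500, as in the table above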

Allow loading migrations from a jar file

We are using Pillar inside a Play 2 application where it is deployed as a jar, and one cannot read migrations from a directory inside a jar. We worked around this with something like the following. I am not creating a pull request as I am not sure how to integrate this, whether it's desired behaviour, or how to test it correctly. Sorry.

  private val registry = Registry(loadMigrationsFromJarOrFilesystem())
  private val migrator = Migrator(registry, new LoggerReporter)

  private def loadMigrationsFromJarOrFilesystem() = {
    val migrationsDir = "migrations/"
    // List resource names under migrations/, whether on the filesystem or inside the jar.
    val migrationNames = JarUtils.getResourceListing(getClass, migrationsDir).toList.filter(_.nonEmpty)
    val parser = Parser()

    // Open each migration as a classpath resource stream, parse it, and always close the stream.
    migrationNames.map(name => getClass.getClassLoader.getResourceAsStream(migrationsDir + name)).map {
      stream =>
        try {
          parser.parse(stream)
        } finally {
          stream.close()
        }
    }.toList
  }

where JarUtils.getResourceListing is taken from the top answer here: http://stackoverflow.com/questions/6247144/how-to-load-a-folder-from-a-jar

Hope that helps

When using multi-stage migrations, a failure in stage 2 leaves stage 1 committed, even with a proper down statement

Based on the documentation on staged migrations in this project's README.md, given the migration file:

-- description: Example multistage
-- authoredAt: 1474034307117
-- up:

-- stage: 1
CREATE TYPE foo (
  username text,
  comments text
);

-- stage: 2
CREATE TYPE bar (
  line_one text,
  line_two text
);

-- stage: 3
CREATE TYPE invalid (
  herp derp,
  omg syntax
);

-- down:

-- stage: 1
DROP TYPE invalid
;

-- stage: 2
DROP TYPE bar
;

-- stage: 3
DROP TYPE foo
;

When run, it errors on the obvious syntax bomb in stage 3 and does not write a row into the applied_migrations table (meaning this file will re-execute on the next run).

However, this leaves behind the types foo and bar, so the next run produces a different CQL error: "type foo already exists".

It feels like the tool should attempt to run the down stages when the up stages fail, to bring the schema back into a state where the whole migration can be run again.

This behavior forces you to use "IF NOT EXISTS" on pretty much all statements to mitigate it, which is an effective workaround but could be undesirable in practice.
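
For reference, the workaround applied to stage 1 of the example above (CQL accepts IF NOT EXISTS on CREATE TYPE):

-- stage: 1
CREATE TYPE IF NOT EXISTS foo (
  username text,
  comments text
);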

Config file has to be in install location

The README says the command line tool can be used by placing an application.conf in ./conf, but that doesn't seem to be the case: it only works if JAVA_OPTS="-Dconfig.file=conf/application.conf" is set. Otherwise the conf file has to be placed in /opt/pillar.

problem with cassandra 3

Hi,

It seems that pillar is not compatible with Cassandra 3.
I'm using
"com.datastax.cassandra" % "cassandra-driver-core" % "3.0.0",

And when using the latest version of pillar, I get the following error:

Exception encountered when attempting to run a suite with class name: common.utils.cassandra.ConnectionAndQuerySpec *** ABORTED ***
[info]   java.lang.NoSuchMethodError: com.datastax.driver.core.Row.getDate(Ljava/lang/String;)Ljava/util/Date;
[info]   at com.chrisomeara.pillar.AppliedMigrations$$anonfun$apply$1.apply(AppliedMigrations.scala:12)
[info]   at com.chrisomeara.pillar.AppliedMigrations$$anonfun$apply$1.apply(AppliedMigrations.scala:12)
[info]   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
[info]   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
[info]   at scala.collection.Iterator$class.foreach(Iterator.scala:743)
[info]   at scala.collection.AbstractIterator.foreach(Iterator.scala:1195)
[info]   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
[info]   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
[info]   at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
[info]   at scala.collection.AbstractTraversable.map(Traversable.scala:104)
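
Not part of the original report, but this NoSuchMethodError matches a known breaking change in the DataStax Java driver 3.0: Row.getDate now returns com.datastax.driver.core.LocalDate, and the accessor returning java.util.Date moved to getTimestamp. A sketch of the presumed fix (the column name is illustrative):

  import com.datastax.driver.core.Row

  def appliedAt(row: Row): java.util.Date =
    row.getTimestamp("applied_at")  // was row.getDate("applied_at") on driver 2.x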

Support migrations with multiple statements (batch)

It would be great if a single migration file could contain multiple statements.

I tried to create two tables in one migration file, but this seems not to be supported. The migration failed with this error:

com.datastax.driver.core.exceptions.SyntaxError: line 9:0 missing EOF at 'CREATE'
    at com.datastax.driver.core.exceptions.SyntaxError.copy(SyntaxError.java:35) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:256) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:172) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.SessionManager.execute(SessionManager.java:91) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.datastax.driver.core.SessionManager.execute(SessionManager.java:83) ~[cassandra-driver-core-2.0.1.jar:na]
    at com.streamsend.pillar.Migration$class.executeUpStatement(Migration.scala:38) ~[pillar_2.10-1.0.3.jar:1.0.3]

Looking at the Parser and Migration classes, it seems obvious that this is simply not supported.

To support this (without really parsing CQL), perhaps a pragmatic and simple solution would be to use a statement separator, e.g. a line containing only -- with an empty line above and below, or something like that.

What do you think?
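
A minimal sketch of the separator idea (hypothetical helper, not pillar code): split the migration body on lines consisting solely of --, then execute each chunk as its own statement.

  // Split a migration body into statements at lines containing only "--".
  def splitStatements(body: String): Seq[String] =
    body.split("(?m)^--\\s*$").map(_.trim).filter(_.nonEmpty).toSeq

  // splitStatements(upBody).foreach(session.execute(_))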

Support authenticated Cassandra

Are there any plans to support authentication for Cassandra clusters? I suspect many users will have this requirement, especially in production environments.
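
For context, the underlying DataStax Java driver already supports plain username/password authentication, so presumably pillar would only need to expose it when building the Cluster:

  import com.datastax.driver.core.Cluster

  val cluster = Cluster.builder()
    .addContactPoint("127.0.0.1")
    .withCredentials("cassandra", "cassandra")  // for Cassandra's PasswordAuthenticator
    .build()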

Add migration lock table

When running migrations in parallel (e.g. from integration tests running concurrently, or from parallel app deployments), conflicts can occur because the same migration is run by different clients. The error might look like this:

com.datastax.driver.core.exceptions.InvalidQueryException: Invalid
column name mycolumn because it conflicts with an existing column

To prevent this, a separate migrations_lock table should/could be used, in which a lock is held for the time the migrations are performed (Liquibase has this, for example).
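
A sketch of what such a lock could look like using Cassandra lightweight transactions (table and value names are illustrative, not an agreed design):

CREATE TABLE IF NOT EXISTS migrations_lock (
  name text PRIMARY KEY,
  locked_by text
);

-- Acquire: the conditional insert succeeds for exactly one client.
INSERT INTO migrations_lock (name, locked_by) VALUES ('pillar', 'client-a') IF NOT EXISTS;

-- Release once migrations are finished.
DELETE FROM migrations_lock WHERE name = 'pillar' IF EXISTS;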

How to ignore applied scripts?

Hi
Having trouble with pillar when using it in a Play app.
If I have some scripts already applied, how do I add a new one such that the applied ones are ignored and only the new ones are applied?

I plan to use this in a continuous integration environment so that I can simply add a script, then launch and upgrade, a bit like Play evolutions.
https://github.com/typesafehub/playframework/blob/master/framework/src/play-jdbc/src/main/scala/play/api/db/evolutions/Evolutions.scala

Thanks
Peter

Multi-stage queries fail on C* 3.0.9

Running the example multi-stage migration fails on C* 3.0.9:

-- description: creates users and groups tables
-- authoredAt: 1469630066000
-- up:

-- stage: 1
CREATE TABLE groups (
  id uuid,
  name text,
  PRIMARY KEY (id)
)

-- stage: 2
CREATE TABLE users (
  id uuid,
  group_id uuid,
  username text,
  password text,
  PRIMARY KEY (id)
)


-- down:

-- stage: 1
DROP TABLE users

-- stage: 2
DROP TABLE groups

Is 3.0.9 not supported?

cqlsh 5.0.1
CQL spec 3.4.0
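
One observation, not a confirmed diagnosis: unlike the working staged example in the copy-command issue above, none of these statements are terminated with semicolons, which matches the "missing EOF at 'CREATE'" failure reported earlier, where two CREATE statements were concatenated into one. A version with terminated statements would be:

-- stage: 1
CREATE TABLE groups (
  id uuid,
  name text,
  PRIMARY KEY (id)
);

-- stage: 2
CREATE TABLE users (
  id uuid,
  group_id uuid,
  username text,
  password text,
  PRIMARY KEY (id)
);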

Auto USE keyspace in session once Pillar is initialized

With 2.0.0 I run into the issue that, after creating a new keyspace with Pillar.initialize, I get
[error] An error occurred while performing task: com.datastax.driver.core.exceptions.InvalidQueryException: no keyspace has been specified
when running the real migrations. One solution is to execute
session.execute(s"USE $keyspace")
AFTER Pillar.initialize has run but before Pillar.migrate.

I think Pillar should do this automatically, meaning Pillar.migrate should itself issue "USE $keyspace".

Applied migration table is not ordered by the correct authored_at time

-- description: V001 - creates experiment table
-- authoredAt: 1370028263000
-- up:
-- stage: 1
create table experiment (
    id uuid,
...
)

-- description: V025 - creates user assignment export
-- authoredAt: 1370028263024
-- up:

create table user_assignment_export(
...
)

cqlsh:wassabi_experiment_local> select * from applied_migrations;

 authored_at              | description                                                                         | applied_at
--------------------------+-------------------------------------------------------------------------------------+--------------------------
 2013-05-31 19:24:23+0000 |                                               V025 - creates user assignment export | 2016-08-05 00:17:01+0000
 2013-05-31 19:24:23+0000 |                                                               V002 - creates bucket | 2016-08-05 00:16:33+0000
 2013-05-31 19:24:23+0000 |                                                        V020 - creates user feedback | 2016-08-05 00:16:56+0000
 2013-05-31 19:24:23+0000 |                                              V004 - creates experiment label lookup | 2016-08-05 00:16:36+0000
 2013-05-31 19:24:23+0000 |                                                            V023 - creates audit log | 2016-08-05 00:16:59+0000
 2013-05-31 19:24:23+0000 |                      V026 - insert admin user to user_roles, app_users, superadmins | 2016-08-05 00:17:01+0000
 2013-05-31 19:24:23+0000 |                                                            V010 - creates exclusion | 2016-08-05 00:16:43+0000
 2013-05-31 19:24:23+0000 |                                                 V007 - creates experiment audit log | 2016-08-05 00:16:40+0000
 2013-05-31 19:24:23+0000 |                                                            V017 - creates app roles | 2016-08-05 00:16:50+0000
 2013-05-31 19:24:23+0000 |                                                              V021 - creates staging | 2016-08-05 00:16:57+0000
 2013-05-31 19:24:23+0000 |                                                     V001 - creates experiment table | 2016-08-05 00:16:32+0000
 2013-05-31 19:24:23+0000 |                                             V022 - creates bucket assingment counts | 2016-08-05 00:16:58+0000
 2013-05-31 19:24:23+0000 |                                                           V016 - creates user roles | 2016-08-05 00:16:49+0000
 2013-05-31 19:24:23+0000 |                                                       V015 - creates app page index | 2016-08-05 00:16:48+0000
 2013-05-31 19:24:23+0000 |                                              V011 - creates experiment user look up | 2016-08-05 00:16:44+0000
 2013-05-31 19:24:23+0000 |                                                      V005 - creates user assignment | 2016-08-05 00:16:37+0000
 2013-05-31 19:24:23+0000 |                                                      V014 - creates experiment page | 2016-08-05 00:16:47+0000
 2013-05-31 19:24:23+0000 |                                                            V019 - creates user info | 2016-08-05 00:16:52+0000
 2013-05-31 19:24:23+0000 |           V007 - creates user bucket lookup by experiment id, context, bucket label | 2016-08-05 00:16:39+0000
 2013-05-31 19:24:23+0000 | V006 - creates user experiment look up by app name, user id, context, experiment id | 2016-08-05 00:16:38+0000
 2013-05-31 19:24:23+0000 |                                                     V027 - creates applicaiton list | 2016-08-05 00:17:02+0000
 2013-05-31 19:24:23+0000 |                                              V003 - creates experiment label lookup | 2016-08-05 00:16:35+0000
 2013-05-31 19:24:23+0000 |                                                          V018 - creates superadmins | 2016-08-05 00:16:51+0000
 2013-05-31 19:24:23+0000 |                                                V013 - creates page experiment index | 2016-08-05 00:16:46+0000
 2013-05-31 19:24:23+0000 |                                               V024 - creates user assingment lookup | 2016-08-05 00:17:00+0000
 2013-05-31 19:24:23+0000 |                                                     V009 - creates bucket audit log | 2016-08-05 00:16:41+0000
 2013-05-31 19:24:23+0000 |                                              V012 - creates experiment label lookup | 2016-08-05 00:16:45+0000
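
Two things may explain this output (assumptions, since the report does not show the applied_migrations schema): the authoredAt values differ only at millisecond precision (1370028263000 vs 1370028263024), so cqlsh's second-resolution display renders them all as 2013-05-31 19:24:23+0000; and if the table is keyed as below, authored_at is the partition key, so an unrestricted SELECT returns partitions in token order rather than chronological order.

CREATE TABLE applied_migrations (
  authored_at timestamp,
  description text,
  applied_at timestamp,
  PRIMARY KEY (authored_at, description)
);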
