palantir / docker-compose-rule

A JUnit rule to manage docker containers using docker-compose

License: Apache License 2.0


docker-compose-rule's Introduction


Docker Compose JUnit Rule

This is a library for executing JUnit tests that interact with Docker Compose managed containers. It supports the following:

  • Starting containers defined in a docker-compose.yml before tests and tearing them down afterwards
  • Waiting for services to become available before running tests
  • Recording log files from containers to aid debugging test failures

Why should I use this?

The code here started out as the end-to-end tests for one of our products. We needed to test the product in a variety of mutually incompatible configurations and environments, so multiple Docker Compose files were needed and a simplistic model of running docker-compose up from Gradle was insufficient.

If you're experiencing any of the following when using Docker for your testing, this library should help:

  • Orchestrating multiple services and mapping the ports to outside the Docker machine so assertions can be made in tests
  • Needing to know when services are up, to prevent flickering tests caused by slow-to-start services or complicated service dependencies
  • Lack of insight into what has happened in Docker containers during tests on CI servers due to loss of logs
  • Tests failing due to needing open ports on the CI build host which conflict with the test configuration

Simple Use

Add a dependency to your project. For example, in Gradle:

repositories {
    maven {
        url 'https://dl.bintray.com/palantir/releases' // docker-compose-rule is published on bintray
    }
}
dependencies {
    testCompile 'com.palantir.docker.compose:docker-compose-rule-junit4:<latest-tag-from-bintray>'
}

For the most basic use, simply add a DockerComposeRule object as a @ClassRule or @Rule in a JUnit test class.

public class MyIntegrationTest {

    @ClassRule
    public static DockerComposeRule docker = DockerComposeRule.builder()
            .file("src/test/resources/docker-compose.yml")
            .build();

    @Test
    public void testThatUsesSomeDockerServices() throws InterruptedException, IOException {
       ...
    }

}

This will cause the containers defined in src/test/resources/docker-compose.yml to be started by Docker Compose before the test executes and then the containers will be killed and removed (along with associated volumes) once the test has finished executing. If the containers have healthchecks specified, either in the docker image or in the docker-compose config, the test will wait for them to become healthy.

The docker-compose.yml file is referenced using the given path, relative to the working directory of the test. It will not be copied elsewhere, so references to shared directories and other resources for your containers can be made using paths relative to this file as normal. If you wish to run the Docker containers manually for debugging, simply run docker-compose up in the same directory as the docker-compose.yml.

JUnit 5

If you'd prefer to use JUnit 5 (aka JUnit Jupiter), use the following dependency and replace your usages of DockerComposeRule with DockerComposeExtension.

dependencies {
    testCompile 'com.palantir.docker.compose:docker-compose-junit-jupiter:<latest-tag-from-bintray>'
}
public class MyIntegrationTest {

    @RegisterExtension
    public static DockerComposeExtension docker = DockerComposeExtension.builder()
            .file("src/test/resources/docker-compose.yml")
            .build();

    @Test
    public void testThatUsesSomeDockerServices() throws InterruptedException, IOException {
       ...
    }

}

Running on a Mac

The above example will work out of the box on Linux machines with Docker installed. On Mac you will first need to install Docker using the instructions here.

Once Docker is installed, to run from the command line you will need to execute docker-machine env <machine_name> and follow the instructions to set the environment variables. Tests can then be executed through Gradle in the usual way.

To run the tests from your IDE you will need to add the environment variables given from running docker-machine env <machine_name> to the run configuration for the test in your IDE. This is documented for Eclipse and IntelliJ.

Waiting for a service to be available

To wait for services to be available before executing tests, either add health checks to the configuration, or use the following methods on the DockerComposeRule object:

public class MyEndToEndTest {

    @ClassRule
    public static DockerComposeRule docker = DockerComposeRule.builder()
        .file("src/test/resources/docker-compose.yml")
        .waitingForService("db", HealthChecks.toHaveAllPortsOpen())
        .waitingForService("web", HealthChecks.toRespondOverHttp(8080, (port) -> port.inFormat("https://$HOST:$EXTERNAL_PORT")))
        .waitingForService("other", container -> customServiceCheck(container), Duration.standardMinutes(2))
        .waitingForServices(ImmutableList.of("node1", "node2"), toBeHealthyAsACluster())
        .waitingForHostNetworkedPort(5432, toBeOpen())
        .build();

    @Test
    public void testThatDependsServicesHavingStarted() throws InterruptedException, IOException {
        ...
    }
}

The builder offers three entrypoint methods for waits:

  • waitingForService(String container, HealthCheck<Container> check[, Duration timeout]) makes sure the healthcheck passes for that container before the tests start.
  • waitingForServices(List<String> containers, HealthCheck<List<Container>> check[, Duration timeout]) makes sure the healthcheck passes for the cluster of containers before the tests start.
  • waitingForHostNetworkedPort(int portNumber, HealthCheck<DockerPort> check[, Duration timeout]) makes sure the healthcheck passes for a particular host-networked port.

We provide two default healthchecks in the HealthChecks class:

  1. toHaveAllPortsOpen - waits until all ports exposed on the container can be connected to
  2. toRespondOverHttp - waits until the specified URL responds to an HTTP request.
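As an illustration of what a port-based check like toHaveAllPortsOpen does conceptually, here is a minimal sketch in plain Java. The PortCheck class and isPortOpen helper are hypothetical, not part of this library:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class PortCheck {
    // Hypothetical helper: true if a TCP connection to host:port succeeds within the timeout.
    static boolean isPortOpen(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Listen on an ephemeral port so the check has something to connect to.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(isPortOpen("localhost", server.getLocalPort(), 1000));  // true
        }
    }
}
```

A real HealthCheck would repeat such a probe until it succeeds or a timeout elapses.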

Accessing services in containers from outside a container

In tests, it is likely that services running inside containers will need to be accessed in order to assert that they are behaving correctly. In addition, when tests run on a Mac the Docker containers will be inside a VirtualBox machine and so must be accessed on an external IP address rather than the loopback interface.

It is recommended to only specify internal ports in the docker-compose.yml, as described in the [reference](https://docs.docker.com/compose/compose-file/#ports). This makes tests independent of the environment on the host machine and of each other. Docker will then randomly allocate an external port. For example:

postgres:
  image: postgres:9.5
  ports:
    - 5432

Given a DockerComposeRule instance called docker, you could then access a service called postgres as follows:

DockerPort postgres = docker.containers()
        .container("postgres")
        .port(5432);

You could then interpolate the host IP address and random external port as follows:

String url = postgres.inFormat("jdbc:postgresql://$HOST:$EXTERNAL_PORT/mydb");
// e.g. "jdbc:postgresql://192.168.99.100:33045/mydb"

Run docker-compose exec

We support the docker-compose exec command, which runs a new command in a running container:

dockerCompose.exec(dockerComposeExecOption, containerName, dockerComposeExecArgument)

Just be aware that you need at least docker-compose 1.7 to run docker-compose exec.

Collecting logs

To record the logs from your containers specify a location:

public class DockerComposeRuleTest {

    @ClassRule
    public static DockerComposeRule docker = DockerComposeRule.builder()
            .file("src/test/resources/docker-compose.yml")
            .saveLogsTo("build/dockerLogs/dockerComposeRuleTest")
            .build();

    @Test
    public void testRecordsLogs() throws InterruptedException, IOException {
       ...
    }

}

This will collect logs for all containers. Collection occurs after the tests have finished executing.

The LogDirectory class contains utility methods to generate these paths. For example, you can write logs directly into the $CIRCLE_ARTIFACTS directory on CI (but fall back to build/dockerLogs locally) using:

    .saveLogsTo(circleAwareLogDirectory(MyTest.class))

Methods in LogDirectory are intended to be statically imported for readability.

Skipping shutdown

To skip shutdown of containers after tests are finished executing:

public class DockerComposeRuleTest {
    @ClassRule
    public static DockerComposeRule docker = DockerComposeRule.builder()
            .file("src/test/resources/docker-compose.yml")
            .skipShutdown(true)
            .build();
}

This can shorten iteration time when services take a long time to start. Remember to never leave it on in CI!

Pull images on startup

To pull images before starting the containers:

public class DockerCompositionTest {
    @ClassRule
    public static DockerComposition composition = DockerComposition.of("src/test/resources/docker-compose.yml")
                                                .pullOnStartup(true)
                                                .build();
}

This will make sure you are using the most up-to-date version of all the images included in the docker-compose.yml.

Docker Machine

Docker is able to connect to daemons that either live on the machine where the client is running, or somewhere remote. Using the docker client, you are able to control which daemon to connect to using the DOCKER_HOST environment variable.

Local Machine

The default out-of-the-box behaviour will configure docker-compose to connect to a Docker daemon that is running locally. That is, if you're on Linux, it will use the Docker daemon that exposes its socket. In the case of Mac OS X - which doesn't support Docker natively - we have to connect to a technically "remote" (but local) Docker daemon which is running in a virtual machine via docker-machine.

If you're on Mac OS X, the docker cli expects the following environment variables:

  • DOCKER_HOST
  • If the Docker daemon is secured by TLS, DOCKER_TLS_VERIFY and DOCKER_CERT_PATH need to be set.

Similarly, if you're using a LocalMachine, you need to ensure the Run Configuration (in your IDE, command line etc.) has those same variables set.

An example of creating a DockerMachine that connects to a local docker daemon:

DockerMachine.localMachine()
             .build()

Remote Machine

You may not always want to connect to a Docker daemon that is running on your local computer or a virtual machine running on your local computer.

An example of this would be running containers in a clustered manner with Docker Swarm. Since Docker Swarm implements the Docker API, setting the right environment variables would allow us to use Docker containers on the swarm.

An example of connecting to a remote Docker daemon that has also been secured by TLS:

DockerMachine.remoteMachine()
             .host("tcp://remote-docker-host:2376")
             .withTLS("/path/to/cert")
             .build()

Additional Environment Variables

It may also be useful to pass environment variables to the process that will call docker-compose.

You can do so in the following manner:

DockerMachine.localMachine()
             .withEnvironmentVariable("SOME_VARIABLE", "SOME_VALUE")
             .build()

The variable SOME_VARIABLE will be available in the process that calls docker-compose, and can be used for Variable Interpolation inside the compose file.
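For example, a compose file could consume that variable via interpolation (a sketch; the service and image names are illustrative):

```yaml
web:
  image: my-app:latest                  # illustrative image name
  environment:
    - CONFIG_MODE=${SOME_VARIABLE}      # resolved from the environment of the process running docker-compose
```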

How to use a DockerMachine

When creating a DockerComposeRule, a custom DockerMachine may be specified. If no DockerMachine is specified, DockerComposeRule will connect to the local Docker daemon, similarly to how the docker cli works.

private final DockerMachine dockerMachine = DockerMachine.localMachine()
                                                         .withAdditionalEnvironmentVariable("SOME_VARIABLE", "SOME_VALUE")
                                                         .build();

@Rule
DockerComposeRule docker = DockerComposeRule.builder()
            .file("docker-compose.yaml")
            .machine(dockerMachine)
            .build();

Composing docker compose files

docker-compose (at least as of version 1.5.0) allows us to specify multiple docker-compose files. On the command line, you can do this as follows:

docker-compose -f file1.yml -f file2.yml -f file3.yml

The semantics of how this works are explained in the Docker Compose reference.

To use this functionality inside docker-compose-rule, supply a DockerComposeFiles object to your DockerComposeRule builder:

DockerComposeRule docker = DockerComposeRule.builder()
            .files(DockerComposeFiles.from("file1.yml", "file2.yml"))
            .build()

Using a custom version of docker-compose

docker-compose-rule tries to use the docker-compose binary located at /usr/local/bin/docker-compose. This can be overridden by setting DOCKER_COMPOSE_LOCATION to the path of a valid file.

docker-compose-rule's People

Contributors

alicederyn, ashrayjain, bulldozer-bot[bot], crogers, d-lorenc, diamondt, fawind, felixdesouza, ferozco, fryz, gbrova, gsheasby, hpryce, hughsimpson, iamdanfox, j-baker, jkozlowski, joelea, markelliot, mktange, mswintermeyer, natgabb, neilrickards, nmiyake, pkoenig10, qinfchen, rveguilla, serialvelocity, svc-autorelease, svc-excavator-bot


docker-compose-rule's Issues

portMapping is not refreshed on restart

before restart

$ docker ps
4c7ee9d678c8 2666c3d6_api "/bin/sh -c 'java -ja" 2 minutes ago Up 24 seconds 0.0.0.0:32788->8080/tcp 2666c3d6_api_1

after restart
docker.containers().container("api").stop();
docker.containers().container("api").start();

$ docker ps
4c7ee9d678c8 2666c3d6_api "/bin/sh -c 'java -ja" 8 minutes ago Up 3 seconds 0.0.0.0:32790->8080/tcp 2666c3d6_api_1

but because the port mapping was memoized for the container, it maps to the old port and cannot be used

Start up individual containers/services instead of the whole cluster.

We are trying to use docker-compose-rule to start up individual containers/services (and, of course, their dependencies, which docker does automatically), and I can't tell if there is already a way to do this. Ideally it would be something like:

DockerComposeRule.builder()
.files(...)
.services("my-service", "other-service")
...
.build()

I saw there is a ".containers" option in the builder, but it takes in a Cluster, and creating a Cluster implementation seems overly heavyweight, so I'm not sure if that's its intended use case.

saveToLogs does not save logs

V0.5.3

When using the following class rule, it creates the log directory, but never writes any container log files.

@ClassRule
public static final DockerComposition composition = DockerComposition
        .of("src/test/resources/full-stack/docker-compose.yml")
        .saveLogsTo("build/docker-logs/dockerSuite")
        .build();

Debugging the code, I can see that in FileLogCollector.java > startCollecting
ContainerNames containerNames = dockerCompose.ps(); always returns empty.

The docker ps output contains a result, however ContainerNames.parseFromDockerComposePs(psOutput) is empty.

ContainerNames.java -> getContainerNamesAtStartOfLines seems to be the cause.

The psContainerOutput value it is trying to parse is:

gaia service/bin/gaia server va ... Up 0.0.0.0:17000->17000/tcp, 0.0.0.0:17001->17001/tcp, 8000/tcp, 8001/tcp, 8443/tcp
selenium /opt/bin/entry_point.sh Up 0.0.0.0:4444->4444/tcp, 0.0.0.0:5900->5900/tcp

Docker-compose does not work on Macs that use Docker daemon

The Docker for Mac Beta (https://blog.docker.com/2016/03/docker-for-mac-windows-beta/) finally allows Docker to run as a daemon on Macs rather than as a docker-machine. This is great, but breaks the current assumptions of docker-compose-rule, which hard-codes the DockerType for LocalMachine to REMOTE on Mac OS systems.

The most flexible near-term fix is to allow the DockerType to be specified for LocalMachine so that developers can customize as necessary. In the long term, it should probably be possible to eliminate OS switching entirely and allow users to think in terms of daemon versus remote independently of platform (maybe something to think about for the next major version/breaking change).

DockerComposeRule.Builder is confusing

Either DCR.Builder should be package private, or should extend ImmutableDCR.Builder and not the other way around.

It's super confusing that you can do

DockerComposeRule.Builder builder = DockerComposeRule.builder()

and then can't actually call build().

My understanding is that the pattern you're meant to use with that Immutables feature is to declare the outer class as AbstractDockerComposeRule.Builder and then you use the Immutables generated version everywhere. The current implementation seems to be a bit worse than just having it extend ImmutableDCR.Builder. The other acceptable solution would be to make DCR.Builder package private, as then you can't use the type DockerComposeRule.Builder in your code.

save logs issue

Hi, I'm using version 0.28.1 (the Maven repo was not updated with a newer version; I'd be happy to be notified once releases are published here: https://dl.bintray.com/palantir/releases/com/palantir/docker/compose/docker-compose-rule/).

There is an issue when saving the logs when I have more than one service in my docker-compose file. It splits the services array and attaches a newline "\n" to each entry; this newline ends up in the log file name, so only the last service in the array gets written/collected correctly.

For example, in the snippet from my log below, only the last service's ('ksm') log file name is correct ('ksm.log'), unlike the other services, e.g. 'sgw\n.log'.
Thanks

2017/03/21 09:29:04.008 [FileLogCollector] [pool-5-thread-7]: INFO: Writing logs for container 'cdl2 ' to 'C:\Users\siasaraf\workspace\drm_stage2_branch\target\docker-compose\cdl2 .log'
2017/03/21 09:29:04.008 [FileLogCollector] [pool-5-thread-4]: INFO: Writing logs for container 'sgw ' to 'C:\Users\siasaraf\workspace\drm_stage2_branch\target\docker-compose\sgw .log'
2017/03/21 09:29:04.009 [FileLogCollector] [pool-5-thread-8]: INFO: Writing logs for container 'ksm' to 'C:\Users\siasaraf\workspace\drm_stage2_branch\target\docker-compose\ksm.log'
2017/03/21 09:29:04.020 [DockerComposeRule] [main]: DEBUG: docker-compose cluster started
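The failure mode described in this report can be reproduced in isolation: if parsed service names keep trailing whitespace, the whitespace ends up inside the log file name, and trimming fixes it. This is a plain-Java sketch, not the library's actual parsing code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LogFileNames {
    public static void main(String[] args) {
        // Service names as parsed with trailing whitespace left attached (the reported bug).
        List<String> parsed = Arrays.asList("cdl2 ", "sgw ", "ksm");

        // Buggy file names: the whitespace ends up inside the file name.
        List<String> buggy = parsed.stream().map(s -> s + ".log").collect(Collectors.toList());
        System.out.println(buggy);  // [cdl2 .log, sgw .log, ksm.log]

        // Trimming each name before building the path fixes it.
        List<String> fixed = parsed.stream().map(s -> s.trim() + ".log").collect(Collectors.toList());
        System.out.println(fixed);  // [cdl2.log, sgw.log, ksm.log]
    }
}
```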

Docker-compose --project is randomly chosen

DCR 0.5.3 randomly chooses the project name every single test run.

This seems like an unsafe default:

  • if the JVM exits during startup (I cancelled a build), the next run won't shut down or delete the containers that were created in the last run
  • if you use named containers and this happens, you won't be able to run the tests again, until you have manually deleted all the containers used in the last run
  • you can't easily use docker-compose logs to view the logfiles being created

I reverted to 0.5.2 👍

Overloading issue in DockerComposeExecOption

We never want to provide an option to docker-compose exec. However, this makes it a pain to actually construct an object of type DockerComposeExecOption, because the type has

public static DockerComposeExecOption options(String... options) and public abstract List<String> options() which collide and stop code from compiling.

Our code now looks like DockerComposeExecOption.options(new String[0])

`Retryer` should wait a few seconds before trying again

Especially if there is a network blip, there is no point in retrying 3 times in quick succession. Backing off for 10 seconds seems perfectly reasonable if it might prevent the failure of a 10 minute build!

com.palantir.docker.compose.execution.DockerExecutionException: 'docker-compose up -d' returned exit code 1
The output was:
Pulling db (postgres:latest)...
Pulling repository docker.io/library/postgres
Error while pulling image: Get https://index.docker.io/v1/repositories/library/postgres/images: dial tcp: lookup index.docker.io on 172.16.25.1:53: dial udp 172.16.25.1:53: connect: network is unreachable
    at com.palantir.docker.compose.execution.Command.lambda$throwingOnError$17(Command.java:59)
    at com.palantir.docker.compose.execution.Command$$Lambda$34/344008800.handle(Unknown Source)
    at com.palantir.docker.compose.execution.Command.execute(Command.java:49)
    at com.palantir.docker.compose.execution.DefaultDockerCompose.up(DefaultDockerCompose.java:70)
    at com.palantir.docker.compose.execution.DelegatingDockerCompose.up(DelegatingDockerCompose.java:39)
    at com.palantir.docker.compose.execution.RetryingDockerCompose.lambda$up$31(RetryingDockerCompose.java:36)
    at com.palantir.docker.compose.execution.RetryingDockerCompose$$Lambda$46/1139413690.call(Unknown Source)
    at com.palantir.docker.compose.execution.Retryer.runWithRetries(Retryer.java:39)
    at com.palantir.docker.compose.execution.RetryingDockerCompose.up(RetryingDockerCompose.java:35)
    at com.palantir.docker.compose.execution.ConflictingContainerRemovingDockerCompose.up(ConflictingContainerRemovingDockerCompose.java:51)
    at com.palantir.docker.compose.DockerComposeRule.before(DockerComposeRule.java:130)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:46)
    at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
    at org.junit.rules.RunRules.evaluate(RunRules.java:20)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
    at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
    at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

Generate a unique cluster name for each run

docker-compose by default gives each cluster the name of the directory it's in. This is going to cause issues when running tests in parallel etc.

We can fix this by using --project-name flag when using docker compose combined with a random name/id for each cluster.

Unable to connect to a remote machine

I'm unable to get docker-compose-rule to connect to my remote docker daemon running as a swarm manager on Ubuntu. I've created several docker machines in a swarm cluster, and when running "docker-machine ls" on my server I can see that my manager is running and accessible at tcp://192.168.99.101:2376.

Using nmap I can see that this port is open and the IP address is accessible.

However when creating a DockerMachine using the remote api as in your example, and passing it to the builder for DockerComposeRule, I get the error:

"Couldn't connect to Docker daemon - you might need to run docker-machine start default"

Any help to get this working would be appreciated.

Junit 5 extension

What are the thoughts about a junit 5 extension? I've done some work in a private repo that I could contribute.

No way to test features requiring recent Docker releases

Public CircleCI cannot upgrade Docker past v1.10.0, meaning we cannot test (among other things) native healthchecks.

CircleCI v2 is currently in closed beta and will support upgrading Docker to any released version, but it is currently very underprovisioned (meaning frequent flakes due to failing to obtain build resources). More critically, it does not yet support building tags, meaning we would be unable to automate deployment in a satisfactory manner. The circle.2 branch's circle.yml file contains a working config we can pick up later when this blocker is removed.

Tests in the code which are disabled in continuous integration should link to this issue.

TestContainers?

Hi!

Have you seen http://github.com/testcontainers/testcontainers-java ? :)

I'm co-maintainer of it and TestContainers is a bit more generic compared to docker-compose-rule. However, our current Docker Compose support is less powerful.

Maybe we can collaborate? :)

TestContainers comes with a really powerful environment detection (i.e. it can start docker machine for you, works with Docker for Windows/Mac, even works when being executed in a container itself :) )

Our community is very active, and we have a couple of big users like ZeroTurnaround, OpenZipkin, a few Apache projects.

.addAllClusterWaits overwrites previous waits.

ClusterWait a, b, c, d;
DockerComposeRule.builder()
  .addClusterWait(a)
  .addAllClusterWaits(ImmutableList.of(b, c, d))

Removes cluster wait 'a' from the list of waits. Things then fail in a very non-transparent manner.

Docker's new version format breaks Docker.version()

Tests that fail:

  • DockerComposeRuleNativeHealthcheckIntegrationTest
  • ContainerIntegrationTests

Stacktrace:

Numeric identifier MUST NOT contain leading zeroes
	at com.github.zafarkhaja.semver.VersionParser.checkForLeadingZeroes(VersionParser.java:479)
	at com.github.zafarkhaja.semver.VersionParser.numericIdentifier(VersionParser.java:407)
	at com.github.zafarkhaja.semver.VersionParser.parseVersionCore(VersionParser.java:287)
	at com.github.zafarkhaja.semver.VersionParser.parseValidSemVer(VersionParser.java:255)
	at com.github.zafarkhaja.semver.VersionParser.parseValidSemVer(VersionParser.java:195)
	at com.github.zafarkhaja.semver.Version.valueOf(Version.java:265)
	at com.palantir.docker.compose.execution.Docker.version(Docker.java:54)
	at com.palantir.docker.compose.connection.ContainerIntegrationTests.testStateChanges_withHealthCheck(ContainerIntegrationTests.java:59)
etc...

The tests run Docker.version(), which tries to parse the docker version as a semantic version using the java-semver library, which considers the latest docker version "17.03.0-ce" to be malformed because a numeric identifier (i.e. "03") contains a leading zero.

Docker's version format has recently been changed to YY.MM (see https://github.com/docker/docker/releases/tag/v17.03.0-ce) as part of a move making a release every two months. IMHO semantic versioning works so why change it, but I doubt we will be able to convince them otherwise. I haven't seen if docker-compose will also move to the YY.MM format.

I suggest we change Docker.version() and DockerCompose.version() to return a String instead of a java-semver Version, and remove the java-semver library (I don't see it being used elsewhere). Annoyingly we can't just do lexicographic comparison on the version string for comparing versions, since 17.03.3-ce < 17.03.11-ce, and we need to make sure we can compare the old version formats with the new ones (e.g. in DefaultDockerCompose we check that the version >= 1.7.0).

I can write a Comparator and some utility methods for comparing these versions, e.g. using ^(\d+)\.(\d+)\.(\d+)(?:-.*)?$ would capture ["1","7","0"] from "1.7.0" and ["17","03","0"] from "17.03.0-ce", then
using com.google.common.collect.Comparators.lexicographical(Comparator.naturalOrder()) would give the correct order.
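A sketch of such a comparator in plain Java, using only the standard library (the class and method names are illustrative, not part of the codebase):

```java
import java.util.Comparator;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DockerVersions {
    // Matches both "1.7.0" and "17.03.0-ce"; the optional "-ce" suffix is ignored.
    private static final Pattern VERSION = Pattern.compile("^(\\d+)\\.(\\d+)\\.(\\d+)(?:-.*)?$");

    // Compares versions numerically, component by component, so "17.03.3" < "17.03.11".
    static final Comparator<String> COMPARATOR =
            Comparator.comparingInt((String v) -> component(v, 1))
                      .thenComparingInt(v -> component(v, 2))
                      .thenComparingInt(v -> component(v, 3));

    private static int component(String version, int group) {
        Matcher m = VERSION.matcher(version);
        if (!m.matches()) {
            throw new IllegalArgumentException("Unparseable version: " + version);
        }
        // Integer.parseInt("03") == 3, so leading zeroes are harmless here.
        return Integer.parseInt(m.group(group));
    }

    public static void main(String[] args) {
        System.out.println(COMPARATOR.compare("1.7.0", "17.03.0-ce") < 0);       // true
        System.out.println(COMPARATOR.compare("17.03.3-ce", "17.03.11-ce") < 0); // true
    }
}
```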

Thoughts?

Publish to maven central?

Would you consider pushing the releases to maven central and/or jcenter?
The library seems very useful, and it would be handy to access releases through our maven proxy.

How to specify docker-compose file for each @Test method

Hello everyone, I have a question regarding how to specify a docker-compose file for each @Test method. Basically, I have a requirement where each test method should start with a different docker-compose file. Is this possible?

Currently I have the following solution, which breaks the @Rule convention.

    @Test
    public void testTest1() throws IOException, InterruptedException {

        dockerComposeRule = DockerComposeRule.builder()
                .file(DOCKER_COMPOSE_ROOT+"docker-compose-test1.yml")
                .machine(dockerMachine)
                .saveLogsTo("target/dockerLogs/dockerComposeRuleTest/test1")
                .build();

        dockerComposeRule.before();
    }

    @Test
    public void testTest2() throws IOException, InterruptedException {

        dockerComposeRule = DockerComposeRule.builder()
                .file(DOCKER_COMPOSE_ROOT+"docker-compose-test2.yml")
                .machine(dockerMachine)
                .skipShutdown(true)
                .saveLogsTo("target/dockerLogs/test2")
                .build();

        dockerComposeRule.before();

    }

Gradle Idea task assumes specific IDEA JDK naming scheme

Currently, this project's Gradle IDEA task assumes a specific naming scheme for the IntelliJ JDK/JRE -- specifically, "JavaSE-$version".

However, this isn't really standard, and it means the out-of-the-box configuration doesn't work in many environments. Is there a reason that this is enforced? Omitting this custom logic should cause IntelliJ to just pick the correct JDK according to the version specified (verified locally, and it seems to work).

Output logging errors

I've been running tests with docker-compose-rule from IntelliJ IDEA.
The errors are correctly caught but are not output anywhere.

Due to misconfigurations, IntelliJ fails with only a stacktrace and no meaningful error message.

Force purge of existing containers with conflicting name.

After an unclean shutdown, running containers will remain and hinder a repeated test with the error:
"Conflict. The name "/docker-contianer-name" is already in use by container 04a0cb1433e1c325ac43a52d0bfa6b0b2ae6ba6665753d50dab7f235a6768645. You have to remove (or rename) that container to be able to reuse that name."

Please add an option to the builder interface to catch that, and automatically delete the existing container and retry starting it. I can do a PR if you want.

Randomized project names exhausts docker's networks

By default, DockerComposeRule uses a randomized project name. Using the V2 yml syntax, this causes a host machine to 'run out of networks'.

To fix this, we could switch the default around, so that the project name is non-random by default. We've already had trouble with this before (#40). Unfortunately, I can't remember why it was randomized in the first place (@CRogers, @joelea)?

DockerMachine fails if "system" and "additional" environment variables are duplicated.

Relevant code:

            Map<String, String> environment = ImmutableMap.<String, String>builder()
                    .putAll(systemEnvironment)
                    .putAll(additionalEnvironment)
                    .build();

If you set FOO=bar in the "additional environment", but FOO=bar also happens to be set on the system, this will fail: ImmutableMap.Builder rejects duplicate keys, even when the values are identical.

Options:

1. Merge the entries and fail only on a genuine value conflict (e.g. FOO=bar vs FOO=baz -> fail).
2. Merge the entries and let "additional" win on conflict (e.g. system FOO=bar, additional FOO=baz -> FOO=baz).
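Option 2 can be sketched with plain collections instead of ImmutableMap. This is a minimal illustration (the class and method names here are hypothetical, not the library's actual code): putAll over a mutable map overwrites earlier entries, so the additional environment silently wins on conflict rather than throwing.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public final class EnvMerge {

    /**
     * Merges the system environment with user-supplied additions.
     * Unlike ImmutableMap.builder().putAll(...).putAll(...).build(),
     * which throws on duplicate keys, this lets the "additional"
     * environment win on conflict (option 2 above).
     */
    public static Map<String, String> merge(Map<String, String> system,
                                            Map<String, String> additional) {
        Map<String, String> merged = new LinkedHashMap<>(system);
        merged.putAll(additional); // later puts overwrite: additional wins
        return merged;
    }
}
```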

Release 1.0.0

docker-compose-rule is a fairly well established project - time to cut a 1.0.0 release and get the benefits of semver?

Setting project name not effective

The projectName is being set on the DockerComposeExecutable:

private static final DockerComposeExecutable dockerComposeExecutable = ImmutableDockerComposeExecutable.builder()
        .dockerComposeFiles(DockerComposeFiles.from(DOCKER_COMPOSE_FILE))
        .dockerConfiguration(DOCKER_MACHINE)
        .projectName(ProjectName.fromString("salt"))
        .build();

But when I run docker ps there's still a random name:

ubuntu@box238:~$ docker ps
CONTAINER ID        IMAGE                                          COMMAND                  CREATED             STATUS              PORTS                                                                      NAMES
ba009f05e464        c7459e66_salt                                  "dockerize -timeout 1"   7 seconds ago       Up 6 seconds        0.0.0.0:6000->6000/tcp, 0.0.0.0:32775->5843/tcp, 0.0.0.0:32774->5845/tcp   c7459e66_salt_1

This makes it impossible to run dockerCompose.exec. It wasn't a problem in docker-compose-rule 0.5.2.

See Salt PR 330 for more context

DockerPort.isListeningNow() does not really check container port (should be deprecated/removed)

Looks like isListeningNow() always returns true, even if the underlying container port is not open.

docker-compose.yml

version: '2'

services:
  foo:
    image: busybox
    command: sleep 100000
    ports:
      - "11111"

And a simple test:

public class FooTest {

    @ClassRule
    public static DockerComposeRule docker = DockerComposeRule.builder().file("docker-compose.yml")
            .waitingForService("foo", toHaveAllPortsOpen())
            .build();

    @Test
    public void neverRun() {
        fail("Should never run");
    }
}

And in the logs we can clearly see that port 11111 is considered open

10:25:41.706 [main] INFO  c.p.d.c.c.waiting.ClusterWait - Waiting for cluster to be healthy
10:25:41.960 [pool-4-thread-1] TRACE c.p.d.c.e.DefaultDockerCompose - stty: 'standard input': Inappropriate ioctl for device
10:25:41.972 [pool-4-thread-1] TRACE c.p.d.c.e.DefaultDockerCompose -      Name          Command      State            Ports           
10:25:41.972 [pool-4-thread-1] TRACE c.p.d.c.e.DefaultDockerCompose - ----------------------------------------------------------------
10:25:41.972 [pool-4-thread-1] TRACE c.p.d.c.e.DefaultDockerCompose - bbce96e8_foo_1   sleep 100000   Up      0.0.0.0:32867->11111/tcp 
10:25:41.982 [pool-3-thread-1] TRACE c.p.d.compose.connection.DockerPort - External Port '32867' on ip '127.0.0.1' was open
10:25:41.983 [main] DEBUG c.p.docker.compose.DockerComposeRule - docker-compose cluster started
10:25:41.989 [main] DEBUG c.p.d.c.e.GracefulShutdownStrategy - Killing docker-compose cluster
10:25:42.180 [pool-5-thread-1] TRACE c.p.d.c.e.DefaultDockerCompose - Stopping bbce96e8_foo_1 ... 

I assume it's some form of Docker port-forwarding mechanism, but it's clearly misleading.

Docker version 1.12.5, build 7392c3b
docker-compose version 1.7.1, build 0a9ab35
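The underlying problem is that Docker's userland proxy completes the TCP handshake on the mapped host port itself, so a bare connect succeeds even when nothing is listening inside the container. For banner-speaking protocols, a stricter check would require at least one byte of application data before declaring the port healthy. This is a hypothetical stdlib sketch, not the library's code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public final class BannerProbe {

    /**
     * Connects to host:port and waits for at least one byte from the
     * service. A plain TCP connect is not enough to prove liveness:
     * Docker's userland proxy accepts the handshake even when the
     * container-side port is closed, so we demand application-level
     * data (a banner, greeting, etc.) within the timeout.
     */
    public static boolean respondsWithData(String host, int port, int timeoutMillis) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMillis);
            socket.setSoTimeout(timeoutMillis);
            InputStream in = socket.getInputStream();
            return in.read() != -1; // blocks until data, EOF, or timeout
        } catch (IOException e) {
            return false; // connect failure, reset, or read timeout
        }
    }
}
```

Note this only works for protocols that speak first (SSH, SMTP, many databases); for HTTP-style services a request/response health check is needed instead.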

Re-enable ShutdownStrategy access to running rule

I noticed that as part of #140 the parameters of ShutdownStrategy.shutdown() changed from a single DockerComposeRule parameter to two separate DockerCompose and Docker ones.
I understand it's to remove the cross-dependency between the -core and -junit4 projects, but it strips out important functionality: being able to access the configuration of the cluster being shut down, which comes in handy.

For example, we used it to create our own ShutdownStrategy, which copied some additional metrics before killing the containers.

DockerCompose.ps() incorrectly ignores certain valid container names

To parse results, it calls

private static List<String> getContainerNamesAtStartOfLines(String psContainerOutput) {

which assumes that the container name is in the format PROJECT_name_SOMENUMBER, I think.

But you can set the container name to whatever you want, and the name can also contain underscores.

In these cases, the container name will be filtered out.

For example, if the result of docker-compose ps is

47aab4ab_salt_config_1       /true                            Exit 0                                                              
cassandra.palantir.dev       /docker-entrypoint.sh cass ...   Up       7000/tcp, 7001/tcp, 7199/tcp, 9042/tcp, 9160/tcp           

then no containers will be returned.
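A more tolerant parse would take the first whitespace-delimited token of each body line instead of pattern-matching on PROJECT_name_NUMBER. A hypothetical sketch (the class name is mine, and it assumes the header and separator rows of the `docker-compose ps` output have already been stripped, as in the example above):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public final class PsParser {

    /**
     * Extracts container names from the body lines of `docker-compose ps`
     * output by taking the first whitespace-delimited token of each line.
     * Unlike a PROJECT_name_NUMBER regex, this accepts arbitrary container
     * names, including ones with underscores or dots
     * (e.g. cassandra.palantir.dev).
     */
    public static List<String> containerNames(String psOutput) {
        return Arrays.stream(psOutput.split("\n"))
                .map(String::trim)
                .filter(line -> !line.isEmpty())
                .map(line -> line.split("\\s+")[0])
                .collect(Collectors.toList());
    }
}
```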

Should support Docker for Mac

Now that the Docker for Mac beta is open to the public, docker-compose-rule should not default to 'remote' when the OS is macOS. This causes docker-compose-rule to fail on Macs running Docker for Mac, since the expected environment variables do not exist.

JUnit5 Extension

Love what you all have done so far. What are the thoughts about a JUnit5 extension? I wanted to hit you all up first before I submitted a pull request.

Add option to remove volumes on shutdown

I just noticed my disk had been gobbled up by multiple dangling volumes, from running tests using this rule. An option (maybe even defaulting to true) to remove volumes on shutdown would be great.

DockerComposeRule doesn't work as @Rule

I have my rule in a JUnit integration test set up as follows:
@ClassRule
public static DockerComposeRule docker = DockerComposeRule.builder()
        .file("src/test/resources/docker-compose.yml")
        .saveLogsTo("build/dockerLogs/dockerComposeRuleTest")
        .pullOnStartup(true)
        .waitingForService("vp0", HealthChecks.toHaveAllPortsOpen())
        .build();

As soon as I change @ClassRule to @Rule, the test breaks with an "InitializationError" message and no log whatsoever.
Is that a limitation/intended behaviour? I'd really like to run the rule for every single test method rather than once per class.
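For what it's worth, the usual cause of an InitializationError here is the annotation/field mismatch: JUnit 4 requires @Rule fields to be public, non-static instance fields (and @ClassRule fields to be public and static). Swapping the annotation without also removing `static` triggers exactly this failure. A sketch of the per-method setup, assuming the same builder options as above:

```java
public class MyIntegrationTest {

    @Rule  // must be a public, NON-static instance field for JUnit 4
    public DockerComposeRule docker = DockerComposeRule.builder()
            .file("src/test/resources/docker-compose.yml")
            .saveLogsTo("build/dockerLogs/dockerComposeRuleTest")
            .waitingForService("vp0", HealthChecks.toHaveAllPortsOpen())
            .build();
}
```

Bear in mind this tears the whole compose cluster down and up again for every test method, which can be very slow; that is presumably why @ClassRule is the documented pattern.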

Slow container shutdowns on CI

After a build has successfully completed, stopping 14 containers on Circle CI can take roughly 40 seconds (see the logs pasted below). When optimizing for speed, these 40 seconds seem pretty unnecessary, especially because the containers are never going to be started up again.

I'm hesitant to suggest just leaving them running on CI, but perhaps we could do a more aggressive shutdown than our current graceful approach?

DEBUG [2016-09-23 09:47:44,173] com.palantir.docker.compose.DockerComposeRule: Killing docker-compose cluster
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxx-web_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxxx-1.0_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxx-build_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_metadata_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_data-proxy_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_selenium_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_catalog_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_hdfs-proxy_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_nginx_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_web_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxx_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_hadoop-hive_1 ... 
DEBUG [2016-09-23 09:47:44,741] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxxxx_1 ... 
DEBUG [2016-09-23 09:47:44,742] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_kerberos_1 ... 
DEBUG [2016-09-23 09:47:45,033] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:45,035] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_selenium_1 ... done
DEBUG [2016-09-23 09:47:49,102] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:49,103] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxx-web_1 ... done
DEBUG [2016-09-23 09:47:49,432] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:49,433] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxx-build_1 ... done
DEBUG [2016-09-23 09:47:55,032] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:55,033] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_hdfs-proxy_1 ... done
DEBUG [2016-09-23 09:47:55,134] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:55,134] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxxx-1.0_1 ... done
DEBUG [2016-09-23 09:47:55,321] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:55,321] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_nginx_1 ... done
DEBUG [2016-09-23 09:47:55,761] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:55,761] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_metadata_1 ... done
DEBUG [2016-09-23 09:47:55,794] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:55,794] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_data-proxy_1 ... done
DEBUG [2016-09-23 09:47:56,400] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:47:56,400] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_catalog_1 ... done
DEBUG [2016-09-23 09:48:05,415] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:05,416] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_hadoop-hive_1 ... done
DEBUG [2016-09-23 09:48:05,550] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:05,550] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_web_1 ... done
DEBUG [2016-09-23 09:48:06,655] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:06,655] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxx_1 ... done
DEBUG [2016-09-23 09:48:15,538] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:15,539] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_kerberos_1 ... done
DEBUG [2016-09-23 09:48:16,841] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:16,842] com.palantir.docker.compose.execution.DefaultDockerCompose: Stopping demopathtest_xxxxxxxxx_1 ... done
DEBUG [2016-09-23 09:48:16,874] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxx-web_1 ... 
DEBUG [2016-09-23 09:48:16,874] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxxx-1.0_1 ... 
DEBUG [2016-09-23 09:48:16,874] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxx-build_1 ... 
DEBUG [2016-09-23 09:48:16,874] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_metadata_1 ... 
DEBUG [2016-09-23 09:48:16,874] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_data-proxy_1 ... 
DEBUG [2016-09-23 09:48:16,874] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_selenium_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_catalog_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_hdfs-proxy_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_nginx_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_web_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxx_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_hadoop-hive_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxxxx_1 ... 
DEBUG [2016-09-23 09:48:16,875] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_kerberos_1 ... 
DEBUG [2016-09-23 09:48:17,306] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:17,306] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_catalog_1 ... done
DEBUG [2016-09-23 09:48:17,638] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:17,639] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_data-proxy_1 ... done
DEBUG [2016-09-23 09:48:17,903] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:17,903] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_nginx_1 ... done
DEBUG [2016-09-23 09:48:18,204] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:18,204] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_metadata_1 ... done
DEBUG [2016-09-23 09:48:18,537] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:18,537] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxxx-1.0_1 ... done
DEBUG [2016-09-23 09:48:18,838] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:18,838] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxx-web_1 ... done
DEBUG [2016-09-23 09:48:19,142] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:19,142] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_web_1 ... done
DEBUG [2016-09-23 09:48:19,644] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:19,644] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_hdfs-proxy_1 ... done
DEBUG [2016-09-23 09:48:20,209] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:20,209] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_selenium_1 ... done
DEBUG [2016-09-23 09:48:20,542] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:20,542] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxx_1 ... done
DEBUG [2016-09-23 09:48:21,144] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:21,144] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_hadoop-hive_1 ... done
DEBUG [2016-09-23 09:48:21,509] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:21,509] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxx-build_1 ... done
DEBUG [2016-09-23 09:48:21,826] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:21,826] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_xxxxxxxxx_1 ... done
DEBUG [2016-09-23 09:48:22,592] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:22,592] com.palantir.docker.compose.execution.DefaultDockerCompose: Removing demopathtest_kerberos_1 ... done
DEBUG [2016-09-23 09:48:22,605] com.palantir.docker.compose.execution.DefaultDockerCompose: 
DEBUG [2016-09-23 09:48:23,689] com.palantir.docker.compose.execution.DefaultDockerCompose: No stopped containers
WARN  [2016-09-23 09:48:23,752] com.palantir.docker.compose.logging.FileLogCollector: docker containers were still running when log collection stopped
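One possible mitigation, assuming the builder exposes a shutdown-strategy hook along the lines of the AggressiveShutdownStrategy mentioned in another issue here (the exact API below is an assumption, not confirmed against the library):

```java
DockerComposeRule docker = DockerComposeRule.builder()
        .file("docker-compose.yml")
        // hypothetical hook: skip the graceful stop and go
        // straight to kill/rm, trading clean shutdown for speed on CI
        .shutdownStrategy(ShutdownStrategy.AGGRESSIVE)
        .build();
```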

Prints something about circle ci when running locally

7:53:16.426 [Thread-1] WARN c.p.d.c.e.AggressiveShutdownStrategy - Couldn't shut down containers due to btrfs volume error, see https://circleci.com/docs/docker-btrfs-error/ for more info.

this seems bogus since I'm not running in Circle CI.

`unknown enum constant` warning during compilation

When I compile a project using docker-compose-rule:0.18.0 this warning appears:

warning: unknown enum constant ImplementationVisibility.PACKAGE reason: class file for org.immutables.value.Value$Style$ImplementationVisibility not found

Is there a way to fix this?

This is my build configuration:

------------------------------------------------------------
Gradle 2.14.1
------------------------------------------------------------

Build time:   2016-07-18 06:38:37 UTC
Revision:     d9e2113d9fb05a5caabba61798bdb8dfdca83719

Groovy:       2.4.4
Ant:          Apache Ant(TM) version 1.9.6 compiled on June 29 2015
JVM:          1.8.0_101 (Oracle Corporation 25.101-b13)
OS:           Linux 3.19.0-32-generic amd64
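This warning usually means the Immutables annotation classes are not on the compile classpath, so javac cannot resolve the retained `@Value.Style` annotation on the generated classes. A possible fix is to add the standard Immutables annotations artifact to the consuming project (the version placeholder below is illustrative):

```gradle
dependencies {
    // keeps org.immutables.value.Value$Style$ImplementationVisibility
    // resolvable at compile time, silencing the warning
    compileOnly 'org.immutables:value:<immutables-version>'
}
```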
