vromero / activemq-artemis-docker

Dockerfile for the ActiveMQ Artemis Project

License: Apache License 2.0


activemq-artemis-docker's Introduction


THIS PROJECT IS ARCHIVED

It has been quite a ride, but after a few years, with multiple initiatives going on around Artemis and Docker both from Red Hat and from Apache, I've decided that it's time to let those projects take the spot that the community around this project and I have been occupying until now.

Of course the project will remain available read-only, and you should feel free to fork it, but I won't be maintaining it anymore.

1. What is ActiveMQ Artemis?

Apache ActiveMQ Artemis is an open source project to build a multi-protocol, embeddable, very high performance, clustered, asynchronous messaging system. Apache ActiveMQ Artemis is an example of Message Oriented Middleware (MoM).


2. Tags and Dockerfile links

Debian Based Alpine Based
latest latest-alpine
2.16.0 2.16.0-alpine
2.15.0 2.15.0-alpine
2.14.0 2.14.0-alpine
2.13.0 2.13.0-alpine
2.12.0 2.12.0-alpine
2.11.0 2.11.0-alpine
2.10.1 2.10.1-alpine
2.10.0 2.10.0-alpine
2.9.0 2.9.0-alpine
2.8.0 2.8.0-alpine
2.7.0 2.7.0-alpine
2.6.4 2.6.4-alpine
2.6.3 2.6.3-alpine
2.6.2 2.6.2-alpine
2.6.1 2.6.1-alpine
2.6.0 2.6.0-alpine
2.5.0 2.5.0-alpine
2.4.0 2.4.0-alpine
2.3.0 2.3.0-alpine
2.2.0 2.2.0-alpine
2.1.0 2.1.0-alpine
2.0.0 2.0.0-alpine
1.5.6 1.5.6-alpine
1.5.5 1.5.5-alpine
1.5.4 1.5.4-alpine
1.5.3 1.5.3-alpine
1.5.2 1.5.2-alpine
1.5.1 1.5.1-alpine
1.5.0 1.5.0-alpine
1.4.0 1.4.0-alpine
1.3.0 1.3.0-alpine
1.2.0 1.2.0-alpine
1.1.0 1.1.0-alpine
1.0.0 1.0.0-alpine

3. About this image

The ActiveMQ Artemis images come in two flavors, both equally supported:

  • Debian based: the default one.
  • Alpine based: much lighter.

All versions of ActiveMQ Artemis are provided for the time being, but versions prior to 1.5.5 should be considered deprecated and could be removed at any time.
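To depend on a specific version rather than tracking latest, pull one of the pinned tags from the table above (2.16.0-alpine is just one of the listed tags):

```shell
# Pull a pinned tag instead of latest to avoid surprise upgrades
docker pull vromero/activemq-artemis:2.16.0-alpine
```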

This image should not be considered production ready as is. If you plan to use this image in a production environment, fork the image in order to maintain stability, as the build is reproducible only on a best-effort basis. Then, at each rebase, make sure you test the changes you are importing.

4. How to use this image

You can find how to run this image in the section Running the image. Beware that the default configuration is not recommended for production usage; at the very least you'll want to set your own login and password. This is described in detail in the section Setting the username and password. In case you also want to set customized memory limits, this is described in Setting the memory values.

ActiveMQ Artemis typically persists the queue state to disk. In order to get the most out of your disk, ActiveMQ Artemis might require some fine-tuning. The good news is that this process is fully automated; it is described in Performing a performance journal test.

JMX uses RMI and therefore random ports. This is extremely bad for automation in Docker and in general. For that reason it is not supported for most use cases. However, when using this image in orchestrators like Kubernetes, you might want to connect from a sidecar, where it does make sense. How to enable JMX is described in the section Enabling JMX.

The Jolokia console CORS header won't be a problem by default, as it is set to *; however, if you want to narrow it down for improved security, don't miss the section Setting the console's allow origin.

On rare occasions you might find the need to run ActiveMQ Artemis without security. This is described in the section Disabling security.

Some of the configurations mentioned above are scripted automations that modify the configuration files. You might have your own configuration that you want to provide as a whole. In that case, disregard the aforementioned sections and see how to pass your own configuration in the section Using external configuration files.

If instead you want to keep the configuration parameters and make some non-major changes to the configuration, you can apply small transformations using XSLT as described in the section Overriding parts of the configuration.

5. Running the image

There are different methods to run a Docker image, from interactive Docker to Kubernetes and Docker Compose. This documentation covers only Docker in interactive terminal mode. Refer to the appropriate documentation for more information on other execution methods.

To run ActiveMQ with AMQP, JMS and the web console open (if you are running 2.3.0 or later), run the following command:

docker run -it --rm \
  -p 8161:8161 \
  -p 61616:61616 \
  vromero/activemq-artemis

After a few seconds you'll see in the output a block similar to:

     _        _               _
    / \  ____| |_  ___ __  __(_) _____
   / _ \|  _ \ __|/ _ \  \/  | |/  __/
  / ___ \ | \/ |_/  __/ |\/| | |\___ \
 /_/   \_\|   \__\____|_|  |_|_|/___ /
 Apache ActiveMQ Artemis x.x.x

HH:mm:ss,SSS INFO  [...] AMQ101000: Starting ActiveMQ Artemis Server

At this point you can open the web server port at 8161 and check the web console using the default username and password of artemis / simetraehcapa.
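Once the broker reports it has started, a quick reachability check can be done from the host; the /console path is an assumption here and may differ between versions:

```shell
# Smoke test against the web console port published above
curl -sf -o /dev/null http://127.0.0.1:8161/console/ && echo "console reachable"
```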

5.1 Setting the username and password

If you wish to change the default username and password of artemis / simetraehcapa, you can do so with the ARTEMIS_USERNAME and ARTEMIS_PASSWORD environment variables:

docker run -it --rm \
  -e ARTEMIS_USERNAME=myuser \
  -e ARTEMIS_PASSWORD=otherpassword \
  vromero/activemq-artemis

5.2 Setting the memory values

By default this image leverages the features introduced in Java 8u131 related to memory ergonomics in containerized environments; more information about them is available here.

It uses -XX:MaxRAMFraction=2, meaning that half of the memory made available to the container will be used by the Java heap, leaving the other half for other types of Java memory and other OS purposes. However, in some circumstances it might be advisable to fine-tune the memory manually. In that case you can set the memory that your application needs using the parameters ARTEMIS_MIN_MEMORY and ARTEMIS_MAX_MEMORY:

docker run -it --rm \
  -e 'ARTEMIS_MIN_MEMORY=1512M' \
  -e 'ARTEMIS_MAX_MEMORY=3048M' \
  vromero/activemq-artemis

The previous example will launch Apache ActiveMQ Artemis in Docker with an initial heap of 1512 MB and a maximum of 3048 MB. The format of the values passed is the same as the format used for the Java -Xms and -Xmx parameters and is documented here.

5.3 Performing a performance journal test

Different kinds of volumes need different fine-tuning values. In ActiveMQ Artemis, the journal-buffer-timeout is oftentimes configured for this purpose. Since 1.5.3 it is possible to calculate the optimal value automatically. This image supports this automation through the environment variable ARTEMIS_PERF_JOURNAL with one of the following values:

Value Description
AUTO (default) Checks for the existence of a .perf-journal-completed file in the data volume; if it doesn't exist, performs the calculation, applies the configuration and creates the file.
NEVER Never performs the performance journal configuration
ALWAYS Always performs the performance journal configuration

It is safe to leave it as AUTO even for casual usage of this image, given that the image already incorporates a .perf-journal-completed file for the internal directory used when no volume is mounted. An example with the performance journal calibration set to always run can be found in the next listing:

docker run -it --rm \
  -e ARTEMIS_PERF_JOURNAL=ALWAYS \
  vromero/activemq-artemis

5.4 Critical Analysis

Since 2.3.0, ActiveMQ Artemis can monitor the timings of queue delivery (adding to the queue), journal storage and paging operations for anomalies in case there are IO errors or memory issues (described in detail here).

The following properties can configure the critical analysis:

Variable Description
CRITICAL_ANALYZER Enables or disables the critical analysis (true or false, default true)
CRITICAL_ANALYZER_TIMEOUT Timeout used for the critical analysis (default 120000 milliseconds)
CRITICAL_ANALYZER_CHECK_PERIOD Period used to check the response times (default: half of CRITICAL_ANALYZER_TIMEOUT)
CRITICAL_ANALYZER_POLICY Whether the server should log, halt or shut down upon failure (HALT, SHUTDOWN or LOG)
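A run that tunes these settings might look like the following (the values shown are illustrative only):

```shell
docker run -it --rm \
  -e CRITICAL_ANALYZER=true \
  -e CRITICAL_ANALYZER_TIMEOUT=180000 \
  -e CRITICAL_ANALYZER_POLICY=LOG \
  vromero/activemq-artemis
```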

5.5 Enabling JMX

JMX typically uses dynamic ports for RMI and requires the public IP address of the RMI server to be configured, so using JMX in Docker is discouraged. In certain scenarios, though, it can be advisable, such as when deploying in a container orchestrator like Kubernetes or Mesos with a sidecar alongside this container. For such cases the environment variable ENABLE_JMX can be used.

It is also possible to set the JMX port and the JMX RMI port with these two environment variables respectively: JMX_PORT (default: 1099) and JMX_RMI_PORT (default: 1098).

Given that JMX is intended for sidecars, it is bound only to localhost and is not protected with SSL. Likewise, its ports are not declared in the Dockerfile.

docker run -it --rm \
  -e ENABLE_JMX=true \
  -e JMX_PORT=1199 \
  -e JMX_RMI_PORT=1198 \
  vromero/activemq-artemis

5.6 Using JSON Output

It can oftentimes be preferable to have the log output structured in a parseable format. This image supports the use of org.jboss.logmanager.formatters.JsonFormatter to format the output. To enable it, pass LOG_FORMATTER=JSON as an environment variable.

docker run -it --rm \
  -e LOG_FORMATTER=JSON \
  vromero/activemq-artemis

When used, the output will look similar to the following listing:

     _        _               _
    / \  ____| |_  ___ __  __(_) _____
   / _ \|  _ \ __|/ _ \  \/  | |/  __/
  / ___ \ | \/ |_/  __/ |\/| | |\___ \
 /_/   \_\|   \__\____|_|  |_|_|/___ /
 Apache ActiveMQ Artemis 2.x.x

{"timestamp":"2020-04-25T09:43:17.222Z","sequence":0,"loggerClassName":"org.apache.activemq.artemis.integration.bootstrap.ActiveMQBootstrapLogger_$logger","loggerName":"org.apache.activemq.artemis.integration.bootstrap","level":"INFO","message":"AMQ101000: Starting ActiveMQ Artemis Server","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"354e0e2e67cb","processName":"Artemis","processId":78}
{"timestamp":"2020-04-25T09:43:17.408Z","sequence":1,"loggerClassName":"org.apache.activemq.artemis.core.server.ActiveMQServerLogger_$logger","loggerName":"org.apache.activemq.artemis.core.server","level":"INFO","message":"AMQ221000: live Message Broker is starting with configuration Broker Configuration (clustered=false,journalDirectory=data/journal,bindingsDirectory=data/bindings,largeMessagesDirectory=data/large-messages,pagingDirectory=data/paging)","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"354e0e2e67cb","processName":"Artemis","processId":78}
{"timestamp":"2020-04-25T09:43:17.648Z","sequence":2,"loggerClassName":"org.apache.activemq.artemis.core.server.ActiveMQServerLogger_$logger","loggerName":"org.apache.activemq.artemis.core.server","level":"INFO","message":"AMQ221012: Using AIO Journal","threadName":"main","threadId":1,"mdc":{},"ndc":"","hostName":"354e0e2e67cb","processName":"Artemis","processId":78}

5.7 Prometheus metrics

When using this image in an orchestrated environment like Kubernetes, it is often useful to have metrics endpoints compatible with Prometheus to ease monitoring.

This image can export such metrics on port 9404 thanks to the integration with the Prometheus JMX exporter. To enable it, set the environment variable ENABLE_JMX_EXPORTER; this will also indirectly enable JMX, as if ENABLE_JMX were set.

To see what is exported, run:

docker run -it --rm \
  -p 9404:9404 \
  -e ENABLE_JMX_EXPORTER=true \
  vromero/activemq-artemis

And then in a different terminal run:

curl http://127.0.0.1:9404

to obtain output like the following (and more):

# HELP artemis_disk_scan_period How often to check for disk space usage, in milliseconds (org.apache.activemq.artemis<broker="0.0.0.0"><>DiskScanPeriod)
# TYPE artemis_disk_scan_period counter
artemis_disk_scan_period 5000.0
# HELP artemis_durable_delivering_count number of durable messages that this queue is currently delivering to its consumers (org.apache.activemq.artemis<broker="0.0.0.0", component=addresses, address="DLQ", subcomponent=queues, routing-type="anycast", queue="DLQ"><>DurableDeliveringCount)
# TYPE artemis_durable_delivering_count counter
artemis_durable_delivering_count{queue="DLQ",address="DLQ",} 0.0
artemis_durable_delivering_count{queue="ExpiryQueue",address="ExpiryQueue",} 0.0
# HELP artemis_journal_min_files Number of journal files to pre-create (org.apache.activemq.artemis<broker="0.0.0.0"><>JournalMinFiles)
# TYPE artemis_journal_min_files counter
artemis_journal_min_files 2.0
# HELP artemis_message_expiry_thread_priority Priority of the thread used to scan message expiration (org.apache.activemq.artemis<broker="0.0.0.0"><>MessageExpiryThreadPriority)
# TYPE artemis_message_expiry_thread_priority counter
artemis_message_expiry_thread_priority 3.0
# HELP artemis_messages_killed number of messages removed from this queue since it was created due to exceeding the max delivery attempts (org.apache.activemq.artemis<broker="0.0.0.0", component=addresses, address="DLQ", subcomponent=queues, routing-type="anycast", queue="DLQ"><>MessagesKilled)
# TYPE artemis_messages_killed counter
artemis_messages_killed{queue="DLQ",address="DLQ",} 0.0
artemis_messages_killed{queue="ExpiryQueue",address="ExpiryQueue",} 0.0
# HELP artemis_address_memory_usage_percentage Memory used by all the addresses on broker as a percentage of global maximum limit (org.apache.activemq.artemis<broker="0.0.0.0"><>AddressMemoryUsagePercentage)
# TYPE artemis_address_memory_usage_percentage counter
artemis_address_memory_usage_percentage 0.0
# HELP artemis_journal_sync_non_transactional Whether the journal is synchronized when receiving non-transactional datar (org.apache.activemq.artemis<broker="0.0.0.0"><>JournalSyncNonTransactional)
# TYPE artemis_journal_sync_non_transactional counter
artemis_journal_sync_non_transactional 1.0
# HELP artemis_journal_buffer_size Size of the internal buffer on the journal (org.apache.activemq.artemis<broker="0.0.0.0"><>JournalBufferSize)
# TYPE artemis_journal_buffer_size counter
artemis_journal_buffer_size 501760.0
# HELP artemis_journal_max_io Maximum number of write requests that can be in the AIO queue at any given time (org.apache.activemq.artemis<broker="0.0.0.0"><>JournalMaxIO)
# TYPE artemis_journal_max_io counter
artemis_journal_max_io 4096.0

In case you need more control over the metrics that are exported, you can mount a jmx-exporter configuration file named jmx-exporter-config.yaml in /opt/jmx-exporter/etc-override.
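For example, assuming a local file ./jmx-exporter-config.yaml with your own rules, it could be mounted like this:

```shell
docker run -it --rm \
  -p 9404:9404 \
  -e ENABLE_JMX_EXPORTER=true \
  -v "$PWD/jmx-exporter-config.yaml:/opt/jmx-exporter/etc-override/jmx-exporter-config.yaml" \
  vromero/activemq-artemis
```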

5.8 Setting the console's allow origin

The ActiveMQ Artemis console uses Jolokia. In the default vanilla non-Docker installation, Jolokia sets a CORS header that allows only localhost. In the Docker image this creates problems, as things are rarely accessed as localhost.

Therefore the Docker image sets the CORS header to * by default. However, there is a mechanism to narrow it down to whatever value best suits you for improved security, through the environment variable JOLOKIA_ALLOW_ORIGIN:

docker run -it --rm \
  -e JOLOKIA_ALLOW_ORIGIN=192.168.1.1 \
  vromero/activemq-artemis

5.9 Disabling security

ActiveMQ Artemis supports disabling security using the element <security-enabled>false</security-enabled>, as described in the official documentation. This Docker image makes it simple to set that element using the environment variable DISABLE_SECURITY:

docker run -it --rm \
  -e DISABLE_SECURITY=true \
  vromero/activemq-artemis

Please keep in mind that no production system, and possibly no environment at all, should ever disable security. Make sure you read fallacy number one of the fallacies of distributed computing before disabling security.

5.10 Using external configuration files

It is possible to mount a whole Artemis etc directory into this image at the volume /var/lib/artemis/etc. Be careful, as this might be overkill for many situations where only small tweaks are necessary.

When using this technique, be aware that the configuration files of Artemis might change from version to version. Generally speaking, when you need to configure Artemis beyond what this image offers through environment variables, it is recommended to use the partial override mechanism described in the next section.
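If you do decide to provide the whole directory anyway, a run mounting a local copy of the etc directory (the host path is illustrative) would look like:

```shell
docker run -it --rm \
  -v /var/artemis-data/etc:/var/lib/artemis/etc \
  vromero/activemq-artemis
```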

5.11 Overriding parts of the configuration

The default ActiveMQ Artemis configuration can be partially modified, instead of completely replaced as in the previous section, using three mechanisms: merge snippets, XSLT transformations and entrypoint overrides.

Merging snippets

Multiple files with snippets of configuration can be dropped in the /var/lib/artemis/etc-override volume. Those configuration files must be named following the naming convention broker-{{num}}.xml, where num is a numeric identifier of the snippet. The configuration files will be merged with the default configuration. The alphabetical order of the file names determines the merge precedence; in case of collision, the last change wins.

For instance, let's say that you want to add a diverts section. You could have a local directory, say /var/artemis-data/etc-override, where you place a broker-00.xml file that looks like the following listing:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <!-- from 1.0.0 to 1.5.5 the following line should be : <core xmlns="urn:activemq:core"> -->
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
      <diverts>
         <divert name="order-divert">
            <routing-name>order-divert</routing-name>
            <address>orders</address>
            <forwarding-address>spyTopic</forwarding-address>
            <exclusive>false</exclusive>
         </divert>
      </diverts>
   </core>
</configuration>

Please notice that the core element changes across versions:

  • 1.0.0 up to 1.5.5: <core xmlns="urn:activemq:core">
  • 2.0.0 onwards: <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">

Configuration transformations

For the use cases where, instead of merging, the desired outcome is a deletion or some other kind of advanced transformation, a file named broker-00.xslt in /var/lib/artemis/etc-override is supported. For instance, consider deleting the jms definitions that are present by default in the broker.xml file shown below:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
  ...
  <jms xmlns="urn:activemq:jms">
    <queue name="myfancyqueue"/>
    <queue name="myotherqueue"/>
  </jms>
  ...
</configuration>

A file named broker-00.xslt with content like the following listing could be used:

<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:activemq="urn:activemq" xmlns:jms="urn:activemq:jms">

 <xsl:output omit-xml-declaration="yes"/>

    <xsl:template match="node()|@*">
      <xsl:copy>
         <xsl:apply-templates select="node()|@*"/>
      </xsl:copy>
    </xsl:template>

    <xsl:template match="*[local-name()='jms']"/>
</xsl:stylesheet>

Entrypoint Overrides

Multiple shell scripts can be dropped in the /var/lib/artemis/etc-override volume. Those shell scripts must be named following the naming convention entrypoint-{{num}}.sh, where num is a numeric identifier of the script. The shell scripts will be executed in alphabetical order of their file names on startup of the Docker container.

A typical use case for entrypoint overrides is making a minor modification to a file which cannot be overridden using the two methods above, without exposing the etc volume.
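As a sketch, the following creates a hypothetical entrypoint-00.sh (the sed expression and target element are illustrative, not part of the image) and mounts it into the override volume:

```shell
# Create a local override directory with a startup script
mkdir -p ./etc-override
cat > ./etc-override/entrypoint-00.sh <<'EOF'
#!/bin/sh
# Illustrative tweak: raise max-disk-usage in the assembled broker.xml
sed -i 's|<max-disk-usage>90</max-disk-usage>|<max-disk-usage>95</max-disk-usage>|' \
  /var/lib/artemis/etc/broker.xml
EOF

docker run -it --rm \
  -v "$PWD/etc-override:/var/lib/artemis/etc-override" \
  vromero/activemq-artemis
```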

If you would like to see the final result of your transformations, execute the following:

docker run -it --rm \
  -v /var/artemis-data/override:/var/lib/artemis/etc-override \
  vromero/activemq-artemis \
  cat ../etc/broker.xml

5.12 Broker Config

ActiveMQ Artemis allows you to override key configuration values using system properties. This Docker image has built-in support for setting these values by passing environment variables prefixed with BROKER_CONFIG_ to the container.

Below is an example which overrides the global-max-size and disk-scan-period values:

docker run -it --rm \
  -p 8161:8161 \
  -e BROKER_CONFIG_GLOBAL_MAX_SIZE=50000 \
  -e BROKER_CONFIG_DISK_SCAN_PERIOD=6000 \
  vromero/activemq-artemis
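The mapping from variable name to property name follows a simple convention, shown here as a standalone illustration (the to_property helper is not part of the image, just a way to demonstrate the rule): drop the BROKER_CONFIG_ prefix, lowercase, and replace underscores with dashes.

```shell
# Illustrative helper: derive the broker property name from the env var name
to_property() {
  echo "$1" | sed 's/^BROKER_CONFIG_//' | tr 'A-Z_' 'a-z-'
}

to_property BROKER_CONFIG_GLOBAL_MAX_SIZE   # prints: global-max-size
to_property BROKER_CONFIG_DISK_SCAN_PERIOD  # prints: disk-scan-period
```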

5.13 Environment Variables

Additionally, the following environment variables are supported:

Env Var Default Description
JAVA_OPTS (empty) Passes additional Java options to the Artemis runtime
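For instance, extra JVM flags can be appended through JAVA_OPTS (the flag shown is just an example):

```shell
docker run -it --rm \
  -e 'JAVA_OPTS=-Djava.net.preferIPv4Stack=true' \
  vromero/activemq-artemis
```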

5.14 Mount points

Mount point Description
/var/lib/artemis/data Holds the data files used for storing persistent messages
/var/lib/artemis/etc Holds the instance configuration files
/var/lib/artemis/etc-override Holds configuration override snippets (broker-{{num}}.xml, broker-{{num}}.xslt) and entrypoint scripts (entrypoint-{{num}}.sh)
/var/lib/artemis/lock Holds the command line locks (typically not useful to mount)
/opt/jmx-exporter/etc-override Holds the configuration file for jmx-exporter jmx-exporter-config.yaml

5.15 Exposed ports

Port Description
8161 Web Server
9404 JMX Exporter
61616 Core, MQTT, AMQP, HornetQ, STOMP, OpenWire
5445 HornetQ, STOMP
5672 AMQP
1883 MQTT
61613 STOMP

6. Running in orchestrators

At the moment only Docker is directly supported for this image. However, there is an attempt to create a Helm chart for Kubernetes, as well as some configuration tuning for OpenShift.

6.1 Running in Kubernetes

ActiveMQ Artemis can leverage JGroups to discover the members of the cluster, and JGroups can be extended with a plugin called jgroups-kubernetes that allows JGroups to perform discovery through Kubernetes. Both the KUBE_PING (via the Kubernetes API) and DNS_PING (using the SRV records of a Kubernetes service) building blocks are included to facilitate initial membership discovery.

jgroups-kubernetes version 0.9.3 is included in the classpath of this image; however, everything about the configuration of JGroups and jgroups-kubernetes is left to the user.

If you prefer an easier solution to run a cluster of ActiveMQ Artemis nodes, there is an attempt to create a Helm chart by the same author of this image. It can be found here. It leverages jgroups-kubernetes in a transparent way.

6.2 OpenShift

OpenShift has diverged a bit from Kubernetes (e.g. it automounts empty volumes into all declared volumes without the user asking for it) and from Docker (e.g. it runs as a random user).

The biggest problem when running this image is the automount of empty directories, because it empties the etc directory. In order to restore it, the environment variable RESTORE_CONFIGURATION has been created. It can be used as follows:

oc new-app --name=artemis vromero/activemq-artemis -e RESTORE_CONFIGURATION=true

7. License

View license information for the software contained in this image.

8. User Feedback

8.1 Issues

If you have any problems with or questions about this image, please contact us through a GitHub issue.

8.2 Contributing

You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.

Before you start to code, we recommend discussing your plans through a GitHub issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your design, and help you find out if someone else is working on the same thing.

activemq-artemis-docker's People

Contributors

djanavar, dwickern, fernandofederico1984, havret, jarek-przygodzki, jerhat, middagj, mobe91, pawelj-pl, stupidsheep, tahubu, thorbenheins, vromero


activemq-artemis-docker's Issues

Configure cluster in Kubernetes with HA

Hello,

I've used your chart to deploy a cluster in Kubernetes with HA.

But when deploying the StatefulSet,

I got this error on pod-0:


14:47:20,496 INFO  [org.apache.activemq.artemis.core.server] AMQ221034: Waiting indefinitely to obtain live lock
14:47:20,497 INFO  [org.apache.activemq.artemis.core.server] AMQ221035: Live Server Obtained live lock
14:47:20,655 ERROR [org.apache.activemq.artemis.core.client] AMQ214016: Failed to create netty connection: java.net.UnknownHostException: jms-service-1.jms-service.default.svc.cluster.local
	at java.net.InetAddress.getAllByName0(InetAddress.java:1280) [rt.jar:1.8.0_162]
	at java.net.InetAddress.getAllByName(InetAddress.java:1192) [rt.jar:1.8.0_162]
	at java.net.InetAddress.getAllByName(InetAddress.java:1126) [rt.jar:1.8.0_162]
	at java.net.InetAddress.getByName(InetAddress.java:1076) [rt.jar:1.8.0_162]
	at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:146) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.util.internal.SocketUtils$8.run(SocketUtils.java:143) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at java.security.AccessController.doPrivileged(Native Method) [rt.jar:1.8.0_162]
	at io.netty.util.internal.SocketUtils.addressByName(SocketUtils.java:143) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.DefaultNameResolver.doResolve(DefaultNameResolver.java:43) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:63) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.SimpleNameResolver.resolve(SimpleNameResolver.java:55) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.java:57) [netty-all-4.1.16.Final.jar:4.1.16.Final]
	at io.netty.resolver.InetSocketAddressResolver.doResolve(InetSocketAddressResolver.


Here is my StatefulSet:


apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: jms-service
  labels:
    app: jms-service
spec:
  serviceName: jms-service
  replicas: 2
  selector:
    matchLabels:
      app: jms-service
  template:
    metadata:
      labels:
        app: jms-service
    spec:
      containers:
      - name: jms-service
        image: kube-registry:5000/tk/jms-service:2.5
        ports:
        - containerPort: 8161
          name: http
        - containerPort: 61616
          name: core
        - containerPort: 5672
          name: amqp
        env:
        - name: ARTEMIS_USERNAME
          value: admin
        - name: ARTEMIS_PASSWORD
          value: admin
        volumeMounts:
        - name: config-override
          mountPath: /var/lib/artemis/etc-override
        - name: config-override-template
          mountPath: /var/lib/artemis/etc-override-template
        imagePullPolicy: Always
      initContainers:
      - name: init-myservice
        image: kube-registry:5000/tk/jms-service:2.5
        command: ['/bin/bash', '/var/lib/artemis/etc-override-template/configure-cluster.sh']
        volumeMounts:
        - name: data
          mountPath: /var/lib/artemis/data
        - name: config-override
          mountPath: /var/lib/artemis/etc-override
        - name: config-override-template
          mountPath: /var/lib/artemis/etc-override-template
      volumes:
      - name: config-override
        emptyDir: {}
      - name: config-override-template
        configMap:
          name: jms-service-map
      - name: data
        emptyDir: {}

And the headless Service:


apiVersion: v1
kind: Service
metadata:
  name:  jms-service
  annotations:
      # Make sure DNS is resolvable during initialization.
      service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
  publishNotReadyAddresses: true
  ports:
    - port: 8161
      name: http
      targetPort: http
    - port: 61616
      name: core
      targetPort: core
    - port: 5672
      name: amqp
      targetPort: amqp
  clusterIP: None
  selector:
    app: jms-service


But if I delete pod-0, then the pod starts OK, but then this warning appears on pod-1:

15:04:37,825 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]
15:04:37,842 WARN [org.apache.activemq.artemis.core.client] AMQ212037: Connection failure has been detected: AMQ119015: The connection was disconnected because of server shutdown [code=DISCONNECTED]

And when trying to access the console, I can't log in with the username and password. Well, I can log in, but with a lot of retries; the login sometimes works and sometimes doesn't.

Any help ?

Failed to connect to server

I ran the docker command with/without username/password and I got all the proper INFO messages, but I cannot connect to http://0.0.0.0:8161/console and also cannot use 0.0.0.0:1883 when trying to connect to the MQTT broker.

Do I need to set up a custom address and override default configuration?

PS. I'm using macOS and Docker version 17.09.0-ce.

<name> should be the hostname

In order to have a better experience in the console, the <name> element in broker.xml should be the host name.

Dockerizing Apache ActiveMQ Artemis Master/Slave config not working

I was able to successfully setup Apache ActiveMQ Artemis Master/Slave replication on my 2 VM cluster.

VM1 : 172.29.219.89

VM2 : 172.29.219.104

My broker.xml for Master node is :

  <connectors>
    <connector name="artemis">tcp://172.29.219.89:61616</connector>
    <connector name="cluster-connector">tcp://172.29.219.104:61616</connector>
  </connectors> 

  <cluster-user>cluster-user</cluster-user>
  <cluster-password>cluster-password</cluster-password>

  <cluster-connections>
   <cluster-connection name="cluster1">
    <address>*</address>
    <connector-ref>artemis</connector-ref>
    <retry-interval>1000</retry-interval>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
     <static-connectors>
      <connector-ref>cluster-connector</connector-ref>
     </static-connectors>
   </cluster-connection>
  </cluster-connections>


  <ha-policy>
    <replication>
     <master>
        <check-for-live-server>true</check-for-live-server>
     </master>
    </replication>
  </ha-policy>

My broker.xml for Slave node is :

  <connectors>
    <connector name="artemis">tcp://172.29.219.104:61616</connector>
    <connector name="cluster-connector">tcp://172.29.219.89:61616</connector>
  </connectors> 

  <cluster-user>cluster-user</cluster-user>
  <cluster-password>cluster-password</cluster-password>

  <cluster-connections>
   <cluster-connection name="cluster1">
    <address>*</address>
    <connector-ref>artemis</connector-ref>
    <retry-interval>1000</retry-interval>
    <message-load-balancing>ON_DEMAND</message-load-balancing>
    <max-hops>1</max-hops>
     <static-connectors>
      <connector-ref>cluster-connector</connector-ref>
     </static-connectors>
   </cluster-connection>
  </cluster-connections>


  <ha-policy>
    <replication>
     <slave>
         <allow-failback>true</allow-failback>
     </slave>
    </replication>
  </ha-policy>

The above configuration when deployed on just the 2 VMs works perfectly fine. As soon as I take the Master down, the failover is instantaneous and when I bring back the master, the fail back is instantaneous too.

Now I want to dockerize this.

My Dockerfile is:

COPY initialize.sh /

RUN  chmod a+x initialize.sh

RUN yum clean all && yum install -y unzip java-1.8.0-openjdk.x86_64

RUN curl -f -L -o apache-artemis-2.4.0-bin.zip http://apache.mirrors.spacedump.net/activemq/activemq-artemis/2.4.0/apache-artemis-2.4.0-bin.zip

RUN unzip -qd /opt apache-artemis-2.4.0-bin.zip

EXPOSE 8080 61616 5672 61613 5445 1883 

ENTRYPOINT [ "/initialize.sh" ] 

The initialize.sh just sets up the brokers and loads the respective broker.xml files for the Master and Slave configs.

My Docker container for the Master is deployed on the Master node. I start the Docker container with the command:

docker run -p 8080:8080 -p 61616:61616 -p 5672:5672 -p 61613:61613 -p 5445:5445 -p 1883:1883 <container-id> --state master

My Docker container for the Slave is deployed on the Slave node. I start the Docker container with:

docker run -p 8080:8080 -p 61616:61616 -p 5672:5672 -p 61613:61613 -p 5445:5445 -p 1883:1883 <container-id> --state slave

The broker.xml configs I am loading into the containers are the same as the ones above.

But in this case when I take down the Master, the failover takes over 1 min to happen.

The logs are:

14:32:43,532 INFO  [org.apache.activemq.artemis.core.server] AMQ221066: Initiating quorum vote: LiveFailoverQuorumVote
14:32:43,535 INFO  [org.apache.activemq.artemis.core.server] AMQ221067: Waiting 30 seconds for quorum vote results.
14:32:43,535 INFO  [org.apache.activemq.artemis.core.server] AMQ221068: Received all quorum votes.
14:32:43,536 INFO  [org.apache.activemq.artemis.core.server] AMQ221071: Failing over based on quorum vote results.
14:32:43,561 INFO  [org.apache.activemq.artemis.core.server] AMQ221037: ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001 to become 'live'
14:32:43,591 WARN  [org.apache.activemq.artemis.core.client] AMQ212004: Failed to connect to server.
14:32:43,854 INFO  [org.apache.activemq.artemis.core.server] AMQ221003: Deploying queue DLQ on address DLQ
14:32:43,855 INFO  [org.apache.activemq.artemis.core.server] AMQ221003: Deploying queue ExpiryQueue on address ExpiryQueue
14:32:44,261 INFO  [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
14:32:44,318 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61616 for protocols [CORE,MQTT,AMQP,STOMP,HORNETQ,OPENWIRE]
14:32:44,345 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:5445 for protocols [HORNETQ,STOMP]
14:32:44,348 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:5672 for protocols [AMQP]
14:32:44,365 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:1883 for protocols [MQTT]
14:32:44,368 INFO  [org.apache.activemq.artemis.core.server] AMQ221020: Started EPOLL Acceptor at 0.0.0.0:61613 for protocols [STOMP]

And the fail back does not occur at all when the Master is back up.

On the slave container, all I see in the logs is:

14:34:29,464 INFO  [org.apache.activemq.artemis.core.server] AMQ221027: Bridge ClusterConnectionBridge@66d554c6 [name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, queue=QueueImpl[name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001], temp=false]@21ef670d targetConnector=ServerLocatorImpl (identity=(Cluster-connection-bridge::ClusterConnectionBridge@66d554c6 [name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, queue=QueueImpl[name=$.artemis.internal.sf.cluster1.f4ad2f1c-285d-11e8-acde-0242ac110001, postOffice=PostOfficeImpl [server=ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001], temp=false]@21ef670d targetConnector=ServerLocatorImpl [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-29-219-89], discoveryGroupConfiguration=null]]::ClusterConnectionImpl@1909325807[nodeUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001, connector=TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-29-219-104, address=*, server=ActiveMQServerImpl::serverUUID=83d3b7c9-285d-11e8-bfc9-0242ac110001])) [initialConnectors=[TransportConfiguration(name=artemis, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=61616&host=172-29-219-89], discoveryGroupConfiguration=null]] is connected

Does anyone have any idea why the Master/Slave replication isn't working in Docker?

Error when used with a volume

[org.apache.activemq.artemis.core.server] AMQ222141: Node Manager can not open file /var/lib/artemis/./data/journal/server.lock: java.io.IOException: No such file or directory
        at java.io.UnixFileSystem.createFileExclusively(Native Method) [rt.jar:1.8.0_151]

My docker-compose.yml:

version: '3'
services:
  amq-artemis:
    image: vromero/activemq-artemis:2.4.0
    ports:
      - 8161:8161
      - 61616:61616
      - 1199:1199
      - 1198:1198
    environment:
      ARTEMIS_USERNAME: admin
      ARTEMIS_PASSWORD: admin
      ARTEMIS_MIN_MEMORY: 256M
      ARTEMIS_MAX_MEMORY: 1024M
      ARTEMIS_PERF_JOURNAL: AUTO
      ENABLE_JMX: 'true'
      JMX_PORT: 1199
      JMX_RMI_PORT: 1198
    volumes:
      - /opt/docker/artemis/data:/var/lib/artemis/data

Sending a message through the console on a queue gives an error

I am running the docker image with the following command:
docker run -it --rm -p 8161:8161 -p 61616:61616 vromero/activemq-artemis

Then if I go to the Artemis console at port 8161 and try to send a message on a queue, it gives me the error message below (as can be seen in the attached screenshot):

[Core] Operation sendMessage(java.util.Map, int, java.lang.String, boolean, java.lang.String, java.lang.String) failed due to: java.lang.IllegalStateException : AMQ119213: User: null does not have permission='SEND' for queue DLQ on address DLQ
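
One thing to try (a sketch, not a confirmed fix, since the console session here appears to be unauthenticated) is granting the "amq" role explicit send/consume permissions through an etc-override snippet:

```xml
<configuration xmlns="urn:activemq">
  <core xmlns="urn:activemq:core">
    <security-settings>
      <!-- "amq" is the role the image's entrypoint creates; adjust to your setup -->
      <security-setting match="#">
        <permission type="send" roles="amq"/>
        <permission type="consume" roles="amq"/>
      </security-setting>
    </security-settings>
  </core>
</configuration>
```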

Container does not appear to terminate gracefully

I'm using 2.6.2. The SIGTERM signal gets lost as far as I can see, and when I issue docker stop I never see this in the logs:

2018-07-31 01:12:30,398 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.2 [c88bc5de-945e-11e8-8b55-0242ac110002] stopped, uptime 5.710 seconds

To reproduce:

$ docker run -d -e ARTEMIS_PERF_JOURNAL=ALWAYS --name graceful-artemis vromero/activemq-artemis
$ docker logs -f graceful-artemis
$ docker stop graceful-artemis
$ docker logs -f graceful-artemis

I do get graceful shutdown when running as follows in -it mode:

$ docker run -it --name vromero-artemis vromero/activemq-artemis

The context is that, in clusters, graceful shutdown is important for message redistribution.

My humble apologies if this is a PEBKAC issue.

artemis clustering with ha

Hi,

From what I can tell clustering is not yet supported, although there was some talk of it on the thread that led to the creation of the activemq-artemis-docker image.

I believe that a

  • symmetric cluster; with
  • colocated replication

... would fit most expectations regarding clustering and ha. In any case this is what I aim for.

A clustered Artemis Docker image should work with any number of replicas in a Docker Compose file similar to:

version: "3"
services:
  artemis:
    image: vromero/activemq-artemis
    ports:
      - "8161:8161"
      - "61616:61616"
      - "5445:5445"
      - "5672:5672"
      - "1883:1883"
      - "61613:61613"
    deploy:
      replicas: 4
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
    volumes:
     - "/var/lib/artemis/data"
     - "/var/lib/artemis/etc"

Caveat: I am taking my first baby steps using Docker so I am bound to miss something...

Ideally, service discovery and cluster connections would use their own backend network, not accessible to other services defined in the stack.

2.2.0 tags not pushed/do not exist on docker hub

Thanks for the great job; this is one of the best Artemis images on Docker Hub. However, while setting up my test environment I noticed that the recent 2.2.0 and 2.2.0-alpine tags are not properly pushed or created on Docker Hub. Could you please fix this?

Not possible to extend image due to COPY feeding into ENTRYPOINT.

I need to be able to do some pre-processing prior to starting Artemis -- specifically around obtaining and installing certificates specific to my environment, but, because the COPY feeds directly into the ENTRYPOINT clause, there is no way to insert new actions -- as overriding the ENTRYPOINT prevents the prior COPY from being invoked.

There are two options:

  1. make the COPY statement stand alone:
    e.g. COPY "assets/docker-entrypoint.sh" "./docker-entrypoint.sh"

  2. add a clause to docker-entrypoint.sh that looks for another, optional script to run. e.g.
    if [[ -f ./preprocess.sh ]]; then
    ./preprocess.sh
    fi
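
Option 2 can be simulated without Docker. In this sketch the hook mechanism, the file names, and the HOOK variable are all illustrative, not the image's actual behavior:

```shell
# Simulate an entrypoint that sources an optional pre-processing hook
# before starting the broker (paths and names are illustrative).
tmp=$(mktemp -d)
cat > "$tmp/docker-entrypoint.sh" <<'EOF'
#!/bin/sh
set -e
if [ -f "$HOOK" ]; then
  . "$HOOK"        # run user-supplied pre-processing, e.g. install certs
fi
echo "starting broker"
EOF
chmod +x "$tmp/docker-entrypoint.sh"
echo 'echo "installing certificates"' > "$tmp/preprocess.sh"
HOOK="$tmp/preprocess.sh" "$tmp/docker-entrypoint.sh"
```

Running it prints `installing certificates` followed by `starting broker`; when the hook file is absent, the entrypoint simply starts the broker.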

User credential environment vars not working

The replacement of username/passwords in artemis-users.properties is not working since the passwords are not stored in plain text. The problematic part is:

sed -i "s/artemis=simetraehcapa/$ARTEMIS_USERNAME=$ARTEMIS_PASSWORD/g" ../etc/artemis-users.properties
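
The mismatch is easy to reproduce: newer broker instances store the password hashed and pad the line with spaces, so the literal pattern never matches (the `ENC(...)` value below is invented for illustration):

```shell
# The stored line no longer equals "artemis=simetraehcapa", so this sed
# is a silent no-op (the hash value is made up for illustration).
f=$(mktemp)
printf 'artemis = ENC(1024:0123abcd)\n' > "$f"
sed -i "s/artemis=simetraehcapa/admin=secret/g" "$f"
grep -q 'admin=secret' "$f" && echo "replaced" || echo "no match"
```

This prints `no match`. A replacement keyed on the user name alone, e.g. `sed -i -E "s/^artemis *=.*/$ARTEMIS_USERNAME = $ARTEMIS_PASSWORD/"`, would be one way to avoid depending on the stored password format.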

Will not run with mounted non-empty etc

If you run the image with a mounted etc containing custom configuration, it will crash with the message:

error: exec: "./artemis": stat ./artemis: no such file or directory

Note that this only happens when you use docker run to start a new container with an existing etc mount point. Stopping and starting an existing container works. This makes using this image in Docker Cloud with etc mounted practically impossible.

This happens because of the if in docker-entrypoint.sh:

if [ ! "$(ls -A /var/lib/artemis/etc)" ]

which stops the broker from being created if the mounted etc is detected. A crude workaround could be like this:

if [ ! "$(ls -A /var/lib/artemis/bin)" ]; then

	# Copy mounted etc, if existing
	if [ "$(ls -A /var/lib/artemis/etc)" ]; then
		cd /var/lib/artemis
		cp -r etc etc_copy
	fi

	# Create broker instance
	cd /var/lib && \
	  /opt/apache-artemis-1.5.0/bin/artemis create artemis \
	    --force \
		--home /opt/apache-artemis \
		--user $EFFECTIVE_ARTEMIS_USERNAME \
		--password $EFFECTIVE_ARTEMIS_PASSWORD \
		--role amq \
		--require-login \
		--cluster-user artemisCluster \
		--cluster-password simetraehcaparetsulc

	# Replace broker etc with mounted
	if [ "$(ls -A /var/lib/artemis/etc_copy)" ]; then
		cd artemis
		rm -f etc/*
		mv etc_copy/* etc/
		rm -r etc_copy
        else
		# Ports are only exposed with an explicit argument; there is no need to bind
		# the web console to localhost
		cd /var/lib/artemis/etc && \
		  xmlstarlet ed -L -N amq="http://activemq.org/schema" \
			-u "/amq:broker/amq:web/@bind" \
			-v "http://0.0.0.0:8161" bootstrap.xml
	fi

	chown -R artemis.artemis /var/lib/artemis
	
	cd $WORKDIR
fi

Note that this will move the mounted config files, which is not necessarily good. A better approach might be to create the broker in a temporary folder and merge the necessary files properly without messing around with the mounted files.
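
That "merge in a temporary folder" idea can be sketched like this (paths are illustrative): generate the broker instance in a scratch directory, then copy into the mounted etc only the files it does not already provide, leaving the mounted files untouched:

```shell
# Overlay generated defaults under a mounted etc without overwriting it.
tmp=$(mktemp -d)
mkdir -p "$tmp/generated/etc" "$tmp/mounted/etc"
echo "default broker"    > "$tmp/generated/etc/broker.xml"
echo "default bootstrap" > "$tmp/generated/etc/bootstrap.xml"
echo "custom broker"     > "$tmp/mounted/etc/broker.xml"
cp -n "$tmp/generated/etc/"* "$tmp/mounted/etc/"   # -n: never overwrite
cat "$tmp/mounted/etc/broker.xml"      # custom broker (preserved)
cat "$tmp/mounted/etc/bootstrap.xml"   # default bootstrap (filled in)
```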

ARTEMIS_MAX_MEMORY environment variable is not respected

The default value in artemis.profile is "-Xmx2G", so the literal replacement does not match and the variable has no effect.

Solution

# Update min memory if the argument is passed
if [[ "$ARTEMIS_MIN_MEMORY" ]]; then
  sed -i "s/-Xms[^ ]*/-Xms$ARTEMIS_MIN_MEMORY/g" ../etc/artemis.profile
fi

# Update max memory if the argument is passed
if [[ "$ARTEMIS_MAX_MEMORY" ]]; then
  sed -i "s/-Xmx[^ ]*/-Xmx$ARTEMIS_MAX_MEMORY/g" ../etc/artemis.profile
fi
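
The pattern-based sed above can be checked against a profile line shaped like the default (the JAVA_ARGS contents here are illustrative):

```shell
# Verify that "-Xmx[^ ]*" matches whatever the current -Xmx value is.
f=$(mktemp)
echo 'JAVA_ARGS="-Xms512M -Xmx2G -XX:+PrintClassHistogram"' > "$f"
ARTEMIS_MAX_MEMORY=1024M
sed -i "s/-Xmx[^ ]*/-Xmx$ARTEMIS_MAX_MEMORY/g" "$f"
grep -o -- '-Xmx[^ "]*' "$f"   # -Xmx1024M
```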

Entrypoint Script Bug

There is a bug in the entrypoint script that prevents the ARTEMIS_USERNAME environment variable from being applied to the artemis-roles.properties file.

Line 9 in docker-entrypoint.sh:
sed -i "s/apollo=amq/$ARTEMIS_USERNAME=amq/g" ../etc/artemis-roles.properties
should be:
sed -i "s/amq=apollo/amq=$ARTEMIS_USERNAME/g" ../etc/artemis-roles.properties

Also, the documentation says to use the environment variables ACTIVEMQ_MIN_MEMORY and ACTIVEMQ_MAX_MEMORY, but the script actually uses ARTEMIS_MIN_MEMORY and ARTEMIS_MAX_MEMORY.

mqtt broker doesn't process messages

I started a container with all default values and can use the management interface, where I can also see the connected clients. I use MQTT...

vromero/activemq-artemis:latest-alpine "/docker-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:32809->1883/tcp, 0.0.0.0:32808->5445/tcp, 0.0.0.0:32807->5672/tcp, 0.0.0.0:32806->8161/tcp, 0.0.0.0:32805->61613/tcp, 0.0.0.0:32804->61616/tcp

Despite all this, I cannot deliver a single message. There is no error message on the publisher client's side, but no message at all arrives on the subscriber side.

Is there any way to raise the log level, or any recommendation on how to track down the problem?
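
On raising the log level: Artemis 2.x instances carry an etc/logging.properties with a `logger.level=INFO` line (key name per that default; verify against your image version). Switching it to DEBUG can be done with a sed, demonstrated here on a scratch copy:

```shell
# Bump the root logger to DEBUG (run against a scratch copy here; in the
# container the file would be /var/lib/artemis/etc/logging.properties).
f=$(mktemp)
echo 'logger.level=INFO' > "$f"
sed -i 's/^logger.level=.*/logger.level=DEBUG/' "$f"
cat "$f"   # logger.level=DEBUG
```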

Unable to connect using AWS ECS Container Service

I cannot connect to the image on port 8161.

For example, when I telnet to the IP on port 8161 I get connection refused.

I use AWS ECS Container Service with the minimal configuration:

{
  "requiresAttributes": [
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.logging-driver.awslogs",
      "targetId": null,
      "targetType": null
    },
    {
      "value": null,
      "name": "com.amazonaws.ecs.capability.docker-remote-api.1.19",
      "targetId": null,
      "targetType": null
    }
  ],
  "taskDefinitionArn": "arn:aws:ecs:eu-west-2:893749253116:task-definition/ventureCloud-messaging-qa:2",
  "networkMode": "bridge",
  "status": "ACTIVE",
  "revision": 2,
  "taskRoleArn": null,
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "memory": 512,
      "extraHosts": null,
      "dnsServers": null,
      "disableNetworking": null,
      "dnsSearchDomains": null,
      "portMappings": [
        {
          "hostPort": 61613,
          "containerPort": 61613,
          "protocol": "tcp"
        },
        {
          "hostPort": 8161,
          "containerPort": 8161,
          "protocol": "tcp"
        }
      ],
      "hostname": null,
      "essential": true,
      "entryPoint": null,
      "mountPoints": [],
      "name": "VentureCloudMessageBrokerContainer",
      "ulimits": null,
      "dockerSecurityOptions": null,
      "environment": [],
      "links": null,
      "workingDirectory": null,
      "readonlyRootFilesystem": null,
      "image": "vromero/activemq-artemis",
      "command": null,
      "user": null,
      "dockerLabels": null,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "vc-api-qa",
          "awslogs-region": "eu-west-2",
          "awslogs-stream-prefix": "qamessaging"
        }
      },
      "cpu": 1,
      "privileged": null,
      "memoryReservation": null
    }
  ],
  "placementConstraints": [],
  "volumes": [],
  "family": "ventureCloud-messaging-qa"
}

Starting with configuration snippet (broker-00.xml) fails

When I try to run the container with the example configuration snippet (broker-00.xml), booting fails with the following message:

Merging input with '/var/lib/artemis//etc-override/broker-00.xml'
[Fatal Error] :147:18: The markup in the document following the root element must be well-formed.
Exception in thread "main" org.xml.sax.SAXParseException; lineNumber: 147; columnNumber: 18; The markup in the document following the root element must be well-formed.
at com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:257)
at com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:339)
at org.apache.activemq.artemis.utils.XMLUtil.readerToElement(XMLUtil.java:89)
at org.apache.activemq.artemis.utils.XMLUtil.stringToElement(XMLUtil.java:55)
at org.apache.activemq.artemis.core.config.FileDeploymentManager.readConfiguration(FileDeploymentManager.java:76)
at org.apache.activemq.artemis.cli.commands.Configurable.getFileConfiguration(Configurable.java:93)
at org.apache.activemq.artemis.cli.commands.Run.execute(Run.java:64)
at org.apache.activemq.artemis.cli.Artemis.internalExecute(Artemis.java:125)
at org.apache.activemq.artemis.cli.Artemis.execute(Artemis.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.activemq.artemis.boot.Artemis.execute(Artemis.java:129)
at org.apache.activemq.artemis.boot.Artemis.main(Artemis.java:49)

When I export the container, the broker.xml seems OK to me (so something else is broken?).

the /etc/broker.xml is not merged with /etc-override/broker-00.xml

Dear Developers,

I'm using vromero/activemq-artemis:2.3.0-alpine (and tested also vromero/activemq-artemis:2.6.0-alpine)
minikube version: v0.27.0 (and tested also 0.26.1 and 0.25.2)

A few weeks ago the minikube YAML worked correctly: the Artemis pod was created and the queues were deployed when the pod started up.

If I now log into the artemis-0 pod and navigate to /var/lib/artemis/etc-override, broker-00.xml is present,
but if I look at /var/lib/artemis/etc/broker.xml I see that broker-00.xml was not merged into this file. It used to be: when deploying the pod, the queues were also deployed and could then be used.

The top part of broker-00.xml is:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

<core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
    <security-settings>
       <security-setting match="#">
          <permission type="createNonDurableQueue" roles="amq"/>
          <permission type="deleteNonDurableQueue" roles="amq"/>
          <permission type="createDurableQueue" roles="amq"/>
          <permission type="deleteDurableQueue" roles="amq"/>
          <permission type="consume" roles="guest"/>
          <permission type="send" roles="guest"/>
          <!-- we need this otherwise ./artemis data imp wouldn't work -->
          <permission type="manage" roles="amq"/>
          <!-- the indexer must be able to browse the queue -->
          <permission type="browse" roles="amq"/>
       </security-setting>
    </security-settings>

    <addresses>
        <address name="DLQ">
          <anycast>
             <queue name="DLQ"/>
          </anycast>
        </address>
        <address name="ExpiryQueue">
          <anycast>
             <queue name="ExpiryQueue"/>
          </anycast>
        </address>


        <address name="accessQueue">
          <anycast>
            <queue name="accessQueue">
               <durable>true</durable>
            </queue>
          </anycast>
        </address>

If you need more information, please contact me.

I hope you can provide a solution for this problem.

Kind regards,
Egbert

Getting "The command '.....' returned a non-zero code: 4 ERROR

I am building the image and get the error:

returned a non-zero code: 4

The full log of the image build is:

[root@minion activemq-artemis-docker-master]# docker build -f Dockerfile --tag=test-0.1 . --no-cache
Sending build context to Docker daemon 38.91 kB
Step 1 : FROM openjdk:8
 ---> 891c9734d5ab
Step 2 : MAINTAINER Victor Romero <[email protected]>
 ---> Running in e22befa8824e
 ---> 133f398ae063
Removing intermediate container e22befa8824e
Step 3 : RUN groupadd -r artemis && useradd -r -g artemis artemis
 ---> Running in a106367e4bfc
 ---> a374be22e9bd
Removing intermediate container a106367e4bfc
Step 4 : RUN apt-get -qq -o=Dpkg::Use-Pty=0 update && apt-get -qq -o=Dpkg::Use-Pty=0 upgrade -y &&   apt-get -qq -o=Dpkg::Use-Pty=0 install -y --no-install-recommends libaio1 xmlstarlet jq &&   rm -rf /var/lib/apt/lists/*
 ---> Running in 6e72be76c801
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 23468 files and directories currently installed.)
Preparing to unpack .../curl_7.52.1-5+deb9u5_amd64.deb ...
Unpacking curl (7.52.1-5+deb9u5) over (7.52.1-5+deb9u4) ...
Preparing to unpack .../libcurl3_7.52.1-5+deb9u5_amd64.deb ...
Unpacking libcurl3:amd64 (7.52.1-5+deb9u5) over (7.52.1-5+deb9u4) ...
Preparing to unpack .../libcurl3-gnutls_7.52.1-5+deb9u5_amd64.deb ...
Unpacking libcurl3-gnutls:amd64 (7.52.1-5+deb9u5) over (7.52.1-5+deb9u4) ...
Setting up libcurl3:amd64 (7.52.1-5+deb9u5) ...
Setting up libcurl3-gnutls:amd64 (7.52.1-5+deb9u5) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up curl (7.52.1-5+deb9u5) ...
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package libonig4:amd64.
(Reading database ... 23468 files and directories currently installed.)
Preparing to unpack .../0-libonig4_6.1.3-2_amd64.deb ...
Unpacking libonig4:amd64 (6.1.3-2) ...
Selecting previously unselected package libjq1:amd64.
Preparing to unpack .../1-libjq1_1.5+dfsg-1.3_amd64.deb ...
Unpacking libjq1:amd64 (1.5+dfsg-1.3) ...
Selecting previously unselected package jq.
Preparing to unpack .../2-jq_1.5+dfsg-1.3_amd64.deb ...
Unpacking jq (1.5+dfsg-1.3) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../3-libaio1_0.3.110-3_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-3) ...
Selecting previously unselected package libxslt1.1:amd64.
Preparing to unpack .../4-libxslt1.1_1.1.29-2.1_amd64.deb ...
Unpacking libxslt1.1:amd64 (1.1.29-2.1) ...
Selecting previously unselected package xmlstarlet.
Preparing to unpack .../5-xmlstarlet_1.6.1-2_amd64.deb ...
Unpacking xmlstarlet (1.6.1-2) ...
Setting up libonig4:amd64 (6.1.3-2) ...
Setting up libxslt1.1:amd64 (1.1.29-2.1) ...
Setting up libjq1:amd64 (1.5+dfsg-1.3) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up libaio1:amd64 (0.3.110-3) ...
Setting up jq (1.5+dfsg-1.3) ...
Setting up xmlstarlet (1.6.1-2) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
 ---> 64cb6a341d65
Removing intermediate container 6e72be76c801
Step 5 : ENV GOSU_VERSION 1.9
 ---> Running in 299b6a5c8c89
 ---> 9207f44a232b
Removing intermediate container 299b6a5c8c89
Step 6 : RUN set -x     && apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/*     && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"     && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"     && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"     && export GNUPGHOME="$(mktemp -d)"     && (gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 || gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4)     && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu     && rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc     && chmod +x /usr/local/bin/gosu     && gosu nobody true
 ---> Running in 2984a5fdc2a1
+ apt-get update
Ign:1 http://deb.debian.org/debian stretch InRelease
Get:2 http://security.debian.org stretch/updates InRelease [63.0 kB]
Get:3 http://deb.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:4 http://deb.debian.org/debian stretch Release [118 kB]
Get:5 http://deb.debian.org/debian stretch Release.gpg [2434 B]
Get:6 http://security.debian.org stretch/updates/main amd64 Packages [453 kB]
Get:7 http://deb.debian.org/debian stretch-updates/main amd64 Packages [8431 B]
Get:8 http://deb.debian.org/debian stretch/main amd64 Packages [9530 kB]
Fetched 10.3 MB in 8s (1268 kB/s)
Reading package lists...
+ apt-get install -y --no-install-recommends ca-certificates wget
Reading package lists...
Building dependency tree...
Reading state information...
ca-certificates is already the newest version (20161130+nmu1).
wget is already the newest version (1.18-5+deb9u1).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
+ rm -rf /var/lib/apt/lists/deb.debian.org_debian_dists_stretch-updates_InRelease /var/lib/apt/lists/deb.debian.org_debian_dists_stretch-updates_main_binary-amd64_Packages.lz4 /var/lib/apt/lists/deb.debian.org_debian_dists_stretch_Release /var/lib/apt/lists/deb.debian.org_debian_dists_stretch_Release.gpg /var/lib/apt/lists/deb.debian.org_debian_dists_stretch_main_binary-amd64_Packages.lz4 /var/lib/apt/lists/lock /var/lib/apt/lists/partial /var/lib/apt/lists/security.debian.org_dists_stretch_updates_InRelease /var/lib/apt/lists/security.debian.org_dists_stretch_updates_main_binary-amd64_Packages.lz4
+ awk -F- { print $NF }
+ dpkg --print-architecture
+ dpkgArch=amd64
+ wget -O /usr/local/bin/gosu https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64
--2018-03-23 13:58:47--  https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5812fd6c-16fa-11e6-9847-985f5f7d9917?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135848Z&X-Amz-Expires=300&X-Amz-Signature=e8b1e3a9245c4c8e42caa42ee5366bf96263ecd408d908778523ce4096701f9c&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64&response-content-type=application%2Foctet-stream [following]
--2018-03-23 13:58:48--  https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5812fd6c-16fa-11e6-9847-985f5f7d9917?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135848Z&X-Amz-Expires=300&X-Amz-Signature=e8b1e3a9245c4c8e42caa42ee5366bf96263ecd408d908778523ce4096701f9c&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 54.231.82.74
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|54.231.82.74|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1804608 (1.7M) [application/octet-stream]
Saving to: '/usr/local/bin/gosu'


2018-03-23 13:58:49 (1.37 MB/s) - '/usr/local/bin/gosu' saved [1804608/1804608]

+ wget -O /usr/local/bin/gosu.asc https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64.asc
--2018-03-23 13:58:49--  https://github.com/tianon/gosu/releases/download/1.9/gosu-amd64.asc
Resolving github.com (github.com)... 192.30.253.113, 192.30.253.112
Connecting to github.com (github.com)|192.30.253.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5828570c-16fa-11e6-9e66-0433eb15bcd0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135850Z&X-Amz-Expires=300&X-Amz-Signature=5b7358312da6f10b29de64de67fce5dc820f272d14b12db26413263b0823bc2e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64.asc&response-content-type=application%2Foctet-stream [following]
--2018-03-23 13:58:50--  https://github-production-release-asset-2e65be.s3.amazonaws.com/19708981/5828570c-16fa-11e6-9e66-0433eb15bcd0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20180323%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20180323T135850Z&X-Amz-Expires=300&X-Amz-Signature=5b7358312da6f10b29de64de67fce5dc820f272d14b12db26413263b0823bc2e&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dgosu-amd64.asc&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 54.231.82.74
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|54.231.82.74|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 543 [application/octet-stream]
Saving to: '/usr/local/bin/gosu.asc'

     0K                                                       100%  323K=0.002s

2018-03-23 13:58:50 (323 KB/s) - ‘/usr/local/bin/gosu.asc’ saved [543/543]

+ mktemp -d
+ export GNUPGHOME=/tmp/tmp.wv5ldfVeuv
+ gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
gpg: keybox '/tmp/tmp.wv5ldfVeuv/pubring.kbx' created
gpg: keyserver receive failed: Server indicated a failure
+ gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4
gpg: /tmp/tmp.wv5ldfVeuv/trustdb.gpg: trustdb created
gpg: key 036A9C25BF357DD4: public key "Tianon Gravi <[email protected]>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
+ gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu
gpg: Signature made Wed May 11 04:56:44 2016 UTC
gpg:                using RSA key 036A9C25BF357DD4
gpg: Good signature from "Tianon Gravi <[email protected]>" [unknown]
gpg:                 aka "Tianon Gravi <[email protected]>" [unknown]
gpg:                 aka "Tianon Gravi <[email protected]>" [unknown]
gpg:                 aka "Andrew Page (tianon) <[email protected]>" [unknown]
gpg:                 aka "Andrew Page (tianon) <[email protected]>" [unknown]
gpg:                 aka "Andrew Page (Tianon Gravi) <[email protected]>" [unknown]
gpg:                 aka "Tianon Gravi (Andrew Page) <[email protected]>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: B42F 6819 007F 00F8 8E36  4FD4 036A 9C25 BF35 7DD4
+ rm -rf /tmp/tmp.wv5ldfVeuv /usr/local/bin/gosu.asc
+ chmod +x /usr/local/bin/gosu
+ gosu nobody true
 ---> edba455d1992
Removing intermediate container 2984a5fdc2a1
Step 7 : ENV ACTIVEMQ_ARTEMIS_VERSION 2.4.0
 ---> Running in f6d0c1afd738
 ---> e73684744663
Removing intermediate container f6d0c1afd738
Step 8 : RUN cd /opt && wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   wget -q http://apache.org/dist/activemq/KEYS &&   gpg --import KEYS &&   gpg apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   tar xfz apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   ln -s apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION} apache-artemis &&   rm -f apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz KEYS apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc
 ---> Running in 7f0ea558e7f4
The command '/bin/sh -c cd /opt && wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   wget -q https://repository.apache.org/content/repositories/releases/org/apache/activemq/apache-artemis/${ACTIVEMQ_ARTEMIS_VERSION}/apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   wget -q http://apache.org/dist/activemq/KEYS &&   gpg --import KEYS &&   gpg apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc &&   tar xfz apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz &&   ln -s apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION} apache-artemis &&   rm -f apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz KEYS apache-artemis-${ACTIVEMQ_ARTEMIS_VERSION}-bin.tar.gz.asc' returned a non-zero code: 4

Any idea why this is happening?

Openshift error finding logging.properties

Hello Victor,

Have you had success running this in OpenShift?
Running this in Docker locally works great for me.
Running in OpenShift seems very close:

minishift start --vm-driver vmwarefusion
oc login -u system:admin
oc new-app --name=artemis vromero/activemq-artemis
oc expose service artemis --port=61616
oc get pods
# the pod name for artemis is artemis-1-db4hr
oc logs artemis-1-db4hr -c artemis

The error I'm getting is:

sed: can't read ../etc/logging.properties: No such file or directory

Can you think of a reason why logging.properties would not be found?
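One plausible cause (an assumption, not confirmed from the image source): the entrypoint edits logging.properties through a path relative to the current working directory, so if OpenShift starts the container in a different directory the relative lookup fails. A minimal sketch of that failure mode, using hypothetical /tmp paths:

```shell
# Hypothetical layout mimicking the broker instance directory; the
# relative path ../etc/logging.properties only resolves from inside bin/.
mkdir -p /tmp/broker/etc /tmp/broker/bin /tmp/elsewhere/sub
echo 'logger.level=INFO' > /tmp/broker/etc/logging.properties

cd /tmp/broker/bin
sed -n '1p' ../etc/logging.properties          # resolves fine

cd /tmp/elsewhere/sub
sed -n '1p' ../etc/logging.properties || true  # sed: can't read ../etc/logging.properties
```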

Thanks

Alpine build error

Hi @vromero
Thank you for your work :)
How can we get the files needed to build the 2.4.0-alpine image?

Step 13/27 : COPY merge.xslt /opt/merge
lstat merge.xslt: no such file or directory

If I'm not wrong, only the Dockerfile for the 2.4.0-alpine image can be downloaded on its own.
So, could you give access to the artemis-2.4.0-alpine folder of this repo?

Jolokia CORS Error

Hi, when I deploy your Docker image to a host machine, the following error occurs: "Operation unknown failed due to: java.lang.Exception : Origin http://45.32.145.53:8161 is not allowed to call this agent"
(screenshot attached)
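For reference, Jolokia's CORS restrictions normally live in a jolokia-access.xml policy file; a hedged sketch follows (the exact location of this file inside the image is an assumption, and the origin shown is the one from the error message):

```xml
<!-- Sketch of a Jolokia access policy allowing the console's origin;
     prefer listing specific origins over "*" in production. -->
<restrict>
  <cors>
    <allow-origin>http://45.32.145.53:8161</allow-origin>
    <strict-checking/>
  </cors>
</restrict>
```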

XML merges no longer working as they should

When using the latest tag, the image fails to properly merge XML as it did in a previous working environment.

To replicate:
docker pull vromero/activemq-artemis

broker-00.xml:

<?xml version="1.0" encoding="UTF-8" standalone="no"?>

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
      <security-enabled>false</security-enabled>
   </core>
</configuration>

docker run -it --rm -v /home/artemis-override/:/var/lib/artemis/etc-override vromero/activemq-artemis:2.4.0 cat ../etc/broker.xml

returns:

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xi="http://www.w3.org/2001/XInclude" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <core xmlns="urn:activemq:core" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq:core ">
        <name>0.0.0.0</name>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>ASYNCIO</journal-type>
        <paging-directory>data/paging</paging-directory>
        <bindings-directory>data/bindings</bindings-directory>
        <journal-directory>data/journal</journal-directory>
        <large-messages-directory>data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>10</journal-pool-files>
        <journal-file-size>10M</journal-file-size>
        <journal-buffer-timeout>24000</journal-buffer-timeout>
        <journal-max-io>4096</journal-max-io>
        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>90</max-disk-usage>
        <critical-analyzer>true</critical-analyzer>
        <critical-analyzer-timeout>120000</critical-analyzer-timeout>
        <critical-analyzer-check-period>60000</critical-analyzer-check-period>
        <critical-analyzer-policy>HALT</critical-analyzer-policy>
        <acceptors>
            <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
            <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>
            <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
            <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
            <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
        </acceptors>
        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"></permission>
                <permission type="deleteNonDurableQueue" roles="amq"></permission>
                <permission type="createDurableQueue" roles="amq"></permission>
                <permission type="deleteDurableQueue" roles="amq"></permission>
                <permission type="createAddress" roles="amq"></permission>
                <permission type="deleteAddress" roles="amq"></permission>
                <permission type="consume" roles="amq"></permission>
                <permission type="browse" roles="amq"></permission>
                <permission type="send" roles="amq"></permission>
                <permission type="manage" roles="amq"></permission>
            </security-setting>
        </security-settings>
        <address-settings>
            <address-setting match="activemq.management#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
            <address-setting match="#">
                <dead-letter-address>DLQ</dead-letter-address>
                <expiry-address>ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
                <auto-create-queues>true</auto-create-queues>
                <auto-create-addresses>true</auto-create-addresses>
                <auto-create-jms-queues>true</auto-create-jms-queues>
                <auto-create-jms-topics>true</auto-create-jms-topics>
            </address-setting>
        </address-settings>
        <addresses>
            <address name="DLQ">
                <anycast>
                    <queue name="DLQ"></queue>
                </anycast>
            </address>
            <address name="ExpiryQueue">
                <anycast>
                    <queue name="ExpiryQueue"></queue>
                </anycast>
            </address>
        </addresses>
    </core>
    <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
        <security-enabled>false</security-enabled>
    </core>
</configuration>

Change to known working image:
docker pull vromero/activemq-artemis@sha256:626afff517d3ec0564987b7bbce17f1f8d55f5b55c5cf282d2a6049c0c1074a8

Output:

<?xml version="1.0"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements.  See the NOTICE file
distributed with this work for additional information
regarding copyright ownership.  The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License.  You may obtain a copy of the License at

  http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied.  See the License for the
specific language governing permissions and limitations
under the License.
-->
<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

   <!-- from 1.0.0 to 1.5.5 the following line should be : <core xmlns="urn:activemq:core"> -->
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">

      <name>0.0.0.0</name><persistence-enabled>true</persistence-enabled><!-- this could be ASYNCIO, MAPPED, NIO
           ASYNCIO: Linux Libaio
           MAPPED: mmap files
           NIO: Plain Java Files
       --><journal-type>ASYNCIO</journal-type><paging-directory>./data/paging</paging-directory><bindings-directory>./data/bindings</bindings-directory><journal-directory>./data/journal</journal-directory><large-messages-directory>./data/large-messages</large-messages-directory><journal-datasync>true</journal-datasync><journal-min-files>2</journal-min-files><journal-pool-files>-1</journal-pool-files><journal-file-size>10M</journal-file-size><!--
       This value was determined through a calculation.
       Your system could perform 5.56 writes per millisecond
       on the current journal configuration.
       That translates as a sync write every 180000 nanoseconds.

       Note: If you specify 0 the system will perform writes directly to the disk.
             We recommend this to be 0 if you are using journalType=MAPPED and ournal-datasync=false.
      --><journal-buffer-timeout>180000</journal-buffer-timeout><!--
        When using ASYNCIO, this will determine the writing queue depth for libaio.
       --><journal-max-io>4096</journal-max-io><!--
        You can verify the network health of a particular NIC by specifying the <network-check-NIC> element.
         <network-check-NIC>theNicName</network-check-NIC>
        --><!--
        Use this to use an HTTP server to validate the network
         <network-check-URL-list>http://www.apache.org</network-check-URL-list> --><!-- <network-check-period>10000</network-check-period> --><!-- <network-check-timeout>1000</network-check-timeout> --><!-- this is a comma separated list, no spaces, just DNS or IPs
           it should accept IPV6

           Warning: Make sure you understand your network topology as this is meant to validate if your network is valid.
                    Using IPs that could eventually disappear or be partially visible may defeat the purpose.
                    You can use a list of multiple IPs, and if any successful ping will make the server OK to continue running --><!-- <network-check-list>10.0.0.1</network-check-list> --><!-- use this to customize the ping used for ipv4 addresses --><!-- <network-check-ping-command>ping -c 1 -t %d %s</network-check-ping-command> --><!-- use this to customize the ping used for ipv6 addresses --><!-- <network-check-ping6-command>ping6 -c 1 %2$s</network-check-ping6-command> --><!-- how often we are looking for how many bytes are being used on the disk in ms --><disk-scan-period>5000</disk-scan-period><!-- once the disk hits this limit the system will block, or close the connection in certain protocols
           that won't support flow control. --><max-disk-usage>90</max-disk-usage><!-- should the broker detect dead locks and other issues --><critical-analyzer>true</critical-analyzer><critical-analyzer-timeout>120000</critical-analyzer-timeout><critical-analyzer-check-period>60000</critical-analyzer-check-period><critical-analyzer-policy>HALT</critical-analyzer-policy><!-- the system will enter into page mode once you hit this limit.
           This is an estimate in bytes of how much the messages are using in memory

            The system will use half of the available memory (-Xmx) by default for the global-max-size.
            You may specify a different value here if you need to customize it to your needs.

            <global-max-size>100Mb</global-max-size>

      --><acceptors>

         <!-- useEpoll means: it will use Netty epoll if you are on a system (Linux) that supports it -->
         <!-- amqpCredits: The number of credits sent to AMQP producers -->
         <!-- amqpLowCredits: The server will send the # credits specified at amqpCredits at this low mark -->

         <!-- Acceptor for every supported protocol -->
         <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300</acceptor>

         <!-- AMQP Acceptor.  Listens on default AMQP port for AMQP traffic.-->
         <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpMinCredits=300</acceptor>

         <!-- STOMP Acceptor. -->
         <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>

         <!-- HornetQ Compatibility Acceptor.  Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
         <acceptor name="hornetq">tcp://0.0.0.0:5445?protocols=HORNETQ,STOMP;useEpoll=true</acceptor>

         <!-- MQTT Acceptor -->
         <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>

      </acceptors><security-settings>
         <security-setting match="#">
            <permission type="createNonDurableQueue" roles="amq"/>
            <permission type="deleteNonDurableQueue" roles="amq"/>
            <permission type="createDurableQueue" roles="amq"/>
            <permission type="deleteDurableQueue" roles="amq"/>
            <permission type="createAddress" roles="amq"/>
            <permission type="deleteAddress" roles="amq"/>
            <permission type="consume" roles="amq"/>
            <permission type="browse" roles="amq"/>
            <permission type="send" roles="amq"/>
            <!-- we need this otherwise ./artemis data imp wouldn't work -->
            <permission type="manage" roles="amq"/>
         </security-setting>
      </security-settings><address-settings>
         <!-- if you define auto-create on certain queues, management has to be auto-create -->
         <address-setting match="activemq.management#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
         <!--default for catch all-->
         <address-setting match="#">
            <dead-letter-address>DLQ</dead-letter-address>
            <expiry-address>ExpiryQueue</expiry-address>
            <redelivery-delay>0</redelivery-delay>
            <!-- with -1 only the global-max-size is in use for limiting -->
            <max-size-bytes>-1</max-size-bytes>
            <message-counter-history-day-limit>10</message-counter-history-day-limit>
            <address-full-policy>PAGE</address-full-policy>
            <auto-create-queues>true</auto-create-queues>
            <auto-create-addresses>true</auto-create-addresses>
            <auto-create-jms-queues>true</auto-create-jms-queues>
            <auto-create-jms-topics>true</auto-create-jms-topics>
         </address-setting>
      </address-settings><addresses>
         <address name="DLQ">
            <anycast>
               <queue name="DLQ"/>
            </anycast>
         </address>
         <address name="ExpiryQueue">
            <anycast>
               <queue name="ExpiryQueue"/>
            </anycast>
         </address>

      </addresses><security-enabled>false</security-enabled>
   </core>
</configuration>

I feel like this was likely introduced with #50
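For contrast, the intended result of the override shown above would be a single <core> element with the snippet's value folded in, rather than a second <core> block appended. Roughly (abridged):

```xml
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
      <!-- ... default broker settings ... -->
      <security-enabled>false</security-enabled>
   </core>
</configuration>
```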

Alpine Linux support...

Hi,

I really like your image, but in an effort to make it smaller, I started playing around with the idea of using Alpine Linux and Oracle Java instead of the java:8 base image, which tends to be larger and uses OpenJDK instead.

It would be nice to support Artemis in such setup, I tried to implement it but I'm getting some problems with the volume setup, would you like to help me out to get this to work?

It would be a nice alternative to your image.

Thanks

Remove LGPL XSLT merge

The XML merger, written in XSLT, has an LGPL license. It's unclear to me whether, given the non-linkable nature of XSLT, this is a problem; but just in case, replace it with some other implementation.

Build automated tests

Build automated tests for all the Docker-image-specific features and integrate them into the build process.

Support of configuration snippet override

Problem

Currently, the way a user can change the broker configuration is by dropping a file called broker.xml into the etc-override folder in the broker directory. This configuration mechanism is sufficient when we only need to add a single configuration layer on top of the Artemis Docker image.

We want to provide a way where users can extend/override the broker's configuration with multiple files that can be added in different layers.

The same issue arises with custom transformations, where a custom transformation has to be applied over the latest broker.xml in the parent layer.

Example

Consider the following example:

Team alpha creates an Artemis image with cluster configuration; to do so, it creates a new layer on top of the Artemis base image with a broker.xml file that adds the cluster configuration.

At the same time, team beta wants to deploy Artemis with a cluster configuration and default queues defined.

It would be very useful for team beta to use the alpha image as the base image for their implementation.

Proposed solution.

Instead of using a single broker.xml configuration file to extend/override the base broker.xml configuration, a set of configuration snippets could be used. Each snippet will extend/override the previous configuration. So, for the previous example, the alpha team would create a snippet that adds the clustering configuration, and team beta would only add the default-queues configuration to the same folder. The beta team would also be able to apply a custom transformation over the broker.xml of the alpha team. The transformation will be performed right before the merge.

Details.

Inside the etc-override folder, allow users to drop XML files, each holding the piece of broker configuration that extends/overrides the previous layer.

The naming format of those files will be:

  • broker-{{desc}}.xml where desc is a descriptive index of the configuration to be merged.
  • broker-{{desc}}.xslt where desc matches the corresponding XML file, identifying the transformation to apply.

Both files are optional; one does not require the other.

The configuration will be applied following the index order.
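The index ordering can be sketched with a plain shell glob, which expands in lexicographic order (the paths are hypothetical and the echo stands in for the actual merge step):

```shell
# broker-*.xml expands lexicographically, so broker-00.xml is
# processed before broker-01.xml.
mkdir -p /tmp/etc-override
touch /tmp/etc-override/broker-01.xml /tmp/etc-override/broker-00.xml
for f in /tmp/etc-override/broker-*.xml; do
  echo "merging $f"   # the real entrypoint would apply the XML/XSLT merge here
done
```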

Example solution.

Team alpha would add the following file:

broker-00.xml:

 <configuration xmlns="urn:activemq"
                   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                   xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">

      <core xmlns="urn:activemq:core">

        <jmx-management-enabled>true</jmx-management-enabled>
        <persistence-enabled>true</persistence-enabled>
        <cluster-user>exampleUser</cluster-user>
        <cluster-password>secret</cluster-password><connectors>
            <connector name="open-numbat-activemq-artemis-0">tcp://open-numbat-activemq-artemis-0.open-numbat-activemq-artemis.default.svc.cluster.local:61616</connector>
            <connector name="open-numbat-activemq-artemis-1">tcp://open-numbat-activemq-artemis-1.open-numbat-activemq-artemis.default.svc.cluster.local:61616</connector>
          
        </connectors>
        <cluster-connections>
          <cluster-connection name="replication-cluster">
            <address>jms</address>
            <connector-ref>open-numbat-activemq-artemis-0</connector-ref>
            <retry-interval>1000</retry-interval>
            <retry-interval-multiplier>1.1</retry-interval-multiplier>
            <max-retry-interval>5000</max-retry-interval>
            <initial-connect-attempts>-1</initial-connect-attempts>
            <reconnect-attempts>-1</reconnect-attempts>
            <message-load-balancing>OFF</message-load-balancing>
            <max-hops>1</max-hops>

            <static-connectors allow-direct-connections-only="true">    
                <connector-ref>open-numbat-activemq-artemis-0</connector-ref>
                <connector-ref>open-numbat-activemq-artemis-1</connector-ref>
            </static-connectors>
         </cluster-connection>
       </cluster-connections>

       <ha-policy>
         <replication>
           <master>
             <check-for-live-server>false</check-for-live-server>
           </master>
         </replication>
       </ha-policy>
      </core>
    </configuration>

And team beta would add the following:

broker-01.xml:

<queues>
   <queue name="jms.queue.selectorQueue">
      <address>jms.queue.selectorQueue</address>
      <filter string="color='red'"/>
      <durable>true</durable>
    </queue>
</queues>

ActiveMQ Artemis 1.5.5 does not properly merge XML

And now for something completely different: for Artemis 1.5.5, when given an extra config block (broker-00.xml) with the following contents:

<?xml version='1.0' encoding="UTF-8" standalone="no"?>
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <jms xmlns="urn:activemq:jms">
      <queue name="anQueueName"/>
   </jms>
</configuration>

Then anQueueName is not deployed; instead, the following file is generated:

<configuration xmlns="urn:activemq" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
    <jms xmlns="urn:activemq:jms">
        <queue name="DLQ"></queue>
        <queue name="ExpiryQueue"></queue>
    </jms>
    <core xmlns="urn:activemq:core">
        <name>0.0.0.0</name>
        <persistence-enabled>true</persistence-enabled>
        <journal-type>ASYNCIO</journal-type>
        <paging-directory>./data/paging</paging-directory>
        <bindings-directory>./data/bindings</bindings-directory>
        <journal-directory>./data/journal</journal-directory>
        <large-messages-directory>./data/large-messages</large-messages-directory>
        <journal-datasync>true</journal-datasync>
        <journal-min-files>2</journal-min-files>
        <journal-pool-files>-1</journal-pool-files>
        <journal-buffer-timeout>640000</journal-buffer-timeout>
        <disk-scan-period>5000</disk-scan-period>
        <max-disk-usage>90</max-disk-usage>
        <global-max-size>104857600</global-max-size>
        <acceptors>
            <acceptor name="artemis">
                tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE
            </acceptor>
            <acceptor name="amqp">tcp://0.0.0.0:5672?protocols=AMQP</acceptor>
            <acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP</acceptor>
            <acceptor name="hornetq">tcp://0.0.0.0:5445?protocols=HORNETQ,STOMP</acceptor>
            <acceptor name="mqtt">tcp://0.0.0.0:1883?protocols=MQTT</acceptor>
        </acceptors>
        <security-settings>
            <security-setting match="#">
                <permission type="createNonDurableQueue" roles="amq"></permission>
                <permission type="deleteNonDurableQueue" roles="amq"></permission>
                <permission type="createDurableQueue" roles="amq"></permission>
                <permission type="deleteDurableQueue" roles="amq"></permission>
                <permission type="consume" roles="amq"></permission>
                <permission type="browse" roles="amq"></permission>
                <permission type="send" roles="amq"></permission>
                <permission type="manage" roles="amq"></permission>
            </security-setting>
        </security-settings>
        <address-settings>
            <address-setting match="#">
                <dead-letter-address>jms.queue.DLQ</dead-letter-address>
                <expiry-address>jms.queue.ExpiryQueue</expiry-address>
                <redelivery-delay>0</redelivery-delay>
                <max-size-bytes>-1</max-size-bytes>
                <message-counter-history-day-limit>10</message-counter-history-day-limit>
                <address-full-policy>PAGE</address-full-policy>
            </address-setting>
        </address-settings>
    </core>
    <jms xmlns="urn:activemq:jms">
        <queue name="anQueueName"></queue>
    </jms>
</configuration>

Therefore the queue anQueueName is not deployed.
The merge functionality works fine for 2.6.0.

I got an error when building the image

Hello,

I get this error when building the image from the Dockerfile on Windows:

The command '/bin/sh -c set -x && apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc" && export GNUPGHOME="$(mktemp -d)" && (gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 || gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4) && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc && chmod +x /usr/local/bin/gosu && gosu nobody true' returned a non-zero code: 2

On Unix the error is:

The command '/bin/sh -c set -x && apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/* && dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')" && wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch" && wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc" && export GNUPGHOME="$(mktemp -d)" && (gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 || gpg --keyserver keyserver.ubuntu.com --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4) && gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu && rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc && chmod +x /usr/local/bin/gosu && gosu nobody true' returned a non-zero code: 4

Use artemis without user and password

So far I see that a user and password are required to connect to ActiveMQ.

We run ActiveMQ inside our VPC, subnet, and security group, precisely so the application can connect to the queue without credentials.

Is there any way to bypass the authentication?
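One approach, consistent with the etc-override mechanism discussed in the other issues here, is to drop a snippet that disables broker security entirely; whether this is advisable depends on your network isolation. A sketch:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- etc-override/broker-00.xml: disables authentication/authorization
     entirely; only safe behind a locked-down VPC/security group. -->
<configuration xmlns="urn:activemq"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="urn:activemq /schema/artemis-configuration.xsd">
   <core xmlns="urn:activemq:core" xsi:schemaLocation="urn:activemq:core ">
      <security-enabled>false</security-enabled>
   </core>
</configuration>
```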

Messages not persisted on docker container stop/start/restart or docker host restart

If the Docker host (in this instance, a boot2docker Linux VM) and the container running on it are restarted, the messages sitting in a queue are not persisted.

Steps:

  1. docker run --name artemis --restart=always --mount source=artemis-db-volume,target=/var/lib/artemis/data -d -p 8161:8161 -p 5672:5672 -e 'ARTEMIS_MIN_MEMORY=256M' -e 'ARTEMIS_MAX_MEMORY=512M' vromero/activemq-artemis

  2. A .NET tester app pushes a number of messages to the queue. No consumers are configured, and the purge-on-no-consumers flag is NOT set.

  3. The VM is restarted, or docker container stop/start commands are run.

  4. The queue is still there, but the message count is 0.

Can't access web console when installing Docker image on Kubernetes as a service

Hello,

I'm trying to install Artemis in Kubernetes as a service.

Here are my Pod and Service files.

jms-service:1.0 is the same image as vromero/activemq-artemis.

POD:


apiVersion: v1
kind: Pod
metadata:
  name: jms-service
  labels:
    app: jms-service
spec:
  containers:
    - name: jms-service
      image: kube-registry:5000/tk/jms-service:1.0
      ports:
        - containerPort: 8161
        - containerPort: 61616
        - containerPort: 5445
        - containerPort: 5672
        - containerPort: 1883
        - containerPort: 61613
      env:
        - name: ARTEMIS_USERNAME
          value: admin
        - name: ARTEMIS_PASSWORD
          value: admin

SERVICE


apiVersion: v1
kind: Service
metadata:
  name:  jms-service
spec:
  ports:
    - port: 8161
      nodePort: 30001
      name: webserver
    - port: 61616
      nodePort: 30002
      name: core
    - port: 5445
      nodePort: 30003
      name: hornetq
    - port: 5672
      nodePort: 30004
      name: amqp
    - port: 1883
      nodePort: 30005
      name: mqtt
    - port: 61613
      nodePort: 30006
      name: stomp
  selector:
    app: jms-service
  type: NodePort
   

The Service starts OK with no errors, but after logging in to the console at http://host:30001/console I get this error every few seconds in the browser's developer console:


[ARTEMIS] plugin running [object Object]
[ARTEMIS] *************creating Artemis Console************
[activemq] ActiveMQ theme loaded
[Core] ActiveMQ Management Console started
[Core] Operation unknown failed due to: java.lang.Exception : Origin http://srvaxivln090:30001 is not allowed to call this agent
[Core] Operation unknown failed due to: java.lang.Exception : Origin http://srvaxivln090:30001 is not allowed to call this agent
[Window] Uncaught TypeError: Cannot read property 'apply' of undefined (http://srvaxivln090:30001/console/app/app.js?0d5300a336117972:16:14366)
[Window] Uncaught TypeError: Cannot read property 'apply' of undefined (http://srvaxivln090:30001/console/app/app.js?0d5300a336117972:16:14366)
[Window] Uncaught TypeError: Cannot read property 'apply' of undefined (http://srvaxivln090:30001/console/app/app.js?0d5300a336117972:16:14366)

What am I doing wrong? Is it necessary to bind any other ports?

Thank you
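The "Origin http://srvaxivln090:30001 is not allowed to call this agent" message comes from the CORS/origin check of Jolokia, the HTTP agent behind the embedded web console, not from Kubernetes: through the NodePort the console is reached on an origin the broker does not expect. One common fix, sketched here assuming the stock file layout of this image, is to allow that origin in etc/jolokia-access.xml inside the broker instance and restart the broker:

```xml
<!-- etc/jolokia-access.xml -->
<restrict>
  <cors>
    <!-- Allow the externally visible origin used via the NodePort.
         A wildcard such as *://*:* also works, but is much looser. -->
    <allow-origin>http://srvaxivln090:30001</allow-origin>
    <strict-checking/>
  </cors>
</restrict>
```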
