
rabbitmq / rabbitmq-server


Open source RabbitMQ: core server and tier 1 (built-in) plugins

Home Page: https://www.rabbitmq.com/

License: Other

Languages: Makefile 25.31%, Shell 13.51%, Dockerfile 7.95%, Starlark 49.04%, Batchfile 1.97%, Erlang 1.35%, Smarty 0.87%
Topics: rabbitmq, amqp, messaging, amqp-0-9-1, amqp1-0, mqtt, stomp, streaming, streams, message-broker


rabbitmq-server's Issues

rabbitmq-server.ocf stop action returns before the rabbitmq processes have stopped

When pacemaker attempts to restart rabbitmq and rabbitmq is already running, the start request frequently fails. I found that the previous beam.smp process was still running when the resource agent attempted to start rabbitmq-server.

I saw that the init.d script does a "rabbitmqctl stop" with the pid file as an argument. Adding the pidfile parameter to the resource agent appears to fix this problem.
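
For reference, a minimal sketch of the fix: when rabbitmqctl stop is given a pid file, it waits for the process identified there to terminate before returning, which is exactly what the stop action needs here (the path below is an example):

rabbitmqctl stop /var/run/rabbitmq/pid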

Dead-lettering carries over the TTL into the new queue

If you set an expiration on a message and the queue has x-dead-letter-exchange or x-dead-letter-routing-key set, the TTL still persists when the message is pushed to the new queue.

This means that if you set up a queue to watch for dead-lettered items and they are not processed quickly enough, these messages will vanish forever.

Perhaps there should be a parameter to set a new TTL after dead-lettering, or the TTL should be removed altogether when this occurs?
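
One workaround, assuming a queue-level TTL is acceptable: a message-ttl applied to the source queue by policy is a property of the queue rather than of the message, so dead-lettered messages arrive in the target queue without an expiration of their own. A sketch (names and values are illustrative):

rabbitmqctl set_policy DelayedRetry "^retry\." '{"message-ttl":60000,"dead-letter-exchange":"dlx"}' --apply-to queues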

passive queueDeclare resulting in 404 leaves the channel open

I have tested with two different clients, so this seems to be an issue with RabbitMQ itself.

Here is a trace, using the node-amqp library, where I also explicitly close the channel in the error handler, and get a channelCloseOK, but the channel stays open and is visible using rabbitmqctl list_channels. https://gist.github.com/e23bbeb8e5ae069f3e49

This is a pretty serious issue for us, and is leading to huge memory leaks in production: on the order of tens of thousands of open channels a day.
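
For anyone trying to confirm the leak, the channel count is easy to watch from the CLI while a client loops on failing passive declares (the interval is arbitrary):

watch -n 5 'rabbitmqctl list_channels | wc -l'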

No exec or trap in rabbitmq-script-wrapper?

When starting RabbitMQ, the flow looks like this:

  1. /usr/sbin/rabbitmq-server (https://github.com/rabbitmq/rabbitmq-server/blob/master/packaging/common/rabbitmq-script-wrapper)
  2. /usr/lib/rabbitmq/bin/rabbitmq-server
  3. Erlang

Step 2 calls step 3 with exec. However, step 1 does not use exec when calling step 2. It also doesn't trap SIGTERM. This causes an issue downstream in dockerfile/rabbitmq, which adds a step 0 script at /usr/local/bin/rabbitmq-start:

#!/bin/bash

ulimit -n 1024
chown -R rabbitmq:rabbitmq /data
exec rabbitmq-server "$@"

Is there any reason why exec is missing at step 1 (/usr/sbin/rabbitmq-server)? It breaks the chain, causing issues with SIGTERM handling when RabbitMQ is run in a container.
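
For illustration, the suggested fix is a one-word change in the wrapper: invoke step 2 with exec so no intermediate shell sits between the service manager and the Erlang VM (a sketch, not the actual patch):

exec /usr/lib/rabbitmq/bin/rabbitmq-server "$@"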

Configuration file not found when CONFIG_FILE is set in rabbitmq-env.conf

I wrote the configuration file location into /etc/rabbitmq/rabbitmq-env.conf as
CONFIG_FILE=/etc/rabbitmq/rabbitmq.config (or RABBITMQ_CONFIG_FILE=/etc/rabbitmq/rabbitmq.config)
and restarted rabbitmq-server, but rabbitmq.config is not picked up.
The node reports: Config file /etc/rabbitmq/rabbitmq.config (not found) on the web monitoring page of the rabbitmq_management plugin.

When I remove the line from /etc/rabbitmq/rabbitmq-env.conf, it works. I don't know why; maybe it's a bug.

This is rabbitmq-server-3.4.4-1.noarch.rpm and the OS is CentOS 6.4 x64.
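
For what it's worth, on these releases the value of CONFIG_FILE must be given without the .config extension, because the server appends it itself; that alone reproduces the "(not found)" symptom. A minimal /etc/rabbitmq/rabbitmq-env.conf (note that the RABBITMQ_ prefix is dropped in this file):

# /etc/rabbitmq/rabbitmq-env.conf
CONFIG_FILE=/etc/rabbitmq/rabbitmq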

RabbitMQ server is sending Connection.Unknown method to the client

RabbitMQ server 3.2.0 (but reproducible with 3.2.2 as well). I will try to update the ticket if I get a chance to try 3.2.3, but I'm not yet sure. The changelog for 3.2.3 does not mention anything related to this issue.

Steps:
1. Have MQ hit the memory or disk alarm (for example, publish enough persistent messages to a queue that free disk space falls below 50 MB with default MQ settings). The expectation here is that RabbitMQ blocks further publishing and sends a blocking notification to clients.
2. Try to publish additional messages to MQ

Expected:
Rabbit should notify clients with the proper connection method; the id should be 50 (according to the docs, 60 is renumbered to 50)

Actual:
Rabbit is still sending id 60, as you can see from the attached wireshark traces:

1 client sending header
2 server sending connection start
3 server sending connection unknown 60

Note: please contact me by e-mail if you need more info on the issue.

Missing support for unsigned 32-bit integer field value type

When sending a message with a header whose value kind is 'i' (meaning long-uint), as specified in AMQP 0-9-1 section 4.2.1, rabbitmq-server responds with Connection.Close carrying an obscure "INTERNAL_ERROR". I haven't had time to build the server and the management tool from the repositories right now, but the issue seems to be in rabbit_binary_parser.erl, lines 41 and onward, where the case for 'i' is simply missing.

RabbitMQ service crashes with cannot allocate memory error

We've had this same error take down our rabbit server and cause the service to crash on multiple versions of RabbitMQ:
3.1.5
3.3.3
3.3.4

Error is:

Slogan: std_alloc: Cannot allocate 1125316227743680 bytes of memory (of type "arg_reg").

We're running the following dedicated bare-metal setup for our rabbit boxes:

"Debian GNU/Linux 7 (wheezy)"
Linux rabbithostname 3.2.0-4-amd64 #1 SMP Debian 3.2.60-1+deb7u1 x86_64 GNU/Linux

64 GB of RAM

free -l
             total       used       free     shared    buffers     cached
Mem:      66080784   11431808   54648976          0     178896    4216156
Low:      66080784   11431808   54648976
High:            0          0          0
-/+ buffers/cache:    7036756   59044028
Swap:       999420          0     999420

24 cpu cores

processor   : 23
vendor_id   : GenuineIntel
cpu family  : 6
model       : 45
model name  : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
stepping    : 7
microcode   : 0x710
cpu MHz     : 2000.254
cache size  : 15360 KB
physical id : 1
siblings    : 12
core id     : 5
cpu cores   : 6
apicid      : 43
initial apicid  : 43
fpu     : yes
fpu_exception   : yes
cpuid level : 13
wp      : yes
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips    : 3999.95
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

RabbitMQ service status:

Status of node 'rabbit@rabbithost' ...
[{pid,20989},
 {running_applications,
     [{rabbitmq_management,"RabbitMQ Management Console","3.1.5"},
      {rabbitmq_management_agent,"RabbitMQ Management Agent","3.1.5"},
      {rabbit,"RabbitMQ","3.1.5"},
      {os_mon,"CPO  CXC 138 46","2.2.7"},
      {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.1.5"},
      {webmachine,"webmachine","1.10.3-rmq3.1.5-gite9359c7"},
      {mochiweb,"MochiMedia Web Server","2.7.0-rmq3.1.5-git680dba8"},
      {xmerl,"XML parser","1.2.10"},
      {inets,"INETS  CXC 138 49","5.7.1"},
      {mnesia,"MNESIA  CXC 138 12","4.5"},
      {amqp_client,"RabbitMQ AMQP Client","3.1.5"},
      {sasl,"SASL  CXC 138 11","2.1.10"},
      {stdlib,"ERTS  CXC 138 10","1.17.5"},
      {kernel,"ERTS  CXC 138 10","2.14.5"}]},
 {os,{unix,linux}},
 {erlang_version,
     "Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:24:24] [rq:24] [async-threads:30] [hipe] [kernel-poll:true]\n"},
 {memory,
     [{total,1611930112},
      {connection_procs,35524600},
      {queue_procs,851008160},
      {plugins,746880},
      {other_proc,89921648},
      {mnesia,696352},
      {mgmt_db,7613160},
      {msg_index,119802976},
      {other_ets,30572504},
      {binary,413736680},
      {code,19552913},
      {atom,1903865},
      {other_system,40850374}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,27066689126},
 {disk_free_limit,1000000000},
 {disk_free,367982977024},
 {file_descriptors,
     [{total_limit,65436},
      {total_used,329},
      {sockets_limit,58890},
      {sockets_used,230}]},
 {processes,[{limit,1048576},{used,5058}]},
 {run_queue,0},
 {uptime,1768}]
...done.

RabbitMQ environment output:

rabbitmqctl environment
Application environment of node 'rabbit@rabbithost ...
[{auth_backends,[rabbit_auth_backend_internal]},
 {auth_mechanisms,['PLAIN','AMQPLAIN']},
 {backing_queue_module,rabbit_variable_queue},
 {cluster_nodes,{[],disc}},
 {cluster_partition_handling,ignore},
 {collect_statistics,fine},
 {collect_statistics_interval,5000},
 {default_permissions,[<<".*">>,<<".*">>,<<".*">>]},
 {default_user,<<"guest">>},
 {default_user_tags,[administrator]},
 {default_vhost,<<"/">>},
 {delegate_count,16},
 {disk_free_limit,1000000000},
 {enabled_plugins_file,"/etc/rabbitmq/enabled_plugins"},
 {error_logger,{file,"/var/log/rabbitmq/[email protected]"}},
 {frame_max,131072},
 {heartbeat,600},
 {hipe_compile,true},
 {hipe_modules,[rabbit_reader,rabbit_channel,gen_server2,rabbit_exchange,
                rabbit_command_assembler,rabbit_framing_amqp_0_9_1,
                rabbit_basic,rabbit_event,lists,queue,priority_queue,
                rabbit_router,rabbit_trace,rabbit_misc,rabbit_binary_parser,
                rabbit_exchange_type_direct,rabbit_guid,rabbit_net,
                rabbit_amqqueue_process,rabbit_variable_queue,
                rabbit_binary_generator,rabbit_writer,delegate,gb_sets,lqueue,
                sets,orddict,rabbit_amqqueue,rabbit_limiter,gb_trees,
                rabbit_queue_index,rabbit_exchange_decorator,gen,dict,ordsets,
                file_handle_cache,rabbit_msg_store,array,
                rabbit_msg_store_ets_index,rabbit_msg_file,
                rabbit_exchange_type_fanout,rabbit_exchange_type_topic,mnesia,
                mnesia_lib,rpc,mnesia_tm,qlc,sofs,proplists,credit_flow,pmon,
                ssl_connection,tls_connection,ssl_record,tls_record,gen_fsm,
                ssl]},
 {included_applications,[]},
 {log_levels,[{connection,info}]},
 {msg_store_file_size_limit,16777216},
 {msg_store_index_module,rabbit_msg_store_ets_index},
 {plugins_dir,"/usr/lib/rabbitmq/lib/rabbitmq_server-3.1.5/sbin/../plugins"},
 {plugins_expand_dir,"/var/lib/rabbitmq/mnesia/rabbit@rabbithost-plugins-expand"},
 {queue_index_max_journal_entries,65536},
 {reverse_dns_lookups,false},
 {sasl_error_logger,{file,"/var/log/rabbitmq/[email protected]"}},
 {server_properties,[]},
 {ssl_apps,[asn1,crypto,public_key,ssl]},
 {ssl_cert_login_from,distinguished_name},
 {ssl_listeners,[]},
 {ssl_options,[]},
 {tcp_listen_options,[binary,
                      {packet,raw},
                      {reuseaddr,true},
                      {backlog,128},
                      {nodelay,true},
                      {linger,{true,0}},
                      {exit_on_close,false}]},
 {tcp_listeners,[5672]},
 {trace_vhosts,[]},
 {vm_memory_high_watermark,0.4}]
...done.

We're finding it very hard to replicate the crash; we've also done multiple rounds of extensive memory testing on these servers, and we're using ECC RAM.

If there is any other info that might help troubleshoot this showstopper, we're keen to assist in any way.

Any help is greatly appreciated. I have a 1.5 GB erl_crash.dump file if anyone is keen to review it.

Cheers.

No way to set queue name or attributes (persistent, etc.) in WCF bindings.

I would like to receive RabbitMQ messages from a system which uses WCF. However, there is no way to specify the queue name when using the RabbitMQ bindings. This makes the RabbitMQ WCF bindings much less useful, in my mind. Any plans on adding this feature to said RabbitMQ WCF binding? Thanks.

By the way, others have expressed such a need:

http://rabbitmq.1065348.n5.nabble.com/WCF-and-queue-creation-td16243.html

http://rabbitmq.1065348.n5.nabble.com/WCF-client-persistent-gueue-setting-routing-key-td1754.html

Mirrored Queues Down Behaviour

I've noticed that when rabbitmq nodes become unreachable, the gm module can behave very strangely.

It looks like the server is trying to fetch its own record in the view, but this record no longer exists, so it crashes. Its view, according to the state at the start of the function, is:

'{{0,<2908.114.8>}, {0,<0.19014.9>}, {1,<23021.2099.0>}' and its process is '{0,<0.19014.9>}'; however, it ends up trying to look itself up in a view that only contains these members: '{1,<23021.2099.0>}' (https://github.com/rabbitmq/rabbitmq-server/blob/rabbitmq_v3_2_3/src/gm.erl#L1245). I notice that the state of the view is also stored in mnesia. I think this state is shared across the cluster, but I'm not sure.

If this is true, then it might be possible for node B to consider the gm process on node A down and then push this state to node A via mnesia. Then, when node A gets the down message for node B and tries to update its state, it will try to fetch itself from the view and will crash.

** Reason for termination ==
** {function_clause,
       [{orddict,fetch,
            [{0,<0.19014.9>},
             [{{1,<23021.2099.0>},
               {view_member,
                   {1,<23021.2099.0>},
                   [{0,<2908.114.8>}],
                   {1,<23021.2099.0>},
                   {1,<23021.2099.0>}}}]],
            [{file,"orddict.erl"},{line,72}]},
        {gm,check_neighbours,1,[]},
        {gm,handle_info,2,[]},
        {gen_server2,handle_msg,2,[]},
        {proc_lib,wake_up,3,[{file,"proc_lib.erl"},{line,249}]}]}

=ERROR REPORT==== 22-May-2014::15:12:18 ===
** Generic server <0.19014.9> terminating
** Last message in was {'DOWN',#Ref<0.0.195.189462>,process,<2908.114.8>,
                               noconnection}
** When Server state == {state,
                            {0,<0.19014.9>},
                            {{0,<2908.114.8>},#Ref<0.0.195.189462>},
                            {{1,<23021.2099.0>},#Ref<0.0.1718.177207>},
                            {resource,<<"/">>,queue,
                                <<"retry_queue_600_mo_service_12">>},
                            rabbit_mirror_queue_coordinator,
                            {2,
                             [{{0,<2908.114.8>},
                               {view_member,
                                   {0,<2908.114.8>},
                                   [],
                                   {1,<23021.2099.0>},
                                   {0,<0.19014.9>}}},
                              {{0,<0.19014.9>},
                               {view_member,
                                   {0,<0.19014.9>},
                                   [],
                                   {0,<2908.114.8>},
                                   {1,<23021.2099.0>}}},
                              {{1,<23021.2099.0>},
                               {view_member,
                                   {1,<23021.2099.0>},
                                   [],
                                   {0,<0.19014.9>},
                                   {0,<2908.114.8>}}}]},
                            2,
                            [{{0,<2908.114.8>},{member,{[],[]},0,0}},
                             {{0,<0.19014.9>},{member,{[],[]},2,2}},
                             {{1,<23021.2099.0>},{member,{[],[]},0,0}}],
                            [<0.19013.9>],
                            {[],[]},
                            [],0,undefined,
                            #Fun<rabbit_misc.execute_mnesia_transaction.1>}

Include net_ticktime into rabbitmqctl status output

I suggest that we add kernel.net_ticktime to rabbitmqctl status and rabbitmqctl report output. While not a RabbitMQ configuration setting, it can lead to false positives and false negatives w.r.t. node reachability. Knowing the effective value should make debugging and operations a bit easier.
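
Until that lands, the effective value can be read from a running node with a one-liner (this uses the standard Erlang kernel API, nothing RabbitMQ-specific):

rabbitmqctl eval 'net_kernel:get_net_ticktime().'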

erlang v.18

Hello. In Erlang 18 a lot of functions and descriptors are deprecated or removed. Will there ever be an update to rabbitmq?

For example:
src/rabbit_variable_queue.erl:341: Warning: type gb_tree/0 is deprecated and will be removed in OTP 18.0; use gb_trees:tree/0 or preferably gb_trees:tree/2

Can't publish via HTTP API with "Accept: application/json"

In RabbitMQ 3.4.2, if you publish a message via the HTTP (management) API and set the Accept header to application/json, it fails with 406. But if you set it to anything else, it works.

curl -i -XPOST -u guest:guest http://localhost:15672/api/exchanges/%2f/amq.default/publish -H "Content-Type: application/json" -H "Accept: application/json" -d'{"properties":{},"routing_key":"a","payload":"b","payload_encoding":"string"}'
HTTP/1.1 406 Not Acceptable
Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
Date: Sun, 01 Mar 2015 15:24:09 GMT
Content-Type: application/json
Content-Length: 58

{"error":"Not Acceptable","reason":"\"Not Acceptable\"\n"}
curl -i -XPOST -u guest:guest http://localhost:15672/api/exchanges/%2f/amq.default/publish -H "Content-Type: application/json" -H "Accept: */*" -d'{"properties":{},"routing_key":"a","payload":"b","payload_encoding":"string"}'

HTTP/1.1 200 OK
Server: MochiWeb/1.1 WebMachine/1.10.0 (never breaks eye contact)
Date: Sun, 01 Mar 2015 15:25:12 GMT
content-type: application/json
Content-Length: 16
Cache-Control: no-cache

{"routed":false}

Also note how the casing of the response header "Content-Type" differs between the two requests.

Random generic server crashes in the log

I'm getting crashes when publishing messages; it seems to happen every 20th message I publish or so. Maybe I'm doing something wrong, but I can't make heads or tails of the error logs.

==> /var/log/rabbitmq/[email protected] <==

=INFO REPORT==== 13-Aug-2014::14:23:04 ===
accepting AMQP connection <0.25428.0> (10.0.2.2:58391 -> 10.0.2.15:5672)

=ERROR REPORT==== 13-Aug-2014::14:23:19 ===
** Generic server <0.25434.0> terminating
** Last message in was {'$gen_cast',
                           {method,
                               {'basic.publish',0,<<"test-exchange">>,
                                   <<"routingKey">>,true,false},
                               {content,60,none,
                                   <<144,0,16,97,112,112,108,105,99,97,116,105,
                                     111,110,47,106,115,111,110,2,0>>,
                                   rabbit_framing_amqp_0_9_1,
                                   [<<"{\"idx\":15,\"prio\":0}">>]},
                               flow}}
** When Server state == {ch,running,rabbit_framing_amqp_0_9_1,1,<0.25428.0>,
                         <0.25432.0>,<0.25428.0>,
                         <<"10.0.2.2:58391 -> 10.0.2.15:5672">>,
                         {lstate,<0.25433.0>,false},
                         none,1,
                         {[],[]},
                         {user,<<"guest">>,
                          [administrator],
                          rabbit_auth_backend_internal,
                          {internal_user,<<"guest">>,
                           <<35,182,104,22,26,135,171,44,2,196,216,67,159,241,
                             160,42,233,1,84,197>>,
                           [administrator]}},
                         <<"/">>,<<>>,
                         {dict,1,16,16,8,80,48,
                          {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
                          {{[],[],[],[],[],[],[],[],[],[],
                            [[<0.25439.0>|
                              {resource,<<"/">>,queue,<<"my-queue">>}]],
                            [],[],[],[],[]}}},
                         {state,
                          {dict,1,16,16,8,80,48,
                           {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
                           {{[],[],[],[],[],[],[],[],[],[],
                             [[<0.25439.0>|#Ref<0.0.0.153346>]],
                             [],[],[],[],[]}}},
                          erlang},
                         {dict,0,16,16,8,80,48,
                          {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
                          {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},
                         {dict,0,16,16,8,80,48,
                          {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
                          {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},
                         {set,0,16,16,8,80,48,
                          {[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]},
                          {{[],[],[],[],[],[],[],[],[],[],[],[],[],[],[],[]}}},
                         <0.25429.0>,
                         {state,fine,5000,#Ref<0.0.0.153503>},
                         true,15,
                         {{0,nil},{0,nil}},
                         [],
                         {{0,nil},{0,nil}},
                         [],none,0}
** Reason for termination ==
** {{badmatch,<<0>>},
    [{rabbit_framing_amqp_0_9_1,decode_properties,2,[]},
     {rabbit_binary_parser,ensure_content_decoded,1,[]},
     {rabbit_channel,handle_method,3,[]},
     {rabbit_channel,handle_cast,2,[]},
     {gen_server2,handle_msg,2,[]},
     {proc_lib,wake_up,3,[{file,"proc_lib.erl"},{line,249}]}]}

==> /var/log/rabbitmq/[email protected] <==

=CRASH REPORT==== 13-Aug-2014::14:23:19 ===
  crasher:
    initial call: gen:init_it/6
    pid: <0.25434.0>
    registered_name: []
    exception exit: {{badmatch,<<0>>},
                     [{rabbit_framing_amqp_0_9_1,decode_properties,2,[]},
                      {rabbit_binary_parser,ensure_content_decoded,1,[]},
                      {rabbit_channel,handle_method,3,[]},
                      {rabbit_channel,handle_cast,2,[]},
                      {gen_server2,handle_msg,2,[]},
                      {proc_lib,wake_up,3,
                                [{file,"proc_lib.erl"},{line,249}]}]}
      in function  gen_server2:terminate/3
    ancestors: [<0.25431.0>,<0.25430.0>,<0.25427.0>,<0.25426.0>,
                  rabbit_tcp_client_sup,rabbit_sup,<0.139.0>]
    messages: []
    links: [<0.25431.0>]
    dictionary: [{process_name,
                      {rabbit_channel,
                          {<<"10.0.2.2:58391 -> 10.0.2.15:5672">>,1}}},
                  {{credit_to,<0.25428.0>},35},
                  {{queue_exchange_stats,
                       {{resource,<<"/">>,queue,<<"my-queue">>},
                        {resource,<<"/">>,exchange,<<"test-exchange">>}}},
                   [{publish,14}]},
                  {{xtype_to_module,topic},rabbit_exchange_type_topic},
                  {pause_minority_guard,not_minority_mode},
                  {{exchange_stats,
                       {resource,<<"/">>,exchange,<<"test-exchange">>}},
                   [{confirm,14},{publish,14}]},
                  {permission_cache,
                      [{{resource,<<"/">>,exchange,<<"test-exchange">>},
                        write}]},
                  {{credit_from,<0.25439.0>},186},
                  {guid,{{4231563882,87170283,4179743796,3585566879},13}}]
    trap_exit: true
    status: running
    heap_size: 1598
    stack_size: 27
    reductions: 10968
  neighbours:

=SUPERVISOR REPORT==== 13-Aug-2014::14:23:19 ===
     Supervisor: {<0.25431.0>,rabbit_channel_sup}
     Context:    child_terminated
     Reason:     {{badmatch,<<0>>},
                  [{rabbit_framing_amqp_0_9_1,decode_properties,2,[]},
                   {rabbit_binary_parser,ensure_content_decoded,1,[]},
                   {rabbit_channel,handle_method,3,[]},
                   {rabbit_channel,handle_cast,2,[]},
                   {gen_server2,handle_msg,2,[]},
                   {proc_lib,wake_up,3,[{file,"proc_lib.erl"},{line,249}]}]}
     Offender:   [{pid,<0.25434.0>},
                  {name,channel},
                  {mfargs,
                      {rabbit_channel,start_link,
                          [1,<0.25428.0>,<0.25432.0>,<0.25428.0>,
                           <<"10.0.2.2:58391 -> 10.0.2.15:5672">>,
                           rabbit_framing_amqp_0_9_1,
                           {user,<<"guest">>,
                               [administrator],
                               rabbit_auth_backend_internal,
                               {internal_user,<<"guest">>,
                                   <<35,182,104,22,26,135,171,44,2,196,216,67,
                                     159,241,160,42,233,1,84,197>>,
                                   [administrator]}},
                           <<"/">>,[],<0.25429.0>,<0.25433.0>]}},
                  {restart_type,intrinsic},
                  {shutdown,4294967295},
                  {child_type,worker}]


=SUPERVISOR REPORT==== 13-Aug-2014::14:23:19 ===
     Supervisor: {<0.25431.0>,rabbit_channel_sup}
     Context:    shutdown
     Reason:     reached_max_restart_intensity
     Offender:   [{pid,<0.25434.0>},
                  {name,channel},
                  {mfargs,
                      {rabbit_channel,start_link,
                          [1,<0.25428.0>,<0.25432.0>,<0.25428.0>,
                           <<"10.0.2.2:58391 -> 10.0.2.15:5672">>,
                           rabbit_framing_amqp_0_9_1,
                           {user,<<"guest">>,
                               [administrator],
                               rabbit_auth_backend_internal,
                               {internal_user,<<"guest">>,
                                   <<35,182,104,22,26,135,171,44,2,196,216,67,
                                     159,241,160,42,233,1,84,197>>,
                                   [administrator]}},
                           <<"/">>,[],<0.25429.0>,<0.25433.0>]}},
                  {restart_type,intrinsic},
                  {shutdown,4294967295},
                  {child_type,worker}]


==> /var/log/rabbitmq/[email protected] <==

=ERROR REPORT==== 13-Aug-2014::14:23:19 ===
AMQP connection <0.25428.0> (running), channel 1 - error:
{{badmatch,<<0>>},
 [{rabbit_framing_amqp_0_9_1,decode_properties,2,[]},
  {rabbit_binary_parser,ensure_content_decoded,1,[]},
  {rabbit_channel,handle_method,3,[]},
  {rabbit_channel,handle_cast,2,[]},
  {gen_server2,handle_msg,2,[]},
  {proc_lib,wake_up,3,[{file,"proc_lib.erl"},{line,249}]}]}

=WARNING REPORT==== 13-Aug-2014::14:23:19 ===
Non-AMQP exit reason '{{badmatch,<<0>>},
                       [{rabbit_framing_amqp_0_9_1,decode_properties,2,[]},
                        {rabbit_binary_parser,ensure_content_decoded,1,[]},
                        {rabbit_channel,handle_method,3,[]},
                        {rabbit_channel,handle_cast,2,[]},
                        {gen_server2,handle_msg,2,[]},
                        {proc_lib,wake_up,3,
                                  [{file,"proc_lib.erl"},{line,249}]}]}'

=INFO REPORT==== 13-Aug-2014::14:23:19 ===
closing AMQP connection <0.25428.0> (10.0.2.2:58391 -> 10.0.2.15:5672)

=INFO REPORT==== 13-Aug-2014::14:23:20 ===
accepting AMQP connection <0.25447.0> (10.0.2.2:58401 -> 10.0.2.15:5672)

Running CentOS 6 in VirtualBox

$ erl
Erlang/OTP 17 [erts-6.1] [source-d2a4c20] [smp:2:2] [async-threads:10] [kernel-poll:false]

Does rabbitmq support priority queues?

Does rabbitmq support priority queues?

e.g.
data = {key: 1 (int), body: {}};
I want messages to enter the queue ordered by key, from smallest to largest.

Does rabbitmq support this?
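
For what it's worth: as of RabbitMQ 3.5.0 priority queues are supported natively. The queue is declared with an x-max-priority argument and messages carry a priority property; higher priorities are delivered first. Ordering by an arbitrary key, as sketched above, is not supported. A sketch using rabbitmqadmin (names are illustrative):

rabbitmqadmin declare queue name=jobs arguments='{"x-max-priority": 10}'
rabbitmqadmin publish exchange=amq.default routing_key=jobs payload='hello' properties='{"priority": 5}'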

Messages Logging

Are there any plugins or tools that can log all the messages that come through and are processed by rabbitMQ? This log would be used for statistics.
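
One built-in option worth knowing about is the firehose tracer: when enabled, the broker republishes a copy of every message published to, and delivered by, the node to the amq.rabbitmq.trace topic exchange, where a consumer (or the rabbitmq_tracing management plugin) can record it. It is enabled per vhost:

rabbitmqctl trace_on -p /
rabbitmq-plugins enable rabbitmq_tracing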

Provide way to tune/adjust/manipulate flow control triggers

We have seen flow control kick in when the server itself is under no real stress: it has far more resources available, yet rabbitmq throttles the publish rate.

The workload pattern we're servicing generates a large backlog (tens of millions of messages) in a somewhat prolonged burst, to then be processed slowly over time; the messages must first be queued quickly and reliably.

We want the ability to do this at a decent flow rate in order to queue the workload, then consume it over time as it is processed. Our servers can easily take a larger backlog at full speed, but rabbit will throttle the incoming publish rate down with flow control.

It would be good to have better ways to control this behaviour: either delay/extend the time or queue sizes before flow control kicks in, have it throttle slowly/incrementally, or, in an extreme case, disable flow control altogether and rely only on the high watermark to prevent server death and protect stability.

Our testing shows flow control triggering when there is no serious load on memory, CPU, or disk IO at the time.

What are some thoughts around this?

Login to management ui not working via AD after update to 3.5.0

Hello!

After upgrading my installation from 3.4.3 to 3.5.0, I'm no longer able to log in to the management interface with my AD user (the matching account in the internal database has the administrator tag):

rabbitmq.log (anonymized):

=INFO REPORT==== 16-Mar-2015::11:04:19 ===
Starting RabbitMQ 3.5.0 on Erlang 17.4
Copyright (C) 2007-2014 GoPivotal, Inc.
Licensed under the MPL.  See http://www.rabbitmq.com/

=INFO REPORT==== 16-Mar-2015::11:04:19 ===
node           : rabbit@<servername>
home dir       : C:\Windows
config file(s) : d:/Daten/RabbitMQ/rabbitmq.config
log            : D:/Daten/RabbitMQ/log/rabbit.log
sasl log       : D:/Daten/RabbitMQ/log/rabbit-sasl.log
database dir   : d:/Daten/RabbitMQ/db/rabbit-mnesia
...
=INFO REPORT==== 16-Mar-2015::11:04:22 ===
Server startup complete; 8 plugins started.
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * webmachine
 * mochiweb
 * rabbitmq_shovel
 * rabbitmq_auth_backend_ldap
 * rabbitmq_management_agent
 * amqp_client

=INFO REPORT==== 16-Mar-2015::11:39:02 ===
LDAP DECISION: login for myuserdomainname: ok

=WARNING REPORT==== 16-Mar-2015::11:04:24 ===
HTTP access denied: user 'myuserdomainname' - Not management user

=ERROR REPORT==== 16-Mar-2015::11:04:24 ===
webmachine error: path="/api/users"
"Unauthorized"

If I start the service with version 3.4.3 again, there are no problems:

=INFO REPORT==== 16-Mar-2015::11:00:38 ===
Starting RabbitMQ 3.4.3 on Erlang 17.4
Copyright (C) 2007-2014 GoPivotal, Inc.
Licensed under the MPL.  See http://www.rabbitmq.com/

=INFO REPORT==== 16-Mar-2015::11:00:38 ===
node           : rabbit@myservername
home dir       : C:\Windows
config file(s) : d:/Daten/RabbitMQ/rabbitmq.config
log            : D:/Daten/RabbitMQ/log/rabbit.log
sasl log       : D:/Daten/RabbitMQ/log/rabbit-sasl.log
database dir   : d:/Daten/RabbitMQ/db/rabbit-mnesia

...
=INFO REPORT==== 16-Mar-2015::11:00:41 ===
Server startup complete; 8 plugins started.
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * webmachine
 * mochiweb
 * rabbitmq_shovel
 * rabbitmq_auth_backend_ldap
 * rabbitmq_management_agent
 * amqp_client

=INFO REPORT==== 16-Mar-2015::11:42:01 ===
    LDAP DECISION: does myaduserdomainname have tag administrator? false

=INFO REPORT==== 16-Mar-2015::11:42:01 ===
LDAP DECISION: login for myaduserdomainname: ok

Login successful!

Here is my rabbitmq.config:

[
 {rabbit,
  [%%
   {auth_backends, [{rabbit_auth_backend_ldap, rabbit_auth_backend_internal}, rabbit_auth_backend_internal]},
   {default_vhost,       <<"/">>},
   {default_user,        <<"myaduserdomainname">>},
   {default_pass,        <<"">>}
  ]},

  ... no changes at all ...

 {rabbitmq_auth_backend_ldap,
  [
   {servers, ["fqdn_adserver"]},
   {use_ssl, true},
   {port, secured_ldap_port},
   {log, false},
   {user_dn_pattern, "${username}@domain"},
   {dn_lookup_attribute,   "userPrincipalName"},
   {dn_lookup_base,        "OU=User, OU=Location,OU=language,dc=domain,dc=domainsuffix"}
  ]}
].

If any further information is needed, please let me know!
Kind regards,
birgos

Priority queue does not sensibly report ram duration, queue might not page out under memory pressure

Noticed while working on #65.

The priority queue ram_duration/1 implementation takes the sum of all the ram durations of the sub-queues. That means that if a single sub-queue has a ram duration of infinity (which is very likely!), then so does the entire queue.

I didn't really think that through. The point of what we're trying to do here is identify fast-moving queues so they can use more ram. Ideally each sub-queue would have its own ram duration, but in the absence of that we should probably take the minimum of the sub-queue ram durations, not the sum of them.
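
A sketch of that combination rule, treating infinity as the identity so a single idle sub-queue no longer dominates the result (illustrative code, not the actual priority-queue implementation):

min_duration(infinity, D) -> D;
min_duration(D, infinity) -> D;
min_duration(D1, D2)      -> erlang:min(D1, D2).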

Error when calling rabbitmqctl join_cluster

rabbit@adp-rabbitmq01:~# rabbitmqctl join_cluster --ram rabbit@adp-rabbitmq02 rabbit@adp-rabbitmq03

Error: {'EXIT',
       {function_clause,
           [{rabbit_control_main,action,
                [join_cluster,'rabbit@adp-rabbitmq01',
                 ["rabbit@adp-rabbitmq02","rabbit@adp-rabbitmq03"],
                 [{"-q",false},
                  {"-n",'rabbit@adp-rabbitmq01'},
                  {"--ram",true}],
                 #Fun<rabbit_control_main.11.121822165>],
                []},
            {rabbit_cli,main,3,[]},
            {init,start_it,1,[]},
            {init,start_em,1,[]}]}}

I've followed the guidelines at http://www.rabbitmq.com/ha.html but can't seem to add nodes to the cluster.

adp-rabbitmq01/02/03 resolve to their assigned IP addresses and I can telnet to each of them from all nodes. The Erlang cookie has also been checked (several times) and is the same on all machines/nodes.

I can't seem to find anything wrong with what I'm doing.

Running RabbitMQ 3.4.3 on all machines!
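
Likely relevant: in this release, join_cluster takes a single cluster node as its argument, not a list; the joining node discovers the rest of the cluster from that one seed, so the second node name on the command line is what triggers the function_clause. A form that should work (node names as above):

rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@adp-rabbitmq02
rabbitmqctl start_app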

Messages lost on server restart with dead-letter and durable TTL queues

This bug was first identified back in 2012 on the rabbitmq-discuss mailing list, which has an in-depth explanation.

http://rabbitmq.1065348.n5.nabble.com/Lost-messages-on-dead-letter-ttl-queue-while-process-is-restarted-td23004.html

To summarize the above thread: if a durable TTL queue is set up with a dead-letter exchange to create a delayed queue, and the rabbitmq server is stopped and restarted after a period greater than the queue's TTL, the messages get lost rather than being dead-lettered.

While the bug was identified approximately 1.5 years ago, this same issue is present in v3.3.1, so one would assume it has not been fixed yet?

Update HiPE recommendations in the docs

Post 3.5.0, so filing it here.

HiPE seems to have matured, and I couldn't find a HiPE-induced crash report on Erlang 17.x. I can, however, recall a couple of high-profile users running it in production.

So maybe we should reconsider our position and recommend that people give it a try if throughput is a major concern for them.

rabbitmq-server.ocf deletes /var/run/rabbitmq directory

When the rabbitmq-server pacemaker resource agent deletes the pid file, it also deletes the entire /var/run/rabbitmq directory. On RHEL 7 with the systemd service, if I want to use systemctl to work with the resource after using pacemaker, I have to recreate that directory and set the owner before I can start rabbitmq-server as a service again.
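
A workaround until the agent is fixed: recreate the runtime directory with the right ownership before handing the service back to systemd (paths per the RHEL 7 packaging):

install -d -o rabbitmq -g rabbitmq /var/run/rabbitmq
systemctl start rabbitmq-server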

RabbitMQ can go down after restart

After a restart of the rabbitmq server, it can become inaccessible, returning an error to clients:

ConnectionError: 541: INTERNAL_ERROR

I found that it was caused by a corrupted rabbit_serial file in the mnesia directory.

Here is what I have in log:

=SUPERVISOR REPORT==== 28-May-2013::17:44:55 ===
     Supervisor: {local,rabbit_guid_sup}
     Context:    start_error
     Reason:     {'EXIT',
                     {{case_clause,{ok,[]}},
                      [{rabbit_guid,update_disk_serial,0,
                           [{file,"src/rabbit_guid.erl"},{line,64}]},
                       {rabbit_guid,start_link,0,
                           [{file,"src/rabbit_guid.erl"},{line,54}]},
                       {supervisor,do_start_child,2,
                           [{file,"supervisor.erl"},{line,303}]},
                       {supervisor,start_children,3,
                           [{file,"supervisor.erl"},{line,287}]},
                       {supervisor,init_children,2,
                           [{file,"supervisor.erl"},{line,253}]},
                       {gen_server,init_it,6,
                           [{file,"gen_server.erl"},{line,304}]},
                       {proc_lib,init_p_do_apply,3,
                           [{file,"proc_lib.erl"},{line,227}]}]}}
     Offender:   [{pid,undefined},
                  {name,rabbit_guid},
                  {mfargs,{rabbit_guid,start_link,[]}},
                  {restart_type,transient},
                  {shutdown,4294967295},
                  {child_type,worker}]

This error can be reproduced by replacing the rabbit_serial file with an empty file.

I expect rabbitmq to be able to recover after restart, maybe by replacing invalid rabbit_serial file with some good default?
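
A reproduction and manual recovery sketch, assuming the default mnesia location: an empty rabbit_serial triggers the {case_clause,{ok,[]}} above, while removing the file entirely appears to let the node regenerate the serial on the next start (worth verifying on a test node first):

# reproduce: truncate the serial file, then restart the node
: > /var/lib/rabbitmq/mnesia/rabbit@$(hostname -s)/rabbit_serial
# recover: delete the corrupt file and start again
rm /var/lib/rabbitmq/mnesia/rabbit@$(hostname -s)/rabbit_serial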

Can rabbitmq-server resource agent support a nodename that supports cloned resources?

The rabbitmq-server resource agent cannot be used to create pacemaker cloned resources, for example when using a mirrored rabbitmq cluster.

The default nodename, rabbit@localhost, does not work; a value like rabbit@host1 is needed. But you cannot use rabbit@hostname without creating a separate resource for each node, as well as a corresponding location constraint to force pacemaker to run the resource on the correct node.

Can the behavior of the resource agent be changed to handle this scenario better? Some suggestions have been made including changing the default node name to:

  • use NODENAME from rabbitmq-env.conf if set (see the sketch after this list)
  • use rabbit@$(hostname -s)
  • support setting the resource agent nodename property to a value like "rabbit@^hostname^", where the resource agent would replace ^hostname^ with the output of hostname -s.
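
For example, the first suggestion is a one-line addition to rabbitmq-env.conf, which is sourced as a shell script, so command substitution works:

# /etc/rabbitmq/rabbitmq-env.conf
NODENAME=rabbit@$(hostname -s)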

This is discussed in a couple of rabbitmq-users posts:

Error: "process_not_running"

I get the following error when trying to wait for rabbitmq to start:

[   10.505287] rabbitmq-post-start[1306]: Waiting for rabbit@server ...
[   10.506554] rabbitmq-post-start[1306]: pid is 1305 ...
[   10.525235] rabbitmq-post-start[1306]: Error: process_not_running

The process is running and listening, the pid is correctly written, and the path to it is correct. What else could be wrong?

This is systemd service config:

[Unit]
After=network.target
Description=RabbitMQ Server

[Service]
Environment="GCOV_PREFIX=/tmp/xchg/coverage-data"
Environment="LOCALE_ARCHIVE=/nix/store/b6sfq9h1ahl6xaa4hmbmszghdhljhs7x-glibc-locales-2.20/lib/locale/locale-archive"
Environment="PATH=/nix/store/rl91q0al47bv19hq3vziv6s06lq2m99a-rabbitmq-server-3.4.3/bin:/nix/store/h4ssyq8lac0ywmn8j0lsichvj9fvcfyd-coreutils-8.23/bin:/nix/store/ay30q0agqfakb1giq44il646n5b4lahv-findutils-4.4.2/bin:/nix/store/6y37f4lbj1gampn4b79xy9qcyib44qby-gnugrep-2.20/bin:/nix/store/pvylk9549spwycylhn27qyggs3xnlwyv-gnused-4.2.2/bin:/nix/store/55fr8jhlmzkjdkwpw2kvvbq71wd2dxjh-systemd-217/bin:/nix/store/rl91q0al47bv19hq3vziv6s06lq2m99a-rabbitmq-server-3.4.3/sbin:/nix/store/h4ssyq8lac0ywmn8j0lsichvj9fvcfyd-coreutils-8.23/sbin:/nix/store/ay30q0agqfakb1giq44il646n5b4lahv-findutils-4.4.2/sbin:/nix/store/6y37f4lbj1gampn4b79xy9qcyib44qby-gnugrep-2.20/sbin:/nix/store/pvylk9549spwycylhn27qyggs3xnlwyv-gnused-4.2.2/sbin:/nix/store/55fr8jhlmzkjdkwpw2kvvbq71wd2dxjh-systemd-217/sbin"
Environment="RABBITMQ_ENABLED_PLUGINS_FILE=/nix/store/99h3wdy4ijd0728i634mxdr3qfr5634x-enabled_plugins"
Environment="RABBITMQ_MNESIA_BASE=/var/lib/rabbitmq/mnesia"
Environment="RABBITMQ_NODE_IP_ADDRESS=127.0.0.1"
Environment="RABBITMQ_NODE_PORT=5672"
Environment="RABBITMQ_PID_FILE=/var/lib/rabbitmq/pid"
Environment="RABBITMQ_SERVER_START_ARGS=-rabbit error_logger tty -rabbit sasl_error_logger false"
Environment="SYS_PREFIX="
Environment="TZDIR=/nix/store/w84y452knm71afyn9j1qpm23hjyi88ry-tzdata-2014j/share/zoneinfo"



ExecStart=/nix/store/rl91q0al47bv19hq3vziv6s06lq2m99a-rabbitmq-server-3.4.3/sbin/rabbitmq-server
ExecStartPost=/nix/store/rl91q0al47bv19hq3vziv6s06lq2m99a-rabbitmq-server-3.4.3/sbin/rabbitmqctl wait /var/lib/rabbitmq/pid
ExecStartPre=/nix/store/r89z35v0wwrq2fv9rikcyk0lhz94vpqg-unit-script/bin/rabbitmq-pre-start
ExecStop=/nix/store/rl91q0al47bv19hq3vziv6s06lq2m99a-rabbitmq-server-3.4.3/sbin/rabbitmqctl stop
Group=rabbitmq
User=rabbitmq
WorkingDirectory=/var/lib/rabbitmq

Feature Request - Command to Set Master Queue.

Would it be possible to have a rabbitmqctl command to set a mirrored queue slave (in a specific vhost, on a specific node of a cluster) to be the master queue, without having to restart any nodes? We are working on a recovery strategy for node failures and network partitions in a RabbitMQ cluster, specifically one spread across multiple AWS availability zones (not geographic regions, just AZs). Thank you.
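
For context, no such command exists at the time of writing. The closest existing control is an explicit "nodes" ha policy: mirrors are constrained to the listed nodes, and re-applying such a policy can force a queue's master off a node that is no longer listed. A sketch with illustrative names:

rabbitmqctl set_policy -p myvhost ha-pin '^critical\.' '{"ha-mode":"nodes","ha-params":["rabbit@az1-node","rabbit@az2-node"]}'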

"No items" message of rabbitmqadmin, rabbitmq-server bug?

Hi,

I noticed some strange behavior in rabbitmq:

  1. Get the list of queues:
    root@localhost:~# rabbitmqctl list_queues
    Listing queues ...
    send_emails_local 311
  2. Then try to get messages from the queue, via rabbitmqadmin or another client:
    root@localhost:~# rabbitmqadmin get queue=send_emails_local requeue=true count=10
    No items

This error occurs intermittently and is corrected by a server restart. Node status at the time:
root@localhost:~# rabbitmqctl status
Status of node 'rabbit@localhost' ...
[{pid,19301},
{running_applications,
[{rabbitmq_management,"RabbitMQ Management Console","2.8.4"},
{xmerl,"XML parser","1.3.1"},
{rabbitmq_management_agent,"RabbitMQ Management Agent","2.8.4"},
{amqp_client,"RabbitMQ AMQP Client","2.8.4"},
{rabbit,"RabbitMQ","2.8.4"},
{os_mon,"CPO CXC 138 46","2.2.9"},
{sasl,"SASL CXC 138 11","2.2.1"},
{rabbitmq_mochiweb,"RabbitMQ Mochiweb Embedding","2.8.4"},
{webmachine,"webmachine","1.7.0-rmq2.8.4-hg"},
{mochiweb,"MochiMedia Web Server","1.3-rmq2.8.4-git"},
{inets,"INETS CXC 138 49","5.9"},
{mnesia,"MNESIA CXC 138 12","4.7"},
{stdlib,"ERTS CXC 138 10","1.18.1"},
{kernel,"ERTS CXC 138 10","2.15.1"}]},
{os,{unix,linux}},
{erlang_version,
"Erlang R15B01 (erts-5.9.1) [source] [64-bit] [smp:8:8] [async-threads:30] [kernel-poll:true]\n"},
{memory,
[{total,355708992},
{processes,203104614},
{processes_used,203104600},
{system,152604378},
{atom,695185},
{atom_used,664354},
{binary,34721608},
{code,18096525},
{ets,94972392}]},
{vm_memory_high_watermark,0.3999999999881596},
{vm_memory_limit,13513082470},
{disk_free_limit,1000000000},
{disk_free,2166095872},
{file_descriptors,
[{total_limit,924},
{total_used,34},
{sockets_limit,829},
{sockets_used,13}]},
{processes,[{limit,1048576},{used,285}]},
{run_queue,0},
{uptime,1096867}]
...done.

Examples of custom ssl option: verify_fun

I'm not an Erlang guru, and I can't find any examples specifying a custom verify_fun. Could you please share your experience?

I've tried to write something like:

-module(mymod).
%% the exported name must match the function referenced from
%% rabbitmq.config (the original snippet exported custom_validator/1
%% while defining validate_function/1, so the callback could never be
%% found)
-export([validate_function/1]).

validate_function(_) ->
    %% file:write/2 expects an io device; file:write_file/2 takes a path
    file:write_file("/tmp/out", "In my validator\n"),
    true.

rabbitmq.config

{verify_fun, {mymod, validate_function}}

I put mymod.beam in /usr/lib/rabbitmq/lib/rabbitmq_server-3.3.0/include/ebin, but it seems that my validate_function is never called, since there is no /tmp/out file.
What's wrong here?

Pathological performance of file_handle_cache read buffer when synchronising queues

Noticed while working on #65.

The queue eager-sync process ends up reading messages from the queue in the opposite order to that in which they were published. If the messages are being read from the message store, this causes very poor performance since 3.5.0.

If we have 10kB messages then we read 1MB from near the end of the file, then seek back 10kB, decide that's outside the buffer, read another 1MB, seek back 10kB, and so on. On my machine we sync ~250 msg/s like that. Oops!

Ways we could fix this, in ascending order of cleverness:

  • Hard code a smaller buffer size
  • Dynamically shrink the buffer size if we determine it is not working
  • Read the buffer backwards from our seek point if we detect we are seeking backwards

aliveness-test still succeeds when the memory high watermark is reached and publishing is blocked

Hi,

From the docs it seems that if I call /api/aliveness-test I can be 100% sure my apps are able to publish and consume messages from my MQ.

Declares a test queue, then publishes and consumes a message. Intended for use by monitoring tools.

So, when I put my broker into a state where it will not allow publishing new items, for instance by setting rabbitmqctl set_vm_memory_high_watermark 0.01,
and I can see "Publishers will be blocked until this alarm clears" in the logs and my application can't actually publish new messages, I'd expect the aliveness-test to tell me that.
What I actually see is:

root@mq:/home/vagrant# curl -u foo:bar http://localhost:15672/api/aliveness-test/%2F
{"status":"ok"}

This is not correct, considering this API call should really "publish and consume a message. Intended for use by monitoring tools."

This is what I run at the moment:

{running_applications,
     [{rabbitmq_management,"RabbitMQ Management Console","3.1.5"},
      {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.1.5"},
      {webmachine,"webmachine","1.10.3-rmq3.1.5-gite9359c7"},
      {mochiweb,"MochiMedia Web Server","2.7.0-rmq3.1.5-git680dba8"},
      {rabbitmq_management_agent,"RabbitMQ Management Agent","3.1.5"},
      {rabbit,"RabbitMQ","3.1.5"},
      {os_mon,"CPO  CXC 138 46","2.2.9"},
      {inets,"INETS  CXC 138 49","5.9"},
      {xmerl,"XML parser","1.3.1"},
      {mnesia,"MNESIA  CXC 138 12","4.7"},
      {amqp_client,"RabbitMQ AMQP Client","3.1.5"},
      {sasl,"SASL  CXC 138 11","2.2.1"},
      {stdlib,"ERTS  CXC 138 10","1.18.1"},
      {kernel,"ERTS  CXC 138 10","2.15.1"}]},
 {os,{unix,linux}},
 {erlang_version,
     "Erlang R15B01 (erts-5.9.1) [source] [64-bit] [async-threads:30] [kernel-poll:true]\n"}

node.js tutorial patch

Thanks for providing node.js support!

To get the first 3 tutorials to work, I had to change a couple of files like this:

╭─imran at lenovo in ~ 
╰─○ colordiff rabbitmq-tutorials/javascript-nodejs/send.js carrot/send.js 
7c7
<     connection.publish('task_queue', 'Hello World!');

---
>     connection.publish('hello', 'Hello World!');
╭─imran at lenovo in ~ 
╰─○ colordiff rabbitmq-tutorials/javascript-nodejs/amqp-hacks.js carrot/amqp-hacks.js 
7c7,8
<     connection.queue('tmp-' + Math.random(), {exclusive: true}, function(){

---
>     connection.queue('task_queue', {exclusive: true}, function(){
>     //connection.queue('tmp-' + Math.random(), {exclusive: true}, function(){

Logging enhancement on unsynchronised slaves

Hi,

Would it be possible to add some logging around slaves coming back after being offline?
If a slave coming back has unsynchronised mirrored queues, it would be good if a message were printed specifying which queues are impacted.

When running clusters with ha-sync-mode set to manual, you want to know when you're at risk and should run a manual sync (a decision based on the number of unsynchronised nodes vs. the total number of nodes).
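
Until such logging exists, the same information can be polled from the CLI: a queue whose synchronised_slave_pids list is shorter than its slave_pids list is at risk (column names per the 3.x mirroring docs):

rabbitmqctl list_queues name slave_pids synchronised_slave_pids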

Regards.

Priority Queue Efficiency

I stumbled upon the priority queue implementation (priority_queue.erl) and was wondering whether implementing it as a heap wouldn't increase performance. Or are there other reasons to keep it like that?

Disk check (df) runs every few seconds

Output from execsnoop on OS X:

2013 Mar  8 23:44:13   501   2933   2932 df
2013 Mar  8 23:44:14   501   2947   2946 df
2013 Mar  8 23:44:18   501   2957   2956 df
2013 Mar  8 23:44:23   501   2960   2959 df
2013 Mar  8 23:44:24   501   2962   2961 df
2013 Mar  8 23:44:29   501   2969   2968 df
2013 Mar  8 23:44:34   501   2971   2970 df
2013 Mar  8 23:44:34   501   2973   2972 df
2013 Mar  8 23:44:39   501   2977   2976 df
2013 Mar  8 23:44:44   501   2980   2979 df
2013 Mar  8 23:44:44   501   2982   2981 df
2013 Mar  8 23:44:49   501   2985   2984 df
2013 Mar  8 23:44:54   501   2989   2988 df
2013 Mar  8 23:44:54   501   2991   2990 df
2013 Mar  8 23:44:59   501   2994   2993 df
2013 Mar  8 23:45:04   501   2996   2995 df
2013 Mar  8 23:45:04   501   2998   2997 df
2013 Mar  8 23:45:09   501   3001   3000 df
2013 Mar  8 23:45:14   501   3004   3003 df
2013 Mar  8 23:45:15   501   3006   3005 df
2013 Mar  8 23:45:20   501   3008   3007 df
2013 Mar  8 23:45:24   501   3012   3011 df
2013 Mar  8 23:45:25   501   3014   3013 df
2013 Mar  8 23:45:30   501   3017   3016 df
2013 Mar  8 23:45:34   501   3019   3018 df
2013 Mar  8 23:45:35   501   3023   3022 df
2013 Mar  8 23:45:40   501   3028   3027 df

The code starts a timer set to DEFAULT_DISK_CHECK_INTERVAL, which is 10 seconds, so something tells me multiple timers are being started, or that Erlang's timer is somehow misbehaving.

This is 3.0.3 installed from rabbitmq-server-generic-unix-3.0.3.tar.gz on Erlang R16B via Homebrew.

Confusing sentence in the auto-clustering docs

If cluster_nodes is specified, RabbitMQ will try to cluster to each node provided,
and stop after it can cluster with one of them.

on this page can lead people to believe that the node will stop (terminate) after clustering, instead of stopping its attempts once it manages to cluster with a reachable node.

rabbitmqctl rotate_logs issue

I am running rabbitmq-server-2.8.4 on RHEL 6.3. In /var/log/secure there are messages:

Jan  2 17:24:48 linux02 sudo:     root : sorry, you must have a tty to run sudo ; TTY=console ; PWD=/ ; USER=rabbitmq ; COMMAND=/usr/sbin/rabbitmqctl -n rabbit@localhost status
Jan  2 17:26:02 linux02 sudo:     root : sorry, you must have a tty to run sudo ; TTY=console ; PWD=/ ; USER=rabbitmq ; COMMAND=/usr/sbin/rabbitmqctl wait /var/run/rabbitmq/pid
Jan  6 03:10:13 linux02 sudo:     root : sorry, you must have a tty to run sudo ; TTY=unknown ; PWD=/ ; USER=rabbitmq ; COMMAND=/usr/sbin/rabbitmqctl rotate_logs

The default /etc/sudoers is in use. How do I fix these messages, please?
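
These rejections come from the requiretty default in RHEL's sudoers, which blocks sudo when there is no terminal, as with init scripts and the nightly rotate_logs run shown above. One targeted fix, rather than disabling requiretty globally (sudoers syntax; adjust to local policy):

Defaults!/usr/sbin/rabbitmqctl !requiretty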

Question: auto-delete

Concerning https://www.rabbitmq.com/amqp-0-9-1-reference.html, using auto-delete: true.

workflow:

  1. producer(p) connects and binds to an exchange(e) // can be step 2
  2. consumer(c) connects and binds to an exchange(e) // can be step 1
  3. p sends some messages
  4. c receives messages
  5. c disconnects // currently the exchange(e) could be deleted
  6. p sends some other messages

Couldn't the exchange check not only for consumers but for producers too, so that in the workflow described the exchange would not be deleted?
