yaoweibin / nginx_upstream_check_module

This project was forked from cep21/healthcheck_nginx_upstreams


Health checks for nginx upstreams

Home Page: http://github.com/yaoweibin/nginx_upstream_check_module


nginx_upstream_check_module's Introduction

Name
    nginx_http_upstream_check_module - supports health checks for upstream
    servers with Nginx

Synopsis
    http {

        upstream cluster {

            # simple round-robin
            server 192.168.0.1:80;
            server 192.168.0.2:80;

            check interval=5000 rise=1 fall=3 timeout=4000;

            #check interval=3000 rise=2 fall=5 timeout=1000 type=ssl_hello;

            #check interval=3000 rise=2 fall=5 timeout=1000 type=http;
            #check_http_send "HEAD / HTTP/1.0\r\n\r\n";
            #check_http_expect_alive http_2xx http_3xx;
        }

        server {
            listen 80;

            location / {
                proxy_pass http://cluster;
            }

            location /status {
                check_status;

                access_log   off;
                allow SOME.IP.ADD.RESS;
                deny all;
           }
        }

    }

Description
    Adds health-check support for the upstream servers.

Directives
  check
    syntax: *check interval=milliseconds [fall=count] [rise=count]
    [timeout=milliseconds] [default_down=true|false] [port=check_port]
    [type=tcp|http|ssl_hello|mysql|ajp|fastcgi]*

    default: *none; if a parameter is omitted, its default is interval=30000
    fall=5 rise=2 timeout=1000 default_down=true type=tcp*

    context: *upstream*

    description: Enables health checking of the servers in this upstream
    block.

    The parameters' meanings are:

    *   *interval*: the interval between two check requests, in milliseconds.

    *   *fall* (fall_count): after fall_count consecutive check failures, the
        server is marked down.

    *   *rise* (rise_count): after rise_count consecutive successful checks,
        the server is marked up.

    *   *timeout*: the timeout of the check request, in milliseconds.

    *   *default_down*: sets the initial state of the backend server; the
        default is down.

    *   *port*: the port to check on the backend servers. It can differ from
        the server's original port. The default is 0, meaning the same port
        as the original backend server. See the sketch after this list.

    *   *type*: the check protocol type:

        1.  *tcp*: a simple TCP socket connect that peeks one byte.

        2.  *ssl_hello*: sends a client SSL hello packet and expects the
            server's SSL hello packet in reply.

        3.  *http*: sends an HTTP request and parses the response to
            determine whether the upstream server is alive.

        4.  *mysql*: connects to the MySQL server and reads the greeting
            response to determine whether the upstream server is alive.

        5.  *ajp*: sends an AJP Cping packet and parses the AJP Cpong
            response to determine whether the upstream server is alive.

        6.  *fastcgi*: sends a FastCGI request and parses the FastCGI
            response to determine whether the upstream server is alive.
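
    For example, a minimal sketch of a check that probes an alternate port
    with the MySQL protocol (the addresses and port numbers are placeholders,
    not part of the original documentation):

        upstream database {
            server 192.168.0.1:3306;
            server 192.168.0.2:3306;

            # probe the MySQL greeting on a dedicated check port
            check interval=3000 rise=2 fall=5 timeout=1000 port=3307 type=mysql;
        }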

  check_http_send
    syntax: *check_http_send http_packet*

    default: *"GET / HTTP/1.0\r\n\r\n"*

    context: *upstream*

    description: If the check type is http, the check module sends this HTTP
    request to check the upstream server.
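
    For example, a hedged sketch of a custom probe request (the path and the
    Host header value are placeholders):

        check_http_send "GET /healthcheck HTTP/1.0\r\nHost: backend.example.com\r\n\r\n";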

  check_http_expect_alive
    syntax: *check_http_expect_alive [ http_2xx | http_3xx | http_4xx |
    http_5xx ]*

    default: *http_2xx | http_3xx*

    context: *upstream*

    description: These status codes indicate that the upstream server's HTTP
    response is OK and the backend is alive.

  check_keepalive_requests
    syntax: *check_keepalive_requests num*

    default: *check_keepalive_requests 1*

    context: *upstream*

    description: This directive specifies the number of check requests sent
    over a single connection. The default value of 1 means nginx closes the
    connection after every request.
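
    A sketch of reusing one connection for many check requests (the request
    line and header values are illustrative assumptions; the backend must
    keep the connection open):

        check interval=3000 rise=2 fall=5 timeout=1000 type=http;
        check_keepalive_requests 100;
        check_http_send "HEAD / HTTP/1.1\r\nConnection: keep-alive\r\nHost: localhost\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;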

  check_fastcgi_param
    syntax: *check_fastcgi_param parameter value*

    default: see below

    context: *upstream*

    description: If the check type is fastcgi, the check module sends these
    FastCGI parameters to check the upstream server. The default parameters
    are:

          check_fastcgi_param "REQUEST_METHOD" "GET";
          check_fastcgi_param "REQUEST_URI" "/";
          check_fastcgi_param "SCRIPT_FILENAME" "index.php";

  check_shm_size
    syntax: *check_shm_size size*

    default: *1M*

    context: *http*

    description: The default size is one megabyte. If you check thousands of
    servers, the shared memory for health checking may not be enough; you can
    enlarge it with this directive, as in the sketch below.
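
    For example, a sketch that enlarges the zone to 10 megabytes (the size
    is an arbitrary illustration):

        http {
            check_shm_size 10M;

            # upstream and server blocks as usual ...
        }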

  check_status
    syntax: *check_status [html|csv|json]*

    default: *none*

    context: *location*

    description: Displays the status of the health-checked servers over HTTP.
    This directive should be used inside a location block.

    You can specify the default display format. The formats can be `html`,
    `csv` or `json`; the default is `html`. The format can also be selected
    per request with the `format` argument. If your `check_status` location
    is '/status', the `format` argument changes the display format:

        /status?format=html
        /status?format=csv
        /status?format=json

    You can also fetch the list of servers with a given status via the
    `status` argument. For example:

        /status?format=html&status=down
        /status?format=csv&status=up

    Below is a sample HTML page:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
        "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
          <title>Nginx http upstream check status</title>
        </head>
        <body>
          <h1>Nginx http upstream check status</h1>
          <h2>Check upstream server number: 1, generation: 3</h2>
          <table>
            <tr>
              <th>Index</th>
              <th>Upstream</th>
              <th>Name</th>
              <th>Status</th>
              <th>Rise counts</th>
              <th>Fall counts</th>
              <th>Check type</th>
              <th>Check port</th>
            </tr>
            <tr>
              <td>0</td>
              <td>backend</td>
              <td>106.187.48.116:80</td>
              <td>up</td>
              <td>39</td>
              <td>0</td>
              <td>http</td>
              <td>80</td>
            </tr>
          </table>
        </body>
        </html>

    Below is a sample CSV page:

        0,backend,106.187.48.116:80,up,46,0,http,80

    Below is a sample JSON page:

        {"servers": {
          "total": 1,
          "generation": 3,
          "server": [
           {"index": 0, "upstream": "backend", "name": "106.187.48.116:80", "status": "up", "rise": 58, "fall": 0, "type": "http", "port": 80}
          ]
         }}

Installation
    Download the latest version of the release tarball of this module from
    github (<http://github.com/yaoweibin/nginx_upstream_check_module>)

    Grab the nginx source code from nginx.org (<http://nginx.org/>), for
    example, the version 1.0.14 (see nginx compatibility), and then build
    the source with this module:

        $ wget 'http://nginx.org/download/nginx-1.0.14.tar.gz'
        $ tar -xzvf nginx-1.0.14.tar.gz
        $ cd nginx-1.0.14/
        $ patch -p1 < /path/to/nginx_http_upstream_check_module/check.patch

        $ ./configure --add-module=/path/to/nginx_http_upstream_check_module

        $ make
        $ make install

Note
    If you use nginx-1.2.1 or nginx-1.3.0, in which the nginx upstream round
    robin module changed greatly, you should use the patch named
    'check_1.2.1.patch'.

    If you use nginx-1.2.2+ or nginx-1.3.1+, which added the upstream
    least_conn module, you should use the patch named 'check_1.2.2+.patch'.

    If you use nginx-1.2.6+ or nginx-1.3.9+, which adjusted the round robin
    module, you should use the patch named 'check_1.2.6+.patch'.

    If you use nginx-1.5.12+, you should use the patch named
    'check_1.5.12+.patch'.

    If you use nginx-1.7.2+, you should use the patch named
    'check_1.7.2+.patch'.

    The patch only adds support for the official round-robin, ip_hash and
    least_conn upstream modules, but it is easy to extend this module to
    other upstream modules. See the patch for details.

    If you want to add the support for upstream fair module, you can do it
    like this:

        $ git clone git://github.com/gnosek/nginx-upstream-fair.git
        $ cd nginx-upstream-fair
        $ patch -p2 < /path/to/nginx_http_upstream_check_module/upstream_fair.patch
        $ cd /path/to/nginx-1.0.14
        $ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-upstream-fair-module
        $ make
        $ make install

    If you want to add the support for nginx sticky module, you can do it
    like this:

        $ svn checkout http://nginx-sticky-module.googlecode.com/svn/trunk/ nginx-sticky-module
        $ cd nginx-sticky-module
        $ patch -p0 < /path/to/nginx_http_upstream_check_module/nginx-sticky-module.patch
        $ cd /path/to/nginx-1.0.14
        $ ./configure --add-module=/path/to/nginx_http_upstream_check_module --add-module=/path/to/nginx-sticky-module
        $ make
        $ make install

    Note that the nginx-sticky-module also needs the original check.patch.

Compatibility
    *   The module version 0.1.5 should be compatible with nginx-0.7.67+.

    *   The module version 0.1.8 should be compatible with nginx-1.0.14+.

Notes
TODO
Known Issues
Changelogs
  v0.3
    *   support keepalive check requests

    *   fastcgi check requests

    *   json/csv check status page support

  v0.1
    *   first release

Authors
    Weibin Yao(姚伟斌) *yaoweibin at gmail dot com*

    Matthieu Tourne

Copyright & License
    This README template was copied from agentzh (<http://github.com/agentzh>).

    The health-check part borrows the design of Jack Lindamood's healthcheck
    module healthcheck_nginx_upstreams
    (<http://github.com/cep21/healthcheck_nginx_upstreams>).

    This module is licensed under the BSD license.

    Copyright (C) 2014 by Weibin Yao <[email protected]>

    Copyright (C) 2010-2014 Alibaba Group Holding Limited

    Copyright (C) 2014 by LiangBin Li

    Copyright (C) 2014 by Zhuo Yuan

    Copyright (C) 2012 by Matthieu Tourne

    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are
    met:

    *   Redistributions of source code must retain the above copyright
        notice, this list of conditions and the following disclaimer.

    *   Redistributions in binary form must reproduce the above copyright
        notice, this list of conditions and the following disclaimer in the
        documentation and/or other materials provided with the distribution.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
    IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
    TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
    PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
    TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
    PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
    LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
    NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
    SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

nginx_upstream_check_module's People

Contributors

bradq, cep21, chobits, devilsmans, dmitry-saprykin, flombardi, jbergstroem, juzipeek, piotrsikora, prune998, saravsars, whissi, yaoweibin, yongjianchn, zengjinji


nginx_upstream_check_module's Issues

nginx reload not working; creates orphan yet active worker

I'm experiencing the same bug described in this report: http://forum.nginx.org/read.php?2,216933,225324#msg-225324

Like in that report, when I reload nginx the old worker is orphaned but keeps running; the worker process still shows up in the output of ps ux, and in fact port 80, used by the load balancer, is still in use.

In order to perform the reload, I have to kill the worker processes.

The version of nginx I'm using is 1.6.2; this was happening already with 1.4.x versions; upstream_check is the only module I'm compiling.

I can reproduce it easily with a couple of consecutive nginx -s reload calls.
If I don't compile upstream_check, the problem doesn't reproduce.

check_status not working

Hi Weibin, I ran into a check_status error while using this module and could use some help :)
Error log:
2015/05/12 03:39:55 [error] 26982#0: *335 http upstream check module can not find any check server, make sure you've added the check servers, client: 127.0.0.1, server: , request: "GET /ns HTTP/1.1", host: "localhost:7777"
I am using openresty/1.7.10.1, with your consistent-hash module installed as well.

segfault on sighup in ngx_http_upstream_check_find_shm_peer()

This is basically the same thing as #46 but I'm still seeing it with current master branch of upstream_check and nginx-1.7.9. The first sighup always succeeds; after that it isn't deterministic. The most I've been able to do is 10 reloads before the segfault.

I've pasted a full backtrace: http://pastebin.com/DrK52J2A

In that same area of ngx_http_upstream_check_module.c:

    if (ngx_memcmp(addr->sockaddr, peer_shm->sockaddr, addr->socklen) == 0
        && upstream_name->len == peer_shm->upstream_name->len
        && ngx_strncmp(upstream_name->data, peer_shm->upstream_name->data, upstream_name->len) == 0) {
        return peer_shm;
    }

(gdb) print upstream_name->len
$23 = 8
(gdb) print peer_shm->upstream_name->len
$24 = 8

So the (upstream_name->len == peer_shm->upstream_name->len) requirement is satisfied.

(gdb) print upstream_name->data
$25 = (u_char *) 0x153b421 "ucr-farm"
(gdb) print peer_shm->upstream_name->data
$26 = (u_char *) 0x1000 <Address 0x1000 out of bounds>

Woops.

compatibility with Nginx 1.9.2

It seems the patch file does not work with nginx 1.9.2.
I know it is quite new and not stable.
Here is the result of the patch application :

patch -p1 < ../nginx_upstream_check_module/check_1.7.5+.patch
patching file src/http/modules/ngx_http_upstream_hash_module.c
Hunk #2 succeeded at 238 (offset -5 lines).
Hunk #3 succeeded at 547 (offset 29 lines).
patching file src/http/modules/ngx_http_upstream_ip_hash_module.c
Hunk #2 succeeded at 208 with fuzz 1 (offset -7 lines).
patching file src/http/modules/ngx_http_upstream_least_conn_module.c
Hunk #1 succeeded at 9 with fuzz 2.
Hunk #2 succeeded at 151 (offset -55 lines).
Hunk #3 FAILED at 269.
1 out of 3 hunks FAILED -- saving rejects to file src/http/modules/ngx_http_upstream_least_conn_module.c.rej
patching file src/http/ngx_http_upstream_round_robin.c
Hunk #2 FAILED at 95.
Hunk #3 FAILED at 151.
Hunk #4 FAILED at 210.
Hunk #5 FAILED at 319.
Hunk #6 FAILED at 359.
Hunk #7 succeeded at 640 with fuzz 1 (offset 222 lines).
Hunk #8 FAILED at 516.
6 out of 8 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream_round_robin.c.rej
patching file src/http/ngx_http_upstream_round_robin.h
Hunk #1 succeeded at 35 (offset 4 lines).

I moved and corrected a few things, and it was going well until I hit an integer cast issue. I think some variable names changed in nginx, but I can't work out what to do:

cc -c -pipe  -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g   -I src/core -I src/event -I src/event/modules -I src/os/unix -I ../nginx_upstream_check_module -I objs -I src/http -I src/http/modules -I src/mail -I
 src/stream \
                -o objs/addon/nginx-sticky-module-ng/ngx_http_sticky_module.o \
                ../nginx-sticky-module-ng/ngx_http_sticky_module.c
../nginx-sticky-module-ng/ngx_http_sticky_module.c: In function ‘ngx_http_get_sticky_peer’:
../nginx-sticky-module-ng/ngx_http_sticky_module.c:340:21: error: assignment makes pointer from integer without a cast [-Werror]
   iphp->rrp.current = iphp->selected_peer;
                     ^
cc1: all warnings being treated as errors
make[1]: *** [objs/addon/nginx-sticky-module-ng/ngx_http_sticky_module.o] Error 1
make[1]: Leaving directory `/opt/data/src/nginx-1.9.2'
make: *** [build] Error 2

Path problem with check_1.9.2+.patch

The README does not describe check_1.9.2+.patch, but since the file exists it should support 1.9.2. Applying it to nginx-1.9.2 with patch -p1 < /path/to/nginx_http_upstream_check_module/check_1.9.2+.patch
prompts: File to patch:
It cannot find the paths "--- src/http/modules/ngx_http_upstream_hash_module.c" and "+++ src/http/modules/ngx_http_upstream_hash_module.c".
Looking at earlier versions of the check patches, those two paths are prefixed with a/ and b/ respectively; after adding the prefixes, the patch applies fine.

This module appears to break the "backup" upstream functionality

Hello,

I have patched the 0.8.54 SRPM from EPEL with this module and run some tests. When I specify a health check according to the documentation, and also specify one of my upstreams as a "backup," the backup never kicks in. At the point where both of the non-backup upstreams are down, any request results in error 502 and the backup is untouched.

I have tried both a check_http_send and a tcp health check. Commenting out the health check directives allows the backup upstream to function normally.

I was really hoping this would work. My goal here is to have deep health checking for my normal upstreams, and also have a degraded-mode light-weight upstream as a last resort.

Any clue why this might be occurring?

add consistent hashing support

Howdy,

First thanks for this module!

Second, it seems you are also active in Tengine. I've been a long-time user of the hashing module, and because of that I had to stay on an old version of nginx. Now with the new version it is a headache to get it to work with the old healthcheck module (the cep21 one). I took a quick look at the hash module, and the reason the health checks are not working is that the hashing module expects the old healthcheck module (NGX_HTTP_HEALTHCHECK) to be present.

So I was wondering if you could help by adding a patch for the hashing module to work with this upstream_check_module, or, even better, port Tengine's consistent hash.

Thanks!

patch for nginx-1.2.1 has an error

patch -p1 < /path/to/nginx_upstream_check_module/check.patch

.....
5 out of 11 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream_round_robin.c.rej
patching file src/http/ngx_http_upstream_round_robin.h
....

This may be due to this change in nginx 1.2.1:
*) Bugfix: nginx might loop infinitely over backends if the
"proxy_next_upstream" directive with the "http_404" parameter was
used and there were backup servers specified in an upstream block.

which changed some of the round-robin logic.

Nginx 1.6.x

Nginx 1.6.x is the latest stable release. Please create a patch for this release.

Thanks!

Error configuring nginx with --add-module=nginx_upstream_check_module

git clone https://github.com/yaoweibin/nginx_upstream_check_module.git /src/nginx-modules/nginx_upstream_check_module;
cd nginx;
./configure --add-module=/src/nginx-modules/nginx_upstream_check_module

Error is:
adding module in ~/src/nginx-modules/nginx_upstream_check_module
checking for ngx_http_upstream_check_module ... not found
auto/configure: error: the ngx_http_upstream_check_module addon error.

I had to rename nginx_upstream_check_module to ngx_http_upstream_check_module.

1.7.2+ patch fails on 1.7.5+

I found that the upstream check patch fails when I'm building a CentOS RPM; output from the build process is below, along with the rejections:

Build Output

 STATUS=0
+ '[' 0 -ne 0 ']'
+ /bin/chmod -Rf a+rX,u+w,g-w,o-w .
+ echo 'Patch #0 (nginx_upstream_check.patch):'
Patch #0 (nginx_upstream_check.patch):
+ /usr/bin/patch -p0 --fuzz=0
+ /bin/cat /home/vagrant/rpmbuild/SOURCES/nginx_upstream_check.patch
patching file src/http/modules/ngx_http_upstream_ip_hash_module.c
patching file src/http/modules/ngx_http_upstream_least_conn_module.c
patching file src/http/ngx_http_upstream_round_robin.c
Hunk #1 FAILED at 9.
Hunk #2 succeeded at 92 (offset 4 lines).
Hunk #3 succeeded at 156 (offset 4 lines).
Hunk #4 succeeded at 224 (offset 4 lines).
Hunk #5 succeeded at 336 (offset 4 lines).
Hunk #6 succeeded at 378 (offset 4 lines).
Hunk #7 succeeded at 440 (offset 4 lines).
Hunk #8 succeeded at 538 (offset -1 lines).
1 out of 8 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream_round_robin.c.rej
patching file src/http/ngx_http_upstream_round_robin.h
error: Bad exit status from /var/tmp/rpm-tmp.yhVqD5 (%prep)


RPM build errors:
    Bad exit status from /var/tmp/rpm-tmp.yhVqD5 (%prep)

Rejections

--- src/http/ngx_http_upstream_round_robin.c
+++ src/http/ngx_http_upstream_round_robin.c
@@ -9,6 +9,9 @@
 #include <ngx_core.h>
 #include <ngx_http.h>

+#if (NGX_UPSTREAM_CHECK_MODULE)
+#include "ngx_http_upstream_check_handler.h"
+#endif

 static ngx_http_upstream_rr_peer_t *ngx_http_upstream_get_peer(
     ngx_http_upstream_rr_peer_data_t *rrp);

I can provide the spec file used.

How to get healthy status when using ngx_http_consistent_hash

Hi Weibin, thanks for this module, great job!
I use two modules developed by you:
ngx_http_consistent_hash-master and nginx_upstream_check_module-master.
My nginx configuration looks like this:
check interval=3000 rise=2 fall=5 timeout=1000 type=http;
consistent_hash $dist
but in this case I cannot get any health status through "check_status".

If I comment out consistent_hash, "check_status" works.
check interval=3000 rise=2 fall=5 timeout=1000 type=http;
consistent_hash $dist

So, how can I get the health status when using ngx_http_consistent_hash?
Thanks & Regards

Proposed patch to show a simple upstream status summary in the title and in a single line, to simplify monitoring and RRDTool

Please find below a simple patch that displays the number of upstreams in the title and in a single summary line:

diff -rup nginx_upstream_check_module-master/ngx_http_upstream_check_handler.c ../nginx_upstream_check_module-master-bg/ngx_http_upstream_check_handler.c
--- nginx_upstream_check_module-master/ngx_http_upstream_check_handler.c        2012-12-23 05:45:54.000000000 +0800
+++ ../nginx_upstream_check_module-master-bg/ngx_http_upstream_check_handler.c  2013-01-17 12:15:27.473837663 +0800
@@ -1632,6 +1632,8 @@ ngx_http_upstream_check_status_handler(n
     ngx_int_t                       rc;
     ngx_buf_t                      *b;
     ngx_uint_t                      i;
+    ngx_uint_t                      down_count;
+    ngx_uint_t                      up_count;
     ngx_chain_t                     out;
     ngx_http_check_peer_t          *peer;
     ngx_http_check_peers_t         *peers;
@@ -1686,12 +1688,22 @@ ngx_http_upstream_check_status_handler(n
     out.buf = b;
     out.next = NULL;

+    down_count=0;
+    up_count=0;
+    for (i = 0; i < peers->peers.nelts; i++) {
+        if(peer_shm[i].down) {
+            down_count++;
+        } else {
+            up_count++;
+        }
+    }
+
     b->last = ngx_snprintf(b->last, b->end - b->last,
             "<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Strict//EN\n"
             "\"http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd\">\n"
             "<html xmlns=\"http://www.w3.org/1999/xhtml\">\n"
             "<head>\n"
-            "  <title>Nginx http upstream check status</title>\n"
+            "  <title>Nginx http upstream check status: Up: %ui, Down: %ui, Total: %ui</title>\n"
             "</head>\n"
             "<body>\n"
             "<h1>Nginx http upstream check status</h1>\n"
@@ -1707,6 +1719,7 @@ ngx_http_upstream_check_status_handler(n
             "    <th>Fall counts</th>\n"
             "    <th>Check type</th>\n"
             "  </tr>\n",
+            up_count, down_count, peers->peers.nelts,
             peers->peers.nelts, ngx_http_check_shm_generation);

     for (i = 0; i < peers->peers.nelts; i++) {
@@ -1732,8 +1745,14 @@ ngx_http_upstream_check_status_handler(n

     b->last = ngx_snprintf(b->last, b->end - b->last,
             "</table>\n"
+            "<br></br>\n"
+            "<br></br>\n"
+            "<h2%s>Status: Up: %ui, Down: %ui, Total: %ui</h2>\n"
             "</body>\n"
-            "</html>\n");
+            "</html>\n"
+            ,down_count > 0 ? " style=\"color: #FF0000\"" : ""
+            ,up_count, down_count, peers->peers.nelts
+            );

     r->headers_out.status = NGX_HTTP_OK;
     r->headers_out.content_length_n = b->last - b->pos;

check_status creates many sessions in a short time

I found that every request that checks the backend status creates a new session (with a different session ID). Since I use a memcached server to store sessions in an nginx+tomcat cluster, the number of sessions in memcached grows rapidly in a short time. Is this a good solution? I would have thought the check requests could keep the same session ID, since I added nginx_sticky_module to use sticky sessions (and it works fine).

Are multiple healthchecks supported?

Hi

I was just wondering whether specifying multiple health checks is supported? It's not clear from the examples, but it would be great if it is.

Cheers
Neil

Some backend servers are "ignored" after config reload (SIGHUP)

Everything works fine (checks are done, backends are marked "up") until I send a SIGHUP signal to the nginx master process in order to reload the config file. Then some, if not all, of the backends seem to be "ignored". They show up in the status page as "down", but neither the rise nor the fall counters are incrementing; they are stuck at 0. After several SIGHUPs it is sometimes possible to have every backend correctly checked and marked as "up".

I used this module against nginx 0.8.53.

Config:
upstream pool_static {
    server static1:80 fail_timeout=30s;
    server static2:80 fail_timeout=30s;
    server static3:80 fail_timeout=30s;

    check interval=5000 rise=1 fall=3 timeout=1000 type=http;
    check_http_send "GET /healthcheck HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx;
}

Improvement request: Check status page to show hostnames instead of IP addresses

In the config, we specify upstreams as hostnames, but on the check status page, what's shown is the corresponding IP address. This is true even if a PTR record exists in DNS.

Running in a public cloud, IP addresses are meaningless/random, so it would be very helpful if that page showed what was in the config file instead of what gets resolved.

nginx 1.7.11

Hi,

I've been using version 0.1.9 of your module with nginx 1.7.6, which worked flawlessly. Recently I upgraded to nginx 1.7.11 and my upstreams don't get pinged for status anymore, and calling check_status directive results in the following:

http upstream check module can not find any check server, make sure you've added the check servers

I see this issue with both 0.1.9 and 0.3.0. From the outside it looks like the module is not initialized correctly, e.g. it does not receive a list of upstream servers.

I just wanted to check with you, if this is something you are already aware of?

How to get healthy status when using ngx_http_consistent_hash

Hi Weibin, thanks for this module, great job!
I use two modules developed by you:
ngx_http_consistent_hash-master and nginx_upstream_check_module-master.
My nginx configuration looks like this:
check interval=3000 rise=2 fall=5 timeout=1000 type=http;
consistent_hash $dist
In this case I cannot get any health status through "check_status".

If I comment out consistent_hash, "check_status" works.
So I think that when the two modules are used together, "check_status" cannot work.

But I need to use consistent_hash. How can I get the health status when using ngx_http_consistent_hash?
Thanks & Regards

nginx-1.3.9 - nginx_upstream_check_module patch fails

root@dellserver:/build/nginx-1.3.9$ patch -p1 < ./modules/nginx_upstream_check_module/check_1.2.2+.patch
patching file src/http/modules/ngx_http_upstream_ip_hash_module.c
patching file src/http/modules/ngx_http_upstream_least_conn_module.c
patching file src/http/ngx_http_upstream_round_robin.c
Hunk #7 FAILED at 462.
Hunk #8 succeeded at 564 (offset 4 lines).
1 out of 8 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream_round_robin.c.rej
patching file src/http/ngx_http_upstream_round_robin.h

Could the check_status list also show servers whose status is down?

Hello, this is not a bug report, just a feature request that would make operating our services easier.
Could the existing server list also show servers whose status is DOWN, perhaps marked with a gray or other background color, so that we can tell which servers are healthy, unreachable, or temporarily down?
Thanks!

Crash at ngx_http_upstream_check_handler.c Line 1364

My OS is Ubuntu 12.04.4 LTS, building nginx-1.4.3.
I got a crash with the "check interval=3000;" directive in my site config.

I used gdb on the core dump to trace the code.
I found line 1364: number = peers->peers.nelts;
The variable "peers" is a null pointer, and the process got a segmentation fault.

How can I resolve this problem?
Thanks.

check_http_send not working in Azure

Hi All,
I'm trying to get nginx_upstream_check_module working for my web servers in Azure,
but it seems that NGINX is not even accessing the servers, as I can't see any requests from it in the IIS logs.
Other than that, HTTP traffic is reaching the web servers; they are just not marked down when they need to be.

NGINX Version – 1.5.10

upstream cluster {

        # simple round-robin
        server waz-web2.mydomain.local;

        check interval=300 rise=2 fall=5 timeout=1000 type=http;
        check_http_send "GET /servermonitor.aspx HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
}

What can be the problem?
Tnx, Yaniv.

`check_1.2.6+.patch` does not apply to the latest nginx mainline (1.5.8)

I suppose the latest patch, check_1.2.6+.patch, should work with the latest nginx version 1.5.8. Here is the stdout of the patch command, which did not finish successfully:

patching file src/http/modules/ngx_http_upstream_ip_hash_module.c
patching file src/http/modules/ngx_http_upstream_least_conn_module.c
patching file src/http/ngx_http_upstream_round_robin.c
Hunk #1 succeeded at 9 with fuzz 2.
Hunk #2 FAILED at 90.
Hunk #3 succeeded at 142 (offset -6 lines).
Hunk #4 succeeded at 210 (offset -10 lines).
Hunk #5 succeeded at 319 (offset -21 lines).
Hunk #6 succeeded at 362 (offset -14 lines).
Hunk #7 succeeded at 422 (offset -35 lines).
Hunk #8 succeeded at 527 (offset -33 lines).
1 out of 8 hunks FAILED -- saving rejects to file src/http/ngx_http_upstream_round_robin.c.rej
patching file src/http/ngx_http_upstream_round_robin.h

Let me know if you are going to support this version of nginx.

ngx_http_upstream_jvm_route_module.patch not applying nor compiling cleanly

Good work on the upstream check!

Sadly, with a newer jvm_route version this patch no longer applies cleanly, nor am I able to build nginx properly after applying the patch by hand for the two rejects:

src/modules/nginx-upstream-jvm-route/ngx_http_upstream_jvm_route_module.c: In function ‘ngx_http_upstream_init_jvm_route_rr’:
src/modules/nginx-upstream-jvm-route/ngx_http_upstream_jvm_route_module.c:371:21: error: implicit declaration of function ‘ngx_http_check_add_peer’ [-Werror=implicit-function-declaration]
src/modules/nginx-upstream-jvm-route/ngx_http_upstream_jvm_route_module.c: In function ‘ngx_http_upstream_init_jvm_route’:
src/modules/nginx-upstream-jvm-route/ngx_http_upstream_jvm_route_module.c:589:43: error: variable ‘ujrscf’ set but not used [-Werror=unused-but-set-variable]
src/modules/nginx-upstream-jvm-route/ngx_http_upstream_jvm_route_module.c: In function ‘ngx_http_upstream_jvm_route_try_peer’:
src/modules/nginx-upstream-jvm-route/ngx_http_upstream_jvm_route_module.c:785:3: error: implicit declaration of function ‘ngx_http_check_peer_down’ [-Werror=implicit-function-declaration]

Would it be possible to update this patch to work with the latest jvm_route version and nginx 1.3.11? Sadly my skills in C++ and patching sources are very limited and barely enough to fix the patch to apply properly but I would not be able to fix the compile warnings 😦

Hostnames in upstream status

Hi Yao,

Would it be possible to get this #9 into a branch for 1.7/1.8? We have a need to see hostnames in the status page.

Thanks!

crash at ngx_http_upstream_check_handler.c:1656

Got crash:when reloading:

#0  ngx_http_check_add_timers (cycle=0x13832310) at ngx_http_upstream_check/ngx_http_upstream_check_handler.c:1656
1656            cf = ucscf->check_type_conf;
(gdb) where
#0  ngx_http_check_add_timers (cycle=0x13832310) at ngx_http_upstream_check/ngx_http_upstream_check_handler.c:1656
#1  0x000000000042e29e in ngx_worker_process_init (cycle=0x13832310, priority=<value optimized out>)
    at src/os/unix/ngx_process_cycle.c:962
#2  0x000000000042ea0c in ngx_worker_process_cycle (cycle=0x341bf524c0, data=<value optimized out>)
    at src/os/unix/ngx_process_cycle.c:724
#3  0x000000000042d210 in ngx_spawn_process (cycle=0x13832310, proc=0x42e9f0 <ngx_worker_process_cycle>, data=0x0, 
    name=0x4bba79 "worker process", respawn=-4) at src/os/unix/ngx_process.c:196
#4  0x000000000042e09c in ngx_start_worker_processes (cycle=0x13832310, n=2, type=-4)
    at src/os/unix/ngx_process_cycle.c:360
#5  0x000000000042f429 in ngx_master_process_cycle (cycle=0x13832310) at src/os/unix/ngx_process_cycle.c:249
#6  0x00000000004151ff in main (argc=5, argv=<value optimized out>) at src/core/nginx.c:405
(gdb) print peers
$1 = (ngx_http_check_peers_t *) 0x138b2648
(gdb) print *peers
$2 = {check_shm_name = {len = 327886240, data = 0x1391d7a0 "\020|\213\023"}, peers = {elts = 0x137f3970, 
    nelts = 329496184, size = 328325728, nalloc = 327104912, pool = 0x138b2690}, peers_shm = 0x1391dbf0}
(gdb) print (ngx_http_check_peer_t*)peers->peers->elts
$4 = (ngx_http_check_peer_t *) 0x137f3970
(gdb) print *(ngx_http_check_peer_t*)peers->peers->elts
$5 = {state = 327104896, pool = 0xff00000000, index = 327104848, max_busy = 1095216660480, peer_addr = 0x137f39a0, 
  check_ev = {data = 0x137f3970, write = 0, accept = 0, instance = 0, active = 0, disabled = 1, ready = 1, 
    oneshot = 1, complete = 0, eof = 1, error = 0, timedout = 0, timer_set = 0, delayed = 1, read_discarded = 1, 
    unexpected_eof = 0, deferred_accept = 0, pending_eof = 1, posted_ready = 1, available = 1, 
    handler = 0x48b740 <ngx_http_check_begin_handler>, index = 327104960, log = 0x13832328, timer = {key = 327104912, 
      left = 0xff00000000, right = 0x137f39e0, parent = 0xff00000000, color = 176 '°', data = 57 '9'}, closed = 0, 
    channel = 0, resolver = 0, next = 0x137f3a00, prev = 0xff00000000}, check_timeout_ev = {data = 0x137f3970, 
    write = 0, accept = 0, instance = 0, active = 0, disabled = 0, ready = 0, oneshot = 0, complete = 0, eof = 0, 
    error = 0, timedout = 0, timer_set = 0, delayed = 0, read_discarded = 0, unexpected_eof = 0, deferred_accept = 0, 
    pending_eof = 0, posted_ready = 0, available = 0, handler = 0x48ae50 <ngx_http_check_timeout_handler>, 
    index = 1095216660480, log = 0x137f39f0, timer = {key = 1095216660480, left = 0x137f3a40, right = 0xff00000000, 
      parent = 0x137f3a10, color = 0 '\000', data = 0 '\000'}, closed = 0, channel = 0, resolver = 0, 
    next = 0xff00000000, prev = 0x137f3a30}, pc = {connection = 0xff00000000, sockaddr = 0x137f3a80, socklen = 0, 
    name = 0x137f3a50, tries = 1095216660480, check_index = 327105136, get = 0xff00000000, free = 0x20bd00005, 
    data = 0x2441, set_session = 0x13923260, save_session = 0x13a10870, local = 0x0, rcvbuf = 0, log = 0x2450bcd0024, 
    cached = 0, log_error = 0}, check_data = 0x137f2990, send_handler = 0x71, recv_handler = 0x137f2880, 
  init = 0x13960ad0, parse = 0x341e88b780 <Perl_pp_nextstate>, reinit = 0, shm = 0x1391dc08, conf = 0x0}

Clearly a dangling pointer.

Ran program with valgrind:

==18586== Invalid read of size 4                                
==18586==    at 0x80CBD81: ngx_http_check_add_timers (ngx_http_upstream_check_handler.c:1628)
==18586==    by 0x80CA082: ngx_http_check_init_process (ngx_http_upstream_check_module.c:602)
==18586==    by 0x8068F86: ngx_worker_process_init (ngx_process_cycle.c:962)
==18586==    by 0x80694D4: ngx_worker_process_cycle (ngx_process_cycle.c:724)
==18586==    by 0x8067C60: ngx_spawn_process (ngx_process.c:196)
==18586==    by 0x8068A6E: ngx_start_worker_processes (ngx_process_cycle.c:360)
==18586==    by 0x806A52A: ngx_master_process_cycle (ngx_process_cycle.c:249)
==18586==    by 0x804D792: main (nginx.c:405)
==18586==  Address 0x447863c is 5,244 bytes inside a block of size 16,384 free'd
==18586==    at 0x400551D: free (vg_replace_malloc.c:325)
==18586==    by 0x804E245: ngx_destroy_pool (ngx_palloc.c:86)
==18586==    by 0x805A49B: ngx_init_cycle (ngx_cycle.c:734)
==18586==    by 0x806A4EC: ngx_master_process_cycle (ngx_process_cycle.c:240)
==18586==    by 0x804D792: main (nginx.c:405)
==18586== 
{
   <insert_a_suppression_name_here>
   Memcheck:Addr4
   fun:ngx_http_check_add_timers
   fun:ngx_http_check_init_process
   fun:ngx_worker_process_init
   fun:ngx_worker_process_cycle
   fun:ngx_spawn_process
   fun:ngx_start_worker_processes
   fun:ngx_master_process_cycle
   fun:main
}

The crash happens when reloading. You also need to disable the check in the new config first.

I don't know how to make a proper test case, but I used the following test as a base:

use Test::Nginx::LWP;
plan tests => repeat_each() * 2 * blocks();
no_root_location;
Test::Nginx::Util::master_on;
run_tests();
__DATA__
=== TEST 1: sleep
--- http_config
    upstream test{
        server localhost:80;
        check interval=1000 rise=1 fall=2 timeout=200 type=http;
        check_http_send "HEAD / HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }
--- config
    location / {
        echo_sleep 5;
        echo '<ok>';
    }
--- request
GET /
--- response_body_like: ^<(.*)>$

I then disabled the check by commenting out its lines (vi t/servroot/conf/nginx.conf). I run it under etcproxy and valgrind so I can reload consistently.

Test script used:

#!/bin/sh
killall nginx
export TEST_NGINX_CLIENT_PORT=1234
export TEST_NGINX_USE_VALGRIND=1
export PATH="$HOME"/nginx/sbin:"$PATH"
prove -r ${1:-t}

This patch seems to fix this crash.

--- nginx-1.0.0-orig/ngx_http_upstream_check/ngx_http_upstream_check_handler.c  2011-01-06 05:35:23.000000000 +0200
+++ nginx-1.0.0/ngx_http_upstream_check/ngx_http_upstream_check_handler.c       2011-04-29 15:59:32.000000000 +0300
@@ -1602,6 +1602,8 @@
         check_peers_ctx = ucmcf->peers;

         shm_zone->init = ngx_http_upstream_check_init_shm_zone;
+    } else {
+        check_peers_ctx = NULL;
     }

     return NGX_CONF_OK;

Ubuntu 14.04 check_status directive

Ubuntu 14.04
nginx-1.4.7
I added your check module and used the check_status directive.
The status web page shows each instance's health twice (two rows).

A bug: if there is a down server at the beginning

eg:
upstream {
server a;
server b; #this is a down server
check interval=1000 rise=3 fall=3 timeout=500 type=tcp default_down=true;
}

If the interval is large enough, a request can arrive before the first check has run, and that first request will receive an HTTP 502 response.

Sometimes I don't want a short interval.

does this module support https?

My config file looks like this:

upstream nlbserver {
        server 10.180.45.227:8086;
        server 10.180.137.221:8086;

        check interval=10000 rise=1 fall=3 timeout=3000 type=http;
        check_http_send "GET /health-check HTTP/1.1\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }

My backend server is netty + restlet serving a REST service, but when the health check reaches my backend server, it raises an exception:

not an SSL/TLS record: 474554202f6865616c74682d636865636b20485454502f312e310d0a0d0a

My question is: does this module support https?

check_http_send directive is broken

This probably affects all the other check_*_send directives
in ngx_http_upstream_check_http_send()
uscf->send = *value

*value is equivalent to value[0] which is the actual string "check_http_send", and I can see the request "check_http_send" being sent to my backend servers. The correct value is probably value[1].

An easier way to do this is to use ngx_conf_set_str_slot; here is an example for check_http_send that seems to work:

{ ngx_string("check_http_send"),
    NGX_HTTP_UPS_CONF|NGX_CONF_TAKE1,
    ngx_conf_set_str_slot,
    NGX_HTTP_SRV_CONF_OFFSET,
    offsetof(ngx_http_upstream_check_srv_conf_t, send),
    NULL },

Segfault during configuration when calling ngx_http_conf_upstream_srv_conf()

The example of configuration below should trigger the issue.

It seems that any peer that wasn't declared in an upstream block (http://localhost:8899 in the example) has:
((ngx_http_upstream_srv_conf_t *) us)->srv_conf == NULL
Thus calling ngx_http_conf_upstream_srv_conf(us, ngx_http_upstream_check_module) attempts to dereference a null pointer.

I'm using nginx 0.8.50 with nginx_upstream_hash (patched), but this issue seems reproducible on 0.8.49 without the patch as well.

Configuration Example :

master_process off;
daemon off;

error_log /tmp/nginx_error.log debug_http;

debug_points stop;

events {
}

http {
upstream backend {
server localhost:8888;
check interval=3000 rise=2 fall=2 timeout=1000 default_down=true type=http;
}

server {
    listen       32080;
    server_name  localhost;

    location / {
       proxy_pass http://localhost:8899;
    }
}

}

nginx_upstream_hash and nginx_upstream_check_module cannot be used together

I installed the nginx_upstream_hash and nginx_upstream_check_module modules in nginx-1.2.5 as follows:

patch -p1 < ../nginx_upstream_check_module-master/check_1.2.2+.patch
patch -p0 < ../nginx_upstream_hash-master/nginx.patch

./configure --user=nginx --group=nginx --prefix=/nginx --add-module=../ngx_cache_purge-1.6 --with-cc-opt='-O3' --with-http_stub_status_module --with-http_ssl_module --with-cpu-opt=opteron --with-debug --with-pcre=../pcre-8.32 --add-module=../nginx_upstream_check_module-master --add-module=../nginx_upstream_hash-master

But when both are used in the same upstream block, requesting http://10.xx.xx.xx/status returns a 500 error:

http://10.xx.xx.xx/status

500 Internal Server Error

nginx/1.2.5

The upstream block:

upstream ndyxsrv {
hash $cookie_JSESSIONID;
hash_again 2;
server 10.223.30.178:7013;
server 10.223.30.178:7014;
server 10.223.30.184:7011;
server 10.223.30.184:7012;
check interval=6000 rise=4 fall=10 timeout=3000 default_down=false type=http;
check_http_send "GET /sgpms/appjsps/jsp/test/test_web.jsp HTTP/1.0\r\n\r\n";
check_http_expect_alive http_2xx http_3xx;
}

If I remove

    hash $cookie_JSESSIONID;
    hash_again 2;

the nginx http upstream check status page displays normally.

Is this caused by an incompatibility between the two modules?

Also, my backend is a WebLogic cluster without session replication, with nginx as a reverse proxy in front. Neither nginx-sticky-module nor nginx-upstream-jvm-route can make sessions stick; only the nginx_upstream_hash module's hash $cookie_JSESSIONID works, and I don't know why.

healthcheck body content

Hi Weibin,

Are there plans to add health checking of the HTTP response body? Currently the
health check can only inspect the HTTP response code. From a quick look at the
code, the response code is checked mainly in
ngx_http_upstream_check_parse_status_line; how could this be extended to check
the body content? If it's not too hard, I'd like to try submitting a patch, but
I would need some guidance.

Vincent

Compilation issue with --with-http_perl_module

When compiling with --with-http_perl_module, ngx_http_upstream_check_handler.h included in ngx_http_upstream_round_robin.h cannot be found, because the $ngx_addon_dir is not included by src/http/modules/perl/Makefile.PL

Adding this to ngx_http_upstream_round_robin.c instead of ngx_http_upstream_round_robin.h seems to fix the problem:

#if (NGX_UPSTREAM_CHECK_MODULE)
#include "ngx_http_upstream_check_handler.h"
#endif

And adding #include "ngx_http_upstream_check_handler.h" in ngx_http_upstream_check_handler.c

Http check specific URL - documentation

I have two upstream servers (both IIS) sitting behind nginx with your module. What I am trying to do is have the module check health using a specific URL: I don't want it to check my default.htm page but a dedicated health-check page instead.

How do I configure the health checks to request http://mybackendiisserver/server-health.asp?

Also, is any 5xx error considered an upstream failure?
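
A hedged sketch, based on the check_http_send and check_http_expect_alive directives documented above (the host and path come from this question; the timing values are only illustrative and the expected status codes mirror the README defaults):

    check interval=3000 rise=2 fall=5 timeout=1000 type=http;
    check_http_send "GET /server-health.asp HTTP/1.0\r\nHost: mybackendiisserver\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;

With http_5xx left out of check_http_expect_alive, a 5xx response counts as a failed check.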

Enhancement request: Deep health check via AJP

Hello,

I have experimented with both your AJP module and this one. I like using AJP to connect to a Tomcat upstream (instead of HTTP), but I need a deeper health check. Quite often Tomcat will crash in such a way that the CPING/CPONG will still return properly, but my application is still inoperable.

Is it possible to add a check via AJP that is similar to the check_http_send directive, where the check can issue a GET to some path?

upstream_check_module with NGINX 1.7.6 segfaults when reloading configuration

Hi,

My OS is RHEL 6.4, and I compiled NGINX 1.7.6 with the master branch of upstream_check_module (November 18th)

Upstream_check_module is the only added module, and check_1.7.2+.patch was successfully applied.

Symptom: the NGINX master process crashes with a segmentation fault on the third "nginx -s reload" command (the 1st and 2nd reloads are OK).

Infos extracted from the resulting core file :

Core was generated by `nginx: master process /'.
Program terminated with signal 11, Segmentation fault.
#0  0x00007f9f30bed158 in __strncmp_sse42 () from /lib64/libc.so.6
(gdb) info threads
* 1 Thread 0x7f9f31d057c0 (LWP 14655)  0x00007f9f30bed158 in __strncmp_sse42 () from /lib64/libc.so.6
(gdb) bt
#0  0x00007f9f30bed158 in __strncmp_sse42 () from /lib64/libc.so.6
#1  0x0000000000483672 in ngx_http_upstream_check_find_shm_peer (shm_zone=0x1c36628, data=<value optimized out>)
    at /home/a090966/build/SOURCES/nginx/nginx_upstream_check_module-master/ngx_http_upstream_check_module.c:3995
#2  ngx_http_upstream_check_init_shm_zone (shm_zone=0x1c36628, data=<value optimized out>) at /home/a090966/build/SOURCES/nginx/nginx_upstream_check_module-master/ngx_http_upstream_check_module.c:3897
#3  0x0000000000414aee in ngx_init_cycle (old_cycle=0x1c3a3a0) at src/core/ngx_cycle.c:470
#4  0x00000000004242b2 in ngx_master_process_cycle (cycle=0x1c3a3a0) at src/os/unix/ngx_process_cycle.c:244
#5  0x00000000004082e4 in main (argc=<value optimized out>, argv=<value optimized out>) at src/core/nginx.c:407

Hope you can help !
