Nginx virtual host traffic status module

License: BSD 2-Clause "Simplified" License



Version

GitHub Release

See the GitHub Releases for the latest tagged release.

Test

Run sudo prove -r t after you have installed this module. sudo is required because the tests need nginx to listen on port 80.

Dependencies

Compatibility

  • Nginx
    • 1.22.x (last tested: 1.22.0)
    • 1.19.x (last tested: 1.19.6)
    • 1.18.x (last tested: 1.18.0)
    • 1.16.x (last tested: 1.16.1)
    • 1.15.x (last tested: 1.15.0)
    • 1.14.x (last tested: 1.14.0)
    • 1.13.x (last tested: 1.13.12)
    • 1.12.x (last tested: 1.12.2)
    • 1.11.x (last tested: 1.11.10)
    • 1.10.x (last tested: 1.10.3)
    • 1.8.x (last tested: 1.8.0)
    • 1.6.x (last tested: 1.6.3)
    • 1.4.x (last tested: 1.4.7)

Earlier versions are not tested.

Screenshots

[images: screenshot-vts-0, screenshot-vts-1]

Installation

  1. Clone the git repository.
shell> git clone git://github.com/vozlt/nginx-module-vts.git
  2. Add the module to the build configuration by adding --add-module=/path/to/nginx-module-vts

  3. Build the nginx binary.

  4. Install the nginx binary.

Synopsis

http {
    vhost_traffic_status_zone;

    ...

    server {

        ...

        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

Description

This is an Nginx module that provides access to virtual host status information. It reports the current status of servers, upstreams, and caches, similar to the live activity monitoring of NGINX Plus. The built-in HTML page is based on the demo page of an older NGINX Plus version.

First of all, the vhost_traffic_status_zone directive is required. Once the vhost_traffic_status_display directive is also set, the status can be accessed as follows:

  • /status/format/json
    • Responds with a JSON document containing the current activity data, for use in live dashboards and third-party monitoring tools.
  • /status/format/html
    • Responds with the built-in live dashboard in HTML, which internally requests /status/format/json.
  • /status/format/jsonp
    • Responds with a JSONP callback function containing the current activity data, for use in live dashboards and third-party monitoring tools.
  • /status/format/prometheus
    • Responds with a Prometheus document containing the current activity data.
  • /status/control
    • Responds with a JSON document after resetting or deleting zones through a query string. See the Control.

The JSON document contains the following:

{
    "hostName": ...,
    "moduleVersion": ...,
    "nginxVersion": ...,
    "loadMsec": ...,
    "nowMsec": ...,
    "connections": {
        "active":...,
        "reading":...,
        "writing":...,
        "waiting":...,
        "accepted":...,
        "handled":...,
        "requests":...
    },
    "sharedZones": {
        "name":...,
        "maxSize":...,
        "usedSize":...,
        "usedNode":...
    },
    "serverZones": {
        "...":{
            "requestCounter":...,
            "inBytes":...,
            "outBytes":...,
            "responses":{
                "1xx":...,
                "2xx":...,
                "3xx":...,
                "4xx":...,
                "5xx":...,
                "miss":...,
                "bypass":...,
                "expired":...,
                "stale":...,
                "updating":...,
                "revalidated":...,
                "hit":...,
                "scarce":...
            },
            "requestMsecCounter":...,
            "requestMsec":...,
            "requestMsecs":{
                "times":[...],
                "msecs":[...]
            },
            "requestBuckets":{
                "msecs":[...],
                "counters":[...]
            },
        }
        ...
    },
    "filterZones": {
        "...":{
            "...":{
                "requestCounter":...,
                "inBytes":...,
                "outBytes":...,
                "responses":{
                    "1xx":...,
                    "2xx":...,
                    "3xx":...,
                    "4xx":...,
                    "5xx":...,
                    "miss":...,
                    "bypass":...,
                    "expired":...,
                    "stale":...,
                    "updating":...,
                    "revalidated":...,
                    "hit":...,
                    "scarce":...
                },
                "requestMsecCounter":...,
                "requestMsec":...,
                "requestMsecs":{
                    "times":[...],
                    "msecs":[...]
                },
                "requestBuckets":{
                    "msecs":[...],
                    "counters":[...]
                },
            },
            ...
        },
        ...
    },
    "upstreamZones": {
        "...":[
            {
                "server":...,
                "requestCounter":...,
                "inBytes":...,
                "outBytes":...,
                "responses":{
                    "1xx":...,
                    "2xx":...,
                    "3xx":...,
                    "4xx":...,
                    "5xx":...
                },
                "requestMsecCounter":...,
                "requestMsec":...,
                "requestMsecs":{
                    "times":[...],
                    "msecs":[...]
                },
                "requestBuckets":{
                    "msecs":[...],
                    "counters":[...]
                },
                "responseMsecCounter":...,
                "responseMsec":...,
                "responseMsecs":{
                    "times":[...],
                    "msecs":[...]
                },
                "responseBuckets":{
                    "msecs":[...],
                    "counters":[...]
                },
                "weight":...,
                "maxFails":...,
                "failTimeout":...,
                "backup":...,
                "down":...
            }
            ...
        ],
        ...
    },
    "cacheZones": {
        "...":{
            "maxSize":...,
            "usedSize":...,
            "inBytes":...,
            "outBytes":...,
            "responses":{
                "miss":...,
                "bypass":...,
                "expired":...,
                "stale":...,
                "updating":...,
                "revalidated":...,
                "hit":...,
                "scarce":...
            }
        },
        ...
    }
}
  • main
    • Basic information: versions and uptime ((nowMsec - loadMsec)/1000)
    • nowMsec and loadMsec are in milliseconds.
  • connections
    • Total connections and requests (same as the stub_status module in NGINX)
  • sharedZones
    • Information about the shared memory used by nginx-module-vts.
  • serverZones
    • Traffic (in/out), request and response counts, and cache hit ratio for each server zone
    • Totals (the zone named *) of traffic, request and response counts, and hit ratio across all server zones
  • filterZones
    • Traffic (in/out), request and response counts, and cache hit ratio for each server zone filtered through the vhost_traffic_status_filter_by_set_key directive
    • Totals (the zone named *) of traffic, request and response counts, and hit ratio filtered through the vhost_traffic_status_filter_by_set_key directive
  • upstreamZones
    • Traffic (in/out) and request and response counts per server in each upstream group
    • Current settings (weight, max_fails, fail_timeout, ...) in nginx.conf
  • cacheZones
    • Traffic (in/out), size (capacity/used), and hit ratio for each cache zone when using the proxy_cache directive.

The overCounts objects in the JSON document exist mostly for 32-bit systems and are incremented by 1 whenever the corresponding value overflows. The directive vhost_traffic_status_display_format sets the default output format, which is one of json, jsonp, html, prometheus. (Default: json)
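The uptime formula above can be applied directly to a fetched status document. The following Python sketch uses an abbreviated, hypothetical response body for illustration:

```python
import json

# Hypothetical, abbreviated /status/format/json response.
sample = json.loads("""
{
    "nginxVersion": "1.22.0",
    "loadMsec": 1700000000000,
    "nowMsec": 1700000360000,
    "connections": {"active": 3, "requests": 120}
}
""")

# Uptime formula from the docs: (nowMsec - loadMsec) / 1000 seconds.
uptime_sec = (sample["nowMsec"] - sample["loadMsec"]) / 1000
print(uptime_sec)  # 360.0
```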

Traffic is calculated as follows:

  • ServerZones
    • in += requested_bytes
    • out += sent_bytes
  • FilterZones
    • in += requested_bytes via the filter
    • out += sent_bytes via the filter
  • UpstreamZones
    • in += requested_bytes via the ServerZones
    • out += sent_bytes via the ServerZones
  • cacheZones
    • in += requested_bytes via the ServerZones
    • out += sent_bytes via the ServerZones

All calculations are performed in the log processing phase of nginx. Internal redirects (X-Accel-Redirect or error_page) are not counted in the upstreamZones.

Caveats: this module relies on the nginx logging system (NGX_HTTP_LOG_PHASE, the last phase of nginx HTTP processing), so in certain circumstances the measured traffic may differ from the real bandwidth. Websockets and canceled downloads can cause inaccuracies. The module works regardless of whether the access_log directive is "on" or "off"; in other words, it works fine with "access_log off". When a server block has several domains, statistics are recorded under the first (leftmost) domain of the server_name directive. If you do not want that, see the vhost_traffic_status_filter_by_host and vhost_traffic_status_filter_by_set_key directives.

See the nginx-module-sts and nginx-module-stream-sts modules for stream traffic statistics.

Calculations and Intervals

Averages

All averages are currently calculated as an arithmetic mean (AMM) over the last 64 values.
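As a sketch of this behaviour (a simplified Python model, not the module's C implementation): keep a ring of the most recent observations and report their arithmetic mean.

```python
from collections import deque

class MovingAverage:
    """Arithmetic mean over the last `size` observed values (sketch of the
    AMM behaviour described above; the real implementation is in C)."""
    def __init__(self, size=64):
        self.values = deque(maxlen=size)  # old values drop off automatically

    def observe(self, msec):
        self.values.append(msec)

    def average(self):
        return sum(self.values) / len(self.values) if self.values else 0

avg = MovingAverage()
for t in range(1, 129):   # observe 128 request times: 1..128 ms
    avg.observe(t)
print(avg.average())      # only the last 64 values (65..128) count -> 96.5
```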

Control

Traffic zones can be reset or deleted through a query string. The request responds with a JSON document.

  • URI Syntax
    • /{status_uri}/control?cmd={command}&group={group}&zone={name}
http {

    geoip_country /usr/share/GeoIP/GeoIP.dat;

    vhost_traffic_status_zone;
    vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;

    ...

    server {

        server_name example.org;

        ...

        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

If set as above, the control URI is example.org/status/control.

The available request arguments are as follows:

  • cmd=<status|reset|delete>
    • status
      • Returns the status of traffic zones in JSON format, like status/format/json.
    • reset
      • Resets traffic zones to 0 without deleting the nodes in shared memory.
    • delete
      • Deletes traffic zones from shared memory. They are recreated on the next matching request.
  • group=<server|filter|upstream@alone|upstream@group|cache|*>
    • server
    • filter
    • upstream@alone
    • upstream@group
    • cache
    • *
  • zone=name
    • server
      • name
    • filter
      • filter_group@name
    • upstream@group
      • upstream_group@name
    • upstream@alone
      • @name
    • cache
      • name
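For scripting, the control URI can be assembled from these arguments. The helper below is purely illustrative Python (not part of the module) and produces URIs in the same raw form the examples below use:

```python
def control_url(status_uri, cmd, group, zone=None):
    """Build a /{status_uri}/control?cmd=...&group=...&zone=... URI."""
    url = f"{status_uri}/control?cmd={cmd}&group={group}"
    if zone is not None:
        url += f"&zone={zone}"
    return url

# Reset a single upstream peer zone (zone syntax: upstream_group@name).
print(control_url("/status", "reset", "upstream@group", "backend@10.10.10.11:80"))
# -> /status/control?cmd=reset&group=upstream@group&zone=backend@10.10.10.11:80
```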

To get status of traffic zones on the fly

This is similar to status/format/json except that individual zones can be retrieved.

To get all zones

  • It is exactly the same as the status/format/json.
    • /status/control?cmd=status&group=*

To get group zones

  • mainZones
    • /status/control?cmd=status&group=server&zone=::main
  • serverZones
    • /status/control?cmd=status&group=server&zone=*
  • filterZones
    • /status/control?cmd=status&group=filter&zone=*
  • upstreamZones
    • /status/control?cmd=status&group=upstream@group&zone=*
  • upstreamZones::nogroups
    • /status/control?cmd=status&group=upstream@alone&zone=*
  • cacheZones
    • /status/control?cmd=status&group=cache&zone=*

The mainZones values are default status values including hostName, moduleVersion, nginxVersion, loadMsec, nowMsec, connections.

To get individual zones

  • single zone in serverZones
    • /status/control?cmd=status&group=server&zone=name
  • single zone in filterZones
    • /status/control?cmd=status&group=filter&zone=filter_group@name
  • single zone in upstreamZones
    • /status/control?cmd=status&group=upstream@group&zone=upstream_group@name
  • single zone in upstreamZones::nogroups
    • /status/control?cmd=status&group=upstream@alone&zone=name
  • single zone in cacheZones
    • /status/control?cmd=status&group=cache&zone=name

To reset traffic zones on the fly

It resets the values of the specified zones to 0.

To reset all zones

  • /status/control?cmd=reset&group=*

To reset group zones

  • serverZones
    • /status/control?cmd=reset&group=server&zone=*
  • filterZones
    • /status/control?cmd=reset&group=filter&zone=*
  • upstreamZones
    • /status/control?cmd=reset&group=upstream@group&zone=*
  • upstreamZones::nogroups
    • /status/control?cmd=reset&group=upstream@alone&zone=*
  • cacheZones
    • /status/control?cmd=reset&group=cache&zone=*

To reset individual zones

  • single zone in serverZones
    • /status/control?cmd=reset&group=server&zone=name
  • single zone in filterZones
    • /status/control?cmd=reset&group=filter&zone=filter_group@name
  • single zone in upstreamZones
    • /status/control?cmd=reset&group=upstream@group&zone=upstream_group@name
  • single zone in upstreamZones::nogroups
    • /status/control?cmd=reset&group=upstream@alone&zone=name
  • single zone in cacheZones
    • /status/control?cmd=reset&group=cache&zone=name

To delete traffic zones on the fly

It deletes the specified zones from shared memory.

To delete all zones

  • /status/control?cmd=delete&group=*

To delete group zones

  • serverZones
    • /status/control?cmd=delete&group=server&zone=*
  • filterZones
    • /status/control?cmd=delete&group=filter&zone=*
  • upstreamZones
    • /status/control?cmd=delete&group=upstream@group&zone=*
  • upstreamZones::nogroups
    • /status/control?cmd=delete&group=upstream@alone&zone=*
  • cacheZones
    • /status/control?cmd=delete&group=cache&zone=*

To delete individual zones

  • single zone in serverZones
    • /status/control?cmd=delete&group=server&zone=name
  • single zone in filterZones
    • /status/control?cmd=delete&group=filter&zone=filter_group@name
  • single zone in upstreamZones
    • /status/control?cmd=delete&group=upstream@group&zone=upstream_group@name
  • single zone in upstreamZones::nogroups
    • /status/control?cmd=delete&group=upstream@alone&zone=name
  • single zone in cacheZones
    • /status/control?cmd=delete&group=cache&zone=name

Set

Status values can be read individually in the nginx configuration using the vhost_traffic_status_set_by_filter directive. Almost all status values can be acquired; the obtained value is stored in the user-defined variable given as the first argument.

  • Directive Syntax
    • vhost_traffic_status_set_by_filter $variable group/zone/name
http {

    geoip_country /usr/share/GeoIP/GeoIP.dat;

    vhost_traffic_status_zone;
    vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;

    ...
    upstream backend {
        server 10.10.10.11:80;
        server 10.10.10.12:80;
    }

    server {

        server_name example.org;

        ...

        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

        vhost_traffic_status_set_by_filter $requestCounter server/example.org/requestCounter;
        vhost_traffic_status_set_by_filter $requestCounterKR filter/country::example.org@KR/requestCounter;

        location /backend {
            vhost_traffic_status_set_by_filter $requestCounterB1 upstream@group/backend@10.10.10.11:80/requestCounter;
            proxy_pass http://backend;
        }
    }
}

The above settings map as follows:

  • $requestCounter
    • serverZones -> example.org -> requestCounter
  • $requestCounterKR
    • filterZones -> country::example.org -> KR -> requestCounter
  • $requestCounterB1
    • upstreamZones -> backend -> 10.10.10.11:80 -> requestCounter

Please see the vhost_traffic_status_set_by_filter directive for detailed usage.

JSON

The following status information is provided in the JSON format:

Json used by status

/{status_uri}/format/json

/{status_uri}/control?cmd=status&...

  • hostName
    • Host name.
  • moduleVersion
    • Version of the module in {version}(|.dev.{commit}) format.
  • nginxVersion
    • Version of the provided nginx.
  • loadMsec
    • The time the nginx process was loaded, in milliseconds (epoch time).
  • nowMsec
    • The current time in milliseconds (epoch time).
  • connections
    • active
      • The current number of active client connections.
    • reading
      • The current number of connections reading client requests.
    • writing
      • The current number of connections writing responses to clients.
    • waiting
      • The current number of idle client connections waiting for a request.
    • accepted
      • The total number of accepted client connections.
    • handled
      • The total number of handled client connections.
    • requests
      • The total number of client requests.
  • sharedZones
    • name
      • The name of the shared memory specified in the configuration. (default: vhost_traffic_status)
    • maxSize
      • The limit on the maximum size of the shared memory specified in the configuration.
    • usedSize
      • The current size of the shared memory.
    • usedNode
      • The current number of nodes in use in the shared memory. An approximate size for one node can be obtained with the formula (usedSize / usedNode).
  • serverZones
    • requestCounter
      • The total number of requests received from clients.
    • inBytes
      • The total number of bytes received from clients.
    • outBytes
      • The total number of bytes sent to clients.
    • responses
      • 1xx, 2xx, 3xx, 4xx, 5xx
        • The number of responses with status codes 1xx, 2xx, 3xx, 4xx, and 5xx.
      • miss
        • The number of cache misses.
      • bypass
        • The number of cache bypasses.
      • expired
        • The number of expired cache responses.
      • stale
        • The number of stale cache responses.
      • updating
        • The number of cache responses served while updating.
      • revalidated
        • The number of revalidated cache responses.
      • hit
        • The number of cache hits.
      • scarce
        • The number of cache scarce results.
    • requestMsecCounter
      • The accumulated request processing time in milliseconds.
    • requestMsec
      • The average of request processing times in milliseconds.
    • requestMsecs
      • times
        • The timestamps in milliseconds at which the request processing times were measured.
      • msecs
        • The request processing times in milliseconds.
    • requestBuckets
      • msecs
        • The bucket values of the histogram set by the vhost_traffic_status_histogram_buckets directive.
      • counters
        • The cumulative number of requests whose processing time is less than or equal to each bucket value.
  • filterZones
    • It provides the same fields as serverZones, except that entries are grouped by the filter group name.
  • upstreamZones
    • server
      • The address of the server.
    • requestCounter
      • The total number of client connections forwarded to this server.
    • inBytes
      • The total number of bytes received from this server.
    • outBytes
      • The total number of bytes sent to this server.
    • responses
      • 1xx, 2xx, 3xx, 4xx, 5xx
        • The number of responses with status codes 1xx, 2xx, 3xx, 4xx, and 5xx.
    • requestMsecCounter
      • The accumulated request processing time (including upstream time) in milliseconds.
    • requestMsec
      • The average of request processing times (including upstream time) in milliseconds.
    • requestMsecs
      • times
        • The timestamps in milliseconds at which the request processing times were measured.
      • msecs
        • The request processing times (including upstream time) in milliseconds.
    • requestBuckets
      • msecs
        • The bucket values of the histogram set by the vhost_traffic_status_histogram_buckets directive.
      • counters
        • The cumulative number of requests whose processing time (including upstream time) is less than or equal to each bucket value.
    • responseMsecCounter
      • The accumulated upstream-only response processing time in milliseconds.
    • responseMsec
      • The average of upstream-only response processing times in milliseconds.
    • responseMsecs
      • times
        • The timestamps in milliseconds at which the response processing times were measured.
      • msecs
        • The upstream-only response processing times in milliseconds.
    • responseBuckets
      • msecs
        • The bucket values of the histogram set by the vhost_traffic_status_histogram_buckets directive.
      • counters
        • The cumulative number of responses whose upstream-only processing time is less than or equal to each bucket value.
    • weight
      • Current weight setting of the server.
    • maxFails
      • Current max_fails setting of the server.
    • failTimeout
      • Current fail_timeout setting of the server.
    • backup
      • Current backup setting of the server.
    • down
      • Current down setting of the server. Note that this only reflects the down mark on the ngx_http_upstream_module's server directive (e.g. server backend3.example.com down), not the actual upstream server state. It changes to the actual state if the upstream zone directive is enabled.
  • cacheZones
    • maxSize
      • The limit on the maximum size of the cache specified in the configuration. If max_size is not specified in the proxy_cache_path directive, the system-dependent value NGX_MAX_OFF_T_VALUE is assigned by default. In other words, this value comes from nginx, not from this module.
    • usedSize
      • The current size of the cache. This value is taken from nginx like the above maxSize value.
    • inBytes
      • The total number of bytes received from the cache.
    • outBytes
      • The total number of bytes sent from the cache.
    • responses
      • miss
        • The number of cache misses.
      • bypass
        • The number of cache bypasses.
      • expired
        • The number of expired cache responses.
      • stale
        • The number of stale cache responses.
      • updating
        • The number of cache responses served while updating.
      • revalidated
        • The number of revalidated cache responses.
      • hit
        • The number of cache hits.
      • scarce
        • The number of cache scarce results.
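The requestBuckets and responseBuckets counters described above are cumulative histograms: each counter holds the number of requests whose processing time is less than or equal to the corresponding bucket bound. A short Python sketch with hypothetical sample data converts them into per-interval counts:

```python
# Hypothetical requestBuckets data: bucket bounds in msec and the
# cumulative counters reported by the module (counters[i] = number of
# requests with processing time <= msecs[i]).
msecs    = [5, 10, 50, 100, 500]
counters = [90, 140, 180, 195, 200]

# Convert cumulative counters into per-interval counts.
per_bucket = [counters[0]] + [
    counters[i] - counters[i - 1] for i in range(1, len(counters))
]
print(per_bucket)  # [90, 50, 40, 15, 5]
```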

Json used by control

/{status_uri}/control?cmd=reset&...

/{status_uri}/control?cmd=delete&...

  • processingReturn
    • The processing result: true or false.
  • processingCommandString
    • The requested command string.
  • processingGroupString
    • The requested group string.
  • processingZoneString
    • The requested zone string.
  • processingCounts
    • The number of zones actually processed.

Variables

The following embedded variables are provided:

  • $vts_request_counter
    • The total number of requests received from clients.
  • $vts_in_bytes
    • The total number of bytes received from clients.
  • $vts_out_bytes
    • The total number of bytes sent to clients.
  • $vts_1xx_counter
    • The number of responses with status codes 1xx.
  • $vts_2xx_counter
    • The number of responses with status codes 2xx.
  • $vts_3xx_counter
    • The number of responses with status codes 3xx.
  • $vts_4xx_counter
    • The number of responses with status codes 4xx.
  • $vts_5xx_counter
    • The number of responses with status codes 5xx.
  • $vts_cache_miss_counter
    • The number of cache misses.
  • $vts_cache_bypass_counter
    • The number of cache bypasses.
  • $vts_cache_expired_counter
    • The number of expired cache responses.
  • $vts_cache_stale_counter
    • The number of stale cache responses.
  • $vts_cache_updating_counter
    • The number of cache responses served while updating.
  • $vts_cache_revalidated_counter
    • The number of revalidated cache responses.
  • $vts_cache_hit_counter
    • The number of cache hits.
  • $vts_cache_scarce_counter
    • The number of cache scarce results.
  • $vts_request_time_counter
    • The accumulated request processing time.
  • $vts_request_time
    • The average of request processing times.

Limit

Total traffic can be limited per host using the vhost_traffic_status_limit_traffic directive, and by arbitrary key using the vhost_traffic_status_limit_traffic_by_set_key directive. When the limit is exceeded, the server returns the 503 (Service Temporarily Unavailable) error in reply to a request. The return code is configurable.

To limit traffic for server

http {

    vhost_traffic_status_zone;

    ...

    server {

        server_name *.example.org;

        vhost_traffic_status_limit_traffic in:64G;
        vhost_traffic_status_limit_traffic out:1024G;

        ...
    }
}
  • Limits in/out total traffic on *.example.org to 64G and 1024G respectively. The limit applies individually per domain if the vhost_traffic_status_filter_by_host directive is enabled.

To limit traffic for filter

http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    vhost_traffic_status_zone;

    ...

    server {

        server_name example.org;

        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;
        vhost_traffic_status_limit_traffic_by_set_key FG@country::$server_name@US out:1024G;
        vhost_traffic_status_limit_traffic_by_set_key FG@country::$server_name@CN out:2048G;

        ...

    }
}
  • Limits total outbound traffic to US and CN on example.org to 1024G and 2048G respectively.

To limit traffic for upstream

http {

    vhost_traffic_status_zone;

    ...

    upstream backend {
        server 10.10.10.17:80;
        server 10.10.10.18:80;
    }

    server {

        server_name example.org;

        location /backend {
            vhost_traffic_status_limit_traffic_by_set_key UG@backend@10.10.10.17:80 in:512G;
            vhost_traffic_status_limit_traffic_by_set_key UG@backend@10.10.10.18:80 in:1024G;
            proxy_pass http://backend;
        }

        ...

    }
}
  • Limits total inbound traffic to the peers of upstream backend on example.org to 512G and 1024G respectively.

Caveats: traffic here is a cumulative transfer counter, not bandwidth.
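Because the counters are cumulative, a monitoring tool has to derive per-second rates from two successive samples. A minimal Python sketch (the sample values are hypothetical):

```python
def rate(prev_value, prev_msec, cur_value, cur_msec):
    """Per-second rate from two samples of a cumulative vts counter,
    using the nowMsec timestamps from the status document."""
    elapsed = (cur_msec - prev_msec) / 1000.0
    return (cur_value - prev_value) / elapsed if elapsed > 0 else 0.0

# Hypothetical samples of a serverZones outBytes counter taken 10 s apart.
bytes_per_sec = rate(1_000_000, 1700000000000, 6_000_000, 1700000010000)
print(bytes_per_sec)  # 500000.0
```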

Use cases

User-defined individual statistics can be calculated using the vhost_traffic_status_filter_by_set_key directive.

To calculate traffic for individual country using GeoIP

http {
    geoip_country /usr/share/GeoIP/GeoIP.dat;

    vhost_traffic_status_zone;
    vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;

    ...

    server {

        ...

        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
  • Calculate traffic per country across all server groups.
  • Calculate traffic per country for each server group.

Country-flag images are built into the HTML page. They are enabled when the string country is included in the group name, which is the second argument of the vhost_traffic_status_filter_by_set_key directive.

To calculate traffic for individual storage volume

http {
    vhost_traffic_status_zone;

    ...

    server {

        ...

        location ~ ^/storage/(.+)/.*$ {
            set $volume $1;
            vhost_traffic_status_filter_by_set_key $volume storage::$server_name;
        }

        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
  • Calculate traffic for each storage volume matched by the regular expression in the location directive.

To calculate traffic for individual user agent

http {
    vhost_traffic_status_zone;

    map $http_user_agent $filter_user_agent {
        default 'unknown';
        ~iPhone ios;
        ~Android android;
        ~(MSIE|Mozilla) windows;
    }

    vhost_traffic_status_filter_by_set_key $filter_user_agent agent::*;

    ...

    server {

        ...

        vhost_traffic_status_filter_by_set_key $filter_user_agent agent::$server_name;

        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
  • Calculate traffic for individual http_user_agent

To calculate traffic for detailed http status code

http {
    vhost_traffic_status_zone;

    server {

        ...

        vhost_traffic_status_filter_by_set_key $status $server_name;

        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}
  • Calculate traffic for detailed http status code

Caveats: the $status variable is available since nginx 1.3.2 and 1.2.2.

To calculate traffic for dynamic dns

If the domain has multiple DNS A records, you can calculate traffic for individual IPs for the domain using the filter feature or a variable in proxy_pass.

http {
    vhost_traffic_status_zone;

    upstream backend {
        server elb.example.org:80;
    }

    ...

    server {

        ...

        location /backend {
            vhost_traffic_status_filter_by_set_key $upstream_addr upstream::backend;
            proxy_pass http://backend;
        }
    }
}
  • Calculates traffic for the individual IPs of the domain elb.example.org. If elb.example.org has multiple DNS A records, all IPs are displayed in filterZones. With the settings above, when NGINX starts up or reloads its configuration, it queries a DNS server to resolve the domain, and the A records are cached in memory. The cached A records therefore do not change, even if the DNS administrator changes them, until NGINX restarts or reloads.
http {
    vhost_traffic_status_zone;

    resolver 10.10.10.53 valid=10s;

    ...

    server {

        ...

        location /backend {
            set $backend_server elb.example.org;
            proxy_pass http://$backend_server;
        }
    }
}
  • Calculates traffic for the individual IPs of the domain elb.example.org. If elb.example.org's DNS A record changes, both the old IP and the new IP are displayed in ::nogroups. Unlike the first upstream group setting, this second setting keeps working even if the DNS administrator changes the A records.

Caveats: for more details about NGINX DNS resolution, see the dns-service-discovery-nginx-plus.

To calculate traffic except for status page

http {
    vhost_traffic_status_zone;

    ...

    server {

        ...

        location /status {
            vhost_traffic_status_bypass_limit on;
            vhost_traffic_status_bypass_stats on;
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

To maintain statistics data permanently

http {
    vhost_traffic_status_zone;
    vhost_traffic_status_dump /var/log/nginx/vts.db;

    ...

    server {

        ...

    }
}
  • The vhost_traffic_status_dump directive preserves statistics data even if the system is rebooted or nginx is restarted. Please see the vhost_traffic_status_dump directive for detailed usage.

Customizing

To customize after the module installed

  1. Change the {{uri}} string to your status URI in status.template.html as follows:
shell> vi share/status.template.html
var vtsStatusURI = "yourStatusUri/format/json", vtsUpdateInterval = 1000;
  2. Then customize and copy status.template.html to the server root directory as follows:
shell> cp share/status.template.html /usr/share/nginx/html/status.html
  3. Configure nginx.conf
   server {
       server_name example.org;
       root /usr/share/nginx/html;

       # Redirect requests for / to /status.html
       location = / {
           return 301 /status.html;
       }

       location = /status.html {}

       # Everything beginning /status (except for /status.html) is
       # processed by the status handler
       location /status {
           vhost_traffic_status_display;
           vhost_traffic_status_display_format json;
       }
   }
  4. Access your html:
http://example.org/status.html

To customize before the module is installed

  1. Modify share/status.template.html (Do not change {{uri}} string)

  2. Recreate the ngx_http_vhost_traffic_status_module_html.h as follows:

shell> cd util
shell> ./tplToDefine.sh ../share/status.template.html > ../src/ngx_http_vhost_traffic_status_module_html.h
  3. Add the module to the build configuration by adding --add-module=/path/to/nginx-module-vts

  4. Build the nginx binary.

  5. Install the nginx binary.

Directives

draw_io_vts_diagram

vhost_traffic_status

- -
Syntax vhost_traffic_status <on|off>
Default off
Context http, server, location

Description: Enables or disables the module. If the vhost_traffic_status_zone directive is set, the module is enabled automatically.

vhost_traffic_status_zone

- -
Syntax vhost_traffic_status_zone [shared:name:size]
Default shared:vhost_traffic_status:1m
Context http

Description: Sets parameters for a shared memory zone that keeps states for various keys. The cache is shared between all worker processes. In most cases, the shared memory used by nginx-module-vts does not grow much. It grows considerably when the vhost_traffic_status_filter_by_set_key directive is used, but if the filter's keys are bounded (e.g. the total number of country codes is about 240) it does not grow indefinitely.

If you use vhost_traffic_status_filter_by_set_key directive, set it as follows:

  • Set to more than 32M shared memory size by default. (vhost_traffic_status_zone shared:vhost_traffic_status:32m)
  • If the message "ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone" is printed in the error_log, increase the size to more than (usedSize * 2).
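Put together, a zone sized for filter use might look like the following sketch (the 32m figure is just the starting point suggested above, not a universal recommendation):

```nginx
http {
    # Larger zone for vhost_traffic_status_filter_by_set_key usage;
    # grow beyond 32m if "ngx_slab_alloc() failed" appears in error_log.
    vhost_traffic_status_zone shared:vhost_traffic_status:32m;
}
```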

vhost_traffic_status_dump

- -
Syntax vhost_traffic_status_dump path [period]
Default -
Context http

Description: Enables statistics data dump and restore. The path is the location to dump the statistics data to (e.g. /var/log/nginx/vts.db). The period is the backup cycle time (default: 60s). The data is backed up immediately, regardless of the backup cycle, if nginx is exited by a signal (SIGKILL).
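As a sketch, dumping on a shorter cycle than the default might look like this (the path and the 30s period are illustrative choices, not defaults):

```nginx
http {
    vhost_traffic_status_zone;

    # Back up statistics to disk every 30s instead of the default 60s.
    vhost_traffic_status_dump /var/log/nginx/vts.db 30s;
}
```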

vhost_traffic_status_display

- -
Syntax vhost_traffic_status_display
Default -
Context http, server, location

Description: Enables the module's display handler for the enclosing context.

vhost_traffic_status_display_format

- -
Syntax vhost_traffic_status_display_format <json|html|jsonp|prometheus>
Default json
Context http, server, location

Description: Sets the display handler's output format. With json, the handler responds with a JSON document; with html, with the built-in live dashboard in HTML; with jsonp, with a JSONP callback function (default: ngx_http_vhost_traffic_status_jsonp_callback); with prometheus, with a Prometheus-format document.
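One way to expose several formats side by side is one location per format; the location paths below are illustrative, any location with the display handler works:

```nginx
server {
    location /status/html {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format html;
    }

    # Useful as a Prometheus scrape target.
    location /status/prometheus {
        vhost_traffic_status_display;
        vhost_traffic_status_display_format prometheus;
    }
}
```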

vhost_traffic_status_display_jsonp

- -
Syntax vhost_traffic_status_display_jsonp callback
Default ngx_http_vhost_traffic_status_jsonp_callback
Context http, server, location

Description: Sets the callback name for the JSONP.

vhost_traffic_status_display_sum_key

- -
Syntax vhost_traffic_status_display_sum_key name
Default *
Context http, server, location

Description: Sets the sum key string in the serverZones field of the JSON output. The default sum key string is "*".

vhost_traffic_status_filter

- -
Syntax vhost_traffic_status_filter <on|off>
Default on
Context http, server, location

Description: Enables or disables the filter features.

vhost_traffic_status_filter_by_host

- -
Syntax vhost_traffic_status_filter_by_host <on|off>
Default off
Context http, server, location

Description: Enables or disables keying by the Host header field. If set to on, and nginx's server_name directive specifies several names or a wildcard name starting with an asterisk, e.g. "*.example.org", and the server is requested with hostnames such as (a|b|c).example.org, then the JSON serverZones is printed as follows:

server {
  server_name *.example.org;
  vhost_traffic_status_filter_by_host on;

  ...

}
  ...
  "serverZones": {
      "a.example.org": {
      ...
      },
      "b.example.org": {
      ...
      },
      "c.example.org": {
      ...
      }
      ...
   },
   ...

It provides the same function as setting vhost_traffic_status_filter_by_set_key $host.

vhost_traffic_status_filter_by_set_key

- -
Syntax vhost_traffic_status_filter_by_set_key key [name]
Default -
Context http, server, location

Description: Enables keys by user-defined variable. The key is a key string to calculate traffic for. The name is a group string to calculate traffic for. The key and name can contain variables such as $host and $server_name. If the second argument name is specified, the group belongs to filterZones; otherwise the key's group belongs to serverZones. An example with the geoip module is as follows:

server {
  server_name example.org;
  vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

  ...

}
  ...
  "serverZones": {
  ...
  },
  "filterZones": {
      "country::example.org": {
          "KR": {
              "requestCounter":...,
              "inBytes":...,
              "outBytes":...,
              "responses":{
                  "1xx":...,
                  "2xx":...,
                  "3xx":...,
                  "4xx":...,
                  "5xx":...,
                  "miss":...,
                  "bypass":...,
                  "expired":...,
                  "stale":...,
                  "updating":...,
                  "revalidated":...,
                  "hit":...,
                  "scarce":...
              },
              "requestMsecCounter":...,
              "requestMsec":...,
              "requestMsecs":{
                  "times":[...],
                  "msecs":[...]
              },
          },
          "US": {
          ...
          },
          ...
      },
      ...
  },
  ...

vhost_traffic_status_filter_check_duplicate

- -
Syntax vhost_traffic_status_filter_check_duplicate <on|off>
Default on
Context http, server, location

Description: Enables or disables deduplication of vhost_traffic_status_filter_by_set_key. If this option is enabled, only one of any duplicate values (key + name) in each context (http, server, location) is processed.

vhost_traffic_status_filter_max_node

- -
Syntax vhost_traffic_status_filter_max_node number [string ...]
Default 0
Context http

Description: Limits the filter size using the specified number and string values. If the number is exceeded, existing nodes are deleted by the LRU algorithm. The number argument is the maximum number of nodes. The default value 0 does not limit filters. One node is one object in filterZones in the JSON document. The string arguments are matched against the group string values set by the vhost_traffic_status_filter_by_set_key directive. Matching succeeds on a prefix match, like the regular expression /^string.*/. If no string arguments are set, the limit applies to all filters.

For examples:

$ vi nginx.conf

http {

    geoip_country /usr/share/GeoIP/GeoIP.dat;

    vhost_traffic_status_zone;

    # All filters are limited to a total of 16 nodes.
    # vhost_traffic_status_filter_max_node 16

    # The `/^uris.*/` and `/^client::ports.*/` group string patterns are limited to a total of 16 nodes.
    vhost_traffic_status_filter_max_node 16 uris client::ports;

    ...

    server {

        server_name example.org;

        ...

        vhost_traffic_status_filter_by_set_key $uri uris::$server_name;
        vhost_traffic_status_filter_by_set_key $remote_port client::ports::$server_name;
        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;

    }
}

$ for i in {0..1000}; do curl -H 'Host: example.org' -i "http://localhost:80/test$i"; done

screenshot-vts-filter-max-node

In the above example, the /^uris.*/ and /^client::ports.*/ group string patterns are limited to a total of 16 nodes. The other filters like country::.* are not limited.

vhost_traffic_status_limit

- -
Syntax vhost_traffic_status_limit <on|off>
Default on
Context http, server, location

Description: Enables or disables the limit features.

vhost_traffic_status_limit_traffic

- -
Syntax vhost_traffic_status_limit_traffic member:size [code]
Default -
Context http, server, location

Description: Enables a traffic limit for the specified member. The member is a member string to limit traffic by. The size is a size (k/m/g) at which to limit traffic. The code is the status code to return in response to rejected requests (default: 503).

The available member strings are as follows:

  • request
    • The total number of client requests received from clients.
  • in
    • The total number of bytes received from clients.
  • out
    • The total number of bytes sent to clients.
  • 1xx
    • The number of responses with status codes 1xx.
  • 2xx
    • The number of responses with status codes 2xx.
  • 3xx
    • The number of responses with status codes 3xx.
  • 4xx
    • The number of responses with status codes 4xx.
  • 5xx
    • The number of responses with status codes 5xx.
  • cache_miss
    • The number of cache miss responses.
  • cache_bypass
    • The number of cache bypass responses.
  • cache_expired
    • The number of cache expired responses.
  • cache_stale
    • The number of cache stale responses.
  • cache_updating
    • The number of cache updating responses.
  • cache_revalidated
    • The number of cache revalidated responses.
  • cache_hit
    • The number of cache hit responses.
  • cache_scarce
    • The number of cache scarce responses.

vhost_traffic_status_limit_traffic_by_set_key

- -
Syntax vhost_traffic_status_limit_traffic_by_set_key key member:size [code]
Default -
Context http, server, location

Description: Enables a traffic limit for the specified key and member. The key is a key string to limit traffic by. The member is a member string to limit traffic by. The size is a size (k/m/g) at which to limit traffic. The code is the status code to return in response to rejected requests (default: 503).

The key syntax is as follows:

  • group@[subgroup@]name

The available group strings are as follows:

  • NO
    • The group of server.
  • UA
    • The group of upstream alone.
  • UG
    • The group of upstream group.(use subgroup)
  • CC
    • The group of cache.
  • FG
    • The group of filter.(use subgroup)

The available member strings are as follows:

  • request
    • The total number of client requests received from clients.
  • in
    • The total number of bytes received from clients.
  • out
    • The total number of bytes sent to clients.
  • 1xx
    • The number of responses with status codes 1xx.
  • 2xx
    • The number of responses with status codes 2xx.
  • 3xx
    • The number of responses with status codes 3xx.
  • 4xx
    • The number of responses with status codes 4xx.
  • 5xx
    • The number of responses with status codes 5xx.
  • cache_miss
    • The number of cache miss responses.
  • cache_bypass
    • The number of cache bypass responses.
  • cache_expired
    • The number of cache expired responses.
  • cache_stale
    • The number of cache stale responses.
  • cache_updating
    • The number of cache updating responses.
  • cache_revalidated
    • The number of cache revalidated responses.
  • cache_hit
    • The number of cache hit responses.
  • cache_scarce
    • The number of cache scarce responses.

The member values are the same as in the vhost_traffic_status_limit_traffic directive.
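A sketch using the group@[subgroup@]name key syntax described above; the upstream name and peer address are made up for illustration:

```nginx
http {
    vhost_traffic_status_zone;

    upstream backend {
        server 10.10.10.11:80;
    }

    server {
        server_name example.org;

        # UG = upstream group; subgroup "backend", name is the peer address.
        # Limit bytes sent via this peer to 1g, rejecting with 503 after that.
        vhost_traffic_status_limit_traffic_by_set_key UG@backend@10.10.10.11:80 out:1g;

        location / {
            proxy_pass http://backend;
        }
    }
}
```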

vhost_traffic_status_limit_check_duplicate

- -
Syntax vhost_traffic_status_limit_check_duplicate <on|off>
Default on
Context http, server, location

Description: Enables or disables deduplication of vhost_traffic_status_limit_traffic_by_set_key. If this option is enabled, only one of any duplicate values (member | key + member) in each context (http, server, location) is processed.

vhost_traffic_status_set_by_filter

- -
Syntax vhost_traffic_status_set_by_filter $variable group/zone/name
Default -
Context http, server, location, if

Description: Gets the specified status value stored in shared memory. It can acquire almost all status values; the obtained value is stored in $variable, the first argument.

  • group
    • server
    • filter
    • upstream@alone
    • upstream@group
    • cache
  • zone
    • server
      • name
    • filter
      • filter_group@name
    • upstream@group
      • upstream_group@name
    • upstream@alone
      • @name
    • cache
      • name
  • name
    • requestCounter
      • The total number of client requests received from clients.
    • requestMsecCounter
      • The number of accumulated request processing time in milliseconds.
    • requestMsec
      • The average of request processing times in milliseconds.
    • responseMsecCounter
      • The number of accumulated only upstream response processing time in milliseconds.
    • responseMsec
      • The average of only upstream response processing times in milliseconds.
    • inBytes
      • The total number of bytes received from clients.
    • outBytes
      • The total number of bytes sent to clients.
    • 1xx, 2xx, 3xx, 4xx, 5xx
      • The number of responses with status codes 1xx, 2xx, 3xx, 4xx, and 5xx.
    • cacheMaxSize
      • The limit on the maximum size of the cache specified in the configuration.
    • cacheUsedSize
      • The current size of the cache.
    • cacheMiss
      • The number of cache miss responses.
    • cacheBypass
      • The number of cache bypass responses.
    • cacheExpired
      • The number of cache expired responses.
    • cacheStale
      • The number of cache stale responses.
    • cacheUpdating
      • The number of cache updating responses.
    • cacheRevalidated
      • The number of cache revalidated responses.
    • cacheHit
      • The number of cache hit responses.
    • cacheScarce
      • The number of cache scarce responses.
    • weight
      • Current weight setting of the server.
    • maxFails
      • Current max_fails setting of the server.
    • failTimeout
      • Current fail_timeout setting of the server.
    • backup
      • Current backup setting of the server.(0|1)
    • down
      • Current down setting of the server.(0|1)

Caveats: The name is case-sensitive. All return values are of integer type.

For examples:

  • requestCounter in serverZones
    • vhost_traffic_status_set_by_filter $requestCounter server/example.org/requestCounter
  • requestCounter in filterZones
    • vhost_traffic_status_set_by_filter $requestCounter filter/country::example.org@KR/requestCounter
  • requestCounter in upstreamZones
    • vhost_traffic_status_set_by_filter $requestCounter upstream@group/[email protected]:80/requestCounter
  • requestCounter in upstreamZones::nogroups
    • vhost_traffic_status_set_by_filter $requestCounter upstream@alone/10.10.10.11:80/requestCounter
  • cacheHit in cacheZones
    • vhost_traffic_status_set_by_filter $cacheHit cache/my_cache_name/cacheHit

vhost_traffic_status_average_method

- -
Syntax vhost_traffic_status_average_method <AMM|WMA> [period]
Default AMM 60s
Context http, server, location

Description: Sets the method, a formula that calculates the average of response processing times. The period is the effective time of the values used for the average calculation (default: 60s). If period is set to 0, the effective time is ignored; in that case, the last average value is displayed even if there are no requests and the time has elapsed. The corresponding values are requestMsec and responseMsec in the JSON output.
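The difference between the two methods can be illustrated with a small Python sketch. This is not the module's C implementation; the linear 1..n weights used for WMA are an assumption chosen only to show that recent samples dominate:

```python
def amm(samples):
    """Arithmetic mean (AMM) of request times in msec."""
    return sum(samples) / len(samples) if samples else 0

def wma(samples):
    """Weighted moving average (WMA): later samples weigh more.
    Illustrative linear weights 1..n over arrival order."""
    if not samples:
        return 0
    weights = range(1, len(samples) + 1)
    return sum(w * s for w, s in zip(weights, samples)) / sum(weights)

times = [10, 10, 100]        # msec; the most recent request was slow
print(amm(times))            # 40.0
print(wma(times))            # (1*10 + 2*10 + 3*100) / 6 = 55.0
```

With WMA the recent slow request pulls the average up more than with AMM, which is the practical difference between the two settings.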

vhost_traffic_status_histogram_buckets

- -
Syntax vhost_traffic_status_histogram_buckets second ...
Default -
Context http, server, location

Description: Sets the observation buckets used in the histograms. If this directive is not set, histograms are disabled. The second values can have decimal places, with a minimum value of 0.001 (1ms). The maximum number of buckets is 32. If this is insufficient for you, change NGX_HTTP_VHOST_TRAFFIC_STATUS_DEFAULT_BUCKET_LEN in src/ngx_http_vhost_traffic_status_node.h.

For examples:

  • vhost_traffic_status_histogram_buckets 0.005 0.01 0.05 0.1 0.5 1 5 10
    • The observe buckets are [5ms 10ms 50ms 100ms 500ms 1s 5s 10s].
  • vhost_traffic_status_histogram_buckets 0.005 0.01 0.05 0.1
    • The observe buckets are [5ms 10ms 50ms 100ms].

Caveats: If this directive is not set, histogram statistics do not work. Histograms restored by the vhost_traffic_status_dump directive are not affected by later changes to the buckets via vhost_traffic_status_histogram_buckets, so you must delete the zone or the dump file before changing the buckets. Similarly, delete the dump file when using the histogram for the first time.
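Buckets of this kind are cumulative: an observation counts toward every bucket whose upper bound it does not exceed, which is the convention the prometheus output format expects. A minimal Python sketch, assuming the bucket list from the first example above (this is not the module's code):

```python
import bisect

BUCKETS = [0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10]  # seconds

def observe(counters, seconds):
    """Increment every bucket whose upper bound is >= the observed time."""
    start = bisect.bisect_left(BUCKETS, seconds)
    for i in range(start, len(BUCKETS)):
        counters[i] += 1

counters = [0] * len(BUCKETS)
for t in (0.003, 0.02, 0.2, 20):   # 20s exceeds all buckets
    observe(counters, t)
print(counters)                    # [1, 1, 2, 2, 3, 3, 3, 3]
```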

vhost_traffic_status_bypass_limit

- -
Syntax vhost_traffic_status_bypass_limit <on|off>
Default off
Context http, server, location

Description: Enables or disables bypassing of vhost_traffic_status_limit directives. If this option is enabled, the limit features are bypassed. This is mostly useful if you want to reach a status web page like /status regardless of vhost_traffic_status_limit directives, as follows:

http {
    vhost_traffic_status_zone;

    ...

    server {

        ...

        location /status {
            vhost_traffic_status_bypass_limit on;
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

vhost_traffic_status_bypass_stats

- -
Syntax vhost_traffic_status_bypass_stats <on|off>
Default off
Context http, server, location

Description: Enables or disables bypassing of vhost_traffic_status. If this option is enabled, the traffic status stats feature is bypassed; in other words, the request is excluded from the traffic status stats. This is mostly useful if you want to ignore requests to a status web page like /status, as follows:

http {
    vhost_traffic_status_zone;

    ...

    server {

        ...

        location /status {
            vhost_traffic_status_bypass_stats on;
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

Releases

To cut a release, create a changelog entry PR with git-chglog

version="v0.2.0"
git checkout -b "cut-${version}"
git-chglog -o CHANGELOG.md --next-tag "${version}"
git add CHANGELOG.md
sed -i "s/NGX_HTTP_VTS_MODULE_VERSION \".*/NGX_HTTP_VTS_MODULE_VERSION \"${version}\"/" src/ngx_http_vhost_traffic_status_module.h
git add src/ngx_http_vhost_traffic_status_module.h
git-chglog -t .chglog/RELNOTES.tmpl --next-tag "${version}" "${version}" | git commit -F-

After the PR is merged, create the new tag and release on the GitHub Releases.

See Also

TODO

  • Add an implementation that periodically flushes computed statistics from each worker process to shared memory, to reduce the contention caused by locking with ngx_shmtx_lock().

Author

YoungJoo.Kim(김영주) [[email protected]]

nginx-module-vts's People

Contributors

boyanhh, cohalz, gemfield, jongiddy, mathieu-aubin, patsevanton, robn, spacewander, superq, susuper, timgates42, tobilarscheid, u5surf, vozlt, wandenberg


nginx-module-vts's Issues

shm_add_upstream

Hi,
there are some issues with nginx 1.7.10

  1. 2015/04/08 10:26:17 [error] 1073#0: *118 shm_add_upstream failed while logging request, client: 127.0.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://127.0.0.1:8000/favicon.ico", host: "example.com"
  2. On 'Server zones' there is something that looks like part of an nginx process memory dump in the 'Zone' field.
  3. This happens if server_name is empty or there is multiple names in it.

Cache status causes cache lock

With the new cache status overview (which is awesome) the cache locks are not released and upon restart the following errors occur:

2015/08/25 07:29:15 [alert] 5661#5661: ignore long locked inactive cache entry 6c7b231443395c911bbfed66866da923, count:1

This appears to be a similar issue to what I found here: FRiCKLE/ngx_slowfs_cache@5800076 and because of a lock on the increment count the files are not release/flushed.

Reset counters after getting status, in one request

First of all, thank you for this great plugin!

It'd be nice to have one more feature, though.
I want to periodically request status and reset counters after every one.
Currently, it can be done only in two requests:

/status/control?cmd=status...
/status/control?cmd=reset...

But some numbers will be certainly between them.
One more argument to cmd=status would save the situation, for example:

/status/control?cmd=status&reset=true

or

/status/control?cmd=pop # means "pop numbers and clear"

I've tried to do it on my own, but my knowledge of C is very limited.

Wrong cacheZones maxSize calculation

Hello,
I've seen some inconsistent data in cacheZones:

Filesystem Size Used Avail Use% Mounted on
/dev/mapper/centos_r620-root 70G 1.7G 69G 3% /
devtmpfs 48G 0 48G 0% /dev
tmpfs 48G 0 48G 0% /dev/shm
tmpfs 48G 137M 47G 1% /run
tmpfs 48G 0 48G 0% /sys/fs/cgroup
tmpfs 48G 155M 47G 1% /cache

"my_zone" Cache directory is /cache.

"cacheZones":{"my_zone":{"maxSize":9223372036854771712,"usedSize":161124352,"inBytes":2191759,"outBytes":1966063488,"responses":

maxSize is really good to be true:). The issue is also in the web page. Maybe tmpfs is a problem?

I'm using master.

BRS

Connections show

I use openresty/1.9.7.2 with nginx-module-vts. When I used cosbench (mode=s3) to test HTTP requests (with 6000 workers), if my active|reading|writing|waiting count goes above 12000, the numbers are not correct after my test finishes (in fact requests is 0, but it still shows a big number like 600x). Please check and fix it! Thanks!

Building error

Mac OS X El Capitan
Nginx 1.9.15

/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:1277:24: error: comparison of address of 'limits[i].variable' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
        if (&limits[i].variable == NULL) {
             ~~~~~~~~~~^~~~~~~~    ~~~~
/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:2151:25: error: comparison of address of 'filters[i].filter_key' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
        if (&filters[i].filter_key == NULL || &filters[i].filter_name == NULL) {
             ~~~~~~~~~~~^~~~~~~~~~    ~~~~
/Users/User/Downloads/nginx-module-vts-master/src/ngx_http_vhost_traffic_status_module.c:2151:59: error: comparison of address of 'filters[i].filter_name' equal to a null pointer is always false [-Werror,-Wtautological-pointer-compare]
        if (&filters[i].filter_key == NULL || &filters[i].filter_name == NULL) {
                                               ~~~~~~~~~~~^~~~~~~~~~~    ~~~~
3 errors generated.

possible traffic limit?

I am considering this as a replacement for mod_throttle or mod_cband from the Apache days in a hosting environment.
Bandwidth can be controlled with other modules, but so far there seems to be no module that limits based on transfer.
I am wondering whether you have any plans to develop a transfer limit feature.

Thank you for developing a great module.

Wildcard domain support

I'm using this module with nginx 1.8, and I plan on having hundreds of different virtualhosts proxied through this nginx installation.

My current setup allows me to use wildcard nginx virtualhosts (ie. I'm specifying "server_name *.domain.com;"), but unfortunately this doesn't seem to work with this nginx module.

As you might've guessed, I'm seeing "*.domain.com" as the zone name in the module's output.

Is there a way to overwrite this, so I could, say, use "$host" instead of server_name as the zone name?

Clustered Stats

Is there a way or plans to support aggregating the results from multiple nginx instances? IE, one instance collects results via json from other additional nginx instances in addition to its own and provides a full aggregated results page.

On a side note, This tool is pretty awesome.

Failure when exiting worker process

If I enable the nginx-module-vts via the nginx config, I notice that the active and waiting connections in nginx have sudden spikes. They constantly grow but the number of active/waiting connections is not correct. Output of netstat shows that nginx does not count them right.

The cause of this behaviour could be the following from the nginx error log:
2016/08/18 20:08:13 [notice] 29511#0: signal 17 (SIGCHLD) received
2016/08/18 20:08:13 [alert] 29511#0: worker process 1341 exited on signal 11
2016/08/18 20:08:13 [notice] 29511#0: start worker process 29713
2016/08/18 20:08:13 [notice] 29511#0: signal 29 (SIGIO) received

Usually it should say "worker process 123456 exited with code 0" but if nginx-module-vts is enabled the worker process does not shut down cleanly.

nginx version: 1.11.3

In the http section I have this config:
vhost_traffic_status on;
vhost_traffic_status_zone;
vhost_traffic_status_limit off;

Later I have some filters defined in specific locations.

nginx -s reload FAILED

Hi,

Now we are using your Nginx module VTS and I feel it's very powerful.

But there is a problem I can't find the cause of, nor anything helpful about this error on GitHub: when I use nginx -s reload three or four times, there is an error nginx: [alert] kill (25315, 1) failed (3: No such process) and the nginx master process exits.

We use Nginx 1.8.1 and CentOS 6.6, but I have tested the versions which marked by Compatibility on Readme.md and all of them have this problem except for 1.9.9.

Thanks for the excellent module.

Uptime question

When I access html version of status page I get current nginx uptime. Readme says it's counted as nowMsec - loadMsec but when I try to count it from json - numbers differs.
How to get correct uptime from json?

The value of total connections is wrong

Hi, I'm back

The new problem is that the value of total connections in Server main on the status webpage is too big. It's wrong; it should be less than 15k, as there aren't that many connections in this one of our systems. After I restart nginx, it goes back to normal (about 10000 connections), but after about a week it goes wrong again.

BTW, when it's in the abnormal state, the data of original module stub_status_module of Nginx is also abnormal, of course, they are equal.

SS:

561818f4-3cce-49e4-8513-2d40d5e475cf

nginx-module-vts:

d81a4716-353d-4543-8af1-005cbf410f9c

stub_status_module:

d55a9022-68e2-4718-9a0a-73427f584f35

1.9 issues and new stuff

with 1.9 I get a negative uptime value;
Version Uptime Connections Requests
active reading writing waiting accepted handled Total Req/s
1.9.0 -4293105030ms 1 0 1 0 57 57 212 0

any ideas? also can you have a look at the new 'streams' feature in 1.9 ?

VTS segfault when balancer_by_lua breaks

Maybe a bit of a weird corner case but could be worth handling:

When playing around with balancer_by_lua_block in (nginx/OpenResty) and "vhost_traffic_status_zone" set, I noticed that workers would segfault when there are errors in the lua code. While this is not something that should usually happen, it highlights the fact that there are cases where ngx_http_upstream_state_t.peer is NULL, which is not handled in VTS (looking at other parts of the nginx source, it seems like they always NULL-check that variable before use).

This change adds handling for that:
https://github.com/moshisushi/nginx-module-vts/commit/592a13d00cefefa0c4028243703048330cbe3c56

Error log without fix:
2016/07/13 10:56:12 [error] 11293#11293: *1 failed to run balancer_by_lua*: balancer_by_lua:8: attempt to call global 'foo' (a nil value) stack traceback: balancer_by_lua:8: in function <balancer_by_lua:1> while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost" 2016/07/13 10:56:12 [alert] 11291#11291: worker process 11293 exited on signal 11

Error log with fix:
2016/07/13 10:53:55 [error] 11239#11239: *1 failed to run balancer_by_lua*: balancer_by_lua:8: attempt to call global 'foo' (a nil value) stack traceback: balancer_by_lua:8: in function <balancer_by_lua:1> while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost" 2016/07/13 10:53:55 [error] 11239#11239: *1 handler::shm_add_upstream() failed while logging request, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", host: "localhost"

make pull(push)time (refresh) configurable

Can you make the refresh time configurable? 1 - 5 seconds, mostly a refresh every 3 seconds is enough.
vhost_traffic_status_display_refresh 3

Maybe add configuration options for colors?
ea.
when value is higher then xx change text colors.
vhost_traffic_status_display_trigger_connections_waiting 1000 white on red
vhost_traffic_status_display_trigger_responses_5xx 500 white on red
(click on white text on red background to reset counter and color)
vhost_traffic_status_display_upstream_down white on red

Upstreams not showing as down

Hello everybody I just set this up and doing some testing i turned one of my upstream serves off and spammed the Load Balancing with requests from "ab" but it seems to show that it is up.

server 1 is 192.xxx.1.14:80 up 17ms
server 2 is 192.xxx.1.15:80 up 3.4s

but server 2 is turned off and there is no fallback setup yet

my config is round robin is that a problem?

please help

split by virtual hosts

first of all, thank you very much for the amazing plugin,

but the question is how to split traffic detail on 1 main host ?
please see the example :

http {
    vhost_traffic_status_zone;
    include       mime.types;
    default_type  application/octet-stream;
   server {
        listen 80;
        server_name main.com;

        location / {
            root   /home/admin/html;
            index  index.html index.htm;
        }
        location /status {
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }

    server {
        listen 80;
        server_name user1.com;

        location /1 {
            root   /home/user1/html/1;
            index  index.html index.htm;
        }
        location /status {  
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }

    server {
        listen 80;
        server_name user2.com;

        location /2 {
            root   /home/user2/html/2;
            index  index.html index.htm;
        }
        location /status {  
            vhost_traffic_status_display;
            vhost_traffic_status_display_format html;
        }
    }
}

The result is in the attachment.

How can I split the virtual host traffic and show it on the /status page?

nginx version 1.11.3, compiled from source
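One hedged possibility for the question above: the module's vhost_traffic_status_filter_by_host directive keys the server zones by the requested $host rather than by the first server_name, so each virtual host should show up as its own row on any /status page. A sketch for the http block above:

```nginx
http {
    vhost_traffic_status_zone;

    # Group server-zone statistics by the Host header, so main.com,
    # user1.com, and user2.com are reported as separate zones.
    vhost_traffic_status_filter_by_host on;

    ...
}
```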

Can't disable server zones by host

Hi,

How can I disable server zone statistics by server_name/$host?

vhost_traffic_status_filter off;
vhost_traffic_status_filter_by_host off;

These directives don't make any change on the status page.

worker process XXXXX exited on signal 11

Hi.

I have a problem with the current master version.

2015/05/30 04:12:10 [alert] 10757#0: worker process 12092 exited on signal 11
2015/05/30 04:12:11 [alert] 10757#0: worker process 12093 exited on signal 11
2015/05/30 04:12:12 [alert] 10757#0: worker process 12106 exited on signal 11
2015/05/30 04:16:01 [alert] 24442#0: worker process 24443 exited on signal 11
2015/05/30 04:16:02 [alert] 24442#0: worker process 24463 exited on signal 11
2015/05/30 04:16:12 [alert] 24442#0: worker process 24527 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: worker process 24531 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24531
2015/05/30 04:16:17 [alert] 24442#0: worker process 24534 exited on signal 11
2015/05/30 04:16:17 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24534
2015/05/30 04:16:18 [alert] 24442#0: worker process 24535 exited on signal 11
2015/05/30 04:16:20 [alert] 24442#0: worker process 24536 exited on signal 11
2015/05/30 04:16:20 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24536
2015/05/30 04:16:27 [alert] 24442#0: worker process 24539 exited on signal 11
2015/05/30 04:16:27 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24539
2015/05/30 04:16:28 [alert] 24442#0: worker process 24548 exited on signal 11
2015/05/30 04:16:31 [alert] 24442#0: worker process 24549 exited on signal 11
2015/05/30 04:16:31 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24549
2015/05/30 04:16:32 [alert] 24442#0: worker process 24551 exited on signal 11
2015/05/30 04:16:32 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24551
2015/05/30 04:16:35 [alert] 24442#0: worker process 24552 exited on signal 11
2015/05/30 04:16:35 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24552
2015/05/30 04:16:36 [alert] 24442#0: worker process 24554 exited on signal 11
2015/05/30 04:16:36 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24554
2015/05/30 04:16:37 [alert] 24442#0: worker process 24555 exited on signal 11
2015/05/30 04:16:37 [alert] 24442#0: worker process 24556 exited on signal 11
2015/05/30 04:16:38 [alert] 24442#0: worker process 24557 exited on signal 11
2015/05/30 04:16:38 [alert] 24442#0: worker process 24558 exited on signal 11
2015/05/30 04:16:40 [alert] 24442#0: worker process 24559 exited on signal 11
2015/05/30 04:16:40 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24559
2015/05/30 04:16:47 [alert] 24442#0: worker process 24562 exited on signal 11
2015/05/30 04:16:47 [alert] 24442#0: shared memory zone "vhost_traffic_status" was locked by 24562

Nginx workers are being killed, which causes a percentage of requests to fail. I've reverted to this tree and it works fine: https://github.com/vozlt/nginx-module-vts/tree/5548b2df4a3f698474c50b27a490b108c429850f
One of the subsequent commits introduced the issue.

I am using the latest Tengine master https://github.com/alibaba/tengine (nginx 1.6.2 compatible).

Sincerely,

Alex.

cacheZones output combining two different caches

We have two different cache zones, search and search_live, and it seems whichever one is accessed first shows up in cacheZones; both zones then aggregate their stats into that single zone (I have confirmed on the server that the caches are separate and are being handled appropriately).

Potentially an issue with _ in the name?

Response timings have changed

Response timings were changed in http://hg.nginx.org/nginx/rev/59fc60585f1e

ngx_http_vhost_traffic_status_module.c(476) : error C2039: 'response_sec' : is not a member of 'ngx_http_upstream_state_t'
src/http\ngx_http_upstream.h(56) : see declaration of 'ngx_http_upstream_state_t'
ngx_http_vhost_traffic_status_module.c(476) : error C2039: 'response_msec' : is not a member of 'ngx_http_upstream_state_t'
src/http\ngx_http_upstream.h(56) : see declaration of 'ngx_http_upstream_state_t'

Maybe:
// (state[i].response_sec * 1000 + state[i].response_msec);
state[i].response_time; /* after that change, response_time already holds the total in milliseconds */

?

Response time on server or filter zones?

It would be incredibly useful if I could monitor the response time of server zones.

I'm happy to open a pull request, if you can tell me where I should start looking.

broken nginx 1.9.7 support ?

@vozlt, did the latest Nov 20th commits break nginx 1.9.7 support (https://community.centminmod.com/posts/20624/)? It compiled fine prior to those commits.

make -f objs/Makefile install
make[1]: Entering directory `/svr-setup/nginx-1.9.7'
ccache /usr/bin/clang -ferror-limit=0 -c -I/usr/local/include/luajit-2.1  -pipe  -O -Wall -Wextra -Wpointer-arith -Wconditional-uninitialized -Wno-unused-parameter -Werror -g -m64 -mtune=native -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations -Wno-unused-parameter -Wno-unused-const-variable -Wno-conditional-uninitialized -Wno-mismatched-tags -Wno-c++11-extensions -Wno-sometimes-uninitialized -Wno-parentheses-equality -Wno-tautological-compare -Wno-self-assign -Wno-deprecated-register -Wno-deprecated -Wno-invalid-source-encoding -Wno-pointer-sign -Wno-parentheses -Wno-enum-conversion  -DNDK_SET_VAR -DNDK_UPSTREAM_LIST -DNDK_SET_VAR  -I src/core -I src/event -I src/event/modules -I src/os/unix -I ../ngx_pagespeed-1.9.32.10-beta/psol/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/chromium/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/google-sparsehash/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/google-sparsehash/gen/arch/linux/x64/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/protobuf/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/re2/src -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/out/Release/obj/gen -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/out/Release/obj/gen/protoc_out/instaweb -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/apr/src/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/aprutil/src/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/apr/gen/arch/linux/x64/include -I ../ngx_pagespeed-1.9.32.10-beta/psol/include/third_party/aprutil/gen/arch/linux/x64/include -I ../ngx_devel_kit-0.2.19/objs -I objs/addon/ndk -I /usr/local/include/luajit-2.1 -I ../lua-nginx-module-0.9.18/src/api -I ../nginx_upstream_check_module-0.3.0 -I ../pcre-8.37 -I ../libressl-2.2.4/.openssl/include -I objs -I src/http -I src/http/modules -I src/http/v2 -I ../ngx_devel_kit-0.2.19/src -I src/mail -I src/stream \
        -o objs/addon/src/ngx_http_vhost_traffic_status_module.o \
        ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:904:22: error: adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]
    len = ngx_strlen(ngx_vhost_traffic_status_group_to_string(type));
          ~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:61:51: note: expanded from macro 'ngx_strlen'
#define ngx_strlen(s)       strlen((const char *) s)
                                                  ^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:904:22: note: use array indexing to silence this warning
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:61:51: note: expanded from macro 'ngx_strlen'
#define ngx_strlen(s)       strlen((const char *) s)
                                                  ^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:915:23: error: adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]
    p = ngx_cpymem(p, ngx_vhost_traffic_status_group_to_string(type), len);
        ~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:103:60: note: expanded from macro 'ngx_cpymem'
#define ngx_cpymem(dst, src, n)   (((u_char *) memcpy(dst, src, n)) + (n))
                                                           ^
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:915:23: note: use array indexing to silence this warning
../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:341:30: note: expanded from macro 'ngx_vhost_traffic_status_group_to_string'
    : "NO\0UA\0UG\0CC\0FG\0" + 3 * n                                           \
                             ^
src/core/ngx_string.h:103:60: note: expanded from macro 'ngx_cpymem'
#define ngx_cpymem(dst, src, n)   (((u_char *) memcpy(dst, src, n)) + (n))
                                                           ^
2 errors generated.
make[1]: *** [objs/addon/src/ngx_http_vhost_traffic_status_module.o] Error 1
make[1]: Leaving directory `/svr-setup/nginx-1.9.7'
make: *** [install] Error 2

Ah, it might be due to using the clang compiler: "adding 'unsigned int' to a string does not append to the string [-Werror,-Wstring-plus-int]"?

Add average response time per interval

It would be really nice to have a sliding window with the average (max and min too?) response time to users.
A lot of people do this today through the access log or other modules, like nginx-statsd, which seems very expensive to me.

Feature Request: Cache status

First of all great work on this module!

One thing that would make this module even more awesome is a cache status section (efficiency, hits/misses) in the output statistics.

Additionally, do you think you could tag releases for easier integration when building packages?

Got a coredump when the group does not exist

http://127.0.0.1:83/status/control?cmd=status&group=test

Program terminated with signal 11, Segmentation fault.
#0  0x00000000005851cf in ngx_http_vhost_traffic_status_node_control_range_set
    (control=0x7f40c32e61a8)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:2644
2644            if (control->zone->len == 0) {
Missing separate debuginfos, use: debuginfo-install glibc-2.17-106.el7_2.4.x86_64 libgcc-4.8.5-4.el7.x86_64 lua-5.1.4-14.el7.x86_64 nss-softokn-freebl-3.16.2.3-13.el7_1.x86_64 pcre-8.32-15.el7.x86_64 sssd-client-1.13.0-40.el7_2.2.x86_64 zlib-1.2.7-15.el7.x86_64
(gdb) bt
#0  0x00000000005851cf in ngx_http_vhost_traffic_status_node_control_range_set
    (control=0x7f40c32e61a8)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:2644
#1  0x0000000000582439 in ngx_http_vhost_traffic_status_display_handler_control
    (r=0x7f40c32e51a0)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:1516
#2  0x0000000000581f39 in ngx_http_vhost_traffic_status_display_handler (
    r=0x7f40c32e51a0)
    at ../nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:1386
#3  0x00000000004a06e1 in ngx_http_core_content_phase (r=0x7f40c32e51a0, 
    ph=0x7f40c33041c8) at src/http/ngx_http_core_module.c:1368
#4  0x000000000049f227 in ngx_http_core_run_phases (r=0x7f40c32e51a0)
    at src/http/ngx_http_core_module.c:845
#5  0x000000000049f195 in ngx_http_handler (r=0x7f40c32e51a0)
    at src/http/ngx_http_core_module.c:828
#6  0x00000000004aedfa in ngx_http_process_request (r=0x7f40c32e51a0)
    at src/http/ngx_http_request.c:1914
#7  0x00000000004ad75e in ngx_http_process_request_headers (rev=0x7f40c31b8310)
    at src/http/ngx_http_request.c:1346
#8  0x00000000004acb41 in ngx_http_process_request_line (rev=0x7f40c31b8310)
    at src/http/ngx_http_request.c:1026
#9  0x00000000004ab7af in ngx_http_wait_request_handler (rev=0x7f40c31b8310)
    at src/http/ngx_http_request.c:503
#10 0x000000000048db99 in ngx_epoll_process_events (cycle=0x7f40c32b5000, 
    timer=924, flags=1) at src/event/modules/ngx_epoll_module.c:907
#11 0x000000000047cc89 in ngx_process_events_and_timers (cycle=0x7f40c32b5000)
    at src/event/ngx_event.c:242
#12 0x000000000048b40b in ngx_worker_process_cycle (cycle=0x7f40c32b5000, 
    data=0x1) at src/os/unix/ngx_process_cycle.c:753
#13 0x0000000000487d07 in ngx_spawn_process (cycle=0x7f40c32b5000, 
    proc=0x48b316 <ngx_worker_process_cycle>, data=0x1, 
    name=0x71a783 "worker process", respawn=-3)
    at src/os/unix/ngx_process.c:198
#14 0x000000000048a247 in ngx_start_worker_processes (cycle=0x7f40c32b5000, 
    n=2, type=-3) at src/os/unix/ngx_process_cycle.c:358
#15 0x0000000000489864 in ngx_master_process_cycle (cycle=0x7f40c32b5000)
    at src/os/unix/ngx_process_cycle.c:130
#16 0x000000000044be59 in main (argc=3, argv=0x7ffdf3a760d8)
    at src/core/nginx.c:367

VTS segfault on nginx reload with SIGHUP

I was careful to leave enough time between reloads for any workers that needed to die to do so, but that only causes a failure-to-bind error from nginx, not a segfault. I think maybe the init conf method isn't hardened for repeated re-cycling?

Repro steps:
service nginx start
gdb -p

signal SIGHUP
ctrl-c
signal SIGHUP
ctrl-c
signal SIGHUP
segfault
backtrace full:
#0  __memcpy_sse2_unaligned () at ../sysdeps/x86_64/multiarch/memcpy-sse2-unaligned.S:33

No locals.
#1  0x00000000004ad46b in memcpy (__len=, __src=, __dest=0x7f03996f2029) at /usr/include/x86_64-linux-gnu/bits/string3.h:51

No locals.
#2  ngx_http_vhost_traffic_status_filter_unique (pool=0x1100190, keys=keys@entry=0x1103780) at ./modules/nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:3184

        hash =
        p = 0x7f03996f2029 ""
        key = {len = 4976979, data = 0x7f03996f2010 "$server_addr:$server_port"}
        i = 0
        n = 1
        uniqs = 0x1196488
        filter_keys = 0x0
        filter =
        filters = 0x1157648
        filter_uniqs =
#3  0x00000000004ad975 in ngx_http_vhost_traffic_status_init_main_conf (cf=0x7ffe12032d40, conf=0x1103778) at ./modules/nginx-module-vts/src/ngx_http_vhost_traffic_status_module.c:4779

        ctx = 0x1103778
        rc =
        vtscf = 0x11037c8
#4  0x0000000000432e34 in ngx_http_block (cf=0x7ffe12032d40, cmd=0xa, conf=0x4bf13a) at src/http/ngx_http.c:269

        rv = 0xffff80fc6644eea7 <error: Cannot access memory at address 0xffff80fc6644eea7>
        ctx = 0x117b9e0
        s = 7487744
        clcf = 0x5
#5  0x0000000000419f33 in ngx_conf_handler (last=1, cf=0x7ffe12032d40) at src/core/ngx_conf_file.c:427

        rv =
        conf =
        i = 9
        confp =
        found = 1
        name = 0x1101220
        cmd = 0x70aae0 <ngx_http_commands>
#6  ngx_conf_parse (cf=cf@entry=0x7ffe12032d40, filename=filename@entry=0x1100390) at src/core/ngx_conf_file.c:283

        rv =
        p =
        size =
        fd = 4
        rc =
        buf = {
          pos = 0x10fafec "\n", ' ' <repeats 17 times>, "tcl tk;\n    application/x-x509-ca-cert", ' ' <repeats 12 times>, "der pem crt;\n    application/x-xpinstall", ' ' <repeats 15 times>, "xpi;\n    application/xhtml+xml", ' ' <repeats 17 times>, "xhtml;\n    application/xspf+xm"...,
          last = 0x10fafed ' ' <repeats 17 times>, "tcl tk;\n    application/x-x509-ca-cert", ' ' <repeats 12 times>, "der pem crt;\n    application/x-xpinstall", ' ' <repeats 15 times>, "xpi;\n    application/xhtml+xml", ' ' <repeats 17 times>, "xhtml;\n    application/xspf+xml"..., file_pos = 8602557481092212992, file_last = 3544386174626525807,
          start = 0x10fa6c0 "user www-data;\nworker_rlimit_core 500m;\nworking_directory /tmp/nginxcores/;\nworker_rlimit_nofile 1000000;\nworker_processes 16;\nworker_cpu_affinity\n", ' ' <repeats 20 times>, '0' <repeats 15 times>, "100000000\n        "...,
          end = 0x10fb6c0 "\020\020", tag = 0xffff8001edfcd401, file = 0x401, shadow = 0x100, temporary = 1, memory = 0, mmap = 0, recycled = 0, in_file = 1, flush = 1, sync = 0, last_buf = 0, last_in_chain = 1, last_shadow = 0, temp_file = 0, num = 0}
        tbuf =
        prev = 0x0
        conf_file = {file = {fd = 4, name = {len = 21, data = 0x1100406 "/etc/nginx/nginx.conf"}, info = {st_dev = 2049, st_ino = 133886, st_nlink = 1, st_mode = 33188, st_uid = 0, st_gid = 0, __pad0 = 0, st_rdev = 0, st_size = 2349, st_blksize = 4096, st_blocks = 8,
              st_atim = {tv_sec = 1468545661, tv_nsec = 100448752}, st_mtim = {tv_sec = 1468545433, tv_nsec = 231630832}, st_ctim = {tv_sec = 1468545433, tv_nsec = 231630832}, __glibc_reserved = {0, 0, 0}}, offset = 2349, sys_offset = 17826272, log = 0x11904b8,
            thread_handler = 0x7ffe12032c98, thread_ctx = 0x7ffe12032c98, aio = 0x4000, valid_info = 0, directio = 1}, buffer = 0x7ffe12032b50, dump = 0x0, line = 87}
        cd =
        type = parse_file
#7  0x00000000004177a1 in ngx_init_cycle (old_cycle=old_cycle@entry=0x11904a0) at src/core/ngx_cycle.c:268

        rv =
        senv = 0x7ffe12033358
        env =
        i =
        n =
        log = 0x11904b8
        conf = {name = 0x0, args = 0x1100fe8, cycle = 0x11001e0, pool = 0x1100190, temp_pool = 0x11325b0, conf_file = 0x7ffe12032ba0, log = 0x11904b8, ctx = 0x1101550, module_type = 1347703880, cmd_type = 33554432, handler = 0x0, handler_conf = 0x0}
        pool = 0x1100190
        cycle = 0x11001e0
        old =
        shm_zone =
        oshm_zone =
        part =
        opart =
        file =
        ls =
        nls =
        ccf =
        old_ccf =
        module =
hostname = "omitted.com"
#8 0x0000000000428a22 in ngx_master_process_cycle (cycle=0x11904a0, cycle@entry=0x10f4340) at src/os/unix/ngx_process_cycle.c:234

    title = <optimized out>
    p = <optimized out>
    size = <optimized out>
    i = <optimized out>
    n = <optimized out>
    sigio = 0
    set = {__val = {0 <repeats 16 times>}}
    itv = {it_interval = {tv_sec = 0, tv_usec = 0}, it_value = {tv_sec = 0, tv_usec = 0}}
    live = <optimized out>
    delay = 0
    ls = <optimized out>
    ccf = 0x11910b0

#9 0x0000000000407b9f in main (argc=, argv=) at src/core/nginx.c:359

    b = <optimized out>
    log = 0x7251c0 <ngx_log>
    i = <optimized out>
    cycle = 0x10f4340
    init_cycle = {conf_ctx = 0x0, pool = 0x10f3d90, log = 0x7251c0 <ngx_log>, new_log = {log_level = 0, file = 0x0, connection = 0, disk_full_time = 0, handler = 0x0, data = 0x0, writer = 0x0, wdata = 0x0, action = 0x0, next = 0x0}, log_use_stderr = 0, files = 0x0, 
      free_connections = 0x0, free_connection_n = 0, reusable_connections_queue = {prev = 0x0, next = 0x0}, listening = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, paths = {elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, config_dump = {
        elts = 0x0, nelts = 0, size = 0, nalloc = 0, pool = 0x0}, open_files = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, pool = 0x0}, shared_memory = {last = 0x0, part = {elts = 0x0, nelts = 0, next = 0x0}, size = 0, nalloc = 0, 
        pool = 0x0}, connection_n = 0, files_n = 0, connections = 0x0, read_events = 0x0, write_events = 0x0, old_cycle = 0x0, conf_file = {len = 21, data = 0x4b6749 "/etc/nginx/nginx.conf"}, conf_param = {len = 0, data = 0x0}, conf_prefix = {len = 11, 
        data = 0x4b6749 "/etc/nginx/nginx.conf"}, prefix = {len = 11, data = 0x4b673d "/etc/nginx/"}, lock_file = {len = 0, data = 0x0}, hostname = {len = 0, data = 0x0}}
    cd = <optimized out>
    ccf = <optimized out>

Can't use control functionality if the location has more than one segment

I'm using this module with nginx 1.8 and updated to VTS v0.1.6 to benefit from the stats reset functionality.

I was surprised that calling any URL like /my/location/status/control?args would simply return the HTML page without executing the command.

I tried again with a vanilla config, and it seems the module doesn't support control if the location containing "vhost_traffic_status_display" is composed of multiple segments.

This configuration will work without a problem:

location /status {
     vhost_traffic_status_display;
     vhost_traffic_status_display_format html;
}

while calling http://127.0.0.1/status/control?cmd=reset&group=server&zone=_

If the location is configured with a composed path


location /a/segment/structure/status {
     vhost_traffic_status_display;
     vhost_traffic_status_display_format html;
}

http://127.0.0.1/a/segment/structure/status/control?cmd=reset&group=server&zone=_

will return the HTML page.

Is there anything I'm doing wrong?

If nginx is overloaded, the module stops responding

Hi.

This module is especially useful in case of a DDoS attack or just heavy traffic, but the problem is that if nginx is loaded to 100%, the module stops responding.

Is there any way to give the module the highest priority over other tasks, so that it keeps working even when the load is 100%?

Add a Prometheus display format

Prometheus (prometheus.io) is a very powerful open-source monitoring solution. Prometheus gets its data by scraping metrics from applications.

Using Prometheus with nginx-module-vts is a good way to monitor and aggregate metrics from more than one nginx server.

One way to do this would be for nginx-module-vts to support Prometheus as a display format.
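A sketch of what the requested configuration could look like, assuming the display-format directive gained a prometheus value (hypothetical here, mirroring the existing html/json values):

```nginx
location /status {
    vhost_traffic_status_display;
    # Hypothetical value alongside the existing html/json formats:
    # metrics would be rendered in the Prometheus text exposition
    # format so a Prometheus server can scrape this endpoint.
    vhost_traffic_status_display_format prometheus;
}
```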

Incorrect Upstream/Server/State

Hi,

I've configured an upstream with two servers:

upstream BENJS {
    server 127.0.0.1:53000;
    server 127.0.0.1:53001;
}

On 127.0.0.1:53001 the service is stopped, but on the VTS status page the state is "up".

Is this a mistake?

Best Regards,
Zarmack
