
lua-resty-redis-cluster's Introduction

resty-redis-cluster

OpenResty Lua client for Redis Cluster.

Why we built this client

OpenResty has no official client that supports Redis Cluster (see the discussion at openresty/lua-resty-redis#43). Looking at other third-party OpenResty Redis Cluster clients, we did not find one that fully supported the Redis Cluster features our project required.

resty-redis-cluster is a newly built OpenResty module that currently supports most Redis Cluster features.

While building the client, thanks go to https://github.com/cuiweixie/lua-resty-redis-cluster, which gave us a good reference.

Feature list

  1. resty-redis-cluster caches the slot -> Redis node mapping, computes the slot of a key with CRC16, and then accesses data through the cached mapping. The CRC16 calculation and caching approach is similar to https://github.com/cuiweixie/lua-resty-redis-cluster.

  2. Supports the usual Redis Cluster access patterns and most commands.

  3. Supports pipeline operations. When keys are spread across multiple nodes, resty-redis-cluster groups the slots that map to the same target node, then commits them as several pipeline groups.

  4. Supports hashtags. Just write your key like name{tag}.

  5. Supports reading from slave nodes via readonly mode, for both regular commands and pipelines. With slave reads enabled, resty-redis-cluster randomly picks a node mapped to the requested key, whether master or slave.

  6. Supports online resharding of the Redis cluster (for both regular commands and pipelines). resty-redis-cluster handles the MOVED signal by re-caching the slot mapping and retrying, and handles the ASK signal by retrying with an ASKING command to the redirection target node.

  7. Supports error handling for the different failure scenarios of Redis Cluster (e.g. a single slave failing, a master failing, the whole cluster down).

  8. Fixes some critical issues of https://github.com/cuiweixie/lua-resty-redis-cluster:

    1. Memory leaks under high throughput. A socket request suspends and switches the coroutine, so multiple in-flight requests could each hold a reference to the large slot mapping cache, eventually crashing the LuaJIT VM.

    2. The slot mapping cache must be refreshed whenever any Redis node connection fails; otherwise we never pick up the latest mapping and keep failing. This mirrors Jedis, which refreshes its cache mapping on any unknown connection issue.

    3. ASK redirection must be handled for regular commands, not only MOVED.

    4. Pipelines must also handle the MOVED signal by refreshing the slot mapping cache and retrying.

  9. Supports authentication.

  10. Supports the eval command with zero or one key.

  11. Also verified to work properly with AWS ElastiCache.

  12. Allows rolling replacement of the Redis cluster. Example: a cluster with nodes at 10.0.0.2, .3 and .4 is running. New nodes are introduced at 10.0.0.5, .6 and .7, and slots are relocated from nodes .2, .3 and .4 to .5, .6 and .7. The initial nodes can then be removed without nginx downtime, since the initial configuration is no longer used.
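The slot mapping in feature 1 and the hashtag rule in feature 4 can be sketched together in pure Lua. This is only an illustration, assuming the standard CRC-16/XMODEM parameters that Redis Cluster uses; the module itself computes this via redis_slot.c / xmodem.lua, and the function names here (crc16, key_slot) are ours, not part of the module's API:

```lua
local bit = require "bit"  -- LuaJIT bit library, bundled with OpenResty

-- CRC-16/XMODEM: poly 0x1021, init 0, no reflection
-- (well-known check value: crc16("123456789") == 0x31C3)
local function crc16(key)
    local crc = 0
    for i = 1, #key do
        crc = bit.band(bit.bxor(crc, bit.lshift(key:byte(i), 8)), 0xFFFF)
        for _ = 1, 8 do
            if bit.band(crc, 0x8000) ~= 0 then
                crc = bit.band(bit.bxor(bit.lshift(crc, 1), 0x1021), 0xFFFF)
            else
                crc = bit.band(bit.lshift(crc, 1), 0xFFFF)
            end
        end
    end
    return crc
end

local function key_slot(key)
    -- hashtag rule: if the key contains a non-empty {...} section,
    -- only that section is hashed, so name{tag}:a and name{tag}:b
    -- always land in the same slot
    local tag = key:match("{(.-)}")
    if tag and #tag > 0 then
        key = tag
    end
    return crc16(key) % 16384
end
```

Because only the tag is hashed, keys sharing a hashtag map to one slot, which is what makes single-node pipeline commits of related keys possible.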

Installation

  1. Compile and generate librestyredisslot.so from redis_slot.c (this can be done with gcc).

  2. Add xmodem.lua and rediscluster.lua under lualib. Also install the lua-resty-redis and lua-resty-lock libraries.

    nginx.conf like:

    lua_package_path "/path/lualib/?.lua;"; lua_package_cpath "/path/lualib/?.so;";

  3. Add to nginx.conf:

    lua_shared_dict redis_cluster_slot_locks 100k;

Sample usage

  1. Use normal commands:
local config = {
    dict_name = "test_locks",               --shared dictionary name for locks
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    max_redirection = 5,                    --maximum retry attempts for redirection
    max_connection_attempts = 1             --maximum retry attempts for connection
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end

With authentication:

local config = {
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    read_timeout = 1000,                    --timeout while reading
    send_timeout = 1000,                    --timeout while sending
    max_redirection = 5,                    --maximum retry attempts for redirection,
    max_connection_attempts = 1,            --maximum retry attempts for connection
    auth = "pass"                           --set password while setting auth
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end 
  2. Use pipeline:
local cjson = require "cjson"

local config = {
    name = "testCluster",
    serv_list = {
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,
    keepalive_cons = 1000,
    connect_timeout = 1000,
    read_timeout = 1000,
    send_timeout = 1000,
    max_redirection = 5,
    max_connection_attempts = 1
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)


red_c:init_pipeline()
red_c:get("name")
red_c:get("name1")
red_c:get("name2")

local res, err = red_c:commit_pipeline()

if not res then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(cjson.encode(res))
end
  3. Enable slave node read:

    Note: enable_slave_read is currently limited to pure-read scenarios. We do not yet support mixed read/write workloads (distinguishing read and write operations) within a single config set with enable_slave_read. If your scenario includes writes, please disable the option.

    Alternatively, you can isolate the pure-read scenario into a separate config set.

local cjson = require "cjson"

local config = {
    name = "testCluster",
    enable_slave_read = true,
    serv_list = {
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,
    keepalive_cons = 1000,
    connect_timeout = 1000,
    read_timeout = 1000,
    send_timeout = 1000,
    max_redirection = 5,
    max_connection_attempts = 1
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end
  4. Hashtag:
local cjson = require "cjson"

local config = {
    name = "testCluster",
    enable_slave_read = true,
    serv_list = {
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,
    keepalive_cons = 1000,
    connect_timeout = 1000,
    read_timeout = 1000,
    send_timeout = 1000,
    max_redirection = 5,
    max_connection_attempts = 1
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)


red_c:init_pipeline()
red_c:get("item100:sub1{100}")
red_c:get("item100:sub2{100}")
red_c:get("item100:sub3{100}")

local res, err = red_c:commit_pipeline()

if not res then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(cjson.encode(res))
end
  5. Eval:
local config = {
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    read_timeout = 1000,                    --timeout while reading
    send_timeout = 1000,                    --timeout while sending
    max_redirection = 5,                    --maximum retry attempts for redirection
    max_connection_attempts = 1             --maximum retry attempts for connection
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)
local step = 2
local v, err = red_c:eval("return redis.call('incrby',KEYS[1],ARGV[1])",1,"counter",step)
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end
  6. Use SSL:

Note: connect_opts is an optional config field that, when set, is passed to the underlying redis connect call. More information about these options can be found in the lua-resty-redis documentation.

local config = {
    dict_name = "test_locks",               --shared dictionary name for locks
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    max_redirection = 5,                    --maximum retry attempts for redirection
    max_connection_attempts = 1,             --maximum retry attempts for connection
    connect_opts = {
        ssl = true,
        ssl_verify = true,
        server_name = "test-cluster.redis.myhost.com",
        pool = "redis-cluster-connection-pool",
        pool_size = 20,
        backlog = 10
    }
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end

### Limitation

1. Doesn't support MSET and MGET operations yet.

2. Doesn't support transaction operations: MULTI, DISCARD, EXEC, WATCH.

3. Doesn't support pub/sub. Redis Cluster does not actually check slots for pub/sub commands, so using a plain resty redis client to connect to a specific node of the cluster still works.

4. Applies only when slave read is enabled: if a new slave node is discovered (without adding a new master), a slot mapping cache refresh must be retriggered, otherwise the slot mapping still records the previous version of the node tables. (The easiest way is to restart the nginx nodes.)

5. Applies only when slave read is enabled: if a slave -> master link is down (perhaps still syncing or recovering), resty-redis-cluster will not filter those nodes out, so reads from a slave may return unexpected responses. We suggest always catching response-parsing exceptions when slave read is enabled. This is because the client depends on the CLUSTER SLOTS command.
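As noted in limitation 3 above, pub/sub can bypass the cluster client entirely. A minimal sketch using the plain lua-resty-redis client against a single node of the cluster; the host, port, and channel name here are placeholders, not values from this module:

```lua
local cjson = require "cjson"
local redis = require "resty.redis"  -- plain client, not the cluster module

local red = redis:new()
red:set_timeout(1000)

-- pub/sub commands are not slot-checked by Redis Cluster,
-- so connecting to any one node of the cluster works
local ok, err = red:connect("127.0.0.1", 7001)
if not ok then
    ngx.log(ngx.ERR, "connect failed: ", err)
    return
end

local res, err = red:subscribe("news")
if not res then
    ngx.log(ngx.ERR, "subscribe failed: ", err)
    return
end

-- read_reply blocks until a published message arrives or the timeout fires
local msg, err = red:read_reply()
if msg then
    ngx.say(cjson.encode(msg))
end

red:unsubscribe("news")
red:set_keepalive(60000, 100)
```

Note that lua-resty-redis only allows set_keepalive after leaving subscribed mode, hence the unsubscribe before returning the connection to the pool.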
   
   
## Copyright and License

This module is licensed under the Apache License, Version 2.0.

Copyright (C) 2017, by steve.xu stevehui0511@gmail.com

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

lua-resty-redis-cluster's People

Contributors

artomal, bungle, membphis, steve0511, tieske, toredash, vinayakhulawale, wangrzneu


lua-resty-redis-cluster's Issues

attempt to index a nil value

I'm trying to fetch data from a Redis cluster via OpenResty nginx + LuaJIT + this module.
I've tried different environments (alpine-fat / stretch / with and without LuaRocks / OpenResty 1.13 - 1.15) and got the same error.
Probably trouble with resty.utils, but the same config works perfectly on one of the physical servers...

Error message:
lua entry thread aborted: runtime error: /usr/local/openresty/lualib/rediscluster.lua:296: attempt to index a nil value
stack traceback:
/usr/local/openresty/lualib/rediscluster.lua:416: in function 'smembers'
/lua/read.lua:34: in function 'r_members'
/lua/read.lua:70: in function 'gatherNightNodes'
/lua/read.lua:208: in function 'dispatcher'

My last setup:

  1) openresty + opm
    xiedacon/lua-pretty-json 0.1
    fffonion/lua-resty-shdict-server 0.02
    xiangnanscu/lua-resty-utils 1.21
    openresty/lua-resty-redis 0.25
    xiangnanscu/lua-resty-repr 1.0
    openresty/lua-resty-lock 0.07

openresty -V
nginx version: openresty/1.15.8.2
built by gcc 8.3.0 (Alpine 8.3.0)
built with OpenSSL 1.1.1c 28 May 2019
TLS SNI support enabled
configure arguments: --prefix=/usr/local/openresty/nginx --with-cc-opt='-O2 -DNGX_LUA_ABORT_AT_PANIC -I/usr/local/openresty/pcre/include -I/usr/local/openresty/openssl/include' --add-module=../ngx_devel_kit-0.3.1rc1 --add-module=../echo-nginx-module-0.61 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.32 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.08 --add-module=../srcache-nginx-module-0.31 --add-module=../ngx_lua-0.10.15 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.7 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.7 --with-ld-opt='-Wl,-rpath,/usr/local/openresty/luajit/lib -L/usr/local/openresty/pcre/lib -L/usr/local/openresty/openssl/lib -Wl,-rpath,/usr/local/openresty/pcre/lib:/usr/local/openresty/openssl/lib' --with-pcre --with-compat --with-file-aio --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-http_xslt_module=dynamic --with-ipv6 --with-mail --with-mail_ssl_module --with-md5-asm --with-pcre-jit --with-sha1-asm --with-stream --with-stream_ssl_module --with-threads --with-stream --with-stream_ssl_preread_module
2) nginx.conf
worker_processes 20;
events {
    worker_connections 4096;
    multi_accept on;
    use epoll;
}
http {
    lua_shared_dict redis_cluster_slot_locks 100k;
    log_format travelata '$remote_addr - $remote_user [$time_local] '
                         '"$request" $status $body_bytes_sent '
                         '"$http_referer" "$http_user_agent" $request_time';
    server {
        listen 80;
        gzip on;
        gzip_disable "msie6";
        gzip_vary on;
        gzip_proxied any;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        gzip_http_version 1.0;
        gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
        proxy_set_header Accept-Encoding 'gzip';
        access_log /dev/stdout;
        error_log /dev/stdout;
        location /searchOneThread {
            default_type text/html;
            content_by_lua_file "/lua/read.lua";
        }
        location /writeOneThread {
            default_type text/html;
            content_by_lua_file "/lua/write.lua";
        }
        location /nginx_status {
            stub_status on;
            access_log off;
        }
    }
}
3) content_by_lua_file "/lua/read.lua"
local json = require('cjson')
local redis_cluster = require "rediscluster"
local utils = require "resty.utils"

ngx.req.read_body()

local data = ngx.req.get_body_data();
local request = json.decode(data);

local config = {
name = "testCluster", --rediscluster name
serv_list = { --redis cluster node list(host and port),
{ ip = "10.10.6.4", port = 6380 },
{ ip = "10.10.6.4", port = 6381 },
},
--enableSlaveRead = true,
keepalive_timeout = 60000, --redis connection pool idle timeout
keepalive_cons = 1000, --redis connection pool size
connection_timout = 1000, --timeout while connecting
max_redirection = 5, --maximum retry attempts for redirection
}

local red_c = redis_cluster:new(config)

local currentTime = os.time()

--[[
{"criteria":{"operators":[],"hotels":[],"resorts":[],"hotelCategories":[4,7,8,9],"nights":[7,8,9,10,11,12],"priceRangeFrom":6000,"priceRangeTo":1000000,"rootNodes":["20190308-92-2-2-0-0","20190309-92-2-2-0-0","20190310-92-2-2-0-0"],"meals":[],"limit":6000,"toursPerHotelLimit":8}}
]]--

local r_members = function (redis, key)

local res, err = redis:smembers(key)

if not res then
ngx.log("Error redis smembers key: " .. key .. "Error: " .. err)
end

return res
--[[
local co = coroutine.create(function (redis, key)
coroutine.yield(redis:smembers(key))
end)

return coroutine.resume(co, redis, key)
]]--
end

local hGetAll = function (redis, key, currentTime)
res, ok = redis:hgetall(key)

-- res, ok = redis:eval("local r = {} local t = {} for i, d in pairs(redis.call('HGETALL', KEYS[1])) do if(math.fmod(i,2) > 0) then t = {i = d, d = nil} else if tonumber(string.sub(d, 1,10)) > tonumber(ARGV[1]) then r[#r+1] = t.i r[#r+1] = d end t = {i = nil, d = nil} end end return r", 1, key, currentTime)

if not res then
return {}
end
return res;
-- return redis:array_to_hash(res)
end

local buildNodes = function (parentNodeKey, members)
local new = {}
for i,v in pairs(members) do new[i] = parentNodeKey .. "-" .. v end
return new
end;

local getNextLevelSmembers = function (redis, nodeType, nodeKey, criteria)
local members = r_members(redis, nodeType .. nodeKey);

if nodeType == "RootNode_" then
	if criteria.nights[1] == nil then
       return buildNodes(nodeKey, members)
    else
       return buildNodes(nodeKey, utils.filter(members, function(el) return utils.list_has(criteria.nights, tonumber(el)) end))
    end
elseif nodeType == "NightNode_" then
    if criteria.resorts[1] == nil then
       return buildNodes(nodeKey, members)
    else
       return buildNodes(nodeKey, utils.filter(members, function(el) return utils.list_has(criteria.resorts, tonumber(el)) end))
    end
elseif nodeType == "ResortNode_" then
    return members
end

end

local gatherNightNodes = function (redis, rootNode, criteria)
return getNextLevelSmembers(redis, "RootNode_", rootNode, criteria)
end

local gatherResortNodes = function (redis, nightNode, criteria)
return getNextLevelSmembers(redis, "NightNode_", nightNode, criteria)
end

local gatherPriceNodes = function (redis, resortNode, priceNodesCollection, criteria)

local priceBucketFrom = tonumber(criteria.priceRangeFrom) ~= nil and tonumber(criteria.priceRangeFrom) > 0 and math.floor(criteria.priceRangeFrom / 3000) or nil
local priceBucketTo = tonumber(criteria.priceRangeTo) ~= nil and tonumber(criteria.priceRangeTo) > 0 and math.floor(criteria.priceRangeTo / 3000) or nil

local priceNodes = getNextLevelSmembers(redis, "ResortNode_", resortNode, criteria)

for i,v in pairs(priceNodes) do
if (priceBucketFrom == nil or priceBucketFrom <= tonumber(v)) and (priceBucketTo == nil or priceBucketTo >= tonumber(v)) then
if priceNodesCollection[v] == nil then
priceNodesCollection[v] = {}
end
priceNodesCollection[v][#priceNodesCollection[v]+1] = resortNode .. "-" .. v
end
end

return priceNodesCollection
end

local gatherTours = function(redis, priceNodeCollection, criteria, currentTime)

local tours = {}
local count = 0
local nodeTours;

local needExit;
local cycle = 0;

local filteredTour = false
local totalFiltered = 0
local totalPriceNodesRead = 0

local tour = {
    identity = nil,
    data = {}
}

redis:init_pipeline()
for pool, priceNodePool in priceNodeCollection do
    for i, v in ipairs(priceNodePool) do
        totalPriceNodesRead = totalPriceNodesRead + 1
        redis:hgetall("PriceNode_" .. v)

-- ngx.say("PriceNode_" .. v)
end
end

local nodeBuckets = redis:commit_pipeline()
for i, nodeTours in pairs(nodeBuckets) do
    for key, val in pairs(nodeTours) do
	    local match = {}
	    if(math.fmod(key,2) > 0) then
	        tour = {
                identity = val,
                data = {}
            }
	    else
            for s in val:gmatch("([^;]*);?") do
                table.insert(tour.data, s)
            end


	        if tonumber(tour.data[1]) < currentTime then
		        filteredTour = true

-- totalFiltered = totalFiltered + 1
elseif tonumber(criteria.priceRangeFrom) ~= nill and tonumber(criteria.priceRangeTo) ~= nill and tonumber(criteria.priceRangeFrom) > 0 and tonumber(criteria.priceRangeTo) > tonumber(criteria.priceRangeFrom)
and (tonumber(criteria.priceRangeFrom) > tonumber(tour.data[4]) or tonumber(criteria.priceRangeTo) < tonumber(tour.data[4])) then
filteredTour = true
elseif criteria.operators[1] ~= nil and not utils.list_has(criteria.operators, tonumber(tour.data[6])) then
filteredTour = true
elseif criteria.hotels[1] ~= nil and not utils.list_has(criteria.hotels, tonumber(tour.data[3])) then
filteredTour = true
elseif criteria.hotelCategories[1] ~= nil and not utils.list_has(criteria.hotelCategories, tonumber(tour.data[18])) then
filteredTour = true
elseif criteria.meals[1] ~= nil and not utils.list_has(criteria.meals, tonumber(tour.data[17])) then
filteredTour = true
end

            if not filteredTour then
                tours[#tours+1] = tour
                count = count + 1
            else
                totalFiltered = totalFiltered + 1
            end

            tour = {
                identity = nil,
                data = {}
            }

            filteredTour = false
        end
    end
end

-- ngx.say("Total filtered: " .. totalFiltered)
-- ngx.say("Total price nodes read: " .. totalPriceNodesRead)
return tours

end

local dispatcher = function(utils, redis, criteria, currentTime)

 local allNightNodes = {}
 local allResortNodes = {}
 local allPriceNodes = {}
 local priceNodesCollection = {}

-- local sortedPriceNodes = {}

 for i,v in pairs(criteria.rootNodes) do
     allNightNodes = utils.list_extend(allNightNodes, gatherNightNodes(redis, v, criteria) or {})
 end

 for i,v in pairs(allNightNodes) do
     allResortNodes = utils.list_extend(allResortNodes, gatherResortNodes(redis, v, criteria) or {})
 end

--for i,v in pairs(allResortNodes) do
--ngx.say(v)
--end

 for i,v in pairs(allResortNodes) do
     priceNodesCollection = gatherPriceNodes(redis, v, priceNodesCollection, criteria)
 end


 local toursPerHotels = {}

 local tours = gatherTours(redis, utils.sorted(priceNodesCollection, function(a,b) return tonumber(a) < tonumber(b) end), criteria, currentTime)

 local sortedTours = {}

 for i,v in utils.sorted(tours, function(a,b)
    return
	tonumber(tours[a].data[4] or 0)
	+ tonumber((tours[a].data[7] and tours[a].data[7] ~= '') and tours[a].data[7] or 0)
	< tonumber(tours[b].data[4] or 0)
	+ tonumber((tours[b].data[7] and tours[b].data[7] ~= '') and tours[b].data[7] or 0)
	 end) do

if toursPerHotels[v.data[3]] == nil then
    toursPerHotels[v.data[3]] = 0
end

if toursPerHotels[v.data[3]] < criteria.toursPerHotelLimit then
        sortedTours[#sortedTours+1] = v
        toursPerHotels[v.data[3]] = toursPerHotels[v.data[3]] + 1
end

if(#sortedTours >= criteria.limit) then
   break
end

 end

return sortedTours

end

--local data, error = red_c:smembers("ResortNode_20190308-92-2-2-0-0-11-2162")

--local ok, members = r_members(red_c, "ResortNode_20190308-92-2-2-0-0-11-2162")

--local ok, members = r_members(red_c, "RootNode_" .. request.criteria.rootNodes[1]);

--local members = getNextLevelSmembers(red_c, "RootNode_", request.criteria.rootNodes[1])
--ngx.say(tonumber(request.criteria.priceRangeFrom))
local tours = dispatcher(utils, red_c, request.criteria, currentTime)
--res = red_c:eval("local r = {} local t = {} for i, d in pairs(redis.call('HGETALL', KEYS[1])) do if(math.fmod(i,2) > 0) then t = {i = d, d = nil} else if tonumber(string.sub(d, 1,10)) > tonumber(ARGV[1]) then r[#r+1] = t.i r[#r+1] = d end t = {i = nil, d = nil} end end return r", 1, "PriceNode_20190310-92-2-2-0-0-12-2185-25", currentTime)
--ngx.say(res[2])

red_c:close()
ngx.say(json.encode(tours))
--ngx.say("Total tours: " .. #tours)

--[[
--red_c:init_pipeline()
--red_c:hgetall("PriceNode_20190407-92-2-2-0-0-7-2161-8")
red_c:hgetall("PriceNode_20190407-92-2-2-0-0-8-2162-13")
red_c:hgetall("PriceNode_20190407-92-2-2-0-0-7-2184-13")
local res, err = red_c:commit_pipeline()
if not res then
ngx.log(ngx.ERR, "err: ", err)
else
ngx.say(json.encode(res))
end
--]]

--for i, v in pairs(members) do ngx.print(v .. "\n") end
4) redis_slot.so
gcc redis_slot.c -fPIC -shared -o redis_slot.so
cp redis_slot.so /usr/local/openresty/lualib/
cp rediscluster.lua /usr/local/openresty/lualib/
