
resty-redis-cluster's Introduction

resty-redis-cluster

Openresty lua client for redis cluster.

Why did we build this client?

Openresty has no official client that supports redis cluster (see the discussion at openresty/lua-resty-redis#43). Also, looking around at other third-party openresty redis cluster clients, we could not find one that completely supports the redis cluster features our project requires.

Resty-redis-cluster is a newly built openresty module that currently supports most redis cluster features.

While building the client, thanks go to https://github.com/cuiweixie/lua-resty-redis-cluster, which gave us some good reference.

feature list

  1. resty-redis-cluster caches the slot -> redis node mapping and calculates the slot of a key with CRC16, then accesses data through the cached mapping. The way we calculate CRC16 and cache the mapping is similar to https://github.com/cuiweixie/lua-resty-redis-cluster.

  2. Supports usual redis cluster access and most commands.

  3. Supports pipeline operations. When keys are spread across multiple nodes, resty-redis-cluster groups the slots that map to the same target node, then commits them as several per-node pipelines.

  4. Supports hashtags. Just give your key in the form name{tag} (see the usage sketch after this list).

  5. Supports reading from slave nodes via readonly mode, for both usual commands and pipelines. When slave read is enabled, resty-redis-cluster randomly picks a node mapped to the requested key, regardless of whether it is a master or a slave.

  6. Supports online resharding of the redis cluster (for both usual commands and pipelines). resty-redis-cluster handles the MOVED signal by re-caching the slot mapping and retrying, and handles the ASK signal by retrying with ASKING against the redirection target node.

  7. Supports error handling for the different failure scenarios of redis cluster (e.g. a single slave failing, a master failing, the whole cluster down).

  8. Fixes some critical issues of https://github.com/cuiweixie/lua-resty-redis-cluster:

    1. Memory leak issues under high throughput. A socket request suspends and switches the coroutine, so multiple in-flight requests can still hold references to the large slot mapping cache, which could crash the LuaJIT VM.

    2. The slot cache mapping must be refreshed whenever a connection to any redis node fails, otherwise we never obtain the latest slot mapping and keep failing. This follows Jedis, which refreshes the cache mapping on any unknown connection issue.

    3. ASK redirection must be handled for usual commands in addition to MOVED.

    4. Pipelines must also handle the MOVED signal by refreshing the slot cache mapping and retrying.

  9. Supports authentication.

  10. Supports the eval command with zero or one key.

  11. Also verified to work properly with AWS ElastiCache.

  12. Allows rolling replacement of the redis cluster. Example: a cluster with IPs 10.0.0.2, .3 and .4 is present. New nodes are introduced at IPs 10.0.0.5, .6 and .7. Slots are relocated from nodes .2, .3 and .4 to .5, .6 and .7. The initial nodes can now be removed without downtime in nginx, since the initial configuration is no longer used.
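
For illustration, here is a small usage sketch combining features 3 and 4: keys that share the same {tag} hash to the same slot, so they end up in a single pipeline group. It assumes a client red_c created as in the Sample usage section; the key names are made up.

red_c:init_pipeline()
red_c:set("user:1001{1001}:name", "alice")
red_c:set("user:1001{1001}:email", "alice@example.com")
local res, err = red_c:commit_pipeline()
if not res then
    ngx.log(ngx.ERR, "err: ", err)
end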

installation

  1. Add xmodem.lua and rediscluster.lua to your lualib directory, and also add the libraries lua-resty-redis and lua-resty-lock.

    In nginx.conf:

    lua_package_path "/path/lualib/?.lua;";

  2. Add to nginx.conf (a combined sketch follows this list):

    lua_shared_dict redis_cluster_slot_locks 100k;

  3. Or install via LuaRocks: https://luarocks.org/modules/steve0511/resty-redis-cluster
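
Putting the pieces above together, a minimal nginx.conf sketch might look like the following; the paths and the location are placeholders, not part of this module.

http {
    # make rediscluster.lua and xmodem.lua resolvable via require
    lua_package_path "/path/lualib/?.lua;";

    # shared dict used for the slot cache initialization lock
    lua_shared_dict redis_cluster_slot_locks 100k;

    server {
        listen 80;
        location /redis {
            content_by_lua_file /path/lualib/your_handler.lua;
        }
    }
}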

Sample usage

  1. Use normal commands:
local config = {
    dict_name = "test_locks",               --shared dictionary name for locks, if default value is not used 
    refresh_lock_key = "refresh_lock",      --shared dictionary name prefix for lock of each worker, if default value is not used 
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    max_redirection = 5,                    --maximum retry attempts for redirection
    max_connection_attempts = 1             --maximum retry attempts for connection
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end

authentication:

local config = {
    dict_name = "test_locks",               --shared dictionary name for locks, if default value is not used 
    refresh_lock_key = "refresh_lock",      --shared dictionary name prefix for lock of each worker, if default value is not used 
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    read_timeout = 1000,                    --timeout while reading
    send_timeout = 1000,                    --timeout while sending
    max_redirection = 5,                    --maximum retry attempts for redirection,
    max_connection_attempts = 1,            --maximum retry attempts for connection
    auth = "pass"                           --set password while setting auth
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end 
  2. Use pipeline:
local cjson = require "cjson"

local config = {
    dict_name = "test_locks",               --shared dictionary name for locks, if default value is not used 
    refresh_lock_key = "refresh_lock",      --shared dictionary name prefix for lock of each worker, if default value is not used 
    name = "testCluster",
    serv_list = {
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,
    keepalive_cons = 1000,
    connect_timeout = 1000,
    read_timeout = 1000,
    send_timeout = 1000,
    max_redirection = 5,
    max_connection_attempts = 1
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)


red_c:init_pipeline()
red_c:get("name")
red_c:get("name1")
red_c:get("name2")

local res, err = red_c:commit_pipeline()

if not res then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(cjson.encode(res))
end
  3. Enable slave node read:

    Note: currently enable_slave_read is limited to pure-read scenarios. We do not yet support mixed read/write scenarios (distinguishing read and write operations) within a single config set that has enable_slave_read. If your scenario includes write operations, please disable the option.

    Alternatively, you can isolate the pure-read scenario into a separate config set.

local cjson = require "cjson"

local config = {
    dict_name = "test_locks",               --shared dictionary name for locks, if default value is not used 
    refresh_lock_key = "refresh_lock",      --shared dictionary name prefix for lock of each worker, if default value is not used 
    name = "testCluster",
    enable_slave_read = true,
    serv_list = {
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,
    keepalive_cons = 1000,
    connect_timeout = 1000,
    read_timeout = 1000,
    send_timeout = 1000,
    max_redirection = 5,
    max_connection_attempts = 1
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end
  4. Hashtag:
local cjson = require "cjson"

local config = {
    dict_name = "test_locks",               --shared dictionary name for locks, if default value is not used 
    refresh_lock_key = "refresh_lock",      --shared dictionary name prefix for lock of each worker, if default value is not used 
    name = "testCluster",
    enable_slave_read = true,
    serv_list = {
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,
    keepalive_cons = 1000,
    connect_timeout = 1000,
    read_timeout = 1000,
    send_timeout = 1000,
    max_redirection = 5,
    max_connection_attempts = 1
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)


red_c:init_pipeline()
red_c:get("item100:sub1{100}")
red_c:get("item100:sub2{100}")
red_c:get("item100:sub3{100}")

local res, err = red_c:commit_pipeline()

if not res then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(cjson.encode(res))
end
  5. Eval:
local config = {
    dict_name = "test_locks",               --shared dictionary name for locks, if default value is not used 
    refresh_lock_key = "refresh_lock",      --shared dictionary name prefix for lock of each worker, if default value is not used 
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    read_timeout = 1000,                    --timeout while reading
    send_timeout = 1000,                    --timeout while sending
    max_redirection = 5,                    --maximum retry attempts for redirection
    max_connection_attempts = 1             --maximum retry attempts for connection
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)
local step = 2
local v, err = red_c:eval("return redis.call('incrby',KEYS[1],ARGV[1])",1,"counter",step)
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end
  6. Use SSL:

Note: connect_opts is an optional config field that, when set, is passed to the underlying redis connect call. More information about these options can be found in the lua-resty-redis documentation.

local config = {
    dict_name = "test_locks",               --shared dictionary name for locks, if default value is not used 
    refresh_lock_key = "refresh_lock",      --shared dictionary name prefix for lock of each worker, if default value is not used 
    name = "testCluster",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "127.0.0.1", port = 7001 },
        { ip = "127.0.0.1", port = 7002 },
        { ip = "127.0.0.1", port = 7003 },
        { ip = "127.0.0.1", port = 7004 },
        { ip = "127.0.0.1", port = 7005 },
        { ip = "127.0.0.1", port = 7006 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connect_timeout = 1000,              --timeout while connecting
    max_redirection = 5,                    --maximum retry attempts for redirection
    max_connection_attempts = 1,             --maximum retry attempts for connection
    connect_opts = {
        ssl = true,
        ssl_verify = true,
        server_name = "test-cluster.redis.myhost.com",
        pool = "redis-cluster-connection-pool",
        pool_size = 20,
        backlog = 10
    }
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end

Limitation

  1. Doesn't support MSET and MGET operations yet (see the pipeline sketch after this list).

  2. Doesn't support transaction operations: MULTI, DISCARD, EXEC, WATCH.

  3. Doesn't support pub/sub. Redis cluster does not actually check slots for pub/sub commands, so using a normal resty redis client to connect to a specific node in the cluster still works.

  4. Applies only when slave read is enabled: to discover a new slave node (without adding a new master), a slot mapping cache refresh must be retriggered, otherwise the slot mapping still records the previous version of the node tables. (The easiest way is to reboot the nginx nodes.)

  5. Applies only when slave read is enabled: if a slave -> master link is down (e.g. still syncing or recovering), resty-redis-cluster does not filter those nodes out, so reads from slaves may return unexpected responses. We suggest always catching response parsing exceptions while slave read is enabled. This is because the client depends on the CLUSTER SLOTS command.
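
As a workaround for the first limitation, a multi-key read can be emulated on top of the documented pipeline API. This is only a sketch; cluster_mget is a hypothetical helper name and red_c is a client created as in the Sample usage section.

local function cluster_mget(client, keys)
    client:init_pipeline()
    for _, key in ipairs(keys) do
        client:get(key)
    end
    -- results come back as an array, as in the pipeline sample above
    return client:commit_pipeline()
end

local res, err = cluster_mget(red_c, { "name", "name1", "name2" })
if not res then
    ngx.log(ngx.ERR, "err: ", err)
end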

Copyright and License

This module is licensed under the Apache License Version 2.0 .

Copyright (C) 2017, by steve.xu [email protected]

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

resty-redis-cluster's People

Contributors

artomal, bungle, membphis, nivlipetz, pintsized, shuishui3, steve0511, tieske, toredash, vinayakhulawale, wangrzneu


resty-redis-cluster's Issues

do I need to close the connection?

Thanks for your code, we rely on it heavily.

After executing a command, do I need to close the connection manually, e.g. with set_keepalive or close? How do I call these methods? Can you show some examples?

Thanks.

failed to create lock in initialization slot cache

2020/07/06 06:22:31 [error] 6#0: *32 [lua] rediscluster.lua:221: init_slots(): failed to create lock in initialization slot cache: dictionary not found, client: 192.168.16.1, server: , request: "GET / HTTP/1.0", host: "upstream_server"
2020/07/06 06:22:31 [error] 6#0: *32 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/project/application.lua:25: attempt to index local 'red_c' (a nil value)
stack traceback:
coroutine 0:
/usr/local/openresty/lualib/project/application.lua: in main chunk, client: 192.168.16.1, server: , request: "GET / HTTP/1.0", host: "upstream_server"
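
For context, this error usually means the shared dict the client locks on was never declared in nginx.conf. Below is a minimal sketch of tying them together, assuming the dict name from the installation section (whether that is also the library default is an assumption):

-- nginx.conf (http block) must declare:
--     lua_shared_dict redis_cluster_slot_locks 100k;
local config = {
    dict_name = "redis_cluster_slot_locks",   -- must match the lua_shared_dict name
    name      = "testCluster",
    serv_list = { { ip = "127.0.0.1", port = 7001 } }
}
local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)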

cmd 'evalsha' assigned to wrong slot?

Hello, I've run into a problem with 'evalsha' on rediscluster; here's my code.

local redis = require 'resty.redis'
local rediscluster = require 'resty.rediscluster'

local config = {
    name = "redis_cluster",
    serv_list = {
        -- M
        { ip = "127.0.0.1", port = 6379 },
        { ip = "127.0.0.1", port = 6380 },
        { ip = "127.0.0.1", port = 6381 },
        -- S
        { ip = "127.0.0.1", port = 6382 },
        { ip = "127.0.0.1", port = 6383 },
        { ip = "127.0.0.1", port = 6384 },
    },
    keepalive_timeout = 10000,   
    keepalive_cons = 1,                     
    connection_timout = 5000,         
    max_redirection = 5,                    
}

local script = [[
    return 'hello ' .. KEYS[1]
]]

-- preload script into every endpoint
local sha_redis
for k, v in ipairs(config.serv_list) do
    if v.port <= 6381 then
        local r = redis:new()
        r:connect(v.ip, v.port)
        sha_redis, err = r:script('load', script)
        ngx.log(ngx.DEBUG, 'sha: ', sha_redis)
        r:close()
    end
end

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

-- Double check loading script with rediscluster
local sha = red_c:script('load', script)
assert(sha == sha_redis)

-- Not OK: 'MOVED' redirections, because the slot is calculated from 'sha' instead of the key
red_c:evalsha(sha, 1, 'some_key')
...

Trying some other values of 'some_key' eventually produces an endless loop of errors: "failed to execute command, reaches maximum redirection attempts". I checked the source code of rediscluster.lua, function handleCommandWithRetry:

local function handleCommandWithRetry(self, targetIp, targetPort, asking, cmd, key, ...)
    local config = self.config
    key = tostring(key)
    local slot = redis_slot(key)

The slot is calculated from 'key', but for 'evalsha' that is actually the SHA1 sum. So I made some changes to get the real key for 'evalsha':

    local slot = redis_slot(key)
    local t = {...}
    if cmd == 'evalsha' and #t >= 2 then
        slot = redis_slot(t[2])
    end
    ...

Now everything works OK. Please tell me if there is anything wrong, thanks.

Lock timeouts under ab concurrency load testing

Redis connection parameters:
idletimeout=1
keepaliveTimeout=60000
keepaliveSize=300
connectionTimout=1000
maxRedirection=5

ab load test:

ab -c 10 -n 50: no errors

ab -c 10 -n 100 produces the following errors:
`2019/08/26 19:16:13 [error] 60698#5338817: *962 [lua] rediscluster.lua:168: init_slots(): failed to acquire the lock in initialization slot cache: timeout, client: 127.0.0.1, server: ***, request: "GET /hehe/init?ok=2222000&version=7.2.0&source=2 HTTP/1.0", host: "localhost:9110"

2019/08/26 19:16:13 [error] 60698#5338817: 963 [lua] rediscluster.lua:168: init_slots(): failed to acquire the lock in initialization slot cache: timeout, client: 127.0.0.1, server:**, request: "GET /hehe/init?ok=2222000&version=7.2.0&source=2 HTTP/1.0", host: "localhost:9110"

2019/08/26 19:16:13 [error] 60698#5338817: 964 [lua] rediscluster.lua:168: init_slots(): failed to acquire the lock in initialization slot cache: timeout, client: 127.0.0.1, server:**, request: "GET /hehe/init?ok=2222000&version=7.2.0&source=2 HTTP/1.0", host: "localhost:9110"`

Looking through the code, I traced it to the lock acquisition at line 166 timing out:

local elapsed, err = lock:lock("redis_cluster_slot_" .. self.config.name)
if not elapsed then
    ngx.log(ngx.ERR, "failed to acquire the lock in initialization slot cache: ", err)
    return
end

Does anyone know what causes this?

datas distributed uniformity in multi redis clusters

Hi:
I have 10 redis clusters for storage. I sent 20 billion records to the 10 clusters with a performance test tool and ran into a data distribution uniformity problem across the 10 clusters. Has anybody else met this problem?
PS:
1. Each cluster has 3 instances.
2. Each cluster has an independent config table.

Error: 'cluster' method not found

openresty/1.9.7.4

rediscluster.lua:105: attempt to call method 'cluster' (a nil value)

Is another library required?

Performance problem of set

function _M.access(self)
    local host = ngx.var.host
    local session_cookie = self:session_cookie(host)
    local session_id, err = session_cookie:get()
    local sso_session_store = self:get_sso_session_store()
    local server_list = self.config.redis.nodes
    local password = self.config.redis.password
    local config = {
        name = "esec_cluster",
        enableSlaveRead = true,
        serv_list = server_list,
        auth = password,
        keepalive_timeout = 60000,
        keepalive_cons = 1000,
        connect_timeout = 1000,
        read_timeout = 1000,
        send_timeout = 1000,
        max_redirection = 3,
        max_connection_attempts = 1
    }
    local redis = redis_client:new(config)
    return redis:set(session_id, ngx.now())
end

root@ubuntu:/opt/wrk# wrk -t96 -c240 -d30s --script test.lua https://w3.esec.test.com
Running 30s test @ https://w3.esec.test.com
96 threads and 240 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 116.09ms 123.13ms 1.07s 85.94%
Req/Sec 21.75 16.95 170.00 81.88%
56053 requests in 30.10s, 44.53MB read
Non-2xx or 3xx responses: 6626
Requests/sec: 1862.36
Transfer/sec: 1.48MB

When rediscluster is used to perform the set operation, the QPS is only about 2000, while with resty.redis the QPS can reach about 30000. Does anyone know why? I'm using a 48-core server.

function _M.access(self)
    local host = ngx.var.host
    local session_cookie = self:session_cookie(host)
    local session_id, err = session_cookie:get()
    local red = redis:new()

    red:set_timeouts(1000, 1000, 1000)

    local ok, err = red:connect("172.18.31.24", 7005)
    if not ok then
        ngx.say("failed to connect: ", err)
        return false, err
    end

    ok, err = red:set(session_id, ngx.now())
    if not ok then
        ngx.say("failed to set session_id: ", err)
        return false, err
    end

    local ok, err = red:set_keepalive(10000, 100)
    if not ok then
        ngx.say("failed to set keepalive: ", err)
        return false, err
    end

    return true
end

root@ubuntu:/opt/wrk# wrk -t96 -c240 -d30s --script test.lua https://w3.esec.test.com
Running 30s test @ https://w3.esec.test.com
96 threads and 240 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 7.41ms 23.76ms 1.01s 99.79%
Req/Sec 305.92 17.04 0.97k 90.91%
879734 requests in 30.08s, 197.96MB read
Requests/sec: 29243.78
Transfer/sec: 6.58MB

Problems under stress testing or concurrency

When I run a stress test with ab, the following error appears. I'm not sure where the problem is; is it caused by the locking? How should I solve it?

ab -n 1000 -c 100 -k http://127.0.0.1:8002/product?id=15

Below is the test code that triggers the error:
local config = {
    name = "testCluster",                 --rediscluster name
    serv_list = { { ip = "xx.xx.xx.xx", port = 6383 } },
    keepalive_timeout = 1000,             --redis connection pool idle timeout
    keepalive_cons = 10,                  --redis connection pool size
    connection_timout = 1000,             --timeout while connecting
    max_redirection = 5                   --maximum retry attempts for redirection
}
local redis_cluster = require "rediscluster"
red_c, err = redis_cluster:new(config)
if not red_c then
    ngx.say(ngx.ERR, "connect to redis error : ", err)
    return
end
local content, err = red_c:get('shop_info_' .. id)

image

Database select

Hello,

I'm starting to use this module and I finally managed to make it work, but I've noticed that there's no option to select the database (or at least it is not documented and the select() function doesn't work), so it always uses database 0.

Is there any way to change the database?

Best regards.

Simplify shared memory allocation for redis cluster lock.

I tested the
lua_shared_dict redis_cluster_slot_locks 100k;
approach on my test project with a simple nginx config and it was working fine.
But now I use a different nginx config in a production project and I'm having a hard time allocating shared memory with this directive.

I have tried to put it in different parts of my config and the error differs:
[emerg] the shared memory zone "redis_cluster_slot_locks" is already declared for a different use in /var/www/path/config/lua.conf:9
even though it is declared in a single place,

or this (it looks like the memory wasn't initialized):
[lua] rediscluster.lua:137: init_slots(): failed to create lock in initialization slot cache: dictionary not found while handling client connection, client: 172.19.0.1, server: 0.0.0.0:9999
The config contains several includes.

Is it possible to move this directive's logic into the code so it doesn't need to be configured in the nginx config file?

redis_slot.c missing from repo

Hey,
I have managed to find the following commit 6a9af0b

However this file doesn't appear to exist in the repo in master or the latest version 1.02. Is there a reason this is missing or can it be re-added?

Thanks,
Liam

lua:61: attempt to index upvalue 'clib' (a nil value)

How do I generate the redis_slot.so file?
lua entry thread aborted: runtime error: /app/vendor/resty-redis-cluster/lib/rediscluster.lua:61: attempt to index upvalue 'clib' (a nil value) stack traceback: coroutine 0: /app/vendor/resty-redis-cluster/lib/rediscluster.lua: in function 'redis_slot' /app/vendor/resty-redis-cluster/lib/rediscluster.lua:262: in function 'handleCommandWithRetry' /app/vendor/resty-redis-cluster/lib/rediscluster.lua:383: in function 'get' content_by_lua(nginx.conf:82):18: in function <content_by_lua(nginx.conf:82):1>, client: 127.0.0.1, server: localhost, request: "GET /redis HTTP/1.1", host: "localhost"

Custom commands for redis

Hi,
how do I run custom commands? I use this:

local res, err = red_c:keys("*")

but I get only one key in the result, although I added 2.

To get DBSIZE I run:

 local res, err = red_c:dbsize()

But I get:
ERR wrong number of arguments for 'dbsize' command
Does this plugin support redis commands such as KEYS, DBSIZE and FLUSHDB?
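
For keyless commands such as DBSIZE or KEYS, one option (an assumption on my part, not a documented feature of this client) is to query each node with plain lua-resty-redis and aggregate the results, similar to the pub/sub note in the Limitation section. The node list below is a placeholder; if it also contains replicas, their keys are counted twice.

local redis = require "resty.redis"

local serv_list = {
    { ip = "127.0.0.1", port = 7001 },
    { ip = "127.0.0.1", port = 7002 }
}

local total = 0
for _, node in ipairs(serv_list) do
    local r = redis:new()
    local ok, err = r:connect(node.ip, node.port)
    if ok then
        local n = r:dbsize()
        if type(n) == "number" then
            total = total + n
        end
        r:set_keepalive(10000, 100)
    end
end
ngx.say("total keys across listed nodes: ", total)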

Error getting slots when a node is down

Hello,

I'm testing failover situations on my redis cluster and I've noticed that the lua script stopped working after stopping a node.

My setup is a redis cluster with 6 nodes (3 masters, each with 1 replica), and I've configured the lua redis cluster client as suggested in the readme.

red_config = {
    name = "Cluster",
    serv_list = {
        {ip = srv1, port = 6379},
        {ip = srv2, port = 6379},
        {ip = srv3, port = 6379},
        {ip = srv4, port = 6379},
        {ip = srv5, port = 6379},
        {ip = srv6, port = 6379},
    },
    keepalive_timeout = 60000,
    keepalive_cons = 1000,
    connection_timout = 1000,
    max_redirection = 5,
    auth = nil,
}

It was working fine, but after stopping the first master, my lua script stopped working, giving errors while getting the slots from the Redis Cluster:

2020/04/21 11:38:40 [error] 17#17: *2275 [lua] rediscluster.lua:179: init_slots(): failed to acquire the lock in initialization slot cache: timeout, client: 46.25.49.202, server: www.example.com, request: "GET /index.php?action=abrir_capa_login&idioma=spa HTTP/1.1", host: "www.example.com", referrer: "https://www.example.com/"
2020/04/21 11:38:40 [error] 17#17: *2275 lua entry thread aborted: runtime error: .../tmp.ctbxBlkqPA/resty-redis-cluster/lib/rediscluster.lua:310: attempt to index local 'slots' (a nil value)
stack traceback:
coroutine 0:
	.../tmp.ctbxBlkqPA/resty-redis-cluster/lib/rediscluster.lua: in function 'handleCommandWithRetry'
	.../tmp.ctbxBlkqPA/resty-redis-cluster/lib/rediscluster.lua:430: in function 'exists'

I'm sure the cluster is still working, because I've looked into the server log and seen how it failed over to the slave:

716:M 21 Apr 2020 13:17:33.577 * Marking node a012fdaf648326f2b47fd220b932962ef3e3cb37 as failing (quorum reached).
716:M 21 Apr 2020 13:17:34.228 # Failover auth granted to e2af791d8f96c549b03c399cc0beb34ca0cbe7c0 for epoch 7

(ignore timestamps, one is +2 and the other UTC)
and the PHP sessions are also stored on the same cluster, and the webpage keeps working without problems.

I suspect it is always trying to connect to the first node and not failing over, because after I removed the first node's IP (the one I had stopped) it started working again.

Is there any way to fail over to another server if the first one fails? Maybe I haven't configured the module correctly.

Best regards and thanks.

remove c dependency

Hi @steve0511
redis_slot.c is currently used to calculate the slot.
Is there a blocker to converting this piece of code into Lua?
I'm assuming C is used here for performance reasons only. Correct me if I'm wrong.
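
For reference, here is a rough pure-Lua sketch of the hash-slot calculation redis cluster specifies (CRC16/XMODEM of the key, or of the hashtag if present, modulo 16384). It is only an illustration of the algorithm, not the module's actual xmodem.lua.

local bit = require "bit"
local band, bxor, lshift = bit.band, bit.bxor, bit.lshift

local function crc16_xmodem(str)
    local crc = 0
    for i = 1, #str do
        crc = bxor(crc, lshift(string.byte(str, i), 8))
        for _ = 1, 8 do
            if band(crc, 0x8000) ~= 0 then
                crc = band(bxor(lshift(crc, 1), 0x1021), 0xffff)
            else
                crc = band(lshift(crc, 1), 0xffff)
            end
        end
    end
    return crc
end

local function key_slot(key)
    -- honor hashtags: hash only the substring between the first '{' and the
    -- following '}', if that substring is non-empty
    local s = string.find(key, "{", 1, true)
    if s then
        local e = string.find(key, "}", s + 1, true)
        if e and e > s + 1 then
            key = string.sub(key, s + 1, e - 1)
        end
    end
    return crc16_xmodem(key) % 16384
end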

error require redis

 lua entry thread aborted: runtime error: attempt to yield across C-call boundary

rediscluster.lua:416: in function 'get'

2019/06/05 07:37:47 [error] 30194#30194: *58 [lua] rediscluster.lua:159: init_slots(): failed to create lock in initialization slot cache: dictionary not found, client: 127.0.0.1, server: ngx.test.local, request: "GET /test HTTP/1.1", host: "ngx.test.local"
2019/06/05 07:37:47 [error] 30194#30194: *58 lua entry thread aborted: runtime error: /usr/local/openresty-1.13.6.2/lualib/rediscluster.lua:296: attempt to index local 'slots' (a nil value)
stack traceback:
coroutine 0:
	/usr/local/openresty-1.13.6.2/lualib/rediscluster.lua: in function 'handleCommandWithRetry'
	/usr/local/openresty-1.13.6.2/lualib/rediscluster.lua:416: in function 'get'
	/usr/local/openresty-1.13.6.2/nginx/script/zone.lua:29: in function </usr/local/openresty-1.13.6.2/nginx/script/zone.lua:1>, client: 127.0.0.1, server: ngx.test.local, request: "GET /test HTTP/1.1", host: "ngx.test.local"
local config = {
        name = "test",
        enableSlaveRead = true,
        serv_list = {
                { ip = "127.0.0.1", port = 7001 },
                { ip = "127.0.0.1", port = 7002 },
                { ip = "127.0.0.1", port = 7003 },
                { ip = "127.0.0.1", port = 7004 },
                { ip = "127.0.0.1", port = 7005 },
                { ip = "127.0.0.1", port = 7006 }
        },
        keepalive_timeout = 60000,              --redis connection pool idle timeout
        keepalive_cons = 1000,                  --redis connection pool size
        connection_timout = 1000,               --timeout while connecting
        max_redirection = 5,                    --maximum retry attempts for redirection
}
local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)

local v, err = red_c:get("name")
if err then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(v)
end

make

gcc -shared -o redis_slot.so -fPIC redis_slot.c

set weight to different server

serv_list = {
{ ip = "192.168.1.1", port = 7001 },--weight:5
{ ip = "192.168.1.1", port = 7002 },
{ ip = "192.168.1.2", port = 7001}, --weight:10
{ ip = "192.168.1.2", port = 7002 },
}
When I set serv_list, can I assign different weights to different servers?
Thanks

attempt to call method 'cluster' (a nil value)

2019/11/27 16:34:51 [error] 15214#0: *1 lua entry thread aborted: runtime error: /usr/local/wongcu/openresty/lualib/rediscluster.lua:127: attempt to call method 'cluster' (a nil value)
stack traceback:
coroutine 0:
/usr/local/wongcu/openresty/lualib/rediscluster.lua: in function 'try_hosts_slots'
/usr/local/wongcu/openresty/lualib/rediscluster.lua:160: in function 'fetch_slots'
/usr/local/wongcu/openresty/lualib/rediscluster.lua:191: in function 'init_slots'
/usr/local/wongcu/openresty/lualib/rediscluster.lua:207: in function 'new'
content_by_lua(nginx.conf:52):2: in function <content_by_lua(nginx.conf:52):1>, client: 127.0.0.1, server: , request: "GET /lua_content HTTP/1.1", host: "localhost:8899"

[Suggestion] Speed up parseAskSignal with ngx.re.match

Just my humble idea: maybe we could speed up the parseAskSignal with ngx.re.match.
Here is my benchmark:

The split version (original one):

local function split(str, separator)
    local splitArray = {}
    if (string.len(str) < 1) then
        return splitArray
    end
    local startIndex = 1
    local splitIndex = 1
    while true do
        local lastIndex = string.find(str, separator, startIndex)
        if not lastIndex then
            splitArray[splitIndex] = string.sub(str, startIndex, string.len(str))
            break
        end
        splitArray[splitIndex] = string.sub(str, startIndex, lastIndex - 1)
        startIndex = lastIndex + string.len(separator)
        splitIndex = splitIndex + 1
    end
    return splitArray
end

local function parseAskSignal(res)
    --ask signal sample:ASK 12191 127.0.0.1:7008, so we need to parse and get 127.0.0.1, 7008
    if res ~= ngx.null then
        if type(res) == "string" and string.sub(res, 1, 3) == "ASK" then
            local askStr = split(res, " ")
            local hostAndPort = split(askStr[3], ":")
            return hostAndPort[1], hostAndPort[2]
        else
            for i = 1, #res do
                if type(res[i]) == "string" and string.sub(res[i], 1, 3) == "ASK" then
                    local askStr = split(res[i], " ")
                    local hostAndPort = split(askStr[3], ":")
                    return hostAndPort[1], hostAndPort[2]
                end
            end
        end
    end
    return nil, nil
end

local s = "ASK 12191 127.0.0.1:7008"
local host, port
local start = ngx.now()
for _ = 1, 1e7 do
    host, port = parseAskSignal(s)
end
ngx.update_time()
ngx.say(ngx.now() - start)
ngx.say(host, " ", port)

The code above takes 5.8 seconds.

The ngx.re.match version, like this, could be a little faster:

local askHostAndPort = {}

local function parseAskSignal(res)
    --ask signal sample:ASK 12191 127.0.0.1:7008, so we need to parse and get 127.0.0.1, 7008
    if res ~= ngx.null then
        if type(res) == "string" and string.sub(res, 1, 3) == "ASK" then
            local matched = ngx.re.match(res, [[^ASK [^ ]+ ([^:]+):([^ ]+)]], "jo", nil, askHostAndPort)
            if not matched then
                return nil, nil
            end
            return matched[1], matched[2]
        end
        -- (the array branch is handled the same way; omitted here)
    end
    return nil, nil
end

The code above takes 5.5 seconds, over 5% faster.
It is because splitting in Lua is slow and regex with PCRE is fast.

Note that the ngx.re.match use PCRE JIT option, which requires PCRE version >= 8.21.

The ngx.re.match version could be even faster if you require resty.core.regex in init_by_lua* like this:

require 'resty.core.regex'

The ngx.re.match implementation in resty.core.regex benefits from the JIT. If your code can be JIT-compiled, there will be a leap in speed. Admittedly, you should test it in your real environment.

Why does it connect to the docker network?

This is the configuration item
image

This is the docker network for redis clusters
image

When access starts, connecting to the docker network times out:

tcp socket connect timed out, when connecting to 172.25.0.4:6379

But the same cluster works fine with lua-resty-redis when configured with the virtual machine's IP and mapped port:
image

help: attempt to index a nil value

Thanks to the author for providing the redis cluster code.

But when I use it, I keep getting an error like this:

[error] 8054#8054: *139 lua entry thread aborted: runtime error: ...e/openresty/nginx/app/lib/resty/rediscluster.lua:269: attempt to index a nil value
stack traceback:
coroutine 0:
	...e/openresty/nginx/app/lib/resty/rediscluster.lua: in function 'handleCommandWithRetry'

code style: naming convention

@steve0511 - the current code uses camelCase for most variable and function names. Most openresty/lua projects use snake_case.

If you are ok, I can submit PR to refactor code to use snake_case consistently.

What size should redis_cluster_slot_locks be set to?

In the README you set lua_shared_dict redis_cluster_slot_locks 100k; but how do I know 100k is enough? Can I enlarge it? How should I assess the right size for redis_cluster_slot_locks?

Redis compatibility version?

Have you tested this library against a set of Redis versions?

I get the following error-
attempt to get length of a number value
on this line: https://github.com/steve0511/resty-redis-cluster/blob/master/lib/rediscluster.lua#L431

2017/09/28 00:31:02 [error] 52#0: *1598 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/rediscluster.lua:431: attempt to get length of a number value
stack traceback:
coroutine 0:
	/usr/local/openresty/lualib/rediscluster.lua: in function 'hasClusterFailSignalInPipeline'
	/usr/local/openresty/lualib/rediscluster.lua:530: in function 'commit_pipeline'

math.randomseed() must be called in init_worker context

I'm not sure if we can really fix this issue, and it might not be a problem in most cases. Thoughts?
Thoughts?

2017/09/28 15:45:24 [warn] 45#0: [lua] globalpatches.lua:214: randomseed(): math.randomseed() must be called in init_worker context
stack traceback:
        /usr/local/openresty/lualib/rediscluster.lua:175: in main chunk

eval and some other suggestion

1. I note the Limitation section says "Doesn't support transactions operations: MULTI DISCARD EXEC WATCH", so I use eval. After a failed test and some study of your source code, I found I can pass "local key = {123456} return redis.call("set", "key1", "val1")" as the eval script; this way your redis_slot treats the script itself as a key containing a hashtag, and it works. But it is a loose workaround, so perhaps you could provide a dedicated function for redis Lua scripts to specify their target slot (node). (With my workaround I can't use {} in the script anymore, yet {} is indispensable in Lua. See the eval sketch after this list.)
2. When I studied your handleCommandWithRetry, I found you call set_keepalive directly without checking the redis operation's return value (and likewise in other functions). The return value may indicate a cosocket problem, in which case recycling the connection back into the connection pool is not recommended.
3. I think using resty.core.regex instead of Lua string matching would give a noticeable performance boost :)
4. And there are some spelling errors, like DEFUALT_MAX_REDIRECTION.
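
On point 1, the README's eval sample already shows a one-key form in which the client hashes the real key, so no hashtag trick inside the script body should be needed; a sketch with made-up key and value names:

local v, err = red_c:eval("return redis.call('set', KEYS[1], ARGV[1])", 1, "key1", "val1")
if err then
    ngx.log(ngx.ERR, "err: ", err)
end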

redis_client:set_timeout error

Set the timeout first, then connect.

In rediscluster.lua, around line 120:

    for i = 1, #serv_list do
        local ip = serv_list[i].ip
        local port = serv_list[i].port
        local redis_client = redis:new()
        local ok, err = redis_client:connect(ip, port)
        redis_client:set_timeout(config.connection_timout or DEFAULT_CONNECTION_TIMEOUT)
        if ok then
            local authok, autherr = checkAuth(self, redis_client)
            if autherr then
                table.insert(errors, autherr)
                return nil, errors
            end
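
A sketch of the reordering the reporter suggests, so the timeout actually applies to the connect call:

        local redis_client = redis:new()
        -- set the connect timeout before connect(), otherwise it has no effect on it
        redis_client:set_timeout(config.connection_timout or DEFAULT_CONNECTION_TIMEOUT)
        local ok, err = redis_client:connect(ip, port)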

Why do I get this error with the Lapis framework? (openresty 1.15.x)

local config = {
    name = "",                   --rediscluster name
    serv_list = {                           --redis cluster node list(host and port),
        { ip = "172.20.154.101", port = 7001 },
        { ip = "172.20.154.104", port = 7001 },
        { ip = "172.20.154.103", port = 7001 },                           --redis cluster node list(host and port),
        { ip = "172.20.154.101", port = 7002 },
        { ip = "172.20.154.104", port = 7002 },
        { ip = "172.20.154.103", port = 7002 }
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connection_timout = 1000,               --timeout while connecting
    max_redirection = 5,                    --maximum retry attempts for redirection
    auth = "3323"
}

local redis_cluster = require "rediscluster"
local red_c = redis_cluster:new(config)


red_c:init_pipeline()
red_c:get("name")
red_c:get("name1")
red_c:get("name2")

local res, err = red_c:commit_pipeline()

if not res then
    ngx.log(ngx.ERR, "err: ", err)
else
    ngx.say(cjson.encode(res))
end
The response is an HTML error page containing:

Error: /usr/local/share/lua/5.1/rediscluster.lua:504: attempt to index local 'slots' (a nil value)

stack traceback:
/usr/local/share/lua/5.1/rediscluster.lua:504: in function 'commit_pipeline'
.///controllers/ngx_manager.lua:179: in function 'handler'
/usr/local/share/lua/5.1/lapis/application.lua:130: in function 'resolve'
/usr/local/share/lua/5.1/lapis/application.lua:167: in function </usr/local/share/lua/5.1/lapis/application.lua:165>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/lapis/application.lua:173: in function 'dispatch'
/usr/local/share/lua/5.1/lapis/nginx.lua:230: in function 'serve'
content_by_lua(nginx.conf.compiled:43):2: in function <content_by_lua(nginx.conf.compiled:43):1>

lapis 1.7.0

help: attempt to send data on a closed socket

While using the redis connection, the following error occurs. Has the author seen this before?

 attempt to send data on a closed socket: u:0000000040267D38, c:0000000000000000, ft:0 eof:0

Auto-refresh support and cluster discovery support?

Hi @steve0511,
I've the read the code, but I want to confirm the following:

  1. Do we need to provide all the nodes of a cluster when we initially initialize the plugin? If only a single node is provided, will the library figure out the other nodes and keep them in its configuration (in case the first one goes down or the first node is not a master)?

  2. How is failover handled? Suppose we have a 3-node redis cluster and we provide the 3 nodes in the configuration; over time the 3 nodes gradually die (not at the same time), and every time a node dies, another one is brought up. In such a case, will the library keep discovering these new nodes and work seamlessly?

Duplicate data is inserted

This is my config:

-- mydata.lua
local redis_cluster = require "resty.rediscluster"
-- require a custom helper module
local myfunction = require "resty.myfunction"
local _M = {}
local password = os.getenv("REDIS_PASSWORD");
local config = {
    name = "testCluster",                   --rediscluster name
    serv_list = {                           -- redis cluster nodes; a custom helper splits env vars (ip:port -> { ip = "", port = "" })
        myfunction.hosttable(os.getenv("REDIS_NODE1_HOST")),
        myfunction.hosttable(os.getenv("REDIS_NODE2_HOST")),
        myfunction.hosttable(os.getenv("REDIS_NODE3_HOST")),
        myfunction.hosttable(os.getenv("REDIS_NODE4_HOST")),
        myfunction.hosttable(os.getenv("REDIS_NODE5_HOST")),
        myfunction.hosttable(os.getenv("REDIS_NODE6_HOST"))
    },
    keepalive_timeout = 60000,              --redis connection pool idle timeout
    keepalive_cons = 1000,                  --redis connection pool size
    connection_timout = 1000,               --timeout while connecting
    max_redirection = 5,                    --maximum retry attempts for redirection
    auth = password
}

function _M.get_Connect()
    local red = redis_cluster:new(config);

    local times = red:get_reused_times();
    ngx.log(ngx.ERR,"-------连接被重用次数 -->",times);
    return red;
end

return _M

This is the code that uses the redis cluster client to insert a value:

local redis = require "resty.myrediscluster"
--local redis = require "resty.myredis"
local red = redis.get_Connect()
local data = "rcs.opening.3slb::" .. math.random(0,10000)
local serverHost = 123
local res = red:get(data)
local expire_time = 60

if res==ngx.null or res==nil then
    local v, err = red:set(data, serverHost)
    ngx.log(ngx.ERR, "\n\n------------------",data,serverHost)
    if err then
        ngx.log(ngx.ERR, "err: ", err)
    else
        ngx.log(ngx.ERR, "expire_time: ", expire_time)
        -- set the expire time
        red:expire(data,expire_time)
        ngx.log(ngx.ERR,  v)
    end
else
    ngx.log(ngx.ERR, "\n\n------------------已存在-------------------")
end

I used postman 10 times, but there were some identical keys inserted
image

location /hello/test {
            default_type 'text/plain';
            content_by_lua_file /usr/local/openresty/lualib/my/grey.lua;
}

I use postman to request ip:port/hello/test, inserting a random key each time, but the same key appears.

local times = red:get_reused_times();
ngx.log(ngx.ERR, "------- connection reuse count -->", times);

The reuse count I get for keepalive looks wrong:
image

I don't know if there is something wrong, but I would like to ask the author to help me check it. Thank you very much

Only ET_DYN and ET_EXEC can be loaded

Following the README, I compiled redis_slot.c with GCC
and copied rediscluster.lua and redis_slot.so to /usr/local/openresty/lualib.
My nginx configuration is as follows:
worker_processes auto;
events {
worker_connections 1024;
}

http {
include mime.types;
default_type application/octet-stream;
lua_package_path "/usr/local/openresty/lualib/?.lua;";
lua_package_cpath "/usr/local/openresty/lualib/?.so;";
lua_shared_dict redis_cluster_slot_locks 100k;
sendfile on;
#tcp_nopush on;

#keepalive_timeout  0;
keepalive_timeout  65;

#gzip  on;

server {
    listen       80;
    server_name  localhost;
    location /redis {
            content_by_lua_file /usr/local/openresty/lualib/biz/redisclustertest.lua;
    }

    location / {
        root   html;
        index  index.html index.htm;
    }


    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

}
I only built a simple demo and recorded the following error:
2019/02/11 16:11:02 [error] 30741#30741: *144 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/rediscluster.lua:40: /usr/local/openresty/lualib/redis_slot.so: only ET_DYN and ET_EXEC can be loaded
stack traceback:
coroutine 0:
[C]: in function 'require'
/usr/local/openresty/lualib/biz/redisclustertest.lua:1: in function </usr/local/openresty/lualib/biz/redisclustertest.lua:1>, client: xxx.xx.xxx.xxx, server: localhost, request: "GET /redis HTTP/1.1", host: "xxx.xx.xxx.xxx"
2019/02/11 16:13:39 [error] 30741#30741: *145 lua entry thread aborted: runtime error: /usr/local/openresty/lualib/biz/redisclustertest.lua:1: loop or previous error loading module 'rediscluster

The source code of redisclustertest.lua is as follows:

local redis_cluster = require "rediscluster"
local redis_config = require "biz.redisconfig"
local cjson = require "cjson"
local red_c = redis_cluster:new(redis_config)

red_c:set("hello","world")
ngx.header.content_type = "text/html"
ngx.say(red_c:get("hello"))
red_c:del("hello")
