Comments (14)
Hi,
Regarding node-redis (https://www.npmjs.com/package/redis), which is already used as the FallbackDriver in fast-redis-cluster:
- How much faster is fast-redis-cluster than node-redis?
- What are the benefits of using fast-redis-cluster over node-redis?
Thanks in advance.
from fast-redis-cluster.
If you need speed, my lib is faster. We've used it for millions of requests per second during online TV shows on the ABC, FOX, and Univision television networks, among others, in the US and worldwide, as second-screen mobile applications. It worked well for several years and actually still works for India and Pakistan.
Feel free to try.
Quick and dirty test results:
My lib is still the fastest :)
Details:
My lib
==========================
fast-redis-cluster2: 2.0.12
Date: 2021-09-21T21:17:07.621Z
CPU: 16
OS: darwin x64
node version: v14.14.0
==========================
results {
'PING foo0 Bulk size 5': { key: 'PING foo0 Bulk size 5', speed: 237099, ops: 1600000 },
'PING foo0 Bulk size 10': { key: 'PING foo0 Bulk size 10', speed: 416496, ops: 1600000 },
'PING foo0 Bulk size 100': { key: 'PING foo0 Bulk size 100', speed: 1185715, ops: 1600000 },
'PING foo0 Bulk size 1000': { key: 'PING foo0 Bulk size 1000', speed: 1810063, ops: 1600000 },
'PING foo0 Bulk size 5000': { key: 'PING foo0 Bulk size 5000', speed: 1721738, ops: 1600000 },
'PING foo0 Bulk size 10000': { key: 'PING foo0 Bulk size 10000', speed: 1877281, ops: 1600000 },
'SET foo0 bar Bulk size 5': { key: 'SET foo0 bar Bulk size 5', speed: 171966, ops: 1600000 },
'SET foo0 bar Bulk size 10': { key: 'SET foo0 bar Bulk size 10', speed: 272297, ops: 1600000 },
'SET foo0 bar Bulk size 100': { key: 'SET foo0 bar Bulk size 100', speed: 473838, ops: 1600000 },
'SET foo0 bar Bulk size 1000': { key: 'SET foo0 bar Bulk size 1000', speed: 1011509, ops: 1600000 },
'SET foo0 bar Bulk size 5000': { key: 'SET foo0 bar Bulk size 5000', speed: 1187494, ops: 1600000 },
'SET foo0 bar Bulk size 10000': { key: 'SET foo0 bar Bulk size 10000', speed: 1484291, ops: 1600000 },
'GET foo0 Bulk size 5': { key: 'GET foo0 Bulk size 5', speed: 260778, ops: 1600000 },
'GET foo0 Bulk size 10': { key: 'GET foo0 Bulk size 10', speed: 389101, ops: 1600000 },
'GET foo0 Bulk size 100': { key: 'GET foo0 Bulk size 100', speed: 1693645, ops: 1600000 },
'GET foo0 Bulk size 1000': { key: 'GET foo0 Bulk size 1000', speed: 2877971, ops: 1600000 },
'GET foo0 Bulk size 5000': { key: 'GET foo0 Bulk size 5000', speed: 3146196, ops: 1600000 },
'GET foo0 Bulk size 10000': { key: 'GET foo0 Bulk size 10000', speed: 3059741, ops: 1600000 },
'INCR incr:foo0 Bulk size 5': { key: 'INCR incr:foo0 Bulk size 5', speed: 150901, ops: 1600000 },
'INCR incr:foo0 Bulk size 10': { key: 'INCR incr:foo0 Bulk size 10', speed: 214526, ops: 1600000 },
'INCR incr:foo0 Bulk size 100': { key: 'INCR incr:foo0 Bulk size 100', speed: 1298302, ops: 1600000 },
'INCR incr:foo0 Bulk size 1000': {
key: 'INCR incr:foo0 Bulk size 1000',
speed: 2551056,
ops: 1600000
},
'INCR incr:foo0 Bulk size 5000': {
key: 'INCR incr:foo0 Bulk size 5000',
speed: 3176064,
ops: 1600000
},
'INCR incr:foo0 Bulk size 10000': {
key: 'INCR incr:foo0 Bulk size 10000',
speed: 3145206,
ops: 1600000
}
}
ioredis
results {
'ping foo0 Bulk size 5': { key: 'ping foo0 Bulk size 5', speed: 112570, ops: 1600000 },
'ping foo0 Bulk size 10': { key: 'ping foo0 Bulk size 10', speed: 115750, ops: 1600000 },
'ping foo0 Bulk size 100': { key: 'ping foo0 Bulk size 100', speed: 114529, ops: 1600000 },
'ping foo0 Bulk size 1000': { key: 'ping foo0 Bulk size 1000', speed: 107648, ops: 1600000 },
'ping foo0 Bulk size 5000': { key: 'ping foo0 Bulk size 5000', speed: 156172, ops: 1600000 },
'ping foo0 Bulk size 10000': { key: 'ping foo0 Bulk size 10000', speed: 238090, ops: 1600000 },
'set foo0,bar Bulk size 5': { key: 'set foo0,bar Bulk size 5', speed: 76271, ops: 1600000 },
'set foo0,bar Bulk size 10': { key: 'set foo0,bar Bulk size 10', speed: 80604, ops: 1600000 },
'set foo0,bar Bulk size 100': { key: 'set foo0,bar Bulk size 100', speed: 183176, ops: 1600000 },
'set foo0,bar Bulk size 1000': { key: 'set foo0,bar Bulk size 1000', speed: 416716, ops: 1600000 },
'set foo0,bar Bulk size 5000': { key: 'set foo0,bar Bulk size 5000', speed: 418999, ops: 1600000 },
'set foo0,bar Bulk size 10000': { key: 'set foo0,bar Bulk size 10000', speed: 466361, ops: 1600000 },
'get foo0 Bulk size 5': { key: 'get foo0 Bulk size 5', speed: 60460, ops: 1600000 },
'get foo0 Bulk size 10': { key: 'get foo0 Bulk size 10', speed: 89647, ops: 1600000 },
'get foo0 Bulk size 100': { key: 'get foo0 Bulk size 100', speed: 256728, ops: 1600000 },
'get foo0 Bulk size 1000': { key: 'get foo0 Bulk size 1000', speed: 609413, ops: 1600000 },
'get foo0 Bulk size 5000': { key: 'get foo0 Bulk size 5000', speed: 782759, ops: 1600000 },
'get foo0 Bulk size 10000': { key: 'get foo0 Bulk size 10000', speed: 820932, ops: 1600000 },
'incr incr:foo0 Bulk size 5': { key: 'incr incr:foo0 Bulk size 5', speed: 48568, ops: 1600000 },
'incr incr:foo0 Bulk size 10': { key: 'incr incr:foo0 Bulk size 10', speed: 94079, ops: 1600000 },
'incr incr:foo0 Bulk size 100': { key: 'incr incr:foo0 Bulk size 100', speed: 340846, ops: 1600000 },
'incr incr:foo0 Bulk size 1000': { key: 'incr incr:foo0 Bulk size 1000', speed: 684154, ops: 1600000 },
'incr incr:foo0 Bulk size 5000': { key: 'incr incr:foo0 Bulk size 5000', speed: 780722, ops: 1600000 },
'incr incr:foo0 Bulk size 10000': {
key: 'incr incr:foo0 Bulk size 10000',
speed: 818468,
ops: 1600000
}
}
redis@next
==========================
redis: 4.0.0-rc.1
Date: 2021-09-21T22:00:15.440Z
CPU: 16
OS: darwin x64
node version: v14.14.0
==========================
results {
'ping foo0 Bulk size 5': { key: 'ping foo0 Bulk size 5', speed: 76116, ops: 1600000 },
'ping foo0 Bulk size 10': { key: 'ping foo0 Bulk size 10', speed: 72173, ops: 1600000 },
'ping foo0 Bulk size 100': { key: 'ping foo0 Bulk size 100', speed: 78615, ops: 1600000 },
'ping foo0 Bulk size 1000': { key: 'ping foo0 Bulk size 1000', speed: 67140, ops: 1600000 },
'ping foo0 Bulk size 5000': { key: 'ping foo0 Bulk size 5000', speed: 70304, ops: 1600000 },
'ping foo0 Bulk size 10000': { key: 'ping foo0 Bulk size 10000', speed: 74659, ops: 1600000 },
'set foo0,bar Bulk size 5': { key: 'set foo0,bar Bulk size 5', speed: 50494, ops: 1600000 },
'set foo0,bar Bulk size 10': { key: 'set foo0,bar Bulk size 10', speed: 45936, ops: 1600000 },
'set foo0,bar Bulk size 100': { key: 'set foo0,bar Bulk size 100', speed: 47909, ops: 1600000 },
'set foo0,bar Bulk size 1000': { key: 'set foo0,bar Bulk size 1000', speed: 48017, ops: 1600000 },
'set foo0,bar Bulk size 5000': { key: 'set foo0,bar Bulk size 5000', speed: 46994, ops: 1600000 },
'set foo0,bar Bulk size 10000': { key: 'set foo0,bar Bulk size 10000', speed: 49115, ops: 1600000 },
'get foo0 Bulk size 5': { key: 'get foo0 Bulk size 5', speed: 72141, ops: 1600000 },
'get foo0 Bulk size 10': { key: 'get foo0 Bulk size 10', speed: 70104, ops: 1600000 },
'get foo0 Bulk size 100': { key: 'get foo0 Bulk size 100', speed: 66602, ops: 1600000 },
'get foo0 Bulk size 1000': { key: 'get foo0 Bulk size 1000', speed: 67541, ops: 1600000 },
'get foo0 Bulk size 5000': { key: 'get foo0 Bulk size 5000', speed: 66941, ops: 1600000 },
'get foo0 Bulk size 10000': { key: 'get foo0 Bulk size 10000', speed: 68335, ops: 1600000 },
'incr incr:foo0 Bulk size 5': { key: 'incr incr:foo0 Bulk size 5', speed: 50547, ops: 1600000 },
'incr incr:foo0 Bulk size 10': { key: 'incr incr:foo0 Bulk size 10', speed: 48538, ops: 1600000 },
'incr incr:foo0 Bulk size 100': { key: 'incr incr:foo0 Bulk size 100', speed: 48268, ops: 1600000 },
'incr incr:foo0 Bulk size 1000': { key: 'incr incr:foo0 Bulk size 1000', speed: 47761, ops: 1600000 },
'incr incr:foo0 Bulk size 5000': { key: 'incr incr:foo0 Bulk size 5000', speed: 50049, ops: 1600000 },
'incr incr:foo0 Bulk size 10000': { key: 'incr incr:foo0 Bulk size 10000', speed: 48798, ops: 1600000 }
}
Added rawCallAsync and found a bug in the bench: redis@next is actually faster than ioredis (great job, BTW).
So, according to this benchmark, my lib is on average 5.4x faster than ioredis and 3.01x faster than redis@next.
{
'ping foo0 Bulk size 5': { key: 'ping foo0 Bulk size 5', speed: 157658, ops: 1600000 },
'ping foo0 Bulk size 10': { key: 'ping foo0 Bulk size 10', speed: 171948, ops: 1600000 },
'ping foo0 Bulk size 100': { key: 'ping foo0 Bulk size 100', speed: 178106, ops: 1600000 },
'ping foo0 Bulk size 1000': { key: 'ping foo0 Bulk size 1000', speed: 209247, ops: 1600000 },
'ping foo0 Bulk size 5000': { key: 'ping foo0 Bulk size 5000', speed: 497951, ops: 1600000 },
'ping foo0 Bulk size 10000': { key: 'ping foo0 Bulk size 10000', speed: 650268, ops: 1600000 },
'set foo0,bar Bulk size 5': { key: 'set foo0,bar Bulk size 5', speed: 107483, ops: 1600000 },
'set foo0,bar Bulk size 10': { key: 'set foo0,bar Bulk size 10', speed: 161727, ops: 1600000 },
'set foo0,bar Bulk size 100': { key: 'set foo0,bar Bulk size 100', speed: 479640, ops: 1600000 },
'set foo0,bar Bulk size 1000': { key: 'set foo0,bar Bulk size 1000', speed: 813473, ops: 1600000 },
'set foo0,bar Bulk size 5000': { key: 'set foo0,bar Bulk size 5000', speed: 763573, ops: 1600000 },
'set foo0,bar Bulk size 10000': { key: 'set foo0,bar Bulk size 10000', speed: 731751, ops: 1600000 },
'get foo0 Bulk size 5': { key: 'get foo0 Bulk size 5', speed: 110320, ops: 1600000 },
'get foo0 Bulk size 10': { key: 'get foo0 Bulk size 10', speed: 168143, ops: 1600000 },
'get foo0 Bulk size 100': { key: 'get foo0 Bulk size 100', speed: 455576, ops: 1600000 },
'get foo0 Bulk size 1000': { key: 'get foo0 Bulk size 1000', speed: 918136, ops: 1600000 },
'get foo0 Bulk size 5000': { key: 'get foo0 Bulk size 5000', speed: 980671, ops: 1600000 },
'get foo0 Bulk size 10000': { key: 'get foo0 Bulk size 10000', speed: 952764, ops: 1600000 },
'incr incr:foo0 Bulk size 5': { key: 'incr incr:foo0 Bulk size 5', speed: 85222, ops: 1600000 },
'incr incr:foo0 Bulk size 10': { key: 'incr incr:foo0 Bulk size 10', speed: 165740, ops: 1600000 },
'incr incr:foo0 Bulk size 100': { key: 'incr incr:foo0 Bulk size 100', speed: 734394, ops: 1600000 },
'incr incr:foo0 Bulk size 1000': {
key: 'incr incr:foo0 Bulk size 1000',
speed: 1383908,
ops: 1600000
},
'incr incr:foo0 Bulk size 5000': {
key: 'incr incr:foo0 Bulk size 5000',
speed: 1176017,
ops: 1600000
},
'incr incr:foo0 Bulk size 10000': {
key: 'incr incr:foo0 Bulk size 10000',
speed: 1099064,
ops: 1600000
}
}
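For reference, an "on average N times faster" figure like the 5.4x above can be computed as the arithmetic mean of the per-test speed ratios between two `results` objects of the shape shown in these dumps. This is only a sketch of one plausible way to do it, not the repo's actual bench code; the two-entry fixture below is made up, and both objects are assumed to share the same keys:

```javascript
// Average speedup: mean of per-test `speed` ratios between two benchmark
// `results` objects shaped like { testKey: { key, speed, ops }, ... }.
// Assumes both objects use the same test keys.
function averageSpeedup(fast, slow) {
  const keys = Object.keys(fast);
  const sum = keys.reduce((acc, k) => acc + fast[k].speed / slow[k].speed, 0);
  return sum / keys.length;
}

// Tiny made-up fixture (two tests only) just to show the computation.
const mine  = { a: { key: 'a', speed: 400, ops: 1 }, b: { key: 'b', speed: 900, ops: 1 } };
const other = { a: { key: 'a', speed: 100, ops: 1 }, b: { key: 'b', speed: 300, ops: 1 } };

console.log(averageSpeedup(mine, other)); // (400/100 + 900/300) / 2 = 3.5
```

Note that in the real dumps above the fast lib uses uppercase command names in its keys ('PING foo0 ...') while the others use lowercase, so the keys would need normalizing before pairing them up.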
So, what did you decide? Can I close the issue?
Hi, thanks for the warm words :)
You actually need to compare it with ioredis (https://github.com/luin/ioredis); they've implemented a very cool library and have a huge community.
In short, it depends very much on many factors: if you can make 3+ requests per redis-server per transaction, then yes, it is valuable.
If you have tons of key-value pairs to deal with, it is good; but if you have 1-2 huge ZSETs/HSETs, you will not feel much of a difference.
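To illustrate why "3+ requests per redis-server per transaction" matters: pipelining amortizes the network round trip over many commands. A minimal sketch, using a hypothetical `FakeClient` (not the real driver API) that just counts round trips:

```javascript
// Hypothetical stand-in for a Redis connection: each method call below
// represents one network round trip, however many commands it carries.
class FakeClient {
  constructor() { this.roundTrips = 0; }
  // One command per round trip (no pipelining).
  async call(cmd) { this.roundTrips += 1; return 'OK'; }
  // Many commands per round trip (pipelining).
  async pipeline(cmds) { this.roundTrips += 1; return cmds.map(() => 'OK'); }
}

async function main() {
  const sequential = new FakeClient();
  for (let i = 0; i < 100; i++) await sequential.call(['SET', `k${i}`, 'v']);

  const pipelined = new FakeClient();
  await pipelined.pipeline(
    Array.from({ length: 100 }, (_, i) => ['SET', `k${i}`, 'v'])
  );

  // 100 round trips vs 1: with a 1 ms network RTT, roughly 100 ms vs 1 ms.
  console.log(sequential.roundTrips, pipelined.roundTrips); // 100 1
}
main();
```

The more independent commands you can batch per server, the closer throughput gets to being bounded by parsing speed rather than latency.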
BTW, ioredis is also async, so its performance is very close to my lib's. When I wrote this library, redis-cluster was in alpha; the situation has changed since then.
When I wrote this library redis-cluster was in alpha, now the situation is changed
How exactly did it change? :) Did it become faster/slower than ioredis?
@Dmitry-N-Medvedev my driver is still faster than the ioredis implementation, but they have more features (Pub/Sub, for example) and a much bigger community.
- I haven't run benchmarks for years; last time, it was 10-30x faster, but it depends on the scenario. As I said before, if you need to query 10-100+ keys, then it is fast. But if you need to increment one key and then, based on the incremented value, set another key (i.e. random access to a small number of keys), then it is not valuable, because the pipeline will not work. Without the pipeline, the driver works at the regular speed of node-redis.
- My opinion: if you want a more modern library than mine, use ioredis instead of node-redis. ioredis is much, much better and faster.
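The "increment, then set based on the result" case above can be sketched the same way: when the second command needs the first command's reply, you pay one round trip per command, no matter how clever the client is. `FakeRedis` below is a hypothetical in-memory stand-in, not the real driver:

```javascript
// Hypothetical in-memory client: every reply we wait on costs a round trip.
class FakeRedis {
  constructor() { this.roundTrips = 0; this.store = new Map(); }
  async call([cmd, key, val]) {
    this.roundTrips += 1;
    if (cmd === 'INCR') {
      const n = (this.store.get(key) || 0) + 1;
      this.store.set(key, n);
      return n;
    }
    if (cmd === 'SET') { this.store.set(key, val); return 'OK'; }
  }
}

async function main() {
  const c = new FakeRedis();
  // The SET depends on the INCR reply, so the two commands cannot share
  // a pipeline: we must await the first before we can build the second.
  const n = await c.call(['INCR', 'counter']);
  await c.call(['SET', `user:${n}`, 'payload']);
  console.log(c.roundTrips); // 2
}
main();
```

With this access pattern, every client is latency-bound, which is why the speed advantage of a pipelining driver disappears.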
@h0x91b ,
I don't care about the size of the community. It's just a bad metric. I care about speed and speed only. We require speed. We need it to handle millions of clients. If your lib is faster by 0.5% compared to some other lib this is enough for me personally to stick to your lib.
Going back to the community-size argument: for instance, node-redis has a huge community compared to yours. But you replied to my message within an hour, while, from what I can see in the issues section of node-redis, lots of questions are left unanswered for days. Which level of service would I choose to build a crypto bank on? Easy question.
I also don't care about wrappers like redisInstance.hmset(); I am absolutely fine with the redisInstance.rawCallAsync([]) notation, specifically in light of the fact that Redis itself is constantly adding new commands, and I have no intention of sitting around waiting for you to add a corresponding wrapper for each of them. Array notation is future-proof and just fine. I would even convert all strings into Buffers if that were an option to speed up your lib a tiny bit more.
Syntactic sugar is for the front-end guys. Back-end guys know the price of it.
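Array notation also makes it trivial to roll your own thin wrapper without waiting for the library to add one. A sketch, assuming a client that exposes rawCallAsync(array) as discussed above; the recording stub below is hypothetical, standing in for a real fast-redis-cluster instance:

```javascript
// Hypothetical stub recording what would be sent over the wire; a real
// client would be a fast-redis-cluster instance with rawCallAsync([...]).
const client = {
  sent: [],
  async rawCallAsync(args) { this.sent.push(args); return 'OK'; }
};

// Generic wrapper: any current or future Redis command works, because
// everything is just an array of strings.
const call = (...parts) => client.rawCallAsync(parts.map(String));

async function main() {
  await call('SET', 'foo', 'bar');
  await call('HSET', 'h', 'field', 42); // non-strings are stringified
  await call('ZPOPMIN', 'z');           // new commands need no new wrapper
  console.log(client.sent.length); // 3
}
main();
```

This is exactly why the array form is future-proof: the wrapper never has to know the command set.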
Regarding pub/sub: I'm not sure at the moment, but if you could export the redis-fast-driver instance from fast-redis-cluster, I could use pub/sub just fine. If that is not feasible, I can import both libraries and use one for the cluster and the other for pub/sub, or abandon pub/sub entirely in favour of streams or other protocols (like ZeroMQ). No problem here.
Speed is the only problem I am constantly addressing. Nothing but speed.
Tried to renew the benchmark but had no luck: matcha is dead and doesn't work on Node 14, benchmark.js works well only for synchronous code, and @c4312/matcha is complete crap...
Literally, I just can't find any benchmarking library for Node.js.
Unbelievable.
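For what it's worth, a minimal async ops/sec harness producing results in the same { key, speed, ops } shape as the dumps above is only a few lines of plain Node.js. This is a sketch, not the repo's actual bench code; the dummy `noop` function stands in for a real client call:

```javascript
// Minimal async benchmark: run `fn` `ops` times, report ops/sec.
async function bench(key, fn, ops) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < ops; i++) await fn(i);
  const elapsedSec = Number(process.hrtime.bigint() - start) / 1e9;
  return { key, speed: Math.round(ops / elapsedSec), ops };
}

async function main() {
  // Dummy async op standing in for a client call like rawCallAsync(['PING']).
  const noop = async () => 'PONG';
  const result = await bench('PING noop', noop, 100000);
  console.log(result); // { key: 'PING noop', speed: <ops/sec>, ops: 100000 }
}
main();
```

It won't give you matcha-style statistical rigor (warm-up, outlier rejection), but for comparing drivers at large op counts a simple wall-clock loop like this is often good enough.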
@h0x91b, anyway, thanks for trying! Much appreciated!
And again: speed is everything. It's 2021; software must run fast :)
@MoGnedy how do you like the numbers? :)
yes please :)