bunogi / darkredis
[UNMAINTAINED] An asynchronous Redis client in Rust using std::future
License: zlib License
There does not appear to be a way to stream individual `Command` responses back from Redis. Instead, memory is allocated in a tight loop. For large values (e.g. blobs stored in Redis) this is problematic.
Sorry for this question, there was no other channel for asking it. My question is: I want to listen to Redis all the time. The program should never close; it should always keep listening to the Redis port. Can I do that, and how?
OK, I created a simple one.
```rust
let pool = ConnectionPool::create("127.0.0.1:6379".into(), None, num_cpus::get()).await?;
let server = async move {
    let mut conn = pool.get().await;
    loop {
        match conn.get("generates").await {
            Ok(_school) => {
                // TODO: process the value
            }
            Err(_e) => {
                // TODO: handle the error
            }
        }
    }
};

// And away!
// conn.set("generates", b"behind the bookshelf").await?;
// let secret_entrance = conn.get("generates").await?;
// println!("{:?}", String::from_utf8(secret_entrance.unwrap()).unwrap().parse::<i64>().unwrap());
// Keep our secrets
// conn.del("secret_entrance").await?;

server.await;
Ok(())
```
The current connection pool implementation is pretty naive and is probably better handled in a separate crate. deadpool seems like a great option.
Calls to `.serialize` end up causing a lot of heap churn. It would be very helpful for users with high performance requirements to be able to control heap allocations in this library, especially when those allocations occur in tight loops. One way to do this would be to allow users to pass an owned `Vec<u8>` to the command serializer. Another would be to create a `CommandBuilder` type that serializes commands and can be reset to a clean state between serializations (this is the approach we take in FlatBuffers).
I'll note that if serialization can't fail, it would be even better to not use scratch space at all and instead use a `Write` trait from async-std as the serialization destination.
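The builder idea could look roughly like this. This is a minimal sketch assuming the RESP wire format Redis speaks; the names (`CommandBuilder`, `arg`, `serialize_into`, `reset`) are hypothetical, not part of the darkredis API.

```rust
// Hypothetical reusable command builder: the internal buffer survives
// reset(), so repeated serialization in a tight loop does not reallocate.
pub struct CommandBuilder {
    buf: Vec<u8>, // serialized arguments, reused between commands
    args: usize,  // number of arguments appended so far
}

impl CommandBuilder {
    pub fn new() -> Self {
        CommandBuilder { buf: Vec::new(), args: 0 }
    }

    // Append one argument as a RESP bulk string: $<len>\r\n<bytes>\r\n
    pub fn arg(&mut self, arg: &[u8]) -> &mut Self {
        self.buf.extend_from_slice(format!("${}\r\n", arg.len()).as_bytes());
        self.buf.extend_from_slice(arg);
        self.buf.extend_from_slice(b"\r\n");
        self.args += 1;
        self
    }

    // Write the full command (*<n>\r\n followed by the arguments) into a
    // caller-owned buffer, so the caller controls the allocation.
    pub fn serialize_into(&self, out: &mut Vec<u8>) {
        out.extend_from_slice(format!("*{}\r\n", self.args).as_bytes());
        out.extend_from_slice(&self.buf);
    }

    // Clear logical state but keep the allocation for the next command.
    pub fn reset(&mut self) {
        self.buf.clear();
        self.args = 0;
    }
}
```

Reusing one builder (or one output `Vec<u8>`) across iterations is what removes the per-command heap churn.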
Cool library, and thanks for working on it!
Currently, most functions take `AsRef<[u8]>` when serializing and return a `Vec<u8>` when deserializing, leaving this to the user. It would be more ergonomic and less error-prone to have a conversion trait of some kind which performs this automatically.
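Such a trait could be sketched as follows. The trait name (`FromValue`) and method are made up here to illustrate the suggestion, not part of darkredis.

```rust
use std::string::FromUtf8Error;

// Hypothetical conversion trait: each target type knows how to decode
// itself from the raw bytes Redis returned.
pub trait FromValue: Sized {
    type Error;
    fn from_bytes(bytes: Vec<u8>) -> Result<Self, Self::Error>;
}

impl FromValue for String {
    type Error = FromUtf8Error;
    fn from_bytes(bytes: Vec<u8>) -> Result<Self, Self::Error> {
        String::from_utf8(bytes)
    }
}

impl FromValue for i64 {
    type Error = Box<dyn std::error::Error>;
    fn from_bytes(bytes: Vec<u8>) -> Result<Self, Self::Error> {
        // Redis stores integers as ASCII text, so decode then parse.
        Ok(std::str::from_utf8(&bytes)?.parse()?)
    }
}
```

A `conn.get::<i64>("counter")`-style API could then be built on top of this, removing the manual `String::from_utf8(...).parse()` chains seen earlier in this thread.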
Currently these functions cause a lot of allocations when executed which is far from ideal.
This is a tracking issue for 0.6.0. The following changes have to be made before release:
- Make `CommandList` return a list of `Result`s instead of wrapping the whole thing in one
- Remove `ConnectionPool` in favour of an adapter crate for deadpool (#3; this will no longer block 0.6.0's release)
- Rename the `runtime_agnostic` feature to reflect that it's using async-std's runtime

I almost wrote an issue asking for fallible `Value` types, but then I realized that `Value` is meant to handle errors, too. It would be clearer to new users to call that type something like `CommandResult`.
Support connecting with an address like this: redis://[email protected]:6379/2
The objects returned from `ConnectionPool::get` are `MutexGuard<'_, Connection>` values. The definition of `MutexGuard` depends on whether the async_std feature is enabled or not. Could we make the darkredis `MutexGuard` public so that we can reference it? Thanks!
Right now, `Value::Integer` is an `isize`. That means it's 32 bits on 32-bit systems, and 64 bits on 64-bit systems. Could we change this to be i32 or i64, depending on what Redis provides? Or even a bigint of some kind? I anticipate having a mixture of 32- and 64-bit systems running, and getting them to interoperate is confusing when the API is `isize`. Thanks!
As far as I can tell, this is a regression in the rust compiler. rust-lang/rust#64433. Recommend using nightly-2019-09-10 or earlier for now.
We do this in FlatBuffers and it has served us well: https://github.com/google/flatbuffers/blob/master/tests/rust_usage_test/bin/alloc_check.rs
Using darkredis with async-std, if I suspend my redis instance (using Ctrl-Z) to simulate a netsplit, it seems that I can't catch the timeout. In particular, the `ConnectionPool::get` function seems to time out, even when I wrap it in `async_std::future::timeout`.
Is there a way to do this in the current API? Thanks!
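For reference, the wrapping being attempted looks roughly like this sketch. It assumes async-std and a reachable Redis; whether `pool.get()` is actually cancellable this way during a netsplit is exactly the open question here.

```rust
use std::time::Duration;
use async_std::future::timeout;

// Sketch only: bound both the pool checkout and the command separately.
// Returns None on timeout, on a command error, or on a missing key.
async fn get_with_timeout(pool: &darkredis::ConnectionPool, key: &str) -> Option<Vec<u8>> {
    // Bound the pool checkout itself...
    let mut conn = timeout(Duration::from_secs(1), pool.get()).await.ok()?;
    // ...and then the command on the checked-out connection.
    timeout(Duration::from_secs(1), conn.get(key)).await.ok()?.ok()?
}
```

If the underlying socket read inside `get` never yields to the executor, the outer `timeout` cannot fire, which would explain the observed hang.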
Currently, there's no good API for the SCAN commands. This should be added.
Pipelined command results would be more performant and ergonomic if they were streamed back to the user, instead of batched and allocated.
Currently, the implementation of `ConnectionPool::create` calls `to_string` on the password argument `Option<&str>`. From the perspective of the caller, working with lifetimes for the password only to have it be converted to a `String` anyway seems not to be worth it. I suggest changing the password argument to be `Option<String>`. Thanks!
This prevents you from doing things like

```rust
let password: Option<String> = None; // or anything else!
darkredis::ConnectionPool::create("127.0.0.1:1234", password.as_ref(), 1);
```
The change should be made backwards compatible if possible.
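One backwards-compatible route could be a small helper trait that both the existing `Option<&str>` callers and new `Option<String>` callers satisfy. The trait and function names below are hypothetical, sketched only to show the shape of the idea.

```rust
// Hypothetical trait: anything that can be turned into an owned password.
pub trait IntoPassword {
    fn into_password(self) -> Option<String>;
}

// Existing callers passing Option<&str> keep compiling.
impl IntoPassword for Option<&str> {
    fn into_password(self) -> Option<String> {
        self.map(str::to_string)
    }
}

// New callers can hand over an owned Option<String> directly.
impl IntoPassword for Option<String> {
    fn into_password(self) -> Option<String> {
        self
    }
}

// Stand-in for ConnectionPool::create: normalizes whatever the caller passed.
pub fn normalize_password<P: IntoPassword>(password: P) -> Option<String> {
    password.into_password()
}
```

With a signature like `create(address, password: impl IntoPassword, connections)`, the motivating example (`password: Option<String>` passed by value) would just work, without breaking existing `Option<&str>` call sites.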