armon / bloomd
C network daemon for bloom filters
Home Page: http://armon.github.io/bloomd
License: Other
I see that this is logging to syslog. Can this location be changed through the config file?
Hello, I want to access bloomd from a service written in C. Is there a client library I could use? You know, there is hiredis, which lets us access redis from a C service; does such a library exist for bloomd?
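For what it's worth, bloomd speaks a simple line-based text protocol over TCP (port 8673 by default), so even without a dedicated library a minimal C client is not much code. A rough sketch, assuming a local bloomd and a filter/key of my own invention; error handling and partial reads are glossed over:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8673);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) return 1;

    const char *cmd = "check myfilter somekey\n";  /* hypothetical names */
    write(fd, cmd, strlen(cmd));

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("bloomd replied: %s", buf);  /* "Yes", "No", or an error line */
    }
    close(fd);
    return 0;
}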
Can the repo be updated to the latest version?
Is this project abandoned? I'm asking this question because the build for the master branch has been broken for almost a year. I think this is a really useful project and I want to use it in a production environment.
Hi,
I've been using bloomd in production since yesterday and I must say I'm impressed by its stability and low CPU consumption. You did a really good job there, congrats!
However, I've got some questions regarding memory consumption. Currently bloomd's memory consumption is steadily increasing (RES: 106M, SHR: 105M, VIRT: 243M).
Here are my bloom filters after one day (columns: name, probability, storage bytes, capacity, size):
f1 0.000100 300046 100000 91793
f2 0.000100 105575294 34100000 13919873
f3 0.000100 300046 100000 72710
f3 0.000100 1509656 500000 291040
f4 0.000100 1509656 500000 124098
Note that they keep growing like that at a nearly constant rate. And since scalable bloom filters work by adding new bloom filters whenever the size ratio is reached, memory consumption will increase indefinitely.
I'm not an expert in C, but I wondered if you could update the readme to explain how the "Automatically faults cold filters out of memory to save resources" feature works, so that users can take advantage of it.
If I understand it correctly, since my filters will never go cold (new data is added constantly), I thought I could create filters with composed names like "f{filterid}{weekoftheyear}{year}", where "{weekoftheyear}{year}" is information extracted from every datum the filters test against. That way, filters holding older data could be removed from memory but would still be available just in case.
Is this the right approach? What do you think?
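A minimal sketch of that naming scheme (the underscore separators and the concrete values are my additions for readability; week/year would come from each record, as described above):

#include <stdio.h>

int main(void) {
    /* Hypothetical values extracted from the record being tested: */
    int filter_id = 1, week_of_year = 23, year = 2013;

    /* Compose a per-week filter name, e.g. "f1_23_2013". Once the week
       is over the filter stops being touched and can go cold. */
    char name[64];
    snprintf(name, sizeof(name), "f%d_%d_%d", filter_id, week_of_year, year);
    printf("set %s some_key\n", name);
    return 0;
}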
writev(8, [{"START\n", 6}, {"metrics:rewards:WEEKLY:2012-06-0"..., 74}, {"metrics:sessions:DAILY:2012-05-1"..., 77}, {"metrics:ach_lb:DAILY:2012-05-30T"..., 74}, {"metrics:rewards:DAILY:2012-04-19"..., 71}, {"metrics:rewards:ALL_TIME 0.00010"..., 45}, {"metrics:rewards:DAILY:2012-06-05"..., 71}, {"metrics:sessions:DAILY:2012-09-1"..., 77}, {"metrics:offers:DAILY:2012-08-07T"..., 75}, {"metrics:rewards:DAILY:2012-04-21"..., 71}, {"metrics:rewards:WEEKLY:2012-03-2"..., 74}, {"metrics:offers:WEEKLY:2012-04-09"..., 76}, {"metrics:sessions:WEEKLY:2012-10-"..., 78}, {"metrics:sessions:DAILY:2012-03-2"..., 77}, {"metrics:rewards:DAILY:2012-06-09"..., 71}, {"metrics:rewardsgifted:MONTHLY:20"..., 79}, {"metrics:sessions:DAILY:2012-08-1"..., 77}, {"metrics:ach_lb:DAILY:2012-07-26T"..., 75}, {"metrics:ach_lb:DAILY:2012-06-24T"..., 75}, {"metrics:rewardsgifted:DAILY:2012"..., 75}, {"metrics:sessions:DAILY:2012-07-0"..., 77}, {"metrics:sessions:DAILY:2012-09-0"..., 77}, {"metrics:offers:DAILY:2012-05-20T"..., 74}, {"metrics:rewards:DAILY:2012-07-16"..., 73}, {"metrics:sessions:DAILY:2012-04-1"..., 77}, {"metrics:swarm:DAILY:2012-06-30T0"..., 68}, {"metrics:ach_lb:DAILY:2012-06-16T"..., 75}, {"metrics:ach_lb:DAILY:2012-05-19T"..., 74}, {"metrics:ach_lb:DAILY:2012-08-05T"..., 75}, {"metrics:ach_lb:DAILY:2012-07-11T"..., 75}, {"metrics:sessions:DAILY:2012-06-2"..., 77}, {"metrics:offers:DAILY:2012-05-24T"..., 74}, ...], 1270) = -1 EINVAL (Invalid argument)
Hi,
The protocol for bloomd, while very easy to read (for an English-speaker ;) ), makes it hard to program against for some reasonable use cases.
A concrete example:
I'm implementing a "set this key, and automatically create the filter if it doesn't exist, then retry" method for node-bloomd. This will be important for our use case, and for incrementally creating bloom filters on demand instead of provisioning them all up front.
The response I get back from the server when the filter doesn't exist is "Filter does not exist". String matching against this is flaky and does not fill me with confidence :) What if you changed it to "Filter doesn't exist"? :)
Another example: it is hard(er than it should be) to differentiate a bulk/multi response, which is a single line of words, from an error, which is also a single line of words.
I'd like to propose that errors come back with a distinct first character, an error code, and then the human readable part. For example:
"_42: Filter does not exist"
"_17: Filter is being deleted"
This would balance the needs of being able to read the response (which is very helpful in debugging), with the ability to respond programmatically to error cases where appropriate.
I don't care about the specific syntax really, only that it be reasonably parseable in a sane manner :)
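To make the proposal concrete, here is a rough sketch of what client-side dispatch could look like under the suggested "_<code>: <message>" convention (the format and the codes are only the suggestion above, not anything bloomd implements today):

#include <stdio.h>
#include <stdlib.h>

/* Returns -1 and fills *code if the line is an error under the proposed
   convention; returns 0 for a normal response line. */
static int parse_line(const char *line, long *code) {
    if (line[0] == '_') {
        char *end;
        long c = strtol(line + 1, &end, 10);
        if (end != line + 1 && *end == ':') {
            *code = c;
            return -1;
        }
    }
    return 0;
}

int main(void) {
    long code;
    if (parse_line("_42: Filter does not exist", &code) < 0)
        printf("error %ld\n", code);  /* prints: error 42 */
    return 0;
}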
Thanks for your consideration!
Cheers,
Jamie.
Hello! We created an RPM and an init script for this daemon; maybe you could mention our repo for RHEL users in the readme?
http://vortex-rpm.org -- repo
https://github.com/vortex-rpm/bloomd -- spec for bloomd
Alpine Linux uses musl.
$ scons
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
gcc -o src/libbloom/bitmap.o -c -std=c99 -Wall -Werror -Wextra -O2 -D_GNU_SOURCE src/libbloom/bitmap.c
In file included from src/libbloom/bitmap.c:9:0:
/usr/include/sys/errno.h:1:2: error: #warning redirecting incorrect #include <sys/errno.h> to <errno.h> [-Werror=cpp]
#warning redirecting incorrect #include <sys/errno.h> to <errno.h>
^
cc1: all warnings being treated as errors
scons: *** [src/libbloom/bitmap.o] Error 1
scons: building terminated because of errors.
Content of /usr/include/sys/errno.h
#warning redirecting incorrect #include <sys/errno.h> to <errno.h>
#include <errno.h>
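The warning text itself points at the likely one-line fix; a sketch of the change in src/libbloom/bitmap.c (line 9, per the log above), assuming nothing else relies on the legacy header:

/* src/libbloom/bitmap.c:9, before (musl emits the #warning, and
   -Werror turns it into an error): */
#include <sys/errno.h>

/* after (<sys/errno.h> is just a legacy alias for this): */
#include <errno.h>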
OS: Pop!_OS 22.04 LTS
Build Log
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
gcc -o src/bloomd/conn_handler.o -c -std=c99 -D_GNU_SOURCE -Wall -Wextra -Werror -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/conn_handler.c
src/bloomd/conn_handler.c: In function 'handle_client_connect':
src/bloomd/conn_handler.c:160:5: error: 'arg_buf_len' may be used uninitialized in this function [-Werror=maybe-uninitialized]
160 | handle_filt_key_cmd(handle, args, args_len, filtmgr_check_keys);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
src/bloomd/conn_handler.c:65:18: note: 'arg_buf_len' was declared here
65 | int buf_len, arg_buf_len, should_free;
| ^~~~~~~~~~~
cc1: all warnings being treated as errors
scons: *** [src/bloomd/conn_handler.o] Error 1
scons: building terminated because of errors.
And once this is fixed, there is another error:
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
gcc -o src/bloomd/conn_handler.o -c -std=c99 -D_GNU_SOURCE -Wall -Wextra -Werror -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/conn_handler.c
gcc -o src/bloomd/bloomd.o -c -std=c99 -D_GNU_SOURCE -Wall -Wextra -Werror -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/bloomd.c
src/bloomd/bloomd.c: In function 'main':
src/bloomd/bloomd.c:180:43: error: cast between incompatible function types from 'void (*)(worker_args *)' to 'void * (*)(void *)' [-Werror=cast-function-type]
180 | pthread_create(&threads[i], NULL, (void*(*)(void*))worker_main, &wargs);
| ^
cc1: all warnings being treated as errors
scons: *** [src/bloomd/bloomd.o] Error 1
scons: building terminated because of errors.
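Both errors look like newer GCC promoting long-standing warnings to errors under -Werror. Minimal sketches of the usual fixes; these are my guesses, not a confirmed patch:

/* 1) src/bloomd/conn_handler.c:65, initialize the locals so
      -Wmaybe-uninitialized has nothing to flag: */
int buf_len = 0, arg_buf_len = 0, should_free = 0;

/* 2) src/bloomd/bloomd.c:180, give worker_main the signature pthreads
      expects instead of casting the function pointer (the cast is what
      -Wcast-function-type objects to): */
void *worker_main(void *args) {
    worker_args *wargs = args;   /* worker_args is the existing struct */
    /* ... existing body ... */
    (void)wargs;
    return NULL;
}

/* and at the call site: */
pthread_create(&threads[i], NULL, worker_main, &wargs);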
Running bloomd with default options:
create bar capacity=89000 prob=0.002 in_memory=0
Done
I am setting in_memory to 0, then:
info bar
START
capacity 89000
checks 0
check_hits 0
check_misses 0
in_memory 1 # HERE
page_ins 0
page_outs 0
probability 0.002000
sets 0
set_hits 0
set_misses 0
size 0
storage 197730
It seems bloomd is ignoring the in_memory option. Why?
Hi there. Recently we started seeing a lot of timeouts when performing queries against a BIG set (over 200 million filters). For us a timeout means the query to bloomd takes longer than 250ms (we even increased the limit to 500ms and it still times out).
Are there any best practices regarding the size of filters? Is it better to split them into multiple small ones and perform multiple queries? Are there any optimizations we can make to the queries?
This is a pretty well-known issue with hashmap implementations; it made the rounds of the major programming languages over the past few years. Here's Python, for example:
http://bugs.python.org/issue13703
Anywhere untrusted user input is used as a key, collisions can be generated so that all keys land in the same bucket, yielding O(N) lookups instead of the expected O(1). This can DoS consumers of the hashmap implementation (in Python's case, web apps; for bloomd, its unwary users).
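A toy illustration of the mechanism, using a deliberately weak additive hash (nothing like the murmur-based hashing bloomd actually uses): any permutation of the same bytes has the same sum, so an attacker who knows the hash can mass-produce keys that share one bucket.

#include <stdio.h>

/* Deliberately weak hash: just sums the key's bytes. */
static unsigned toy_hash(const char *key) {
    unsigned h = 0;
    while (*key) h += (unsigned char)*key++;
    return h % 1024;  /* bucket index */
}

int main(void) {
    /* Same bytes, different order -> same sum -> same bucket. */
    const char *keys[] = {"abc", "acb", "bac", "bca", "cab", "cba"};
    for (int i = 0; i < 6; i++)
        printf("%s -> bucket %u\n", keys[i], toy_hash(keys[i]));
    return 0;
}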
Hi,
I was wondering: by using both bloom and cuckoo filters with different hash functions, and thus reducing the chance of correlated errors, it would be highly unlikely for both filters to produce a false positive for the same element; the overall size/memory footprint and performance would also be better.
I.e., using both bloom and cuckoo filters with a high FPP (0.01 each) would lead to a far lower collision rate than using only a bloom or cuckoo filter with a low FPP (0.00001), and it would be faster and smaller.
Have you considered adding support for https://github.com/efficient/cuckoofilter ?
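Back-of-envelope for the claim above; note it only holds if the two filters' errors are actually independent, which using different hash families makes plausible but does not guarantee:

#include <stdio.h>

int main(void) {
    double p_bloom = 0.01, p_cuckoo = 0.01;
    /* An element is a false positive overall only if *both* filters
       err on it; under independence that is the product: */
    printf("combined FPP: %g\n", p_bloom * p_cuckoo);  /* 0.0001 */
    return 0;
}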
Just wanted to let you know of a NodeJS implementation that I recently put together, which you can see here:
https://github.com/majelbstoat/node-bloomd
Please feel free to add it to the list of known clients, if you think it suitable.
Thanks for bloomd; it's awesome! :)
What are the limits on the number of elements I can add into bloomd?
Hi,
Bloomd works very stably for us in production when we only do adds and checks. We do hundreds of queries per second and it has never crashed.
However, when performing automated tests it crashes quite often. My guess is that it crashes when we recreate the filters between tests.
I can easily crash bloomd with this script https://gist.github.com/4682164 by running it multiple times, one after another, until it crashes, like this:
for i in `seq 100`; do ruby crash.rb; sleep 1; done
In a C program, I would like to reuse an existing file generated by bloomd. I've used the standard bloomd/telnet commands "create"/"set", which generate the flushed bloomd file bloomd.foobar/data.000.mmap:
#include <sys/stat.h>
#include <string.h>
#include "bitmap.h"   /* libbloom headers, assuming an include path to src/libbloom */
#include "bloom.h"

int retCode;
char myFile[1024];
strcpy(myFile, "bloomd.foobar/data.000.mmap");
struct stat myFileStat;
retCode = stat(myFile, &myFileStat);  // returns 0: ok
bloom_bitmap myBloom_bitmap;
retCode = bitmap_from_filename(myFile, myFileStat.st_size, 0, ANONYMOUS, &myBloom_bitmap);  // returns 0: ok
bloom_bloomfilter mybloom_bloomfilter;
retCode = bf_from_bitmap(&myBloom_bitmap, 1, 0, &mybloom_bloomfilter);  // returns -1: ko
bitmap_from_filename is OK (returns 0), but bf_from_bitmap is KO (returns -1).
Next step: check whether a string is absent from the bloom_bloomfilter...
Thank you for your support.
I just got a pybloomd.BloomdError: Got response: Internal Error
(from filters[fname] = client.create_filter(fname); see the code below).
I have 6 Python bloomd clients that send a lot of requests to it. The data dir of bloomd currently contains 1023 bloom filters and counting.
My bloomd configuration is:
[bloomd]
tcp_port = 8673
udp_port = 8674
data_dir = /data/bloomd
log_level = INFO
cold_interval = 3600
flush_interval = 300
initial_capacity = 10001
default_probability = 0.0001
workers = 4
use_mmap = 1
Each client does the following for each incoming message:
    filter = getBloomFilter(message.created_at)
    if not filter.__contains__(message.id):
        filter.add(message.id)
        ... do other things ...
with the following Python code:

from datetime import datetime
from pybloomd import BloomdClient

client = BloomdClient(['localhost'])
filters = {}
serviceName = "hey"

def getBloomFilterName(created_at):
    now = datetime.utcfromtimestamp(created_at / 1000.0)
    now = now.replace(microsecond=0, second=0, minute=0, hour=0)
    return "%s.%s-%s-%s" % (serviceName, now.year, now.month, now.day)

def getBloomFilter(created_at):
    fname = getBloomFilterName(created_at)
    if fname not in filters:
        filters[fname] = client.create_filter(fname)
    return filters[fname]
Multiple clients got the pybloomd.BloomdError: Got response: Internal Error while calling client.create_filter(fname), within an interval of about 50 seconds. Some of them were under heavy load at the time, so I'm sure they were creating a lot of bloom filters at once.
It's too bad that this error doesn't say what the cause was, so I've got several questions and some feedback:
[Update] In fact I've seen this error in the log:
Apr 8 09:24:33 serv1 bloomd[11870]: Failed to fault in the filter 'f1.2012-8-10'. Err: -1
Apr 8 09:24:45 serv1 bloomd[11870]: Failed to scan files for filter 'f2.2012-11-15'. Too many open files
Apr 8 09:24:59 serv1 bloomd[11870]: Failed to scan files for filter 'f1.2013-1-26'. Too many open files
Apr 8 09:24:59 serv1 bloomd[11870]: Failed to fault in the filter 'f1.2013-1-26'. Err: -1
After a restart:
cat /proc/sys/fs/file-nr
1728 0 6606290
So currently the only way to fix this is to restart bloomd; I think there may be an issue (or room for improvement) in bloomd's file handling.
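One observation worth adding (my speculation, not a confirmed diagnosis): /proc/sys/fs/file-nr reports system-wide handles, but "Too many open files" (EMFILE) is governed by the per-process limit, and the common default soft limit of 1024 would line up suspiciously well with the ~1023 filters above. A sketch to print the limit the bloomd process actually runs under:

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        printf("open-file limit: soft=%llu hard=%llu\n",
               (unsigned long long)rl.rlim_cur,
               (unsigned long long)rl.rlim_max);
    return 0;
}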
Again, thanks for your hard work!
Hi, I'm running your server on one of my machines, with a few clients trying to put some links into a filter. This is the output from the bloomd server:
Failed to accept() connection! Too many open files.
Failed to accept() connection! Too many open files.
Failed to accept() connection! Too many open files.
And my client code:

from pybloomd import BloomdClient

client = BloomdClient(["localhost"])
bloom = client.create_filter("domains")
# --- some code and a loop here ---
bloom.add(domain)

So I have almost 10 clients trying to put items into the same filter, sometimes at the same time.
What can be the problem?
Hello,
I'm trying to use bloomd with bloom-python-driver, doing lots of sequential checks and also adding keys when they don't exist. This process takes longer than the flush interval, and when the flush thread starts, it drops all connections to bloomd.
I guess this is a bug, or does it work like this on purpose? Do you have any recommendations?
Thank you very much for your help in advance!
I did not see a configuration option to bind bloomd to 127.0.0.1 to make it more secure on public servers.
MurmurHash3_x86_32, MurmurHash3_x86_128, and MurmurHash3_x64_128 all take a 'uint32_t seed', but only MurmurHash3_x64_128 calls it a 'const uint32_t seed'. I don't think the const really matters, since this is a pass-by-value C API … and this is a super minor nit. Sorry.
We currently use bloomd to filter user browser history. bloomd's check performance is very good, about 300k checks/sec (24 workers), BUT we sometimes receive a SIGSEGV in bloomf_contains. Because there are so many filters, we had to move from SATA to SSD. Yesterday I found that filtmgr_check_keys has a race condition that causes the SIGSEGV; changing pthread_rwlock_rdlock(&filt->rwlock); to pthread_rwlock_wrlock(&filt->rwlock); solves the problem. Because thread_safe_fault changes sbf->filter, there is a tiny chance that sbf->filter is read in a wrong state.
Forgive my poor English.
Thanks armon, this is a very helpful project! My incomplete understanding of scalable bloom filters is probably the cause of this issue, but I see the "default" probability that I provide when creating a new SBF change from 0.0000001 to 0.0001000 across restarts.
I am not sure whether, as successive plain bloom filters are created with the probability reduction (default 0.9), the overall error probability tends towards the originally specified prob value (0.0000001) as we approach the capacity (in this case 1B); nor whether the probability reported by the info command after a restart is the probability of the latest plain filter in the SBF array.
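For the compounding question, a sketch of the arithmetic, assuming bloomd follows the usual scalable-bloom-filter construction (Almeida et al.), where layer i is created with probability p0 * r^i and r is probability_reduction (0.9 by default):

#include <math.h>
#include <stdio.h>

int main(void) {
    double p0 = 0.0000001, r = 0.9;
    double no_fp = 1.0;  /* P(no layer reports a false positive) */
    for (int i = 0; i < 20; i++)
        no_fp *= 1.0 - p0 * pow(r, i);
    /* The total stays below the geometric-series bound p0 / (1 - r). */
    printf("after 20 layers: %g (bound: %g)\n", 1.0 - no_fp, p0 / (1.0 - r));
    return 0;
}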
Repro:
$ bloomd &
$ echo "create test capacity=1000000000 prob=0.0000001" | nc 127.0.0.1 8673
$ echo "info test" | nc 127.0.0.1 8673
START
capacity 1000000000
checks 0
check_hits 0
check_misses 0
in_memory 1
page_ins 0
page_outs 0
probability 0.0000001000
sets 0
set_hits 0
set_misses 0
size 0
storage 4792529701
END
$ kill -9 $(ps | grep bloomd | awk '{print $1}')
$ bloomd &
$ echo "info test" | nc 127.0.0.1 8673
START
capacity 1000000000
checks 0
check_hits 0
check_misses 0
in_memory 1
page_ins 0
page_outs 0
probability 0.0001000000
sets 0
set_hits 0
set_misses 0
size 0
storage 4792529701
END
Hi there,
It would be nice if we could configure the syslog facility, priority, and tag in the bloomd config file. Imagine this kind of configuration:
syslog_facility=local0
syslog_tag="myprogram"
syslog_stdout_priority=info
syslog_stderr_priority=err
Hello,
After setting up and running bloomd without problems, I'm seeing a higher false-positive rate than expected. I'm using bloom-python-driver and creating a filter with initial_capacity=1000000 and a 0.0001 max false-positive probability. Then I insert and check the same URL set several times. After adding almost 110,000 URLs to the filter (in one loop), I run the script again, and this time it adds almost 40,000 of them again.
While the script runs it sometimes returns a "Client Error: Command not supported" error. I suppose it's caused by some malformed URLs; I just pass (in try..except) and continue the loop. Could that be the reason for the subsequent false positives?
When I check a key over a telnet connection, it returns "yes". What could be the reason for this problem? Do you have any ideas?
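One possibility worth checking (my speculation): bloomd's protocol is line- and space-delimited, so a URL containing a space parses as extra arguments, and one containing a newline makes the remainder of it look like a new, unrecognized command, which would produce exactly "Client Error: Command not supported". A hypothetical client-side workaround is to set a fixed-width digest of each URL instead of the raw URL; FNV-1a here is just a stand-in, any stable hash would do:

#include <stdio.h>
#include <stdint.h>

/* FNV-1a 64-bit digest. */
static uint64_t fnv1a(const char *s) {
    uint64_t h = 1469598103934665603ULL;
    while (*s) { h ^= (unsigned char)*s++; h *= 1099511628211ULL; }
    return h;
}

int main(void) {
    const char *url = "http://example.com/some path?q=1";  /* note the space */
    /* 16 hex chars: never contains spaces or newlines. */
    printf("set urls %016llx\n", (unsigned long long)fnv1a(url));
    return 0;
}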
Thank you very much for your help in advance...
I created a filter with in_memory = 0; why does info show in_memory as 1?
create test capacity=10001 prob=0.01 in_memory=0
Array
(
[capacity] => 10001
[checks] => 0
[check_hits] => 0
[check_misses] => 0
[in_memory] => 1
[page_ins] => 0
[page_outs] => 0
[probability] => 0.010000
[sets] => 0
[set_hits] => 0
[set_misses] => 0
[size] => 0
[storage] => 18486
)
bloomd.conf
[bloomd]
port = 8673
udp_port = 8674
scale_size = 4
flush_interval = 60
cold_interval = 3600
in_memory = 0
use_mmap = 0
workers = 1
initial_capacity = 100000
default_probability = 1e-4
probability_reduction = 0.9
data_dir = /var/lib/bloomd
log_level = INFO
I'm trying to get bloomd running in a cluster configuration with the data_dir on a DRBD volume synced between two nodes. Everything works fine except redirecting stdout: nohup with a redirect, daemonize, and every other approach seems to fail. Does bloomd require a terminal?
screen -d -m -L ./bloomd seems to be the only option.
It would be great to have server-supported atomic renames.
We have a script that builds several large bloom filters, which can take a while. It would be great if we could build the filters under temporary names and then atomically rename all of them:
rename temp_filter_1 filter_1, temp_filter_2 filter_2, temp_filter_3 filter_3
That would let us build the bloom filters in the background under a pseudonym while checks against the final filter names fail; once the build completes, we perform the renames and the client checks suddenly begin working.
Client 1:
c filter_1 key
Filter does not exist
Client 2:
rename temp_filter_1 filter_1
Client 1:
c filter_1 key
No
If we add support for a write-ahead log, then external tools can be used to set up log streaming and build a sane high-availability setup.
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
gcc -o src/bloomd/art.o -c -std=c99 -D_GNU_SOURCE -Wall -Wextra -Werror -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/art.c
src/bloomd/art.c:5:23: fatal error: emmintrin.h: No such file or directory
#include <emmintrin.h>
^
compilation terminated.
scons: *** [src/bloomd/art.o] Error 1
scons: building terminated because of errors.
Can somebody tell me how to fix this?
Using built-in specs.
Target: i686-apple-darwin11
Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2336.9~22/src/configure --disable-checking --enable-werror --prefix=/Applications/Xcode.app/Contents/Developer/usr/llvm-gcc-4.2 --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm- --program-transform-name=/^[cg][^.-]*$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin11 --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.9~22/dst-llvmCore/Developer/usr/local --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11 --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
Thread model: posix
gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00)
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
gcc -o src/bloomd/networking.o -c -std=c99 -D_GNU_SOURCE -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/networking.c
gcc -o src/bloomd/conn_handler.o -c -std=c99 -D_GNU_SOURCE -Wall -Wextra -Werror -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/conn_handler.c
gcc -o src/bloomd/filter.o -c -std=c99 -D_GNU_SOURCE -Wall -Wextra -Werror -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/filter.c
cc1: warnings being treated as errors
src/bloomd/filter.c: In function 'bloomf_delete':
src/bloomd/filter.c:250: warning: passing argument 3 of 'scandir' from incompatible pointer type
src/bloomd/filter.c: In function 'discover_existing_filters':
src/bloomd/filter.c:452: warning: passing argument 3 of 'scandir' from incompatible pointer type
src/bloomd/filter.c: In function 'bloomf_sbf_callback':
src/bloomd/filter.c:596: warning: passing argument 3 of 'scandir' from incompatible pointer type
scons: *** [src/bloomd/filter.o] Error 1
scons: building terminated because of errors.
I've removed -Werror from SConstruct and it compiled.
Everything works besides that; great job!
I'm working off the latest master, on Ubuntu 18.10.
scons version:
script: v3.0.5.a56bbd8c09fb219ab8a9673330ffcd55279219d0, 2019-03-26 23:16:31, by bdeegan on kufra
engine: v3.0.5.a56bbd8c09fb219ab8a9673330ffcd55279219d0, 2019-03-26 23:16:31, by bdeegan on kufra
$ python3 `which scons`
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
gcc -o src/bloomd/bloomd.o -c -std=c99 -D_GNU_SOURCE -Wall -Wextra -Werror -O2 -pthread -Isrc/bloomd/ -Ideps/inih/ -Ideps/libev/ -Isrc/libbloom/ src/bloomd/bloomd.c
src/bloomd/bloomd.c: In function 'main':
src/bloomd/bloomd.c:180:43: error: cast between incompatible function types from 'void (*)(worker_args *)' {aka 'void (*)(struct <anonymous> *)'} to 'void * (*)(void *)' [-Werror=cast-function-type]
pthread_create(&threads[i], NULL, (void*(*)(void*))worker_main, &wargs);
^
cc1: all warnings being treated as errors
scons: *** [src/bloomd/bloomd.o] Error 1
scons: building terminated because of errors.
GCC 4.7.2 with -Wextra noticed this: 'more' is a uint64_t, while pread() returns a ssize_t (a signed size_t). The code checks for an error return (-1) by testing whether 'more' is less than zero, which can never be true since it's unsigned, so the error handling never runs.
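A minimal sketch of the pattern and the obvious fix (names are placeholders, not the actual bloomd code):

#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

void example(int fd, char *buf, size_t len, off_t off) {
    /* Broken: pread returns ssize_t, so -1 wraps to a huge positive
       value and the check below is always false. */
    uint64_t more = pread(fd, buf, len, off);
    if (more < 0) { /* dead code: unsigned is never < 0 */ }

    /* Fixed: keep the signed return type for the error check. */
    ssize_t res = pread(fd, buf, len, off);
    if (res < 0) { /* handle the error */ }
}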