
Fast integer compression in C using the StreamVByte codec

License: Apache License 2.0

Topics: integer-compression, compression, simd, neon, arm, ssse3, x64

streamvbyte's Introduction

streamvbyte

Continuous integration: Ubuntu 22.04 (GCC 9, 10, 11 and 12; LLVM 12, 13, 14), Ubuntu 20.04 (GCC 9.4 and 10; LLVM 10 and 11), macOS 11 (LLVM 13; GCC 10, 11, 12), Visual Studio 2019 (VS16) and 2022 (VS17).

StreamVByte is a new integer compression technique that applies SIMD instructions (vectorization) to Google's Group Varint approach. The net result is faster than other byte-oriented compression techniques.

The approach is patent-free, and the code is available under the Apache License.

It includes fast differential coding.

It assumes a recent Intel processor (most Intel and AMD processors released after 2010) or an ARM processor with NEON instructions (which is almost all of them except for the tiny cores). Big-endian processors are unsupported at this time, but they are getting to be extremely rare.

The code should build using most standard-compliant C99 compilers. The provided makefile expects a Linux-like system. We have a CMake build.

Requirements

  • A C99 compatible compiler (GCC 9 and up, LLVM 10 and up, Visual Studio 2019 and up).
  • We support macOS, Linux and Windows. It should be easy to extend support to FreeBSD and other POSIX systems.

For high performance, you should have either a 64-bit ARM processor or a 64-bit x64 system with SSE 4.1 support. SSE 4.1 was added to Intel processors in 2007, so it is almost certain that your Intel or AMD processor supports it.

Users

This library is used by several other open-source projects.

Usage

See examples/example.c for an example.

Short code sample:

// suppose that datain is an array of uint32_t integers
size_t compsize = streamvbyte_encode(datain, N, compressedbuffer); // encoding
// here the result is stored in compressedbuffer using compsize bytes
streamvbyte_decode(compressedbuffer, recovdata, N); // decoding (fast)
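
A more complete, hedged sketch of the round trip, following the pattern of examples/example.c and using streamvbyte_max_compressedbytes (declared in streamvbyte.h) to size the output buffer; the buffer size and fake data below are illustrative:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include "streamvbyte.h"

int main(void) {
  uint32_t N = 5000;
  uint32_t *datain = malloc(N * sizeof(uint32_t));
  uint8_t *compressedbuffer = malloc(streamvbyte_max_compressedbytes(N));
  uint32_t *recovdata = malloc(N * sizeof(uint32_t));
  for (uint32_t k = 0; k < N; ++k) datain[k] = 120;  // some fake data
  size_t compsize = streamvbyte_encode(datain, N, compressedbuffer); // encoding
  streamvbyte_decode(compressedbuffer, recovdata, N);                // decoding (fast)
  printf("Compressed %u integers down to %zu bytes.\n", N, compsize);
  free(datain); free(compressedbuffer); free(recovdata);
  return 0;
}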

If the values are sorted, then it might be preferable to use differential coding:

// suppose that datain is an array of uint32_t integers
size_t compsize = streamvbyte_delta_encode(datain, N, compressedbuffer, 0); // encoding
// here the result is stored in compressedbuffer using compsize bytes
streamvbyte_delta_decode(compressedbuffer, recovdata, N, 0); // decoding (fast)

You have to know how many integers were coded when you decompress. You can store this information along with the compressed stream.

During decoding, the library may read up to STREAMVBYTE_PADDING extra bytes from the input buffer (these bytes are read but never used).
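
As a hedged illustration of both points, the sketch below stores the count as a 4-byte prefix and over-allocates the decoder's input by STREAMVBYTE_PADDING bytes so that the trailing reads stay inside allocated memory (the helper name and layout are illustrative, not part of the library):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include "streamvbyte.h"

// decode a stream that was stored as [count][compressed bytes]
uint32_t *decode_stored(const uint8_t *stored, size_t storedsize, uint32_t *count) {
  memcpy(count, stored, sizeof(uint32_t));               // recover the count prefix
  size_t compsize = storedsize - sizeof(uint32_t);
  uint8_t *in = malloc(compsize + STREAMVBYTE_PADDING);  // slack for trailing reads
  memcpy(in, stored + sizeof(uint32_t), compsize);
  uint32_t *out = malloc(*count * sizeof(uint32_t));
  streamvbyte_decode(in, out, *count);
  free(in);
  return out;
}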

1. Building with CMake:

We expect a recent CMake. Please make sure that your version of CMake is up-to-date or you may need to adapt our instructions.

The CMake build system also offers a static library, libstreamvbyte_static (libstreamvbyte_static.a under Linux), in addition to the shared library libstreamvbyte (libstreamvbyte.so under Linux).

-DCMAKE_INSTALL_PREFIX:PATH=/path/to/install is optional; it defaults to /usr/local (headers under include, libraries under lib).

cmake -DCMAKE_BUILD_TYPE=Release \
      -DCMAKE_INSTALL_PREFIX:PATH=/path/to/install \
      -DSTREAMVBYTE_ENABLE_EXAMPLES=ON \
      -DSTREAMVBYTE_ENABLE_TESTS=ON -B build

cmake --build build
# run the tests like:
ctest --test-dir build

Installation with CMake

cmake --install build 

Benchmarking with CMake

After building, you may run our benchmark as follows:

./build/test/perf

The benchmarks are not currently built under Windows.

2. Building with Makefile:

  make
  ./unit

Installation with Makefile

You can install the library (as a dynamic library) on your machine if you have root access:

  sudo make install

To uninstall, simply type:

  sudo make uninstall

It is recommended that you try make dyntest before proceeding.

Benchmarking with Makefile

You can try to benchmark the speed in this manner:

  make perf
  ./perf

Make sure to run make test first, as a sanity check.

Signed integers

We do not directly support signed integers, but you can use fast functions to convert signed integers to unsigned integers.

#include "streamvbyte_zigzag.h"

zigzag_encode(mysignedints, myunsignedints, number); // mysignedints => myunsignedints

zigzag_decode(myunsignedints, mysignedints, number); // myunsignedints => mysignedints
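
For reference, zigzag coding maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ..., so small-magnitude signed values become small unsigned values that Stream VByte compresses well. Here is a minimal round-trip sketch, assuming the argument order shown above (signed input, unsigned output, count) with int32_t/uint32_t elements:

#include <stdint.h>
#include <stdlib.h>
#include "streamvbyte_zigzag.h"

void signed_roundtrip(int32_t *signedvals, size_t number) {
  uint32_t *unsignedvals = malloc(number * sizeof(uint32_t));
  int32_t *recovered = malloc(number * sizeof(int32_t));
  zigzag_encode(signedvals, unsignedvals, number);  // signed  => unsigned
  // ... compress unsignedvals with streamvbyte_encode, decode later ...
  zigzag_decode(unsignedvals, recovered, number);   // unsigned => signed
  free(unsignedvals); free(recovered);
}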

Technical posts

Alternative encoding

By default, Stream VByte uses 1, 2, 3 or 4 bytes per integer. If you expect many of your integers to be zero, you might try streamvbyte_encode_0124 and streamvbyte_decode_0124, which use 0, 1, 2, or 4 bytes per integer (a short sketch follows).
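
A hedged sketch, assuming the 0124 variants mirror the signatures of streamvbyte_encode and streamvbyte_decode:

// suppose that datain contains many zeroes
size_t compsize = streamvbyte_encode_0124(datain, N, compressedbuffer); // zeroes cost no data bytes
streamvbyte_decode_0124(compressedbuffer, recovdata, N); // decoding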

Stream VByte in other languages

Format Specification

We specify the format as follows.

We do not store how many integers (count) are compressed in the compressed data per se. If you want to store the data stream (e.g., to disk), you need to add this information. It is intentionally left out because, in applications, it is often the case that there are better ways to store this count.

There are two streams:

  • The data starts with an array of "control bytes". There are (count + 3) / 4 of them.
  • Following the array of control bytes, there are data bytes.

We can interpret the control bytes as a sequence of 2-bit words. The first 2-bit word is made of the least significant 2 bits in the first byte, and so forth. There are four 2-bit words written in each byte.

Starting from the first 2-bit word, we have the corresponding sequence of data bytes, written in order from the beginning:

  • When the 2-bit word is 00, there is a single data byte.
  • When the 2-bit word is 01, there are two data bytes.
  • When the 2-bit word is 10, there are three data bytes.
  • When the 2-bit word is 11, there are four data bytes.

The data bytes are stored using a little-endian encoding.

Consider the following example:

control bytes: [0x40 0x55 ... ]
data bytes: [0x00 0x64 0xc8 0x2c 0x01 0x90  0x01 0xf4 0x01 0x58 0x02 0xbc 0x02 ...]

The first control byte is 0x40, or the four 2-bit words 00 00 00 01. The second control byte is 0x55, or the four 2-bit words 01 01 01 01. Thus the first three values are given by the first three bytes: 0x00, 0x64, 0xc8 (or 0, 100, 200 in base 10). The next five values are stored using two bytes each: 0x2c 0x01, 0x90 0x01, 0xf4 0x01, 0x58 0x02, 0xbc 0x02. As little-endian integers, these are to be interpreted as 300, 400, 500, 600, 700.

Thus, to recap, the sequence of integers (0,100,200,300,400,500,600,700) gets encoded as the 15 bytes 0x40 0x55 0x00 0x64 0xc8 0x2c 0x01 0x90 0x01 0xf4 0x01 0x58 0x02 0xbc 0x02.

If the count is not divisible by four, then we include a final partial group: the unused 2-bit words are set to zero and have no corresponding data bytes.
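
To make the specification concrete, here is a hedged sketch of a scalar (non-SIMD) decoder that follows the layout described above; it is illustrative only and is not the library's optimized implementation (it assumes a little-endian host, since big-endian is unsupported):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

void scalar_decode(const uint8_t *in, uint32_t *out, size_t count) {
  const uint8_t *control = in;                 // (count + 3) / 4 control bytes
  const uint8_t *data = in + (count + 3) / 4;  // data bytes follow the control bytes
  for (size_t i = 0; i < count; i++) {
    unsigned code = (control[i / 4] >> (2 * (i % 4))) & 3; // i-th 2-bit word
    size_t length = code + 1;                  // 00->1, 01->2, 10->3, 11->4 bytes
    uint32_t value = 0;
    memcpy(&value, data, length);              // little-endian data bytes
    out[i] = value;
    data += length;
  }
}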

Reference

See also

streamvbyte's People

Contributors

0x55555555, amallia, aqrit, aras-p, daniel-j-h, emaxerrno, gatesn, iiseymour, ishitatsuyuki, kwillets, lemire, markpapadakis, mmooyyii, pps83, spaceim, striezel, victor1234


streamvbyte's Issues

Can this be built on Mac M1?

According to the README, Mac M1 is supported.

... an ARM processor with NEON instructions ...

Makefile uses aarch64 as the condition for test:

ifeq ($(PROCESSOR), aarch64)
# for 64-bit ARM processors
CFLAGS = -fPIC -std=c99 -O3 -Wall -Wextra -pedantic -Wshadow -D__ARM_NEON__

But in fact what uname -m returns on my Mac M1 is arm64. What about the following condition:

ifneq ($(filter $(PROCESSOR), arm64 aarch64),)

Shared object version

The cmake build is not producing libstreamvbyte.so.0.0.1, which I see is built by the minimal Makefile. Do you see the same on your end, or is there maybe something amiss in my cmake configuration?

Port to ARM NEON (aarch64)

This should get ported to ARM NEON instructions (specifically aarch64). The main difficulty is the byte shuffling.

Compression uint32_t stream with lots of zeroes

I am trying to use streamvbyte in an in-house archiving software which we'll hopefully be able to publish as open source at some point. Your library is a great match for my use case, with its brilliant performance and compression that's good enough.

There's a catch though.

My stream of uint32_t's looks like the following: 234, 566, 0, 0, 333, 0, 0, 0, 1578987, 0, 234, 444, <a few million uint32_t's more>. Notice that there are lots of zeroes, about 30-40% of the stream. Zero distribution is highly unpredictable, and I know that the zero run length is probably going to be about 2-3 zeroes max.

But streamvbyte can only use 1,2,3,4 bytes per integer, depending on the value. I calculated that for my use case it's much more reasonable to have something like 0,1,2,4, i.e.:

  1. I don't want to include zeroes in the stream.
  2. I don't really need 3 bytes values.

This would mean that I would only have to keep about 2 bits per zero value.

I am going through your related papers - which are very readable! - and the code and it seems to me that it should be possible to just patch streamvbyte to match my needs.

So here's a question:

  1. Is there anything that I don't understand and that might become a problem here?
  2. I'll have 3-5 full time days to solve the issue. Is there a way I can solve the issue and contribute back some code?

Thank you!

About a warning from valgrind when running through valgrind

Hi streamvbyte team,

I am getting an "invalid read" warning when running the attached example for streamvbyte through valgrind. Is it a false positive that I can safely ignore, or am I making some stupid mistake?

Also, it seems like when I change the allocation in the attached example to uint8_t *compressedbuffer = malloc(((COMPRESSED_SIZE*sizeof(uint8_t))+(32-1))/32*32);, the valgrind warning disappears. Is this because of address alignment requirements for vector instructions?

example.c.txt

==2132== Memcheck, a memory error detector
==2132== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==2132== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==2132== Command: ./a.out
==2132==
==2132== error calling PR_SET_PTRACER, vgdb might block
==2132== Invalid read of size 16
==2132==    at 0x4010E8: _mm_loadu_si128 (emmintrin.h:698)
==2132==    by 0x4010E8: _decode_avx (streamvbyte_x64_decode.c:7)
==2132==    by 0x4010E8: svb_decode_avx_simple (streamvbyte_x64_decode.c:97)
==2132==    by 0x401248: streamvbyte_decode (streamvbyte_decode.c:72)
==2132==    by 0x4005F6: main (example.c:22)
==2132==  Address 0x522969f is 36,479 bytes inside a block of size 36,494 alloc'd
==2132==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2132==    by 0x4005C0: main (example.c:16)
==2132==
==2132== Invalid read of size 16
==2132==    at 0x401106: _mm_loadu_si128 (emmintrin.h:698)
==2132==    by 0x401106: _decode_avx (streamvbyte_x64_decode.c:7)
==2132==    by 0x401106: svb_decode_avx_simple (streamvbyte_x64_decode.c:99)
==2132==    by 0x401248: streamvbyte_decode (streamvbyte_decode.c:72)
==2132==    by 0x4005F6: main (example.c:22)
==2132==  Address 0x52296a3 is 36,483 bytes inside a block of size 36,494 alloc'd
==2132==    at 0x4C2DB8F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==2132==    by 0x4005C0: main (example.c:16)
==2132==
==2132==
==2132== HEAP SUMMARY:
==2132==     in use at exit: 0 bytes in 0 blocks
==2132==   total heap usage: 3 allocs, 3 frees, 153,642 bytes allocated
==2132==
==2132== All heap blocks were freed -- no leaks are possible
==2132==
==2132== For counts of detected and suppressed errors, rerun with: -v
==2132== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)

Java/Kotlin version?

Guess the new Vector API would make it possible to port the project to Java, for instance.

runtime error: store to misaligned address

#include <stdio.h>
#include <stdlib.h>

#include <cassert>
#include <vector>

#include "thirdparty/streamvbyte/include/streamvbyte.h"

auto main(int argc, char* argv[]) -> int {
    auto test = std::vector<uint32_t>{431, 292, 979, 994, 761, 879, 672, 690, 296, 931, 379,
                                   98,  132, 105, 116, 841, 387, 831, 335, 333, 557, 915};
    
    // streamvbyte/examples/example.c
    uint32_t N = test.size();
    uint32_t* datain = static_cast<uint32_t*>(malloc(N * sizeof(uint32_t)));
    uint8_t* compressedbuffer = static_cast<uint8_t*>(malloc(streamvbyte_max_compressedbytes(N)));
    uint32_t* recovdata = static_cast<uint32_t*>(malloc(N * sizeof(uint32_t)));
    for (uint32_t k = 0; k < N; ++k) datain[k] = test[k]; 

    size_t compsize = streamvbyte_encode(datain, N, compressedbuffer);  // encoding
    // here the result is stored in compressedbuffer using compsize bytes
    size_t compsize2 = streamvbyte_decode(compressedbuffer, recovdata,
                                          N);  // decoding (fast)
    assert(compsize == compsize2);
    free(datain);
    free(compressedbuffer);
    free(recovdata);
    printf("Compressed %d integers down to %d bytes.\n", N, (int)compsize);
}
/home/moyi/thirdparty/streamvbyte/src/streamvbyte_x64_encode.c:91:3: runtime error: store to misaligned address 0x50b000001032 for type 'uint32_t' (aka 'unsigned int'), which requires 4 byte alignment
0x50b000001032: note: pointer points here
 69 74  49 03 00 00 00 00 00 00  00 00 00 00 00 be be be  be be be be be be be be  be be be be be be
              ^ 
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /home/moyi/thirdparty/streamvbyte/src/streamvbyte_x64_encode.c:91:3 in 

Reduce peak memory consumption during encoding

We don't know the required output memory upfront, so we use a function returning the worst-case memory required:

// return the maximum number of compressed bytes given length input integers
static inline size_t streamvbyte_max_compressedbytes(const uint32_t length) {
  // number of control bytes:
  size_t cb = (length + 3) / 4;
  // maximum number of data bytes:
  size_t db = (size_t) length * sizeof(uint32_t);
  return cb + db;
}

but in case we are encoding small integers (or small deltas), oftentimes most, if not all, values fit into a single byte.

In these cases, we still need to allocate upfront

control bytes + n * 4

whereas

control bytes + n * 1

bytes would suffice.

There are use cases where I'd like to only allocate e.g. 1 GB instead of 4 GB and then throw out 3 GB immediately after encoding.

Should this library provide a two-pass approach, where

  • the user first calls a function to determine the allocation required
  • the user then calls a function to encode the input data

This two-pass approach might be slower in terms of runtime, but we can reduce the allocations required for data bytes by a factor of four in the best case.

Users can write their own version (summing up the bytes required per input item) but having a function in the library would be great for convenience and would allow efficient implementations in the future. Thoughts?
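
For illustration, here is a hedged user-side sketch of such a first pass for the default 1/2/3/4-byte encoding (a hypothetical helper, not a library function); the delta variant would apply the same logic to the deltas:

#include <stddef.h>
#include <stdint.h>

// exact compressed size: control bytes plus 1, 2, 3 or 4 data bytes per value
size_t exact_compressedbytes(const uint32_t *in, uint32_t length) {
  size_t bytes = ((size_t)length + 3) / 4;  // control bytes
  for (uint32_t i = 0; i < length; i++) {
    uint32_t v = in[i];
    bytes += (v < (1u << 8)) ? 1 : (v < (1u << 16)) ? 2 : (v < (1u << 24)) ? 3 : 4;
  }
  return bytes;
}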

Encoded size doesn't match decoded size.

Hey guys,

Here is a small piece of code that shows the problem I am facing using the streamvbyte library on Linux:

#include <stdint.h>
#include <streamvbyte.h>
#include <malloc.h>
#include <stdio.h>

int main( void )
{
    size_t encoded = 0, decoded = 0;
    uint8_t *set = malloc( 1024 );
    uint32_t entry[ 2 ];

    entry[ 0 ] = 10;
    entry[ 1 ] = 0;

    encoded += streamvbyte_encode( entry, 2, set );
    for( size_t i = 0; i < 2; i++ ) {
        decoded += streamvbyte_decode( set + decoded, entry, 1 );
    }
    printf( "encoded size : %zd vs decoded size : %zd\n", encoded, decoded );
    free( set );

    return 0;
}

When I run this code, I get this output:

encoded size : 3 vs decoded size : 4

Can you help me find the issue?

Port differential coded version to ARM NEON

The generic codec supports both x64 and ARM NEON, however the differential-encoded version is x64 only.

It seems like it would be easy to port them over. The Delta function in ARM is almost identical:

uint32x4_t Delta(uint32x4_t curr, uint32x4_t prev) {
   // vextq_u32(prev, curr, 3) yields {prev[3], curr[0], curr[1], curr[2]},
   // i.e. each lane's predecessor, so the subtraction produces the deltas
   return vsubq_u32(curr, vextq_u32(prev, curr, 3));
}

And so is the prefix sum which is currently mixed with the store in _write_avx_d1 (for historical reasons I suppose)...

uint32x4_t PrefixSum(uint32x4_t curr, uint32x4_t prev) {
   uint32x4_t zero = {0, 0, 0, 0};
   // shift curr up by one lane: {0, c0, c1, c2}
   uint32x4_t add = vextq_u32(zero, curr, 3);
   // broadcast the last 32-bit lane of prev (the running sum) to all lanes
   uint8x16_t BroadcastLast = {12,13,14,15,12,13,14,15,12,13,14,15,12,13,14,15};
   prev = vreinterpretq_u32_u8(vqtbl1q_u8(vreinterpretq_u8_u32(prev),BroadcastLast));
   curr = vaddq_u32(curr,add);       // pairwise partial sums
   add = vextq_u32(zero, curr, 2);   // shift the partial sums up by two lanes
   curr = vaddq_u32(curr,prev);      // add the carried-in running sum
   curr = vaddq_u32(curr,add);       // complete the inclusive prefix sum
   return curr;
}

It could be that my implementations are suboptimal, but I think that they are correct and given these functions it should be easy to create a differentially coded codec.

Illegal instruction (core dumped)

OS: Ubuntu 20.04.5 LTS
CPU: Intel Xeon X5660

Step to reproduce:

git clone git@github.com:lemire/streamvbyte.git
mkdir build && cd build
cmake ..
cmake --build .

Output cmake ..

-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is GNU 9.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- No build type selected
-- Default to Release
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- CMAKE_BUILD_TYPE: Release
-- CMAKE_C_COMPILER: /usr/bin/cc
-- CMAKE_C_FLAGS: 
-- CMAKE_C_FLAGS_DEBUG: -g
-- CMAKE_C_FLAGS_RELEASE: -O3 -DNDEBUG
-- Configuring done
-- Generating done
-- Build files have been written to: /home/kataev/development/streamvbyte/build

Output ctest -V

UpdateCTestConfiguration  from :/home/kataev/development/streamvbyte/build/DartConfiguration.tcl
UpdateCTestConfiguration  from :/home/kataev/development/streamvbyte/build/DartConfiguration.tcl
Test project /home/kataev/development/streamvbyte/build
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
test 1
    Start 1: unit

1: Test command: /home/kataev/development/streamvbyte/build/unit
1: Test timeout computed to be: 10000000
1/1 Test #1: unit .............................***Exception: Illegal  0.28 sec

0% tests passed, 1 tests failed out of 1

Total Test time (real) =   0.29 sec

The following tests FAILED:
	  1 - unit (ILLEGAL)
Errors while running CTest

Output ./perf

Illegal instruction (core dumped)

Reduce the size of the lookup tables

The current lookup tables are quite large. Finding a way to substantially reduce their memory usage without adversely affecting performance would be a worthy goal.

Better integrate the 0,1,2,4 bytes mode

Following PR #26, we now have code that can use a 0,1,2,4-byte encoding. However, it is basically achieved through pure code duplication. Worse: it does not benefit from @aqrit's latest improvements.

Obviously, we could do better.

incomplete sentence in readme?

See the description in "Usage"; somehow it ends with ". The":

You have to know how many integers were coded when you decompress. You can store this information along with the compressed stream. The

Endianness

From Fig 3 in https://arxiv.org/pdf/1709.08990.pdf, it looks like the data layout is intended to be big-endian. However, in the test data from https://bitbucket.org/marshallpierce/stream-vbyte-rust/commits/ad95ed76e271a10c0c0bb57e23800a4e23d606e9 encoding 0, 100, 200, 300, ..., we have the following hex (format from hexdump -C):

00000000  40 55 55 55 55 55 55 55  55 55 55 55 55 55 55 55  |@UUUUUUUUUUUUUUU|
00000010  55 55 55 55 55 55 55 55  55 55 55 55 55 55 55 55  |UUUUUUUUUUUUUUUU|

Since the first four numbers are 0, 100, 200, 300 taking 1, 1, 1, 2 bytes respectively, based on the figure's diagram of control bits to encoded ints I would expect the first control byte to be 0b00000001 = 0x01, not 0b01000000 = 0x40.

There are 1250 = 0x4E2 control bytes, so looking at where the encoded numbers start, we see:

000004e0  aa aa 00 64 c8 2c 01 90  01 f4 01 58 02 bc 02 20  |...d.,.....X... |

0 = 0x00, 100 = 0x64, 200 = 0xC8 are single bytes of course, but 300 = 0x012C in big endian, and the sample data has 0x2C01.

Of course, little-endian is just as valid a choice as big-endian! Am I misinterpreting the paper? Should I be letting the user choose which endianness to expect?

Support for Signed Types Integrated in the CODECs

I have an interest in using streamvbyte with signed types (specifically int16) and have created a Python wrapper which supports this; however, the overhead is quite large, as you would imagine: iiSeymour/pystreamvbyte#1.

Native support for an efficient zigzag encoding/decoding for int16 and int32 would be great. I imagine this is something that would vectorise well?

If a 16-bit value in the range `0...255` is encoded as 1 byte

If a 16-bit value in the range `0...255` is encoded as 1 byte and everything else is encoded as 2 bytes, how does one instead encode the range `-128...127` as 1 byte?

Answer: just add/subtract 128 to/from each value.
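
A minimal sketch of that idea (hypothetical helper names, not library functions):

// bias the signed range -128...127 into the unsigned range 0...255 and back
uint8_t bias_encode(int16_t v) { return (uint8_t)(v + 128); }
int16_t bias_decode(uint8_t b) { return (int16_t)b - 128; }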

I can't decide how to optimize the encoder...
So instead here is a non-optimized version with delta coding and signed integer support
(that may or may not be correct)
https://gist.github.com/aqrit/ef84d284cebe861d9e57db4129bcafc3

Originally posted by @aqrit in #28 (comment)

Const uint32_t pointer

Hi,

I believe you could (and should) make the input pointer const. As far as I can tell, only two signatures must be changed:

size_t streamvbyte_encode(const uint32_t *in, uint32_t length, uint8_t *out);
size_t streamvbyte_encode_quad(const uint32_t *in, uint8_t *outData, uint8_t *outKey);

instead of

size_t streamvbyte_encode(uint32_t *in, uint32_t length, uint8_t *out);
size_t streamvbyte_encode_quad(uint32_t *in, uint8_t *outData, uint8_t *outKey);

I found myself needing to cast away the const because of this.
