
gzp's Introduction

⛓️gzp


Multi-threaded encoding and decoding.

Why?

This crate provides a near drop-in replacement for `Write` that compresses chunks of data in parallel and writes them to an underlying writer in the same order that the bytes were handed to the writer. This allows for much faster compression of data.

Additionally, this crate provides multi-threaded decompressors for the Mgzip and BGZF formats (a decompression sketch appears at the end of the Examples section below).

Supported Encodings:

  • Gzip
  • Zlib
  • Mgzip
  • BGZF
  • Snappy

Usage / Features

By default, gzp has the `deflate_default` and `libdeflate` features enabled, which bring in the best-performing zlib implementation as the backend for flate2, as well as libdeflater for the block gzip formats.

Examples

  • Deflate default

    [dependencies]
    gzp = { version = "*" }

  • Rust backend only (note: the Zlib format will not be available)

    [dependencies]
    gzp = { version = "*", default-features = false, features = ["deflate_rust"] }

  • Snap only

    [dependencies]
    gzp = { version = "*", default-features = false, features = ["snap_default"] }

Note: if you are running into compilation issues with libdeflate and the i686-pc-windows-msvc target, please see this issue for workarounds.

Examples

Simple example

use std::io::Write;

use gzp::{deflate::Gzip, ZBuilder, ZWriter};

fn main() {
    let mut writer = vec![];
    // ZBuilder will return a trait object that is transparent over `ParZ` or `SyncZ`
    let mut parz = ZBuilder::<Gzip, _>::new()
        .num_threads(0)
        .from_writer(writer);
    parz.write_all(b"This is a first test line\n").unwrap();
    parz.write_all(b"This is a second test line\n").unwrap();
    parz.finish().unwrap();
}

An updated version of pgz.

use gzp::{
    ZWriter,
    deflate::Mgzip,
    par::{compress::{ParCompress, ParCompressBuilder}}
};
use std::io::{Read, Write};

fn main() {
    let chunksize = 64 * (1 << 10) * 2;

    let stdout = std::io::stdout();
    let mut writer: ParCompress<Mgzip> = ParCompressBuilder::new().from_writer(stdout);

    let stdin = std::io::stdin();
    let mut stdin = stdin.lock();

    let mut buffer = Vec::with_capacity(chunksize);
    loop {
        let mut limit = (&mut stdin).take(chunksize as u64);
        limit.read_to_end(&mut buffer).unwrap();
        if buffer.is_empty() {
            break;
        }
        writer.write_all(&buffer).unwrap();
        buffer.clear();
    }
    writer.finish().unwrap();
}

Same thing but using Snappy instead.

use gzp::{parz::{ParZ, ParZBuilder}, snap::Snap};
use std::io::{Read, Write};

fn main() {
    let chunksize = 64 * (1 << 10) * 2;

    let stdout = std::io::stdout();
    let mut writer: ParZ<Snap> = ParZBuilder::new().from_writer(stdout);

    let stdin = std::io::stdin();
    let mut stdin = stdin.lock();

    let mut buffer = Vec::with_capacity(chunksize);
    loop {
        let mut limit = (&mut stdin).take(chunksize as u64);
        limit.read_to_end(&mut buffer).unwrap();
        if buffer.is_empty() {
            break;
        }
        writer.write_all(&buffer).unwrap();
        buffer.clear();
    }
    writer.finish().unwrap();
}
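
As noted above, gzp also provides multi-threaded decompression for the Mgzip and BGZF formats. Below is a hedged sketch of reading a BGZF file in parallel; the `gzp::par::decompress` path and the `ParDecompressBuilder`/`from_reader` names mirror the compression API shown above and should be checked against the docs for the version in use, and the input file name is a placeholder.

use std::fs::File;
use std::io::{self, BufReader, Read, Write};

use gzp::deflate::Bgzf;
use gzp::par::decompress::{ParDecompress, ParDecompressBuilder};

fn main() -> io::Result<()> {
    // "input.bgz" is a placeholder path to a BGZF-compressed file.
    let file = BufReader::new(File::open("input.bgz")?);
    let mut reader: ParDecompress<Bgzf> = ParDecompressBuilder::new().from_reader(file);

    // The decompressor is used like any other reader.
    let mut contents = Vec::new();
    reader.read_to_end(&mut contents)?;
    io::stdout().write_all(&contents)?;
    Ok(())
}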

Acknowledgements

  • Many of the ideas for this crate were directly inspired by pigz, including implementation details for some functions.

Contributing

PRs are very welcome! Please run tests locally and ensure they are passing. Many tests are ignored in CI because the CI instances don't have enough threads to test them / are too slow.

cargo test --all-features && cargo test --all-features -- --ignored

Note that tests will take 30-60s.

Future todos

Benchmarks

All benchmarks were run on the file in ./bench-data/shakespeare.txt catted together 100 times, which creates a roughly 550 MB file.

The primary benchmark takeaway is that compression time decreases roughly in proportion to the number of threads used.

[benchmark plot]

gzp's People

Contributors

joshtriplett, nh13, shnatsel, sstadick, sylvestre


gzp's Issues

gzp spawns one fewer thread than there are CPUs, which hurts performance

I've run some tests comparing crabz to pigz using the benchmarking setup described in the crabz readme. On a 4-core system with no hyperthreading, crabz was measurably slower.

I've profiled both using perf, and it turned out that crabz spends the vast majority of the time in zlib compression, so parallelization overhead is not an issue. However, crabz only spawned 3 threads performing compression while pigz spawned 4 compression threads. After passing -p3 to pigz so that it would only spawn 3 compression threads, its compression time became identical to that of crabz.

I suspect this is also why you're not seeing any parallelization gains on dual-core systems.

Technical details

crabz profile: https://share.firefox.dev/3zeVRxN
pigz profile: https://share.firefox.dev/2WeYe4V

crabz installed via cargo install crabz on a clean Ubuntu 20.04 installation, pigz installed via apt.

$ hyperfine 'crabz -c 3 < /media/elementary/ssd/large-file.txt' 'pigz -3 < /media/elementary/ssd/large-file.txt'
Benchmark #1: crabz -c 3 < /media/elementary/ssd/large-file.txt
  Time (mean ± σ):      4.642 s ±  0.351 s    [User: 11.326 s, System: 0.196 s]
  Range (min … max):    4.312 s …  5.465 s    10 runs
 
Benchmark #2: pigz -3 < /media/elementary/ssd/large-file.txt
  Time (mean ± σ):      3.884 s ±  0.253 s    [User: 14.307 s, System: 0.167 s]
  Range (min … max):    3.556 s …  4.248 s    10 runs
 
Summary
  'pigz -3 < /media/elementary/ssd/large-file.txt' ran
    1.20 ± 0.12 times faster than 'crabz -c 3 < /media/elementary/ssd/large-file.txt'

Seamless writing of uncompressed output

Hi,

Thank you for writing and maintaining gzp. I have found it extremely useful. I have run into one limitation with the interface, though. In my current application, I would like to optionally generate uncompressed output, especially when the output is going to be written to stdout. However, this is challenging with the current gzp interface, since finish() must be called on a ParCompress object before it goes out of scope. As such, the typical solution below doesn't work:

let mut seq_writer: Box<dyn Write + Send> = if compress {
    Box::new(
        ParCompressBuilder::new()
            .compression_level(flate2::Compression::new(compression_level as u32))
            .num_threads(processes)?
            .from_writer(seq_out),
    )
} else {
    Box::new(BufWriter::with_capacity(1024 * 1024, seq_out))
};

One can't call finish() in this situation. One solution would be to provide a ParCompress<NoCompress> type or something similar.

Is there another way to address this?

Thanks,
Donovan
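
One possible workaround is to wrap the two cases in a small enum that implements Write and exposes an explicit finish(), so the caller can finish the compressed variant before it goes out of scope. This is a minimal sketch, assuming ParCompress<Gzip> implements Write and the ZWriter::finish signature used in the README examples above; the MaybeCompressed name is hypothetical.

use std::io::{self, BufWriter, Write};

use gzp::deflate::Gzip;
use gzp::par::compress::ParCompress;
use gzp::ZWriter;

/// Hypothetical wrapper over a parallel compressor or a plain buffered writer.
enum MaybeCompressed<W: Write> {
    Compressed(ParCompress<Gzip>),
    Plain(BufWriter<W>),
}

impl<W: Write> Write for MaybeCompressed<W> {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        match self {
            MaybeCompressed::Compressed(w) => w.write(buf),
            MaybeCompressed::Plain(w) => w.write(buf),
        }
    }

    fn flush(&mut self) -> io::Result<()> {
        match self {
            MaybeCompressed::Compressed(w) => w.flush(),
            MaybeCompressed::Plain(w) => w.flush(),
        }
    }
}

impl<W: Write> MaybeCompressed<W> {
    /// Explicitly finish the compressor; plain output only needs a flush.
    fn finish(&mut self) -> io::Result<()> {
        match self {
            MaybeCompressed::Compressed(w) => w
                .finish()
                .map_err(|e| io::Error::new(io::ErrorKind::Other, e)),
            MaybeCompressed::Plain(w) => w.flush(),
        }
    }
}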

Support `zune-inflate`, the 100% safe Rust port of `libdeflate`

zune-inflate is a port of libdeflate to safe Rust. It's not as fast as libdeflate yet, but it already beats miniz_oxide and even zlib-ng.

It has been extensively tested and fuzzed, including roundtrip fuzzing with both miniz_oxide and zlib_ng to ensure correctness.

It would be nice to use it in place of libdeflate in the pure-Rust configuration.

error when target is i686-pc-windows-msvc

  = note: liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_alloc_decompressor referenced in function __ZN11libdeflater12Decompressor3new17hf31a74519cf4bcd1E
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_gzip_decompress referenced in function __ZN11libdeflater12Decompressor15gzip_decompress17hde9a1276df679f8cE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_zlib_decompress referenced in function __ZN11libdeflater12Decompressor15zlib_decompress17hce9268d56b6995e5E
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_deflate_decompress referenced in function __ZN11libdeflater12Decompressor18deflate_decompress17h5546d5b293d64f6cE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_free_decompressor referenced in function __ZN67_$LT$libdeflater..Decompressor$u20$as$u20$core..ops..drop..Drop$GT$4drop17h9c4c188330724ceeE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_alloc_compressor referenced in function __ZN11libdeflater10Compressor3new17h9374eb469501ac7aE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_deflate_compress_bound referenced in function __ZN11libdeflater10Compressor22deflate_compress_bound17h4313659dcc1e1e3fE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_deflate_compress referenced in function __ZN11libdeflater10Compressor16deflate_compress17hae2d861b3dc9d63dE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_zlib_compress_bound referenced in function __ZN11libdeflater10Compressor19zlib_compress_bound17h047ad95075c7dac8E
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_zlib_compress referenced in function __ZN11libdeflater10Compressor13zlib_compress17h5c62d3f235f4ec27E
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_gzip_compress_bound referenced in function __ZN11libdeflater10Compressor19gzip_compress_bound17h03ff8ed43415946cE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_gzip_compress referenced in function __ZN11libdeflater10Compressor13gzip_compress17h9aee05245d69c40aE
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_free_compressor referenced in function __ZN65_$LT$libdeflater..Compressor$u20$as$u20$core..ops..drop..Drop$GT$4drop17hf882442a05e00411E
          liblibdeflater-a26db91458bc2d2e.rlib(libdeflater-a26db91458bc2d2e.libdeflater.c676b24c-cgu.3.rcgu.o) : error LNK2019: unresolved external symbol _libdeflate_crc32 referenced in function __ZN11libdeflater3Crc6update17h1dbd33df57087956E

Please support generating a single gzip-compatible or deflate-compatible stream

Currently, pargz uses many GzEncoder instances, which each emit a gzip header, so decoding it requires MultiGzDecoder. If pargz instead used DeflateEncoder to emit raw deflate streams, with an optional gzip or zlib header in front, the resulting stream would be a single gzip/zlib/deflate file, and wouldn't require MultiGzDecoder.

I'd love to be able to use pargz in file formats that expect raw deflate, or that expect zlib, or that expect gzip but don't handle multiple gzip headers.
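
For reference, this is what the reading side of the current multi-member output looks like: a minimal sketch using flate2's MultiGzDecoder, which keeps decoding across gzip member boundaries (the file name is a placeholder).

use std::fs::File;
use std::io::{self, Read};

use flate2::read::MultiGzDecoder;

fn main() -> io::Result<()> {
    // MultiGzDecoder is required because the parallel writer currently emits
    // one gzip member (with its own header) per compressed chunk.
    let mut decoder = MultiGzDecoder::new(File::open("output.txt.gz")?);

    let mut contents = String::new();
    decoder.read_to_string(&mut contents)?;
    print!("{}", contents);
    Ok(())
}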

GPU-Compression

It might be useful, when the data is larger than a certain size, to compress it on the graphics card if one is available.

[feature] Implement BufRead

Given that gzp has an internal buffer, it should implement BufRead so that double buffering isn't required to do things like iterate over lines.

#28

[Idea] Make gzp usable as C library and/or python module


For example, there is a Python wrapper around ISA-L which uses the `igzip` code to provide fast gzip compression/decompression (at the cost of worse compression ratios) and mimics the standard gzip module to provide faster gzip (de)compression speed.

python-isal

Faster zlib and gzip compatible compression and decompression by providing Python bindings for the ISA-L library.

This package provides Python bindings for the ISA-L library. The Intel(R) Intelligent Storage Acceleration Library (ISA-L) implements several key algorithms in assembly language. This includes a variety of functions to provide zlib/gzip-compatible compression.

python-isal provides the bindings by offering three modules:

    isal_zlib: A drop-in replacement for the zlib module that uses ISA-L to accelerate its performance.
    igzip: A drop-in replacement for the gzip module that uses isal_zlib instead of zlib to perform its compression and checksum tasks, which improves performance.
    igzip_lib: Provides compression functions which have full access to the API of ISA-L's compression functions.

isal_zlib and igzip are almost fully compatible with zlib and gzip from the Python standard library. There are some minor differences; see: differences-with-zlib-and-gzip-modules.

https://github.com/pycompression/python-isal

It would be great if gzp could be exposed in a similar way, so more bioinformatics (python) programs can read/write gzip/bgzf files faster.

Tokio Compat?

Hello, I am wondering if it's possible to use gzp with tokio, or if support might be added in the future. Cheers :)

Help with the bgzf reader?

Hello,

I am newish to Rust and I have what is probably a very simple question: how do I read a bgzipped file line by line?

I have this code which is largely borrowed from crabz (see bottom), but once I have the BgzfSyncReader I am not sure how to iterate over it, or manipulate it in any way.

Thanks in advance!
Mitchell

/// Get a buffered input reader from stdin or a file
fn get_input(path: Option<PathBuf>) -> Result<Box<dyn BufRead + Send + 'static>> {
    let reader: Box<dyn BufRead + Send + 'static> = match path {
        Some(path) => {
            if path.as_os_str() == "-" {
                Box::new(BufReader::with_capacity(BUFFER_SIZE, io::stdin()))
            } else {
                Box::new(BufReader::with_capacity(BUFFER_SIZE, File::open(path)?))
            }
        }
        None => Box::new(BufReader::with_capacity(BUFFER_SIZE, io::stdin())),
    };
    Ok(reader)
}

/// Example trying bgzip
/// ```
/// use rustybam::myio;
/// let f = ".test/asm_small.paf.bgz";
/// myio::test_gbz(f);
/// ```
pub fn test_bgz(filename: &str) {
    let ext = Path::new(filename).extension();
    eprintln!("{:?}", ext);
    let pathbuf = PathBuf::from(filename);
    let box_dny_bufread_send = get_input(Some(pathbuf)).unwrap();
    let gbzf_reader = BgzfSyncReader::new(box_dny_bufread_send);
    // How do I now loop over lines?
    for line in gbzf_reader {
        eprintln!(line);
    }
}
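
One way to iterate over lines (not from the issue thread, and assuming BgzfSyncReader implements std::io::Read, as its use as a decoder suggests) is to wrap the decoder in a std::io::BufReader and use the standard lines() iterator, for example via a small helper:

use std::io::{BufRead, BufReader, Read};

/// Print a decoded stream line by line.
fn print_lines<R: Read>(decoder: R) {
    for line in BufReader::new(decoder).lines() {
        let line = line.expect("error reading line");
        println!("{}", line);
    }
}

With that helper, the loop at the end of test_bgz above could become print_lines(BgzfSyncReader::new(box_dny_bufread_send));.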

Please support runtimes other than tokio

I'd love to use gzp in libraries that are already using a different async runtime, and I'd like to avoid adding the substantial additional dependencies of a separate async runtime that isn't otherwise used by the library.

Would you consider either supporting a thread pool like Rayon, or supporting other async runtimes?

(If it helps, you could use a channel library like flume that works on any runtime, so that you only need a spawn function.)
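
For illustration, a runtime-agnostic worker pool along those lines can be built from flume channels and plain std::thread spawns; this is a generic sketch of the pattern, not gzp's actual internals.

use std::thread;

fn main() {
    // A channel of work items; flume channels work with plain threads or any
    // async runtime, so only the `spawn` function is runtime-specific.
    let (tx, rx) = flume::unbounded::<Vec<u8>>();

    let workers: Vec<_> = (0..4)
        .map(|_| {
            let rx = rx.clone();
            thread::spawn(move || {
                // Each worker pulls chunks until the sender side is dropped.
                while let Ok(chunk) = rx.recv() {
                    // ... compress `chunk` here ...
                    let _ = chunk;
                }
            })
        })
        .collect();

    for i in 0..10u8 {
        tx.send(vec![i; 1024]).unwrap();
    }
    drop(tx); // close the channel so the workers exit

    for w in workers {
        w.join().unwrap();
    }
}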

Some package updates

libdeflater: Updated to latest libdeflate.
num_cpus: support for cgroups v2

In the latest flate2, libz-ng also got its own sys packages that might need to be added:

libz-sys = { version = "1.1.8", optional = true, default-features = false }
libz-ng-sys = { version = "1.1.8", optional = true }
❯ git diff Cargo.toml
diff --git a/Cargo.toml b/Cargo.toml
index 47c759b..3bcfa44 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -33,20 +33,20 @@ snappy = []
 any_zlib = ["flate2/any_zlib"]
 
 [dependencies]
-bytes = "1.0.1"
-num_cpus = "1.13.0"
-thiserror = "1.0.26"
-flume = { version = "0.10.9" }
+bytes = "1.3"
+num_cpus = "1.15.0"
+thiserror = "1.0.38"
+flume = { version = "0.10.14" }
 byteorder = "1.4.3"
 flate2 = { version = "~1", default-features = false, optional = true }
-libdeflater = { version = "0.11.0", optional = true }
-libz-sys = { version = "1.1.3", default-features = false, optional = true }
+libdeflater = { version = "0.12.0", optional = true }
+libz-sys = { version = "1.1.8", default-features = false, optional = true }
 snap = { version = "~1", optional = true }
 core_affinity = "0.7.6"
 
 [dev-dependencies]
 proptest = "1.0.0"
-tempfile = "3.2.0"
+tempfile = "3.3.0"
 criterion = "0.4"
 
 [[bench]]

multi-threaded snap decoding?

First off, thanks for this awesome crate @sstadick !

Thanks to gzp, I can compress at 2.5 GB/sec in my project using your multi-threaded Snappy compression implementation.

Is there any chance gzp could also provide multi-threaded Snappy decompression?

Performance comparison with rust-snappy

This looks really interesting, @sstadick! We (optionally) use compression to reduce the size of collated RAD files in our alevin-fry project. Specifically, we make use of the Snappy frame encoding to allow multiple different threads to compress the data in parallel. It's not quite as fast as writing the raw data to disk, but the overhead is small and the compression benefits are pretty large.

Do you have any idea how this compares to snap? Specifically, in the multithreaded case, I'd be curious about the time/size tradeoffs.

Thanks!
Rob

Error when writing with `ZBuilder::<Bgzf, _>::new().num_threads(8)...`

Hello,

I am getting an error whenever I use ZBuilder::<Bgzf, _> with more than one thread, and I am having trouble figuring out what is going wrong. Here is the error message I am getting:

thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ChannelReceive(Disconnected)', /Users/mrvollger/.cargo/registry/src/github.com-1ecc6299db9ec823/gzp-0.9.2/src/par/compress.rs:301:27
stack backtrace:
   0: rust_begin_unwind
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/std/src/panicking.rs:498:5
   1: core::panicking::panic_fmt
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/core/src/panicking.rs:107:14
   2: core::result::unwrap_failed
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/core/src/result.rs:1690:5
   3: core::result::Result<T,E>::unwrap
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/core/src/result.rs:1018:23
   4: <gzp::par::compress::ParCompress<F> as core::ops::drop::Drop>::drop
             at /Users/mrvollger/.cargo/registry/src/github.com-1ecc6299db9ec823/gzp-0.9.2/src/par/compress.rs:301:13
   5: core::ptr::drop_in_place<gzp::par::compress::ParCompress<gzp::deflate::Bgzf>>
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/core/src/ptr/mod.rs:188:1
   6: core::ptr::drop_in_place<alloc::boxed::Box<dyn gzp::ZWriter>>
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/core/src/ptr/mod.rs:188:1
   7: core::ptr::drop_in_place<alloc::boxed::Box<dyn std::io::Write>>
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/core/src/ptr/mod.rs:188:1
   8: test::run_split_fastx
             at ./src/test.rs:72:5
   9: test::main
             at ./src/test.rs:77:5
  10: core::ops::function::FnOnce::call_once
             at /rustc/a7e2e33960e95d2eb1a2a2aeec169dba5f73de05/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.

And here is some code and a test file that recreates this error for me:

use gzp::deflate::Bgzf;
use gzp::Compression;
use gzp::ZBuilder;
use needletail::{parse_fastx_file, parse_fastx_stdin, parser::LineEnding};
use std::ffi::OsStr;
use std::fs::File;
use std::io::{BufWriter, Write};
use std::path::{Path, PathBuf};

const BUFFER_SIZE: usize = 128 * 1024;

/// Uses the presence of a `.gz` extension to decide if compression is needed
pub fn writer(filename: &str) -> Box<dyn Write> {
    let ext = Path::new(filename).extension();
    let path = PathBuf::from(filename);
    let buffer = Box::new(BufWriter::with_capacity(
        BUFFER_SIZE,
        File::create(path).expect("Error: cannot create output file"),
    ));

    if ext == Some(OsStr::new("gz")) {
        let writer = ZBuilder::<Bgzf, _>::new()
            .num_threads(8)
            .compression_level(Compression::new(6))
            .from_writer(buffer);
        Box::new(writer)
    } else {
        buffer
    }
}

/// Split a fasta file across outputs
pub fn run_split_fastx(files: &[String], infile: &str) {
    // open the output files
    let mut outs = Vec::new();
    for f in files {
        let handle = writer(f);
        outs.push(handle);
    }
    // open reader
    let mut reader = if infile == "-" {
        parse_fastx_stdin().expect("Missing or invalid stdin for fastx parser.")
    } else {
        parse_fastx_file(infile).expect("Missing or invalid stdin for fastx parser.")
    };
    // iterate
    let mut out_idx = 0;
    let mut rec_num = 0;
    while let Some(record) = reader.next() {
        let seq_rec =
            record.unwrap_or_else(|_| panic!("Error reading record number {}", rec_num + 1));
        seq_rec
            .write(&mut outs[out_idx], Some(LineEnding::Unix))
            .unwrap_or_else(|_| panic!("Error writing record number {}", rec_num + 1));

        eprintln!("Wrote record number {}", rec_num + 1);
        out_idx += 1;
        rec_num += 1;
        if out_idx == outs.len() {
            out_idx = 0;
        }
    }
    // Close all the files.
    let mut n_out = 0;
    for mut out in outs {
        out.flush()
            .unwrap_or_else(|_| panic!("Error flushing output!"));
        n_out += 1;
        eprintln!("Finished output number {}", n_out);
    }
}

pub fn main() {
    let infile = "large.test.fa.gz";
    run_split_fastx(&["a.fa".to_string(), "b.fa.gz".to_string()], infile);
}

Here is the fasta file used in this example code:
large.test.fa.gz

Any help would be greatly appreciated!

Thanks!
Mitchell
