
Data deduplication engine, supporting optional compression and public key encryption.


rdedup's Introduction


rdedup

rdedup is a data deduplication engine and backup software. See the current project status and original use case description wiki pages.

rdedup is generally similar to existing software like duplicacy, restic, attic, duplicity, zbackup, etc., with a skew towards asymmetric encryption and a synchronization-friendly data model. Thanks to Rust and a solid architecture, rdedup is also extremely performant and very reliable (no data-loss bugs ever reported).

rdedup is written in Rust and provides both a command-line tool and a library API (rdedup-lib). The library can be used to embed the core engine into other applications, or to build custom frontends and tools.

Features

  • simple but solid cryptography:
    • libsodium based
    • public-key encryption mode (the only tool like that I'm aware of, and the primary reason rdedup was created)
  • flat-file synchronization friendly (Dropbox/syncthing, rsync, rclone)
  • immutable data-conflict-free data store
  • cloud backends are WIP
  • incremental, scalable garbage collection
  • variety of supported algorithms:
    • chunking: fastcdc, gear, bup
    • hashing: blake2b, sha256
    • compression: zstd, deflate, xz2, bzip2, none
    • encryption: curve25519, none
    • very easy to add new ones
    • check rdedup init --help output for up-to-date list
  • extreme performance and parallelism - see Rust fearless concurrency in rdedup
  • reliability focus (e.g. rdedup uses fsync + rename to avoid data corruption even in the case of a hardware crash; see the sketch after this list)
  • built-in time/performance profiler
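
For illustration, here is a minimal sketch of that fsync + rename pattern (this is not rdedup's actual code; the function and temp-file naming are made up):

use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Write to a temporary file, fsync it, then atomically rename it into place,
// so a crash never leaves a half-written file under the final name.
fn write_atomically(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // flush file contents and metadata to disk
    fs::rename(&tmp, path)?; // atomic on POSIX filesystems
    // fsync the parent directory so the rename itself is durable
    if let Some(dir) = path.parent().filter(|d| !d.as_os_str().is_empty()) {
        File::open(dir)?.sync_all()?;
    }
    Ok(())
}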

Strong parts

It's written in Rust, a modern language that is actually really nice to use. Rust makes it easy to write very robust and fast software.

The author is a nice person, welcomes contributions, and helps users. Or at least he's trying... :)

Shortcomings and missing features:

rdedup currently does not implement its own backup/restore functionality (its own directory traversal), and because of that it's typically paired with tar or rdup tools. Built-in directory traversal could improve the deduplication ratio for workloads with many small, frequently changing files.

Cloud storage integrations are missing. The architecture to support it is mostly implemented, but the actual backends are not.

Installation

If you have cargo installed:

cargo install --locked rdedup

If not, I highly recommend installing rustup (think pip or npm, but for Rust).

If you're interested in running rdedup with maximum possible performance, try:

RUSTFLAGS="-C target-cpu=native" cargo install --locked rdedup

In case of trouble, check the rdedup building issues or report a new one (sorry)!

Usage

See rdedup -h for help.

rdedup always operates on a repo, which you provide as an argument (e.g. --dir <DIR>) or via an environment variable (e.g. RDEDUP_DIR).

Supported commands:

  • rdedup init - create a new repo.
    • rdedup init --help for repository configuration options.
  • rdedup store <name> - store data from standard input under a given name.
  • rdedup load <name> - load data stored under given name and write it to standard output.
  • rdedup rm <name> - remove the given name.
  • rdedup ls - list all stored names.
  • rdedup gc - remove any no longer reachable data.

In combination with rdup this can be used to store and restore your backup like this:

rdup -x /dev/null "$HOME" | rdedup store home
rdedup load home | rdup-up "$HOME.restored"

rdedup is data agnostic, so formats like tar, cpio and others will work, but to get the benefits of deduplication, the archive format should not already be compressed or encrypted.

RDEDUP_PASSPHRASE environment variable

If RDEDUP_PASSPHRASE is defined, it will be used instead of interactively asking the user for a passphrase.

License

rdedup is licensed under: MPL-2.0

rdedup's People

Contributors

aidanhs, benprew, dpc, dywedir, fredeil, jamespharaoh, llogiq, misuzu, mkroman, newpavlov, nikolay, nivkner, pfernie, phillipcouto, ralith, spikebike, steveej, tbroadley, thinkrapido, tim-seoss, zjzdy


rdedup's Issues

Add `rdedup gc`

  • start with an empty chunk hashmap
  • go through all backups, recording every chunk id in the hashmap
  • go through every chunk/index file and, if it's not in the hashmap, delete it (sketched below)
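
A self-contained sketch of this mark-and-sweep scheme (the layout here, with backups/ files listing chunk digests line by line and chunks/ holding one file per digest, is a simplification, not rdedup's actual format):

use std::collections::HashSet;
use std::fs;
use std::path::Path;

fn gc(repo: &Path) -> std::io::Result<()> {
    // Mark: record every chunk digest referenced by any backup index.
    let mut reachable: HashSet<String> = HashSet::new();
    for entry in fs::read_dir(repo.join("backups"))? {
        let index = fs::read_to_string(entry?.path())?;
        reachable.extend(index.lines().map(str::to_owned));
    }
    // Sweep: delete every chunk file whose digest was never marked.
    for entry in fs::read_dir(repo.join("chunks"))? {
        let path = entry?.path();
        let digest = path.file_name().unwrap().to_string_lossy().into_owned();
        if !reachable.contains(&digest) {
            fs::remove_file(&path)?;
        }
    }
    Ok(())
}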

Rdedup as a log deduplicator.

So I'm also the author of slog-rs, and I'm thinking that when logging a lot of data, deduplication could be really useful (since logging tends to repeat the same information over and over again).

I'm thinking that I could write a slog::Drain (logging destination) that logs things to rdedup repo.

I think the following primitives would have to be added to rdedup-lib:

  • init_or_open - open a repo, creating it from scratch if it isn't already there
  • appending data - since rdedup treats data as a binary stream, appending is a natural fit; it should create a new name if one doesn't exist
  • encryptionless mode - not every application requires encryption; logging in most cases does not, and raw speed is actually more important

`gc` command fails

I'm getting a failure when trying to perform gc; stack trace below. Any ideas?

$ RUST_BACKTRACE=1 rdedup -d repo gc
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error { repr: Os { code: 20, message: "Not a directory" } }', src/libcore/result.rs:799
stack backtrace:
   1:        0x100d86fe8 - std::sys::backtrace::tracing::imp::write::h6f1d53a70916b90d
   2:        0x100d89fbf - std::panicking::default_hook::{{closure}}::h137e876f7d3b5850
   3:        0x100d89025 - std::panicking::default_hook::h0ac3811ec7cee78c
   4:        0x100d89636 - std::panicking::rust_panic_with_hook::hc303199e04562edf
   5:        0x100d894d4 - std::panicking::begin_panic::h6ed03353807cf54d
   6:        0x100d893f2 - std::panicking::begin_panic_fmt::hc321cece241bb2f5
   7:        0x100d89357 - rust_begin_unwind
   8:        0x100daa7f0 - core::panicking::panic_fmt::h27224b181f9f037f
   9:        0x100d150c0 - core::result::unwrap_failed::hbbe45964de00535c
  10:        0x100d257d7 - rdedup_lib::Repo::list_stored_chunks::insert_all_digest::h42a864493005e8a7
  11:        0x100d2670d - rdedup_lib::Repo::gc::h2dfed9052416104b
  12:        0x100d0b11c - rdedup::main::hc2f849d74320671f
  13:        0x100d8a57a - __rust_maybe_catch_panic
  14:        0x100d88b66 - std::rt::lang_start::h538f8960e7644c80

Scalable GC approach.

It seems to me that the current way of doing GC won't scale too well. With 128KiB chunks, each represented by a 32-byte hash, it requires building a hashmap that is only 128 * 1024 / 32 = 4096 times smaller than the data itself (+ control structures). In the case of a 1PiB repository, that's 256GiB of index data and a similar-sized in-memory set to map against.

Also, to build such an in-memory index-set, one needs to go through all the index data each time.

I see the following angles of tackling that problem: persistence, reference counting, and probabilistic data structures.

Reference counting could be implemented such that each time a data chunk is created, it gets a refcount of 1, and each time another reference to it is written in an index, the refcount is incremented.

Persistence would mean that the index-set does not have to be in memory and can be built and updated on disk.

Probabilistic data structures (like a Bloom filter) could be used, since false positive matches are acceptable; if I understand things right, for a Bloom filter a 1% false positive rate can be reached using just ~10 bits per element, which cuts the size of the index-set around 25 times (32 * 8 bits / 10 bits ≈ 25).
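
As a quick sanity check of those numbers, assuming the standard Bloom filter sizing formula of -ln(p) / (ln 2)^2 bits per element:

fn main() {
    let p: f64 = 0.01; // target false positive rate
    let bits_per_element = -p.ln() / 2f64.ln().powi(2);
    println!("{:.1} bits per element for p = {}", bits_per_element, p); // ~9.6
    // vs. a 32-byte (256-bit) digest per element in an exact set:
    println!("shrink factor: {:.1}x", 256.0 / bits_per_element.ceil()); // ~25.6x
}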

The problems to consider are:

  • atomicity of file operations (refcount increments, etc.)
  • in-place file modification is not very file-synchronization friendly. The index-set could, however, be stored as a host-local cache and therefore excluded from syncing. Each host would recreate it as needed.

Switch meta-configuration to more flexible file format

Right now I just glued the nonce, salt and so on to each other and dumped them in hex form to a file, with a "repo version" number first. But since rdedup is picking up functionality, a more extensible format will be better.

  • pick format: JSON/TOML (see #36 (comment))
  • make rdedup-lib read old or new format
  • make rdedup write a new format metadata
  • add rdedup-lib call and rdedup upgrade command to upgrade rdedup repo

See #36 for previous discussion
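
For illustration, reading such versioned metadata with serde and the toml crate could look like this (the field names here are hypothetical, not rdedup's actual schema):

use serde::Deserialize; // serde with the "derive" feature, plus the toml crate

// Hypothetical repo metadata; the actual fields rdedup stores may differ.
#[derive(Deserialize)]
struct RepoConfig {
    version: u32,
    chunking: String,
    encryption: Option<String>,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let text = r#"
        version = 2
        chunking = "fastcdc"
        encryption = "curve25519"
    "#;
    let cfg: RepoConfig = toml::from_str(text)?;
    println!("repo format v{}, chunking = {}", cfg.version, cfg.chunking);
    Ok(())
}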

Add `rdedup rm`.

Just delete the backup file, and optionally run rdedup gc afterwards.

Configurable chunks folder nesting (e.g. flat) for faster sync?

Modern filesystems can have more than 64k files in them, as do many cloud storage systems.

Due to the two-level chunk folder organization, even a small backup can end up with a lot of folders. Trying to sync a tree with about 1k folders to Google Drive (using rclone) takes forever because it has to issue a separate HTTP/API request for each folder to list the chunks for sync.

Can we have a configuration option to select the nesting level of the chunks folder? It could default to two as at present, but go down to zero, where a single folder contains all the chunks.
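
For illustration, a configurable nesting level could map a digest to a path like this (sketch only; rdedup's actual layout code may differ):

use std::path::PathBuf;

// chunks/<aa>/<bb>/.../<digest>, with `nesting` levels of two-hex-digit
// directories; nesting = 0 means a single flat folder with all chunks.
fn chunk_path(root: &str, digest_hex: &str, nesting: usize) -> PathBuf {
    let mut path = PathBuf::from(root);
    path.push("chunks");
    for i in 0..nesting {
        path.push(&digest_hex[2 * i..2 * i + 2]);
    }
    path.push(digest_hex);
    path
}

fn main() {
    let d = "deadbeefcafebabe";
    assert_eq!(chunk_path("repo", d, 2), PathBuf::from("repo/chunks/de/ad/deadbeefcafebabe"));
    assert_eq!(chunk_path("repo", d, 0), PathBuf::from("repo/chunks/deadbeefcafebabe"));
}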

[solved] Ubuntu/Mint: Compilation failure: libsodium was not found in the pkg-config search path

Hi,

On two systems I've tried (Ubuntu 16.04, Mint 17.1), with no prior Rust install, I find rdedup fails to build with cargo:

# Install Rust
curl -sSf https://static.rust-lang.org/rustup.sh | sh
$ rustc --version
rustc 1.8.0 (db2939409 2016-04-11)

# Install cargo as per https://crates.io/install
wget 'https://static.rust-lang.org/cargo-dist/cargo-nightly-x86_64-unknown-linux-gnu.tar.gz'
tar xf cargo-nightly-x86_64-unknown-linux-gnu.tar.gz
cd cargo-nightly-x86_64-unknown-linux-gnu/
sudo ./install.sh
$ cargo --version
cargo 0.11.0-nightly (45eaca1 2016-05-03)

# install rdedup
$ cargo install rdedup
    Updating registry `https://github.com/rust-lang/crates.io-index`
 Downloading rdedup v0.1.0
 Downloading env_logger v0.3.3
 Downloading rustc-serialize v0.3.19
 Downloading log v0.3.6
 Downloading rdedup-lib v0.1.0
 Downloading argparse v0.2.1
 Downloading regex v0.1.69
 Downloading regex-syntax v0.3.1
 Downloading memchr v0.1.11
 Downloading aho-corasick v0.5.2
 Downloading utf8-ranges v0.1.3
 Downloading thread_local v0.2.5
 Downloading libc v0.2.11
 Downloading thread-id v2.0.0
 Downloading kernel32-sys v0.2.2
 Downloading winapi v0.2.6
 Downloading winapi-build v0.1.1
 Downloading rust-crypto v0.2.35
 Downloading sodiumoxide v0.0.10
 Downloading rollsum v0.2.0
 Downloading flate2 v0.2.13
 Downloading rand v0.3.14
 Downloading time v0.1.35
 Downloading libc v0.1.12
 Downloading gcc v0.3.28
 Downloading libsodium-sys v0.0.10
 Downloading pkg-config v0.3.8
 Downloading miniz-sys v0.1.7
   Compiling winapi-build v0.1.1
   Compiling gcc v0.3.28
   Compiling pkg-config v0.3.8
   Compiling regex-syntax v0.3.1
   Compiling libc v0.1.12
   Compiling rustc-serialize v0.3.19
   Compiling libc v0.2.11
   Compiling argparse v0.2.1
/home/jturner/.cargo/registry/src/github.com-88ac128001ac3a9a/libc-0.1.12/rust/src/liblibc/lib.rs:81:21: 81:39 warning: lint raw_pointer_derive has been removed: using derive with raw pointers is ok
/home/jturner/.cargo/registry/src/github.com-88ac128001ac3a9a/libc-0.1.12/rust/src/liblibc/lib.rs:81 #![allow(bad_style, raw_pointer_derive)]
   ^~~~~~~~~~~~~~~~~~
   Compiling winapi v0.2.6
   Compiling rollsum v0.2.0
   Compiling log v0.3.6
   Compiling kernel32-sys v0.2.2
   Compiling rand v0.3.14
   Compiling memchr v0.1.11
   Compiling aho-corasick v0.5.2
   Compiling time v0.1.35
   Compiling libsodium-sys v0.0.10
   Compiling thread-id v2.0.0
   Compiling thread_local v0.2.5
   Compiling utf8-ranges v0.1.3
Build failed, waiting for other jobs to finish...
failed to compile `rdedup v0.1.0`, intermediate artifacts can be found at `/home/jturner/Downloads/cargo-nightly-x86_64-unknown-linux-gnu/target-install`

Caused by:
  failed to run custom build command for `libsodium-sys v0.0.10`
Process didn't exit successfully: `/home/jturner/Downloads/cargo-nightly-x86_64-unknown-linux-gnu/target-install/release/build/libsodium-sys-893da188a4086141/build-script-build` (exit code: 101)
--- stderr
thread '<main>' panicked at 'called `Result::unwrap()` on an `Err` value: "`\"pkg-config\" \"--libs\" \"--cflags\" \"libsodium\"` did not exit successfully: exit code: 1\n--- stdout\nPackage libsodium was not found in the pkg-config search path.\nPerhaps you should add the directory containing `libsodium.pc'\nto the PKG_CONFIG_PATH environment variable\nNo package 'libsodium' found\n"', ../src/libcore/result.rs:746
note: Run with `RUST_BACKTRACE=1` for a backtrace.

Use more threads.

Reading data, compression, encryption and writing chunks could each be running on a separate thread (or even more than one for each).

Restore time is less critical, but could also be parallelized.

Limit size of the repo (quota-like support)

I'd love to be able to effectively support a total-size restriction on the repo.

Initially I was thinking of just ordering my named backups by timestamp, and rm && gc'ing in chronological order until the size of the repo is kept in check. But...

  1. Would each successive gc run be slow on a large repo?
  2. Would it then make sense to combine gc with a rm into some kind of new prune command?
  3. Would it be feasible to rely on the timestamps of the named backups?

Support Additional Backends?

Would anyone be interested in having rdedup natively support multiple backends?

Examples of backends would be:

  • Local Filesystem (current)
  • Amazon S3
  • Openstack Swift
  • SFTP
  • Webdav
  • Google Drive

Just to name a few. Thoughts?

Analyse the `rdedup store` pipeline performance.

Separate threads are used for reading, assembling, compression, encryption and storing. By measuring channel saturation, performance could be optimized: more threads could be used to perform bottleneck tasks, or bigger channels could be used to amortize the workload.

Make chunk sizes configurable

Prerequisite: #39

  • Add an argument to repo init to use a chunk size different from the default one
  • Add a field in repo meta-data to configure average chunk size

See #36 for previous discussion.

Chunk files are impractically small

Chunk files made by rdedup are mostly a few KB, which makes larger backups consist of tons of files. This leads to substantial fs metadata overhead and affects syncing.

I think they should be stored collapsed in larger groups or via some KV library (leveldb?).

Store settings inside the chunk.

Each chunk should start with a short header (sketched after this list), describing:

  • what compression algorithm (if any) was used
  • the encryption algorithm (in case e.g. libsodium changes its algorithms, etc.)
  • possibly other data
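
A sketch of what such a self-describing header could look like (the byte layout and algorithm ids below are made up for illustration):

// Hypothetical 4-byte chunk header: format version, compression id,
// encryption id, and a byte reserved for future use.
#[derive(Debug, Clone, Copy, PartialEq)]
struct ChunkHeader {
    version: u8,
    compression: u8, // e.g. 0 = none, 1 = deflate, 2 = zstd
    encryption: u8,  // e.g. 0 = none, 1 = curve25519
    reserved: u8,
}

impl ChunkHeader {
    fn to_bytes(self) -> [u8; 4] {
        [self.version, self.compression, self.encryption, self.reserved]
    }

    fn from_bytes(b: [u8; 4]) -> Self {
        ChunkHeader { version: b[0], compression: b[1], encryption: b[2], reserved: b[3] }
    }
}

fn main() {
    let h = ChunkHeader { version: 1, compression: 2, encryption: 1, reserved: 0 };
    assert_eq!(ChunkHeader::from_bytes(h.to_bytes()), h);
}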

Support LZMA or LZO compression

Looks like deflate is the only scheme supported. Would be nice to see if we get further savings using LZMA and/or LZO, selectable at the time the repo is created.

changepassphrase command not working

When executing

rdedup changepassphrase

The following error is returned:

Usage:
    rdedup [OPTIONS] [COMMAND] [ARGUMENTS ...]
rdedup: Bad value changepassphrase

Implement Test Cases for GC Accuracy

These test cases are to make sure the GC logic does not delete a reachable chunk by accident. This is part of the initiative to expand testing of the code.

Incremental Garbage Collection

When repos get massive, with large counts of chunks, I think an incremental gc may be beneficial. I am thinking of processing the repo using ranges over the first two hex digits of a digest. After a run is complete, we can create a file in the repo to record the last position of the gc. The next run would check the last position and work from there. Every run of the gc would scan all the reachable digests first.

Example:

The first run would scan from chunks/00/ to chunks/0F/, purge what is considered garbage, write the position file with a value of 0f and exit.

The second run would see the file exists, verify it is within the range of a hex value and is not at the end of the range ff. It would then proceed from the last position + 1 so in the case above it would scan from 10 to 1f, purge what is considered garbage, write the position file with a value of 1f and exit.

This would continue till the end and then wrap to the beginning.

On my machine list_stored_chunks took 488 seconds to scan everything. If we break the work down into smaller units, it can even get to the point where the gc can be run regularly after a delete or regular backups, without huge amounts of time dedicated to the process.

What do you think? Maybe make it a separate command?
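
A sketch of the position-file bookkeeping described above (the gc.pos file name and the 16-prefix batch size are arbitrary illustration choices):

use std::fs;
use std::path::Path;

// One incremental pass over a 16-prefix slice of chunks/00/ .. chunks/ff/.
// gc.pos records the last prefix swept; the next run resumes after it,
// wrapping around at ff. `sweep_prefix` stands in for the actual
// mark-and-sweep over a single chunks/<xx>/ directory.
fn gc_step(repo: &Path, sweep_prefix: impl Fn(u8)) -> std::io::Result<()> {
    let pos_file = repo.join("gc.pos");
    let start: u8 = match fs::read_to_string(&pos_file) {
        Ok(s) => u8::from_str_radix(s.trim(), 16).unwrap_or(0xff).wrapping_add(1),
        Err(_) => 0x00, // no position file yet: start from chunks/00/
    };
    for i in 0..16u8 {
        sweep_prefix(start.wrapping_add(i));
    }
    fs::write(&pos_file, format!("{:02x}", start.wrapping_add(15)))?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    fs::create_dir_all("repo")?;
    gc_step(Path::new("repo"), |p| println!("sweeping chunks/{:02x}/", p))
}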

Add own directory traverser.

While I personally enjoy feeding rdedup with an rdup archive on standard input, having its own directory traverser that utilizes mmap could make things much faster.

Refactor to be a library.

Most functions already take GlobalOptions, which is kind of a self. Create a struct Rdedup and move it to its own crate, rdedup-lib. Call it from the rdedup binary.

Performance Difference between sync_all and sync_data

Version of nightly: rustc 1.16.0-nightly (c8af93f09 2017-01-18)

Recently I was testing some code out and, out of curiosity, I was wondering if we are seeing any gains from using sync_data over sync_all.

When I ran my tests I noticed that using sync_all instead of sync_data results in rdedup using only 70% of the time taken to store a bunch of git project data in an empty repo, compared to the current version of rdedup.

I am running rdedup on macOS 10.12.2, so it may be specific to macOS, but I am going to test it on a Linux machine I have and see if there are similar gains there.

The ironic thing is that sync_data is supposedly more efficient than, or at worst the same as, sync_all. Maybe from a simplicity point of view we should just use sync_all regardless.
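
For reference, std::fs::File does expose both calls; a minimal timing harness (not the test setup used above) might look like:

use std::fs::File;
use std::io::Write;
use std::time::Instant;

fn main() -> std::io::Result<()> {
    let data = vec![0u8; 1 << 20]; // 1 MiB of zeroes

    let t = Instant::now();
    let mut f = File::create("a.bin")?;
    f.write_all(&data)?;
    f.sync_all()?; // flushes data *and* metadata (like fsync)
    println!("sync_all:  {:?}", t.elapsed());

    let t = Instant::now();
    let mut f = File::create("b.bin")?;
    f.write_all(&data)?;
    f.sync_data()?; // may skip non-essential metadata (like fdatasync)
    println!("sync_data: {:?}", t.elapsed());
    Ok(())
}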

Define the Goals or Core Values of rdedup

To assist in guiding what contributions will be accepted for rdedup we should define the goals or core values of the project that must be maintained at all times.

For example, consistency of data written to disk. 😄

Probably best if it is in the README, front and center for everyone to see, and referenced in the contributing guidelines.

Linting of Code

I think it may be ideal to implement linting as part of CI to keep the code in relatively good shape with best practices for Rust, just like we run rustfmt to keep the code formatted correctly.

I think clippy is the go-to utility at the moment, so we can break this item up:

  • Fix all current linting warnings and errors found by clippy
  • integrate into CI for ongoing maintenance of the code.
  • Add rustfmt checking to make sure code is formatted before testing

@dpc what do you think?

Increase chunk sizes?

Are you opposed to increasing chunk sizes? I was thinking of increasing the bup bit size to 20 instead of 17 (with bup-style rolling-hash chunking, a bit size of b gives an average chunk size of roughly 2^b bytes, so ~1 MiB instead of ~128 KiB). This is showing some promising speed improvements on my end. I know it would impact the reuse of existing chunks in the repo, but I am trying to use rdedup to back up 80-90GB of data, and I am finding the chunk sizes on average are too small, with lots of random small IO that is thrashing my backup drives.

Provide more information / statistics

I think it would be beneficial to get more statistics out of the rdedup utility, either on stdout or hidden behind the logging implementation with info!.

A few examples are:

  • Number of chunks written / referenced and bytes consumed when storing
  • Number of chunks / bytes freed when garbage collecting
  • Number of bytes referenced when using du

What do you think?

Indexes and random access.

By writing index files one could have random access to stored data.

There is an unfortunate naming collision. I'll call the current index files a chunk-index, and the new data a byte-index.

Initial idea: for every chunk-index there could be a corresponding byte-index consisting of 128-bit values, one for every chunk in the chunk-index, describing the total size of the data from the beginning through the given chunk (inclusive).

Using binary search on the byte-index, one could quickly (in O(log n)) identify the chunk storing a given portion of the data. This could e.g. be used to expose stored data as block devices or mount it as a FUSE filesystem.
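
A sketch of the lookup this byte-index would enable, using cumulative end offsets and binary search (illustrative, not rdedup code):

// cumulative_ends[i] = total bytes from the start of the stream through
// chunk i, inclusive, as described above. Find which chunk holds stream
// offset `pos` in O(log n).
fn chunk_for_offset(cumulative_ends: &[u64], pos: u64) -> Option<usize> {
    let i = cumulative_ends.partition_point(|&end| end <= pos);
    (i < cumulative_ends.len()).then_some(i)
}

fn main() {
    // Three chunks of sizes 100, 50 and 200 bytes.
    let ends = [100u64, 150, 350];
    assert_eq!(chunk_for_offset(&ends, 0), Some(0));
    assert_eq!(chunk_for_offset(&ends, 99), Some(0));
    assert_eq!(chunk_for_offset(&ends, 100), Some(1));
    assert_eq!(chunk_for_offset(&ends, 349), Some(2));
    assert_eq!(chunk_for_offset(&ends, 350), None); // past the end
}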

Reducing Size of Channels

Are you opposed to reducing the size of the channels?

The issue I am having is that I am using rdedup to back up virtual machines, and when the channels are full (because my Mac is faster than the HDD I am storing the backup on), rdedup uses between 600-800MB of memory. After adjusting the code to see what may be causing the large memory usage, tweaking the channels down to 16 and reducing the buffer size to 4K for various buffered operations led to a huge reduction in memory consumption.

This reduced the memory consumption to 40-60MB. As for backup performance, I did not notice an impact: reviewing the call stack, rdedup is waiting on FS operations like stat, rename and open most of the time.
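
The trade-off described here is exactly what bounded channels give you; a minimal sketch with std's sync_channel (not rdedup's actual pipeline code):

use std::sync::mpsc::sync_channel;
use std::thread;

fn main() {
    // At most 16 in-flight 4 KiB buffers: the producer blocks when the
    // consumer (e.g. a slow HDD) falls behind, capping queued memory at
    // roughly capacity * buffer size.
    let (tx, rx) = sync_channel::<Vec<u8>>(16);

    let producer = thread::spawn(move || {
        for _ in 0..1000 {
            tx.send(vec![0u8; 4096]).unwrap(); // blocks when the channel is full
        }
    });

    let mut total = 0usize;
    for buf in rx {
        total += buf.len(); // stand-in for the slow write-to-disk stage
    }
    producer.join().unwrap();
    println!("consumed {} bytes", total);
}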

[solved] Doesn't build on ubuntu-16.04

$ sudo cargo install rdedup
Updating registry https://github.com/rust-lang/crates.io-index
Downloading rdedup v0.1.0
Downloading flate2 v0.2.13
Downloading rust-crypto v0.2.35
Downloading winapi-build v0.1.1
Downloading rand v0.3.14
Downloading regex-syntax v0.3.1
Downloading utf8-ranges v0.1.3
Downloading log v0.3.6
Downloading memchr v0.1.11
Downloading pkg-config v0.3.8
Downloading argparse v0.2.1
Downloading rustc-serialize v0.3.19
Downloading thread_local v0.2.5
Downloading sodiumoxide v0.0.10
Downloading libsodium-sys v0.0.10
Downloading regex v0.1.69
Downloading libc v0.2.11
Downloading thread-id v2.0.0
Downloading gcc v0.3.28
Downloading kernel32-sys v0.2.2
Downloading miniz-sys v0.1.7
Downloading env_logger v0.3.3
Downloading aho-corasick v0.5.2
Downloading winapi v0.2.6
Downloading rollsum v0.2.0
Downloading time v0.1.35
Downloading libc v0.1.12
Downloading rdedup-lib v0.1.0
Compiling pkg-config v0.3.8
Compiling log v0.3.6
Compiling regex-syntax v0.3.1
Compiling gcc v0.3.28
Compiling argparse v0.2.1
Compiling libc v0.2.11
Compiling miniz-sys v0.1.7
Compiling winapi v0.2.6
Compiling memchr v0.1.11
Compiling rustc-serialize v0.3.19
Compiling libc v0.1.12
.cargo/registry/src/github.com-88ac128001ac3a9a/libc-0.1.12/rust/src/liblibc/lib.rs:81:21: 81:39 warning: lint raw_pointer_derive has been removed: using derive with raw pointers is ok
.cargo/registry/src/github.com-88ac128001ac3a9a/libc-0.1.12/rust/src/liblibc/lib.rs:81 #![allow(bad_style, raw_pointer_derive)]
^~~~~~~~~~~~~~~~~~
Compiling rollsum v0.2.0
Compiling rust-crypto v0.2.35
Compiling flate2 v0.2.13
Compiling aho-corasick v0.5.2
Compiling rand v0.3.14
Compiling winapi-build v0.1.1
Compiling kernel32-sys v0.2.2
Compiling thread-id v2.0.0
Compiling time v0.1.35
Compiling thread_local v0.2.5
Compiling libsodium-sys v0.0.10
Compiling utf8-ranges v0.1.3
Compiling regex v0.1.69
Compiling sodiumoxide v0.0.10
Compiling env_logger v0.3.3
Compiling rdedup-lib v0.1.0
Compiling rdedup v0.1.0
.cargo/registry/src/github.com-88ac128001ac3a9a/rdedup-0.1.0/src/bin.rs:89:13: 89:29 error: imports are not allowed after non-item statements [E0154]
.cargo/registry/src/github.com-88ac128001ac3a9a/rdedup-0.1.0/src/bin.rs:89 use argparse::*;
^~~~~~~~~~~~~~~~
.cargo/registry/src/github.com-88ac128001ac3a9a/rdedup-0.1.0/src/bin.rs:89:13: 89:29 help: run rustc --explain E0154 to see a detailed explanation

As instructed I reran with --verbose:
$ sudo cargo install --verbose rdedup
Updating registry https://github.com/rust-lang/crates.io-index
Fresh pkg-config v0.3.8
Fresh log v0.3.6
Fresh libc v0.2.11
Fresh regex-syntax v0.3.1
Fresh rustc-serialize v0.3.19
Fresh libc v0.1.12
Fresh gcc v0.3.28
Fresh rand v0.3.14
Fresh winapi v0.2.6
Fresh winapi-build v0.1.1
Fresh memchr v0.1.11
Fresh aho-corasick v0.5.2
Fresh utf8-ranges v0.1.3
Fresh rollsum v0.2.0
Fresh argparse v0.2.1
Fresh kernel32-sys v0.2.2
Fresh miniz-sys v0.1.7
Fresh libsodium-sys v0.0.10
Fresh thread-id v2.0.0
Fresh time v0.1.35
Fresh flate2 v0.2.13
Fresh sodiumoxide v0.0.10
Fresh thread_local v0.2.5
Fresh rust-crypto v0.2.35
Fresh regex v0.1.69
Fresh rdedup-lib v0.1.0
Fresh env_logger v0.3.3
Compiling rdedup v0.1.0
Running rustc .cargo/registry/src/github.com-88ac128001ac3a9a/rdedup-0.1.0/src/bin.rs --crate-name rdedup --crate-type bin -C opt-level=3 --out-dir /home/bill/target-install/release --emit=dep-info,link -L dependency=/home/bill/target-install/release -L dependency=/home/bill/target-install/release/deps --extern rustc_serialize=/home/bill/target-install/release/deps/librustc_serialize-d9e72695d437325f.rlib --extern log=/home/bill/target-install/release/deps/liblog-30a8a27ec161f1be.rlib --extern rdedup_lib=/home/bill/target-install/release/deps/librdedup_lib-5adcfb581909a471.rlib --extern argparse=/home/bill/target-install/release/deps/libargparse-6266299da68ee07e.rlib --extern env_logger=/home/bill/target-install/release/deps/libenv_logger-6ddad1820b981994.rlib -L native=/usr/local/lib -L native=/home/bill/target-install/release/build/miniz-sys-d03126dbc9ee0074/out -L native=/home/bill/target-install/release/build/rust-crypto-6acb19580f9fa57f/out
.cargo/registry/src/github.com-88ac128001ac3a9a/rdedup-0.1.0/src/bin.rs:89:13: 89:29 error: imports are not allowed after non-item statements [E0154]
.cargo/registry/src/github.com-88ac128001ac3a9a/rdedup-0.1.0/src/bin.rs:89 use argparse::*;
^~~~~~~~~~~~~~~~
.cargo/registry/src/github.com-88ac128001ac3a9a/rdedup-0.1.0/src/bin.rs:89:13: 89:29 help: run rustc --explain E0154 to see a detailed explanation
error: aborting due to previous error
failed to compile rdedup v0.1.0, intermediate artifacts can be found at `/home/bill

Add some documentation for Contributing

Since there will be two of us, and in the event anyone else would like to contribute, we should create a contributing guideline, either in the README.md or in a separate document, so everyone is on the same page about the process of contributing to rdedup.
