privacy-scaling-explorations / mpz

mpz's Introduction

MPZ

mpz is a collection of multi-party computation libraries written in Rust 🦀.

This project strives to provide safe, performant, modular and portable MPC software with a focus on usability.

See our design doc for information on design choices, standards and project structure.

โš ๏ธ Notice

This project is currently under active development and should not be used in production. Expect bugs and regular major breaking changes. Use at your own risk.

Crates

Core

  • mpz-core - Assortment of low-level primitives.
  • matrix-transpose - Bit-wise matrix transposition.
  • clmul - Carry-less multiplication.

Circuits

  • mpz-circuits - Boolean circuit DSL
  • mpz-circuits-macros - Proc-macros for mpz-circuits

Oblivious Transfer

  • mpz-ot - High-level async APIs
  • mpz-ot-core - Low-level types for OT, and core implementations of OT protocols.

Garbled Circuits

  • mpz-garble - High-level APIs for boolean garbled circuit protocols, including VM abstractions.
  • mpz-garble-core - Low-level types for the boolean half-gate garbling algorithm.

Share Conversion

  • mpz-share-conversion - High-level APIs for Multiplicative-to-Additive and Additive-to-Multiplicative share conversion protocols for a variety of fields.
  • mpz-share-conversion-core - Low-level types for share conversion protocols.

License

All crates in this repository are licensed under either of

  • Apache License, Version 2.0
  • MIT license

at your option.

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

See CONTRIBUTING.md.

Contributors

Pronunciation

mpz is pronounced "em-peasy".

mpz's People

Contributors

000wan, caglaryucekaya, heeckhau, rex4539, saeyoon17, sheagrief, sinui0, th4s, themighty1, valpaq, voltrevo, xiangxiecrypto, yuroitaki

mpz's Issues

chunk KOS extension message

The Extend message sent from the KOS receiver to the sender during extension can be very large. This clashes with DoS mitigation at the IO layer which limits the size of incoming data.

sink.feed(Message::Extend(extend)).await?;

Instead of sending it as a single message, the receiver should send it in chunks with a max size.

Half-gate Privacy Free Garbling

The half-gate garbling scheme supports "privacy-free garbling" which reduces bandwidth usage by 50%. Implement this in mpz-garble-core, and following that we can utilize it in DEAP.

Predicate attack against current DEAP impl

I just realized there is a predicate attack against our current DEAP impl that we overlooked, which doesn't require malicious garbling or OT to perform.

The Follower simply has to choose different inputs for each execution. This depends on the function in question, but generally the Follower can select different inputs such that:

$$f(x_0, y) \neq f(x_1, y) \longleftrightarrow g(y) = true$$

where $x$ and $y$ are the Follower's and Leader's inputs, respectively. This leaks an arbitrary predicate of $y$ satisfying the above.

This attack is also present in standard Dual-ex, but we overlooked mitigating it. Fortunately the mitigation is relatively simple (but will require some work).

We need to implement the CommittedReceiver counter-part to our CommittedSender impl. Then during DEAP finalization, we can check that the Follower provided the same inputs to both executions by validating their OT choices.

It's still not clear whether this is something we need to mitigate for our application: the Follower's only input is the PMS share to the PRF, and it seems it would be infeasible to select a predicate that affects the AES-CTR circuit.

Add fuzzing feature to trace macro

In our mpc-circuits crate we have a macro for generating a circuit from Rust code, and we have another macro for testing the circuit against that code for conformity.

We should add an argument to the trace macro to generate fuzz targets which tests conformity for us. Should be pretty simple.

Optimize garbling control flow

A significant portion of the garbling/evaluation time in our implementation is spent on branching while iterating over the gates of a circuit. We should optimize it without using unsafe.

Fully integrate `itybity`

In various parts of the codebase we convert types to Vec<bool>, which is quite inefficient. Instead, we should adopt the traits in itybity, which facilitate iterating over the bits of a type without copying or allocating.

Parallelize garbled circuit generation/evaluation

Presently our generator implementation processes gates sequentially. I believe there is an opportunity to parallelize this processing by batching the gates into "levels" which do not have wire interdependencies.

The first step in this enhancement is to validate the assumption that these batches are large enough to justify the additional overhead from multithreading.

'simd-transpose' does not work on arm64 architecture

cargo test --features simd-transpose

results in:

...
 Compiling matrix-transpose v0.1.0 (/Users/heeckhau/tlsnotary/mpz/matrix-transpose)
error[E0432]: unresolved import `super::LANE_COUNT`
 --> matrix-transpose/src/simd.rs:1:29
  |
1 | use super::{TransposeError, LANE_COUNT};
  |                             ^^^^^^^^^^ no `LANE_COUNT` in the root

error[E0425]: cannot find function `transpose_bits` in module `simd`
  --> matrix-transpose/src/lib.rs:55:11
   |
55 |     simd::transpose_bits(matrix, rows)?;
   |           ^^^^^^^^^^^^^^ not found in `simd`

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> matrix-transpose/src/lib.rs:3:5
  |
3 |     feature(slice_split_at_unchecked),
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> matrix-transpose/src/lib.rs:4:5
  |
4 |     feature(portable_simd),
  |     ^^^^^^^^^^^^^^^^^^^^^^

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> matrix-transpose/src/lib.rs:5:5
  |
5 |     feature(stmt_expr_attributes),
  |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> matrix-transpose/src/lib.rs:6:5
  |
6 |     feature(slice_as_chunks)
  |     ^^^^^^^^^^^^^^^^^^^^^^^^

warning: unused import: `TransposeError`
 --> matrix-transpose/src/simd.rs:1:13
  |
1 | use super::{TransposeError, LANE_COUNT};
  |             ^^^^^^^^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: unused import: `ops::ShlAssign`
 --> matrix-transpose/src/simd.rs:3:5
  |
3 |     ops::ShlAssign,
  |     ^^^^^^^^^^^^^^

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> matrix-transpose/src/lib.rs:3:13
  |
3 |     feature(slice_split_at_unchecked),
  |             ^^^^^^^^^^^^^^^^^^^^^^^^

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> matrix-transpose/src/lib.rs:4:13
  |
4 |     feature(portable_simd),
  |             ^^^^^^^^^^^^^

error[E0554]: `#![feature]` may not be used on the stable release channel
 --> matrix-transpose/src/lib.rs:6:13
  |
6 |     feature(slice_as_chunks)
  |             ^^^^^^^^^^^^^^^

Research VOLE IZK proof deferral

I'm still working on grokking these newer VOLE-based IZK protocols, but wanted to jot this down before I forget. No idea if it is sound.

In the Quicksilver paper:

[figure: excerpt from the Quicksilver paper omitted]

It appears to me that verification of a statement can be deferred to another party (assuming this 3rd party Verifier trusts that the original Verifier was honest).

Assume that the circuit $C$ is not known to $V$, and modify the protocol such that $V$ simply accepts blind commitments from $P$.

Step 4: $P$ sends a binding commitment to the correction bits $\delta_i$.
Step 5b: $P$ sends a binding commitment to the correction bits $d_i$
Step 7a: $V$ samples $\chi$ and sends it to $P$ as usual
Step 7b: $P$ computes $(U, V)$ as usual, and sends a binding commitment of it to $V$
Step 8a: $P$ sends a binding commitment to $m_h$

Now $P$ should be fully committed to the witness and statement. $V$ can sign all commitments, $\Delta$, and their input seed to the PCG $\rho$.

Later, $V_2$, knowing the circuit $C$, can receive the signed commitments including the decommitments from $P$ and non-interactively verify the proof themselves.

The motivation for the above is that $P$ can generate proofs without $V$ having to even know the statement. This is relevant in the TLSNotary protocol, where the Prover does not want to disclose anything to the Notary but it would be beneficial to commit to these proofs for disclosure to another party later. Pretty rad that these proofs would be non-interactive!

Tracking: Threading + IO redesign

This is a tracking issue for the redesign work for how we model protocol execution and IO.

Tasks

  • Introduce mpz-common which provides types and traits for threading and synchronization. It also adds serio for our IO abstraction. #107
  • Encapsulate our coin-toss protocols async layer #108
  • Update mpz-ot to use mpz-common and deprecate the KOS15 actor impl #109
  • Finish implementing OT traits for KOS15 shared variant.
  • Implement a multi-threaded executor/context in mpz-common
  • Update mpz-garble
  • Update mpz-share-conversion

Unbundle/rewrite `mpz-garble`

The mpz-garble crate is a bit of a mess as it was implemented on a deadline. It needs to be rewritten and unbundled.

Some high-level steps:

  1. Move all VM related abstractions into a dedicated crate.
  2. Decouple the Generator and Evaluator types from the VM abstractions. They should have simple APIs which do not operate on a memory type but simply execute a circuit with provided input encodings.
  3. Completely rewrite the VM memory implementation to be backed by linear memory and fix the clunky type representation model.

Improve Oblivious Transfer for arbitrary message length

Some bandwidth can be saved if we assume that the random block which the receiver holds after the random OT setup is already an AES key. Then the only thing the Sender needs to do is take one of its two random blocks, based on the derandomization bit, and encrypt the message with it. This saves sending the actual AES key over the wire.

Support garbling with Salsa20

Currently we use AES for garbling which is very performant with AES-NI hardware acceleration.
In the wasm environment, however, AES-NI is not available and we fall back to a software AES impl.

Edit: the benchmarks had a bug; Salsa is actually 4x slower in wasm, i.e. in the same ballpark as AES in wasm. The original claim below is therefore incorrect:

Salsa20 outperforms software AES in wasm by a factor of 2.5x-3x.
Previously benchmarked here: tlsnotary/tlsn#1 (comment) (compare the AES vs Salsa lines).

Improve `Block` with SIMD

Right now the backing type of Block is [u8; 16], as it was simple to start with. However, core arrays cannot always take advantage of auto-vectorization/SIMD for important operations.

For example, on my machine it currently takes ~3ns to compute BitXor between two blocks. Using std::arch::x86_64::__m128i this would be <1ns. We can of course just convert the [u8; 16] to/from __m128i for the operation, but the overhead of doing so erases any performance gains.

Block would ideally use a u8x16 SIMD backing type depending on target architecture. Unfortunately std::simd is not stable yet, so we will have to provide implementations for the various architectures.

To briefly motivate doing this: after some quick profiling and napkin math, it seems that garbling a circuit is currently bottlenecked on BitXor, not the CCR hashing!

For example, the AES circuit has 6,500 AND gates and 30,163 XOR gates. Garbling this circuit currently takes ~350 microseconds on my machine. Garbling an AND gate requires up to 9 XOR ops. In total that is 88,663 XOR ops to garble AES, which adds up to 266 microseconds, or roughly 75% of CPU time.

Workspace lints

We should take advantage of the recent Cargo support for workspace lints so we have consistency across all packages.
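Since Cargo 1.74, lints can be declared once in the workspace root and inherited by members; a minimal sketch (the specific lints chosen here are illustrative):

```toml
# Workspace root Cargo.toml: define lints once for all members.
[workspace.lints.rust]
unused_must_use = "deny"

[workspace.lints.clippy]
all = "warn"
```

```toml
# Each member crate's Cargo.toml: opt in to the workspace lints.
[lints]
workspace = true
```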

Full GC pipelining support.

Right now we only partially support so-called "pipelining" of garbled circuits. Pipelining is where the Generator and Evaluator can execute a circuit without having to hold the entire garbled circuit in memory at once. Our current implementation supports pipelining of the encrypted gates where they are executed in batches. However, both parties hold all of the wire labels in memory until execution is complete.

To fully support pipelining we need to add a new feature to our mpc-circuits crate which performs post-processing on a circuit to group its gates into "tranches". The idea is that a wire label can be dropped from memory as soon as all of its dependent gates have been executed. The circuit builder can perform a pass over all the gates to compute the necessary information to support this.

This should be straightforward to implement on its own; however, I expect it to be more difficult to support this feature together with #5

See also #11 for a somewhat related issue which also involves post-processing a circuit to group the gates.

Strange test failure for `cargo test --all-features`

When running cargo test --all-features on my machine, I currently get some strange error:

[screenshot of the test failure omitted: Screenshot_2023-09-16_19-27-02]

lldb stack

* thread #9, name = 'chou_orlandi::t', stop reason = signal SIGABRT
  * frame #0: 0x00007ffff7c8e83c libc.so.6`___lldb_unnamed_symbol3604 + 268
    frame #1: 0x00007ffff7c3e668 libc.so.6`raise + 24
    frame #2: 0x00007ffff7c264b8 libc.so.6`abort + 215
    frame #3: 0x000055555598c7c7 mpz_ot-211b9f21d3574d40`std::sys::unix::abort_internal::h1fa2ae509782b879 at mod.rs:370:14
    frame #4: 0x000055555559af87 mpz_ot-211b9f21d3574d40`std::process::abort::h16f1305053836e20 at process.rs:2271:5
    frame #5: 0x00005555558a9d0b mpz_ot-211b9f21d3574d40`_$LT$rayon_core..unwind..AbortIfPanic$u20$as$u20$core..ops..drop..Drop$GT$::drop::h3b3e40d32a9a2a70(self=0x00007ffff6bf60bf) at unwind.rs:29:9
    frame #6: 0x00005555558cf7eb mpz_ot-211b9f21d3574d40`core::ptr::drop_in_place$LT$rayon_core..unwind..AbortIfPanic$GT$::h0c5ceb0a14736899((null)=0x00007ffff6bf60bf) at mod.rs:497:1
    frame #7: 0x0000555555644042 mpz_ot-211b9f21d3574d40`rayon_core::registry::Registry::catch_unwind::ha75ac8e6decffd97(self=0x00007fffec01c200, f=<unavailable>) at registry.rs:383:9
    frame #8: 0x00005555555e15fe mpz_ot-211b9f21d3574d40`rayon_core::spawn::spawn_job::_$u7b$$u7b$closure$u7d$$u7d$::h7e8fe66225c0bb28 at mod.rs:97:13
    frame #9: 0x00005555556d309c mpz_ot-211b9f21d3574d40`_$LT$rayon_core..job..HeapJob$LT$BODY$GT$$u20$as$u20$rayon_core..job..Job$GT$::execute::h6e15266a6f1531ee(this=0x00007fffe8006960) at job.rs:169:9
    frame #10: 0x00005555558cb8db mpz_ot-211b9f21d3574d40`rayon_core::job::JobRef::execute::h87026288824c8d09(self=JobRef @ 0x00007ffff6bf61b8) at job.rs:64:9
    frame #11: 0x00005555558ddff0 mpz_ot-211b9f21d3574d40`rayon_core::registry::WorkerThread::execute::hcfb916dba90ae66f(self=0x00007ffff6bf6400, job=JobRef @ 0x00007ffff6bf61e8) at registry.rs:874:9
    frame #12: 0x00005555558ddd30 mpz_ot-211b9f21d3574d40`rayon_core::registry::WorkerThread::wait_until_cold::h1078101fc15e4f80(self=0x00007ffff6bf6400, latch=0x00007fffec01af20) at registry.rs:820:17
    frame #13: 0x00005555558ddb0b mpz_ot-211b9f21d3574d40`rayon_core::registry::WorkerThread::wait_until::h77a2c86a140d2ec5(self=0x00007ffff6bf6400, latch=0x00007fffec01af20) at registry.rs:803:13
    frame #14: 0x00005555558de5aa mpz_ot-211b9f21d3574d40`rayon_core::registry::main_loop::h163f9599ce30afa4(thread=<unavailable>) at registry.rs:948:5
    frame #15: 0x00005555558d97b6 mpz_ot-211b9f21d3574d40`rayon_core::registry::ThreadBuilder::run::hdac840bf4c239037(self=<unavailable>) at registry.rs:54:18
    frame #16: 0x00005555558d9c4d mpz_ot-211b9f21d3574d40`_$LT$rayon_core..registry..DefaultSpawn$u20$as$u20$rayon_core..registry..ThreadSpawn$GT$::spawn::_$u7b$$u7b$closure$u7d$$u7d$::h3773f5b0958c3f2d at registry.rs:99:20
    frame #17: 0x00005555558b7006 mpz_ot-211b9f21d3574d40`std::sys_common::backtrace::__rust_begin_short_backtrace::h66d7e2eec8b3d0e8(f=<unavailable>) at backtrace.rs:154:18
    frame #18: 0x00005555558cb4ed mpz_ot-211b9f21d3574d40`std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::_$u7b$$u7b$closure$u7d$$u7d$::hf3ade9479b48c884 at mod.rs:529:17
    frame #19: 0x00005555558a9d91 mpz_ot-211b9f21d3574d40`_$LT$core..panic..unwind_safe..AssertUnwindSafe$LT$F$GT$$u20$as$u20$core..ops..function..FnOnce$LT$$LP$$RP$$GT$$GT$::call_once::ha0142f76bf9e7df7(self=<unavailable>, (null)=<unavailable>) at unwind_safe.rs:271:9
    frame #20: 0x00005555558d88c1 mpz_ot-211b9f21d3574d40`std::panicking::try::do_call::ha10112c86933c458(data="") at panicking.rs:526:40
    frame #21: 0x00005555558ded5b mpz_ot-211b9f21d3574d40`__rust_try + 27
    frame #22: 0x00005555558d8772 mpz_ot-211b9f21d3574d40`std::panicking::try::hfcd2907c5b135a23(f=<unavailable>) at panicking.rs:490:19
    frame #23: 0x00005555558cb299 mpz_ot-211b9f21d3574d40`std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::he3551449000acac5 at panic.rs:142:14
    frame #24: 0x00005555558cb288 mpz_ot-211b9f21d3574d40`std::thread::Builder::spawn_unchecked_::_$u7b$$u7b$closure$u7d$$u7d$::he3551449000acac5 at mod.rs:528:30
    frame #25: 0x00005555558cde0f mpz_ot-211b9f21d3574d40`core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h133ed29666fb9846((null)=0x00007fffec0106f0, (null)=<unavailable>) at function.rs:250:5
    frame #26: 0x000055555598b275 mpz_ot-211b9f21d3574d40`std::sys::unix::thread::Thread::new::thread_start::h1156b44d8da5e788 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::hd987c3f0ab28cc33 at boxed.rs:2007:9
    frame #27: 0x000055555598b26d mpz_ot-211b9f21d3574d40`std::sys::unix::thread::Thread::new::thread_start::h1156b44d8da5e788 [inlined] _$LT$alloc..boxed..Box$LT$F$C$A$GT$$u20$as$u20$core..ops..function..FnOnce$LT$Args$GT$$GT$::call_once::h2437bdbf6c1b281b at boxed.rs:2007:9
    frame #28: 0x000055555598b266 mpz_ot-211b9f21d3574d40`std::sys::unix::thread::Thread::new::thread_start::h1156b44d8da5e788 at thread.rs:108:17
    frame #29: 0x00007ffff7c8c9eb libc.so.6`___lldb_unnamed_symbol3601 + 731
    frame #30: 0x00007ffff7d10dfc libc.so.6`___lldb_unnamed_symbol4048 + 7

Identity gate feature

In our garbled circuit protocols, we have two variants of encoded values: Values which have been encoded directly via the encoder, and values generated by computing the garbling algorithm (aka output labels). Recomputing a computed value can be quite expensive depending on the complexity of the circuit they come from.

We avoid this expense in our transcript commitment protocol by making sure that the encodings for the plaintext values are always generated directly via the encoder. This introduces some extra complexity during decryption, as we have to perform an additional oblivious transfer and ZKP to prove the chosen plaintext encrypts to the expected ciphertext.

There is a fairly simple alternative to our current approach: identity gates. A gate which encrypts a label using an output label as the key.

  1. Generator garbles a circuit which generates output labels $W_O$
  2. Evaluator evaluates the garbled circuit to get active output labels $w_O$
  3. Generator generates new labels using the encoder $W_X$
  4. Generator encrypts $W_X$ using $W_O$ for keys: $Gate_X = Enc(W_O, W_X)$
  5. Generator sends $Gate_X$ directly
  6. Evaluator evaluates $Gate_X$ using $w_O$ to learn the active $w_X$ value.

This process need not be coupled to the garbled circuit execution; it can be pretty neatly encapsulated into a new trait:

/// This trait provides a method for cloning values in memory.
#[async_trait]
pub trait ValueClone {
    /// Clones the provided values.
    async fn clone(&mut self, values: &[ValueRef], new_values: &[ValueRef]) -> Result<(), CloneError>;
}

Using this, our decryption protocol can be simplified: we would simply decrypt the ciphertext in the VM, clone the plaintext so the labels are generated via the encoder, and have the Prover commit to these labels instead, eliminating the need for the additional OT + ZKP process.

README should mention mpz limitation

The README should be updated to mention that this lib provides malicious security with 1-bit info leakage (maybe also explain that it is leakage of a predicate, with a 1 − 2^-n probability of the cheating being detected). Also give some examples of real-world situations in which such 1-bit leakage would be acceptable.
We want to emphasize the leakage to prevent accidental use of this lib where full malicious security is expected.

CSPRNG comparison

Recently we added a PRG based on AES-128 in counter-mode, which should be quite performant due to hardware acceleration. However, there was not any discussion on the trade-offs compared to using ChaCha(8/12/20).

Local benchmarks on my machine indicate that our new AES PRG has the same performance as rand_chacha::ChaCha8Rng and is roughly 50% faster than ChaCha12. I haven't gone looking for figures on which ChaCha variant is most appropriate in terms of security level to compare with AES.

We should also consider the difference for targets without hardware acceleration available (WASM).

Improve memory footprint of committed OT

Right now the Receiver stores all the ciphertexts received from the Sender. Afterwards we perform an equality check to make sure the Sender behaved honestly. We can instead store a hash of the sender tape to reduce memory footprint.

Improve performance of verify function for receiver in Committed OT

The current implementation (#86) fast-forwards the state of sender and receiver by splitting if there is an offset. This requires computing the complete KOS matrix.

An improvement would be to instantiate the sender and receiver with an rng offset, so that only the part of the KOS matrix that is actually needed for verification is created. This avoids some unnecessary computation.

This requires some adaptations to the inner workings:
- We can no longer use the rng to generate padding.
- There is not only a single rng per sender/receiver; the other rngs probably need to be set to the right offset as well.

Improve malicious security implementation for M2A/A2M conversion protocols

The current replay functionality added by tlsnotary/tlsn#131 allows detecting malicious behavior of the sender.

However, we need a thorough analysis of whether the current implementation is sufficient. This involves determining what security checks we need and why. In particular, it is currently not clear whether we need to

  • seed the rng by a cointoss
  • verify ot envelopes

Once we figure this out, we need to add any new checks to the current implementation.

KOS iterative extension

Currently iterative extension is disabled for our KOS implementation, as there is no reference which considers the security of doing so. The correlation check itself has already been called into question by Roy, so we're being conservative here.

I expect we will deprecate KOS in favor of SoftSpoken anyhow, but if we do keep KOS around we should investigate re-enabling iterative extension.

Improve `append` functionality in mpc-circuits

The CircuitBuilder API facilitates circuit composition via the append method, which copies the gates of the appended circuit. Though it works and is simple, it is quite inefficient and results in a large memory footprint for more complex circuits. Rather than copying the gates in, I propose we modify the append method to take an Arc<Circuit>. The new circuit will hold these references instead of copying the gates.

Following this we modify the Circuit API, where instead of

pub fn gates(&self) -> &[Gate] { .. }

we return an iterator:

pub fn gates(&self) -> impl Iterator<Item = &Gate> { .. }

This iterator handles traversing the gates of the circuit (and any appended circuits).

One slight complication is handling constant wires. Our current implementation factors out gates with constant wires; this iterator would ideally do the same. It would have some interactions with circuit statistics, e.g. and_count, so we'd have to sort out what to do there, as those stats would change depending on constant inputs to the circuit.

These changes would make the CircuitBuilder API more performant and reduce memory consumption and caching complexity.

Abstract Ristretto details in CO15

Currently low-level Ristretto details are leaked into our CO15 impl.

(//TODO show which details once the PR is merged)

We can hide those details behind a Point (or PublicKey) module.

Migrate unit tests to use the same PRNG seed

Currently our unit tests derive their seed from the machine's entropy pool. This leads to non-deterministic rng input for unit tests, which is bad because it leads to non-reproducible test cases. Instead we agreed to use a fixed seed from now on for unit tests and to add fuzzing separately where it makes sense.

We need to migrate our unit tests and add fuzzing where appropriate.

Refactor out dispatch from `clmul`

Right now the clmul crate has to match on whether intrinsics are available for every operation, incurring significant overhead.

mpz/clmul/src/backend.rs, lines 86 to 124 at a2007e6:

pub fn clmul(self, x: Self) -> (Self, Self) {
    match self.intrinsics {
        Some(s_intr) => match x.intrinsics {
            Some(x_intr) => {
                let (r0, r1) = s_intr.clmul(x_intr);
                (
                    Self {
                        intrinsics: Some(r0),
                        soft: None,
                    },
                    Self {
                        intrinsics: Some(r1),
                        soft: None,
                    },
                )
            }
            None => unreachable!(),
        },
        None => match self.soft {
            Some(s_soft) => match x.soft {
                Some(x_soft) => {
                    let (r0, r1) = s_soft.clmul(x_soft);
                    (
                        Self {
                            intrinsics: None,
                            soft: Some(r0),
                        },
                        Self {
                            intrinsics: None,
                            soft: Some(r1),
                        },
                    )
                }
                None => unreachable!(),
            },
            None => unreachable!(),
        },
    }
}

We should be able to factor this out using the original approach from the polyval crate, with a Union.
