
halo2's Introduction

Zcash 5.9.0

What is Zcash?

Zcash is HTTPS for money.

Initially based on Bitcoin's design, Zcash has been developed from the Zerocash protocol to offer a far higher standard of privacy and anonymity. It uses a sophisticated zero-knowledge proving scheme to preserve confidentiality and hide the connections between shielded transactions. More technical details are available in our Protocol Specification.

The zcashd Full Node

This repository hosts the zcashd software, a Zcash consensus node implementation. It downloads and stores the entire history of Zcash transactions. Depending on the speed of your computer and network connection, the synchronization process could take several days.

The zcashd code is derived from a source fork of Bitcoin Core. The code was forked initially from Bitcoin Core v0.11.2, and the two codebases have diverged substantially.

🔒 Security Warnings

See important security warnings on the Security Information page.

Zcash is experimental and a work in progress. Use it at your own risk.

📒 Deprecation Policy

This release is considered deprecated 16 weeks after the release day. There is an automatic deprecation shutdown feature, based on block height, which will halt the node some time after this 16-week period.

Other Zcash Implementations

The Zebra project offers a different Zcash consensus node implementation, written largely from the ground up.

Getting Started

Please see our user guide for instructions on joining the main Zcash network.

Need Help?

  • 📘 See the documentation at ReadTheDocs for help and more information.
  • 📨 Ask for help on the Zcash forum.
  • 💬 Join our community on Discord.
  • 🧑‍🎓 Learn at ZecHub.

Participation in the Zcash project is subject to a Code of Conduct.

Building

Build Zcash along with most dependencies from source by running the following command:

./zcutil/build.sh -j$(nproc)

Currently, Zcash is only officially supported on Debian and Ubuntu. See the Debian / Ubuntu build guide for detailed instructions.

License

For license information see the file COPYING.

halo2's Issues

Add blinding factors to aux wires

Right now, aux wires don't have blinding factors because the G element in the accumulator doesn't have one either. However, we can amend the polynomial commitment opening protocol so that it naturally supports a blinding factor of 1 for the G element just like fixed wires have.

Remove fork hack from commitment opening implementation

Currently, the commitment opening implementation stores a fork value in the opening proof which the prover can vary until the point that is sampled at the beginning of the argument is on the curve. Let's remove this and instead make the proving fallible; the caller is then responsible for forking the transcript some other way (changing a blinding factor in a commitment) and trying again until it succeeds.
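
A minimal sketch of the proposed fallible flow (all names here are illustrative, not the actual halo2 API): the proving routine reports an off-curve sample as a recoverable outcome, and the caller forks the transcript and retries.

// Hypothetical sketch: `prove` is any fallible proving attempt. Ok(None)
// models "the sampled point was not on the curve"; the closure is expected
// to fork the transcript (e.g. by re-randomizing a blinding factor) on
// each call.
fn prove_with_retry<P, E, F>(mut prove: F) -> Result<P, E>
where
    F: FnMut() -> Result<Option<P>, E>,
{
    loop {
        match prove()? {
            Some(proof) => return Ok(proof),
            None => continue, // off-curve sample: fork and try again
        }
    }
}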

Remove unused fixed_values from ProvingKey

As of #53, the ProvingKey includes values of all fixed columns so that the prover can use them in lookup arguments.

However, not all fixed columns are used in a lookup argument. We can remove unused fixed columns in keygen.rs. We will also have to change how the prover indexes into this subset of fixed columns.
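
A sketch of the pruning this implies (the names here are hypothetical, not the actual keygen.rs code): keep only the fixed columns that some lookup argument uses, and record an old-index to new-index map so the prover can re-index its queries.

use std::collections::BTreeMap;

// Hypothetical sketch: `used_in_lookups` would be computed from the
// constraint system's lookup arguments. Keep only those fixed columns and
// return a remapping from old column indices to new ones.
fn prune_fixed<T>(
    fixed_values: Vec<T>,
    used_in_lookups: &[usize],
) -> (Vec<T>, BTreeMap<usize, usize>) {
    let mut remap = BTreeMap::new();
    let mut kept = Vec::new();
    for (old_index, column) in fixed_values.into_iter().enumerate() {
        if used_in_lookups.contains(&old_index) {
            remap.insert(old_index, kept.len());
            kept.push(column);
        }
    }
    (kept, remap)
}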

Check probability that Fp/Fq elements can be reinterpreted

Currently, we take field elements from a transcript instantiated over the "other curve" and interpret them as field elements in the "current curve." This is fine for one curve's scalar field but not the other, because it's smaller. However, the two fields are very close in size: a uniform hash output in the larger field of size q overflows the smaller field of size p with probability (q − p)/q, which may be negligible. (Perhaps even when it overflows it doesn't matter?)

Run MockProver to collect error information whenever Proof::create returns an invalid proof

In #95 we added a tool for developers to debug their circuits while writing them. It would be useful to also use this to augment production errors when creating proofs. We could do this by running MockProver on the given circuit whenever Proof::create returns an invalid proof (as opposed to failing because the circuit struct was set up incorrectly, e.g. with some witnesses left unset). If MockProver did not find an issue with an invalid proof, that would itself mean we had uncovered non-determinism within the circuit, which is incredibly important to convey to developers.
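
A rough sketch of that flow (the outcome and error types here are illustrative assumptions, not halo2 APIs):

// Hypothetical sketch: `prove` attempts real proof creation and reports
// whether the resulting proof verified; `mock_verify` runs the MockProver
// checks and returns any constraint failures it finds.
enum ProveOutcome<P> {
    Valid(P),
    Invalid, // a proof was created, but it does not verify
}

enum DebugError {
    // MockProver pinpointed the failing constraints.
    ConstraintFailures(Vec<String>),
    // MockProver found nothing wrong, so the circuit must be
    // non-deterministic between the two runs.
    NonDeterministicCircuit,
}

fn prove_with_diagnostics<P>(
    prove: impl Fn() -> ProveOutcome<P>,
    mock_verify: impl Fn() -> Result<(), Vec<String>>,
) -> Result<P, DebugError> {
    match prove() {
        ProveOutcome::Valid(proof) => Ok(proof),
        ProveOutcome::Invalid => match mock_verify() {
            Err(failures) => Err(DebugError::ConstraintFailures(failures)),
            Ok(()) => Err(DebugError::NonDeterministicCircuit),
        },
    }
}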

Extract `commit_poly` helper utility

This is used in a few places and can be extracted into a common helper utility (see #53 (comment)):

// Closure to construct commitment to column of values
let commit_column = |column: &Polynomial<C::Scalar, LagrangeCoeff>| {
    let poly = pk.vk.domain.lagrange_to_coeff(column.clone());
    let blind = Blind(C::Scalar::rand());
    let commitment = params.commit_lagrange(&column, blind).to_affine();
    (poly, blind, commitment)
};

The helper utility will most likely live in the poly crate; it could possibly return cosets as well.
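
For illustration, the extracted helper might look roughly like this (the signature and module placement are assumptions; the types follow the snippet above):

// Hypothetical extraction (sketch): commit to a column of Lagrange-basis
// values, returning the coefficient-form polynomial, its blinding factor,
// and the commitment.
fn commit_poly<C: CurveAffine>(
    params: &Params<C>,
    domain: &EvaluationDomain<C::Scalar>,
    values: &Polynomial<C::Scalar, LagrangeCoeff>,
) -> (Polynomial<C::Scalar, Coeff>, Blind<C::Scalar>, C) {
    let blind = Blind(C::Scalar::rand());
    let commitment = params.commit_lagrange(values, blind).to_affine();
    let poly = domain.lagrange_to_coeff(values.clone());
    (poly, blind, commitment)
}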

Write a simple first-fit FloorPlanner

Input: a set of regions, each of which takes up a given set of columns region.cols for a given number of rows region.rows.
Output: a mapping from regions to their start rows.

Greedy first fit algorithm (sketch):

For each column, keep track of the set of allocated [start, end) intervals so far.
We greedily find a placement for each region in turn (there is no backtracking at the region level):

  • Start with the interval of all possible rows. For each column used by this region, recursively narrow the interval in which we could potentially place the region. We do this by intersecting the current interval with the free intervals in each used column. If it won't fit, backtrack to the next free interval.

It is probably easier to explain this just by giving a Python implementation:

#!/usr/bin/env python3

from math import inf

class Region:
    def __init__(self, rows, cols):
        self.rows = rows
        self.cols = cols

    def __repr__(self):
        return "Region(%d, %r)" % (self.rows, self.cols)

class Column:
    def __init__(self, index):
        self.index = index
        # allocs is an unordered list of [start, end) pairs.
        self.allocs = []

    def __repr__(self):
        return "Column(%d): %r" % (self.index, self.allocs)

def plan(regions):
    for reg in regions:
        start = first_fit_region(reg, 0, 0, inf)

        # Some placement must be possible given infinite rows.
        assert start is not None
        print("%4r: %r" % (start, reg))

def first_fit_region(reg, col_index, start, end):
    assert start + reg.rows <= end
    if col_index == len(reg.cols):
        return start

    c = reg.cols[col_index]

    # Iterate over the unallocated nonempty intervals in c that intersect [start, end).
    for (c_start, c_end) in free_intervals(c.allocs, start, end):
        # Do we have enough room for this column of reg?
        if c_start + reg.rows <= c_end:
            # Recursively check whether the placement works for all other columns of reg.
            row = first_fit_region(reg, col_index+1, c_start, c_end)
            if row is not None:
                # It works. On the way back out of the recursion, record the placement.
                assert row + reg.rows <= c_end
                c.allocs.append((row, row + reg.rows))
                return row

    # No placement worked; the caller will need to try other possibilities.
    return None

# allocs is a set of [a_start, a_end) pairs representing disjoint allocated intervals.
# Return all the *unallocated* nonempty intervals intersecting [start, end).
def free_intervals(allocs, start, end):
    row = start
    for (a_start, a_end) in sorted(allocs):
        if a_start >= end:
            # further allocated intervals don't intersect
            break

        if row < a_start:
            yield (row, a_start)

        row = max(row, a_end)  # Skip past this allocated interval.

    if row < end:
        yield (row, end)

def test():
    (c1, c2, c3) = (Column(1), Column(2), Column(3))
    reg1 = Region(15, [c1, c2])
    reg2 = Region(10, [c3])
    reg3 = Region(10, [c3, c1])
    plan([reg1, reg2, reg3])


if __name__ == "__main__":
    test()

Here's the placement that this algorithm finds for the test case:

   0: Region(15, [Column(1): [(0, 15)], Column(2): [(0, 15)]])
   0: Region(10, [Column(3): [(0, 10)]])
  15: Region(10, [Column(3): [(0, 10), (15, 25)], Column(1): [(0, 15), (15, 25)]])

Multi-proof prover

It's possible to structure halo2 proofs in such a way that they can share resources. We can then create multiple proofs at the same time, with most of the proof size and time in a common block, and then marginal size and time costs per proof.

Incorrect documentation of return type from verification

Here, in halo2/src/plonk/verifier.rs (lines 180 to 190 at commit 18746f1):

// We are now convinced the circuit is satisfied so long as the
// polynomial commitments open to the correct values.
self.multiopening
    .verify(
        params,
        &mut transcript,
        &mut transcript_scalar,
        queries,
        msm,
    )
    .map_err(|_| Error::OpeningError)

we return an error from verification if the multiopening is not satisfied. If I'm not mistaken, this check can fail for an invalid proof without there being any mistake in the calling code (unlike, say, the checks that return IncompatibleParams). So I think it is not really an error, and verification should just return Ok(false).
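
Concretely, the change might look like this (a sketch, assuming the surrounding verifier returns Result<bool, Error>):

// Sketch: an unsatisfied multiopening means "invalid proof", not an
// internal error, so map it to Ok(false) rather than Err(...).
Ok(self
    .multiopening
    .verify(
        params,
        &mut transcript,
        &mut transcript_scalar,
        queries,
        msm,
    )
    .is_ok())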

Implement table-based variant of Sarkar's square root algorithm

A Sage prototype for the Pasta fields is here: https://github.com/zcash/pasta/blob/master/squareroottab.sage

This includes a few minor optimizations and simplifications that are not in the paper. We essentially follow section 3 but specialize it to the case ℓ_{0..k-2} = w, which greatly simplifies the algorithm. We also merge Tab1 and Tab2 (which is mentioned in the paper) and use perfect hashing for Tab3 (which is not mentioned).

Performance modelling

Define a parameterized model that can predict the prover and verifier performance of a given Halo 2 circuit, so that I can understand how to optimize circuits. The model needs to be validated against measured performance.
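
As a purely illustrative example of the kind of parameterization meant here (the cost terms below are placeholders to be fit against benchmarks, not a claim about halo2's actual cost profile):

// Illustrative only: suppose FFT work scales as k·2^k per polynomial and
// commitment (multiexp) work as 2^k per polynomial. The coefficients would
// be fit to measured data and the model validated against real circuits.
struct ProverCostModel {
    fft_ns: f64,   // fitted cost per FFT butterfly
    msm_ns: f64,   // fitted cost per multiexp term
    fixed_ns: f64, // fitted constant overhead
}

impl ProverCostModel {
    fn estimate_ns(&self, k: u32, num_polys: u32) -> f64 {
        let n = (1u64 << k) as f64;
        let polys = f64::from(num_polys);
        self.fft_ns * f64::from(k) * n * polys + self.msm_ns * n * polys + self.fixed_ns
    }
}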

Design gadget API or interaction model

We will want some way to create reusable gadgets that can be composed to create halo2 circuits. This is pretty closely coupled to the general circuit API.

If possible, it would be great to maintain the bellman approach of keeping circuit generation and witness synthesis close together. The current halo2 circuit API doesn't quite achieve that (they are both part of the same trait but occur in different functions), so we either need to adjust that API or design gadgets to abstract over it.

Optimize multi-point opening

Currently, the implementation randomly adds together commitments that are being opened at the same point, then performs a multi-point opening. However, this is inefficient when commitments are opened at multiple points, as then we have to do an unnecessary amount of arithmetic.

Instead, we'll make it so that the set of points at which every commitment is opened is queried separately, to minimize the amount of group arithmetic done by the verifier. It'll probably be nicer to break this abstraction out and separate it from the plonk prover; we could place it in the poly module somewhere.
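
A sketch of the grouping step (illustrative generic types; the real version would operate on commitments and evaluation points):

use std::collections::BTreeMap;

// Sketch: collect, for each commitment, the set of points it is opened at,
// then group commitments that share exactly the same point set so each
// group is handled by a single multi-point opening.
fn group_by_point_set<C, P>(queries: &[(C, P)]) -> BTreeMap<Vec<P>, Vec<C>>
where
    C: Ord + Clone,
    P: Ord + Clone,
{
    let mut points_for: BTreeMap<C, Vec<P>> = BTreeMap::new();
    for (commitment, point) in queries {
        points_for
            .entry(commitment.clone())
            .or_default()
            .push(point.clone());
    }
    let mut groups: BTreeMap<Vec<P>, Vec<C>> = BTreeMap::new();
    for (commitment, mut points) in points_for {
        points.sort();
        points.dedup();
        groups.entry(points).or_default().push(commitment);
    }
    groups
}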

Incorporate MSM into PLONK verifier

Currently the PLONK verifier will manually compute the f_commitment for input into the commitment opening proof verifier. Instead, it should just incorporate this computation into the MSM that it feeds into the commitment verifier. Then, the responsibility for scaling the MSM randomly falls to the plonk verifier instead.

It also may be best to return the Guard from the PLONK verifier.

Implement SHA-256 gadget

It's a pretty common building block in several circuits, and thus a useful test case (even if we won't use it ourselves in Pollard).

Add support for coefficient blinding factors in poly::commitment

Right now you can only commit to polynomials of degree 2^k - 1, but in the PLONK protocol you'll learn several openings of these polynomials and thus zero-knowledge is broken. In order to fix this we have to commit to a polynomial of slightly larger degree (just enough to compensate for the number of openings) but this is not easy to do with the existing APIs, especially because of the FFTs and the assumptions about the size of the evaluation domains.

This will require changing how the Polynomial structure works (probably forcing it to keep track of the degree of the polynomial) among other things.

Use probabilistic compression of pairs of curve points

This optimization allows us to use the Rescue duplex sponge for Fiat–Shamir with a rate of 3 rather than 4 (so, a width of 4 field elements rather than 5). This reduces the number of wires needed for a Rescue gate. (Actually, this optimization is independent of whether we use Rescue or Poseidon.) The code that currently absorbs two curve points, in each round of the polynomial commitment argument, is here:

transcript.absorb(l.0);
transcript.absorb(l.1);
transcript.absorb(r.0);
transcript.absorb(r.1);

and here:
transcript.absorb(l_x);
transcript.absorb(l_y);
transcript.absorb(r_x);
transcript.absorb(r_y);

Versioning of gadgets

Let's say that at some point we have a healthy ecosystem of third-party gadget implementations. A circuit implementation for a given protocol will need to make sure that it can pick up all security fixes and circuit-independent performance fixes, in both the proving system library and any third-party gadgets, without changing the circuit and invalidating existing proofs.

I propose that we assign a date-based version number (say yyyymmdd) to each circuit, and we pass this into all gadgets [*]. Now the stability policy for a gadget can be either:

  • unstable β€” the gadget can be changed at any time (hence not suitable for deployed protocols, and marked as such in the gadget metadata);
  • stable β€” the subcircuit implemented by the gadget will not change unless the version number changes.

How this would be expressed in a gadget implementation is that if you're adding a circuit-breaking change on yyyymmdd, then you must make the change conditional on whether the circuit version is ≥ yyyymmdd. Using dates ensures that gadget authors don't have to coordinate, while circuit authors can control when to make breaking changes and then just pick up all such changes up to a given date (after reviewing them for security, of course). Non-breaking updates will be much easier in this system.

[*] We likely will have some kind of context argument passed to gadgets anyway when instantiating them, so the version doesn't have to be passed as a separate argument.
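
For illustration, a stable gadget might gate a breaking change like this (all names and the date are hypothetical):

// Hypothetical sketch: the circuit's date-based version arrives via a
// context argument; a breaking change landed on 2021-03-01 only takes
// effect for circuits that declare a version on or after that date.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct CircuitVersion(pub u32); // yyyymmdd

pub struct GadgetContext {
    pub version: CircuitVersion,
}

pub fn synthesize_gadget(ctx: &GadgetContext) {
    if ctx.version >= CircuitVersion(20210301) {
        // new subcircuit, including the breaking fix
    } else {
        // original subcircuit, preserved so existing proofs remain valid
    }
}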

Change the Hasher to be generic over curve points

Instead of setting up two transcripts individually, merge the transcripts together so that the implementation can choose how to optimize for the particular setting.

Also, probably rename Hasher to Transcript.

Fix terminology in code

We've decided to call wires "columns" instead, and witness values "cells." Let's unify the terminology we're using with what's implemented.

Simplify h_evals loop in verifier

let (_, h_eval) = self
    .h_evals
    .iter()
    .fold((C::Scalar::one(), C::Scalar::zero()), |(cur, acc), eval| {
        (cur * &x_3n, acc + &(cur * eval))
    });

This fold just evaluates a polynomial at the fixed point x_3n (i.e. x_3^n), so we can apply Horner's rule to simplify:

let h_eval = self
    .h_evals
    .iter()
    .rev()
    .fold(C::Scalar::zero(), |acc, eval| acc * &x_3n + eval);

Support batching and accumulation in polynomial opening argument

Currently, the verifier performs a complete verification of the OpeningProof. We want to support both accumulation and batching in the opening argument, and here's a sketch of what that might look like:

We'll have an MSM (multiscalar multiplication) structure parallel to Params. This structure will let you add arbitrary terms (the scalar and the point) or add to terms that already exist (such as g and h). It'll also let you scale the entire thing by a random blinding factor. Finally, it'll let you check is_zero() to perform the multiexp and check that it results in zero.

The OpeningProof::verify() method will take an MSM as input for it to append the necessary values. It will not compute the G element (expensive computation) and instead return (1) a structure containing the challenges, and (2) a guard structure which lets the prover either supply the challenges and obtain an MSM, or supply the purported G value and obtain an MSM. The caller will either keep the challenges and the G value and place it in an accumulator object, or it will discard the challenges and check the MSM either immediately or later on.
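
An illustrative shape for that MSM structure (a sketch, assuming a multiexp helper like halo2's best_multiexp and the group crate's traits; not the final API):

use group::Group;
use halo2::arithmetic::{best_multiexp, CurveAffine};

// Sketch: accumulate (scalar, point) terms, allow scaling everything by a
// random factor, and only evaluate the multiexp when checking is_zero().
pub struct Msm<C: CurveAffine> {
    scalars: Vec<C::Scalar>,
    points: Vec<C>,
}

impl<C: CurveAffine> Msm<C> {
    pub fn new() -> Self {
        Msm { scalars: vec![], points: vec![] }
    }

    // Add an arbitrary term (or another occurrence of an existing base,
    // such as g or h).
    pub fn append_term(&mut self, scalar: C::Scalar, point: C) {
        self.scalars.push(scalar);
        self.points.push(point);
    }

    // Scale the entire accumulated sum by a (random) factor.
    pub fn scale(&mut self, factor: C::Scalar) {
        for s in &mut self.scalars {
            *s *= factor;
        }
    }

    // Perform the multiexp and check that it evaluates to zero.
    pub fn is_zero(&self) -> bool {
        bool::from(best_multiexp(&self.scalars, &self.points).is_identity())
    }
}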

Use multiexp in commitment opening prover

Right now, in parallel_generator_collapse we do

for (g_lo, g_hi) in g_lo.iter().zip(g_hi.iter()) {
    // TODO: could use multiexp
    tmp.push(((*g_lo) * challenge_inv) + &((*g_hi) * challenge));
}

This could be done instead with a small multiexp. Right now we only have Pippenger's multiexp implemented, but this would be more efficient using the standard "share doublings" approach (sketched below), maybe with some w-NAF added in.
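
For the two-term products here, the classic shared-doubling (Straus/Shamir) approach looks roughly like this (a generic sketch over the group crate's Group trait; bits are most-significant first):

use group::Group;

// Sketch: compute a*P + b*Q with a single shared doubling chain. `a_bits`
// and `b_bits` are the scalars' bits, most significant first, equal length.
fn shamir<G: Group>(a_bits: &[bool], p: G, b_bits: &[bool], q: G) -> G {
    let table = [G::identity(), p, q, p + q];
    let mut acc = G::identity();
    for (&a, &b) in a_bits.iter().zip(b_bits.iter()) {
        acc = acc.double();
        acc += table[(a as usize) | ((b as usize) << 1)];
    }
    acc
}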

Furthermore, it may be possible to "defer" these individual multiexps to later rounds and just work over the original basis temporarily until the number of individual multiexps is reduced.

Support for auxiliary advice wires

In order to actually use the accumulation trick, we need "auxiliary" advice wires that are brought in from outside. These can engage in the argument itself, though the only example I can think of where that is useful is public inputs. Somehow, the proof structure exposes these auxiliary evaluations so that the verifier can check them if needed, as is the case with accumulation.
