honeybadgerbft-python's Issues

Setup minimal logging infrastructure

The main goal of this issue is to set up a proper logging infrastructure so that we can eventually have some kind of minimal logging activity that can be used for monitoring and troubleshooting purposes.
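As a starting point, here is a minimal sketch using the standard library's logging module; the logger names and format are illustrative only, not a decided design:

    import logging

    logger = logging.getLogger('honeybadgerbft')  # e.g. in honeybadgerbft/__init__.py

    def setup_logging(level=logging.INFO):
        """Attach a basic stream handler; applications and tests can override this."""
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            '%(asctime)s %(name)s [%(levelname)s] %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(level)

    # Protocol modules would then log through child loggers, e.g.:
    #   log = logging.getLogger('honeybadgerbft.core.binaryagreement')
    #   log.debug('round %d: coin=%s values=%s', r, s, values)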

Implement recovery mechanism for "out-of-sync" nodes

NOTE: This is in a way a duplicate (or a refinement) of #33 (originally amiller/HoneyBadgerBFT#6). Perhaps the two issues can be merged into one.

Using the block signature mechanism outlined in #15, nodes that need to catch up should be able to do so.

From amiller/HoneyBadgerBFT#57:

After finalizing each HBBFT-Block, nodes produce a t+1 threshold signature on a merkle tree over all the transactions, as well as a merkle tree over the current state file S. This signature serves as a CHECKPOINT, a succinct piece of evidence that the current block has concluded. CHECKPOINTs are used for three different purposes:

...

  • II. Allows lagging (or restarted) nodes to recover. When a node receives a valid CHECKPOINT for a later block than the current one (for block B’ > B), then it determines it has fallen behind. It deallocates any buffered incoming or outgoing messages in block B, and

...

Proposed name for this mechanism: speedybadger

Rename "test" dir to "tests"

This seems to be a convention adopted by many libraries, although I am not sure why, and the choice may appear highly subjective.

Nevertheless, the change is being proposed, and "good" reasons for it will be provided if possible.

Python error: Segmentation fault

When I run docker-compose run --rm honeybadge, I get an error.

Test session starts (platform: linux, Python 3.7.4, pytest 6.2.2, pytest-sugar 0.9.4)
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/usr/local/src/HoneyBadgerBFT/.hypothesis/examples')
rootdir: /usr/local/src/HoneyBadgerBFT, configfile: pytest.ini, testpaths: test/
plugins: cov-2.11.1, mock-3.5.1, sugar-0.9.4, hypothesis-6.7.0
collecting ... Fatal Python error: Segmentation fault

This is my computer's hardware info. Can this hardware satisfy the project's requirements?

H/W path      Device           Class      Description
=====================================================
                               system     CVM
/0                             bus        Motherboard
/0/0                           memory     96KiB BIOS
/0/400                         processor  Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50GHz
/0/1000                        memory     16GiB System Memory
/0/1000/0                      memory     16GiB DIMM RAM

In BA, est_{r+1} needs to be set to v when v != coin

The binary agreement algorithm requires that est be set to v if len(values) == 1, but the current code is more restrictive: it only sets est = v when v == s. If v != s, est is simply not set and therefore keeps its previous value for the next round:

        # ...
        s = coin(r)

        if len(values) == 1:
            v = next(iter(values))
            if v == s:
                if already_decided is None:
                    already_decided = v
                    decide(v)
                elif already_decided == v:
                    _thread_recv.kill()
                    return
                est = v
        else:
            est = s
        r += 1

So if we want to follow the description of the algorithm found in the Honey Badger paper and in Mostéfaoui et al., the above code should be modified to something like:

        # ...
        s = coin(r)

        if len(values) == 1:
            v = next(iter(values))
            if v == s:
                if already_decided is None:
                    already_decided = v
                    decide(v)
                elif already_decided == v:
                    _thread_recv.kill()
                    return
            est = v
        else:
            est = s
        r += 1

NOTE: It would be good to have a test that catches this problem.
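For illustration, one possible way to test this would be to factor the round-update rule into a pure helper and exercise it directly; update_est below is hypothetical and not part of the current code:

    def update_est(values, s):
        """Return the estimate for the next round given this round's values and coin s."""
        if len(values) == 1:
            v = next(iter(values))
            return v  # est must become v even when v != s
        return s

    def test_est_set_to_v_when_v_differs_from_coin():
        # values == {0}, coin == 1: the next round's estimate must be 0.
        assert update_est({0}, s=1) == 0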

Implement proposed batch size to be floor(B/N)

From @sbellem on August 26, 2017 0:57

@amiller I assume you are well aware of this as there's already a TODO note in the code about implementing the random selection.

Nevertheless, regardless of the size, only one element (tx_to_send[0]) is currently passed to _run_round():

class HoneyBadgerBFT():
    # ...
    def run(self):
        # ...
        while True:
            # ...
            # Select all the transactions (TODO: actual random selection)
            tx_to_send = self.transaction_buffer[:self.B]

            # Run the round
            send_r = _make_send(r)
            recv_r = self._per_round_recv[r].get
            new_tx = self._run_round(r, tx_to_send[0], send_r, recv_r)

So _run_round() and tpke.encrypt() should be capable of taking a list or a similar data structure.

tpke.encrypt() would need to be modified so that a string (e.g.: cPickle.dumps(raw)) is passed to the padding operation.

So this issue could be done in two (or three) parts:

  1. Pass a list to _run_round(), e.g.: tx_to_send[:1]
  2. Pass floor(B/N) transactions to _run_round(), e.g.: tx_to_send[:int(B/N)]
  3. Implement the random selection

Copied from original issue: amiller/HoneyBadgerBFT#36
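A rough sketch of steps 2 and 3 (passing floor(B/N) randomly selected transactions); it assumes self.B, self.N and self.transaction_buffer as in HoneyBadgerBFT.run(), and the exact integration into run() is not shown:

    import random

    def select_proposal(self):
        batch = self.transaction_buffer[:self.B]     # bounded window of the buffer
        k = max(1, self.B // self.N)                 # propose floor(B/N) transactions
        proposed = random.sample(batch, min(k, len(batch)))
        return proposed                              # a list, passed whole to _run_round()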

Visualization of Messages

From @amiller on May 30, 2017 16:56

A visualization of the protocol flow could help newcomers understand the protocol. It could also help with debugging or diagnosing network problems/faults, and it could help build a mental model of the relative costs and bandwidth usage of the protocol components.

One of HoneyBadgerBFT's features is its regular message pattern. Each possible message fits into a predefined slot. To me this suggests a "grid" layout to make the structure as clear as possible. Here's a mockup/sketch of a visualization panel:

[mockup image: honeybadger message dashboard]

This would be a display of a single node's view of a single block/epoch of the honey badger protocol. The idea is that each possible message that could be received would be indicated by an unlit light (grey square); when such a message is received, the light turns yellow. Messages sent

This visualization makes some invariants clear. For example, in Reliable Broadcast, only one ECHO message and one READY message can be received from any node (redundant such messages are discarded). In the case of Binary Agreement, possibly both of EST(0) or EST(1) could be received.

Copied from original issue: amiller/HoneyBadgerBFT#15

Implement bounded ABA

From amiller/HoneyBadgerBFT#57:

It is less clear that the instances of ABA can be bounded; as written, the ABA protocol proceeds in rounds, each round making use of a common COIN, until a termination condition is reached, which does not occur with any a priori bound. However, the running time analysis of the protocol suggests that even in the worst case, an instance of ABA requires more than k coins with probability O(2^-k). Thus it suffices to establish a bound, say k=120.

Potentially related: amiller/HoneyBadgerBFT#63
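A minimal sketch of what such a bound could look like around the ABA round loop; MAX_ROUNDS and the per-round helper are assumptions, not the existing code:

    MAX_ROUNDS = 120  # assumed constant; reaching it has probability roughly O(2^-120)

    def binaryagreement_bounded_loop(coin, run_one_round, est):
        r = 0
        while True:
            if r >= MAX_ROUNDS:
                # Overwhelmingly unlikely for a correct network; treat as a fault.
                raise RuntimeError('ABA exceeded the a priori round bound')
            s = coin(r)
            decided, est = run_one_round(r, s, est)  # hypothetical per-round step
            if decided:
                return est
            r += 1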

Implement speedybadger's state file persistence

draft

speedybadger is meant to be a kind of SPV-like (bitcoin) or fast-sync (ethereum) mechanism.

This issue is concerned with the "state file", as mentioned in amiller/HoneyBadgerBFT#57:

Observation 2. Although the entire blockchain of committed transactions (T is the total number of transactions) grows unboundedly, the size of the current “state” S at any given time is typically much smaller and may be considered bounded.

For example, |S| may be bounded by the number of active accounts, whereas |T| is the number of transactions made by all the accounts.

State evolves as S_1, ..., S_B where S_B = apply(S_{B-1}, B.txs).

Hence if a node falls many blocks behind (or even if it crashes and restarts), then it can “catch up” to the current block by downloading just the current state S rather than the entire log of transactions. This is known as SPV-syncing in Bitcoin, and fast-sync in Ethereum.
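A very rough sketch of the state-file idea, assuming the state is a JSON-serializable account map and that the per-transaction rule is supplied by the application layer (all names are placeholders):

    import json

    def apply(state, txs, apply_tx):
        """Return S_B = apply(S_{B-1}, B.txs); apply_tx(state, tx) -> state is application-defined."""
        for tx in txs:
            state = apply_tx(state, tx)
        return state

    def persist_state(state, block_number, path='state.json'):
        """Write the current state file S so a restarted node can fast-sync from it."""
        with open(path, 'w') as f:
            json.dump({'block': block_number, 'state': state}, f)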

Clarify how to handle redundant messages in ABA

Currently, upon reception of an EST or AUX message, a node will raise an error if a message of the same form (tag, round, value), e.g. ('EST', 0, 0), has already been received from that sender.

This issue is meant to clarify whether this is what the protocol implementation should really do, or whether it should simply silently discard the message and continue looping to receive messages.
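For illustration, the two behaviours could look roughly like this inside the receive loop; this is a sketch, not the actual binaryagreement code:

    seen = set()  # (sender, tag, round, value) tuples already received

    def handle(sender, tag, r, v, strict=True):
        key = (sender, tag, r, v)
        if key in seen:
            if strict:
                # current behaviour: treat the duplicate as a protocol violation
                raise RuntimeError('redundant %s message from node %s' % (tag, sender))
            # alternative behaviour: silently discard and keep receiving
            return
        seen.add(key)
        # ... normal processing ...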

Project documentation

Copied from original issue amiller/HoneyBadgerBFT#26, which was originally concerned with "where to publish the docs"; it has since been converted into a more general issue about documenting the project.

Overall, we could loosely follow the guidelines in https://docs.python-guide.org/writing/documentation/, as a starting point.


Maintaining a changelog

This is to document the project release history. We could simply follow the guidelines outlined in https://keepachangelog.com/.

As for the name of the changelog file, we may use CHANGELOG, or HISTORY.

If at some point we are super disciplined with the git commit messages, we could consider automatically generating the changelog from the commit messages, using a tool such as https://github.com/vaab/gitchangelog.

Documentation structure

The following from https://docs.python-guide.org/writing/documentation/#project-publication may be useful as a starting point, and should be easy to extend to fit the documentation needs specific to this project.

Depending on the project, your documentation might include some or all of the following components:

  • An introduction should show a very short overview of what can be done with the product, using one or two extremely simplified use cases. This is the thirty-second pitch for your project.
  • A tutorial should show some primary use cases in more detail. The reader will follow a step-by-step procedure to set-up a working prototype.
  • An API reference is typically generated from the code (see docstrings). It will list all publicly available interfaces, parameters, and return values.
  • Developer documentation is intended for potential contributors. This can include code conventions and the general design strategy of the project.

From @sbellem on August 19, 2017 21:38

Where to Publish the docs

Publishing on Read the Docs

It is very common for open source Python projects to host their documentation on Read the Docs.

It is usually relatively simple to set up.

There are, however, some limitations in terms of what can be installed beyond Python packages specified in a requirements.txt file and running python setup.py install. This means that for libraries requiring system-level dependencies (C libraries and header files), for instance, it does not seem to be possible to instruct Read the Docs to perform those required installations. In other words, it does not seem to be possible to instruct Read the Docs to execute:

apt-get -y install libgmp-dev libmpc-dev
wget https://crypto.stanford.edu/pbc/files/pbc-0.5.14.tar.gz
tar -xvf pbc-0.5.14.tar.gz
cd pbc-0.5.14 && ./configure && make && make install
git clone https://github.com/JHUISI/charm.git
cd charm && git checkout 2.7-dev && ./configure.sh && python setup.py install

Since the above instructions are required before running python setup.py install to install honeybadgerbft, and installing honeybadgerbft is required in order to build documentation from the docstrings, it is not simple to set up the docs on Read the Docs.

Perhaps there are workarounds, such as indicated here, but this means it will take slightly longer to set up.

Publishing on Github Pages

Another approach would be to publish the docs on Github Pages. For a concrete example, one may look at the asyncpg project's travis-publish-docs.sh script.

Accounting for buffer usage on per-peer basis

From @amiller on May 25, 2017 19:30

  1. The Asynchronous communication model means that outgoing messages may need to be stored/resent arbitrarily far into the future.

Some outgoing messages may be able to be marked as stale, where it no longer matters if they're re-sent. For example, once we get a final signature for a block, no messages pertaining to the previous round should matter.

How can we annotate the protocol to take advantage of this? What does this tell us about the maximum size of a buffer?

  2. Almost every communication in honey badger is "broadcast" only. The only exception is in Reliable Broadcast, where different erasure coded shares are sent.
    Can the communication channel abstraction help with this?

  3. For incoming messages, the asynchronous model means messages pertaining to "stale" subprotocols that have since concluded might be able to be ignored.
    When can we safely mark a subprotocol as concluded and free up state? Can we express filter rules to efficiently discard old messages, e.g. messages pertaining to subprotocols in the previous round get discarded immediately?

Copied from original issue: amiller/HoneyBadgerBFT#4

Garbage collect "outdated" outgoing protocol messages

From amiller/HoneyBadgerBFT#57:

After finalizing each HBBFT-Block, nodes produce a t+1 threshold signature on a merkle tree over all the transactions, as well as a merkle tree over the current state file S. This signature serves as a CHECKPOINT, a succinct piece of evidence that the current block has concluded. CHECKPOINTs are used for three different purposes:
...

  • III. Allow nodes to garbage collect old outgoing messages. Because nodes that have fallen behind can catch up via a CHECKPOINT, it is not necessary to buffer outgoing protocol messages pertaining to earlier blocks. Outgoing messages buffered in the I/O abstraction can be canceled/withdrawn, replaced with CHECKPOINT messages for the current round.

Set up local network with docker-compose

UPDATE: Instead of fixing and/or updating the experiments as the original issue intended to do, this issue is now concerned with having a docker-compose based local network.

One of the key goals of this local network is that it will help towards the deployment of a test network.

The local network may also be useful for different kinds of test cases, such as #17.


From @sbellem on October 17, 2017 12:28

The experiments need to be updated to match the changes made in the dev branch.

NOTE: There's a work-in-progress branch addressing this issue: https://github.com/sbellem/HoneyBadgerBFT/tree/experiments

benchmark tests

  • document the results (in docs and easily accessible in the repo -- e.g. README.md or BENCHMARKS.md), with steps to reproduce.
  • if it makes sense for some of them, add to (travis) CI

Copied from original issue: amiller/HoneyBadgerBFT#45

Cannot run run_local.py

Line 2 in run_local.py shows:

p = subprocess.check_output(
    ['python', '-m', 'honest_party_test',
     '-k', '%d_%d.key' % (N, t), '-e', 'ecdsa.keys',
     '-b', '%d' % Tx, '-n', str(N), '-t', str(t),
     '-c', 'th_%d_%d.keys' % (N, t)],
    shell=False,
)

and HoneyBadgerBFT/experiments/honest_party_test (lines 8, 9, 10, 13, 14) contains imports such as:

from ..core.utils import ...
from ..core.includeTransaction import ...
from ..core.utils import ACSException, checkExceptionPerGreenlet, getSignatureCost, encodeTransaction, getKeys, \
    deepEncode, deepDecode, randomTransaction, initiateECDSAKeys, initiateThresholdEnc, finishTransactionLeap

Where is core/utils?

Host an online "living version" of the original paper in the docs

It would be nice to have a version of the original paper included somewhere in the docs that can evolve and include necessary improvements such as the CONF phase (amiller/HoneyBadgerBFT#59).

This issue proposes to have the paper reproduced using Sphinx / ReStructuredText.

This work has already been started and is currently located under https://github.com/sbellem/HoneyBadgerBFT-Python/tree/docs-paper/docs/paper

If possible, it would be useful to have the tex code and image files for (/cc @amiller ):

  • algorithms
  • diagrams, etc
  • plots

Once the experiments are fixed we can perhaps generate the plots on the fly!? Or we could have living plots coming from a test network?

Authenticated Sockets

From @amiller on May 30, 2017 4:58

So far the demos have only used plain unauthenticated sockets. Clearly we need TLS, with client/server authentication and self-signed certificates. This is most likely a prereq for #26 (originally amiller/HoneyBadgerBFT#7).

Also needs a script to generate certificates along with the other keys.

Copied from original issue: amiller/HoneyBadgerBFT#13
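As a rough illustration, mutually authenticated TLS over plain sockets with Python's standard ssl module could look like this; the certificate file names and hostnames are placeholders, not an agreed layout:

    import socket, ssl

    def make_server_context():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile='node0.crt', keyfile='node0.key')
        ctx.load_verify_locations('ca.crt')      # trust the shared (or per-node) CA
        ctx.verify_mode = ssl.CERT_REQUIRED      # require client certificates too
        return ctx

    def make_client_context():
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.load_cert_chain(certfile='node1.crt', keyfile='node1.key')
        ctx.load_verify_locations('ca.crt')
        return ctx

    def connect(host, port):
        raw = socket.create_connection((host, port))
        return make_client_context().wrap_socket(raw, server_hostname=host)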

Try Bandit

Bandit is a tool designed to find common security issues in Python code. To do this Bandit processes each file, builds an AST from it, and runs appropriate plugins against the AST nodes. Once Bandit has finished scanning all the files it generates a report.

Bandit was originally developed within the OpenStack Security Project and later rehomed to PyCQA.

ref: https://github.com/PyCQA/bandit
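For example, running bandit -r honeybadgerbft/ from the repository root scans the package recursively and prints a report; once a baseline is agreed on, the same command could be added as a CI step.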

Avoid overwriting Python functions

From @sbellem on August 17, 2017 23:17

For example, the function hash() in reliablebroadcast.py could be confused with the Python built-in function hash(). It could perhaps be renamed to something like sha256_digest() just to avoid confusion and potential problems in the future.

Another example is the usage of the callable named input() (e.g. in reliablebroadcast()):

def reliablebroadcast(sid, pid, N, f, leader, input, receive, send):
    """Reliable broadcast ... """"
    # ...
    if pid == leader:
        # The leader erasure encodes the input, sending one strip to each participant
        m = input()  # block until an input is received
    # ...

input() is also a Python built-in function. If somehow input() is the best name, one trick that is often recommended is to suffix the name with an underscore _. In this case the callable would be renamed input_() and the above code snippet would become:

def reliablebroadcast(sid, pid, N, f, leader, input, receive, send):
    """Reliable broadcast ... """"
    # ...
    if pid == leader:
        # The leader erasure encodes the input, sending one strip to each participant
        m = input_()  # block until an input is received
    # ...

Notice how the highlighting of the input() function differs in each code snippet. I assume that is because Github uses MagicPython to highlight the code and MagicPython highlights Python built-in functions.

See related thread https://www.reddit.com/r/pythontips/comments/4m9wiw/avoid_overwriting_python_functions/?st=j6h17gu8&sh=f483ed6e

Also somewhat related Function and method arguments in PEP 8:

If a function argument's name clashes with a reserved keyword, it is generally better to append a single trailing underscore rather than use an abbreviation or spelling corruption. Thus class_ is better than clss. (Perhaps better is to avoid such clashes by using a synonym.)

Copied from original issue: amiller/HoneyBadgerBFT#23

Recovering from intermittent crashes

From @amiller on May 25, 2017 22:15

After crashing and restarting, can we resume the protocol in progress?
The easiest way would be to get a threshold signature on a most recent round, and just start there.
Getting an old signature could lead to equivocation, like sending messages in a round we've already participated in. This would be tolerated by the other nodes and not cause a failure, as long as not too many occurred at once.

Copied from original issue: amiller/HoneyBadgerBFT#6

Threshold decryption seems to not actually work?

From @Vagabond on April 9, 2018 23:15

When we were implementing the threshold decryption routines for erlang_tpke https://github.com/helium/erlang-tpke by following what the python code did, we noticed that threshold decryption seemed to succeed regardless of the inputs. We eventually re-implemented all the threshold decryption routines according to the Baek and Zhang paper and finally our property based tests started passing (we do negative testing with duplicate shares, shares generated with the wrong key and shares for the wrong message).

I don't have specific changes to suggest here, nor the time to assemble them, but I'm pretty convinced your threshold decrypt, as implemented, ends up being a no-op.

The commit where I reworked our implementation to follow the paper, not the python implementation, is here:

helium/erlang-tpke@b2bd3c8

Later commits annotate all those functions with the specific math from the paper(s).

I realize this is not intended to be a production quality implementation, but people should be aware that the threshold decryption doesn't work as advertised and they should not rely on the python implementation of it.

Thanks again for all your work and let me know if there's any more information I can provide.

Copied from original issue: amiller/HoneyBadgerBFT#60

license

From @ericbets on November 21, 2017 0:26

I read your paper. I'm interested in testing out HoneyBadger for a project and was curious -
what are the chances for a dual Apache/CRAPL license?

Copied from original issue: amiller/HoneyBadgerBFT#49

Coding style, etc

From @sbellem on August 24, 2017 21:09

The goal of this issue is first to list the various coding style elements for which there are multiple alternatives and for which some convention is sought, in order to ease contributions, make the code base uniform in style (to increase its readability), and reduce potential unforeseen sources of disagreement over what could very often be considered unimportant, trivial details.

Once a coding style element is identified, a convention can be agreed on.

PEP 8 -- Style Guide for Python Code

Since this is a Python project, PEP 8 -- Style Guide for Python Code is perhaps the first thing that should be considered. Automatic checkers such as flake8 (as mentioned in #33) can help in identifying coding style "errors". Yet, certain things need to be configured: We'll try to highlight some of the most common elements.

Maximum Line Length

It's best to read the section in PEP 8 on this matter, but here's more or less the essence (quoting):

Limit all lines to a maximum of 79 characters.

For flowing long blocks of text with fewer structural restrictions (docstrings or comments), the line length should be limited to 72 characters.
[...]
Some teams strongly prefer a longer line length. For code maintained exclusively or primarily by a team that can reach agreement on this issue, it is okay to increase the nominal line length from 80 to 100 characters (effectively increasing the maximum length to 99 characters), provided that comments and docstrings are still wrapped at 72 characters.

The Python standard library is conservative and requires limiting lines to 79 characters (and docstrings/comments to 72).

Imports

ref: https://www.python.org/dev/peps/pep-0008/#imports

The entire section should more or less be observed.

TODO: List important elements

Relative or absolute imports

TODO This needs to be decided

Imports in tests

This is not mentioned in PEP 8 and there can be a different style of using imports specifically for tests. One that is worth considering is: https://pylonsproject.org/community-unit-testing-guidelines.html

TODO: explain a bit https://pylonsproject.org/community-unit-testing-guidelines.html

String Quotes

ref: https://www.python.org/dev/peps/pep-0008/#string-quotes

In Python, single-quoted strings and double-quoted strings are the same. This PEP does not make a recommendation for this. Pick a rule and stick to it. When a string contains single or double quote characters, however, use the other one to avoid backslashes in the string. It improves readability.

For triple-quoted strings, always use double quote characters to be consistent with the docstring convention in PEP 257.

tool: https://github.com/zheller/flake8-quotes

PEP 257 -- Docstring Conventions

ref: https://www.python.org/dev/peps/pep-0257/

Testing Framework

This can perhaps be moved to its own issue, but it will be put here for now.

Different test frameworks have their own sets of features and plugins, which are likely to be mutually incompatible.

TODO list some examples (e.g.: nose2 vs pytest)
TODO pros and cons of different frameworks

reddit thread: Nose alternatives - nose2, pytest or something else?

Copied from original issue: amiller/HoneyBadgerBFT#34

Finalize each block with a threshold signature

From amiller/HoneyBadgerBFT#57:

After finalizing each HBBFT-Block, nodes produce a t+1 threshold signature on a merkle tree over all the transactions, as well as a merkle tree over the current state file S. This signature serves as a CHECKPOINT, a succinct piece of evidence that the current block has concluded. CHECKPOINTs are used for three different purposes:

  • I. Prevent DoS from malicious nodes. Messages for future rounds B' > B are ignored until a CHECKPOINT message for round B is received. This enables a node to discard DoS messages sent from an attacker, which would otherwise appear to be plausible messages in the future.

  • II. Allows lagging (or restarted) nodes to recover. When a node receives a valid CHECKPOINT for a later block than the current one (for block B’ > B), then it determines it has fallen behind. It deallocates any buffered incoming or outgoing messages in block B, and

  • III. Allow nodes to garbage collect old outgoing messages. Because nodes that have fallen behind can catch up via a CHECKPOINT, it is not necessary to buffer outgoing protocol messages pertaining to earlier blocks. Outgoing messages buffered in the I/O abstraction can be canceled/withdrawn, replaced with CHECKPOINT messages for the current round.
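A very rough sketch of what producing a CHECKPOINT share might look like after a block is finalized; merkle_root is a simplified stand-in, and sign_share stands for an unspecified t+1 threshold signature scheme:

    import hashlib

    def merkle_root(leaves):
        """Simplified binary merkle root over already-serialized leaves (bytes)."""
        layer = [hashlib.sha256(leaf).digest() for leaf in leaves] or [hashlib.sha256(b'').digest()]
        while len(layer) > 1:
            if len(layer) % 2:
                layer.append(layer[-1])
            layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                     for i in range(0, len(layer), 2)]
        return layer[0]

    def make_checkpoint_share(block_no, txs, state_chunks, sign_share):
        """sign_share(message) is a placeholder for producing a t+1 threshold signature share."""
        message = (b'CHECKPOINT' + block_no.to_bytes(8, 'big')
                   + merkle_root(txs) + merkle_root(state_chunks))
        return ('CHECKPOINT', block_no, sign_share(message))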

Segfaults at N>100

From @mark-liu on July 26, 2017 10:24

When running docker run -e N="100" -e t="2" -e B="16" -it honeybadgerbft, the program returns a segfault (please see screenshots); system resources looked okay at the time. Let me know if you want me to run any other tests. Same behaviour at N="200".

[screenshots: segfault output]

Copied from original issue: amiller/HoneyBadgerBFT#19

Review usage of assert statements in implementation code

In some places, assert statements are being used to perform some kind of validation, e.g. checking the type or value of an argument.

These statements should be replaced with proper validation mechanisms that raise an appropriate exception and do whatever is necessary from the point of view of the protocol, because assert statements may be removed when optimization is turned on. From the Python docs:

The current code generator emits no code for an assert statement when optimization is requested at compile time.
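For illustration, a check along these lines keeps working when assertions are stripped with python -O; the argument name and exception type are just an example:

    # Instead of e.g.:
    #     assert type(sid) is str
    # raise an explicit exception so the check survives optimization:
    def validate_sid(sid):
        if not isinstance(sid, str):
            raise TypeError('sid must be a str, got %r' % type(sid))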

Resources

Removing transactions does not work in honeybadger.py

When removing messages that were already processed, something goes wrong.

new_tx = self._run_round(r, tx_to_send[0], send_r, recv_r)
print('new_tx:', new_tx)

# Remove all of the new transactions from the buffer
self.transaction_buffer = [_tx for _tx in self.transaction_buffer if _tx not in new_tx]

It doesn't actually remove the messages that were processed: the objects in new_tx are bytes objects while the objects in self.transaction_buffer are str objects, so they never compare equal.

I changed the code to this, and it works:

new_tx = self._run_round(r, tx_to_send[0], send_r, recv_r)
print('new_tx:', new_tx)

# Remove all of the new transactions from the buffer
- self.transaction_buffer = [_tx for _tx in self.transaction_buffer if _tx not in new_tx]
+ self.transaction_buffer = [_tx for _tx in self.transaction_buffer if bytes(_tx, 'utf-8') not in new_tx]

Thank you for your sharing. I have learned a lot about HoneyBadgerBFT.
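An alternative sketch, assuming transactions enter the buffer as str through a submit_tx-style method, would be to normalize them to bytes at submission time so the comparison works without further changes:

    def submit_tx(self, tx):
        # Normalize to bytes once, so buffer entries and new_tx entries compare equal.
        if isinstance(tx, str):
            tx = tx.encode('utf-8')
        self.transaction_buffer.append(tx)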

Implement dashboard for test network

The specs have yet to be defined, but one may think of it as a simple web interface with blockchain-explorer-like features.

See also #35 (visualization of messages).

Look into v-ABA from Abraham et al.

This paper appears to offer an improvement on validated broadcast. Whereas HoneyBadgerBFT requires O(log N) expected rounds, due to N instances of binary ABA each of which takes a geometrically distributed number of rounds, it should be possible to refactor our ACS protocol to use theirs instead:
https://www.researchgate.net/publication/328758146_Validated_Asynchronous_Byzantine_Agreement_with_Optimal_Resilience_and_Asymptotically_Optimal_Time_and_Word_Communication

Use docker for builds on travis

To help prevent situations where everything works fine in the docker-based development environment but things fail on Travis CI.

  • The same Dockerfile as the one used for development can be used, except that it can be parametrized for the pip install phase.
  • A compose file can be added for travis only.

Implement test cases that exhibit the unbounded behavior

This work is needed towards the completion of the bounded badger implementation.

To help the implementation and testing of bounded badger, a subset of (or all of) these tests should fail if the bounded badger "extension" is not active, and succeed otherwise.
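One possible shape for such tests, assuming a toggle for the (hypothetical) bounded-badger extension exists; all names here are placeholders:

    import pytest

    BOUNDED_BADGER_ENABLED = False  # placeholder for however the extension is toggled

    requires_bounded_badger = pytest.mark.xfail(
        not BOUNDED_BADGER_ENABLED,
        strict=True,
        reason='unbounded behaviour: passes only with the bounded badger extension',
    )

    # Usage: decorate the tests that exhibit the unbounded behaviour with
    # @requires_bounded_badger.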
