google / trillian
A transparent, highly scalable and cryptographically verifiable data store.
License: Apache License 2.0
It's in the API and could be useful for clients but not implemented in the server.
Currently, the storage.QueueLeaves API effectively enforces an all-or-nothing failure mode for the addition of each leaf in the array, which is not ideal. For example, if the log doesn't accept duplicates, a single duplicate entry would cause the whole batch of additions to fail.
This resolves the "Too many open connections at once" issue.
It also allows tests to create test databases directly.
Public clients only need an array of neighbor hashes to verify proofs.
Including extra data invites API misuse.
Most of the work is done. The API needs to read the hash from the new leaf_value_hash field. When leaves are queried the backend should return both the merkle_leaf_hash and leaf_value_hash.
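For illustration, a queried leaf would then carry both hashes, along the lines of the sketch below (field names follow the issue text; the RFC 6962-style leaf hashing shown is an assumption, not necessarily what the backend uses):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// LogLeaf sketches a queried leaf carrying both hashes: MerkleLeafHash
// covers the leaf as hashed into the tree, LeafValueHash covers just the
// submitted data. Field names follow the issue text above.
type LogLeaf struct {
	MerkleLeafHash []byte
	LeafValueHash  []byte
	LeafValue      []byte
}

func newLeaf(data []byte) LogLeaf {
	vh := sha256.Sum256(data)
	// RFC 6962-style leaf hash: a 0x00 prefix domain-separates leaves
	// from interior nodes. Illustrative only.
	mh := sha256.Sum256(append([]byte{0x00}, data...))
	return LogLeaf{MerkleLeafHash: mh[:], LeafValueHash: vh[:], LeafValue: data}
}

func main() {
	l := newLeaf([]byte("entry"))
	fmt.Println(len(l.MerkleLeafHash), len(l.LeafValueHash)) // 32 32
}
```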
Return a generic storage object rather than new objects for each tree.
Partial work started here
Make it possible to prevent very recent leaves from being eligible for sequencing. For more details see the existing CT repository.
Consider removing TrillianStatusCode so that clients have a single place to check for errors.
Currently we can only obtain proofs where the tree size is at an internal STH but it should be possible to return proofs at arbitrary sizes as in existing implementations.
This requires dynamic rehashing of some proof nodes and is complex to implement. Research indicates it's feasible to unroll the rehashing chain so storage can fetch all the involved nodes and the rehashing can be done by post-processing.
Changes involved (plus tests of course):
They sometimes fail with the server complaining that a port is already in use.
The Map API is intended to expose a key / value interface, yet the "value", aka MapLeaf, currently contains the key, aka Index. Index is also contained in IndexValue, IndexInclusionProof and several other messages, producing confusion about where and when to set Index.
Proposal:
Use a proto3 map in Set / Get if we can find a scalar value for the index. Perhaps a hex string?
Or standardize on the IndexValue message.
Blocking google/keytransparency#486
In the example CT frontend including all the RFC 6962 gubbins.
The signatures library doesn't have a companion verification function.
Investigate importing the one from Key Transparency
Should use the one with the lowest sequence number for the leaf index.
Existing tests only cover small tree sizes (up to 7 leaves), which is not enough to be confident in the code. Needs more extensive tests.
Write a log integration test for consistency proofs between various tree sizes.
Hasher needs to be an interface to support alternative hashing implementations.
Here's a proposal that supports both logs and maps.
Individual implementations are not required to incorporate all input fields into their hash.
// TreeHasher provides hash functions for tree implementations.
type TreeHasher interface {
	HashLeaf(treeID, index []byte, depth int, dataHash []byte) Hash
	HashEmpty(treeID, index []byte, depth int) Hash
	HashInterior(left, right Hash) Hash
}
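As a concrete illustration, here is a minimal SHA-256 sketch of such an interface. The domain-separation prefixes and the way treeID, index and depth are folded into the digest are assumptions for illustration, not Trillian's actual constants:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Hash is the digest type used by the hasher.
type Hash []byte

// sha256Hasher is an illustrative implementation. The 0x00/0x01 prefixes
// follow RFC 6962 leaf/node domain separation; mixing in treeID, index
// and depth is the extension proposed above.
type sha256Hasher struct{}

func (sha256Hasher) HashLeaf(treeID, index []byte, depth int, dataHash []byte) Hash {
	h := sha256.New()
	h.Write([]byte{0x00}) // leaf prefix
	h.Write(treeID)
	h.Write(index)
	h.Write([]byte{byte(depth)})
	h.Write(dataHash)
	return h.Sum(nil)
}

func (sha256Hasher) HashEmpty(treeID, index []byte, depth int) Hash {
	h := sha256.New()
	h.Write([]byte{0x02}) // empty-subtree prefix (illustrative)
	h.Write(treeID)
	h.Write(index)
	h.Write([]byte{byte(depth)})
	return h.Sum(nil)
}

func (sha256Hasher) HashInterior(left, right Hash) Hash {
	h := sha256.New()
	h.Write([]byte{0x01}) // interior-node prefix
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

func main() {
	var th sha256Hasher
	leaf := th.HashLeaf(nil, nil, 0, []byte("data"))
	root := th.HashInterior(leaf, th.HashEmpty(nil, nil, 0))
	fmt.Printf("root prefix: %x\n", root[:4])
}
```

Note that, per the proposal, an implementation is free to ignore some inputs (a log hasher might ignore index and depth entirely).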
Steps:
Pretty sure that neither of these is implemented at the moment, as that work predated the split.
https://travis-ci.org/google/trillian/jobs/182679607#L1061
Running test(s)
# github.com/google/certificate-transparency/go/merkletree
cc1plus: error: unrecognized command line option ‘-std=c++11’
FAIL github.com/google/trillian/integration [build failed]
Don't do this until all the handler PRs are merged to avoid rework.
Examples:
log_integration_test.go:97: Test failed: failed to queue leaves: rpc error: code = 2 desc = Unsequenced: Error 1213: Deadlock found when trying to get lock; try restarting
We need some kind of administrative API that supports creating and deleting LogIDs.
There's ongoing work on an admin API. Not sure how to connect this bug to that.
Including at least:
When adding a batch of leaves some leaves could fail, e.g. due to dupe keys, and the log shouldn't ditch the rest of the batch due to an unrelated dupe entry. Rather, it should submit as many as it can, and report those which failed to the caller.
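A sketch of the proposed per-leaf reporting, with hypothetical type names standing in for the real API:

```go
package main

import "fmt"

// QueuedLeaf pairs a submitted leaf with its individual outcome. These
// type and field names are illustrative, not Trillian's actual API.
type QueuedLeaf struct {
	LeafData []byte
	Err      error // nil on success; e.g. a duplicate-leaf error otherwise
}

var ErrDuplicateLeaf = fmt.Errorf("duplicate leaf")

// queueLeaves accepts what it can and reports per-leaf failures instead
// of failing the whole batch. The 'seen' map stands in for the dedup
// check the storage layer would actually perform.
func queueLeaves(leaves [][]byte, seen map[string]bool) []QueuedLeaf {
	results := make([]QueuedLeaf, 0, len(leaves))
	for _, l := range leaves {
		r := QueuedLeaf{LeafData: l}
		if seen[string(l)] {
			r.Err = ErrDuplicateLeaf
		} else {
			seen[string(l)] = true
		}
		results = append(results, r)
	}
	return results
}

func main() {
	seen := map[string]bool{}
	res := queueLeaves([][]byte{[]byte("a"), []byte("a"), []byte("b")}, seen)
	for i, r := range res {
		fmt.Printf("leaf %d: err=%v\n", i, r.Err)
	}
	// Only the duplicate "a" fails; the unrelated "b" is still queued.
}
```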
Tests that prove end to end that it's doing the correct crypto and other stuff related to adding and querying entries, signing / sequencing, proof serving.
This should be the last remaining handler for the example CT implementation.
Current tests verify inclusion proofs by rebuilding a parallel tree and checking that the inclusion proofs are the same. We should migrate these tests to use a Log Verifier that computes the root hash from the neighbor nodes.
In the CT example frontend.
The KeyManager interface currently supports returning error for several of the Get* functions.
This dramatically increases code complexity for calling functions. All these error params can be eliminated if the New* functions require a valid key to be loaded before returning a KeyManager object. Is there a strong reason to support starting Trillian without key material?
type KeyManager interface {
	Signer() (crypto.Signer, error)
	SignatureAlgorithm() spb.DigitallySigned_SignatureAlgorithm
	HashAlgorithm() crypto.Hash
	GetPublicKey() (crypto.PublicKey, error)
	GetRawPublicKey() ([]byte, error)
}
Proposed interface:
New(...) (KeyManager, error)
type KeyManager interface {
	Signer() crypto.Signer
	SignatureAlgorithm() spb.DigitallySigned_SignatureAlgorithm
	HashAlgorithm() crypto.Hash
	GetPublicKey() crypto.PublicKey
	GetRawPublicKey() []byte
}
Create a pure Go implementation that can verify all the responses that are returned from the parts of Trillian that implement an append-only log.
Components:
Supports google/keytransparency#384
The current Map Hasher interface contains a HashKey function to turn a string into a sha256 index in the map. This index, however, should be computed by the personality, not the map. Key Transparency, for instance, computes the index as the output of a privately keyed signature function.
If this sounds good, I'll convert the Map interfaces to accept an index []byte rather than key []byte or HashedKey []byte, and remove the HashKey function from the MapHasher.
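For illustration, the personality-side index computation might look like the following. Key Transparency uses a privately keyed VRF; a keyed HMAC stands in here purely as a placeholder:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
)

// indexForKey sketches a personality-side index computation: the map
// itself only ever sees the opaque index []byte. The HMAC here is a
// stand-in for whatever keyed function the personality actually uses
// (e.g. a VRF in Key Transparency).
func indexForKey(personalityKey, userKey []byte) []byte {
	m := hmac.New(sha256.New, personalityKey)
	m.Write(userKey)
	return m.Sum(nil)
}

func main() {
	idx := indexForKey([]byte("private-personality-key"), []byte("alice@example.com"))
	fmt.Printf("index prefix: %x\n", idx[:8])
	// Map Set / Get would then take this index []byte directly,
	// with no HashKey on the MapHasher.
}
```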
Should trigger termination of the sequencer goroutine(s) and block until it's done.
Updates made to CompactMerkleTree in d1ee609 (PR #180) to return a copy of the internal Node state have not properly covered the edge case where the tree is perfectly balanced (i.e. has 2^n leaf nodes).
In that situation the set of nodes should be empty: the size & root hash alone describe the tree, as evidenced by the fact that when merkle.NewCompactMerkleTreeWithState() is called it will make 0 calls to the backing store via its getNodeFunc to retrieve hashes.
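This follows from the compact-tree invariant: there is one pending subtree hash per set bit of the leaf count, so a perfectly balanced tree needs nothing beyond the root. A small sketch of that arithmetic (assuming this invariant; merkle.CompactMerkleTree's internal representation may differ):

```go
package main

import (
	"fmt"
	"math/bits"
)

// pendingHashes counts the subtree hashes a compact Merkle tree carries:
// one per set bit of the leaf count. This mirrors the invariant described
// above, purely for illustration.
func pendingHashes(size uint64) int {
	return bits.OnesCount64(size)
}

func main() {
	for _, size := range []uint64{6, 7, 8} {
		extra := pendingHashes(size) - 1 // hashes besides the root itself
		fmt.Printf("size=%d: %d node hashes besides the root\n", size, extra)
	}
	// At size 8 (2^3, perfectly balanced) the answer is 0: the size and
	// root hash alone describe the tree, matching the zero getNodeFunc
	// calls observed in NewCompactMerkleTreeWithState.
}
```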
It can be done via existing API but needs client to issue two RPCs. Might as well provide this in the backend. It will be useful for debugging.
Similar to backend tests. Demonstrate that the log operates correctly, signs and returns the correct objects etc. and generally behaves itself.
Fetching the correct node set and related data. Note: for this milestone, initially only at tree sizes where we have an STH, because the intermediate recalculations are complex. Needs to support lookup by hash and by index, which are fairly similar operations.
Fetching correct node sets and doing computations etc. Again only at tree sizes where we have an STH.
Including at least:
Currently the log backend rejects them. Should probably do the same thing as current log implementations, which I think is returning an empty proof but no error.
If I add --alsologtostderr to the invocation of ./trillian_map_server in integration/map_integration_test.sh, I see lots of warnings:
W0131 09:37:43.527794 11656 subtree_cache.go:185] Unexpectedly reading from within GetNodeHash()
The comment there says "This should never happen - we should've already read all the data we need above, in Preload()"...
Write a log integration test for inclusion proofs at various tree sizes.