
us's Introduction

us


us is an alternative interface to the Sia network. It provides low-level, developer-oriented APIs and formats that facilitate the storage and retrieval of files on Sia.

"Low-level" means that us generally avoids making decisions on behalf of the user. For example, when renting storage, the user must decide which hosts to form contracts with, how many coins to spend on each contract, when to renew contracts, when to migrate data to new hosts, etc. These questions do not have simple answers; they vary according to the context and goals of the user. Recognizing this, the us philosophy is to provide the user with a set of building blocks rather than a one-size-fits-all solution.

Why should I care?

The us project is at the forefront of Sia research and development, exploring new ideas, tools, and protocols that complement and extend the existing ecosystem. With us, you can do things currently not supported by siad, such as:

  • Specify exactly which hosts you want to use
  • Share your files and/or contracts with a friend
  • Upload your meme folder without padding each file to 4 MiB
  • Mount a virtual Sia filesystem with FUSE
  • Upload and download without running a Sia full node

More importantly, you can use us to build apps on Sia. Here are a few ideas:

  • A storage backend for go-cloud, upspin, or minio
  • A site where you can buy contracts directly, paying with BTC (via LN?) instead of SC
  • A cron job that downloads 1 KB from a host every 24 hours and reports various metrics (latency, bandwidth, price)
  • A site that aggregates host metrics to provide a centralized host database (done!)
  • A mobile app that stores and retrieves files stored on Sia (done!)

What do I need to get started?

If you're a renter, you're probably looking for user, a CLI tool for forming contracts and transferring files that leverages the us renter packages.

If you're a hodler or an exchange, you're probably looking for walrus, a high-performance Sia wallet server that leverages the us wallet packages.

If you're a developer who wants to build something with us, please get in touch with me via email, reddit, or Discord (@nemo).

If you would like to contribute (thank you!), please read CONTRIBUTING.md.

Please be aware that us is in an experimental, unstable state. us contracts and files differ from the corresponding siad formats, so you should not assume that contracts formed and files uploaded using us are transferable to siad, nor vice versa. Until us is marked as stable, don't spend any siacoins on us that you can't afford to lose.

us's People

Contributors

dependabot[bot], eriner, georgemcarlson, jkawamoto, lukechampine, n8maninger


us's Issues

Testing thoughts

We need some form of CI, including coverage reports. Without a big, prominent number, there's little incentive to improve test coverage. I've evaluated a few options but haven't found a service I really like yet. I even started writing my own! Unfortunately that's too big a project to take on right now.

I'd also like to adopt the practice of using "external" tests almost exclusively. When an external test needs access to internal package state, that access must be facilitated via explicit functions/methods in a helpers_test.go file. This reduces dependence on internal state and makes it easier to catch breakages. My only concern is with regard to coverage: I think external tests will count towards package coverage, but I haven't confirmed it.

Streaming Merkle root computation in the Renter-Host Protocol

Currently, we encrypt all messages in the RHP with ChaCha20-Poly1305. This means that sector data is encrypted twice. The encryption itself adds some overhead, but not enough to worry about. However, treating the sector as an opaque message introduces a bottleneck in the protocol: we can't start computing the Merkle root of the sector until we've received and verified the entire message!

This affects both the renter and the host: when uploading, the host needs to wait for the entire sector to arrive before it can compute its Merkle root; when downloading, the same applies to the renter. Imagine you have 1 Gbps of bandwidth, and computing a Merkle root takes 20ms. The total transfer time for a sector is now (4 MiB / 1 Gbps = 33ms) + 20ms = 53 ms, which means your throughput is 4 MiB / 53ms = 633 Mbps. But if the Merkle root could be computed in streaming fashion as the sector arrived, the total transfer time would be much closer to 33ms. So we're looking at a potential 58% speed-up here!

But -- how do we accomplish this? At first I thought we would need to add a new RPC that sends the sector un-encrypted. But on further thought, I think we can actually pull it off without changing the protocol at all! Here's why. ChaCha20-Poly1305 just means encrypting with ChaCha20 (a stream cipher) and appending a poly1305 tag. You don't need to verify the tag before decrypting the data, you're just supposed to. In our case, we're not doing anything dangerous with the plaintext; we're just computing its Merkle root. So it's fine if we do a streaming decryption, pipe that into our Merkle root function, and verify the tag in parallel. (Actually, I think verifying the tag may be unnecessary too, since the root itself serves as a tag.)

PseudoFS with extra contracts

Currently, PseudoFS requires that all underlying hosts be accessible in order to write a file and close the filesystem. In practice that is rarely guaranteed, especially as the number of hosts grows. Since this is a network application, it's not safe to assume that all remote hosts are always reachable.

I propose that PseudoFS hold some extra hosts so that it can select live hosts for each operation (flushSectors). For example, if we need to upload files to 30 hosts, we create a PseudoFS with 35 contracts (hosts); PseudoFS.flushSectors then tries to acquire 30 hosts and can skip any that don't respond. Likewise, if h.Append(sector) fails on a host, it can replace that host with one of the extras. That way, we can keep using the PseudoFS even when some hosts become temporarily unreachable. As a result, each sector may end up stored on a different set of hosts, but that isn't a problem, because the metafile records which hosts hold each sector.

This seems like a modest change that would meaningfully increase reliability.

It is related to #69.
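The selection step of this proposal can be sketched as follows. The names (flushHosts, responsive) and the predicate-based acquisition are assumptions for illustration, not the us API; in practice "responsive" would be an actual attempt to acquire a Session.

```go
package main

import "fmt"

// flushHosts takes the first `need` hosts that respond, skipping the
// rest -- the over-provisioning idea from the proposal: hold more
// contracts than strictly needed, and tolerate a few dead hosts.
func flushHosts(hosts []string, need int, responsive func(string) bool) ([]string, error) {
	var acquired []string
	for _, h := range hosts {
		if len(acquired) == need {
			break
		}
		if responsive(h) { // stand-in for acquiring a Session
			acquired = append(acquired, h)
		}
	}
	if len(acquired) < need {
		return nil, fmt.Errorf("only %d of %d hosts responded", len(acquired), need)
	}
	return acquired, nil
}

func main() {
	hosts := []string{"h1", "h2", "h3", "h4", "h5"}
	down := map[string]bool{"h2": true} // h2 is unreachable
	got, err := flushHosts(hosts, 3, func(h string) bool { return !down[h] })
	fmt.Println(got, err) // [h1 h3 h4] <nil>
}
```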

PseudoFS without the local filesystem

PseudoFS stores metafiles on the local filesystem. However, this may not work on some IoT devices or in browsers, which might not have a writable filesystem.

It would be nice to use a filesystem abstraction library such as https://github.com/spf13/afero instead of calling os package functions directly. With such a library, we could fall back to a memory-based filesystem in cases where we don't have access to an OS-backed one.
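A minimal stdlib-only sketch of the abstraction afero provides (the interface name and methods here are illustrative, not afero's or us's API): PseudoFS would depend on an interface rather than the os package, so an in-memory backend can be swapped in on platforms without a writable filesystem.

```go
package main

import (
	"fmt"
	"sync"
)

// metaStore is the minimal surface PseudoFS would need for metafiles.
type metaStore interface {
	WriteFile(name string, data []byte) error
	ReadFile(name string) ([]byte, error)
}

// memStore is the in-memory backend for filesystem-less environments.
type memStore struct {
	mu    sync.Mutex
	files map[string][]byte
}

func newMemStore() *memStore { return &memStore{files: make(map[string][]byte)} }

func (m *memStore) WriteFile(name string, data []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.files[name] = append([]byte(nil), data...) // copy to avoid aliasing
	return nil
}

func (m *memStore) ReadFile(name string) ([]byte, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	data, ok := m.files[name]
	if !ok {
		return nil, fmt.Errorf("%s: not found", name)
	}
	return data, nil
}

func main() {
	var store metaStore = newMemStore() // an OS-backed type would satisfy the same interface
	store.WriteFile("foo.usa", []byte("metafile contents"))
	b, _ := store.ReadFile("foo.usa")
	fmt.Println(string(b))
}
```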

Cannot initialize HostSet in parallel

We could shave a lot of time off initialization by connecting to hosts in parallel, but doing so would cause a race in the HostSet map. Should be as simple as adding a mutex.
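A sketch of what "as simple as adding a mutex" looks like (the types and the dial stand-in are illustrative, not the actual us HostSet API): each connection runs in its own goroutine, and a mutex guards the shared map so concurrent inserts don't race.

```go
package main

import (
	"fmt"
	"sync"
)

type hostSet struct {
	mu       sync.Mutex
	sessions map[string]string // host -> session (stand-in type)
}

// dial stands in for the real per-host session setup.
func dial(host string) (string, error) { return "session-" + host, nil }

// newHostSet connects to all hosts in parallel; the mutex makes the
// map writes safe, and WaitGroup preserves the blocking behavior.
func newHostSet(hosts []string) *hostSet {
	hs := &hostSet{sessions: make(map[string]string)}
	var wg sync.WaitGroup
	for _, host := range hosts {
		wg.Add(1)
		go func(host string) {
			defer wg.Done()
			s, err := dial(host)
			if err != nil {
				return // a real implementation would record the error
			}
			hs.mu.Lock()
			hs.sessions[host] = s
			hs.mu.Unlock()
		}(host)
	}
	wg.Wait()
	return hs
}

func main() {
	hs := newHostSet([]string{"a", "b", "c"})
	fmt.Println(len(hs.sessions)) // 3
}
```

Total initialization time then becomes roughly the slowest single dial rather than the sum of all dials.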

Support parallel downloads

Currently all PseudoFS methods use coarse-grained locking: they hold a mutex for the entire duration of the method, meaning you can't call Read in parallel with Write, or even with another Read. I designed it this way for simplicity, but now the time has come to switch to more fine-grained locking in order to increase performance.

The main contended resource here is Sessions. Concurrent Read and Write calls must "compete" for the available Sessions, and must block while waiting to acquire each Session they need. A Write call, if it flushes sectors, will need to access every Session in the HostSet, whereas a Read call may only need to access minHosts Sessions. This means that, when downloading, it is possible to increase throughput by a factor of the file's redundancy. If a file is stored at 3x redundancy, you can download it 3x faster than if it was stored at 1x redundancy.

No API changes are necessary to support parallelism. Callers can simply start making Read and Write calls in goroutines, just as they can with *os.File. (Technically, they will need to use ReadAt and WriteAt, since Read and Write are stateful.) Example code:

// split one large read into two parallel reads
buf := make([]byte, readSize)
buf1, buf2 := buf[:readSize/2], buf[readSize/2:]
var wg sync.WaitGroup
wg.Add(2)
go func() { pf.ReadAt(buf1, 0); wg.Done() }()
go func() { pf.ReadAt(buf2, readSize/2); wg.Done() }()
wg.Wait()

This is a little unwieldy (and doesn't handle errors), but it would be easy to add a helper function that automatically splits up work for you and handles errors properly. I think this is what https://github.com/klauspost/readahead does, so perhaps callers could just use that package.

Writes, on the other hand, cannot be significantly sped up by adding parallelism. This is because, as previously mentioned, writing requires accessing all of the hosts, so you'll always be bottlenecked by the slowest host. (Imagine that you have two hosts: one that takes 1s per sector, and one that takes 10s. You might be able to upload all of the first host's data before the second has finished a single sector, but this doesn't improve your overall throughput; you still need to wait for the second host to finish.) However, there are two scenarios where the parallelism does help. First, if your slowest host is only temporarily slow, and then greatly speeds up soon after, then parallelism allows you to "paper over" the period of temporary slowness. Second, if you are simultaneously uploading and downloading, then parallelism allows you to start downloading from a host as soon as you finish uploading to it, rather than having to wait for all hosts to finish uploading.

Garbage collection errors

@jkawamoto, I checked the attached garbage-collection patch from Luke (request: communication error: download request exceeded maximum batch size):

diff --git a/renter/proto/session.go b/renter/proto/session.go
index 7947d89..788faec 100644
--- a/renter/proto/session.go
+++ b/renter/proto/session.go
@@ -287,6 +287,7 @@ func (s *Session) SectorRoots(offset, n int) (_ []crypto.Hash, err error) {
                return nil, err
        }
        if err := s.sess.ReadResponse(&resp, uint64(4096+32*n)); err != nil {
+               println("sector roots err:", req.RootOffset, req.NumRoots, s.host.MaxDownloadBatchSize)
                readCtx := fmt.Sprintf("couldn't read %v response", renterhost.RPCSectorRootsID)
                rejectCtx := fmt.Sprintf("host rejected %v request", renterhost.RPCSectorRootsID)
                return nil, wrapResponseErr(err, readCtx, rejectCtx)

(1)

This log output looks the same:

error: SectorRoots: host rejected LoopSectorRoots request: communication error: download request exceeded maximum batch size
errorVerbose (condensed): proto.wrapResponseErr (session.go:53) → proto.(*Session).SectorRoots (session.go:294) → renterutil.(*PseudoFS).GC (filesystem.go:320, 331) → s3-gateway standalone.(*contractManager).MigrateFiles → dynamo.(*StorageClass).migrateFiles

Also found some other errors related to garbage collection;

(2) failed garbage collection | Write: host supplied invalid Merkle proof

error: Write: host supplied invalid Merkle proof
errorVerbose (condensed): proto.(*Session).Write (session.go:583) → proto.(*Session).DeleteSectors (session.go:687) → renterutil.(*PseudoFS).GC (filesystem.go:388, 389) → s3-gateway standalone.(*contractManager).MigrateFiles → dynamo.(*StorageClass).migrateFiles

(3) SectorRoots: host rejected LoopSectorRoots request: communication error: download request has invalid sector bounds

error: SectorRoots: host rejected LoopSectorRoots request: communication error: download request has invalid sector bounds
errorVerbose (condensed): same GC call path as (1): proto.(*Session).SectorRoots (session.go:294) → renterutil.(*PseudoFS).GC (filesystem.go:320, 331) → s3-gateway standalone.(*contractManager).MigrateFiles

(4) failed garbage collection | SectorRoots: contract has insufficient funds to pay for revision

error: SectorRoots: contract has insufficient funds to pay for revision
errorVerbose (condensed): proto.(*Session).SectorRoots (session.go:268) → renterutil.(*PseudoFS).GC (filesystem.go:320, 331) → s3-gateway standalone.(*contractManager).MigrateFiles

Support “overdrive” (PseudoKV)

Overdrive would let us reach 1x redundancy faster, though the total transfer time would still be limited by the slowest host. The main advantage is that the renter could serve the file as soon as 1x redundancy is reached, while the remaining redundancy synchronizes in the background.
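The core mechanism is small: launch uploads to more hosts than the minimum, and unblock the caller as soon as enough of them finish, letting the rest complete in the background. A stdlib sketch (all names illustrative, not the us API):

```go
package main

import (
	"fmt"
	"sync"
)

// overdrive starts an upload to every host, and closes the returned
// channel once `need` uploads have finished -- that is the "1x
// redundancy reached" moment. The remaining uploads keep running.
func overdrive(hosts []string, need int, upload func(string)) <-chan struct{} {
	done := make(chan struct{})
	var mu sync.Mutex
	finished := 0
	for _, h := range hosts {
		go func(h string) {
			upload(h)
			mu.Lock()
			finished++
			if finished == need {
				close(done)
			}
			mu.Unlock()
		}(h)
	}
	return done
}

func main() {
	done := overdrive([]string{"h1", "h2", "h3", "h4"}, 3, func(h string) {})
	<-done // returns as soon as any 3 of the 4 uploads complete
	fmt.Println("1x redundancy reached")
}
```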

Encryption for mutable files

The current metafile format encrypts sectors using ChaCha20, where the counter corresponds to the offset within a sector. The advantage of this is that it's crazy fast: on my machine, it encrypts sectors at 3 GB/s, so it's definitely not going to be the bottleneck. The downside is that you can't reuse counter values, which means you can't modify the contents of a sector. This is a problem if we want to support mutable files. Is there a different way to encrypt files that allows efficient, random-access writes?

Sia's encryption story is somewhat novel. Traditionally, in the context of files, there are three places to apply encryption: to the filesystem as a whole, to individual files, or to the disk sectors comprising files. But Sia doesn't fall cleanly into any of these buckets.

Encryption at the sector level, also known as full-disk encryption, seems the closest fit, since Sia itself works only in terms of sectors, not files. However, there are some key differences in the security model. First, disk encryption usually focuses on protecting against an attacker who only sees one "state" of the disk. But on Sia, the host sees all states. So using e.g. plain CTR mode is right out, because CTR mode breaks down catastrophically if the host can see multiple blocks encrypted with the same nonce. (ChaCha is basically CTR mode.) Another difference is that disk encryption assumes that there is no extra space where IVs/nonces might be stored: an encrypted 4096-byte sector must contain exactly 4096 bytes of plaintext. (This is why tweakable "length-preserving" cipher modes like XTS are favored in disk encryption.) But this is not the case for Sia: we can store IVs/nonces in the file's metadata.

Perhaps we could do file-level encryption, then? A common way to do this is to use AEAD, encrypting the file contents and appending a MAC for integrity/authentication. But in Sia, we don't need to bother with MACs; we can use Merkle proofs instead. This emphasizes a key distinguishing property of storage on Sia, which is that it is, in a sense, interactive: you aren't asking a dumb disk for a sector, you're asking an independent (potentially adversarial!) actor with their own resources and incentives. In this model, computing Merkle range proofs on the fly makes more sense than storing a bunch of extra local metadata.
The other problem with AEAD is changing any part of the file requires re-encrypting the entire thing and recomputing the MAC. From a security perspective, this is a good thing: if the entire file is scrambled, an adversary can't tell which part we actually changed. But from a performance perspective, it's just too costly. We want Sia to behave more like a virtual disk, where writing is a common and efficient operation.

Lastly, we could do something like filesystem-level encryption. I'm less familiar with this mode, but I believe it means the filesystem handles chaining encrypted blocks of data into semantic files. (Filesystem-level metadata is often encrypted as well, but again, that's not something Sia is concerned with.) This approach seems closest to the Sia model. The filesystem, much like a metafile, has "extra space" to store IVs/nonces, and filesystem blocks map well to Sia sectors. For example, we could modify the metafile format to store an explicit IV for every 64-byte segment in a file. This is obviously secure, but also obviously inefficient. Can we do better?

Given all that background, here are some options I'm considering:

  • Use XTS. XTS is the modern standard for disk encryption; it is a tweakable length-preserving mode, allowing safe random-access writes without any storage overhead. However, it doesn't mix ciphertexts very well: flipping one bit changes only 16 bytes of the ciphertext. XTS is also slooooow: 10x slower than ChaCha. Granted, that's still pretty fast, but it's slow enough to be noticeable. Oh, except you can only get that fast by using XTS with AES, and for various reasons, I don't like AES. So XTS is out.

  • Use Adiantum. Adiantum is a bleeding-edge encryption mode created by researchers at Google. It's basically like XTS (tweakable, length-preserving) but faster and more secure. Adiantum appears to be capable of encrypting 4096-byte sectors at 2 GB/s, nearly as fast as ChaCha20; however, ChaCha20 has 64-byte granularity. When encrypting 64 bytes at a time, Adiantum becomes unacceptably slow. So if it were to be used, we'd have to restrict file modifications to 4096-byte granularity. (Perhaps this is acceptable; 4096 bytes is already the standard size of a physical disk sector.) Again, this can be considered a good thing, because the attacker can't tell which of the 4096 bytes you changed. (Bear in mind, though, that they can still tell if you e.g. flip the same bit twice in a row, because the encryption is deterministic.)

  • Use ChaCha20 with extra metadata. The idea here is that we add a Revision field to the SectorSlice type. If you modify part of a file, you have to increment the revision. When encrypting/decrypting, you use the segment index as the counter and the revision number as the nonce. I like this approach for two reasons. First, it lets us keep using ChaCha20 with its sweet, sweet 3 GB/s. (We could even drop down to ChaCha12 for even faster speeds.) Second, it doesn't require creating separate formats for mutable vs. immutable files. Such a split could be necessary if mutable files performed much worse than immutable ones, but with this approach, the only difference between a mutable file and an immutable file is that the former will have more SectorSlices and take slightly longer to read (because you have to set the nonce more often). That seems like an acceptable tradeoff. The downsides of this approach are that it adds more metadata (and makes updating the metadata annoying -- you have to split up SectorSlices), and that it's a bit less secure than randomizing 4096-byte sectors (since adversaries can see changes with 64-byte granularity). Oh, one more interesting thing: this approach allows us to escape the "deterministic encryption" problem shared by Adiantum and XTS: even if you re-encrypt the same plaintext twice, you can do so with a different nonce, so the adversary should never observe the same ciphertext twice.

Clearly I'm leaning towards the modified ChaCha20 approach. If I do go ahead and implement it, it will probably be in tandem with supporting the new renter-host protocol in us, because they have obvious synergy.
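The Revision-as-nonce idea can be sketched as follows. The struct fields beyond Revision and the nonce layout are assumptions for illustration, not the actual us metafile format; the derived nonce would be fed to ChaCha20, with the segment index as the counter.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// SectorSlice sketches the proposed metadata change: a Revision field
// that is incremented on every overwrite of the slice.
type SectorSlice struct {
	SegmentIndex uint32 // starting 64-byte segment within the sector
	NumSegments  uint32
	Revision     uint64 // bumped on each rewrite of this slice
}

// sliceNonce packs the revision into a 12-byte ChaCha20-style nonce.
// Because the revision changes on every rewrite, the same
// (nonce, counter) pair is never reused -- the property that makes
// in-place writes safe under a stream cipher.
func sliceNonce(s SectorSlice) [12]byte {
	var nonce [12]byte
	binary.LittleEndian.PutUint64(nonce[4:], s.Revision)
	return nonce
}

func main() {
	s := SectorSlice{SegmentIndex: 0, NumSegments: 64, Revision: 1}
	before := sliceNonce(s)
	s.Revision++ // an overwrite bumps the revision...
	after := sliceNonce(s)
	fmt.Println(before != after) // ...so the nonce changes too: true
}
```

This also shows why re-encrypting identical plaintext yields a different ciphertext each time: the nonce differs even when the data doesn't.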

Connect to hosts lazily

This would reduce latency when downloading high-redundancy files, since you wouldn't need to connect to all hosts before starting the download.

The downside is that errors will not surface until you actually try to connect. But this probably shouldn't be treated as an exceptional condition, since hosts can fail at any time regardless.
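A sketch of the lazy-connection pattern using sync.Once (the lazyHost type and acquire method are illustrative, not the us API): the dial happens on first use rather than at HostSet construction, concurrent callers share the single attempt, and the error surfaces at the call site.

```go
package main

import (
	"fmt"
	"sync"
)

type lazyHost struct {
	addr    string
	once    sync.Once
	session string // stand-in for a real host session
	err     error
}

// acquire dials the host on first call only; sync.Once guarantees the
// dial runs exactly once even under concurrent callers, and the cached
// result (or error) is returned thereafter.
func (h *lazyHost) acquire() (string, error) {
	h.once.Do(func() {
		// In practice this is the (possibly failing) network dial; its
		// error is reported here, on first use, not at startup.
		h.session = "session-" + h.addr
	})
	return h.session, h.err
}

func main() {
	h := &lazyHost{addr: "host1"}
	s1, _ := h.acquire()
	s2, _ := h.acquire() // no second dial
	fmt.Println(s1 == s2, s1)
}
```

A download of a high-redundancy file would then only pay the connection cost for the minHosts it actually reads from.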

Couldn't read LoopLock response

Persists after #31.
Gateway v2.1.0, linux-amd64.

2020-02-15T12:04:58.465Z  WARN   storage/bucket.go:375  failed to close file system  {"path": "/home/file", "request_id": "79a8c08f55a36c8d2672ead671b81a63ac78077e", "bucket": "home", "error": "could not upload to some hosts: 451d016b: NewSession: Lock: couldn't read LoopLock response: read tcp 167.71.76.88:35728->77.203.254.72:9982: i/o timeout"}
(condensed trace: renterutil.(*PseudoFS).flushSectors → renterutil.(*PseudoFS).Close → s3-gateway storage.(*bucket).Synchronize)

2020-02-11T11:44:08.919Z  ERROR  api/put_object.go:169  failed to store the object  {"path": "/wise/file", "request_id": "6e6d6718c366a90533e5f540771936166ff67ff4", "bucket": "wise", "key": "file", "error": "could not upload to some hosts: 88b3c832: NewSession: Settings: couldn't read LoopSettings response: read tcp 167.71.76.88:53806->74.58.72.34:9982: i/o timeout"}
(condensed trace: renterutil.(*PseudoFS).flushSectors → fileWriteAt → fileWrite → PseudoFile.Write → io.CopyBuffer → s3-gateway storage.(*bucket).store)

2020-02-11T11:44:24.599Z  DEBUG  storage/bucket.go:240  failed to copy the contents  {"path": "/wise/file", "request_id": "39f5dc374aec93fc73a04adfccab2f309cca564b", "bucket": "wise", "key": "file/null/1", "error": "could not upload to some hosts: 88b3c832: NewSession: Lock: couldn't read LoopLock response: read tcp 167.71.76.88:53830->74.58.72.34:9982: i/o timeout"}
(condensed trace: same as above)
protected]/pkg/api/respond.go:54\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithDate.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/date.go:41\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithServer.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/server.go:37\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.(*Logger).With.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/logger.go:73\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}

Browser compatibility

Opened this issue to track when we can start building browser-based apps with us using the new renter-host protocol.

RenewAndClear underflows

goroutine 7358 [running]:
runtime/debug.Stack(0x44ff77, 0x0, 0xc000b5b268)
        /usr/local/Cellar/go/1.14.2_1/libexec/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
        /usr/local/Cellar/go/1.14.2_1/libexec/src/runtime/debug/stack.go:16 +0x22
gitlab.com/NebulousLabs/Sia/build.Critical(0xc000b5b310, 0x1, 0x1)
        /Users/junpei/pkg/mod/gitlab.com/!nebulous!labs/[email protected]/build/critical.go:16 +0xaa
gitlab.com/NebulousLabs/Sia/types.Currency.Sub(0xc000b5bc00, 0xc000bc3680, 0x2, 0x7, 0x100, 0xc000bc3700, 0x2, 0x7, 0x0, 0x0, ...)
        /Users/junpei/pkg/mod/gitlab.com/!nebulous!labs/[email protected]/types/currency.go:190 +0x14e
lukechampine.com/us/renter/proto.(*Session).RenewAndClearContract(0xc0000ba580, 0x1191000, 0xc0004c8258, 0x117f940, 0xc0004c8258, 0xc0004f4400, 0xc000aae600, 0x2, 0x7, 0x3fbe5, ...)
        /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/proto/renew.go:289 +0xa82
lukechampine.com/us/renter/proto.RenewContract(0x1191000, 0xc0004c8258, 0x117f940, 0xc0004c8258, 0xa6d93141424e68de, 0x9d33a41fb271994, 0xe66fb7c5713620c8, 0x7b2dc55531564d00, 0xc0004f44b0, 0x40, ...)
        /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/proto/renew.go:31 +0x271
...
Critical error: negative currency not allowed
Please submit a bug report here: https://gitlab.com/NebulousLabs/Sia/issues

Proposal: Remove dependency on NebulousLabs/Sia

us does not strongly depend on the NebulousLabs/Sia packages; most of the references are to small types like modules.NetAddress, types.BlockHeight, etc. However, even a single reference is sufficient to drag in the entire repo as a dependency, which significantly bloats the size of binaries that import us packages. This also impacts the us bindings: the resulting ObjC framework, for example, is 53 MB, which is just crazy.

Unfortunately, there are a few places where the dependency is hard to remove. In particular, wallet.ChainScanner needs to implement modules.ConsensusSetSubscriber, which means it must depend on the modules.ConsensusChange type. Additionally, if we substitute our own types for NebulousLabs/Sia types, it becomes harder to write code that interoperates with both us and NebulousLabs/Sia.

I propose that we make two changes:

  • Replace all types/functions imported from NebulousLabs/Sia with local types/functions.
  • Add a new interop package to us-bindings that converts between us types and NebulousLabs/Sia types.

To solve the wallet.ChainScanner problem, we would redefine the method to depend on a locally-defined copy of modules.ConsensusChange, and then call interop.ToConsensusSetSubscriber on it to make it satisfy modules.ConsensusSetSubscriber. This is viable because we only need to actually subscribe to the consensus set in binaries like walrus and during certain tests, so you only "pay for" the NebulousLabs/Sia dependency when you actually need it. Accordingly, the bindings should shrink enormously.

MinShards cannot be greater than the number of hosts

Verbose: error
Gateway: v2.1.0, linux-amd64
Server: DO

2020-02-11T11:20:53.246Z        ERROR   api/put_object.go:169   failed to store the object  {"path": "/data/file", "request_id": "0eb06352e47accba0fb782f04cb50e44b49f7448", "bucket": "data", "key": "file", "user": "a74982c8-67a1-4dae-a366-f082c85bc5d0", "error": "minShards cannot be greater than the number of hosts", "errorVerbose": "minShards cannot be greater than the number of hosts\nlukechampine.com/us/renter/renterutil.(*PseudoFS).OpenFile\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:169\ngithub.com/storewise/s3-gateway/pkg/meta.(*fileSystem).OpenFile\n\t/go/src/github.com/storewise/s3-gateway/pkg/meta/filesystem.go:149\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:227\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:201\ngithub.com/storewise/s3-gateway/pkg/storage.(*Storage).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/storage.go:129\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).putObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:167\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).PutObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:99\ngithub.com/GooBox/sia-s3-adapter/pkg/api.Respond.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/respond.go:54\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithDate.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/date.go:41\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithServer.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email 
protected]/pkg/middlewares/server.go:37\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.(*Logger).With.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/logger.go:73\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}

Error persists after #31

2020-02-15T11:55:13.846Z        ERROR   storage/bucket.go:203   failed to store the object      {"path": "/home/file", "request_id": "2ce91fe4881a7c4b7cacdcdf18ca063f39ea9694", "bucket": "home", "key": "file/null/1", "error": "minShards cannot be greater than the number of hosts", "errorVerbose": "minShards cannot be greater than the number of hosts\nlukechampine.com/us/renter/renterutil.(*PseudoFS).OpenFile\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:169\ngithub.com/storewise/s3-gateway/pkg/meta.(*fileSystem).OpenFile\n\t/go/src/github.com/storewise/s3-gateway/pkg/meta/filesystem.go:149\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:227\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:201\ngithub.com/storewise/s3-gateway/pkg/storage.(*Storage).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/storage.go:129\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).putObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:167\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).PutObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:99\ngithub.com/GooBox/sia-s3-adapter/pkg/api.Respond.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/respond.go:54\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithDate.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/date.go:41\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithServer.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email 
protected]/pkg/middlewares/server.go:37\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.(*Logger).With.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/logger.go:73\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}

Related to https://github.com/storewise/s3-gateway/issues/78.

Add reedsolomon.EncodeMulti

I added ReconstructMulti and JoinMulti, so why not EncodeMulti? Maybe there was a good reason, but I don't remember it now. More likely, I benchmarked it and didn't see much improvement. From an API standpoint, though, it makes sense to have Multi variants of all methods rather than leaving callers to write EncodeMulti themselves.

Investigate "rejected for high paying renter valid output" error

Hosts sometimes complain that we didn't pay them enough. I've tried to add some leeway here before, but it's surprisingly hard to quash this decisively. Warrants more investigation into how the host is calculating expected costs. (Perhaps related to BaseRPCPrice/SectorAccess/MerkleProof costs?)

Walk callback needs to check error

We get a panic during GC:

runtime error: invalid memory address or nil pointer dereference

runtime.gopanic
	/usr/local/go/src/runtime/panic.go:969
runtime.panicmem
	/usr/local/go/src/runtime/panic.go:212
runtime.sigpanic
	/usr/local/go/src/runtime/signal_unix.go:695
lukechampine.com/us/renter/renterutil.(*PseudoFS).GC.func2
	/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:335
path/filepath.Walk
	/usr/local/go/src/path/filepath/path.go:404
lukechampine.com/us/renter/renterutil.(*PseudoFS).GC
	/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:334

It looks like info is nil because filepath.Walk gets an error here:

err := filepath.Walk(fs.root, func(path string, info os.FileInfo, _ error) error {
	if info.IsDir() || !strings.HasSuffix(path, ".usa") {
		return nil
	}

The callback should check err and, if it's non-nil, return early.

Output splitting

We can only form about 10–20 contracts from a single output. (Strictly, we can form more, but then we get Lock: host rejected LoopLock request: no record of that contract when we try to use them.) To form more contracts reliably, we need to split outputs. Commit 72a1e2b looks like it implements something related to output splitting, but DistributeFunds doesn't seem to be used anywhere. Is that still WIP, or do we need to implement something?

Upload a file without encryption

It'd be nice if us supported uploading/downloading files without encryption. Since some clients implement client-side encryption to achieve higher security, us could skip its own encryption, which might save transfer time.

Proposal: Add the host's public key to the contract header

Currently the contract header only stores the contract's ID and secret key; the host key is stored as part of the revision. This means that if the revision becomes corrupted, the host key may be lost. Contract files are supposed to remain recoverable even if the revision is corrupted (indeed, they exploit this property in order to avoid excessive fsyncing). The header alone must be sufficient to recover the contract; thus, the host key must be moved into the header.

This will be an interesting case study in changing one of the us formats post-release. Previously I could change things with impunity, but now I have to maintain at least a semblance of compatibility.

Implementation plan

The host key will be encoded as a crypto.PublicKey (a 32-byte array), and will immediately precede the contract ID in the header. That is, the new struct will be:

type ContractHeader struct {
	magic   string
	version uint8
	hostKey crypto.PublicKey
	id      types.FileContractID
	key     crypto.SecretKey
}

This precludes non-ed25519 host keys, but that's fine: hosts are going to use ed25519 and ed25519 alone for the conceivable future. If circumstances change, we'll just have to change the format again.

While we're at it, I think we could encode the renter's secret key as just 32 bytes instead of 64. IIRC, crypto.SecretKey actually contains both the secret key and the public key, and the public key can be regenerated from the secret key. So the actual size of the header will not change.

When creating a contract, the hostKey field will be set in the obvious manner. After creation, the hostKey becomes immutable: those bytes will never be touched by another Write call.

When loading a contract, the revision will be checked against hostKey to ensure the keys match. A mismatch results in an error. (At some point, I will need to add helpers for restoring a revision after it's been corrupted, but that's outside the scope of this proposal.)

A new method, (renter.Contract).HostKey, will return the hostKey field as a hostdb.HostPublicKey. (This method will shadow the HostKey method of the proto.ContractRevision embedded in the renter.Contract struct.) This should result in all existing code transparently using the header key instead of the revision key.

The version number will be incremented to 2. renter.LoadContract will return an error if passed a contract with a version number other than 2.

A new function, ConvertContractV1V2, will be added to the renter package, along with testing.

A new command, user contracts convert [filename], will be added to user. This command will upgrade a contract from v1 to v2. To prevent corruption, the upgraded contract will be written to a new file and then atomically Rename'd to overwrite the old file. The user is expected to run this command on all of their contracts. As a convenience, perhaps user contracts convert (with no argument) will automatically convert all of the user's available contracts. To point users in the right direction, user will suggest the convert command if it encounters a version mismatch error when loading a contract.

ConvertContractV1V2 will be deleted two months after release, along with any associated testing and user code. (Future conversion functions will have longer lifespans; this one is short-lived because us does not have many users yet.) I do anticipate that user contracts convert will be necessary again someday; however, it will live eternally in version control, so I see no reason to keep it around while the format is stable.

In summary, the end goal is for users to get an error the next time they run user, and then run user contracts convert to upgrade all of their contracts to the new format.

Incomplete Uploads (PseudoKV)

Currently, when a host fails while uploading to a contract set, the entire upload has to be restarted. This is especially problematic for files with bigger chunk sizes (e.g. 5 GB): re-uploading 145 GB across 30 hosts because a single host failed is painful, both because all that data has to be transferred again and because the sectors from the failed upload become garbage that uses up valuable space on the contracts.

The solution suggested by the author of this repo:

Basically if an upload returns an error, it should save the incomplete metafile. Then you can use the existing migration functionality to copy the metafile into the new PseudoFS, and then resume uploading.

Upload Throughput Improvement

The purpose of this issue is to track and improve upload throughput. Testing has shown that upload throughput caps out at around 200 Mbit/s (max spike) with 6 hosts, i.e. 33.3 Mbit/s per host. Although us uploads to all hosts concurrently, some bottleneck is keeping the upload process from going above 200 Mbit/s. We'd like uploads through us to reach ~1 Gbit/s or very close, which is an enterprise standard (1 Gbit/s and 10 Gbit/s).

Possibilities:

  • some of the selected hosts are not fast enough,
  • some of them are just busy,
  • Reed-Solomon coding is not fast enough,
  • etc.

write: broken pipe

Error persists after 21dacc1.

gateway v2.1.0, linux-amd64.

$ 2020-02-15T12:08:15.000Z        ERROR   api/put_object.go:169   failed to store the object      {"path": "/home/file", "request_id": "79a8c08f55a36c8d2672ead671b81a63ac78077e", "bucket": "home", "key": "file", "user": "a74982c8-67a1-4dae-a366-f082c85bc5d0", "error": "could not upload to some hosts: \n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe", "errorVerbose": "\n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe\ncould not upload to some hosts\nlukechampine.com/us/renter/renterutil.(*PseudoFS).flushSectors\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:320\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWriteAt\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:536\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWrite\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:358\nlukechampine.com/us/renter/renterutil.PseudoFile.Write\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:555\nio.copyBuffer\n\t/usr/local/go/src/io/io.go:404\nio.CopyBuffer\n\t/usr/local/go/src/io/io.go:375\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:238\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:201\ngithub.com/storewise/s3-gateway/pkg/storage.(*Storage).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/storage.go:129\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).putObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:167\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).PutObject\n\t/go/pkg/mod/github.com/!goo!box/[email 
protected]/pkg/api/put_object.go:99\ngithub.com/GooBox/sia-s3-adapter/pkg/api.Respond.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/respond.go:54\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithDate.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/date.go:41\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithServer.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/server.go:37\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.(*Logger).With.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/logger.go:73\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}
C2020-02-15T12:09:03.454Z      WARN    storage/bucket.go:375   failed to close file system     {"path": "/home/file", "request_id": "e3d90f149c4488a95a7398bffa2e9ad83a68266d", "bucket": "home", "error": "could not upload to some hosts: \n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe\n4cfefed2: Write: WriteRequestID: write tcp 167.71.76.88:38580->217.103.244.25:9982: write: broken pipe", "errorVerbose": "\n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe\n4cfefed2: Write: WriteRequestID: write tcp 167.71.76.88:38580->217.103.244.25:9982: write: broken pipe\ncould not upload to some hosts\nlukechampine.com/us/renter/renterutil.(*PseudoFS).flushSectors\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:320\nlukechampine.com/us/renter/renterutil.(*PseudoFS).Close\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:429\ngithub.com/storewise/s3-gateway/pkg/meta.(*fileSystem).Close\n\t/go/src/github.com/storewise/s3-gateway/pkg/meta/filesystem.go:126\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Synchronize\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:373\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Store.func2.1\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:181\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}
2020-02-15T12:08:38.527Z        WARN    storage/bucket.go:176   failed to store the object      {"path": "/home/file", "request_id": "d819061971097b9ee8c8f10691f400daaf9c04f4", "bucket": "home", "key": "file/null/1", "error": "could not upload to some hosts: \n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe", "errorVerbose": "\n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe\ncould not upload to some hosts\nlukechampine.com/us/renter/renterutil.(*PseudoFS).flushSectors\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:320\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWriteAt\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:536\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWrite\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:358\nlukechampine.com/us/renter/renterutil.PseudoFile.Write\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:555\nio.copyBuffer\n\t/usr/local/go/src/io/io.go:404\nio.CopyBuffer\n\t/usr/local/go/src/io/io.go:375\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:238\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:170\ngithub.com/storewise/s3-gateway/pkg/storage.(*Storage).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/storage.go:129\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).putObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:167\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).PutObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:99\ngithub.com/GooBox/sia-s3-adapter/pkg/api.Respond.func1\n\t/go/pkg/mod/github.com/!goo!box/[email 
protected]/pkg/api/respond.go:54\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithDate.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/date.go:41\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithServer.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/server.go:37\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.(*Logger).With.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/logger.go:73\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}
2020-02-15T12:08:14.934Z        DEBUG   storage/bucket.go:240   failed to copy the contents     {"path": "/home/file", "request_id": "e3d90f149c4488a95a7398bffa2e9ad83a68266d", "bucket": "home", "key": "file/null/1", "error": "could not upload to some hosts: \n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe", "errorVerbose": "\n96ee34b0: Write: WriteRequestID: write tcp 167.71.76.88:53514->212.51.155.230:9982: write: broken pipe\ncould not upload to some hosts\nlukechampine.com/us/renter/renterutil.(*PseudoFS).flushSectors\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:320\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWriteAt\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:536\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWrite\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:358\nlukechampine.com/us/renter/renterutil.PseudoFile.Write\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:555\nio.copyBuffer\n\t/usr/local/go/src/io/io.go:404\nio.CopyBuffer\n\t/usr/local/go/src/io/io.go:375\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:238\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:170\ngithub.com/storewise/s3-gateway/pkg/storage.(*Storage).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/storage.go:129\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).putObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:167\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).PutObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:99\ngithub.com/GooBox/sia-s3-adapter/pkg/api.Respond.func1\n\t/go/pkg/mod/github.com/!goo!box/[email 
protected]/pkg/api/respond.go:54\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithDate.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/date.go:41\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithServer.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/server.go:37\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.(*Logger).With.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/logger.go:73\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}

Download caching (PseudoKV)

Storing sectors in a download cache would allow the renter to download each chunk once and extract the small files from within the downloaded sectors without making multiple requests to the hosts. One of the challenges is figuring out how much buffer space is required, due to potential non-sequential downloads spanning multiple chunks.
Although caching has to be supported in us first, the gateway implementation could expose an option --cache=<bytes> to let the operator specify a cache size based on the available resources. With many enterprises moving to flash-based storage solutions sitting between the application and disk drives, this cache could be stored on the flash arrays and subsequently written to disk.
Download caching will enable higher performance when handling small files.

Support I/O cancellation

It should be possible to cancel PseudoFile operations like Read and Write. This would imply support for cancellation at the proto level as well.

I think the best way to accomplish this would be SetDeadline methods on PseudoFile and Session. These would translate directly to SetDeadline calls on the underlying net.Conn. If a timeout occurs, the caller should be able to determine this by unwrapping the resulting error.

Adding context.Context arguments to every method is also an option, but I've never liked this pattern. It pollutes the API and drags in non-cancellation baggage that is almost never used.

Investigate "rejected for bad file size" error

Somehow hosts are desyncing from renters in terms of the filesize. But it's worse than a normal desync; in fact, it doesn't appear to be a desync at all! The host will report to the renter that the filesize is X sectors, but when you go to revise it, it will complain that the filesize is actually X+1 sectors. (The new protocol doesn't even send what the filesize should be; the host computes it on their side.) So it's not clear whether the bug is on the renter side or the host side. Hopefully the former, since that's easier to fix.

Keep more funds in a contract

Until hosts implement https://gitlab.com/NebulousLabs/Sia/-/merge_requests/4630, we cannot add funds to a contract once it has been fully spent. It'd be better to keep a buffer of, say, 1000 x BaseRPCPrice at each of these checks:

if s.rev.RenterFunds().Cmp(price) < 0 {

if s.rev.RenterFunds().Cmp(price) < 0 {

if rev.NewValidProofOutputs[0].Value.Cmp(price) < 0 {

I'm not sure whether 1000 x BaseRPCPrice is the right amount, though, because we usually fail to renew a contract due to the wallet issue...

Related to #75.

Reuse addresses to form/renew contracts

Reusing addresses seems urgent. Our wallet, which has been running for about one month, already has 831432 addresses. One reason might be that it retries forming/renewing contracts many times because of wallet conflicts, but regardless, it may eventually run out of memory.

Currency might be negative

We got this stack trace. I guess Currency goes negative in updateRevisionOutputs.

goroutine 1818429 [running]:
runtime/debug.Stack(0x450157, 0x0, 0xc001e348b0)
  /usr/local/Cellar/go/1.14.4/libexec/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
  /usr/local/Cellar/go/1.14.4/libexec/src/runtime/debug/stack.go:16 +0x22
gitlab.com/NebulousLabs/Sia/build.Critical(0xc001e34968, 0x1, 0x1)
  /Users/junpei/pkg/mod/gitlab.com/!nebulous!labs/[email protected]/build/critical.go:20 +0xd9
gitlab.com/NebulousLabs/Sia/types.Currency.Sub(0x100c800, 0xc001edbee8, 0x1, 0x1, 0xc001e6a900, 0xc00230be80, 0x1, 0x1, 0x8ae625be92af5a00, 0x0, ...)
  /Users/junpei/pkg/mod/gitlab.com/!nebulous!labs/[email protected]/types/currency.go:190 +0x14e
lukechampine.com/us/renter/proto.updateRevisionOutputs(0xc001e356a8, 0xc013533600, 0xc00230be80, 0x1, 0x1, 0x0, 0x0, 0x0, 0x0, 0x1, ...)
  /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/proto/session.go:754 +0x1e6
lukechampine.com/us/renter/proto.(*Session).RenewAndClearContract(0xc001dc8c00, 0x13e8320, 0xc001ec7ad0, 0x13d4a20, 0xc001ec7ad0, 0xc0018cea00, 0xc002515000, 0x2, 0x7, 0x40e20, ...)
  /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/proto/renew.go:331 +0x155c
lukechampine.com/us/renter/proto.RenewContract(0x13e8320, 0xc001ec7ad0, 0x13d4a20, 0xc001ec7ad0, 0x789823fdbaaea4ea, 0x7105fea3fcb7ef0e, 0x8cf1488de1bb7ae0, 0x5661fc6f389adc5c, 0xc0018cea00, 0x40, ...)
  /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/proto/renew.go:31 +0x271

Write: couldn't read Merkle proof response

Write: couldn't read Merkle proof response: unexpected EOF
Write: couldn't read Merkle proof response: marshalled object contains invalid length prefix

gateway: v2.1.0, linux-amd64

2020-02-15T12:08:14.935Z        WARN    storage/bucket.go:176   failed to store the object      {"path": "/home/file", "request_id": "0b9fb917d670bc38b2ec89309e9cc6cfcf172f8c", "bucket": "home", "key": "file/null/1", "error": "could not upload to some hosts: \n96ee34b0: Write: couldn't read Merkle proof response: unexpected EOF", "errorVerbose": "\n96ee34b0: Write: couldn't read Merkle proof response: unexpected EOF\ncould not upload to some hosts\nlukechampine.com/us/renter/renterutil.(*PseudoFS).flushSectors\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:320\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWriteAt\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:536\nlukechampine.com/us/renter/renterutil.(*PseudoFS).fileWrite\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:358\nlukechampine.com/us/renter/renterutil.PseudoFile.Write\n\t/go/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:555\nio.copyBuffer\n\t/usr/local/go/src/io/io.go:404\nio.CopyBuffer\n\t/usr/local/go/src/io/io.go:375\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:238\ngithub.com/storewise/s3-gateway/pkg/storage.(*bucket).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/bucket.go:170\ngithub.com/storewise/s3-gateway/pkg/storage.(*Storage).Store\n\t/go/src/github.com/storewise/s3-gateway/pkg/storage/storage.go:129\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).putObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:167\ngithub.com/GooBox/sia-s3-adapter/pkg/api.(*Server).PutObject\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/api/put_object.go:99\ngithub.com/GooBox/sia-s3-adapter/pkg/api.Respond.func1\n\t/go/pkg/mod/github.com/!goo!box/[email 
protected]/pkg/api/respond.go:54\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithDate.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/date.go:41\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.WithServer.func1.1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/server.go:37\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/GooBox/sia-s3-adapter/pkg/middlewares.(*Logger).With.func1\n\t/go/pkg/mod/github.com/!goo!box/[email protected]/pkg/middlewares/logger.go:73\nnet/http.HandlerFunc.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2007\ngithub.com/gorilla/mux.(*Router).ServeHTTP\n\t/go/pkg/mod/github.com/gorilla/[email protected]/mux.go:212\nnet/http.serverHandler.ServeHTTP\n\t/usr/local/go/src/net/http/server.go:2802\nnet/http.(*conn).serve\n\t/usr/local/go/src/net/http/server.go:1890\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357"}

Go Modules Import Error

Not necessarily an issue, but something I thought you might want to be aware of. When importing lukechampine.com/us using Go modules, the build fails with an error in Sia/persist. It works fine the traditional (GOPATH) way.

# gitlab.com/NebulousLabs/Sia/persist
../../../Projects/pkg/mod/gitlab.com/!nebulous!labs/[email protected]/persist/boltdb.go:6:2: imported and not used: "github.com/coreos/bbolt"
../../../Projects/pkg/mod/gitlab.com/!nebulous!labs/[email protected]/persist/boltdb.go:13:2: undefined: bolt

Reproduction:
https://github.com/n8maninger/sia-us-module

Modules:
https://github.com/golang/go/wiki/Modules

Proposal: Refactor secret key management in proto

Secret key management currently works like this: a secret key is generated for you in proto.FormContract and returned as part of the proto.ContractRevision struct. This struct is then exposed in the proto.ContractEditor interface so that things like proto.Uploader can access the secret key when signing revisions.

There are a few problems with this:

  • There's no way to pass a custom key to FormContract; you are forced to use the key it generates for you. This means you can't derive your contract keys deterministically; you must store each key separately.
  • It forces the secret key to live in memory. Since proto.Uploader accesses the key directly, there's no possibility of e.g. storing the key on a separate hardware module. What we really want is to expose just a SignHash method.
  • The proto.ContractRevision struct is awkward because it bundles the revision and signatures with the secret key. When proto.Uploader wants the secret key, it has to call (ContractEditor).Revision().SecretKey -- does that make any sense? I think the struct should stick around, since it's often convenient to group the revision and signatures (and it lets us add nice helper methods like RenterFunds), but the secret key should be split out and handled separately.

Implementation plan

  • The SecretKey field will be removed from proto.ContractRevision.

  • A proto.ContractKey interface will be added, supporting signing operations.

  • proto.FormContract will take a proto.ContractKey as an argument.

  • renter.SaveContract will take a crypto.SecretKey as an argument. (The renter contract format is not generic; it only supports ed25519 keys.)

  • A wrapper type will be added that implements proto.ContractKey for crypto.SecretKey.

  • proto.ContractEditor will gain a Key method that returns the renter's proto.ContractKey. renter.Contract will implement this method by returning its ed25519 key.

Question: Method to improve contract fund monitoring

This is more of a question/request for feedback. Is there a way in us to block ongoing uploads and downloads if a contract risks running out of funds or falls below a certain threshold? I know https://gitlab.com/NebulousLabs/Sia/-/merge_requests/4630 will prevent contracts from becoming unusable when they run out of funds, but if we want to be sure a contract doesn't drop below, say, 1 SC or 10 SC of funding, how would that monitoring work?

We currently have a monitor that renews contracts when they fall below 100 mSC, but it doesn't function very well, and contracts can still run out of funds. From what I understand, we also have to lock the contract to perform this check, so the more checks we do, the more they interfere with uploads and downloads.

Is this something we should fix on our side or can us assist in this to make contract funding monitoring more robust? Thanks.

Edit: Related to #80 as well.

FormContract/RenewContract should check how long the unconfirmed transaction chain is

To my understanding, a host can remove a contract after forming it if the associated chain of unconfirmed transactions gets too long. Could the same happen when renewing a contract? In other words, will a host remove a renewed contract if it finds, after accepting the renew request, that the renewal transaction spends an unconfirmed output?

In any case, it would be better to check whether the unconfirmed transaction chain is too long and, if so, decline to form/renew contracts. Otherwise, FormContract/RenewContract hands us a revision for a contract that doesn't actually exist.

PseudoFS.maxWriteSize might miscalculate

We got this panic:

runtime error: slice bounds out of range [:-64]

runtime.gopanic
  /usr/local/Cellar/go/1.14.2_1/libexec/src/runtime/panic.go:969
runtime.goPanicSliceAcap
  /usr/local/Cellar/go/1.14.2_1/libexec/src/runtime/panic.go:106
lukechampine.com/us/renter/renterutil.(*PseudoFS).fileWriteAt
  /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:540
lukechampine.com/us/renter/renterutil.(*PseudoFS).fileWrite
  /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/fileops.go:337
lukechampine.com/us/renter/renterutil.PseudoFile.Write
  /Users/junpei/pkg/mod/lukechampine.com/[email protected]/renter/renterutil/filesystem.go:570
io.copyBuffer
  /usr/local/Cellar/go/1.14.2_1/libexec/src/io/io.go:407
io.CopyBuffer
  /usr/local/Cellar/go/1.14.2_1/libexec/src/io/io.go:378

It looks like fs.maxWriteSize returned a negative value.

More accurate transaction fee calculation

wallet.FundTransaction currently estimates the transaction size as len(inputs) * 241, which is inaccurate for transactions with few inputs and many outputs. I see two potential solutions: either take the number of outputs as an additional argument, or take an entire partially-filled transaction as an argument. I'm leaning towards the latter case, since it generalizes to things like file contract transactions and siafund transactions. The only downside is that passing in a partially-filled transaction is sort of a weird API.
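
The contrast between the two approaches can be sketched as follows. The constants and helper names are illustrative only; real sizes would come from Sia's encoding of the actual transaction.

```go
package main

// Rough encoded sizes; illustrative stand-ins for Sia's real encoding.
const (
	inputSize  = 241 // encoded size of a signed siacoin input
	outputSize = 64  // encoded size of a siacoin output
)

// oldEstimate mirrors the current heuristic: the fee basis depends only
// on the number of inputs, ignoring outputs entirely.
func oldEstimate(numInputs int) int {
	return numInputs * inputSize
}

// newEstimate sketches the proposed approach: start from the encoded size
// of the partially-filled transaction (which already includes its outputs,
// file contracts, etc.) and add only the inputs FundTransaction will
// contribute.
func newEstimate(encodedPartialSize, numAddedInputs int) int {
	return encodedPartialSize + numAddedInputs*inputSize
}
```

For a transaction with one input and many outputs, the old heuristic badly undershoots, which is exactly the inaccuracy described above.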

Investigate Merkle proofs for empty contracts

us/renter/proto/session.go

Lines 345 to 348 in f1705b5

// TODO: we skip the Merkle proof if the resulting contract is empty (i.e.
// if all sectors were deleted) because the proof algorithm chokes on this
// edge case. Need to investigate what proofs siad hosts are producing (are
// they valid?) and reconcile those with our Merkle algorithms.

Session.Write should retry

Currently, once Session.Write fails, it kills the upper-layer operation, such as PseudoFile.Write or PseudoFS.Close. Since those functions are hard to recover from, a single failing Session.Write to one host leaves garbage behind. I think it would be nice if Session.Write retried the request a couple of times.

Wrapping Session.Write with some library such as https://github.com/avast/retry-go would be enough.

It is related to #69.

PseudoKV

Moving this here: the metafile format was designed to store ordinary user files, and PseudoFS was designed to represent an ordinary filesystem. Neither works optimally with an S3 bucket architecture. It sounds like we should investigate creating something like a "PseudoS3" that is better geared towards key-value storage.
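
As a starting point for discussion, the surface of such a layer might be no more than a key-value interface. Everything below is hypothetical; the in-memory implementation only stands in for a backend that would pack many small values into shared sectors.

```go
package main

import "errors"

// PseudoKV sketches the interface a key-value-oriented "PseudoS3" layer
// might expose instead of PseudoFS's filesystem semantics.
type PseudoKV interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
	Delete(key string) error
}

// memKV is a trivial in-memory implementation used only to illustrate the
// interface contract.
type memKV struct {
	m map[string][]byte
}

func newMemKV() *memKV { return &memKV{m: make(map[string][]byte)} }

func (kv *memKV) Put(key string, value []byte) error {
	kv.m[key] = append([]byte(nil), value...) // copy: caller may reuse value
	return nil
}

func (kv *memKV) Get(key string) ([]byte, error) {
	v, ok := kv.m[key]
	if !ok {
		return nil, errors.New("key not found")
	}
	return v, nil
}

func (kv *memKV) Delete(key string) error {
	delete(kv.m, key)
	return nil
}
```

A Sia-backed implementation would map Put to packing the value into a pending sector and Get to a (cached) sector download plus an in-sector slice.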
