I'm iterating on the decomposition of the data flow, leaning out both the crypto and compression functionality in a contextually robust fashion. The objective is to maximize instruction availability for the PC and minimize wait time for any available gophers, while supporting a semantically dense client development experience.
In compression.go, the Gzip function has a goroutine that reads from a buffer and writes back to that buffer. I noticed that the buffersize variable is hard-coded to 8. Would there be any benefit to increasing this buffer size, or would it result in too much data for a gopher to process effectively as it accepts data from the encryption gopher's stream?