Comments (7)
I am expecting to add an options bundle as the second argument to the constructor. So for example we'd have something like:

```js
new CompressionStream('deflate', {
  level: 0.1,
  flush: 'always'
});
```
How the parameters work will be difficult to fix if we get it wrong, which is why they aren't in the initial version of the standard.
from compression.
That makes perfect sense, thanks! :)
I imagine that will also include the use of a custom dictionary? At present for Fastmail, we use pako in a worker to compress our API request bodies, and use a custom dictionary because it makes the compression much more effective. I had hoped that we could look to switching it to a standard API that would compress faster with less code loaded.
Given the already-niche status of manual compression in JavaScript (for web systems specifically, I personally can't think of having heard of even one other user, though doubtless some exist), I was a little surprised to hear of this shipping in Chrome without support for varying the compression level or providing a custom dictionary. It's so rare that people want to do manual compression that I'd guess a fair fraction of those that do have tuned things carefully, and so will not be able to use this new API at this time without altering the balance.
My surprise is probably because I believe supporting at least those two parameters (level and dictionary) to be quite straightforward, with a broad approach (an options object to the constructor) being obvious, and individual options decisions that should just be made, where discussion is unlikely to affect matters. For starters, I take it as given that the available options depend wholly on the compression method selected.
`level` could reasonably be an enum, an integer, or a float in the range 0–1; given JavaScript, and given the conventions of existing compression software, probably an integer. Its reasonable range could be 0–3 (matching FLEVEL) or 1–9 (matching most software). The default varies too: for FLEVEL's 0–3, 2 is defined as the default; for compression tools with a level 1–9, some default to 8 (e.g. the zlib library) and others to 6 (e.g. gzip(1)). These numbers are, of course, fairly arbitrary anyway. You could then either leave the default unspecified, or pick 6 or 8 and run with it.

For compression, `dictionary` should probably be a String containing only ASCII, an ArrayBuffer, or a Uint8Array. For decompression, you could wish to provide more than one dictionary, so perhaps `dictionary` (or `dictionaries`?) would be an object mapping Adler-32 checksums to dictionaries.
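To make the shape concrete, here is a rough sketch of how a UA might validate such an options bag. The option names (`level`, `dictionary`), the 1–9 integer range, and the default of 6 are all assumptions chosen purely for illustration, not anything specified:

```js
// Hypothetical normalization of a CompressionStream options bag.
// Names, ranges, and defaults here are illustrative assumptions only.
function normalizeDeflateOptions(options = {}) {
  const { level = 6, dictionary = null } = options;
  if (!Number.isInteger(level) || level < 1 || level > 9) {
    throw new RangeError('level must be an integer in 1-9');
  }
  if (dictionary !== null &&
      typeof dictionary !== 'string' &&
      !(dictionary instanceof ArrayBuffer) &&
      !(dictionary instanceof Uint8Array)) {
    throw new TypeError('dictionary must be a string, ArrayBuffer or Uint8Array');
  }
  return { level, dictionary };
}
```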
> I imagine that will also include the use of a custom dictionary?
Yes, that's on the roadmap, although I don't think I've specifically mentioned it here. I filed issue #27 to make it explicit.
> I was a little surprised to hear of this shipping in Chrome without support for varying the compression level or providing a custom dictionary
I believe in shipping the most uncontroversial parts of an API first. We need to assess demand to set the priority for shipping more advanced features.
> individual options decisions that should just be made, where discussion is unlikely to affect matters.

Discussion does affect matters. You gave 4 different approaches to `level` yourself. Someone else may have another suggestion which copes well with libdeflate having levels all the way up to 12, or with zopfli providing a higher level of compression which is extraordinarily expensive.
The parameter should be a float in [0..1], similar to `toDataURL` and `toBlob`, imho.

Mapping to internal compression levels (say, libdeflate's level 12) should be an unspecified internal implementation detail [1].

[1] `toDataURL` and `toBlob` are spec'd that way. How their parameters are mapped to the internal details of a codec is not in the spec. That was intentional -- it allows browser vendors some wiggle-room to choose what's best for their underlying implementations.
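As an illustration of the kind of unspecified mapping being described, a browser could translate the [0..1] float to a zlib-style integer level internally. The function below is a purely hypothetical sketch; nothing requires this particular mapping:

```js
// Hypothetical, non-normative mapping from a [0..1] quality float
// to a zlib-style level 1-9. The exact mapping is precisely the
// kind of detail the spec would leave to the implementation.
function qualityToZlibLevel(quality) {
  if (typeof quality !== 'number' || quality < 0 || quality > 1) {
    throw new RangeError('quality must be a number in [0..1]');
  }
  return Math.max(1, Math.round(quality * 9)); // 0 -> 1, 1 -> 9
}
```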
@noell I didn't know about the `quality` argument to `toDataURL` and `toBlob`. That's a good precedent.

I feel there should be some kind of restriction on implementations. For example, `level: 0.1` should use less CPU than `level: 0.9`, and the difference between `0.8` and `1.0` shouldn't be more than a factor of 2. The reason being that code that performs well in one browser shouldn't perform badly in another browser.
In addition to `options` for the compression algorithm, it would be good to be able to set the queuing strategies for these as well, following the same approach as the `TransformStream` constructor.
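For reference, the `TransformStream` constructor already accepts queuing strategies as its trailing arguments; the suggestion is that `CompressionStream` could mirror this. The snippet below shows only the existing precedent (`CompressionStream` itself does not currently accept these arguments):

```js
// TransformStream takes writable-side and readable-side queuing
// strategies as its 2nd and 3rd arguments. The proposal is that
// CompressionStream could accept the same trailing arguments.
const ts = new TransformStream(
  {}, // identity transformer
  new CountQueuingStrategy({ highWaterMark: 4 }),          // writable side
  new ByteLengthQueuingStrategy({ highWaterMark: 65536 })  // readable side
);
console.log(ts.writable instanceof WritableStream); // true
```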