spillerrec / cgcompress

Efficiently store Visual Novel CGs by using a multi-image format that stores all variations of one image in one file. This way, only the differences need to be stored, which reduces the file size significantly.

License: GNU General Public License v3.0

QMake 0.80% C++ 91.00% C 3.13% HTML 1.69% JavaScript 2.91% CMake 0.47%

cgcompress's People

Contributors: spillerrec

Forkers: hakanaku1234

cgcompress's Issues

Image::combine does not support cgcompress:alpha-replace

It was probably written before this was added, but nevertheless, it is not correct when dealing with images that require it.
In the long run we should probably have a way of actually specifying which merging method is needed. For now it should be sufficient to always use the new method, but if we are going to allow a variable transparency color, we need some way of specifying it.
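A minimal sketch of what an alpha-replace-aware merge could look like. The function name, the default key color, and the compositing rules are assumptions for illustration only, not the actual Image::combine implementation:

```cpp
#include <QImage>
#include <QColor>

// Hypothetical sketch: combine an overlay onto a base image.
// With alpha-replace, a designated "transparency color" in the overlay means
// "clear this pixel", while fully transparent overlay pixels mean "keep the
// base pixel". All names and rules here are assumptions.
QImage combineAlphaReplace( const QImage& base, const QImage& overlay,
                            QRgb transparency_color = qRgba( 255, 0, 255, 255 ) ){
	QImage result = base.convertToFormat( QImage::Format_ARGB32 );
	QImage over   = overlay.convertToFormat( QImage::Format_ARGB32 );
	
	for( int y = 0; y < result.height(); y++ )
		for( int x = 0; x < result.width(); x++ ){
			QRgb pixel = over.pixel( x, y );
			if( pixel == transparency_color )
				result.setPixel( x, y, qRgba( 0, 0, 0, 0 ) ); // erase to transparent
			else if( qAlpha( pixel ) != 0 )
				result.setPixel( x, y, pixel );               // replace with overlay
			// fully transparent overlay pixels keep the base pixel
		}
	return result;
}
```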

Base image could be a combination of all the images

It is most commonly noticed with eyebrows:
(images: 0, 1)
The eyebrow moves, and the overlay image both erases the old eyebrow and paints the new one. Total size: 33,454 bytes

An alternative way would be to use a base image which contains no eyebrows, and two overlays just adding the eyebrows:
(images: shared-base, shared-1-cropped, shared-2-cropped)
Total size: 33,046 bytes

Since the base image is simpler now, it was reduced by 458 bytes (from 30,068 bytes).
The two diff images are, however, 50 bytes larger in total than the original single diff, but this overhead can be removed by storing them in the same image like this:
(image: shared-combined2)
It is now 64 bytes smaller, and thus smaller than the original diff image as well. Is it worth doing this for 64 bytes? Probably not, but it is something to think about nevertheless.

All sizes in bytes:

         Current    Proposed    Shared context
Base     30,068     29,610      29,610
Diff 1   -          1,572       -
Diff 2   3,386      1,864       3,372
Total    33,454     33,046      32,982

Evaluate estimates of image compressed size

The image gradient sum method is quite ad hoc and doesn't really work that well. It does work, though, and is decently fast. Using real compression results is only very slightly better at a horrible slowdown (quality 0). But is it really better than just counting non-transparent pixels? Let's try making some different methods and properly evaluate speed/accuracy (a rough sketch of two candidates follows after the list below).

Suggestions to add:

  • Count the number of transparent pixels
  • Use LZ4 compression as suggested on FLIF gitter
  • Use LZ4 compression with PNG delta compression methods
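A rough sketch of two of the candidate estimators, counting non-transparent pixels and summing gradients; this is illustrative only and does not reflect the existing estimator's exact weighting:

```cpp
#include <QImage>
#include <QColor>
#include <cstdlib>

// Estimate 1: count pixels that would actually need to be stored.
int countNonTransparent( const QImage& img ){
	int count = 0;
	for( int y = 0; y < img.height(); y++ )
		for( int x = 0; x < img.width(); x++ )
			if( qAlpha( img.pixel( x, y ) ) != 0 )
				count++;
	return count;
}

// Estimate 2: sum of horizontal gradients, as a crude proxy for how well the
// pixel data will compress (flat areas compress better than noisy ones).
int gradientSum( const QImage& img ){
	int sum = 0;
	for( int y = 0; y < img.height(); y++ )
		for( int x = 1; x < img.width(); x++ ){
			QRgb left = img.pixel( x-1, y ), right = img.pixel( x, y );
			sum += std::abs( qRed(right)   - qRed(left)   )
			     + std::abs( qGreen(right) - qGreen(left) )
			     + std::abs( qBlue(right)  - qBlue(left)  )
			     + std::abs( qAlpha(right) - qAlpha(left) );
		}
	return sum;
}
```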

Images with small changes end up bloating files

Some files contain a slight difference across large parts of the image for unknown reasons, for example:
(image: 1)
Since we don't have access to the previous image, this is a lot less efficient than LZMA. One idea would be to store the difference between the two images, which I haven't had much success with so far. However, notice that most of the image changes only very slightly; this mask only reacts to differences above 12:
(image: mask)
We could use the difference to store the small pixel value changes, and use the normal approach for the large changes. Like so:
(images: background-noise, foreground)
This could perhaps make it easy to split the pixels which benefit from being stored as a difference from those which are better off stored normally. It seems to be the case, with this example saving 60% of the file size, but this is just a quick mock-up produced in Gimp which could contain errors!

This also raises the challenge of how to decide when to do it, since when we are combining and extracting frames, we have less control over which previous image the diff is taken against. We should try making an implementation just for testing, however, as this could result in significant savings for a certain set of images.
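A sketch of the threshold split described above; the threshold, the two-layer representation, and the delta encoding are assumptions taken from the example, not an existing cgCompress feature:

```cpp
#include <QImage>
#include <QColor>
#include <cstdlib>

// Split the change from 'previous' to 'current' into two layers:
//  - residual: small per-channel differences (|diff| <= threshold), stored as
//    a signed delta against the previous image (offset by 128 to fit 8 bits)
//  - replace:  pixels with larger changes, stored normally as an overlay
void splitDiff( const QImage& previous, const QImage& current, int threshold,
                QImage& residual, QImage& replace ){
	residual = QImage( current.size(), QImage::Format_ARGB32 );
	replace  = QImage( current.size(), QImage::Format_ARGB32 );
	
	for( int y = 0; y < current.height(); y++ )
		for( int x = 0; x < current.width(); x++ ){
			QRgb prev = previous.pixel( x, y ), cur = current.pixel( x, y );
			int dr = qRed(cur)   - qRed(prev);
			int dg = qGreen(cur) - qGreen(prev);
			int db = qBlue(cur)  - qBlue(prev);
			
			bool small = std::abs(dr) <= threshold
			          && std::abs(dg) <= threshold
			          && std::abs(db) <= threshold
			          && qAlpha(cur) == qAlpha(prev);
			if( small ){
				residual.setPixel( x, y, qRgba( dr+128, dg+128, db+128, 255 ) );
				replace .setPixel( x, y, qRgba( 0, 0, 0, 0 ) ); // nothing to overlay
			}
			else{
				residual.setPixel( x, y, qRgba( 128, 128, 128, 255 ) ); // zero delta
				replace .setPixel( x, y, cur );
			}
		}
}
```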

Implement image codecs without relying on Qt

This will allow us to discard transparent values during compression when they are not needed, improving results. We could try to hack the Qt framework instead and require specific plugins, but we want to write a decoder which doesn't rely on Qt anyway, so this would be a good first step.

Crash or incorrect results on different input sizes

If the input images are not the same size it will crash. In rare cases it does not crash, but it still produces incorrect results.

  • Go through the code and insert asserts in functions which rely on the input images being equal in size
  • Add a check at the very beginning with a good error message (see the sketch below)
  • Catch and display exceptions in main
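A minimal sketch of such an up-front check, assuming the input images are already loaded into a QImage list; the function name and the message text are just examples:

```cpp
#include <QImage>
#include <QList>
#include <QSize>
#include <stdexcept>
#include <string>

// Fail early with a readable message if the inputs do not share one size.
void validateInputSizes( const QList<QImage>& images ){
	if( images.isEmpty() )
		throw std::runtime_error( "No input images supplied" );
	
	QSize expected = images.first().size();
	for( int i = 1; i < images.size(); i++ )
		if( images[i].size() != expected )
			throw std::runtime_error(
					"Input image " + std::to_string( i ) + " has a different size "
					"than the first image; all inputs must have the same dimensions" );
}
```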

BMP conversion drops alpha, wrong results in evaluation

Qt's built-in BMP codec does not seem to save the alpha channel. This gives LZMA an unfair advantage in the compression tests on datasets which contain transparency, and I have a rather large dataset which reduces cgCompress's average efficiency over all datasets from 25% to 5%, as just using LZMA should be more efficient for those images. However this comparison is not fair, as it does not include the alpha channel.

It does however seem that for those images LZMA is best, and even deflate is better than WebP. PNG ends up with about the same size as deflate.

Since it is only the evaluation which would use BMP, just fixing it there would be good enough. We could simply write a raw stream manually instead of using BMP. We could even try to interleave it to see if that improves matters. It needs to be fixed properly in the end, but that is not needed for this issue.
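A sketch of writing such a raw stream for the evaluation, simply dumping interleaved RGBA bytes which can then be fed to LZMA/deflate; the layout is an assumption made up for this test, not something cgCompress produces:

```cpp
#include <QImage>
#include <QByteArray>
#include <QColor>

// Dump an image as raw interleaved RGBA bytes, keeping the alpha channel that
// the BMP conversion loses. No header; width/height must be tracked separately
// by the evaluation script.
QByteArray toRawRgba( const QImage& img ){
	QImage converted = img.convertToFormat( QImage::Format_ARGB32 );
	QByteArray out;
	out.reserve( converted.width() * converted.height() * 4 );
	
	for( int y = 0; y < converted.height(); y++ )
		for( int x = 0; x < converted.width(); x++ ){
			QRgb p = converted.pixel( x, y );
			out.append( char( qRed(p)   ) );
			out.append( char( qGreen(p) ) );
			out.append( char( qBlue(p)  ) );
			out.append( char( qAlpha(p) ) );
		}
	return out;
}
```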

Crash if two images are equal

If two images are exactly the same, cgCompress crashes when it tries to render the image: the image gets saved twice, but with the width and height offset so that it falls outside the image. The crash should not happen, nor should the image be saved twice.

Non-optimal compression with overlapping repetitions

Difference frames sometimes contain some of the same content as other difference frames. If a frame is the combination of two other frames, cgCompress cannot extract and reuse it unless they are completely independent.

The following images will produce the issue when compressed with cgCompress:
(images: m1, m2, m3, m4)

Decoder library

Develop a proper decoding library, with a focus on high speed and low memory consumption. This is to make it easier to include it in applications and game engines. A rough API sketch is included after the lists below.

Requirements:

  • No dependency on Qt
  • Fast loading of just a single image
  • Support holding it in memory for fast drawing of several images.
  • Mode for reading only thumbnail + metadata (still missing)

Wanted:

  • LGPL if possible
  • No hard dependencies on file formats; support disabling, for example, FLIF.
  • Parallel decoding of needed images
  • Parallel rendering?
  • SIMD for SPEED.
  • OpenGL compatible buffers?
  • Control of when images get decoded, i.e. delaying decoding until needed to reduce memory usage.
  • Repacking for modifying metadata
  • Access to the raw images, so we can create an evaluator

Other points of interest:

  • Support a different format than ZIP which allows fast seeking. FxSF is being developed for this.
  • Investigate different overlay methods for rendering speed. (1bit alpha might be more efficient)
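A rough, non-Qt API sketch of what such a library's public header could look like; every name and signature here is made up to illustrate the requirements above, not an existing interface:

```cpp
// cgdecoder.hpp -- hypothetical public interface, for illustration only
#pragma once
#include <cstdint>
#include <string>
#include <vector>

namespace cgdecoder{

struct RawImage{
	int width = 0, height = 0;
	std::vector<uint8_t> rgba; // interleaved 8-bit RGBA, width*height*4 bytes
};

class Archive{
	public:
		// Open a cgCompress file; the primitives stay compressed in memory
		// until an image is actually requested (lazy decoding).
		explicit Archive( const std::string& path );
		
		int imageCount() const;
		
		// Decode and render a single final image.
		RawImage render( int index ) const;
		
		// Read only the thumbnail and metadata without decoding everything.
		RawImage thumbnail() const;
		std::string metadata() const;
};

}
```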

Detect the need of custom overlay

The FLIF API doesn't currently save transparent colors, making it impossible to use the overlay method for transparent images with this format. That method is always used, even when no transparency is needed, so FLIF support is utterly broken.
Furthermore, FLIF compresses better when we ignore transparent colors, so only saving them when needed could save us a little. This is likely also the case with WebP, though I haven't tried.

Work items:

  • Raise an issue at FLIF to find out how to save the transparent colors
  • Detect the need of storing the transparent colors (see the sketch below)
  • Support changing the setting in the encoder dynamically
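A minimal sketch of the detection step, assuming the transparent-color trick is only needed when an image contains fully transparent pixels; the exact condition cgCompress should use may well be more involved:

```cpp
#include <QImage>
#include <QColor>

// Returns true if the image contains any fully transparent pixels, in which
// case the transparent colors must be preserved by the codec for the custom
// overlay method to work. Otherwise the encoder can skip storing them.
bool needsTransparentColors( const QImage& img ){
	if( !img.hasAlphaChannel() )
		return false;
	
	QImage converted = img.convertToFormat( QImage::Format_ARGB32 );
	for( int y = 0; y < converted.height(); y++ )
		for( int x = 0; x < converted.width(); x++ )
			if( qAlpha( converted.pixel( x, y ) ) == 0 )
				return true;
	return false;
}
```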

Automatic evaluation of compression

cgCompress files are now verified to be exactly the same as the originals, so instead of having to store the original images, we can convert the cgCompress image to other formats and create a .csv file with the resulting file sizes (a sketch follows after the list below).

Compare the following:

  • The cgCompress file
  • Plain old BMP
  • Individually compressed BMPs (LZMA, perhaps Deflate as well)
  • Combined compressed BMPs
  • WebP
  • FLIF?
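A sketch of how the evaluation output could be produced, assuming the converted files already exist on disk; the file naming scheme and the column names are made up:

```cpp
#include <QFile>
#include <QFileInfo>
#include <QStringList>
#include <QTextStream>

// Write one CSV row per dataset with the file size of each format variant.
// Assumes files like "name.cgcompress", "name.webp", ... already exist.
void writeSizeCsv( const QStringList& names, const QString& csv_path ){
	QFile csv( csv_path );
	if( !csv.open( QIODevice::WriteOnly | QIODevice::Text ) )
		return;
	
	QTextStream out( &csv );
	out << "name,cgcompress,bmp,bmp_lzma,webp\n";
	for( const QString& name : names )
		out << name
		    << "," << QFileInfo( name + ".cgcompress" ).size()
		    << "," << QFileInfo( name + ".bmp"        ).size()
		    << "," << QFileInfo( name + ".bmp.xz"     ).size()
		    << "," << QFileInfo( name + ".webp"       ).size()
		    << "\n";
}
```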

New compression by detecting equal pixels

Try out a compression method where each pixel is only saved once. Instead of finding out how we can get from one image to another, find out which pixels are shared between which images.
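A sketch of what such a per-pixel sharing analysis could look like, building a bitmask of which input images use the same value at each position; this is only an exploration aid, not a proposed file format:

```cpp
#include <QImage>
#include <QVector>
#include <cstdint>
#include <map>
#include <vector>

// For every pixel position, record which images share each pixel value.
// The result maps each distinct value at that position to a bitmask of the
// images using it (bit i set = image i has that value). Assumes at most 32
// input images, all of the same size.
using ShareMap = std::map<QRgb, uint32_t>;

std::vector<ShareMap> sharedPixels( const QVector<QImage>& images ){
	int width = images[0].width(), height = images[0].height();
	std::vector<ShareMap> result( width * height );
	
	for( int i = 0; i < images.size(); i++ )
		for( int y = 0; y < height; y++ )
			for( int x = 0; x < width; x++ )
				result[ y*width + x ][ images[i].pixel( x, y ) ] |= ( 1u << i );
	
	return result;
}
```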

Parallel decoding?

Split the base image into, for example, 4 parts so it can be decoded in parallel? Test how much this affects the compression.

Split images to reduce pixel count

Consider this image:
(images: mask, 015-cropped)
While seemingly okay, this could be split into three parts: the left eye, the right eye, and the mouth. Doing so would reduce the amount of pixels by 50%, while only increasing the file size by 2% (~200 bytes). Secondly, we would be able to decode the images in parallel, speeding up decoding.
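A sketch of how such split regions could be found automatically: flood fill the non-transparent pixels of an overlay into connected regions and take the bounding box of each. The 4-connectivity rule and the lack of any merging heuristic are simplifications:

```cpp
#include <QImage>
#include <QColor>
#include <QPoint>
#include <QRect>
#include <vector>

// Find bounding boxes of connected non-transparent regions in an overlay,
// so each region can be stored (and decoded) as its own smaller image.
std::vector<QRect> splitRegions( const QImage& overlay ){
	int w = overlay.width(), h = overlay.height();
	std::vector<bool> visited( w * h, false );
	std::vector<QRect> regions;
	
	for( int y = 0; y < h; y++ )
		for( int x = 0; x < w; x++ ){
			if( visited[ y*w + x ] || qAlpha( overlay.pixel( x, y ) ) == 0 )
				continue;
			
			// Flood fill one region and track its bounding box
			QRect box( x, y, 1, 1 );
			std::vector<QPoint> stack{ QPoint( x, y ) };
			visited[ y*w + x ] = true;
			while( !stack.empty() ){
				QPoint p = stack.back();
				stack.pop_back();
				box = box.united( QRect( p.x(), p.y(), 1, 1 ) );
				
				const QPoint neighbors[4] = { { p.x()+1, p.y() }, { p.x()-1, p.y() }
				                            , { p.x(), p.y()+1 }, { p.x(), p.y()-1 } };
				for( QPoint n : neighbors ){
					if( n.x() < 0 || n.y() < 0 || n.x() >= w || n.y() >= h )
						continue;
					if( visited[ n.y()*w + n.x() ] || qAlpha( overlay.pixel( n.x(), n.y() ) ) == 0 )
						continue;
					visited[ n.y()*w + n.x() ] = true;
					stack.push_back( n );
				}
			}
			regions.push_back( box );
		}
	return regions;
}
```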
