libsquish's Issues

Squish assumes image data is supplied in RGBA format, but DXT outputs colors in BGRA

This only matters when using the perceptual metric. The fix is simply to swap the components in the luminance calculation and to relabel all rgba variables and documentation accordingly. Also, images are usually gamma corrected, yet the current perceptual metric uses the Rec. 709 coefficients, which are only appropriate for linear-light data. The CCIR 601 coefficients are more correct for gamma-corrected values: Y' = 0.299 R' + 0.587 G' + 0.114 B'.
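
A minimal sketch of the swapped calculation, assuming byte-interleaved BGRA input (the function name and pixel layout are illustrative, not squish internals):

inline float Luma601Bgra( const unsigned char* p )
{
    // for BGRA data: p[0] = B, p[1] = G, p[2] = R, p[3] = A,
    // so the red and blue indices are flipped relative to RGBA
    return 0.299f*p[2] + 0.587f*p[1] + 0.114f*p[0];
}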


Original issue reported on code.google.com by [email protected] on 28 Mar 2010 at 10:34

Inclusion of <climits>

When compiling libsquish under Linux (Debian, up-to-date testing branch),
it fails, complaining about INT_MAX being undeclared. This is easily solved
by #including <climits> (which is a standard C++ header) in alpha.cpp and
singlecolourfit.cpp. Can this be incorporated into the trunk?
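
The reported fix is just the extra include near the top of each file:

// alpha.cpp and singlecolourfit.cpp
#include <climits>   // standard C++ header declaring INT_MAX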

Original issue reported on code.google.com by [email protected] on 13 Nov 2009 at 6:27

Crash in ComputeWeightedCovariance

What steps will reproduce the problem?
1. Have a 4x4 block with identical values in all 4 channels.

What is the expected output? What do you see instead?

Processing crashes in ComputeWeightedCovariance() with a division by 0.0.

What version of the product are you using? On what operating system?

1.10, XP

Please provide any additional information below.

You can fix it with a one-line if check in maths.cpp at line 47:
if( total > FLT_EPSILON )  or  if( !( total < FLT_EPSILON ) )
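
A minimal sketch of that guard in isolation (the function below is illustrative, not squish's actual ComputeWeightedCovariance):

#include <cfloat>   // FLT_EPSILON

// weighted mean with the proposed guard: when every weight is zero
// (e.g. a degenerate block), skip the division instead of dividing by 0.0
float WeightedMean( const float* values, const float* weights, int count )
{
    float total = 0.0f;
    float sum = 0.0f;
    for( int i = 0; i < count; ++i )
    {
        total += weights[i];
        sum += weights[i]*values[i];
    }
    if( total > FLT_EPSILON )   // the suggested one-liner
        return sum/total;
    return 0.0f;
}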

Original issue reported on code.google.com by [email protected] on 24 Mar 2009 at 6:05

S3TC patent situation

Hello Simon Brown (and others),

I have written several DXTn texture conversion patches for the Wine project (http://www.winehq.org/); however, the project's maintainer will not merge my patches into HEAD because of patent fears. Currently the patent on DXT (de)compression is held by VIA (and its subsidiary brand S3). Someone pointed me to libsquish, an open source project implementing the DXT algorithms.

My question is this: did VIA or S3 indemnify you, or did they provide a waiver stating they will not sue you for implementing a DXT (de)compression algorithm? If so, could you advise me on how to procure such indemnification for the Wine project? If not, did you find a way around the patent?

Thanks in advance,

Itzamna

Original issue reported on code.google.com by [email protected] on 4 Dec 2010 at 11:05

Calculate compression result size

I am trying out libsquish using CompressImage(). However, I have no idea how large my result will be in bytes. I searched through the example but found no use of such a function.
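
For reference, squish.h declares GetStorageRequirements() for exactly this; a DXT1 block is 8 bytes and a DXT3/DXT5 block is 16 bytes per 4x4 texels, so the size can also be computed by hand:

#include <squish.h>

// size in bytes of the compressed output for a width x height image
int bytes = squish::GetStorageRequirements( width, height, squish::kDxt1 );

// equivalent manual computation: 4x4 blocks, rounded up
int blocks = ( ( width + 3 )/4 )*( ( height + 3 )/4 );
int dxt1Bytes = blocks*8;   // use blocks*16 for kDxt3/kDxt5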

Original issue reported on code.google.com by [email protected] on 2 May 2012 at 2:22

Compile warnings are generated due to missing virtual destructors

What steps will reproduce the problem?
1. Compile code
2. Warning: class has virtual functions but non-virtual destructor
2.1 files: ColourFit, ClusterFit, RangeFit, SingleColourFit

What is the expected output? What do you see instead?
Code should compile without warnings

What version of the product are you using? On what operating system?
Mac OS X 10.5.6
i686-apple-darwin9-gcc-4.0.1

The issue can be fixed by adding an empty virtual destructor to ColourFit:
virtual ~ColourFit() { }
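
In context, the one-liner might look like this (a sketch, not the full declaration from colourfit.h):

class ColourFit
{
public:
    // an empty virtual destructor makes deletion through a base pointer
    // well-defined and silences the -Wnon-virtual-dtor warning
    virtual ~ColourFit() {}

    // ... existing virtual interface ...
};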

Original issue reported on code.google.com by [email protected] on 2 Apr 2009 at 3:48

DXT1 compression forces RGB to 0 when alpha bit is 0

What steps will reproduce the problem?
1. Convert an R8G8B8A8 texture to DXT1 using squish.
2. Look at the compressed texture in your engine with bilinear filtering enabled on the sampler, or with alpha test turned off.

What is the expected output? What do you see instead?
The RGB values of the texture are forced to 0 where alpha < 128. This is a problem because when discarding texels with clip( tex - 0.5 ), the RGB values bleed to black.

What version of the product are you using? On what operating system?
I checked the latest version; the code is the same.

Please provide any additional information below.
I locally removed these lines:

// check for transparent pixels when using dxt1
/*if( isDxt1 && rgba[4*i + 3] < 128 )
{
  m_remap[i] = -1;
  m_transparent = true;
  continue;
}*/

and 

// check for a match
int oldbit = 1 << j;
bool match = ( ( mask & oldbit ) != 0 )
    && ( rgba[4*i] == rgba[4*j] )
    && ( rgba[4*i + 1] == rgba[4*j + 1] )
    && ( rgba[4*i + 2] == rgba[4*j + 2] )
    /*&& ( rgba[4*j + 3] >= 128 || !isDxt1 )*/;

in colourset.cpp

Then it works as expected... So I really wonder if the DXT1 format *needs* that to be done, or if this is a known bug?

I tested textures converted with that fix on a PC with a GTX260/DX9, a PlayStation 3, and an Xbox 360; it worked fine on all platforms.

Original issue reported on code.google.com by [email protected] on 9 Mar 2010 at 9:49

Is squish DXT1 compression reliably consistent across builds/machines?

Please let me know if there is a more appropriate forum within which to ask
this question.

I am considering switching to squish (from nvDXT) if it will solve the
following issue(s):

What steps will reproduce the problem?

in nvDXT (a somewhat old version):
1. Debug builds seem to change the x86 FP control word (I can tell because it is not restored on return), whereas Release builds seem not to change it (or perhaps they do, but then restore it; I'm not sure).
2. If I build Debug vs. Release, my compressed image will be different (even if I call _controlfp(_PC_24, _MCW_PC);).
3. Comments online concerning the CUDA library (in software mode) indicate that the output may differ based on the compile flags, the compile target (32-bit/64-bit), and (more troublesome) the specific machine the compression is run on.
4. I am concerned (though have not confirmed) that compressing the same source data may produce different target data within the same application instance (possibly due to "leftover" state in memory inside the nvDXT lib from a previous compression).

What is the expected output? What do you see instead?
Is there a way to configure squish so that I can get reliable results regardless of the machine, compile target, or build I am running? It seems that nvDXT cannot provide this.

What version of the product are you using? On what operating system?
Not using it yet; Windows.

Please provide any additional information below.
I am using DXT1 compression on many machines within a team and am trying to use CRC checks to confirm consistency. I understand that DXT1 compression in general does not mandate identical results, but since all machines will be running the same code, I would like to get identical results from identical source data on all of the machines on my team.
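
A minimal sketch of that kind of check, with the CRC routine itself supplied elsewhere (squish's output can also depend on whether it was built with SSE, so every machine should build with the same SQUISH_USE_SSE setting in config.h):

#include <squish.h>
#include <vector>

// compress a fixed RGBA buffer and return the raw DXT1 bytes; hashing
// this buffer on each machine and comparing the hashes is the
// consistency check described above
std::vector<unsigned char> CompressForCheck( const unsigned char* rgba,
                                             int width, int height )
{
    std::vector<unsigned char> blocks(
        squish::GetStorageRequirements( width, height, squish::kDxt1 ) );
    squish::CompressImage( rgba, width, height, &blocks[0], squish::kDxt1 );
    return blocks;
}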


Thank you for your consideration/help.

Original issue reported on code.google.com by [email protected] on 11 Jun 2009 at 9:12

Python bindings

Here is a patch adding Python bindings to this excellent library. It doesn't cover the whole API, just image compression/decompression.

I've also added two test examples that compress/decompress an image from pygame.

Mathieu

Original issue reported on code.google.com by txprog on 4 Jul 2011 at 10:56

Attachments:

Image quality regression


I noticed that the squish branch in the NVIDIA Texture Tools produces slightly higher quality than the latest version of squish.

Here are the RMSE values that I get on a standard set of images:

             squish      NVTT        DIFF
kodim01.png 8.290677    8.275274    0.015403
kodim02.png 6.264800    6.204140    0.060660
kodim03.png 4.893167    4.872263    0.020904
kodim04.png 5.785400    5.749871    0.035529
kodim05.png 9.666156    9.651270    0.014886
kodim06.png 7.165666    7.153911    0.011755
kodim07.png 5.862642    5.841345    0.021297
kodim08.png 10.275237   10.239549   0.035688
kodim09.png 5.342344    5.329803    0.012541
kodim10.png 5.312246    5.290972    0.021274
kodim11.png 6.785249    6.772330    0.012919
kodim12.png 4.850367    4.833756    0.016611
kodim13.png 10.896658   10.87993    0.016728
kodim14.png 8.333256    8.314128    0.019128
kodim15.png 5.929647    5.897056    0.032591
kodim16.png 5.130026    5.118624    0.011402
kodim17.png 5.588973    5.579279    0.009694
kodim18.png 8.036373    8.015795    0.020578
kodim19.png 6.621748    6.613124    0.008624
kodim20.png 5.511690    5.486531    0.025159
kodim21.png 7.169297    7.158374    0.010923
kodim22.png 6.487758    6.474782    0.012976
kodim23.png 4.980631    4.965201    0.015430
kodim24.png 8.455920    8.414561    0.041359
clegg.png   15.175355   14.996866   0.178489
frymire.png 12.486885   12.053708   0.433177
lena.png    7.085361    7.064801    0.020560
monarch.png 6.609877    6.600169    0.009708
peppers.png 6.458308    6.433527    0.024781
sail.png    8.359949    8.341820    0.018129
serrano.png 6.549851    6.365134    0.184717
tulips.png  7.639101    7.622415    0.016686
Bradley1.png    13.535675   13.528776   0.006899
Gradient.png    0.752600    0.752600    0
MoreRocks.png   8.888842    8.888280    0.000562
Wall.png    12.408543   12.408283   0.000260
Rainbow.png 2.197008    2.185532    0.011476
Text.png    6.638536    6.483967    0.154569


I'm a bit puzzled by that. I've confirmed it's not caused by the recent optimizations in the computation of the alpha/beta terms. NVTT uses the power method to compute the best-fit axis, and that seems to produce slightly better results, but it doesn't account for all of the difference.
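
For reference, a minimal sketch of a power iteration on a packed symmetric 3x3 covariance matrix (the [xx, xy, xz, yy, yz, zz] layout and the iteration count are assumptions, not NVTT's exact code):

#include <cmath>

// repeated multiplication by the covariance matrix converges toward the
// eigenvector with the largest eigenvalue, i.e. the best-fit axis
void PowerMethodAxis( const float cov[6], float axis[3], int iterations )
{
    float v[3] = { 1.0f, 1.0f, 1.0f };   // initial estimate
    for( int k = 0; k < iterations; ++k )
    {
        float x = cov[0]*v[0] + cov[1]*v[1] + cov[2]*v[2];
        float y = cov[1]*v[0] + cov[3]*v[1] + cov[4]*v[2];
        float z = cov[2]*v[0] + cov[4]*v[1] + cov[5]*v[2];

        float len = std::sqrt( x*x + y*y + z*z );
        if( len < 1e-12f )
            break;   // degenerate covariance, keep the last estimate

        v[0] = x/len;
        v[1] = y/len;
        v[2] = z/len;
    }
    axis[0] = v[0];
    axis[1] = v[1];
    axis[2] = v[2];
}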

I also noticed that increasing the number of iterations has only a minimal effect on the resulting quality. It makes me wonder if it's really worth the added complexity.

If old releases were available for download, it might make it easier to find out when the regression was introduced.


Original issue reported on code.google.com by [email protected] on 25 Nov 2008 at 7:12

Decompression issue.

I grabbed the latest version and, with the addition of limits.h, got it compiling on Android. I am trying to use the DecompressImage function on devices that do not support S3TC. I use nvcompress to generate the DDS files that I am trying to decompress. I have attached an example that contains the input DDS file and the output (post-DecompressImage) RGBA data; you can use GIMP to load the raw image. It looks like the block colour palette isn't being calculated properly; perhaps it is an RGBA/BGRA issue?
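
A quick way to test the channel-order theory is to swap red and blue in place after decompression (a sketch; rgba, width and height are the buffer and dimensions passed to DecompressImage):

#include <algorithm>   // std::swap

// if the image looks correct after this swap, the palette was decoded
// with R and B exchanged, i.e. a BGRA/RGBA mismatch
for( int i = 0; i < width*height; ++i )
    std::swap( rgba[4*i + 0], rgba[4*i + 2] );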

Original issue reported on code.google.com by [email protected] on 28 Jan 2012 at 12:51

Attachments:

Compile error with gcc 4.4.0

When compiling with gcc 4.4.0, an error is returned that INT_MAX is not declared. It's fixed by including limits.h in the file squish.h.

Original issue reported on code.google.com by [email protected] on 2 Sep 2009 at 8:00

libtxc_dxtn replacement/integration

Hi,

I don't have a problem with libsquish; I'm not even using it. I wanted to ask if you have any plans for a libtxc_dxtn replacement or Mesa integration, because libtxc_dxtn is broken (it doesn't support multitextures, as I was told by the #radeon devs); for example, lights/lamps and 90% of other animated textures look pixelated (like in Wolf3D) in Quake3. It's even visible in the Quake3 menus (the main menu texture and the map overview textures). And the #radeon devs don't want to fix or work on that library, because S3TC is a patented algorithm. I also don't know if your library supports it; anyway, I hate C++.

Original issue reported on code.google.com by [email protected] on 3 Jan 2010 at 10:49

Error in principal component estimation

An issue was reported on NVTT that should also affect libsquish:

http://code.google.com/p/nvidia-texture-tools/issues/detail?id=120

The code that computes the principal components does not always converge, in particular when the principal component is perpendicular to (1, 1, 1). A simple workaround is to use the largest vector of the covariance matrix as the initial estimate.
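
A sketch of that workaround (the packed-matrix layout [xx, xy, xz, yy, yz, zz] and the function name are illustrative):

// seed the power iteration with the largest row of the symmetric 3x3
// covariance matrix; a fixed (1, 1, 1) seed fails to converge when the
// principal axis is perpendicular to it
void LargestCovarianceRow( const float cov[6], float v[3] )
{
    const float rows[3][3] = {
        { cov[0], cov[1], cov[2] },
        { cov[1], cov[3], cov[4] },
        { cov[2], cov[4], cov[5] },
    };
    int best = 0;
    float bestLen = -1.0f;
    for( int i = 0; i < 3; ++i )
    {
        float len = rows[i][0]*rows[i][0] + rows[i][1]*rows[i][1]
                  + rows[i][2]*rows[i][2];
        if( len > bestLen )
        {
            bestLen = len;
            best = i;
        }
    }
    v[0] = rows[best][0];
    v[1] = rows[best][1];
    v[2] = rows[best][2];
}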

A quick fix is here:

http://code.google.com/p/nvidia-texture-tools/source/detail?r=1057

Original issue reported on code.google.com by [email protected] on 19 Apr 2010 at 6:48

Multiple definition of int FloatToInt( float a, int limit )

This is not an issue, but rather a suggestion:

When building the squish library with a custom build system where all the cpp files are concatenated into one big cpp, compilation errors occur because the function "int FloatToInt( float a, int limit )" is defined three times in three different cpp files (alpha.cpp, singlecolourfit.cpp, colourblock.cpp).

It would be useful to factor this code into a new file to avoid this kind of error.
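
One way to do that factoring, as a sketch: move the helper into a shared header and mark it inline, so a unity ("one big cpp") build sees a single definition. The body below paraphrases the duplicated function; the header name is hypothetical.

// floattoint.h (hypothetical shared header)
inline int FloatToInt( float a, int limit )
{
    // round to nearest by adding 0.5 before the truncating conversion
    int i = ( int )( a + 0.5f );

    // clamp to the valid range
    if( i < 0 )
        i = 0;
    else if( i > limit )
        i = limit;
    return i;
}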

Thanks a lot

Original issue reported on code.google.com by [email protected] on 29 Feb 2012 at 9:10
