
vqtorch's Introduction




VQTorch is a PyTorch library for vector quantization.

The library was developed and used for Straightening Out the Straight-Through Estimator: Overcoming Optimization Challenges in Vector Quantized Networks (ICML 2023).

Installation

Development was done on Ubuntu with Python 3.9/3.10 on NVIDIA GPUs; requirements may need to be adjusted for other setups. Some features, such as half-precision cdist and CUDA-based k-means, are only supported on CUDA devices.

First, install the version of cupy that matches your CUDA driver (the CUDA Version reported by nvidia-smi). cupy now appears to support ROCm drivers, but this has not been tested with vqtorch.
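
To check which CUDA version your environment targets, a quick check (assuming PyTorch is already installed) is:

python -c "import torch; print(torch.version.cuda)"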

# recent 12.x cuda versions
pip install cupy-cuda12x

# 11.x cuda versions (for older versions, see the cupy installation guide)
pip install cupy-cuda11x

Next, install vqtorch:

git clone https://github.com/minyoungg/vqtorch
cd vqtorch
pip install -e .
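
To confirm the package is importable after installation:

python -c "import vqtorch"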

Example usage

For examples using VectorQuant for classification and auto-encoders, see the examples provided in the repository.

import torch
from vqtorch.nn import VectorQuant

print('Testing VectorQuant')
# create VQ layer
vq_layer = VectorQuant(
                feature_size=32,     # feature dimension corresponding to the vectors
                num_codes=1024,      # number of codebook vectors
                beta=0.98,           # (default: 0.9) commitment trade-off
                kmeans_init=True,    # (default: False) whether to use kmeans++ init
                norm=None,           # (default: None) normalization for the input vectors
                cb_norm=None,        # (default: None) normalization for codebook vectors
                affine_lr=10.0,      # (default: 0.0) lr scale for affine parameters
                sync_nu=0.2,         # (default: 0.0) codebook synchronization contribution
                replace_freq=20,     # (default: None) frequency to replace dead codes
                dim=-1,              # (default: -1) dimension to be quantized
                ).cuda()

# when `kmeans_init=True`, it is recommended to warm up the codebook before training
with torch.no_grad():
    z_e = torch.randn(128, 8, 8, 32).cuda()
    vq_layer(z_e)

# standard forward pass
z_e = torch.randn(128, 8, 8, 32).cuda()
z_q, vq_dict = vq_layer(z_e)

print(vq_dict.keys())
>>> dict_keys(['z', 'z_q', 'd', 'q', 'loss', 'perplexity'])
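
The returned dictionary carries the quantization loss under 'loss', which is typically added to the task loss during training. Below is a minimal, hypothetical training sketch that optimizes only the VQ layer on random features (assuming vq_dict['loss'] is a scalar tensor); in a real model you would combine it with your reconstruction or classification loss:

import torch
from vqtorch.nn import VectorQuant

vq_layer = VectorQuant(feature_size=32, num_codes=1024).cuda()
opt = torch.optim.AdamW(vq_layer.parameters(), lr=1e-3)

for step in range(100):
    z_e = torch.randn(128, 8, 8, 32).cuda()  # stand-in for encoder features
    z_q, vq_dict = vq_layer(z_e)             # quantize
    loss = vq_dict['loss']                   # commitment/codebook loss from the layer
    opt.zero_grad()
    loss.backward()
    opt.step()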

Supported features

  • vqtorch.nn.GroupVectorQuant - Vectors are quantized by first partitioning them into n subvectors (see the sketch after this list).
  • vqtorch.nn.ResidualVectorQuant - Vectors are quantized and the residuals are then repeatedly quantized (see the sketch after this list).
  • vqtorch.nn.MaxVecPool2d - Pools along the vector dimension by selecting the vector with the maximum norm.
  • vqtorch.nn.SoftMaxVecPool2d - Pools along the vector dimension by a weighted average computed via softmax over the vector norms.
  • vqtorch.no_vq - Disables all vector quantization layers that inherit from vqtorch.nn._VQBaseLayer:
model = VQN(...)  # any model containing VQ layers
with vqtorch.no_vq():
    out = model(x)  # forward pass without quantization
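
A hypothetical sketch of the grouped and residual variants follows; the groups and share argument names are assumptions inferred from the issues further down, so check each class's docstring before use:

import torch
from vqtorch.nn import GroupVectorQuant, ResidualVectorQuant

# assumption: `groups` controls the number of subvectors / residual steps,
# and `share` controls whether the groups share a single codebook
gvq = GroupVectorQuant(feature_size=32, num_codes=1024, groups=4, share=False).cuda()
rvq = ResidualVectorQuant(feature_size=32, num_codes=1024, groups=4, share=True).cuda()

z_e = torch.randn(128, 8, 8, 32).cuda()
z_q, vq_dict = gvq(z_e)  # each 8-dim subvector is quantized with its own code
z_q, vq_dict = rvq(z_e)  # the vector is quantized 4 times, residual by residual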

Experimental features

  • Group affine parameterization: divides the codebook into groups, where each group is reparameterized with its own affine parameters. One can invoke it via
vq_layer = VectorQuant(..., affine_groups=8)
  • In-place alternated optimization: the codebook is updated in place by a dedicated optimizer during the forward pass.
inplace_optimizer = lambda *args, **kwargs: torch.optim.SGD(*args, **kwargs, lr=50.0, momentum=0.9)
vq_layer = VectorQuant(..., inplace_optimizer=inplace_optimizer)
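
A sketch combining both experimental options is below; beta=1.0 follows the constraint discussed in the issues further down, and the hyperparameters are illustrative only:

import torch
from vqtorch.nn import VectorQuant

inplace_optimizer = lambda *args, **kwargs: torch.optim.SGD(*args, **kwargs, lr=50.0, momentum=0.9)

vq_layer = VectorQuant(
    feature_size=32,
    num_codes=1024,
    beta=1.0,                             # required with inplace_optimizer (see issues)
    inplace_optimizer=inplace_optimizer,  # updates the codebook in the forward pass
    affine_groups=8,                      # group affine parameterization
    ).cuda()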

Planned features

We aim to incorporate commonly used VQ methods, including probabilistic VQ variants.

Citations

If features such as affine parameterization, synchronized commitment loss, or alternating optimization were useful, please consider citing:

@inproceedings{huh2023improvedvqste,
  title={Straightening Out the Straight-Through Estimator: Overcoming Optimization Challenges in Vector Quantized Networks},
  author={Huh, Minyoung and Cheung, Brian and Agrawal, Pulkit and Isola, Phillip},
  booktitle={International Conference on Machine Learning},
  year={2023},
  organization={PMLR}
}

If you found the library useful, please consider citing:

@misc{huh2023vqtorch,
  author = {Huh, Minyoung},
  title = {vqtorch: {P}y{T}orch Package for Vector Quantization},
  year = {2022},
  howpublished = {\url{https://github.com/minyoungg/vqtorch}},
}


vqtorch's Issues

Request for new features: diverse codebook sizes in RVQ

Thanks for your contribution in proposing this inspiring work.
I would like to request support for different codebook sizes in residual quantization, along with a weighted loss function.
This might conflict with the share attribute, but I think it would be a reasonable extension. Also, since the generated codes are produced in a coarse-to-fine manner, the first, primary codes should have a larger loss weight during training.

I hope you and your team can consider these two features; I think other quantization variants (e.g., product quantization) are also compatible with them.

Affine reparameterization for residual vector quantization

Hi there, thank you for the great repo and the insightful paper!

I am playing around with residual vector quantization and was wondering whether you have any insights with regard to having different learnable scale and bias parameters for each residual group. It looks like the repo shares the same scale and bias across all residual codebooks. Thank you!

Shapes of inputs

Hello! Currently, the inputs to the layers are assumed to be images, but it would be great if they could be used on a sequence of vectors (which, I suppose, is what you do internally anyway).

retain_graph=True

Hello! I encountered an error when using "inplace_optimizer" in my code, but the same code works fine when "inplace_optimizer" is not used.

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward

Why can inplace_optimizer only be used with beta=1.0?

Since we are using a separate optimizer for the codebook, we would want to prevent codebook updates due to optimizing the original loss. However, if beta = 1.0, the term left in your formulation of the commitment loss is equivalent to the codebook update term in the original VQ-VAE paper. Shouldn't beta be 0, which would then have the z_q term detached?

License

Could you kindly add an appropriate license?
