Comments (8)

BridgetteSong commented on August 17, 2024

demo with 160k steps
demo.zip

from audiodec.

bigpon commented on August 17, 2024

Hi,
Thanks for the interesting experiments!
I think it is reasonable to add more nonlinearity to the model to enhance its modeling capacity, as long as training remains stable.
If you have more detailed results in any form (demo page, paper, etc.), please feel free to share them, and I will update the README to note that adding activation functions improves robustness to unseen data.


BridgetteSong commented on August 17, 2024

@bigpon I confirmed that changing the autoencoder model improves results. My changes were as follows:

  1. Adding activation functions, as I said above, is very important (highly recommended). Both the Encodec and SoundStream papers do this (I guess they both borrowed it from MelGAN). I use the Snake activation function rather than ELU or LeakyReLU.
  2. Adding WeightNorm layers is very important; they significantly improve training stability and model quality (highly recommended).
  3. Appropriately increasing code_dim and the model size improves audio reconstruction quality (mel loss drops to about 15.3 in my version). I recommend code_dim=128, although I use 256.
  4. I use the noncausal training mode, MPD + complex MRD as discriminators, and a multi-resolution mel loss, trained with AdamW and ExponentialLR.
  5. BTW, there are two errors in your MRD: the intermediate convolution outputs are not returned, so the feature-matching loss cannot be computed, and each Conv2d layer is missing its padding.
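The Snake activation mentioned in point 1 can be sketched as follows. This is a minimal PyTorch version based on its published definition, x + (1/α)·sin²(αx), with a learnable per-channel α; it is not taken from the audiodec codebase, and the class/parameter names are my own:

```python
import torch
import torch.nn as nn

class Snake(nn.Module):
    """Snake activation: x + (1/alpha) * sin^2(alpha * x).

    The periodic term gives an inductive bias toward periodic signals
    such as audio; alpha is a learnable per-channel parameter.
    """
    def __init__(self, channels: int, alpha_init: float = 1.0):
        super().__init__()
        # one alpha per channel, broadcast over (batch, channels, time)
        self.alpha = nn.Parameter(alpha_init * torch.ones(1, channels, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # small eps keeps the division finite if alpha is driven to zero
        return x + (1.0 / (self.alpha + 1e-9)) * torch.sin(self.alpha * x) ** 2
```

With alpha near its default of 1, Snake behaves like the identity for small inputs, which is one reason it tends to be a drop-in replacement for ELU/LeakyReLU in codec encoders and decoders.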

Here is my training config.yaml:
symAD_librispeech_16000_hop160_base.txt

Demos for the new config after 200k training steps on the LibriSpeech and AISHELL datasets, tested on an unseen dataset:

demo.zip


bigpon commented on August 17, 2024

Hi @BridgetteSong,

  • Thanks for the great investigation effort! I will check the results on the 48 kHz VCTK corpus.
  • Do you have any plan to write a paper about your findings? If you do, please let me know, and I will add the info to the README for others' reference.
  • You are correct. The MRD does have these problems, and I will fix them.
  • Where do you put the WeightNorm layers?
  • Could you also provide results from the original AudioDec for reference? (I assume the AudioDec results in demo.zip are from the modified version, right?)
  • According to your conclusion, do these modifications increase quality on arbitrary datasets, or robustness on unseen datasets? Since you train and test the model using libritts and aishell, I assume these modifications increase reconstruction quality on seen data, right?


BridgetteSong commented on August 17, 2024

@bigpon

  • I don't have any plan to write a paper yet, but I'm interested in improving on Encodec.

  • I add WeightNorm to each Conv1d layer, i.e. replacing

    self.conv = nn.Conv1d(...)

    with

    self.conv = torch.nn.utils.weight_norm(nn.Conv1d(...))

    I think the stability of the autoencoder is very important, whether training in two stages or in a single stage as I do.

    BTW, I see that WeightNorm is added in the 2nd stage by default via the apply() function, but I can't confirm whether WeightNorm is successfully initialized on the ResidualBlock that way, so I wrap each Conv1d directly as above.

  • I trained the model on the libritts and aishell datasets but tested on an unseen dataset (the audios in demo.zip are unseen; they come from another TTS dataset and even include a singing demo), so these modifications can increase quality on arbitrary datasets.

  • I can't provide results for the original AudioDec because the model has changed; demo.zip contains only the modified version. However, the hifi directory in demo.zip contains the original 16 kHz and 24 kHz audios, so anyone who has trained an original AudioDec model can test it on those real audios.
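The construction-time wrapping described above can be sketched like this. The `WNConv1d` class name is my own illustration, not from the audiodec codebase; the point is that wrapping at construction avoids relying on a later `module.apply()` pass reaching every submodule:

```python
import torch
import torch.nn as nn
from torch.nn.utils import weight_norm

class WNConv1d(nn.Module):
    """Conv1d with weight normalization applied at construction time,
    rather than retrofitted afterwards with module.apply()."""
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int, **kwargs):
        super().__init__()
        # weight_norm reparameterizes the weight as magnitude * direction,
        # which tends to stabilize GAN-style training
        self.conv = weight_norm(nn.Conv1d(in_ch, out_ch, kernel_size, **kwargs))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)
```

Usage is identical to a plain `nn.Conv1d`, e.g. `WNConv1d(3, 8, kernel_size=5, padding=2)` maps a (batch, 3, T) tensor to (batch, 8, T).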


bigpon commented on August 17, 2024

Hi,
Thanks for your investigation!

According to our internal experiments, we reached the following conclusions:

  1. Adding more activation functions, as in HiFi-GAN, slightly increases robustness to unseen data. However, the result is very similar to our 2-stage approach, which already uses HiFi-GAN as the decoder.
  2. The Snake activation doesn't show marked improvements over ELU. In some cases, it even yields much worse speech quality; we suspect the instability of the Snake activation causes the problem.
  3. Instead of adding activations, we found that increasing the bitrate to a reasonable scale (e.g. 24 kbps, as in Opus) significantly improves robustness to unseen data, which makes sense since it reduces the modeling difficulty. However, a very low bitrate is essential for some temporally sensitive tasks such as LLM-based speech generation. Therefore, without greatly changing the architecture, adopting more training data is a compromise. (We are investigating a new architecture for unseen-data robustness and hope to release it soon.)
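For context on the bitrate point, a residual-VQ codec's bitrate follows directly from its frame rate and codebook configuration. The helper below is illustrative (the function name and example configurations are my own, not audiodec's actual settings):

```python
import math

def codec_bitrate_bps(sample_rate: int, hop_length: int,
                      n_codebooks: int, codebook_size: int) -> float:
    """Bitrate of an RVQ codec in bits/s: frames per second times
    bits per frame (each codebook index costs log2(codebook_size) bits)."""
    frame_rate = sample_rate / hop_length
    bits_per_frame = n_codebooks * math.log2(codebook_size)
    return frame_rate * bits_per_frame

# e.g. the 16 kHz / hop-160 config discussed above, assuming
# 8 codebooks of 1024 entries: 100 frames/s * 80 bits = 8000 bps (8 kbps)
print(codec_bitrate_bps(16000, 160, 8, 1024))
```

Reaching something like 24 kbps therefore means more codebooks, larger codebooks, or a higher frame rate, each of which eases the quantizer's modeling burden at the cost of longer token sequences for downstream LLM-based generation.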

On the other hand, the 2D conv padding issue of the MSTFT discriminator has been fixed, and the corresponding models have been updated. Thanks again for your contributions.

