
gantts's Introduction

GAN TTS


PyTorch implementation of generative adversarial network (GAN)-based text-to-speech (TTS) synthesis and voice conversion (VC).

  1. Yuki Saito, Shinnosuke Takamichi, and Hiroshi Saruwatari, "Statistical Parametric Speech Synthesis Incorporating Generative Adversarial Networks," IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017.
  2. Shan Yang, Lei Xie, Xiao Chen, Xiaoyan Lou, Xuan Zhu, Dongyan Huang, and Haizhou Li, "Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under a Multi-task Learning Framework," arXiv:1707.01670, Jul. 2017.

Generated audio samples

Audio samples are available in the Jupyter notebooks at the link below:

Notes on hyperparameters

  • adversarial_streams, which specifies the streams (mgc, lf0, vuv, bap) used to compute the adversarial loss, is a parameter to which speech quality is very sensitive. Computing the adversarial loss on mgc features (excluding the first few dimensions) seems to work well.
  • If mask_nth_mgc_for_adv_loss > 0, the first mask_nth_mgc_for_adv_loss dimensions of mgc are ignored when computing the adversarial loss. As described in saito2017asja, I confirmed that using the 0th (and 1st) mgc coefficients for the adversarial loss degrades speech quality. In my experience, mask_nth_mgc_for_adv_loss = 1 for mgc order 25 and mask_nth_mgc_for_adv_loss = 2 for mgc order 59 work well (see the command sketch after this list).
  • F0 extracted by WORLD is spline-interpolated. Set f0_interpolation_kind to "slinear" if you want first-order spline interpolation, which is the same as Merlin's default.
  • Set use_harvest to True if you want to use the Harvest F0 estimation algorithm. If False, Dio and StoneMask are used to estimate and refine F0.
  • If you see cuda runtime error (2) : out of memory, try a smaller batch size. See #3.
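These settings live in hparams.py; judging from the command lines quoted in the issues below, they can also be overridden at train time via train.py's --hparams flag. A minimal sketch (the particular values are illustrative, not recommendations):

# Mask the first mgc dimension for the adversarial loss and shrink the
# batch size; comma-separated key=value overrides, as in the issue logs below.
python train.py --hparams_name=vc --hparams="mask_nth_mgc_for_adv_loss=1,batch_size=8"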

Notes on [2]

Though I haven't obtained improvements over Saito's approach [1] yet, the GAN-based models described in [2] should be reproducible with the following configuration (see the sketch after this list):

  • Set generator_add_noise to True. This lets the generator take Gaussian noise as an additional input; linguistic features are concatenated with the noise vector.
  • Set discriminator_linguistic_condition to True. The discriminator then uses linguistic features as a condition.
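Assuming the same --hparams override syntax that appears in the issue logs below also works for boolean flags, these two options could be enabled without editing hparams.py:

python train.py --hparams_name=vc \
    --hparams="generator_add_noise=True,discriminator_linguistic_condition=True"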

Requirements

Installation

Please install PyTorch, TensorFlow and SRU (if needed) first. Once you have those, then

git clone --recursive https://github.com/r9y9/gantts && cd gantts
pip install -e ".[train]"

should install all other dependencies.
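As an optional sanity check that the main dependencies resolved (package names taken from the pip freeze output quoted in the issues below):

python -c "import torch, tensorflow, nnmnkwii, pysptk, pyworld; print('ok')"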

Repository structure

  • gantts/: Network definitions, utilities for working on sequence-loss optimization.
  • prepare_features_vc.py: Acoustic feature extraction script for voice conversion.
  • prepare_features_tts.py: Linguistic/duration/acoustic feature extraction script for TTS.
  • train.py: GAN-based training script. This is written to be generic so that it can be used for training voice conversion models as well as text-to-speech models (duration/acoustic).
  • train_gan.sh: Adversarial training wrapper script for train.py.
  • hparams.py: Hyperparameters for VC and TTS experiments.
  • evaluation_vc.py: Evaluation script for VC.
  • evaluation_tts.py: Evaluation script for TTS.

The feature extraction scripts are written for the CMU ARCTIC dataset but can easily be adapted to other datasets; see the command sketch below for the VC script's usage.
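For reference, the VC feature-extraction script takes a data root plus source and target speaker names (usage taken from an issue report below); a sketch using the demo's speakers, with an arbitrary --dst_dir:

python prepare_features_vc.py ~/data/cmu_arctic/ clb slt --dst_dir=./data/vc_clb_slt/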

Run demos

Voice conversion (en)

vc_demo.sh is a clb-to-slt voice conversion demo script. Before running it, please download the wav files for clb and slt from CMU ARCTIC and check that you have all the data in a directory laid out as follows:

> tree ~/data/cmu_arctic/ -d -L 1
/home/ryuichi/data/cmu_arctic/
├── cmu_us_awb_arctic
├── cmu_us_bdl_arctic
├── cmu_us_clb_arctic
├── cmu_us_jmk_arctic
├── cmu_us_ksp_arctic
├── cmu_us_rms_arctic
└── cmu_us_slt_arctic

Once you have downloaded the datasets, run:

./vc_demo.sh ${experimental_id} ${your_cmu_arctic_data_root}

e.g.,

 ./vc_demo.sh vc_gan_test ~/data/cmu_arctic/

Model checkpoints will be saved at ./checkpoints/${experimental_id} and audio samples are saved at ./generated/${experimental_id}.

Text-to-speech synthesis (en)

tts_demo.sh is a self-contained TTS demo script. The usage is:

./tts_demo.sh ${experimental_id}

This will download the slt_arctic_full_data used in Merlin's demo, perform feature extraction, train models, and synthesize audio samples for the eval/test sets. ${experimental_id} can be an arbitrary string, for example,

./tts_demo.sh tts_test

Model checkpoints will be saved at ./checkpoints/${experimental_id} and audio samples are saved at ./generated/${experimental_id}.

Hyperparameters

See hparams.py.

Monitoring training progress

tensorboard --logdir=log

References

See [1] and [2] at the top of this README.

Notice

This repository doesn't try to reproduce the exact results reported in the papers because 1) the data is not publicly available and 2) hyperparameters depend highly on the data. Instead, I tried the same ideas on different data with different hyperparameters.

gantts's People

Contributors

karkirowle, r9y9, yamachu


gantts's Issues

Problem with dataset_loaders

During duration modeling I am facing the following issue. Could you please suggest a workaround?

Traceback (most recent call last):
  File "train.py", line 829, in <module>
    mse_w=mse_w, mge_w=mge_w)
  File "train.py", line 493, in train_loop
    for x, y, lengths in dataset_loaders[phase]:
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 201, in __next__
    return self._process_next_batch(batch)
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 221, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 62, in _pin_memory_loop
    batch = pin_memory_batch(batch)
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 123, in pin_memory_batch
    return [pin_memory_batch(sample) for sample in batch]
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 123, in <listcomp>
    return [pin_memory_batch(sample) for sample in batch]
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 117, in pin_memory_batch
    return batch.pin_memory()
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/tensor.py", line 82, in pin_memory
    return type(self)().set_(storage.pin_memory()).view_as(self)
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/storage.py", line 83, in pin_memory
    allocator = torch.cuda._host_allocator()
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/cuda/__init__.py", line 220, in _host_allocator
    _lazy_init()
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/torch/cuda/__init__.py", line 85, in _lazy_init
    torch._C._cuda_init()
RuntimeError: cuda runtime error (11) : invalid argument at /opt/conda/conda-bld/pytorch_1503963423183/work/torch/lib/THC/THCGeneral.c:70
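Editor's note: the crash occurs in the pinned-memory path during CUDA initialization, and the hyperparameter dump quoted later on this page shows pin_memory: True. A plausible but unverified workaround is to disable it using the override syntax seen in other issues here:

python train.py --hparams_name=vc --hparams="pin_memory=False"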

Quality of voice conversion from female to male

Hi all,
I tried the voice conversion demo on the cmu_slt to cmu_bdl (female-to-male) pair, yet the generated voice can hardly be recognized as male. I ran Sprocket VC on the same dataset and it produced audio with a voice closer to the target male speaker, though not as smooth as the gantts output. Is there anything we can do to improve this?
Thanks!

No audio files generated

Hi there,

After running the command (./vc_demo.sh vc_gan_test ~/data/cmu_arctic/), 'checkpoints' are created but there are no audio files in the './generated' folder. The following is my log, which ends with 'Finished!':

 90%| 179/200 [21:24<02:30, 7.18s/it]
('Saved checkpoint:', 'checkpoints/checkpoint_epoch180_Generator.pth')
('Saved checkpoint:', 'checkpoints/checkpoint_epoch180_Discriminator.pth')
 94%| 189/200 [22:35<01:18, 7.17s/it]
('Saved checkpoint:', 'checkpoints/checkpoint_epoch190_Generator.pth')
('Saved checkpoint:', 'checkpoints/checkpoint_epoch190_Discriminator.pth')
100%| [23:48<00:07, 7.18s/it]
('Saved checkpoint:', 'checkpoints/checkpoint_epoch200_Generator.pth')
('Saved checkpoint:', 'checkpoints/checkpoint_epoch200_Discriminator.pth')
100%| [23:55<00:00, 7.18s/it]
('Saved checkpoint:', 'checkpoints/checkpoint_epoch200_Generator.pth')
('Saved checkpoint:', 'checkpoints/checkpoint_epoch200_Discriminator.pth')
Finished!

Publish how much audio is needed?

Hi there,

We're considering this for a dry run tts project with approx 300 hours of speech. Do you have any indication of how the model is likely to perform on this much data?

Even just stating somewhere how many hours of audio you used for the model you trained would be great.

Thanks.

What's the inputs of GAN-TTS?

I ran tts_demo.sh before, and it works well.
I noticed that a '.lab' file is one of the inputs to the function tts_from_label().
My question is: what should I do if I want to produce a '.wav' file from a string of characters?
Should I convert the string to a '.lab' file?
Does that mean I should build a front-end containing a trained module to process the string?
Thanks!

Add download_dataset.sh

Add a download-dataset script, download_dataset.sh:

curl http://festvox.org/cmu_arctic/packed/cmu_us_awb_arctic.tar.bz2 -o cmu_us_awb_arctic.tar.bz2
tar -xvjf cmu_us_awb_arctic.tar.bz2
curl http://festvox.org/cmu_arctic/packed/cmu_us_bdl_arctic.tar.bz2 -o cmu_us_bdl_arctic.tar.bz2
tar -xvjf cmu_us_bdl_arctic.tar.bz2
curl http://festvox.org/cmu_arctic/packed/cmu_us_clb_arctic.tar.bz2 -o cmu_us_clb_arctic.tar.bz2
tar -xvjf cmu_us_clb_arctic.tar.bz2
curl http://festvox.org/cmu_arctic/packed/cmu_us_jmk_arctic.tar.bz2 -o cmu_us_jmk_arctic.tar.bz2
tar -xvjf cmu_us_jmk_arctic.tar.bz2
curl http://festvox.org/cmu_arctic/packed/cmu_us_ksp_arctic.tar.bz2 -o cmu_us_ksp_arctic.tar.bz2
tar -xvjf cmu_us_ksp_arctic.tar.bz2
curl http://festvox.org/cmu_arctic/packed/cmu_us_rms_arctic.tar.bz2 -o cmu_us_rms_arctic.tar.bz2
tar -xvjf cmu_us_rms_arctic.tar.bz2
curl http://festvox.org/cmu_arctic/packed/cmu_us_slt_arctic.tar.bz2 -o cmu_us_slt_arctic.tar.bz2
tar -xvjf cmu_us_slt_arctic.tar.bz2

Also, it looks like there are more archives used in the demo:
http://festvox.org/cmu_arctic/packed/
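The repetition above could be collapsed into a loop; a sketch covering the seven voices listed in this README:

#!/bin/bash
# download_dataset.sh (sketch): fetch and unpack the CMU ARCTIC voices.
for spk in awb bdl clb jmk ksp rms slt; do
    curl -O "http://festvox.org/cmu_arctic/packed/cmu_us_${spk}_arctic.tar.bz2"
    tar -xvjf "cmu_us_${spk}_arctic.tar.bz2"
done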

Add requirements.txt

I have installed:

pip install docopt
pip install numpy
pip install nnmnkwii
pip install pyworld
pip install tensorflow
pip install torch torchvision
pip install tensorboard_logger

pip freeze > requirements.txt

absl-py==0.7.1
astor==0.7.1
bandmat==0.7
Cython==0.29.7
decorator==4.4.0
docopt==0.6.2
fastdtw==0.3.2
gast==0.2.2
grpcio==1.20.1
h5py==2.9.0
Keras-Applications==1.0.7
Keras-Preprocessing==1.0.9
Markdown==3.1
mock==2.0.0
nnmnkwii==0.0.17
numpy==1.16.3
pbr==5.2.0
Pillow==6.0.0
pkg-resources==0.0.0
protobuf==3.7.1
pysptk==0.1.16
pyworld==0.2.8
scikit-learn==0.20.3
scipy==1.2.1
six==1.12.0
sklearn==0.0
tensorboard==1.13.1
tensorboard-logger==0.1.0
tensorflow==1.13.1
tensorflow-estimator==1.13.0
termcolor==1.1.0
torch==1.0.1.post2
torchvision==0.2.2.post3
tqdm==4.31.1
Werkzeug==0.15.2

ImportError: No module named cuda_functional

Hello @r9y9 ,
I have been running the module in Python 2.7 and encountered the following error:

Traceback (most recent call last):
  File "train.py", line 773, in <module>
    model_g = getattr(gantts.models, hp.generator)(**hp.generator_params)
  File "/home/sasa/nisa/gantts/gantts/models.py", line 150, in __init__
    from cuda_functional import SRU
ImportError: No module named cuda_functional
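Editor's note: cuda_functional is the module name used by the original SRU implementation, the optional dependency mentioned in the Installation section. One way to satisfy the import, assuming the file layout of the 2017-era SRU repository (unverified against this repo's pinned versions):

# Clone the SRU implementation and put cuda_functional on the module path
# (the file's location may differ between SRU versions).
git clone https://github.com/taolei87/sru
export PYTHONPATH=$PWD/sru:$PYTHONPATH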

assert np.allclose(x_lengths, y_lengths) AssertionError

I tried to run train.py using my own dataset and got this error:

assert np.allclose(x_lengths, y_lengths)
AssertionError

At first I thought the problem was a difference in duration between the wav files, so I decreased the duration of each wav file, but the problem remains.

mismatch

If I set generator_add_noise to True and discriminator_linguistic_condition to True, I get a size mismatch: m1: [23720 x 377], m2: [177 x 512]. What can I do to solve this problem?
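Editor's note: a hedged reading of the shapes, using the hyperparameter dump quoted later on this page (stream_sizes: [177], generator_noise_dim: 200): 177 + 200 = 377, which matches m1's second dimension, so the noise vector appears to be concatenated onto the 177-dimensional input while the first linear layer (weight 177 x 512) still expects 177 inputs. If that reading is right, the relevant in_dim values in hparams.py would need to grow by the concatenated dimensions; this is an inference from the error message, not a confirmed fix.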

Ask for help about using GAN with conv

Does anyone use GAN with convolutional networks?

In the paper "Generative adversarial network-based postfilter for statistical parametric speech synthesis", they use the following net structure (screenshot omitted).
I added this structure to gantts; however, the gan_d_warmup training always fails: the training loss decreases but the test loss oscillates, and the GAN model tends to judge every sample (real or fake) as real.
My code references https://github.com/bajibabu/postfilt_gan/blob/master/models.py .
I use crop=58, split the feature frames into [0, 58], [58, 2*58], ..., [(n-1)*58, n*58], ..., and run the discriminator on every fragment.

Has anyone tried a conv GAN? How is the performance?

Issue when using a database other than SLT

The SLT model built successfully, but when I try with my own database I get the following error. I also tried keeping the SLT directory intact and changing only the question file, wav, state, and phone labels; I get the same error. Is anything hard-coded internally?

mkdirs: data/dam_voice/Y_acoustic
Duration linguistic feature dim 576
Duration feature dim 5
Acoustic linguistic feature dim 585
Traceback (most recent call last):
  File "prepare_features_tts.py", line 237, in <module>
    print("Acoustic feature dim", Y_acoustic[0].shape[-1])
  File "/home/ram/installations/anaconda3/lib/python3.5/site-packages/nnmnkwii/datasets/__init__.py", line 126, in __getitem__
    *self.collected_files[idx])
TypeError: collect_features() takes 3 positional arguments but 3409 were given

Issue when using my own data

I have recorded my own voice and want to use it instead of the cmu files; however, I am getting the following error while running prepare_features_vc:
python prepare_features_vc.py data/arctic_sam/ fem sam --dst_dir=./data/vc_sam/
Command line args:
{'--dst_dir': './data/vc_sam/',
'--help': False,
'--max_files': '100',
'--overwrite': False,
'<DATA_ROOT>': 'data/arctic_sam/',
'<source_speaker>': 'fem',
'<target_speaker>': 'sam'}
Hyperparameters:
adversarial_streams: [True]
batch_size: 20
cache_size: 1200
discriminator: MLP
discriminator_linguistic_condition: False
discriminator_params: {'out_dim': 1, 'num_hidden': 2, 'dropout': 0.5, 'hidden_dim': 256, 'in_dim': 59, 'last_sigmoid': True}
frame_period: 5
generator: In2OutHighwayNet
generator_add_noise: False
generator_noise_dim: 200
generator_params: {'out_dim': None, 'num_hidden': 3, 'static_dim': 59, 'hidden_dim': 512, 'in_dim': None, 'dropout': 0.5}
has_dynamic_features: [True]
lr_decay_epoch: 10
lr_decay_schedule: False
mask_nth_mgc_for_adv_loss: 0
name: vc
nepoch: 200
num_workers: 1
optimizer_d: Adagrad
optimizer_d_params: {'lr': 0.01, 'weight_decay': 0}
optimizer_g: Adagrad
optimizer_g_params: {'lr': 0.01, 'weight_decay': 0}
order: 59
pin_memory: True
stream_sizes: [177]
windows: [(0, 0, array([1.])), (1, 1, array([-0.5, 0. , 0.5])), (1, 1, array([ 1., -2., 1.]))]
Traceback (most recent call last):
  File "prepare_features_vc.py", line 77, in <module>
    max_files=max_files))
  File "prepare_features_vc.py", line 40, in __init__
    max_files=max_files)
  File "/home/sakhors/voiceConv/env/lib/python3.5/site-packages/nnmnkwii/datasets/cmu_arctic.py", line 45, in __init__
    speaker, available_speakers))
ValueError: Unknown speaker 'fem'. It should be one of ['awb', 'bdl', 'clb', 'jmk', 'ksp', 'rms', 'slt']

OSError: [Errno 40] Too many levels of symbolic links:

I ran tts_demo.sh and hit this problem. I don't know how to fix it; maybe you can help me. Thanks a lot.

[luban@2d9ccf399183 gantts]$ ./tts_demo.sh
Experimental id:
Data dir: ./data/cmu_arctic_tts_order59
-1

./nnmnkwii_gallery/data/slt_arctic_full_data/
./data/cmu_arctic_tts_order59
Traceback (most recent call last):
  File "prepare_features_tts.py", line 186, in <module>
    subphone_features=hp_duration.subphone_features)
  File "prepare_features_tts.py", line 45, in __init__
    hp_acoustic.question_path)
  File "/nfs/project/miniconda3/lib/python3.6/site-packages/nnmnkwii/io/hts.py", line 353, in load_question_set
    with open(qs_file_name) as f:
OSError: [Errno 40] Too many levels of symbolic links: '/nfs/cold_project/caopan_i/gantts/nnmnkwii_gallery/data/questions-radio_dnn_416.hed'
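Editor's note: nnmnkwii_gallery is fetched as a git submodule (hence the --recursive flag in the Installation section's clone command), and dangling symlinks under it usually mean the submodule was never initialized. The standard git fix:

git submodule update --init --recursive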

Seems not converted in the ./generated folder?

Hi Ryuichi,
I am trying to reproduce your project "gantts": https://github.com/HudsonHuang/gantts
To convert clb to slt, I ran your code like this:

#source activate tensorflow
#export LD_LIBRARY_PATH=/usr/local/cuda/lib64/
#pip install -U pip
#conda install pytorch torchvision cuda80 -c soumith
#pip install http://download.pytorch.org/whl/cu80/torch-0.2.0.post3-cp35-cp35m-manylinux1_x86_64.whl
#pip install torchvision
python setup.py install
bash ./vc_demo.sh awb-clb1 /home/lab-huang.zhongyi/data/cmu_arctic clb awb

After training, I got files in ./generated, but they seem not converted; it's still clb's voice. Did I make a mistake in some step?
clb-awb1(1).zip
vc_gan_test2(1).zip

PS. I checked tensorboard and the loss is dropping normally (screenshot omitted).

Also, I modified vc_demo.sh and added two parameters to support specifying the speakers: https://github.com/HudsonHuang/gantts/blob/master/vc_demo.sh
I hope it doesn't matter.

This is excellent work, thank you!

Segmentation fault (core dumped)

I ran vc_demo.sh and it crashed with:
Segmentation fault (core dumped).

Then I used gdb to locate the problem:
Type "apropos word" to search for commands related to "word"...
"/home/xjl910940173/yutao/gantts-master/./vc_demo.sh": not in executable format: File format not recognized
[New LWP 2548]
Core was generated by `python3 train.py --hparams_name=vc --max_files=500 --w_d=0 --hparams=nepoch=200'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fd829c0b5a7 in ?? ()

I don't know how to fix it; maybe you can help me. Thanks a lot.

GAN conditioned on speaker id

What is the preferred way to add a speaker id to the current generator/discriminator architecture to make the GAN conditioned on speaker id?
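Editor's note: the repository itself doesn't implement speaker conditioning. A common, generic pattern (a plain-PyTorch sketch, every name hypothetical and not part of gantts) is to embed the speaker id and concatenate it to every input frame, then enlarge the downstream in_dim accordingly:

import torch
import torch.nn as nn

class SpeakerConditioning(nn.Module):
    # Hypothetical wrapper: concatenates a learned speaker embedding to
    # every frame of a (batch, time, feat_dim) input sequence.
    def __init__(self, feat_dim, n_speakers, spk_dim=16):
        super(SpeakerConditioning, self).__init__()
        self.spk_embed = nn.Embedding(n_speakers, spk_dim)
        self.out_dim = feat_dim + spk_dim  # use as the generator/discriminator in_dim

    def forward(self, x, speaker_id):
        # x: (batch, time, feat_dim); speaker_id: (batch,) int64
        e = self.spk_embed(speaker_id)                # (batch, spk_dim)
        e = e.unsqueeze(1).expand(-1, x.size(1), -1)  # repeat over time
        return torch.cat([x, e], dim=-1)              # (batch, time, out_dim)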

AssertionError

At the very end of training, I get the following error (screenshot omitted). What could be the cause? The number of training samples is 20.

ImportError: No module named nnmnkwii.datasets

./tts_demo.sh experiment_tts
slt_arctic_full_data already downloaded
Experimental id: experiment_tts
Data dir: ./data/cmu_arctic_tts_order59
Traceback (most recent call last):
  File "prepare_features_tts.py", line 17, in <module>
    from nnmnkwii.datasets import FileSourceDataset, FileDataSource
ImportError: No module named nnmnkwii.datasets

>>> import nnmnkwii
>>> nnmnkwii.__version__
'0.0.18+cf79e09'
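Editor's note: the unquoted module name in the ImportError is Python 2's message format, while the interactive session above (nnmnkwii 0.0.18) may be a different interpreter. A quick check, assuming both commands are on PATH:

which python python3
python -c "import nnmnkwii, nnmnkwii.datasets; print(nnmnkwii.__file__)"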
