keonlee9420 / PortaSpeech
PyTorch Implementation of PortaSpeech: Portable and High-Quality Generative Text-to-Speech
License: MIT License
Hi @keonlee9420, thanks for your code and work. Could you explain the key difference between this repo and the official repo?
Hi @keonlee9420, in word_level_pooling, a (256) is encoder_hidden and b (45) is the max_time of src_seq. I really want to know how to solve this problem. Thank you very much!
I notice that b (45) is variable.
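For context, the shape mismatch above arises in word-level pooling, where per-phoneme encoder outputs (hidden size 256) are averaged into one vector per word using per-word phoneme counts, and the phoneme axis (here 45) varies per utterance. A minimal pure-Python sketch of that pooling; names and shapes are illustrative, not the repo's exact code:

```python
def word_level_pooling(phoneme_hidden, phones_per_word):
    """Average contiguous phoneme vectors into one vector per word.

    phoneme_hidden: list of per-phoneme hidden vectors,
        length must equal sum(phones_per_word)
    phones_per_word: number of phonemes belonging to each word, in order
    """
    assert len(phoneme_hidden) == sum(phones_per_word), \
        "phoneme sequence length must match the phones_per_word counts"
    pooled, start = [], 0
    for n in phones_per_word:
        chunk = phoneme_hidden[start:start + n]
        dim = len(chunk[0])
        # mean over the n phoneme vectors of this word, per dimension
        pooled.append([sum(v[d] for v in chunk) / n for d in range(dim)])
        start += n
    return pooled
```

The variable word/phoneme counts are exactly why the real implementation must mask and pad per batch rather than assume a fixed time axis.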
Basically, I tried to run it in Google Colab.
1st cell
%cd /content/
!git clone https://github.com/keonlee9420/PortaSpeech
%cd /content/PortaSpeech/
!pip install -r /content/PortaSpeech/requirements.txt
2nd cell
id_big = '1VTotGmE42a19bevwgQ9mhPkXzQvKzl8q'
id_small = '1Y0IGlc4zJ7XN5sh4aPWLTeQ80D9ZhfbB'
!mkdir /content/PortaSpeech/output/
!mkdir /content/PortaSpeech/output/ckpt/
!mkdir /content/PortaSpeech/output/ckpt/DATASET/
%cd /content/PortaSpeech/output/ckpt/DATASET/
!gdown --id $id_big
!gdown --id $id_small
%cd /content/PortaSpeech
3rd cell
%cd /content/PortaSpeech
!python3 synthesize.py --text "Moved to Site-19 1993. Origin is as of yet unknown. It is constructed from concrete and rebar with traces of Krylon brand spray paint." \
--restore_step 125000 --mode single --dataset DATASET
and this is what I've got:
/content/PortaSpeech
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
[nltk_data] Downloading package cmudict to /root/nltk_data...
[nltk_data] Unzipping corpora/cmudict.zip.
2021-10-26 10:57:51.803863: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
File "synthesize.py", line 138, in <module>
args.dataset)
File "/content/PortaSpeech/utils/tools.py", line 19, in get_configs_of
os.path.join(config_dir, "preprocess.yaml"), "r"), Loader=yaml.FullLoader)
FileNotFoundError: [Errno 2] No such file or directory: './config/DATASET/preprocess.yaml'
What is this 'preprocess.yaml' exactly?
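For what it's worth, DATASET in the commands above is a literal placeholder: synthesize.py looks up ./config/&lt;dataset&gt;/preprocess.yaml, so --dataset must name a directory that actually exists under config/ (e.g. LJSpeech). A small sketch for listing valid choices; the helper name is mine, not the repo's:

```python
import os

def list_available_datasets(config_dir="./config"):
    """Return dataset names that have a preprocess.yaml under config/<name>/."""
    if not os.path.isdir(config_dir):
        return []
    return sorted(
        name for name in os.listdir(config_dir)
        if os.path.isfile(os.path.join(config_dir, name, "preprocess.yaml"))
    )
```

Running this from the repo root shows which strings are valid for --dataset, which avoids the FileNotFoundError above.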
Hi, thanks for your contribution to TTS; it's such great work!!
It seems perfect for most sentences when trying python3 synthesize.py --text "MY_SENTENCE" --restore_step 125000 --mode single --dataset LJSpeech, but when I tried a sentence with "hello" at the front, the sound of "hello" became long and weird. Here is the mel-spectrogram of "Hello, glad to see you."
You can clearly see that the large area on the left represents the word "hello".
I checked your training data in preprocessed_data/LJSpeech/train.txt, and I couldn't find the word "hello" in it.
Is this problem caused merely by the phoneme count of the word, did I do something wrong, or is it something else?
Anything would help, thank you.
Dear sir,
First of all, I really appreciate your contribution to this amazing repo! However, it would be perfect if you could add multi-speaker TTS support here. I can see that spker_emb is not used at the moment. When might you consider this and extend the capability of this impressive model?
Thanks,
Max
Can anyone share a pre-trained model for AISHELL3?
May I ask: is the current code actually trainable? In the text processing, the code uses characters as units rather than space-separated phonemes, which does not match phones_per_word; that is, len(phones) != sum(phones_per_word). The cause I found is that phones is built character by character.
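The invariant described above can be checked with a tiny sketch, which also shows why character-level splitting breaks it (the pinyin-style phoneme tokens below are illustrative):

```python
def alignment_ok(phones, phones_per_word):
    """The pipeline expects one entry in `phones` per phoneme token,
    so the total must equal the sum of per-word phoneme counts."""
    return len(phones) == sum(phones_per_word)

# Space-separated phoneme tokens: "ni hao" -> two words, two phonemes each
phones_by_token = "n i3 h ao3".split()   # ['n', 'i3', 'h', 'ao3'] -> length 4
phones_per_word = [2, 2]                 # sums to 4: consistent

# Character-level splitting produces a different length and breaks the check
phones_by_char = list("ni3hao3")         # 7 character "phonemes"
```

If phones is tokenized per character while phones_per_word counts real phonemes, every utterance fails this check, which matches the symptom reported above.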
File "preprocess.py", line 20, in <module>
preprocessor.build_from_path()
File "D:\UW-Detection\PortaSpeech\preprocessor\preprocessor.py", line 129, in build_from_path
n_frames += n
UnboundLocalError: local variable 'n' referenced before assignment
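The UnboundLocalError above is the classic pattern of a variable (`n`) that is only assigned on a branch that did not run: if processing an utterance yields nothing, the accumulation line still references `n`. A hedged pure-Python sketch of the shape of the bug and a guard; function names are illustrative, not the repo's exact code:

```python
def build_from_path(items, process_utterance):
    """Accumulate frame counts, skipping utterances that fail to process."""
    n_frames = 0
    for item in items:
        ret = process_utterance(item)
        if ret is None:        # e.g. missing alignment: nothing was assigned
            continue           # without this guard, `n` below is unbound
        info, n = ret
        n_frames += n
    return n_frames
```

When this error shows up on BiaoBei or LJSpeech, it usually means every (or the first) utterance failed earlier in the loop, so the real fix is to find out why process_utterance is bailing out, not just to initialize `n`.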
This problem shows up when I use NATSpeech preprocessing (BiaoBei dataset).
It also shows up when I use the LJSpeech dataset with this code.
I tried declaring the variable global, but that doesn't help ......
Is it possible to train on Chinese?
May I ask: is the current code actually trainable? In the text processing, the code uses characters as units rather than space-separated phonemes, which does not match phones_per_word; that is, len(phones) != sum(phones_per_word). The cause I found is that phones is built character by character.
Why is the validation loss rising?
Traceback (most recent call last):
File "synthesize.py", line 153, in <module>
model = get_model(args, configs, device, train=False)
File "/content/PortaSpeech/utils/model.py", line 21, in get_model
model.load_state_dict(ckpt["model"])
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1407, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for PortaSpeech:
Missing key(s) in state_dict: "linguistic_encoder.phoneme_encoder.attn_layers.3.emb_rel_k", "linguistic_encoder.phoneme_encoder.attn_layers.3.emb_rel_v", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_q.weight", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_q.bias", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_k.weight", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_k.bias", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_v.weight", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_v.bias", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_o.weight", "linguistic_encoder.phoneme_encoder.attn_layers.3.conv_o.bias", "linguistic_encoder.phoneme_encoder.norm_layers_1.3.gamma", "linguistic_encoder.phoneme_encoder.norm_layers_1.3.beta", "linguistic_encoder.phoneme_encoder.ffn_layers.3.conv.weight", "linguistic_encoder.phoneme_encoder.ffn_layers.3.conv.bias", "linguistic_encoder.phoneme_encoder.norm_layers_2.3.gamma", "linguistic_encoder.phoneme_encoder.norm_layers_2.3.beta", "linguistic_encoder.word_encoder.attn_layers.3.emb_rel_k", "linguistic_encoder.word_encoder.attn_layers.3.emb_rel_v", "linguistic_encoder.word_encoder.attn_layers.3.conv_q.weight", "linguistic_encoder.word_encoder.attn_layers.3.conv_q.bias", "linguistic_encoder.word_encoder.attn_layers.3.conv_k.weight", "linguistic_encoder.word_encoder.attn_layers.3.conv_k.bias", "linguistic_encoder.word_encoder.attn_layers.3.conv_v.weight", "linguistic_encoder.word_encoder.attn_layers.3.conv_v.bias", "linguistic_encoder.word_encoder.attn_layers.3.conv_o.weight", "linguistic_encoder.word_encoder.attn_layers.3.conv_o.bias", "linguistic_encoder.word_encoder.norm_layers_1.3.gamma", "linguistic_encoder.word_encoder.norm_layers_1.3.beta", "linguistic_encoder.word_encoder.ffn_layers.3.conv.weight", "linguistic_encoder.word_encoder.ffn_layers.3.conv.bias", "linguistic_encoder.word_encoder.norm_layers_2.3.gamma", "linguistic_encoder.word_encoder.norm_layers_2.3.beta", 
"variational_generator.flow.flows.0.enc.in_layers.3.bias", "variational_generator.flow.flows.0.enc.in_layers.3.weight_g", "variational_generator.flow.flows.0.enc.in_layers.3.weight_v", "variational_generator.flow.flows.0.enc.res_skip_layers.3.bias", "variational_generator.flow.flows.0.enc.res_skip_layers.3.weight_g", "variational_generator.flow.flows.0.enc.res_skip_layers.3.weight_v", "variational_generator.flow.flows.2.enc.in_layers.3.bias", "variational_generator.flow.flows.2.enc.in_layers.3.weight_g", "variational_generator.flow.flows.2.enc.in_layers.3.weight_v", "variational_generator.flow.flows.2.enc.res_skip_layers.3.bias", "variational_generator.flow.flows.2.enc.res_skip_layers.3.weight_g", "variational_generator.flow.flows.2.enc.res_skip_layers.3.weight_v", "variational_generator.flow.flows.4.enc.in_layers.3.bias", "variational_generator.flow.flows.4.enc.in_layers.3.weight_g", "variational_generator.flow.flows.4.enc.in_layers.3.weight_v", "variational_generator.flow.flows.4.enc.res_skip_layers.3.bias", "variational_generator.flow.flows.4.enc.res_skip_layers.3.weight_g", "variational_generator.flow.flows.4.enc.res_skip_layers.3.weight_v", "variational_generator.flow.flows.6.enc.in_layers.3.bias", "variational_generator.flow.flows.6.enc.in_layers.3.weight_g", "variational_generator.flow.flows.6.enc.in_layers.3.weight_v", "variational_generator.flow.flows.6.enc.res_skip_layers.3.bias", "variational_generator.flow.flows.6.enc.res_skip_layers.3.weight_g", "variational_generator.flow.flows.6.enc.res_skip_layers.3.weight_v", "variational_generator.dec_wn.in_layers.3.bias", "variational_generator.dec_wn.in_layers.3.weight_g", "variational_generator.dec_wn.in_layers.3.weight_v", "variational_generator.dec_wn.res_skip_layers.3.bias", "variational_generator.dec_wn.res_skip_layers.3.weight_g", "variational_generator.dec_wn.res_skip_layers.3.weight_v", "postnet.flows.24.logs", "postnet.flows.24.bias", "postnet.flows.25.weight", "postnet.flows.26.start.bias", 
"postnet.flows.26.start.weight_g", "postnet.flows.26.start.weight_v", "postnet.flows.26.end.weight", "postnet.flows.26.end.bias", "postnet.flows.26.cond_layer.bias", "postnet.flows.26.cond_layer.weight_g", "postnet.flows.26.cond_layer.weight_v", "postnet.flows.26.wn.in_layers.0.bias", "postnet.flows.26.wn.in_layers.0.weight_g", "postnet.flows.26.wn.in_layers.0.weight_v", "postnet.flows.26.wn.in_layers.1.bias", "postnet.flows.26.wn.in_layers.1.weight_g", "postnet.flows.26.wn.in_layers.1.weight_v", "postnet.flows.26.wn.in_layers.2.bias", "postnet.flows.26.wn.in_layers.2.weight_g", "postnet.flows.26.wn.in_layers.2.weight_v", "postnet.flows.26.wn.res_skip_layers.0.bias", "postnet.flows.26.wn.res_skip_layers.0.weight_g", "postnet.flows.26.wn.res_skip_layers.0.weight_v", "postnet.flows.26.wn.res_skip_layers.1.bias", "postnet.flows.26.wn.res_skip_layers.1.weight_g", "postnet.flows.26.wn.res_skip_layers.1.weight_v", "postnet.flows.26.wn.res_skip_layers.2.bias", "postnet.flows.26.wn.res_skip_layers.2.weight_g", "postnet.flows.26.wn.res_skip_layers.2.weight_v", "postnet.flows.27.logs", "postnet.flows.27.bias", "postnet.flows.28.weight", "postnet.flows.29.start.bias", "postnet.flows.29.start.weight_g", "postnet.flows.29.start.weight_v", "postnet.flows.29.end.weight", "postnet.flows.29.end.bias", "postnet.flows.29.cond_layer.bias", "postnet.flows.29.cond_layer.weight_g", "postnet.flows.29.cond_layer.weight_v", "postnet.flows.29.wn.in_layers.0.bias", "postnet.flows.29.wn.in_layers.0.weight_g", "postnet.flows.29.wn.in_layers.0.weight_v", "postnet.flows.29.wn.in_layers.1.bias", "postnet.flows.29.wn.in_layers.1.weight_g", "postnet.flows.29.wn.in_layers.1.weight_v", "postnet.flows.29.wn.in_layers.2.bias", "postnet.flows.29.wn.in_layers.2.weight_g", "postnet.flows.29.wn.in_layers.2.weight_v", "postnet.flows.29.wn.res_skip_layers.0.bias", "postnet.flows.29.wn.res_skip_layers.0.weight_g", "postnet.flows.29.wn.res_skip_layers.0.weight_v", "postnet.flows.29.wn.res_skip_layers.1.bias", 
"postnet.flows.29.wn.res_skip_layers.1.weight_g", "postnet.flows.29.wn.res_skip_layers.1.weight_v", "postnet.flows.29.wn.res_skip_layers.2.bias", "postnet.flows.29.wn.res_skip_layers.2.weight_g", "postnet.flows.29.wn.res_skip_layers.2.weight_v", "postnet.flows.30.logs", "postnet.flows.30.bias", "postnet.flows.31.weight", "postnet.flows.32.start.bias", "postnet.flows.32.start.weight_g", "postnet.flows.32.start.weight_v", "postnet.flows.32.end.weight", "postnet.flows.32.end.bias", "postnet.flows.32.cond_layer.bias", "postnet.flows.32.cond_layer.weight_g", "postnet.flows.32.cond_layer.weight_v", "postnet.flows.32.wn.in_layers.0.bias", "postnet.flows.32.wn.in_layers.0.weight_g", "postnet.flows.32.wn.in_layers.0.weight_v", "postnet.flows.32.wn.in_layers.1.bias", "postnet.flows.32.wn.in_layers.1.weight_g", "postnet.flows.32.wn.in_layers.1.weight_v", "postnet.flows.32.wn.in_layers.2.bias", "postnet.flows.32.wn.in_layers.2.weight_g", "postnet.flows.32.wn.in_layers.2.weight_v", "postnet.flows.32.wn.res_skip_layers.0.bias", "postnet.flows.32.wn.res_skip_layers.0.weight_g", "postnet.flows.32.wn.res_skip_layers.0.weight_v", "postnet.flows.32.wn.res_skip_layers.1.bias", "postnet.flows.32.wn.res_skip_layers.1.weight_g", "postnet.flows.32.wn.res_skip_layers.1.weight_v", "postnet.flows.32.wn.res_skip_layers.2.bias", "postnet.flows.32.wn.res_skip_layers.2.weight_g", "postnet.flows.32.wn.res_skip_layers.2.weight_v", "postnet.flows.33.logs", "postnet.flows.33.bias", "postnet.flows.34.weight", "postnet.flows.35.start.bias", "postnet.flows.35.start.weight_g", "postnet.flows.35.start.weight_v", "postnet.flows.35.end.weight", "postnet.flows.35.end.bias", "postnet.flows.35.cond_layer.bias", "postnet.flows.35.cond_layer.weight_g", "postnet.flows.35.cond_layer.weight_v", "postnet.flows.35.wn.in_layers.0.bias", "postnet.flows.35.wn.in_layers.0.weight_g", "postnet.flows.35.wn.in_layers.0.weight_v", "postnet.flows.35.wn.in_layers.1.bias", "postnet.flows.35.wn.in_layers.1.weight_g", 
"postnet.flows.35.wn.in_layers.1.weight_v", "postnet.flows.35.wn.in_layers.2.bias", "postnet.flows.35.wn.in_layers.2.weight_g", "postnet.flows.35.wn.in_layers.2.weight_v", "postnet.flows.35.wn.res_skip_layers.0.bias", "postnet.flows.35.wn.res_skip_layers.0.weight_g", "postnet.flows.35.wn.res_skip_layers.0.weight_v", "postnet.flows.35.wn.res_skip_layers.1.bias", "postnet.flows.35.wn.res_skip_layers.1.weight_g", "postnet.flows.35.wn.res_skip_layers.1.weight_v", "postnet.flows.35.wn.res_skip_layers.2.bias", "postnet.flows.35.wn.res_skip_layers.2.weight_g", "postnet.flows.35.wn.res_skip_layers.2.weight_v".
size mismatch for linguistic_encoder.abs_position_enc: copying a param with shape torch.Size([1, 1001, 128]) from checkpoint, the shape in current model is torch.Size([1, 1001, 192]).
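The 128-vs-192 pattern in this dump suggests a checkpoint/config mismatch: the small and big released checkpoints use different hidden sizes, so loading one into a model built from the other's model.yaml fails with exactly this flood of missing keys and size mismatches. A hedged sketch for diffing shapes before calling load_state_dict (pure Python over shape tuples; in practice you would read them from the checkpoint and the model's state dicts):

```python
def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Map param name -> (checkpoint shape, model shape) where they differ."""
    shared = ckpt_shapes.keys() & model_shapes.keys()
    return {k: (ckpt_shapes[k], model_shapes[k])
            for k in sorted(shared)
            if ckpt_shapes[k] != model_shapes[k]}
```

If this reports mismatches on essentially every layer, as above, the cure is to point --dataset/config at the variant matching the downloaded checkpoint rather than to patch individual keys.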
File "train.py", line 122, in main
model_update(model, step, G_loss, optG_fs2)
File "train.py", line 77, in model_update
loss = (loss / grad_acc_step).backward()
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Found dtype Long but expected Float
Hi @keonlee9420, this problem occurs when the loss function is back-propagating. How can I solve it?
This is the dtype of the loss:
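"Found dtype Long but expected Float" usually means one operand of a floating-point loss is an integer tensor (e.g. a Long-typed target or duration fed into an MSE-style loss). A minimal reproduction and fix, independent of this repo's exact loss code:

```python
import torch
import torch.nn.functional as F

pred = torch.tensor([0.5, 1.5, 2.0], requires_grad=True)
target = torch.tensor([1, 2, 2])        # integer literal tensor -> dtype Long

# F.mse_loss(pred, target) would raise: Found dtype Long but expected Float
loss = F.mse_loss(pred, target.float())  # cast the integer side before the loss
loss.backward()
```

So the thing to check is every tensor entering G_loss: any duration, pitch, or target built with integer dtype needs a .float() before the float loss and backward().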
2022-11-11 22:31:08.004017: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll
Device of PortaSpeech: cpu
Traceback (most recent call last):
File "synthesize.py", line 153, in <module>
model = get_model(args, configs, device, train=False)
File "D:\projects\PortaSpeech\utils\model.py", line 21, in get_model
model.load_state_dict(ckpt["model"])
File "C:\ProgramData\Miniconda3\envs\tts_env\lib\site-packages\torch\nn\modules\module.py", line 1223, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for PortaSpeech:
Missing key(s) in state_dict: "linguistic_encoder.phoneme_encoder.attn_layers.3.emb_rel_k", "linguistic_encoder.phoneme_encoder.attn_layers.3.emb_rel_v", ... (the same list of missing linguistic_encoder, variational_generator, and postnet keys as in the traceback above).
size mismatch for linguistic_encoder.abs_position_enc: copying a param with shape torch.Size([1, 1001, 128]) from checkpoint, the shape in current model is torch.Size([1, 1001, 192]).
size mismatch for linguistic_encoder.kv_position_enc: copying a param with shape torch.Size([1, 1001, 128]) from checkpoint, the shape in current model is torch.Size([1, 1001, 192]).
size mismatch for linguistic_encoder.q_position_enc: copying a param with shape torch.Size([1, 1001, 128]) from checkpoint, the shape in current model is torch.Size([1, 1001, 192]).
size mismatch for linguistic_encoder.src_emb.weight: copying a param with shape torch.Size([361, 128]) from checkpoint, the shape in current model is torch.Size([361, 192]).
size mismatch for linguistic_encoder.phoneme_encoder.attn_layers.0.emb_rel_k: copying a param with shape torch.Size([1, 9, 64]) from checkpoint, the shape in current model is torch.Size([1, 9, 96]).
size mismatch for linguistic_encoder.phoneme_encoder.attn_layers.0.conv_q.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.phoneme_encoder.ffn_layers.0.conv.weight: copying a param with shape torch.Size([128, 128, 3]) from checkpoint, the shape in current model is torch.Size([192, 192, 5]).
... (analogous 128-vs-192, 64-vs-96, and kernel 3-vs-5 size mismatches are reported for every remaining attention, norm, and FFN layer of phoneme_encoder and word_encoder).
size mismatch for linguistic_encoder.word_encoder.attn_layers.0.conv_k.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.0.conv_k.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.0.conv_v.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.0.conv_v.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.0.conv_o.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.0.conv_o.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.emb_rel_k: copying a param with shape torch.Size([1, 9, 64]) from checkpoint, the shape in current model is torch.Size([1, 9, 96]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.emb_rel_v: copying a param with shape torch.Size([1, 9, 64]) from checkpoint, the shape in current model is torch.Size([1, 9, 96]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_q.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_q.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_k.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_k.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_v.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_v.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_o.weight: copying a param with shape torch.Size([128, 128, 1]) from checkpoint, the shape in current model is torch.Size([192, 192, 1]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.1.conv_o.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([192]).
size mismatch for linguistic_encoder.word_encoder.attn_layers.2.emb_rel_k: copying a param with shape torch.Size([1, 9, 64]) from...
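The consistent 128-vs-192 pattern in the log suggests the checkpoint was trained with the small configuration (hidden size 128) while the current model was built with the base configuration (hidden size 192), so the model config must match the checkpoint being loaded. As a quick sanity check before calling load_state_dict, you can diff the parameter shapes; this is a minimal sketch where the dictionary contents are illustrative, and with PyTorch you would build the two dicts from the loaded checkpoint and model.state_dict() (the checkpoint's key layout depends on how it was saved):

```python
def shape_mismatches(ckpt_shapes, model_shapes):
    """Return {param_name: (checkpoint_shape, model_shape)} for every
    parameter present in both dicts whose shapes disagree."""
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes
        if name in model_shapes and ckpt_shapes[name] != model_shapes[name]
    }

# Illustrative shapes mirroring the error above: a "small" checkpoint
# (hidden 128) loaded into a "base" model (hidden 192).
ckpt = {"phoneme_encoder.norm_layers_1.0.gamma": (128,)}
model = {"phoneme_encoder.norm_layers_1.0.gamma": (192,)}
print(shape_mismatches(ckpt, model))
```

If this prints any entries, the config and checkpoint disagree and loading will fail exactly as shown above.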
Hi @keonlee9420, after training on the AISHELL3 dataset, the synthesized audio is all electrical noise. I want to know why this happens.
For training TTS on a multi-speaker dataset, is this alignment preprocessing necessary? Does the provided preprocessing script support Chinese?
This can happen because of blurring of the mel-spectrogram (in most cases the start of the sound should have a sharp edge), which can be caused by bad attention. I would recommend trying diagonal guided attention (DGA) during training (https://arxiv.org/abs/1710.08969).
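For reference, the guided attention loss from that paper can be sketched in a few lines of NumPy. The weight W[n, t] = 1 - exp(-((n/N - t/T)^2) / (2 g^2)) is near zero on the diagonal and grows off it, so minimizing the mean of A * W pushes the attention matrix A toward a monotonic, near-diagonal alignment; the axis convention (text positions by mel frames) and g = 0.2 here follow the paper, but this is an assumption about the shapes, not the repo's implementation:

```python
import numpy as np

def guided_attention_loss(attn, g=0.2):
    """Diagonal guided attention penalty (Tachibana et al., 2017).

    attn : (N, T) attention matrix, N text positions x T mel frames.
    The weight W is ~0 where n/N == t/T and grows off-diagonal, so
    mean(attn * W) penalizes non-monotonic alignments.
    """
    N, T = attn.shape
    n = np.arange(N)[:, None] / N
    t = np.arange(T)[None, :] / T
    W = 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))
    return float((attn * W).mean())

# A perfectly diagonal alignment is penalized far less than a flat one.
diag = np.eye(50)
flat = np.full((50, 50), 1.0 / 50)
assert guided_attention_loss(diag) < guided_attention_loss(flat)
```

In training you would add this term (with a small weight) to the existing loss for the first few thousand steps, then optionally anneal it away once the alignment has formed.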
Hi, thank you very much for your work and for sharing it.
In the paper, the proposed method is compared with many other methods in terms of MOS, parameter count, and speed. You computed the RTF on a GPU; did you also compare the RTF / speed when running on a CPU?
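For context, RTF (real-time factor) is simply synthesis wall-clock time divided by the duration of the generated audio, so values below 1.0 mean faster than real time. A minimal sketch of measuring it on any device; the synthesize callable here is a stand-in, not the repo's API:

```python
import time

def real_time_factor(synthesize, sample_rate):
    """Time one synthesis call and divide by the audio's duration.

    synthesize : any callable returning a 1-D waveform sequence.
    sample_rate : samples per second of the returned waveform.
    """
    start = time.perf_counter()
    wav = synthesize()
    elapsed = time.perf_counter() - start
    return elapsed / (len(wav) / sample_rate)

# Trivial stand-in: "synthesizing" 5 s of silence at 22050 Hz takes
# far less than 5 s, so the RTF comes out well below 1.0.
rtf = real_time_factor(lambda: [0.0] * (5 * 22050), 22050)
assert rtf < 1.0
```

On CPU you would simply run the same measurement without moving the model to CUDA; CPU RTF is usually several times higher than GPU RTF for flow-based decoders.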
Hi, the link to the pretrained models seems to lead to an empty directory. Could you fix it?
From your experience, how do these models rank in terms of quality?
I saw that the normal model for LJSpeech was released, thanks! Will the small model also be released?
Hi @keonlee9420, I cannot understand the meaning of inputs[11:] in model/loss.py:
def forward(self, inputs, predictions, step):
    (
        mel_targets,
        *_,
    ) = inputs[11:]
Thank you very much!
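For what it's worth, that snippet is plain Python starred unpacking: inputs[11:] slices the batch tuple from index 11 onward, binds its first element to mel_targets, and *_ discards the remainder. A tiny self-contained illustration (the values 0..13 are stand-ins for the batch fields, whose real meaning depends on the dataset's collate function):

```python
# inputs mimics the ground-truth batch tuple; only the layout matters here.
inputs = tuple(range(14))  # stand-in values 0..13

(
    mel_targets,
    *_,
) = inputs[11:]  # slice from index 11, keep the first item, discard the rest

assert mel_targets == 11
assert _ == [12, 13]  # the starred target collects the leftovers into a list
```

So the loss only needs the mel-spectrogram target out of everything past index 11, and the starred underscore is just an idiomatic "ignore the rest".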
Hello,
Thanks for the code. Do you know if I can fine-tune the model with 30 minutes of data?
Hi@keonlee9420, Thank You very much!
def get_mask_from_lengths(lengths, max_len=None):
    batch_size = lengths.shape[0]
    if max_len is None:
        max_len = torch.max(lengths).item()
    ids = torch.arange(0, max_len).unsqueeze(0).expand(batch_size, -1).to(lengths.device)
    mask = ids >= lengths.unsqueeze(1).expand(-1, max_len)
    return ~mask
In PortaSpeech the return value is ~mask, while in DiffGAN-TTS it is mask. I want to know the difference between them!
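The two versions just use opposite boolean conventions for the same information, and each repo's downstream code expects its own convention. A minimal pure-Python sketch of both, with plain booleans standing in for the tensor values:

```python
def padding_mask(lengths, max_len):
    """DiffGAN-TTS convention: True marks PADDED positions."""
    return [[t >= L for t in range(max_len)] for L in lengths]

def validity_mask(lengths, max_len):
    """PortaSpeech convention (the ~mask above): True marks VALID positions."""
    return [[not m for m in row] for row in padding_mask(lengths, max_len)]

lengths = [3, 5]
assert padding_mask(lengths, 5)[0] == [False, False, False, True, True]
assert validity_mask(lengths, 5)[0] == [True, True, True, False, False]
```

Code that calls masked_fill on padding wants the first convention; code that multiplies hidden states by the mask to zero out padding wants the second. Flipping with ~ converts one into the other, so neither is more correct, they just have to match their consumers.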