
weeablind's Introduction

Weeablind

A program to dub multi-lingual media and anime using modern AI speech synthesis, diarization, language identification, and voice cloning.

A blind anime girl with an audio waveform for eyes. She has green and purple hair, a cozy green sweater, and purple barrettes, above the words "Weea-Blind." The image was generated by the DALL-E AI.

Why

Many shows, movies, news segments, interviews, and videos will never receive proper dubs in other languages, and dubbing something from scratch can be an enormous undertaking. This presents a common accessibility hurdle for people with blindness, dyslexia, or learning disabilities, or simply folks who don't enjoy reading subtitles. This program aims to create a pleasant alternative for folks facing these struggles.

This software is a product of war. My sister turned me onto my now-favorite comedy anime "The Disastrous Life of Saiki K." but Netflix never ordered a dub for the 2nd season. I'm blind and cannot and will not ever be able to read subtitles, but I MUST know how the story progresses! Netflix has forced my hand and I will bring AI-dubbed anime to the blind!

How

This project relies on some rudimentary slapping together of state-of-the-art technologies. It uses numerous audio processing libraries and techniques to analyze and synthesize speech that tries to stay in line with the source video file. It primarily relies on FFmpeg and pydub for audio and video editing, Coqui TTS for speech synthesis, SpeechBrain for language identification, and pyannote.audio for speaker diarization.

You have the option of dubbing every subtitle in the video, setting the start and end times, dubbing only foreign-language content, or full-blown multi-speaker dubbing with speaking-rate and volume matching.

When?

This project is currently what some might call an alpha. The major, core functionality is in place, and it's possible to use by cloning the repo, but it's only starting to be ready for a first release. There are numerous optimizations, UX improvements, and refactors that need to happen before a first release candidate. Stay tuned for regular updates, and feel free to extend a hand with contributions, testing, or suggestions if this is something you're interested in.

The Name

I had the idea to call the software Weeablind as a portmanteau of Weeaboo (someone a little too obsessed with anime) and blind. I might change it to something more catchy in the future, like Blindtaku or DubHub, because the software can be used for far more than just anime.

Setup

There are currently no prebuilt binaries to download. This is something I am looking into, but many of these dependencies are not easy to bundle with something like PyInstaller.

The program works best on Linux, but will also run on Windows.

System Prerequisites

You will need to install FFmpeg on your system and make sure it's callable from the terminal or on your system PATH.
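One quick way to confirm the external tools are reachable before launching the program is to check the PATH from Python. This is a minimal sketch (not part of Weeablind itself); the tool names are just the ones this README mentions:

```python
import shutil

def check_tool(name: str) -> bool:
    """Return True if an executable called `name` can be found on the system PATH."""
    return shutil.which(name) is not None

if __name__ == "__main__":
    for tool in ("ffmpeg", "espeak-ng"):
        status = "found" if check_tool(tool) else "MISSING - install it or add it to PATH"
        print(f"{tool}: {status}")
```

If `ffmpeg` comes back missing even though it's installed, the install location is not on PATH for the shell you launched Python from.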

For Coqui TTS, you will also need eSpeak NG, which you can get from your package manager on Linux or from its GitHub releases page on Windows.

On Windows, pip requires the MSVC Build Tools to build Coqui. You can install them here: https://visualstudio.microsoft.com/visual-cpp-build-tools/

Coqui TTS and Pyannote diarization will both perform better if you have CUDA set up on your system to use your GPU. This should work out of the box on Linux, but getting it set up on Windows takes some doing; this blog post should walk you through the process. If you can't get it working, don't fret: you can still run them on your CPU.

The latest version of Python works on Linux, but Spleeter only works on 3.10, and Pyannote can be finicky with that too. 3.10 seems to work best on Windows; you can get it from the Microsoft Store.

Setup from Source

To use the project, you'll need to clone the repository and install the dependencies in a virtual environment.

git clone https://github.com/FlorianEagox/weeablind.git
cd weeablind
python3.10 -m venv venv
# Windows
.\venv\Scripts\activate
# Linux
source ./venv/bin/activate

This project has a lot of dependencies, and pip can struggle with conflicts, so it's best to install from the lock file like this:

pip install -r requirements-win-310.txt --no-deps

You can try the regular requirements file, but it can take a heck of a long time and sometimes requires some rejiggering.

Installing the dependencies can take a hot minute and uses a lot of space (~8 GB).

If you don't need certain features, for instance language filtering, you can omit speechbrain from the requirements file.

Once this is complete, you can run the program with:

python weeablind.py

Usage

Start by either selecting a video from your computer or pasting a link to a YouTube video and pressing enter. It should download the video and load the subs and audio.

Loading a video

Once a video is loaded, you can preview the subtitles that will be dubbed. If the wrong language is loaded, or the wrong audio stream, switch to the streams tab and select the correct ones.

Cropping

You can specify a start and end time if you only need to dub a section of the video, for example to skip the opening theme and credits of a show. Use timecode syntax like 2:17 and press enter.
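Timecodes like `2:17` can be converted to seconds by treating each colon-separated field as a base-60 digit. This hypothetical helper (`parse_timecode` is not Weeablind's actual parser, just an illustration of the syntax) shows the idea:

```python
def parse_timecode(tc: str) -> float:
    """Convert 'SS', 'MM:SS', or 'HH:MM:SS' (fractions allowed) into seconds."""
    parts = tc.strip().split(":")
    if not 1 <= len(parts) <= 3:
        raise ValueError(f"bad timecode: {tc!r}")
    seconds = 0.0
    for part in parts:
        # Each new field shifts the accumulated value up one base-60 unit.
        seconds = seconds * 60 + float(part)
    return seconds

print(parse_timecode("2:17"))     # 137.0
print(parse_timecode("1:02:03"))  # 3723.0
```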

Configuring Voices

By default, a "Sample" voice should be initialized. You can play around with different configurations and test the voice before dubbing with the "Sample Voice" button in the "Configure Voices" tab. When you have parameters you're happy with, clicking "Update Voices" will re-assign it to that slot.

If you choose the SYSTEM TTS engine, the program will use Windows' SAPI5 narrator or Linux espeak voices by default. This is extremely fast but sounds very robotic. Selecting Coqui gives you a TON of options to play around with, but you will be prompted to download often very heavy TTS models. VCTK/VITS is my favorite model to dub with as it's very quick, even on CPU, and there are hundreds of speakers to choose from. It is loaded by default. If you have run diarization, you can select different voices from the listbox and change their properties as well.

Language Filtering

In the subtitles tab, you can filter the subtitles to exclude lines spoken in your selected language so only the foreign-language lines get dubbed. This is useful for multi-lingual videos, but not for videos entirely in one language.
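The program does this with SpeechBrain language identification on the audio; the filtering logic itself boils down to keeping only lines whose detected language differs from yours. A minimal sketch with the detector injected as a callable (the `fake_detect` stand-in is purely illustrative, so the example runs without any models):

```python
def filter_foreign(subs, detect_lang, native_lang="en"):
    """Keep only subtitle entries whose detected language differs from the
    viewer's native language; only those lines get dubbed."""
    return [sub for sub in subs if detect_lang(sub["text"]) != native_lang]

# Toy detector standing in for a real language-ID model.
def fake_detect(text):
    return "ja" if any(ord(ch) > 0x3000 for ch in text) else "en"

subs = [
    {"text": "こんにちは", "start": 0.0, "end": 1.2},
    {"text": "Hi there!", "start": 1.2, "end": 2.0},
]
print(filter_foreign(subs, fake_detect))  # only the Japanese line remains
```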

Diarization

Running diarization will attempt to assign the correct speaker to each subtitle and generate random voices for the total number of speakers detected. In the future, you'll be able to specify the diarization pipeline and the number of speakers if you know it ahead of time. Diarization is only useful for videos with multiple speakers, and the accuracy can vary massively.
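Diarization produces timestamped speaker turns; matching them to subtitles amounts to picking, for each subtitle interval, the turn with the most temporal overlap. This is a hedged sketch of that matching step only (not the actual pyannote pipeline code, and the function names are made up for illustration):

```python
def overlap(a_start, a_end, b_start, b_end):
    """Length of the intersection of two time intervals, in seconds."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_speakers(subs, turns):
    """For each (start, end) subtitle, return the label of the diarization
    turn (start, end, label) it overlaps most, or None if none overlap."""
    labels = []
    for s_start, s_end in subs:
        best = max(turns, key=lambda t: overlap(s_start, s_end, t[0], t[1]), default=None)
        if best is None or overlap(s_start, s_end, best[0], best[1]) == 0.0:
            labels.append(None)  # subtitle falls outside every detected turn
        else:
            labels.append(best[2])
    return labels

turns = [(0.0, 4.0, "SPEAKER_00"), (4.0, 9.0, "SPEAKER_01")]
print(assign_speakers([(1.0, 2.0), (5.0, 8.0), (20.0, 21.0)], turns))
# ['SPEAKER_00', 'SPEAKER_01', None]
```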

Background Isolation

In the "Streams" tab, you can run vocal isolation, which will attempt to remove the vocals from your source video track but retain the background. If you're using a multi-lingual video and running language filtering as well, you'll need to run that first to keep the English (or whatever the source language is) vocals.

Dubbing

Once you've configured things how you like, you can press the big, JUICY run dubbing button. This can take a while to run. Once completed, you should have something like "MyVideo-dubbed.mkv" in the output directory. This is your finished video!

Things to do

  • A better filtering system for language detection, maybe inclusive/exclusive lists or a confidence threshold
  • Find some less copyrighted multi-lingual / non-english content to display demos publicly
  • De-anglicize it so the user can select their target language instead of just English
  • FIX PYDUB'S STUPID ARRAY DISTORTION so we don't have to perform 5 IO operations per dub!!!
  • run a vocal isolation / remover on the source audio to remove / mitigate the original speakers?
  • A proper setup guide for all platforms
  • remove or fix the broken espeak implementation to be cross-platform
  • Uninitialized singletons for heavy models upon startup (e.g. only initialize pyannote/speechbrain pipelines when needed)
  • Abstraction for singletons of Coqui voices using the same model to reduce memory footprint
  • GUI tab to list and select audio / subtitle streams w/ FFMPEG
  • Move the tabs into their own classes
  • Add labels and screen reader landmarks to all the controls
  • Single speaker or multi speaker control switch
  • Download YouTube video with Closed Captions
  • GUI to select start and end time for dubbing
  • Throw up a Flask server on my website so you can try it with minimal features.
  • Use OCR to generate subtitles for videos that don't have sub streams
  • Use OCR for non-text based subtitles
  • Make a cool logo?
  • Learn how to package python programs as binaries to make releases
  • Remove the copyrighted content from this repo (sorry not sorry TV Tokyo)
  • Support for all subtitle formats
  • Maybe slap in an ASR library for videos without subtitles?
  • Maybe support for magnet URLs or the arrLib to pirate media (who knows???)

Diarization

  • Filter subtitles by the selected voice from the listbox
  • Select from multiple diarization models / pipelines
  • Optimize audio tracks for diarization by isolating speech based on subtitle timings
  • Investigate Diart?

TTS

  • Rework the speed control to use PyDub to speed up audio.
  • match the volume of the speaker to TTS
  • Checkbox to remove sequential subtitle entries and entries that are tiny, e.g. "nom" "nom" "nom" "nom"
  • investigate voice conversion?
  • Build an asynchronous queue of operations to perform
  • Asynchronous GUI for Coqui model downloads
  • Add support for MyCroft Mimic 3
  • Add Support for PiperTTS

Cloning

  • Create a cloning mode to select subtitles and export them to a dataset or wav compilation for Coqui XTTS
  • Use diarization and subtitles to isolate and build training datasets
  • Build a tool to streamline the manual creation of datasets
(oh god that's literally so many things, the scope of this has gotten so big how will this ever become a thing)

weeablind's People

Contributors

florianeagox, nols1000


weeablind's Issues

setting up take forever :((

I have a problem, when installing "pip install -r requirements.txt" it always stops and shows --- "This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance."-- I waited 4, 5 hours but it didn't work

cannot import name 'ESpeakNG'

  File "C:\Users\Reno\voice\weeablind\weeablind.py", line 3, in <module>
    from Voice import Voice
  File "C:\Users\Reno\voice\weeablind\Voice.py", line 6, in <module>
    from espeakng import ESpeakNG
ImportError: cannot import name 'ESpeakNG' from 'espeakng' (C:\Users\Reno\voice\weeablind\venv\Lib\site-packages\espeakng\__init__.py)

TypeError: Descriptors cannot be created directly. | Win11

Hello esteemed developer. Firstly, I'd like to express my gratitude for creating and maintaining this project. Thanks to individuals like you, OpenSource thrives!

I followed the instructions in the readme, but unfortunately, I still encountered an error.

My ENV
Win 11 x64 - Python 3.10 (from Microsoft Store).
FFmpeg, Espeak-NG, and MSVC Build Tools are installed.
My GPU is an Nvidia RTX 4070 Ti.

Steps to reproduce the error:

  1. git clone https://github.com/FlorianEagox/weeablind.git
  2. cd weeablind
  3. python3.10 -m venv venv
  4. .\venv\Scripts\activate
  5. pip install -r requirements-win-310.txt --no-deps
  6. python weeablind.py

Error in the console:

C:\Users\Danil\dev\weeablind\output\sample.wav
espeak [WinError 2] The system cannot find the file specified
espeakng [WinError 2] The system cannot find the file specified
torchvision is not available - cannot save figures
C:\Users\Danil\dev\weeablind\venv\lib\site-packages\pyannote\audio\core\io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.
  torchaudio.set_audio_backend("soundfile")
2024-04-03 22:55:50.907235: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'cudart64_110.dll'; dlerror: cudart64_110.dll not found
2024-04-03 22:55:50.907388: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
  File "C:\Users\Danil\dev\weeablind\weeablind.py", line 6, in <module>
    from tabs.ListStreams import ListStreamsTab
  File "C:\Users\Danil\dev\weeablind\tabs\ListStreams.py", line 3, in <module>
    import vocal_isolation
  File "C:\Users\Danil\dev\weeablind\vocal_isolation.py", line 4, in <module>
    from spleeter.separator import Separator
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\spleeter\separator.py", line 26, in <module>
    import tensorflow as tf  # type: ignore
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\__init__.py", line 37, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\python\__init__.py", line 37, in <module>
    from tensorflow.python.eager import context
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\python\eager\context.py", line 29, in <module>
    from tensorflow.core.framework import function_pb2
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 36, in <module>
    _descriptor.FieldDescriptor(
  File "C:\Users\Danil\dev\weeablind\venv\lib\site-packages\google\protobuf\descriptor.py", line 621, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

Please let me know how I could avoid such an error? Perhaps I did something wrong.

Usage section

beautiful work! Can we have an usage section in the readme. Can't wait to play with it!

Error in requirements file

On line 222 of requirements-win-310.txt, you'll see:

file:///C:/Users/seth/Downloads/tesserocr-2.6.0-cp310-cp310-win_amd64.whl#sha256=a31c6eaa6380fd7d4e7764597c4f5a16f6e8f4abf4cdc9d61c52e36ea4f8a850

This brings up an error and stops installation. I could remove it, but I have a feeling it's important. I don't have AMD though, so maybe it's fine? On second thought, I'll replace it with this and hope it works:

tesserocr==2.6.0

Issue with "julius" Module Not Found Error

I ran into a snag while using your project and wanted to reach out about it. Seems like there's an issue with the "julius" module not being found, even though I've installed all the dependencies correctly.

Here's the error I'm getting:
Traceback (most recent call last):
  File "C:\WINDOWS\system32\weeablind\weeablind.py", line 4, in <module>
    from tabs.SubtitlesTab import SubtitlesTab
  ...
  File "C:\Windows\System32\weeablind\venv\Lib\site-packages\torch_audiomentations\augmentations\band_pass_filter.py", line 1, in <module>
    import julius
ModuleNotFoundError: No module named 'julius'

I double-checked everything in my virtual environment, and it all seems in order. Any ideas on how I can fix this? Any additional steps or configurations I might be missing?

Thanks a bunch for your help!

error espeak

hello, I get an error for espeak, yet all the dependencies are downloaded, the virtual environment is ok.

  File "weeablind\tabs\ConfigureVoiceTab.py", line 1, in <module>
    import app_state
  File "C:\Users\ravai\Desktop\dubbing\weeablind\app_state.py", line 5, in <module>
    speakers[0].set_voice_params('tts_models/en/vctk/vits', 'p326') # p340
  File "C:\Users\ravai\Desktop\dubbing\weeablind\Voice.py", line 103, in set_voice_params
    self.voice.load_tts_model_by_name(voice)
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\api.py", line 185, in load_tts_model_by_name
    self.synthesizer = Synthesizer(
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\utils\synthesizer.py", line 93, in __init__
    self.load_tts(tts_checkpoint, tts_config_path, use_cuda)
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\utils\synthesizer.py", line 187, in load_tts
    self.tts_model = setup_tts_model(config=self.tts_config)
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\tts\models\__init__.py", line 13, in setup_model
    model = MyModel.init_from_config(config=config, samples=samples)
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\tts\models\vits.py", line 1796, in init_from_config
    tokenizer, new_config = TTSTokenizer.init_from_config(config)
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\tts\utils\text\tokenizer.py", line 198, in init_from_config
    phonemizer = get_phonemizer_by_name(config.phonemizer, **phonemizer_kwargs)
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\tts\utils\text\phonemizers\__init__.py", line 60, in get_phonemizer_by_name
    return ESpeak(**kwargs)
  File "C:\Users\ravai\Desktop\dubbing\weeablind\venv\lib\site-packages\TTS\tts\utils\text\phonemizers\espeak_wrapper.py", line 114, in __init__
    raise Exception(" [!] No espeak backend found. Install espeak-ng or espeak to your system.")
Exception: [!] No espeak backend found. Install espeak-ng or espeak to your system.

not able to dub

Exception in thread Thread-17 (run_dubbing):
Traceback (most recent call last):
  File "C:\Users\Tanuj\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\Tanuj\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\freelancer\dub\weeablind\video.py", line 177, in run_dubbing
    progress_hook(i+1, "Mixing New Audio")
UnboundLocalError: local variable 'i' referenced before assignment

cannot import name 'espeakng' from espeakng

Hi nice works,i had a issue with this step on Windows 10 with a python environment,please help me to fix this problem "cannot import name 'espeakng' from espeakng"

thanks in advance

Dependency conflicts during installation of project dependencies

Hey there,

So, I was trying to install the dependencies for the project, but ran into a bit of a snag. Here's the error message I got:

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
video-ocr 0.0.2 requires opencv-python~=4.5.5.62, which is not installed.
video-ocr 0.0.2 requires tesserocr~=2.5.2, which is not installed.
flask 3.0.2 requires click>=8.1.3, but you have click 7.1.2 which is incompatible.
video-ocr 0.0.2 requires click~=8.0.1, but you have click 7.1.2 which is incompatible.
video-ocr 0.0.2 requires numpy~=1.22.2, but you have numpy 1.22.0 which is incompatible.
video-ocr 0.0.2 requires Pillow~=9.0.1, but you have pillow 10.2.0 which is incompatible.
video-ocr 0.0.2 requires scipy~=1.8.0, but you have scipy 1.11.3 which is incompatible.
video-ocr 0.0.2 requires tqdm~=4.62.3, but you have tqdm 4.66.2 which is incompatible.

Seems like there are quite a few compatibility issues here. Any idea what's going on? Could we update the dependencies or is there something else we need to do to resolve this?

Cheers!

Keep up the good work

I love this project! Keep up the excellent work. I've wanted something like this for years, but I don't have the skills. I wish I had the skill to help you work on this. There are so many good animes out there that I have been unable to watch because they never got a dub or will never get one. You're doing God's work.

Library with incompatible requirements.

This software is not possible to use on a Mac. On Python 3.8, an error appears stating that espeakng==1.0.3 requires Python 3.9 or higher. Unfortunately, when I update to Python 3.9, I get an error saying nvidia-cublas-cu12==12.1.3.1 needs a Python version lower than 3.9.

Error while running with Python 3.8

ERROR: Ignored the following versions that require a different python version: 1.0.3 Requires-Python >=3.9; 1.2.0 Requires-Python >=3.9
ERROR: Could not find a version that satisfies the requirement espeakng==1.0.3 (from versions: 1.0.1, 1.0.2)
ERROR: No matching distribution found for espeakng==1.0.3

Error while running with Python 3.9

ERROR: Ignored the following versions that require a different python version: 0.52.0 Requires-Python >=3.6,<3.9; 0.52.0rc3 Requires-Python >=3.6,<3.9
ERROR: Could not find a version that satisfies the requirement nvidia-cublas-cu12==12.1.3.1 (from versions: 0.0.1.dev5)
ERROR: No matching distribution found for nvidia-cublas-cu12==12.1.3.1

Error when I run pip install -r requirements-win-310.txt --no-deps

pip install -r requirements-win-310.txt --no-deps
Processing c:\users\seth\downloads\tesserocr-2.6.0-cp310-cp310-win_amd64.whl (from -r requirements-win-310.txt (line 222))
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\Users\seth\Downloads\tesserocr-2.6.0-cp310-cp310-win_amd64.whl'

No module named 'wx' error when running after installation

After following the instructions on windows, I try to run weeblind.py, but get the following error:

Traceback (most recent call last):
  File "D:\Weeablind\weeablind.py", line 1, in <module>
    import wx
ModuleNotFoundError: No module named 'wx'

After installing wxpython, it gets a bit further, only to give this error:

Traceback (most recent call last):
  File "D:\Weeablind\weeablind.py", line 3, in <module>
    from tabs.ConfigureVoiceTab import ConfigureVoiceTab
  File "D:\Weeablind\tabs\ConfigureVoiceTab.py", line 2, in <module>
    import app_state
  File "D:\Weeablind\app_state.py", line 1, in <module>
    from Voice import Voice
  File "D:\Weeablind\Voice.py", line 5, in <module>
    import feature_support
  File "D:\Weeablind\feature_support.py", line 10, in <module>
    diarization_supported = is_module_available("pyannote.audio")
  File "D:\Weeablind\feature_support.py", line 6, in is_module_available
    return importlib.util.find_spec(module_name) is not None
  File "C:\Users\winuser\anaconda3\envs\wee\lib\importlib\util.py", line 94, in find_spec
    parent = __import__(parent_name, fromlist=['__path__'])
ModuleNotFoundError: No module named 'pyannote'

ImportError: cannot import name 'ESpeakNG' from 'espeakng'

i'm on windows with python 3.10
i have installed espeakng and i added it to the path

Traceback (most recent call last):
  File "C:\Users\checc\Desktop\weeablind\weeablind.py", line 3, in <module>
    from tabs.ConfigureVoiceTab import ConfigureVoiceTab
  File "C:\Users\checc\Desktop\weeablind\tabs\ConfigureVoiceTab.py", line 2, in <module>
    import app_state
  File "C:\Users\checc\Desktop\weeablind\app_state.py", line 1, in <module>
    from Voice import Voice
  File "C:\Users\checc\Desktop\weeablind\Voice.py", line 9, in <module>
    from espeakng import ESpeakNG
ImportError: cannot import name 'ESpeakNG' from 'espeakng' (C:\Users\checc\AppData\Local\Programs\Python\Python310\lib\site-packages\espeakng\__init__.py)
