
TTS Generation WebUI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs)

One-click installers

Download || Upgrading || Manual installation

Google Colab demo: Open In Colab

Videos

  • How To Use TTS Voice Generation Web UI With AI Voice Cloning Technology (Bark AI Tutorial)
  • TTS Generation WebUI - A Tool for Text to Speech and Voice Cloning
  • Text to speech and voice cloning - TTS Generation WebUI

Screenshots

Screenshots of the React UI, MusicGen, RVC, and history views.

Examples

audio__bark__continued_generation__2023-05-04_16-07-49_long.webm
audio__bark__continued_generation__2023-05-04_16-09-21_long.webm
audio__bark__continued_generation__2023-05-04_16-10-55_long.webm

Extra Voices for Bark

Echo AI https://rsxdalv.github.io/bark-speaker-directory/

Bark Readme

README_Bark.md

Info about managing models, caches and system space for AI projects

rsxdalv#186 (reply in thread)

Changelog

Jan 9:

  • React UI
    • Fix 404 handler for Wavesurfer
    • Group Bark tabs together

Jan 8:

  • Release React UI

Oct 26:

  • Improve model selection UX for Musicgen

Oct 24:

Sep 21:

  • Bark: Add continue as semantic history button
  • Switch to GitHub Docker image storage; new Docker image:
    • docker pull ghcr.io/rsxdalv/tts-generation-webui:main
  • Fix server_port option in config (rsxdalv#168), thanks to https://github.com/Dartvauder

Sep 9:

Sep 5:

  • Add voice mixing to Bark
  • Add v1 Burn in prompt to Bark (burn-in prompts direct the semantic model without spending time generating audio; v1 works by generating the semantic tokens and then using them as a prompt for the semantic model)
  • Add generation length limiter to Bark

Aug 27:

Aug 26:

  • Add Send to RVC, Demucs, Vocos buttons to Bark and Vocos

Aug 24:

  • Add date to RVC outputs to fix rsxdalv#147
  • Fix safetensors missing wheel
  • Add a "Send to Demucs" button to MusicGen

Aug 21:

  • Add torchvision install to Colab to fix a MusicGen issue
  • Remove rvc_tab file logging

Aug 20:

  • Fix MBD by reinstalling hydra-core at the end of an update

Aug 18:

  • CI: Add a GitHub Action to automatically publish docker image.

Aug 16:

  • Add "name" to tortoise generation parameters

Aug 15:

  • Pin torch to 2.0.0 in all requirements.txt files
  • Bump audiocraft and bark versions
  • Remove Tortoise transformers fix from colab
  • Update Tortoise to 2.8.0

Aug 13:

  • Potentially big fix for new user installs that had issues with GPU not being supported

Aug 11:

  • Tortoise hotfix thanks to manmay-nakhashi
  • Add Tortoise option to change tokenizer

Aug 8:

  • Update AudioCraft, improving MultiBandDiffusion performance
  • Fix Tortoise parameter 'cond_free' mismatch with 'ultra_fast' preset

Aug 7:

  • Add Tortoise DeepSpeed fix to Colab

Aug 6:

  • Fix AudioGen + MBD error, add Tortoise fix for Colab

Aug 4:

  • Add MultiBandDiffusion option to MusicGen rsxdalv#109
  • MusicGen/AudioGen save tokens on generation as .npz files.

Aug 3:

Aug 2:

  • Fix Model locations not showing after restart

July 26:

  • Voice gallery
  • Voice cropping
  • Fix voice rename bug, rename picture as well, add a hash textbox
  • Easier downloading of voices (rsxdalv#98)

July 24:

  • Change bark file format to include history hash: ...continued_generation... -> ...from_3ea0d063...

July 23:

July 21:

  • Fix hubert not working with CPU only (rsxdalv#87)
  • Add Google Colab demo (rsxdalv#88)
  • New settings tab and model locations (for advanced users) (rsxdalv#90)

July 19:

July 16:

  • Voice Photo Demo
  • Add a directory for storing RVC models/indexes, and a dropdown to select them
  • Work around RVC not respecting is_half on CPU (rsxdalv#74)
  • Tortoise model and voice selection improvements (rsxdalv#73)

July 10:

July 9:

  • RVC Demo + Tortoise; v6 installer with update script and automatic attempts to install extra modules (rsxdalv#66)

July 5:

  • Improved v5 installer - faster and more reliable rsxdalv#63

July 2:

July 1:

Jun 29:

Jun 27:

Jun 20:

  • Tortoise: proper long form generation files rsxdalv#46

Jun 19:

June 18:

  • Update to newest audiocraft, add longer generations

Jun 14:

June 5:

  • Fix "Save to Favorites" button on the Bark generation page, clean up console (v4.1.1)
  • Add "Collections" tab for managing several different data sets and easier curation.

June 4:

  • Update to v4.1 - improved hash function, code improvements

June 3:

  • Update to v4 - new output structure, improved history view, codebase reorganization, improved metadata, output extensions support

May 21:

  • Update to v3 - voice clone demo

May 17:

  • Update to v2 - generate results as they appear, preview long prompt generations piece by piece, enable up to 9 outputs, UI tweaks

May 16:

  • Add gradio settings tab, fix gradio errors in console, improve logging.
  • Update History and Favorites with "use as voice" and "save voice" buttons
  • Add voices tab
  • Bark tab: Remove "or Use last generation as history"
  • Improve code organization

May 13:

  • Enable deterministic generation and enhance generated logs. Credits to suno-ai/bark#175.

May 10:

  • Enable the possibility of reusing history prompts from older generations. Save generations as npz files. Add a convenient method of reusing any of the last 3 generations for the next prompts. Add a button for saving and collecting history prompts under /voices. rsxdalv#10

May 4:

May 3:

  • Improved Tortoise UI: Voice, Preset and CVVP settings as well as ability to generate 3 results (rsxdalv#6)

May 2 Update 2:

  • Added support for history recycling to manually continue longer prompts

May 2 Update 1:

  • Added support for v2 prompts

Before:

  • Added support for Tortoise TTS

Upgrading

In case of issues, feel free to contact the developers.

Upgrading from v5 to v6 installer

  • Download and run the new installer
  • Replace the "tts-generation-webui" directory in the newly installed directory
  • Run update_platform

Is there a more optimal way to do this?

Not really: the dependencies clash, especially between conda and Python (and the dependencies are already in a critical state; moving them to conda is a long way off). While it might be possible to simply replace the old installer with the new one and run the update, the resulting problems are unpredictable and unfixable. Updating the installer requires a lot of testing, so it is not done lightly.

Upgrading from v4 to v5 installer

  • Download and run the new installer
  • Replace the "tts-generation-webui" directory in the newly installed directory
  • Run update_platform

Manual installation (not recommended, check installer source for reference)

  • Install conda or another virtual environment manager

  • Python 3.10 is highly recommended

  • Install git (conda install git)

  • Install ffmpeg (conda install -y -c pytorch ffmpeg)

  • Set up pytorch with CUDA or CPU (https://pytorch.org/audio/stable/build.windows.html#install-pytorch)

  • Clone the repo: git clone https://github.com/rsxdalv/tts-generation-webui.git

  • Install the root requirements.txt with pip install -r requirements.txt

  • Clone the model repos into the ./models/ directory and install the requirements under each of them

  • Run the app (inside the virtual environment) with python server.py

  • If needed, install build tools (without full Visual Studio): https://visualstudio.microsoft.com/visual-cpp-build-tools/ (a consolidated command sketch follows this list)
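
Put together, a typical manual install session looks roughly like the following. This is only a sketch of the steps above, not the installer's exact logic: the environment name (tts) and the exact PyTorch install command are assumptions to adapt to your system, and each cloned model repo has its own requirements to install.

conda create -n tts -y python=3.10
conda activate tts
conda install -y git
conda install -y -c pytorch ffmpeg
# install PyTorch (CUDA or CPU) following https://pytorch.org/audio/stable/build.windows.html#install-pytorch
git clone https://github.com/rsxdalv/tts-generation-webui.git
cd tts-generation-webui
pip install -r requirements.txt
# clone the model repos into ./models/ and install their requirements
python server.py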

React UI

  • Install Node.js (if not already installed with conda)
  • Install React dependencies: npm install
  • Build React: npm run build
  • Run React: npm start
  • Also run the Python server: python server.py or the start_(platform) script

Docker Setup

tts-generation-webui can also be run inside a Docker container. To get started, first build the Docker image while in the root directory:

docker build -t rsxdalv/tts-generation-webui .

Once the image has been built, it can be started with Docker Compose:

docker compose up -d

The container will take some time to generate the first output while models are downloaded in the background. The status of this download can be verified by checking the container logs:

docker logs tts-generation-webui
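
If you prefer not to use Docker Compose, a plain docker run invocation along the following lines should also work. The published image name comes from the changelog above; the port mapping (7860, Gradio's usual default), the container name, and the GPU flag are assumptions and may need adjusting to match the repo's Dockerfile and compose file.

docker run -d --gpus all -p 7860:7860 --name tts-generation-webui ghcr.io/rsxdalv/tts-generation-webui:main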

Open Source Libraries

This project utilizes the following open source libraries:

Ethical and Responsible Use

This technology is intended for enablement and creativity, not for harm.

By engaging with this AI model, you acknowledge and agree to abide by these guidelines, employing the AI model in a responsible, ethical, and legal manner.

  • Non-Malicious Intent: Do not use this AI model for malicious, harmful, or unlawful activities. It should only be used for lawful and ethical purposes that promote positive engagement, knowledge sharing, and constructive conversations.
  • No Impersonation: Do not use this AI model to impersonate or misrepresent yourself as someone else, including individuals, organizations, or entities. It should not be used to deceive, defraud, or manipulate others.
  • No Fraudulent Activities: This AI model must not be used for fraudulent purposes, such as financial scams, phishing attempts, or any form of deceitful practices aimed at acquiring sensitive information, monetary gain, or unauthorized access to systems.
  • Legal Compliance: Ensure that your use of this AI model complies with applicable laws, regulations, and policies regarding AI usage, data protection, privacy, intellectual property, and any other relevant legal obligations in your jurisdiction.
  • Acknowledgement: By engaging with this AI model, you acknowledge and agree to abide by these guidelines, using the AI model in a responsible, ethical, and legal manner.

License

Codebase and Dependencies

The codebase is licensed under MIT. However, it's important to note that when installing the dependencies, you will also be subject to their respective licenses. Although most of these licenses are permissive, there may be some that are not. Therefore, it's essential to understand that the permissive license only applies to the codebase itself, not the entire project.

That being said, the goal is to maintain MIT compatibility throughout the project. If you come across a dependency that is not compatible with the MIT license, please feel free to open an issue and bring it to our attention.

Known non-permissive dependencies:

  • encodec (CC BY-NC 4.0): newer versions are MIT, but need to be installed manually
  • diffq (CC BY-NC 4.0): optional in the future, not necessary to run, can be uninstalled, should be updated with demucs
  • lameenc (GPL): future versions will make it LGPL, but need to be installed manually
  • unidecode (GPL): not mission critical, can be replaced with another library; issue: neonbjb/tortoise-tts#494

Model Weights

Model weights have different licenses, please pay attention to the license of the model you are using.

Most notably:

  • Bark: CC BY-NC 4.0 (MIT but HuggingFace has not been updated yet)
  • Tortoise: Unknown (Apache-2.0 according to repo, but no license file in HuggingFace)
  • MusicGen: CC BY-NC 4.0
  • AudioGen: CC BY-NC 4.0

Configuration Guide

You can configure the interface through the "Settings" tab or, for advanced users, via the config.json file in the root directory (not recommended). Below is a detailed explanation of each setting:

Model Configuration

  • text_use_gpu (default: true): whether the GPU should be used for text processing.
  • text_use_small (default: true): whether a "small" or reduced version of the text model should be used.
  • coarse_use_gpu (default: true): whether the GPU should be used for "coarse" processing.
  • coarse_use_small (default: true): whether a "small" or reduced version of the "coarse" model should be used.
  • fine_use_gpu (default: true): whether the GPU should be used for "fine" processing.
  • fine_use_small (default: true): whether a "small" or reduced version of the "fine" model should be used.
  • codec_use_gpu (default: true): whether the GPU should be used for codec processing.
  • load_models_on_startup (default: false): whether the models should be loaded during application startup.

Gradio Interface Options

  • inline (default: false): display inline in an iframe.
  • inbrowser (default: true): automatically launch in a new tab.
  • share (default: false): create a publicly shareable link.
  • debug (default: false): block the main thread from running.
  • enable_queue (default: true): serve inference requests through a queue.
  • max_threads (default: 40): maximum number of total threads.
  • auth (default: null): username and password required to access the interface, format: username:password.
  • auth_message (default: null): HTML message shown on the login page.
  • prevent_thread_lock (default: false): if true, do not block the main thread while the server is running.
  • show_error (default: false): display errors in an alert modal.
  • server_name (default: 0.0.0.0): make the app accessible on the local network.
  • server_port (default: null): start the Gradio app on this port.
  • show_tips (default: false): show tips about new Gradio features.
  • height (default: 500): height in pixels of the iframe element.
  • width (default: 100%): width of the iframe element, in pixels or as a percentage.
  • favicon_path (default: null): path to a file (.png, .gif, or .ico) to use as the favicon.
  • ssl_keyfile (default: null): path to a file to use as the private key file for a local server running on HTTPS.
  • ssl_certfile (default: null): path to a file to use as the signed certificate for HTTPS.
  • ssl_keyfile_password (default: null): password to use with the SSL certificate for HTTPS.
  • ssl_verify (default: true): validate the SSL certificate; if false, certificate validation is skipped.
  • quiet (default: true): suppress most print statements.
  • show_api (default: true): show the API docs in the footer of the app.
  • file_directories (default: null): list of directories that Gradio is allowed to serve files from.
  • _frontend (default: true): frontend.
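
For reference, a config.json might look something like the sketch below, using the defaults from the two tables above. The grouping into a model section and a gradio_interface_options section is an assumption for illustration only; the safest way to see the exact layout is to change a setting in the "Settings" tab and inspect the file the app writes.

{
  "model": {
    "text_use_gpu": true,
    "text_use_small": true,
    "coarse_use_gpu": true,
    "coarse_use_small": true,
    "fine_use_gpu": true,
    "fine_use_small": true,
    "codec_use_gpu": true,
    "load_models_on_startup": false
  },
  "gradio_interface_options": {
    "inline": false,
    "inbrowser": true,
    "share": false,
    "server_name": "0.0.0.0",
    "server_port": null,
    "show_api": true
  }
}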
