# Paper Title
Codec Does Matter: Exploring the Semantic Shortcoming of Codec for Audio Language Model

# Abstract
Recent advancements in audio generation have been significantly propelled by the capabilities of Large Language Models (LLMs). The existing research on audio LLM has primarily focused on enhancing the architecture and scale of audio language models, as well as leveraging larger datasets, and generally, acoustic codecs, such as EnCodec, are used for audio tokenization. However, these codecs were originally designed for audio compression, which may lead to suboptimal performance in the context of audio LLM. Our research aims to address the shortcomings of current audio LLM codecs, particularly their challenges in maintaining semantic integrity in generated audio. For instance, existing methods like VALL-E, which condition acoustic token generation on text transcriptions, often suffer from content inaccuracies and elevated word error rates (WER) due to semantic misinterpretations of acoustic tokens, resulting in word skipping and errors. To overcome these issues, we propose a straightforward yet effective approach called X-Codec. X-Codec incorporates semantic features from a pre-trained semantic encoder before the Residual Vector Quantization (RVQ) stage and introduces a semantic reconstruction loss after RVQ. By enhancing the semantic ability of the codec, X-Codec significantly reduces WER in speech synthesis tasks and extends these benefits to non-speech applications, including music and sound generation. Our experiments in text-to-speech, music continuation, and text-to-sound tasks demonstrate that integrating semantic information substantially improves the overall performance of language models in audio generation.
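Below is a minimal, self-contained sketch of the idea described above: features from a (here assumed frozen) pre-trained semantic encoder are fused with the acoustic features before the RVQ stage, and a semantic reconstruction loss is applied after quantization. All class names, placeholder linear encoders, dimensions, and the toy single-codebook quantizer are illustrative assumptions, not the implementation in this repository.

```python
# Toy illustration of the X-Codec idea (NOT the repository's implementation):
# semantic features are injected before quantization, and a semantic
# reconstruction loss is added after it. Encoders are stand-in linear layers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToySingleCodebookVQ(nn.Module):
    """Stand-in for RVQ: one codebook with a straight-through estimator.
    (A real RVQ stacks several codebooks and also trains them, e.g. via
    commitment/codebook losses or EMA updates, omitted here.)"""

    def __init__(self, dim: int, codebook_size: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, dim) -> nearest codebook entry per frame
        dists = (x.unsqueeze(-2) - self.codebook.weight).pow(2).sum(dim=-1)
        codes = dists.argmin(dim=-1)                      # (batch, frames)
        quantized = self.codebook(codes)                  # (batch, frames, dim)
        # Straight-through estimator so gradients reach the encoder.
        return x + (quantized - x).detach()


class XCodecSketch(nn.Module):
    def __init__(self, acoustic_dim: int = 128, semantic_dim: int = 64):
        super().__init__()
        fused_dim = acoustic_dim + semantic_dim
        self.acoustic_encoder = nn.Linear(1, acoustic_dim)   # placeholder acoustic encoder
        self.semantic_encoder = nn.Linear(1, semantic_dim)   # placeholder for a frozen pre-trained encoder
        self.quantizer = ToySingleCodebookVQ(fused_dim)
        self.decoder = nn.Linear(fused_dim, 1)               # placeholder waveform decoder
        self.semantic_head = nn.Linear(fused_dim, semantic_dim)  # predicts semantics from quantized features

    def forward(self, frames: torch.Tensor):
        # frames: (batch, num_frames, 1) toy "framed audio"
        acoustic = self.acoustic_encoder(frames)
        with torch.no_grad():                                # keep the semantic encoder frozen
            semantic = self.semantic_encoder(frames)
        fused = torch.cat([acoustic, semantic], dim=-1)      # inject semantics BEFORE quantization
        quantized = self.quantizer(fused)
        recon = self.decoder(quantized)                      # usual acoustic reconstruction
        sem_pred = self.semantic_head(quantized)             # semantic reconstruction AFTER quantization
        loss = F.l1_loss(recon, frames) + F.mse_loss(sem_pred, semantic)
        return recon, loss


if __name__ == "__main__":
    model = XCodecSketch()
    frames = torch.randn(2, 50, 1)
    _, loss = model(frames)
    loss.backward()
    print(f"toy combined loss: {loss.item():.4f}")
```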

# Checkpoints

Speech checkpoints: [download link](https://drive.google.com/file/d/11TqMx7LFvSp-x74B894cd7hWy82DfZmW/view?usp=drive_link)

General audio checkpoints: coming soon

# Inference

```bash
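# run the codec inference script (pretrained weights are linked in the Checkpoints section above)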
python inference.py
```

# Training

```bash
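# single-node launch with 8 processes (one per GPU); adjust --nproc-per-node to match your GPU count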
torchrun --nnodes=1 --nproc-per-node=8 main_launch_vqdp.py
```

## Acknowledgement
I would like to extend special thanks to the authors of UniAudio and DAC, since our codebase is mainly borrowed from [UniAudio](https://github.com/yangdongchao/UniAudio/tree/main/codec) and [DAC](https://github.com/descriptinc/descript-audio-codec).
