
☢️ Audiomer ☢️

Audiomer: A Convolutional Transformer for Keyword Spotting

[ arXiv ] [ Previous SOTA ] [ Model Architecture ]

NOTE: This is a pre-print release; the code may have bugs.

Results on SpeechCommands

Model Architecture

Performer Conv-Attention


Usage

To reproduce the results in the paper, follow these steps:

  • To download the Speech Commands v2 dataset, run: python3 datamodules/SpeechCommands12.py
  • To train Audiomer-S and Audiomer-L on all three datasets three times each, run: python3 run_expts.py
  • To evaluate a model on a dataset, run: python3 evaluate.py --checkpoint_path /path/to/checkpoint.ckpt --model <model type> --dataset <name of dataset>
  • For example: python3 evaluate.py --checkpoint_path ./epoch=300.ckpt --model S --dataset SC20
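The evaluation command above suggests a CLI along the following lines. This is a hypothetical sketch of how such an interface could be built with argparse, not the repo's actual evaluate.py; the model choices "S" and "L" come from the README, while the dataset values are left unconstrained because the full list is not stated here.

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of an evaluate.py-style CLI; the actual
    # script in the repository may define its arguments differently.
    parser = argparse.ArgumentParser(
        description="Evaluate an Audiomer checkpoint on a dataset"
    )
    parser.add_argument("--checkpoint_path", required=True,
                        help="Path to a .ckpt file, e.g. ./epoch=300.ckpt")
    parser.add_argument("--model", choices=["S", "L"], required=True,
                        help="Model variant: Audiomer-S or Audiomer-L")
    parser.add_argument("--dataset", required=True,
                        help="Dataset name, e.g. SC20")
    return parser

# Parse the example invocation from the README.
args = build_parser().parse_args(
    ["--checkpoint_path", "./epoch=300.ckpt", "--model", "S", "--dataset", "SC20"]
)
print(args.checkpoint_path, args.model, args.dataset)
```

Passing an unknown --model value (anything other than S or L) would make argparse exit with an error, which mirrors the two model variants the README describes.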

System requirements

  • NVIDIA GPU with CUDA
  • Python 3.6 or higher
  • pytorch_lightning
  • torchaudio
  • performer_pytorch
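Assuming a standard pip setup, the Python dependencies listed above could be installed like this (package names as listed; the README does not pin versions, so exact reproduction may require a pinned environment):

```shell
# Install the Python dependencies named in the requirements list.
# Versions are unpinned here; the CUDA-enabled PyTorch build backing
# pytorch_lightning/torchaudio must match your local CUDA toolkit.
pip install pytorch_lightning torchaudio performer_pytorch
```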
