# ASL Recognition System

This project aims to develop an American Sign Language (ASL) recognition system that interprets ASL gestures and translates them into text or speech. ASL is a vital means of communication for the Deaf and hard of hearing community, and this project seeks to bridge the communication gap between ASL users and people who do not know sign language.
## Goals

- Recognize ASL gestures, fingerspelling, and eventually full sentences.
- Create a user-friendly interface for real-time ASL recognition.
- Promote accessibility and inclusivity for the Deaf and hard of hearing community.
## Prerequisites

- Python (3.7 or higher)
- Required Python libraries (e.g., OpenCV and either TensorFlow or PyTorch)
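Assuming a TensorFlow-based pipeline (used in the sketches below), the dependencies can typically be installed with `pip install opencv-python tensorflow numpy`; substitute `torch` and `torchvision` for a PyTorch workflow.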
## Getting Started

- Prepare your ASL dataset or use an existing one.
- Train your ASL recognition model using the provided scripts (a minimal training sketch follows this list).
- Create a user interface for real-time recognition, if desired (see the real-time sketch below).
- Test and evaluate the performance of your ASL recognition system.
- Continuously improve and fine-tune the system based on feedback and additional data.
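As a starting point for the training step, here is a minimal sketch assuming a TensorFlow/Keras pipeline and a dataset laid out as one folder per sign under `data/`. The image size, layer sizes, and file names are illustrative, not part of this repository:

```python
# Minimal training sketch: a small CNN over data/<label>/*.jpg folders.
# All paths and hyperparameters below are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (64, 64)
BATCH = 32

# Load images from data/, inferring integer labels from subdirectory names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/", image_size=IMG_SIZE, batch_size=BATCH,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/", image_size=IMG_SIZE, batch_size=BATCH,
    validation_split=0.2, subset="validation", seed=42)

num_classes = len(train_ds.class_names)

# A small CNN baseline; a production system would likely need a deeper
# model or hand-landmark features, especially for fingerspelling.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("models/asl_cnn.keras")
```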
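And a minimal real-time recognition sketch, assuming the model file saved above, a webcam, and the same `data/` layout (folder names are reused as class labels in the same alphabetical order Keras assigns them):

```python
# Minimal real-time sketch: read webcam frames, classify each one with the
# trained model, and overlay the predicted sign. Paths are assumptions.
import cv2
import numpy as np
import tensorflow as tf
from pathlib import Path

model = tf.keras.models.load_model("models/asl_cnn.keras")
# Rebuild the label list from the dataset folder names (sorted, matching
# the order used by image_dataset_from_directory during training).
labels = sorted(p.name for p in Path("data/").iterdir() if p.is_dir())

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Match the training preprocessing: resize and convert BGR -> RGB.
    img = cv2.cvtColor(cv2.resize(frame, (64, 64)), cv2.COLOR_BGR2RGB)
    probs = model.predict(img[np.newaxis].astype("float32"), verbose=0)[0]
    pred = labels[int(np.argmax(probs))]
    cv2.putText(frame, f"{pred} ({probs.max():.2f})", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Classifying single frames works for static signs and fingerspelled letters; signs with motion would need a model that looks at frame sequences.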
## Project Structure

- `data/`: Directory for storing ASL datasets.
- `models/`: Directory for saving trained recognition models.
- `scripts/`: Scripts for data preprocessing, model training, and evaluation.
- `src/`: Source code for the ASL recognition system.
- `ui/`: User interface code, if applicable.
## Acknowledgments

- The Deaf and hard of hearing community for their valuable input and feedback.
- ASL-LEX and other ASL datasets made available for research.
- The OpenCV and TensorFlow communities for their excellent libraries.