
ViTA for Indian Sign Language is software that converts Indian Sign Language (ISL) into text and audio. This project aims to bridge the communication gap for ISL users by providing a seamless translation of sign language into spoken and written forms. The software can translate signs into three languages: English, Hindi, and Kannada.


ViTA

ViTA stands for "Visual Translation Assistance."

ViTA uses a combination of computer vision and natural language processing to recognize and translate sign language into text. The system is designed to be user-friendly and intuitive, allowing individuals who are deaf or hard of hearing to communicate with people who have no training in or knowledge of sign language.

To use ViTA, an individual simply signs in front of a camera. Computer-vision algorithms recognize and interpret the hand gestures and movements, and the recognized sign is converted into text that is displayed on a screen or spoken aloud through a text-to-speech engine.

In terms of its technical components, ViTA combines machine learning algorithms, computer vision, and natural language processing techniques to recognize and translate sign language accurately. The system also needs to process and analyze video in real time so that gestures are recognized and translated as they are being made.
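A minimal sketch of that capture-recognize-speak loop, assuming OpenCV for camera input, a placeholder classify_frame() standing in for the trained recognizer, and pyttsx3 for speech output (these are illustrative choices, not necessarily the exact stack used here):

```python
import cv2          # webcam capture and display
import pyttsx3      # offline text-to-speech


def classify_frame(frame):
    """Placeholder for the trained sign recognizer.

    In the real system this would return the predicted sign label
    (or None when no sign is detected)."""
    return None


cap = cv2.VideoCapture(0)      # open the default webcam
tts = pyttsx3.init()           # text-to-speech engine
last_label = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    label = classify_frame(frame)          # recognize the sign in this frame
    if label and label != last_label:
        print("Recognized:", label)        # show the recognized text
        tts.say(label)                     # and speak it aloud
        tts.runAndWait()
        last_label = label

    cv2.imshow("ViTA", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```

For Hindi or Kannada audio output, an online text-to-speech service such as gTTS (language codes "hi" and "kn") could be swapped in for pyttsx3.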

Why ViTA?

The reason for creating a program like ViTA, which converts sign language into text, is to facilitate communication between people who use sign language and people who do not. This improves accessibility and inclusion for people who are deaf or hard of hearing, making it easier for them to communicate with others and access information. A program like ViTA can also help bridge the gap between users of different sign languages, allowing them to communicate with each other more easily.

Version Info

While machine learning can be used to build a sign language converter, not every sign language converter is based on machine learning. Sign language is a visual language that uses hand gestures, facial expressions, and body language to communicate, and there are different ways to approach converting it to text or speech. Some methods use rule-based approaches or dictionaries to map signs to words, while others use machine learning to recognize and interpret signs in real time. The approach chosen depends on the design and goals of the converter; a simple rule-based sketch is shown below.
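For the rule-based end of that spectrum, a converter could simply look recognized sign identifiers up in a dictionary; the sign names and mapping below are made up purely for illustration:

```python
# Hypothetical rule-based mapping from recognized sign IDs to words.
# The sign names here are illustrative, not ViTA's actual vocabulary.
SIGN_TO_WORD = {
    "SIGN_HELLO": "hello",
    "SIGN_THANK_YOU": "thank you",
    "SIGN_YES": "yes",
    "SIGN_NO": "no",
}


def signs_to_text(sign_ids):
    """Convert a sequence of recognized sign IDs into a sentence."""
    return " ".join(SIGN_TO_WORD.get(s, "<unknown>") for s in sign_ids)


print(signs_to_text(["SIGN_HELLO", "SIGN_THANK_YOU"]))  # hello thank you
```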

Basic:

For a simple machine learning approach to action detection, one option is a supervised learning algorithm that classifies each captured gesture into one of a set of predefined action categories. For instance, you could use a dataset of labeled gesture examples, where each example (e.g. a vector of hand key points) is labeled with one of the action categories ("Peace", "ROCK ON!", "Call me", "I LOVE YOU"). The model is then trained to predict the correct action category for a given input, as in the sketch below.
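A minimal sketch of that supervised setup, assuming each example has already been reduced to a fixed-length feature vector (for instance 21 hand key points, each with x, y, z); scikit-learn's k-nearest-neighbours classifier and the random training data are purely illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

ACTIONS = ["Peace", "ROCK ON!", "Call me", "I LOVE YOU"]

# Hypothetical training data: each row is the feature vector of one gesture
# sample (21 key points * 3 coordinates = 63 values), each label an action index.
X_train = np.random.rand(200, 63)
y_train = np.random.randint(0, len(ACTIONS), 200)

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)                  # train on the labeled examples

sample = np.random.rand(1, 63)             # features from a new, unseen gesture
print("Predicted action:", ACTIONS[clf.predict(sample)[0]])
```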

ASL - American Sign Language

- This model uses machine learning algorithms to recognize and interpret gestures and hand movements made by a signer. It is trained on a large dataset of sign language signs and their corresponding text translations, and can then recognize and convert sign language gestures into text in real time. The model is relatively simple, since it only needs to recognize a limited number of gestures and hand movements, but it must be highly accurate to translate sign language into text effectively. A training sketch follows below.
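A hedged sketch of training such a model, assuming the dataset has already been converted to fixed-length key-point vectors with integer sign labels; the class count, layer sizes, random data, and file name are all illustrative assumptions:

```python
import numpy as np
from tensorflow import keras

NUM_SIGNS = 26          # e.g. one class per letter (assumed, not the real class count)

# Hypothetical pre-extracted features: 21 hand key points * (x, y, z) per sample.
X = np.random.rand(1000, 63).astype("float32")
y = np.random.randint(0, NUM_SIGNS, size=1000)

model = keras.Sequential([
    keras.layers.Input(shape=(63,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)
model.save("sign_classifier.h5")            # illustrative file name
```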

Version 1.0 -

An approach in which images are collected and used as samples for marking key points. Once the key points have been marked, a model is trained on the marked data, allowing it to learn from the images and make predictions or identifications based on the key points (see the sketch below).
Cons: this module is slow and produces inaccurate results.
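A minimal sketch of that key-point step, assuming MediaPipe Hands as the landmark detector (the original does not name the library, so this is an assumption) and an illustrative image file name:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands


def extract_keypoints(image_path):
    """Return a flat array of 21 hand landmarks (x, y, z) for one image,
    or None if no hand is detected."""
    image = cv2.imread(image_path)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    landmarks = results.multi_hand_landmarks[0].landmark
    return np.array([[lm.x, lm.y, lm.z] for lm in landmarks]).flatten()


keypoints = extract_keypoints("sample_sign.jpg")        # illustrative file name
print(None if keypoints is None else keypoints.shape)   # (63,) when a hand is found
```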

Version 1.1 -

An approach in which images are collected and then exported as a Keras model using Google's online tool "Teachable Machine".
This lets the algorithm load and run the trained model with the tools and capabilities provided by the Keras framework. Using Keras can improve the efficiency and performance of the algorithm, as it offers a range of powerful tools for working with images and other data. A loading sketch is shown below.
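A hedged sketch of loading and running one of these exported models, assuming the default files Teachable Machine produces for its Keras export (keras_model.h5 and labels.txt), its 224x224 input size, and pixel values scaled to [-1, 1]; file names and preprocessing may differ for a particular export:

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("keras_model.h5")        # model exported by Teachable Machine
with open("labels.txt") as f:
    labels = [line.strip() for line in f]   # one class label per line


def predict_sign(image_path):
    """Classify one image with the exported Keras model."""
    image = cv2.imread(image_path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)        # model expects RGB input
    image = cv2.resize(image, (224, 224))                 # Teachable Machine's input size
    image = (image.astype(np.float32) / 127.5) - 1.0      # scale pixels to [-1, 1]
    probs = model.predict(image[np.newaxis, ...])[0]
    return labels[int(np.argmax(probs))]


print(predict_sign("sample_sign.jpg"))      # illustrative file name
```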
