American Sign Language (ASL)
- Tech Stack: scikit-learn, NumPy, pandas, Python, Google Colab
- GitHub URL: Project Link
About the dataset
The dataset is a collection of images of letters from the American Sign Language alphabet, separated into 29 folders that represent the classes. The training set contains 87,000 images of 200x200 pixels. Of the 29 classes, 26 are the letters A-Z and 3 are SPACE, DELETE, and NOTHING; these 3 extra classes are very helpful for real-time applications and classification. The test set contains a mere 29 images, to encourage the use of real-world test images.
Project Summary
Goal:
Build a system that can correctly identify the American Sign Language signs that correspond to hand gestures.
Applications:
Our proposed system will help the deaf and hard-of-hearing communicate better with members of the community. For example, there have been incidents where those who are deaf have had trouble communicating with first responders when in need. Although responders may receive training on the basics of ASL, it is unrealistic to expect everyone to become fully fluent in sign language. Down the line, advancements like these in computer recognition could aid a first responder in understanding and helping those who are unable to communicate through speech.

Another application is to give the deaf and hard-of-hearing equal access to video consultations, whether in a professional context or when communicating with their healthcare providers via telehealth. Instead of relying on basic chat, these advancements would give the hearing-impaired access to effective video communication.
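As a rough illustration of the classification pipeline described above, here is a minimal scikit-learn sketch. The class names and counts follow the dataset description; the random 32x32 "images", the 10-samples-per-class split, and the logistic-regression baseline are assumptions made only to keep the example self-contained and fast (in practice each row would be a flattened, resized image loaded from the 29 class folders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The 29 classes from the dataset: letters A-Z plus SPACE, DELETE, NOTHING.
classes = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["SPACE", "DELETE", "NOTHING"]

# Synthetic stand-in for the real images: in practice each row would be a
# flattened (and typically resized/grayscaled) 200x200 image read from the
# corresponding class folder. Here we use random 32x32 pixel vectors,
# 10 per class, purely to keep the sketch runnable.
rng = np.random.default_rng(0)
X = rng.random((len(classes) * 10, 32 * 32))
y = np.repeat(np.arange(len(classes)), 10)

# Stratified split so every class appears in both train and held-out sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Any scikit-learn classifier slots in here; logistic regression is just a
# simple multi-class baseline, not the project's chosen model.
clf = LogisticRegression(max_iter=200)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"held-out accuracy: {acc:.2f}")  # near chance (~1/29) on random pixels
```

With real image data, the same skeleton applies unchanged: only the loading step (reading and flattening the folder of 200x200 images) and the choice of classifier would differ.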