Sign Language Detector

Destroy a language barrier with the click of a button.


Trained on over 13,000 labeled images, the sign language detector accurately recognizes the letters 'A' through 'Z' as well as 'Space.' During training, it learns the visual characteristics shared by the labeled images of each sign. When we present it with a new image, it predicts the letter whose learned characteristics best match what it sees.
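The 27 classes the detector distinguishes (26 letters plus 'Space') boil down to a simple mapping from the model's predicted class index back to a readable label. Here is a minimal illustrative sketch of that mapping; the actual class ordering used by our trained model may differ.

```python
import string

# Illustrative label map: 26 letters plus "Space" (27 classes total).
# Assumption: classes are ordered A-Z followed by Space; the real
# model's class order may differ.
CLASS_NAMES = list(string.ascii_uppercase) + ["Space"]

def decode_prediction(class_index: int) -> str:
    """Map a predicted class index back to its sign-language label."""
    if not 0 <= class_index < len(CLASS_NAMES):
        raise ValueError(f"unknown class index: {class_index}")
    return CLASS_NAMES[class_index]
```

For example, a model output of class index 0 decodes to "A", and index 26 decodes to "Space".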

Our Motive

Communication is essential in today's world, and some people accomplish this by using sign language. Unfortunately, less than 1% of people in the US are fluent in sign language. This is where our model comes in. It translates ASL (American Sign Language) signs into English letters, making communication between people who know ASL and people who don't more convenient, whether they're buying groceries or chatting over Zoom.

Dream Team

Meet the creators & contributors of this project!

Satvik Muddana

Student at Walnut High School

Satvik enjoys working with technology and loves robotics. He contributed to data collection, helped with the model training process, and worked on the back-end of the model website.

Aadi Chauhan

Student at Challenger Middle School

Aadi enjoys working with computers and swimming. He helped label the data, create and train the model, and build the back-end of the website.

Kyuwon Kim

Student at Leigh High School

Kyuwon is interested in video games, AI, and technology. He helped collect and label the data and worked on the front-end of the website.

Pranauv Vijaykumar

Student at Modesto High School

Pranauv is interested in tennis, technology, and video games. He helped collect and label data and helped create the front-end of the website.

Andrew Song

Student at Cupertino High School

Andrew likes robotics, video games, and tennis. He worked on labeling data, training the AI model, and building the back-end of the model website.

Shane Berger

Team Instructor

Shane studies Artificial Intelligence and Human-Computer Interaction at Stanford University, and he enjoys teaching computer science. He hopes to one day be on the TV show Survivor!

Michael Ke Zhang

Lead Instructor

Michael ran data teams at Blend Labs and Grab. He enjoys fishing and inspiring the next generation of students.



Labelbox is a data labeling platform that allowed us to label thousands of sign language images to train our model.


YOLO, or You Only Look Once, is an object detection system that locates and classifies objects in pictures or videos in a single pass over the image. YOLO is essential to our product: it is what identifies sign language symbols in pictures.
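Detectors like YOLO predict bounding boxes around objects and compare competing boxes by how much they overlap, measured as intersection-over-union (IoU). This is a minimal illustrative sketch of that overlap computation, not code from our actual model:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the overlapping region (if any).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Overlap area is zero when the boxes do not intersect.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Identical boxes score 1.0, disjoint boxes score 0.0, and partial overlaps fall in between; YOLO-style pipelines use scores like this to discard duplicate detections of the same sign.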

AI Camp

AI Camp is the basis of this product. Without this camp, this product wouldn't exist! With the help of our instructors, we campers came up with, prepared, and created an AI product in just 3 weeks, and we are presenting it to you now!

Learn AI in 3 weeks with zero coding experience

AI Camp teaches middle and high school students machine learning and career frameworks through real-life experience. Prepare for your major and career by joining us!