Team SignTalk

A Smart Hand Glove that Converts Sign Language into Text and Audio

Project Overview

Team SignTalk represents a breakthrough in assistive communication technology, designed to bridge the communication gap between the deaf and hearing communities. This innovative smart glove system translates sign language gestures into both text and audio output in real-time, enabling seamless communication without the need for an interpreter. The device combines flex sensors, accelerometers, and microcontrollers to capture hand movements and finger positions with remarkable accuracy, then processes these signals through a sophisticated gesture recognition algorithm to identify specific sign language patterns. By converting these patterns into spoken words and displayed text, SignTalk empowers deaf individuals to communicate more independently and confidently in everyday situations, educational settings, and professional environments.

Problem Statement

Communication barriers between deaf and hearing communities remain a significant challenge globally, with millions of people who rely on sign language facing daily difficulties in expressing themselves in environments where sign language is not understood. Traditional solutions like hiring interpreters are expensive, not always available, and can create dependencies that limit independence. While there are mobile applications that can translate text or speech, they don't address the fundamental need for deaf individuals to communicate naturally using their preferred language—sign language. The lack of accessible, affordable, and portable technology that can translate sign language in real-time creates barriers to education, employment, social interaction, and emergency communication for the deaf community. Our team recognized this gap and set out to create a solution that would be both effective and accessible, empowering deaf individuals with greater independence and inclusion in society.

Solution & Approach

Our solution employs a wearable smart glove equipped with five flex sensors, one positioned along each finger to measure its bending angle, a 6-axis accelerometer and gyroscope module (MPU6050) to capture hand orientation and movement, and an Arduino Nano microcontroller to process all sensor data in real-time. The system architecture consists of three main layers: the sensing layer (capturing gesture data), the processing layer (pattern recognition and classification), and the output layer (text display and audio synthesis). We implemented a custom algorithm that calibrates sensor readings to individual users, compensating for variations in hand size and flexibility. The gesture recognition system uses a combination of rule-based logic and pattern matching to identify sign language gestures from the American Sign Language (ASL) alphabet and common phrases. Once a gesture is recognized, the microcontroller sends data via Bluetooth to a companion mobile application that displays the translated text and generates speech output using text-to-speech technology. We focused on making the device comfortable for extended wear, lightweight, and robust enough for daily use, with a rechargeable battery providing up to 8 hours of continuous operation.
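The recognition loop can be pictured with a short Arduino-style sketch. This is a minimal, hypothetical illustration rather than our production firmware: the pin assignments, the 60% bend threshold, the three-entry gesture table, and the 10 Hz polling rate are all assumptions made for the example.

```cpp
#include <SoftwareSerial.h>

SoftwareSerial bt(10, 11);                          // HC-05 on D10 (RX) / D11 (TX), assumed wiring

const uint8_t FLEX_PINS[5] = {A0, A1, A2, A3, A4};  // thumb..pinky, assumed pin order
int flexMin[5], flexMax[5];                         // per-finger calibration range (sketched below)

// Normalize one finger reading to 0 (straight) .. 100 (fully bent).
int readFinger(uint8_t i) {
  int raw = analogRead(FLEX_PINS[i]);
  return constrain(map(raw, flexMin[i], flexMax[i], 0, 100), 0, 100);
}

// Tiny rule-based table: 1 = finger bent, 0 = finger straight. A real table
// covers the full ASL alphabet plus common phrases.
struct Gesture { const char *label; uint8_t bent[5]; };
const Gesture GESTURES[] = {
  {"A", {0, 1, 1, 1, 1}},   // thumb out, four fingers curled
  {"B", {1, 0, 0, 0, 0}},   // thumb folded, fingers straight
  {"S", {1, 1, 1, 1, 1}},   // closed fist
};

const char *classify(const int bend[5]) {
  for (const Gesture &g : GESTURES) {
    bool match = true;
    for (uint8_t i = 0; i < 5 && match; i++) {
      bool isBent = bend[i] > 60;                   // 60% bend threshold, assumed
      match = (isBent == (g.bent[i] == 1));
    }
    if (match) return g.label;
  }
  return nullptr;                                   // nothing recognized
}

void setup() {
  bt.begin(9600);                                   // default HC-05 baud rate
  // Per-user calibration fills flexMin/flexMax (see the routine sketched below).
}

void loop() {
  int bend[5];
  for (uint8_t i = 0; i < 5; i++) bend[i] = readFinger(i);
  if (const char *label = classify(bend)) {
    bt.println(label);                              // companion app displays and speaks it
  }
  delay(100);                                       // roughly 10 Hz polling, assumed rate
}
```

In the actual system the matcher also weighs IMU orientation, so gestures that share a finger pose but differ in hand orientation can still be told apart.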

Technologies Used

The project leverages Arduino Nano as the primary microcontroller for its compact size and sufficient processing power. We utilized five flexible resistive sensors (flex sensors) to detect finger bending, an MPU6050 6-axis Inertial Measurement Unit for hand orientation tracking, and an HC-05 Bluetooth module for wireless communication with smartphones. The companion mobile application was developed using MIT App Inventor for Android, incorporating Google's Text-to-Speech API for voice output. Power management is handled by a 3.7V lithium-ion battery with a TP4056 charging module. The glove structure uses a comfortable fabric base with strategically placed sensors, connected via thin, flexible wires to minimize interference with natural hand movements. Our software stack includes C++ for Arduino programming, implementing sensor fusion algorithms to combine flex sensor and IMU data for accurate gesture recognition. We also developed a calibration interface that allows users to personalize the system to their specific hand characteristics and signing style.
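As a rough illustration of the per-user calibration idea, the routine below records each finger's minimum and maximum flex readings while the wearer holds an open hand and then a fist. It reuses the hypothetical FLEX_PINS, flexMin, and flexMax names from the sketch above, and the capture window length is likewise an assumption.

```cpp
// Hypothetical per-finger calibration: capture each sensor's extremes over a
// short window so later readings can be normalized to this user's hand.
void calibrateFingers(unsigned long windowMs) {
  for (uint8_t i = 0; i < 5; i++) {
    flexMin[i] = 1023;                     // start inverted and let readings widen the range
    flexMax[i] = 0;
  }
  unsigned long start = millis();
  while (millis() - start < windowMs) {    // e.g. 3000 ms: open hand, then make a fist
    for (uint8_t i = 0; i < 5; i++) {
      int raw = analogRead(FLEX_PINS[i]);
      if (raw < flexMin[i]) flexMin[i] = raw;
      if (raw > flexMax[i]) flexMax[i] = raw;
    }
    delay(10);
  }
}
```

In practice the companion app could trigger a routine like this over Bluetooth whenever a new user puts on the glove.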

Challenges & Learnings

One of our biggest challenges was achieving consistent gesture recognition across different users with varying hand sizes and signing styles. We addressed this through adaptive calibration algorithms and extensive testing with diverse user groups. Sensor noise and drift presented another significant hurdle—flex sensors can degrade over time and IMU readings can drift, so we implemented complementary filtering and calibration routines to maintain accuracy. Power management was critical since wearable devices need to last throughout a full day of use; we optimized our code to reduce unnecessary sensor polling and implemented sleep modes between gestures. We also learned valuable lessons about user-centered design through feedback from the deaf community—our initial prototypes were too bulky and had uncomfortable wire placements that we refined based on actual user testing. The project taught us the importance of interdisciplinary collaboration, combining expertise in electronics, programming, mechanical design, and crucially, understanding the needs and preferences of our target users. Perhaps most importantly, we learned that creating assistive technology requires deep empathy and continuous engagement with the community you're serving.
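To make the complementary-filter idea concrete, here is a small sketch of how a drifting MPU6050 gyro rate and a noisy accelerometer angle can be blended into a stable roll estimate. The register addresses follow the MPU6050 datasheet, but the 0.98/0.02 blend weights and the loop timing are illustrative assumptions rather than our exact tuning.

```cpp
#include <Wire.h>

const uint8_t MPU_ADDR = 0x68;             // MPU6050 default I2C address
float roll = 0.0f;                         // filtered roll estimate, degrees

// Read one big-endian 16-bit value from the current burst read.
int16_t readWord() {
  int16_t hi = Wire.read();
  int16_t lo = Wire.read();
  return (hi << 8) | lo;
}

void setupImu() {
  Wire.begin();
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x6B);                        // PWR_MGMT_1: clear the sleep bit
  Wire.write(0x00);
  Wire.endTransmission(true);
}

// dt = seconds since the previous call (e.g. 0.01 for a 100 Hz loop).
void updateRoll(float dt) {
  Wire.beginTransmission(MPU_ADDR);
  Wire.write(0x3B);                        // start of ACCEL_XOUT_H
  Wire.endTransmission(false);
  Wire.requestFrom(MPU_ADDR, (uint8_t)14); // accel (6) + temp (2) + gyro (6) bytes

  readWord();                              // ax, not needed for roll
  int16_t ay = readWord();
  int16_t az = readWord();
  readWord();                              // temperature, skipped
  int16_t gx = readWord();                 // rotation rate about the x axis
  readWord();                              // gy, skipped
  readWord();                              // gz, skipped

  // Accelerometer: absolute but noisy angle. Gyro: smooth but drifting rate.
  // The complementary filter trusts the gyro short-term and the accel long-term.
  float accRoll  = atan2((float)ay, (float)az) * 180.0f / PI;
  float gyroRate = gx / 131.0f;            // deg/s at the default +/-250 dps range
  roll = 0.98f * (roll + gyroRate * dt) + 0.02f * accRoll;
}
```

A filter along these lines keeps the orientation estimate usable between the periodic recalibrations mentioned above, without the cost of a full sensor-fusion library on the Nano.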

Results & Impact

Team SignTalk has achieved remarkable success in competitions and real-world validation. We became semifinalists in the prestigious Rice360 International Health Technology Design Competition 2025, competing against teams from around the world. The project received the DEI (Diversity, Equity, and Inclusion) Award from Rice360, recognizing our commitment to creating inclusive technology. We were crowned Divisional Champions at Youth Startup Summit 2025 and secured championship positions at both RoboSust AGP Competition and the University Innovation Hub Program (UIHP) Cohort-5. Beyond competitions, user testing with deaf community members showed an 85% accuracy rate in gesture recognition for trained gestures, with users reporting increased confidence in communication scenarios. The device has been tested in educational settings, public venues, and emergency communication contexts, demonstrating its practical viability. We've received inquiries about commercialization and partnerships from organizations working with the deaf community, and we're currently refining the design for potential mass production to make this technology accessible to more people who need it.

Achievements

  • Semifinalist Team at the Rice360 International Health Technology Design Competition 2025
  • DEI (Diversity, Equity, and Inclusion) Category Award at the Rice360 Competition
  • Divisional Champion at Youth Startup Summit 2025
  • Champion at RoboSust AGP Competition
  • Champion at University Innovation Hub Program (UIHP) Cohort-5