Team SignTalk: Translating Sign Language to Speech with a Smart Glove — Our Journey from SUST to International Recognition

By Rudra Sarker • Published March 20, 2026

Introduction

Imagine being surrounded by people you love — family at the dinner table, colleagues in a meeting, strangers on the street — and yet being unable to exchange a single word with any of them. For the estimated 466 million people worldwide living with disabling hearing loss, and the millions more who are non-verbal, this is not a hypothetical scenario. It is everyday life. Communication, the most fundamental human act, becomes a daily obstacle course when the medium you rely on — sign language — is not understood by those around you.

SignTalk was born from the conviction that technology can dissolve that barrier. What started as a university project at Shahjalal University of Science and Technology (SUST) in Sylhet, Bangladesh, evolved into an internationally recognized assistive technology platform: a wearable smart glove that reads hand gestures and translates them in real time into audible speech. This is the full story of how we built it, the setbacks we overcame, the recognition we earned, and what we believe this technology can mean for the communities that need it most.

You can explore the full technical project details at our SignTalk project page, and watch a live demonstration on our YouTube demo video.

The Problem We're Solving

According to the World Health Organization, approximately 63 million people in South Asia live with significant hearing impairment. In Bangladesh alone, conservative estimates put the number at over 2.5 million people who are deaf or hard of hearing, with a substantial proportion also being non-verbal. Despite this scale, mainstream education, healthcare, and public services remain largely inaccessible to this population, not because of any failing on their part, but because our systems are designed around spoken language.

Bengali Sign Language (BdSL) is a rich, expressive language with its own grammar, vocabulary, and cultural nuances. Yet it is understood by only a small fraction of the hearing population, primarily family members of deaf individuals and a limited number of trained sign language interpreters. This creates a profound asymmetry: the deaf community must navigate a world that has made no accommodation for their primary mode of communication.

The practical consequences are severe. A deaf patient visiting a hospital may struggle to describe symptoms or understand a diagnosis. A deaf student at a university has limited access to real-time lecture interpretation. A deaf job-seeker may be turned away not because of capability, but because of the perceived difficulty of communication. These are not edge cases — they are daily realities that compound into systemic exclusion and reduced quality of life.

Existing solutions, primarily professional sign language interpreters, are expensive, scarce, and rarely available on demand. Video relay services require internet connectivity and pre-arranged scheduling. Text-based workarounds, while useful, are slower and inadequate for fluid, natural conversation. What the community needs is an affordable, portable, always-on bridge between sign language and speech — and that is precisely what we set out to build.

Technical Architecture: How the Smart Glove Works

The SignTalk smart glove is a convergence of sensor technology, embedded computing, machine learning, and voice synthesis packaged into a lightweight, wearable form. Here is a detailed breakdown of every layer of the system.

Sensor Layer: Flex Sensors and Inertial Measurement

Five flex sensors — one per finger — form the backbone of gesture capture. These are thin, piezoresistive strips embedded along each finger of the glove. When a finger bends, the sensor's electrical resistance changes in proportion to the angle of flexion. By reading these resistance values through an analog-to-digital converter (ADC), the microcontroller receives a precise numerical representation of each finger's bend state, from fully extended (0°) to fully curled (approximately 90°).
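The ADC-to-angle mapping described above can be sketched in a few lines. This is an illustrative sketch only (the actual firmware is C++ on the microcontroller): it assumes a 10-bit ADC and per-finger calibration values recorded at full extension and full curl, which are made-up numbers here, not the team's real constants.

```python
# Sketch of flex-sensor conversion, assuming a 10-bit ADC (0-1023) and a
# per-finger calibration of the raw reading at full extension and full curl.
# Calibration values below are illustrative, not the actual firmware constants.

def adc_to_bend_angle(raw, flat_raw, curled_raw, max_angle=90.0):
    """Map a raw ADC reading to an approximate bend angle in degrees.

    flat_raw   -- ADC value recorded with the finger fully extended
    curled_raw -- ADC value recorded with the finger fully curled
    """
    span = curled_raw - flat_raw
    if span == 0:
        return 0.0
    # Clamp to the calibrated range so sensor noise cannot produce
    # angles outside [0, max_angle].
    fraction = (raw - flat_raw) / span
    fraction = max(0.0, min(1.0, fraction))
    return fraction * max_angle

# Example: a finger calibrated at 310 (flat) and 620 (curled).
angle = adc_to_bend_angle(465, flat_raw=310, curled_raw=620)
print(round(angle, 1))  # halfway between flat and curled -> 45.0
```

Because the mapping is calibrated per finger and clamped, a reading outside the calibrated range simply saturates at 0° or 90° rather than producing a nonsense angle.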

Finger position alone, however, is insufficient to distinguish the full vocabulary of sign language. Many signs are differentiated not just by hand shape but by the orientation of the hand in three-dimensional space — whether the palm faces up, down, toward the speaker, or at an angle. To capture this dimension, we integrated an MPU-6050 module, a six-axis inertial measurement unit (IMU) that combines a 3-axis accelerometer and a 3-axis gyroscope on a single chip. The accelerometer detects the static orientation of the hand relative to gravity, while the gyroscope captures dynamic rotational motion. Together, these two data streams give us a six-dimensional real-time snapshot of the hand's orientation and motion; combined with the five flex-sensor readings, every sample the glove produces is an 11-dimensional description of the hand's posture and movement.
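The static-orientation part of this is standard tilt sensing: when the hand is not accelerating, the accelerometer reads the gravity vector, from which roll and pitch can be recovered. The sketch below uses the conventional gravity-vector formulas; the axis convention and units are assumptions for illustration, not the project's actual sensor-fusion code.

```python
import math

def roll_pitch_from_accel(ax, ay, az):
    """Estimate static roll and pitch (degrees) from accelerometer readings.

    Standard gravity-vector tilt formulas; ax/ay/az may be in any consistent
    unit (g or m/s^2), since only their ratios matter.
    """
    roll = math.degrees(math.atan2(ay, az))
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))
    return roll, pitch

# Palm facing down, hand level: gravity entirely on the z axis,
# so both angles come out near zero.
r, p = roll_pitch_from_accel(0.0, 0.0, 1.0)
print(round(r, 1), round(p, 1))

# Gravity falling on the y axis corresponds to a 90-degree roll.
r, p = roll_pitch_from_accel(0.0, 1.0, 0.0)
print(round(r, 1))  # -> 90.0
```

Yaw cannot be recovered from gravity alone, which is one reason the gyroscope's rotational-rate stream matters for dynamic signs.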

Microcontroller and Signal Processing

At the heart of the glove sits an Arduino-compatible microcontroller responsible for polling the sensors at a fixed sampling rate, applying basic signal conditioning (noise filtering via a moving average algorithm), and packaging the data into structured frames for transmission. Debouncing logic ensures that brief, involuntary tremors or transition states do not trigger false classifications. The firmware is written in C++ and is optimized for low-latency operation, maintaining a processing cycle well under 50 milliseconds per sample.
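The two firmware ideas named above, moving-average smoothing and debouncing of classifications, are simple enough to sketch. The real implementation is C++ on the microcontroller; this Python version with an illustrative window size and repeat count just shows the logic.

```python
from collections import deque

class SmoothedChannel:
    """Moving-average filter for one sensor channel (a sketch; the real
    firmware is C++, and the window size here is illustrative)."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def update(self, raw):
        self.samples.append(raw)
        return sum(self.samples) / len(self.samples)

class Debouncer:
    """Require N consecutive identical classifications before accepting a
    gesture, so brief tremors and transition frames are ignored."""

    def __init__(self, required=3):
        self.required = required
        self.candidate = None
        self.count = 0

    def update(self, label):
        if label == self.candidate:
            self.count += 1
        else:
            self.candidate, self.count = label, 1
        return self.candidate if self.count >= self.required else None

ch = SmoothedChannel(window=3)
print(ch.update(10), ch.update(20), ch.update(30))  # 10.0 15.0 20.0
db = Debouncer(required=3)
print([db.update(g) for g in ["A", "A", "B", "B", "B"]])
# -> [None, None, None, None, 'B']
```

Note how the single "A, A, B" transition never fires: only a gesture held stable for the required number of frames is passed downstream.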

Wireless Transmission

Sensor data is transmitted wirelessly via Bluetooth Low Energy (BLE) to a paired smartphone or edge computing device. BLE was chosen over classic Bluetooth and Wi-Fi for its dramatically lower power consumption, which is critical for a wearable device that must operate for extended periods on a compact battery. The BLE module broadcasts data packets at a rate optimized to balance responsiveness against power draw.
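To make the packet framing concrete, here is one way a sensor frame could be laid out to fit a typical 20-byte BLE 4.x notification payload. The field order, scaling, and byte budget below are assumptions for illustration, not SignTalk's actual wire protocol.

```python
import struct

# Hypothetical frame layout: sequence number (uint16), 5 flex angles
# (uint8, degrees), 6 IMU values (int16, fixed-point) -> 19 bytes,
# which fits a typical 20-byte BLE 4.x notification payload.
FRAME_FMT = "<H5B6h"   # little-endian, no padding

def pack_frame(seq, flex_angles, imu_values):
    return struct.pack(FRAME_FMT, seq, *flex_angles, *imu_values)

def unpack_frame(payload):
    fields = struct.unpack(FRAME_FMT, payload)
    return {"seq": fields[0], "flex": list(fields[1:6]), "imu": list(fields[6:])}

frame = pack_frame(42, [10, 45, 80, 30, 5], [100, -50, 980, 3, -7, 12])
print(len(frame))                   # 19
print(unpack_frame(frame)["flex"])  # [10, 45, 80, 30, 5]
```

Keeping each frame inside a single notification avoids fragmentation and reassembly logic on the phone side, and the sequence number lets the receiver detect dropped packets without retransmission overhead.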

Machine Learning Gesture Recognition

The receiving device runs a trained machine learning model that maps incoming sensor vectors to discrete sign language gestures. We collected a labeled dataset of hundreds of gesture samples per class, performed by multiple users to build robustness against inter-user variability. Feature engineering included normalization of sensor readings, delta features (rate of change between frames), and windowed statistical summaries.
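The three feature families mentioned above (normalization, deltas, windowed statistics) can be sketched as follows. This is an illustration of the kind of feature extraction described, not the project's exact pipeline; the channel ranges would come from the calibration routine.

```python
import statistics

def window_features(window, lo, hi):
    """Turn a window of raw multi-channel samples into one feature vector:
    per-channel min-max normalization, then per-channel mean, standard
    deviation, and mean frame-to-frame delta. (lo, hi) are per-channel
    calibration ranges. A sketch, not the exact production pipeline.
    """
    n_channels = len(window[0])
    # Normalize each sample channel-wise into [0, 1].
    norm = [[(s[c] - lo[c]) / (hi[c] - lo[c]) for c in range(n_channels)]
            for s in window]
    features = []
    for c in range(n_channels):
        series = [row[c] for row in norm]
        deltas = [b - a for a, b in zip(series, series[1:])]
        features.append(statistics.mean(series))                    # level
        features.append(statistics.pstdev(series))                  # spread
        features.append(statistics.mean(deltas) if deltas else 0.0) # motion
    return features

window = [(300, 0.1), (320, 0.2), (340, 0.3)]   # 2 channels, 3 frames
feats = window_features(window, lo=(280, 0.0), hi=(360, 1.0))
print(len(feats))   # 3 features per channel x 2 channels = 6
```

The delta features are what let a classifier separate a static hand shape from a moving sign that passes through the same shape.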

We evaluated several classifiers including k-Nearest Neighbors, Support Vector Machines, Random Forests, and lightweight neural networks. Our final architecture uses a small feedforward neural network trained with TensorFlow and converted to TensorFlow Lite for on-device inference, enabling the model to run directly on the smartphone without requiring a cloud connection. This was a deliberate design choice: internet dependency would make the device unreliable in rural Bangladesh where connectivity is inconsistent.
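For readers unfamiliar with what "on-device inference with a small feedforward network" amounts to, the forward pass is just a few matrix operations. The real model is trained in TensorFlow and executed via TensorFlow Lite; the pure-Python toy below, with made-up weights and only two classes, simply illustrates the shape of the computation.

```python
import math

# Toy forward pass of a feedforward classifier. Weights are fabricated
# for demonstration; the real model runs via TensorFlow Lite.

def dense(x, weights, biases, activation):
    out = []
    for w_row, b in zip(weights, biases):
        z = sum(wi * xi for wi, xi in zip(w_row, x)) + b
        out.append(activation(z))
    return out

def relu(z):
    return max(0.0, z)

def softmax(zs):
    m = max(zs)                     # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

def predict(x, hidden_w, hidden_b, out_w, out_b):
    h = dense(x, hidden_w, hidden_b, relu)
    logits = [sum(wi * hi for wi, hi in zip(w_row, h)) + b
              for w_row, b in zip(out_w, out_b)]
    return softmax(logits)

# Toy network: 3 input features -> 2 hidden units -> 2 gesture classes.
probs = predict([0.9, 0.1, 0.4],
                hidden_w=[[1.0, 0.0, 0.5], [0.0, 1.0, -0.5]],
                hidden_b=[0.0, 0.0],
                out_w=[[2.0, 0.0], [0.0, 2.0]],
                out_b=[0.0, 0.0])
print(probs.index(max(probs)))   # class 0 wins for this input
```

Because inference is just this arithmetic over a few thousand weights, it runs in well under a millisecond on a phone, which is why no cloud round-trip is needed.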

Text-to-Speech and Output Interface

Recognized gestures are mapped to text strings and fed into the device's text-to-speech (TTS) engine, which synthesizes and plays the corresponding spoken word or phrase in real time. The mobile application we built also displays the recognized text on screen, providing an additional modality for environments where audio playback is impractical. The app is built with a clean, accessible interface designed with input from members of the deaf community, featuring large text, high contrast, and intuitive controls.
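The output stage is conceptually a lookup plus a little hygiene: map a gesture class to its phrase, suppress consecutive repeats so a held sign is not spoken in a loop, and hand the text to whatever TTS backend the platform provides. The phrase table and the injectable `speak` hook below are hypothetical stand-ins for the app's real vocabulary and the platform TTS API.

```python
# Sketch of the output stage. The phrase table and speak() hook are
# illustrative; on the phone, text would go to the platform's TTS engine.

PHRASES = {           # hypothetical subset of the gesture vocabulary
    0: "Hello",
    1: "Thank you",
    2: "I need help",
}

class SpeechOutput:
    def __init__(self, speak=print):
        self.speak = speak          # injectable TTS backend
        self.last_gesture = None

    def emit(self, gesture_id):
        # Ignore unknown classes and consecutive repeats of the same sign.
        if gesture_id not in PHRASES or gesture_id == self.last_gesture:
            return None
        self.last_gesture = gesture_id
        text = PHRASES[gesture_id]
        self.speak(text)
        return text

spoken = []
out = SpeechOutput(speak=spoken.append)
for g in [0, 0, 2, 2, 1]:
    out.emit(g)
print(spoken)   # ['Hello', 'I need help', 'Thank you']
```

Making the speech backend injectable also makes the on-screen text display trivial: the same `emit` call can feed both the TTS engine and the UI.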

Development Journey: From Ideation to Prototype

The seed of SignTalk was planted during an informal conversation at SUST's engineering building, when a teammate described watching a family member with hearing impairment struggle at a government office because no interpreter was present. The frustration and indignity of that situation sparked a question: could we build something that would give deaf individuals an independent voice?

Our first prototype was crude by any measure. We sourced flex sensors online, wired them to an Arduino Uno with jumper cables, and taped the assembly together. The first recognition attempt could distinguish only three hand shapes, and even that was unreliable, prone to misclassification whenever the glove shifted slightly or the wearer's hand changed temperature. That prototype taught us more through its failures than through its successes.

Sensor calibration was our most persistent early challenge. Flex sensors are highly sensitive to temperature, humidity, and the exact mounting position on the glove. We went through four iterations of the glove's physical design, experimenting with different fabric substrates, sensor mounting positions, and adhesive methods before settling on a design that delivered consistent readings across sessions and across different wearers. We built a calibration routine into the firmware that users run at startup, which records individual baseline readings and adjusts thresholds accordingly.
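The startup calibration described above can be sketched as a two-pose routine: average some samples with the hand fully open, then with it fully closed, and use those averages as the per-channel normalization range for the session. The sample counts and the simulated sensor below are illustrative, not the firmware's actual values.

```python
# Sketch of the startup calibration routine: record per-channel baselines
# for two reference poses and derive each channel's working range.

def calibrate(read_sample, n_samples=20):
    """read_sample() returns one tuple of raw channel readings.
    Returns (lo, hi) per-channel baselines, each averaged over n_samples.
    """
    def average_pose():
        samples = [read_sample() for _ in range(n_samples)]
        n_ch = len(samples[0])
        return [sum(s[c] for s in samples) / n_samples for c in range(n_ch)]

    # In the real device the user is prompted between these two steps.
    flat = average_pose()    # hand fully open
    fist = average_pose()    # hand fully closed
    lo = [min(a, b) for a, b in zip(flat, fist)]
    hi = [max(a, b) for a, b in zip(flat, fist)]
    return lo, hi

# Simulated sensor: first 20 reads are the "flat" pose, next 20 the "fist".
readings = iter([(300, 310)] * 20 + [(620, 600)] * 20)
lo, hi = calibrate(lambda: next(readings))
print(lo, hi)   # [300.0, 310.0] [620.0, 600.0]
```

Taking min/max per channel rather than assuming "fist reads higher" also tolerates sensors that are mounted with inverted polarity.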

The machine learning pipeline required even more iteration. Our first datasets were small and recorded only by team members, which led to a model that performed brilliantly in the lab but collapsed when tested by external users. We organized structured data collection sessions with volunteers, eventually building a dataset of over 4,000 samples across 30 gesture classes. Each subsequent dataset expansion brought measurable improvement in real-world classification accuracy.

Hardware miniaturization was a parallel workstream. The Arduino Uno, while excellent for prototyping, is too bulky for comfortable daily wear. We transitioned to a smaller microcontroller platform and experimented with routing the wiring inside the glove's fabric to eliminate exposed wires. Battery placement, weight distribution, and glove sizing all required careful iteration to produce something a real user would actually want to wear.

Team Collaboration: Engineering Across Disciplines

SignTalk is not the work of any single individual — it emerged from the collective effort of a team that deliberately drew on diverse academic backgrounds. Our core team at SUST spans computer science and engineering, electrical and electronic engineering, and information technology. This multidisciplinary composition was not accidental; it was the architectural decision that made SignTalk viable.

The hardware design and sensor integration work was led by teammates with backgrounds in electronics and embedded systems, who were comfortable reading datasheets, soldering components, and designing PCB layouts. The machine learning and software development workstreams were driven by computer science team members. User interface design and user research were shared responsibilities where different perspectives helped avoid the trap of building technology that engineers love but users find confusing.

Our faculty advisors at SUST provided guidance at critical decision points — particularly around architecture choices that would scale beyond the prototype, and around academic rigor in our documentation. The Innovation Education LLC also provided mentorship and support in shaping our project narrative and competitive strategy, connecting us with mentors who had experience taking hardware startups from prototype to product. You can learn more about their work at the Innovation Education LLC Facebook page.

Working across disciplines required constant communication discipline. Different fields have different vocabularies, different timelines, and different intuitions about what "good enough" means. We instituted weekly cross-functional syncs, shared documentation on a collaborative platform, and adopted the habit of always explaining the "why" behind technical decisions so that teammates from other disciplines could meaningfully evaluate and challenge our choices.

Milestones and Recognition

SignTalk's competitive journey began at the Youth Innovation Challenge (YSS) Sylhet, a regional innovation competition that draws student teams from across the greater Sylhet division. We entered with our second-generation prototype and a well-rehearsed pitch that framed the problem, demonstrated the technology, and articulated a clear path to impact. The win was not just a trophy — it was validation that our approach resonated beyond the engineering lab. Read the full coverage of our win at Startup Bangladesh and the United News of Bangladesh (UNB).

The YSS win opened doors. We submitted to additional competitions and received coverage from multiple Bangladeshi media outlets. Daily Sokaler Somoy and BDNews24 Bangla ran features on the project, significantly expanding our reach and introducing SignTalk to a national audience that included potential users, investors, and collaborators.

The announcement of our qualification for the Rice360 Global Health Tech Design Competition at Rice University in Houston, Texas, marked SignTalk's transition from a national story to an international one. Rice360 is one of the most prestigious global health technology competitions in the world, bringing together student innovators from dozens of countries to address pressing healthcare challenges. Being selected from among hundreds of applicants validated not just our technology but our approach to problem framing, user research, and impact assessment. Our LinkedIn update on this milestone received significant engagement from the professional community: read the LinkedIn announcement. See also the team's LinkedIn share from team member Farzana Hussain.

Concurrent with the Rice360 qualification, the SUST Innovation Hub selected SignTalk as one of 15 student startups to receive pre-seed funding — a structured program of financial support, mentorship, and market access designed to help promising university projects cross the difficult chasm from prototype to product. This recognition from our home institution carried particular meaning: it represented the university's institutional bet that SignTalk has the potential to become a real, sustainable business. The Facebook coverage from The Positive One captured the community's enthusiasm around this achievement.

Social Impact and User Stories

Numbers and competition wins tell only part of the story. The part that matters most happens when a real user puts on the glove and speaks for the first time through technology they could never afford or access before.

During our user testing sessions, organized with the support of a local school for the deaf in Sylhet, we observed something we had not fully anticipated: the emotional weight of the moment. One participant, a teenager who had never been able to communicate verbally with his hearing classmates, produced his first spoken words through the glove. The reaction — from him, from his parents who were present, from our team — was difficult to describe as purely technical success. It reframed everything we were doing.

User feedback from these sessions directly shaped our development priorities. Participants consistently flagged the importance of low latency — any delay longer than about 300 milliseconds between gesture completion and speech output felt unnatural and disrupted conversational flow. They also highlighted the need for a larger vocabulary; our initial 30-class model, while a solid proof of concept, covered only a fraction of everyday conversational needs. And they emphasized ease of donning and doffing the glove, particularly for users with varying levels of fine motor control.

These sessions reinforced a principle we now hold firmly: assistive technology designed without sustained, substantive involvement from the communities it serves is unlikely to succeed in the field, regardless of how elegant the underlying engineering may be.

Future Roadmap

The current SignTalk prototype is a strong foundation, but we are under no illusion that it is a finished product. Our roadmap over the next 12–24 months is organized around three parallel tracks: vocabulary expansion, hardware refinement, and pathway to scale.

On the vocabulary front, our target is to expand from 30 gesture classes to 200+, covering the most common conversational phrases in Bengali Sign Language. This requires additional data collection, model retraining, and collaboration with BdSL linguists to ensure we are capturing signs with full fidelity to their conventional usage. We are also exploring two-hand gesture support, which significantly expands expressive range but adds hardware and algorithmic complexity.

The hardware development track is focused on miniaturization and durability. We want the glove to be indistinguishable from ordinary smart sportswear — no visible sensors, no dangling wires, a washable fabric construction, and a battery life of at least 8 hours of continuous use. This requires moving from off-the-shelf development boards to a custom PCB that integrates the microcontroller, BLE module, and power management circuitry on a single small form-factor board.

The scale pathway depends on the SUST Innovation Hub's pre-seed support and any follow-on investment. Our cost analysis suggests that a manufacturable unit can reach a price point accessible to middle-income families in Bangladesh, particularly if produced at volume. We are also exploring partnerships with NGOs and government disability welfare programs to create subsidized access for families below the poverty line.

Lessons Learned

If there is one overarching lesson from the SignTalk journey, it is that the gap between "working in the lab" and "working in the field" is enormous, and the only way to close it is through disciplined, repeated user engagement. Every time we tested with real users, we discovered things we had missed in controlled conditions. Every discovery led to a better product.

For other student engineers contemplating hardware projects in the assistive technology space, here is what we would tell our earlier selves: Start with the user problem, not the technology. Build the simplest possible prototype that can be tested with a real user, and test it as early as you can tolerate the embarrassment of something unfinished. Document everything — not because you will always need it, but because the moments when you do need it (a competition application, a journal submission, an investor conversation) will catch you off guard. And compete aggressively — competitions like YSS, Rice360, and the SUST Innovation Hub are not just about winning prizes. They are pressure-tested environments that force clarity of thought and communication, skills as valuable as any technical capability.

We are grateful for every mentor, judge, user, and teammate who has contributed to SignTalk. The journey is far from over.
