Part 2 of the Two-Way Communication Series  ·  Read Part 1: Speech2Braille →

Being heard is fundamental to human dignity. For DeafBlind students at Helen Keller Indonesia in Yogyakarta, a new pipeline built by AI4DeafBlind.org makes it possible in a way it never was before — translating the press of Braille keys into a spoken voice, in real time, on a device that fits in a classroom and costs a fraction of commercial alternatives.

Completing the Loop: From Listening to Speaking

Our first pipeline, Speech2Braille, gave DeafBlind students the ability to receive — converting a teacher's spoken Indonesian into Braille output on a refreshable display. It was a profound step. But a classroom where only the teacher's voice is heard is not yet a conversation.

Braille2Speech completes the loop. It does the reverse: when a student types on a Perkins Braille keyboard, the system converts those keystrokes into spoken Indonesian — read aloud through a speaker for the teacher and anyone in the room to hear. For the first time, a DeafBlind student can answer a question, express a need, contribute an idea, or simply say they understand, without requiring a human interpreter in the room.

Together, Speech2Braille and Braille2Speech create a fully bidirectional, real-time communication channel between a teacher and a DeafBlind student — both pipelines running entirely offline, on affordable hardware, in a classroom with or without an internet connection.

| Pipeline | Direction | Input | Output | Core Technology |
| --- | --- | --- | --- | --- |
| Speech2Braille | Teacher → Student | Spoken Indonesian | Braille on refreshable display | Whisper ASR (fine-tuned) |
| Braille2Speech | Student → Teacher | Perkins keyboard input | Spoken Indonesian | Liblouis + Piper TTS |
At a glance: 100% offline operation · runs on Raspberry Pi · 2 bidirectional pipelines · 3 partner organizations in testing.

How It Works: The Technology Inside Braille2Speech

The Braille2Speech pipeline is elegant in its design: three open-source components, each doing one job precisely, chained together to convert a finger's movement into a voice in under a second.

Step 1 — Perkins Keyboard Input

The student types on a Perkins Brailler keyboard — a tactile, six-key input device designed for Braille. Each keystroke combination encodes a Braille cell, representing a letter, number, or word contraction. The raw Braille cell output is captured and passed to the translation engine.
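To make the chord-to-cell step concrete, here is a minimal sketch. The dot numbering and Unicode mapping are standard Braille conventions; the input source is hypothetical, since a real deployment would read chords from the keyboard's USB or serial interface rather than from a Python set.

```python
# Sketch: mapping a Perkins-style six-key chord to a Braille cell.
# Dots 1-6 map to bits 0-5 of the Unicode Braille block (U+2800).
DOT_BITS = {1: 0x01, 2: 0x02, 3: 0x04, 4: 0x08, 5: 0x10, 6: 0x20}

def chord_to_cell(dots):
    """Convert a set of pressed dot numbers (1-6) into a Unicode Braille cell."""
    mask = 0
    for dot in dots:
        mask |= DOT_BITS[dot]
    return chr(0x2800 + mask)

# Dots 1+2+4 together form the cell for the letter "f".
print(chord_to_cell({1, 2, 4}))  # ⠋
```

Each captured cell can then be appended to a buffer and handed to the translation engine in Step 2.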

Step 2 — Liblouis: Braille to Indonesian Text

Liblouis is an open-source Braille translation library that supports dozens of languages including Indonesian. In the Braille2Speech pipeline, Liblouis performs back-translation: it reads the raw Braille cells from the keyboard and converts them into readable Indonesian text. This is the linguistic bridge — transforming the student's tactile language into a form a TTS engine can pronounce. The AI4DeafBlind.org team, in collaboration with WATAP and KreasiTech Indonesia, verified Liblouis's back-translation accuracy using loaned refreshable Braille displays, ensuring the text produced faithfully represents what the student intended to type.
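The shape of back-translation can be illustrated with a toy version. The table below is a deliberately tiny stand-in: Liblouis replaces it with full, contraction-aware Indonesian Braille tables, but the cell-to-text direction of the mapping is the same.

```python
# Toy back-translation: Unicode Braille cells -> Latin text.
# A stand-in for Liblouis, whose Indonesian tables also handle
# numbers, punctuation, and contractions; this covers only a few
# uncontracted letters for illustration.
TOY_TABLE = {
    "⠁": "a", "⠃": "b", "⠉": "c", "⠊": "i", "⠎": "s",
    "⠽": "y", "⠀": " ",
}

def back_translate(cells):
    """Back-translate a string of Braille cells into plain text."""
    return "".join(TOY_TABLE.get(cell, "?") for cell in cells)

print(back_translate("⠎⠁⠽⠁"))  # saya ("I" in Indonesian)
```

In the real pipeline this step is a single call into Liblouis with the appropriate Indonesian table, which is exactly what the WATAP-loaned displays were used to verify.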

Step 3 — Piper TTS: Indonesian Text to Speech

Piper is a fast, open-source, locally deployable text-to-speech engine. Deployed on a Raspberry Pi with an Indonesian language voice model, Piper converts the Liblouis-produced text into natural-sounding spoken Indonesian in real time. No cloud API. No subscription fee. No network request. The voice emerges from the device itself, aloud in the classroom, within moments of the student's last keystroke.
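Piper's command-line interface takes text on standard input and a voice model path, and writes a WAV file. A minimal sketch of driving it from Python follows; the model path is a placeholder for whichever Indonesian voice model is downloaded locally.

```python
import subprocess

# Sketch: speaking Liblouis output through the Piper CLI.
MODEL = "models/id_ID-voice-medium.onnx"  # hypothetical local model path

def build_piper_cmd(model, wav_path):
    """Assemble the Piper command line; the text itself goes on stdin."""
    return ["piper", "--model", model, "--output_file", wav_path]

def speak(text, model=MODEL, wav_path="out.wav"):
    """Run Piper locally and return the path of the synthesized WAV file."""
    subprocess.run(build_piper_cmd(model, wav_path),
                   input=text.encode("utf-8"), check=True)
    return wav_path

# Usage (requires piper and a voice model installed):
# speak("Selamat pagi")  # writes out.wav, ready to play on a speaker
```

Because the whole exchange is a local process call, there is no network round-trip in the latency budget: the time from keystroke to audio is bounded by Liblouis translation plus Piper inference on the Pi itself.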

Why Raspberry Pi? A Raspberry Pi costs a fraction of a commercial workstation and consumes minimal power. Deploying Piper TTS locally on a Raspberry Pi means the entire Braille2Speech system can run in any classroom — with or without electricity from the grid, with or without internet — making it viable for schools across Indonesia and beyond where infrastructure cannot be assumed.

Empowerment Through Participation

In the 1:1 and small-group teaching environments at Helen Keller Indonesia, Braille2Speech is fundamentally about participation. Not just access — participation.

Before this technology, a DeafBlind student's ability to contribute to a lesson depended entirely on the teacher's capacity to interpret their tactile responses in the moment. The student could understand, but expressing that understanding required significant mediation. Questions went unanswered in the student's own words. Ideas stayed unspoken. The teacher could sense engagement, but gauging comprehension — truly gauging it — was difficult.

"When the student typed their answer and the device spoke it aloud, it changed the dynamic of the whole lesson. For the first time, I could hear them."

— Teacher, Helen Keller Indonesia (Braille2Speech field test)

With Braille2Speech, a student types on their Perkins keyboard and their words are spoken aloud to their teacher. This simple act — a student answering a question in their own words, heard directly by the teacher — carries profound significance in a DeafBlind educational context. It means:

Students can answer questions. Rather than nodding or gesturing, a student types their response and the room hears it. The teacher receives unambiguous feedback on comprehension.

Students can express needs. A student who is confused, tired, or in discomfort can communicate that directly and immediately, without waiting for an interpreter or a physical check-in.

Students can engage as equal participants. The classroom dynamic shifts from a teacher delivering to a student receiving, toward a genuine dialogue — two people exchanging information at the speed of conversation.

Teachers at Helen Keller Indonesia found this capability particularly transformative for gauging understanding during lessons. The ability to hear a student's typed words aloud — spontaneously, without additional mediation — allows for more responsive, adaptive teaching. When a student's answer reveals a misconception, the teacher can address it immediately. When an answer demonstrates mastery, the moment of recognition is shared.

Empowerment in practice: For a DeafBlind student, being heard is not a small thing. It is participation in the community of the classroom — and through it, in the wider community of school, neighborhood, and life. Braille2Speech gives students a voice that travels beyond their fingertips.


A Collaborative Journey: WATAP, KreasiTech, and the Testing Partnership

The development of Braille2Speech was not a solo effort. It is the product of a deliberate, multi-partner collaboration built on the conviction that assistive technology for DeafBlind communities must be tested rigorously, in real conditions, with real users — before it reaches a classroom.

WATAP: Verifying Translation Accuracy

A critical challenge in building Braille2Speech was validating the accuracy of Liblouis's back-translation for Indonesian Braille. Without access to physical verification tools, errors in translation could go undetected until they reached students. Through our partnership with WATAP — the Washington Assistive Technology Act Program — the AI4DeafBlind.org team was able to use loaned refreshable Braille displays during the testing phase. These devices allowed engineers to read back the Liblouis-translated output directly in Braille, character by character, confirming that the pipeline accurately rendered what each student typed before the Piper TTS engine ever pronounced a word.

This partnership exemplifies the kind of resource-sharing that makes community-grounded assistive technology development possible. WATAP's loan program removed a significant barrier to high-quality testing without requiring the team to purchase expensive specialized hardware.

KreasiTech Indonesia, UW, and WSU: Optimizing for Raspberry Pi

Running a full text-to-speech pipeline on a Raspberry Pi — a device with modest processing power — requires more than simply installing software. It requires careful optimization: tuning Piper's voice model for low-latency inference, managing memory efficiently so Liblouis and Piper can run concurrently, and ensuring the system remains responsive under the real-world conditions of a classroom.
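One concrete pattern for keeping the two stages responsive on modest hardware is a producer-consumer pipeline: translation and synthesis run in separate threads joined by a bounded queue, so neither blocks the other and memory stays capped. The sketch below uses stub functions in place of the actual Liblouis and Piper calls; it illustrates the concurrency structure, not the team's exact implementation.

```python
import queue
import threading

def translate_stage(chords, out_q):
    """Producer: back-translate input and hand text to the TTS stage."""
    for chord in chords:
        out_q.put(chord.upper())  # stub for a Liblouis back-translation call
    out_q.put(None)  # sentinel: no more input

def tts_stage(in_q, spoken):
    """Consumer: synthesize each utterance as soon as it arrives."""
    while True:
        text = in_q.get()
        if text is None:
            break
        spoken.append(text)  # stub for a Piper synthesis call

q = queue.Queue(maxsize=8)  # bounded queue caps memory use on the Pi
spoken = []
producer = threading.Thread(target=translate_stage, args=(["ya", "tidak"], q))
consumer = threading.Thread(target=tts_stage, args=(q, spoken))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(spoken)  # ['YA', 'TIDAK']
```

The bounded queue is the key tuning knob: it lets the keyboard stage stay ahead of synthesis during bursts of typing without letting backlogged text exhaust the Pi's memory.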

This optimization work was carried out collaboratively by engineers and scientists at the University of Washington, Washington State University, and KreasiTech Indonesia. The Indonesian engineers brought essential knowledge of local hardware availability and the specific acoustic environment of classrooms in Yogyakarta. The US university teams contributed expertise in embedded systems optimization and model inference efficiency. Together, they produced a system that is not only technically sound but practically reliable in the setting for which it was designed.

The Braille2Speech Collaboration

  • AI scientists and engineers, AI4DeafBlind.org
  • Engineers and interns, University of Washington
  • Engineers and interns, Washington State University
  • KreasiTech Indonesia — Raspberry Pi optimization
  • WATAP — loaned Braille display hardware for testing
  • Educators and students, Helen Keller Indonesia, Yogyakarta

Helen Keller Indonesia itself remained central to every decision. The school's educators shaped the interaction model — how quickly speech should follow a keystroke, at what volume, through what kind of speaker, in what kind of physical arrangement at the classroom table. Technology that works in the abstract must also work in the particular, and the team at Helen Keller Indonesia provided the grounded, practical perspective that turned a working prototype into a deployable tool.

The Stronger Together Vision: Why Bidirectionality Matters

The theme of the Washington State Department of Services for the Blind Stronger Together Conference resonates deeply with what Braille2Speech represents. Stronger Together is not simply a slogan — it describes the mechanism by which DeafBlind individuals gain access to fuller participation in education and community life: through technology that removes communication barriers, through partnerships that pool expertise and resources, and through systems designed for the people who use them.

A one-directional communication tool — however well designed — leaves one party in the conversation silent. Speech2Braille gave students access to their teacher's voice. Braille2Speech gives teachers access to their student's. Each pipeline is more powerful in the presence of the other. Together, they make a conversation possible where only a monologue existed before.

That is what "stronger together" means in a classroom in Yogyakarta: a teacher and a student, facing each other across a table, exchanging ideas at the speed of thought — one through speech, one through Braille — without intermediary, without delay, without exclusion.

Looking Ahead: World Conference on Deafblindness 2027

AI4DeafBlind.org is preparing an abstract for the World Conference on Deafblindness 2027, presenting the full bidirectional communication system — Speech2Braille and Braille2Speech — as an integrated, deployable, offline AI pipeline for DeafBlind education in under-resourced settings.

Our submission will document the complete pipeline architecture, the testing methodology developed in partnership with WATAP, the Raspberry Pi optimization approach developed with KreasiTech, UW, and WSU, and the field outcomes from Helen Keller Indonesia in Yogyakarta.

We welcome contact from researchers, practitioners, and organizations working in DeafBlind education worldwide who are interested in collaborating on the abstract, replicating the pipeline in other languages, or presenting alongside us at WCDB 2027.

Why Open, Offline, and Affordable Must Go Together

The choice to build on Liblouis and Piper — both free, open-source tools — was not incidental. It reflects a principled position: that the DeafBlind community should not be dependent on proprietary platforms, cloud subscriptions, or hardware they cannot afford in order to communicate.

Commercial TTS systems can produce impressive-sounding voices, but they require persistent internet access and often charge per use. In a school in Yogyakarta where connectivity is intermittent and budgets are constrained, these are disqualifying conditions. Piper's local deployment model means the cost of the voice is fixed at the cost of the Raspberry Pi — a one-time, modest investment — and every word spoken thereafter costs nothing.

Liblouis is maintained by a global open-source community and supports Indonesian Braille with no licensing cost. By building on this foundation, AI4DeafBlind.org ensures that any school, organization, or community anywhere in the world can deploy Braille2Speech, adapt it for their local Braille code, and maintain it independently — without dependence on any single company or product.

This is what inclusive technology looks like at its most principled: not just accessible to the user, but accessible to the organizations that serve them.


Summary

Braille2Speech completes what Speech2Braille began. Together, these two pipelines — both running offline on affordable hardware, both developed through genuine US–Indonesia partnership — create a real-time, two-way communication channel between DeafBlind students and their teachers at Helen Keller Indonesia.

The Perkins keyboard carries a student's thoughts. Liblouis translates them from Braille into text. Piper speaks them aloud in Indonesian. The teacher hears. The student is understood. In a 1:1 teaching environment, that is the entire purpose of communication — achieved through three open-source components, a Raspberry Pi, and a collaboration spanning Seattle and Yogyakarta.

Participation is not a privilege. It is a right. Braille2Speech is one more step toward ensuring that DeafBlind students can exercise it — not in spite of their disability, but in full acknowledgment of it, with technology built to meet them where they are.

Tags: Braille2Speech, Piper TTS, Liblouis, Perkins Keyboard, DeafBlind, Raspberry Pi, Offline AI, Assistive Technology, Helen Keller Indonesia, WATAP, KreasiTech, University of Washington, Washington State University, Bidirectional Communication, AI4DeafBlind, WCDB 2027, Stronger Together

About the Author

Kim-Fu Lim, founder of AI4DeafBlind.org

Kim-Fu Lim is a Computer Science and Mathematics graduate from the University of Washington with an extensive background in developing mission-critical software systems. His career spans a diverse range of technical domains, including Electronic Design Automation for Data I/O, large-scale customer acquisition and care applications for Accenture's wireless clients, and unified communication servers for Active Voice.

He founded iCrescendo after 9/11, specializing in critical infrastructure protection for power substations and water infrastructure. At Amazon, he managed cross-functional teams to deliver solutions for Sellers Onboarding, Fulfillment Centers, AWS Commerce Platform, and Alexa Engine. Most recently, he has focused on developing bidirectional communication smart devices designed specifically for DeafBlind teachers and students, bridging the gap in assistive technology through innovative engineering.

Disclosure: This article was developed with the assistance of AI tools for structural and editorial refinement. The technical concepts and final review were provided by the author.