Vision-Braille: A Curriculum Learning Toolkit and Braille-Chinese Corpus for Braille Translation
Alan Wu, Ye Yuan, Zhiping Xiao, Ming Zhang · Jul 8, 2024 · Citations: 0
Abstract
We present Vision-Braille, the first publicly available end-to-end system for translating Chinese Braille extracted from images into written Chinese. The system addresses two challenges unique to this task: scarce annotated resources and the habitual omission of tone markers in Braille writing. It couples a robust Braille OCR pipeline with an LLM fine-tuned for sequence-to-sequence translation. We construct a synthetic Braille-Chinese corpus that includes tone-omission variants mimicking authentic Braille writing habits, and we fine-tune the model with a four-stage curriculum: sentence-level data with full tone markers, then passage-level data, then a tone-omission schedule with progressively lower tone retention, and finally consolidation on passages with heavy tone omission. On passage-level translation with 10% tone retention, Vision-Braille achieves 83.28 BLEU. Vision-Braille offers an inclusive NLP solution that helps students with visual impairments participate in mainstream education by enabling teachers to grade Braille homework without extensive Braille training. Our code and data are available at https://anonymous.4open.science/r/EMNLP_2026_Supp_Code_Data-2F6D.
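To make the curriculum concrete, the sketch below shows one plausible way to implement the tone-omission schedule: tone-marker cells are kept independently with a stage-specific retention probability that decreases across the four stages. Everything here (the TONE_CELLS inventory, the drop_tones and build_stage helpers, and the intermediate 0.5 retention value) is an illustrative assumption, not the authors' released code; only the four-stage shape and the final 10% retention come from the abstract.

```python
import random

# Minimal sketch of a tone-omission curriculum, assuming Braille text is
# represented as sequences of Unicode Braille cells. Mandarin Braille marks
# tones 1-4 with dedicated cells and leaves the neutral tone unmarked; the
# exact cell inventory below is our assumption.
TONE_CELLS = {"⠁", "⠂", "⠄", "⠆"}

def drop_tones(cells, retention, rng):
    """Keep each tone-marker cell independently with probability `retention`."""
    return [c for c in cells if c not in TONE_CELLS or rng.random() < retention]

# Four-stage curriculum from the abstract: full-tone sentences, full-tone
# passages, a decaying-retention schedule, then heavy omission. The 0.5
# midpoint is illustrative; 0.1 matches the 10% retention evaluated.
CURRICULUM = [
    ("stage1_sentences_full_tone", 1.0),
    ("stage2_passages_full_tone", 1.0),
    ("stage3_passages_decaying_tone", 0.5),
    ("stage4_passages_heavy_omission", 0.1),
]

def build_stage(examples, retention, seed=0):
    """Materialize one curriculum stage from a list of Braille-cell sequences."""
    rng = random.Random(seed)
    return [drop_tones(ex, retention, rng) for ex in examples]

if __name__ == "__main__":
    demo = [list("⠛⠥⠁⠛⠥⠂")]  # toy cell sequence containing two tone marks
    for name, retention in CURRICULUM:
        print(name, ["".join(s) for s in build_stage(demo, retention)])
```

Sampling drops per tone marker, rather than per sentence, is one natural reading of the abstract's goal of mimicking authentic writing habits, where individual tone marks are omitted inconsistently rather than all at once.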