Enhancing Augmentative and Alternative Communication Systems with Fine-Tuned GPT-3: Improving Predictive Text for Users with Speech and Language Impairments

Author(s): Arnav Gupta

Abstract

This research investigates the fine-tuning of large language models, specifically GPT-3, to enhance predictive text and related functionality in Augmentative and Alternative Communication (AAC) systems for users with speech and language impairments. Through domain-adaptive pre-training and multi-task learning, the GPT-3 model was tailored to the linguistic needs of AAC users, yielding significant reductions in perplexity and gains in keystroke savings and communication rate. User feedback highlighted the model's improved accuracy, ease of use, and overall satisfaction, underscoring its potential to reduce the cognitive and physical effort of AAC communication. Despite challenges related to data scarcity, computational demands, and bias mitigation, the study demonstrates the promise of advanced language models for creating more personalized, efficient, and user-friendly AAC tools. The findings provide a foundation for future research aimed at further refining and expanding the capabilities of AAC technologies.
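As a point of reference for the keystroke-savings metric mentioned above, the sketch below shows the standard way this quantity is commonly computed in predictive-text evaluation: the percentage of keystrokes a user avoids relative to typing every character of the target text. The function name and example sentence are illustrative, not taken from the study.

```python
def keystroke_savings(target: str, keystrokes: int) -> float:
    """Percentage of keystrokes saved versus typing every character.

    target     -- the full text the user intended to produce
    keystrokes -- the number of key presses actually needed with prediction
    """
    total = len(target)  # baseline: one keystroke per character
    return 100.0 * (total - keystrokes) / total

# Producing a 20-character sentence with only 8 key presses:
print(keystroke_savings("predictive text demo", 8))  # -> 60.0
```

Higher values indicate that word prediction is completing more of the message, which is the effect the reported improvements describe.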