Abstract
In healthcare, remarkable progress in machine learning has given rise to a diverse range of predictive and decision-making medical models that significantly enhance treatment efficacy and overall quality of care. These models often rely on electronic health records (EHRs) as fundamental data sources. The effectiveness of these models is contingent on the quality of the EHRs, which are typically presented as unstructured text. Unfortunately, these records frequently contain spelling errors, diminishing the quality of intelligent systems that rely on them. In this research, we propose a method and a tool for correcting spelling errors in Russian medical texts. Our approach combines the Symmetric Delete algorithm with a fine-tuned BERT model to efficiently correct spelling errors, thereby enhancing the quality of the original medical texts at a minimal cost. In addition, we introduce several fine-tuned BERT models for Russian anamneses. Through rigorous evaluation and comparison with existing spelling error correction tools for the Russian language, we demonstrate that our approach and tool surpass existing open-source alternatives by 7% in correcting spelling errors in sample Russian medical texts and are significantly superior in automatically correcting real-world anamneses. However, the proposed approach remains far inferior to proprietary services such as Yandex Speller and GPT-4. The proposed tool and its source code are available in the GitHub 1 and pip 2 repositories. This paper is an extended version of the work presented at ICCS 2023 (Pogrebnoi et al. 2023).
Project: MedSpellChecker — Fast and effective spellchecker for Russian medical texts
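To make the combination described above concrete, the following is a minimal sketch of the general idea (Symmetric Delete candidate generation followed by masked-language-model ranking with a Russian BERT), not the authors' implementation; the symspellpy library, the DeepPavlov/rubert-base-cased checkpoint, and the frequency-dictionary file name are assumptions used purely for illustration.

```python
# Illustrative sketch only: generate candidates with a Symmetric Delete index
# and pick the one that best fits the sentence context under a masked LM.
import torch
from symspellpy import SymSpell, Verbosity
from transformers import AutoTokenizer, AutoModelForMaskedLM

sym_spell = SymSpell(max_dictionary_edit_distance=2)
# Assumed in-domain frequency dictionary: "<word> <count>" per line.
sym_spell.load_dictionary("ru_medical_frequencies.txt", term_index=0, count_index=1)

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("DeepPavlov/rubert-base-cased")
model.eval()

def correct_word(words: list[str], i: int) -> str:
    """Return the spelling candidate for words[i] that the masked LM prefers."""
    suggestions = sym_spell.lookup(words[i], Verbosity.CLOSEST, max_edit_distance=2)
    if not suggestions:
        return words[i]

    masked = words.copy()
    masked[i] = tokenizer.mask_token
    inputs = tokenizer(" ".join(masked), return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos].squeeze(0)
    log_probs = torch.log_softmax(logits, dim=-1)

    def score(candidate: str) -> float:
        # Simplification for the sketch: rank candidates by the log-probability
        # of their first subword at the masked position.
        first_id = tokenizer(candidate, add_special_tokens=False)["input_ids"][0]
        return log_probs[first_id].item()

    return max((s.term for s in suggestions), key=score)
```

In this sketch, the Symmetric Delete index keeps candidate lookup cheap, while the BERT masked-LM score supplies the contextual signal needed to choose among near-distance candidates; the released MedSpellChecker tool may differ in dictionary construction, candidate filtering, and scoring details.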