Since OpenAI publicly released ChatGPT, a generative artificial intelligence (AI) chatbot that uses a large language model to produce human-like text, the tool has taken just about every communication- and translation-heavy industry by storm. But where does ChatGPT fit in the highly regulated field of human health, whether in a healthcare or a life science setting? While ChatGPT was not explicitly trained for healthcare, its AI may help improve patient communication in the hospital, at home, and even in clinical research – particularly multilingual patient communication through ChatGPT translations.
ChatGPT Will See You Now: AI-Driven Medical Advice
Perhaps the most straightforward application of ChatGPT in healthcare is medical advice informed partially, or fully, by ChatGPT results. ChatGPT is built on a large language model with billions of parameters, trained on text that includes medical journals and books, giving it the ability to respond to health questions that patients might otherwise ask their doctor or medical practitioner. In fact, a recent pre-print showed that ChatGPT performed at or near the passing threshold for the USMLE, the United States Medical Licensing Exam. In another experiment, published as a letter to JAMA, ChatGPT returned appropriate responses to 21 of 25 questions about cardiovascular disease (CVD), the leading cause of death in the U.S. and many other nations.
Whether ChatGPT functionality is embedded within electronic health records (EHR) to advise patients on when to seek in-person medical care, or used as a decision-support tool for physicians, patients could benefit from faster response times. ChatGPT translations could be particularly useful for Limited English Proficiency (LEP) patients, who may otherwise have difficulty accessing medical information in their chosen language.
ChatGPT Could Improve Access to Multilingual Health Tech
ChatGPT is already being tested as a way to improve access to healthcare in nontraditional spaces, such as through mobile health apps. Recent controversy over a mental health messaging app that used GPT-3, the language model behind ChatGPT, to draft responses that were then edited by volunteers has made some clinicians wary of these applications. Yet patient communication and even translations provided by ChatGPT, paired with human quality control measures, could fill an important gap in mental health and other services. From mobile health apps to customer service modules for pharmaceuticals, medical devices, and other medical products, patients may one day benefit from ChatGPT communication, particularly across languages.
Supporting Physicians and Clinicians Towards Efficiency
Interestingly, ChatGPT may be best able to support healthcare and research professionals by improving efficiency – freeing more time to communicate directly with patients. Researchers have proposed that ChatGPT could lessen the administrative burden on physicians by summarizing medical records and visit notes, and refocus researchers on their patients by shortening the time spent on medical writing and research proposals. In early tests of ChatGPT, blinded human reviewers could pick out abstracts written by ChatGPT only 68% of the time, demonstrating the high quality of its responses even in technical fields like medicine. Assisted writing and ChatGPT translations may also benefit scientists for whom English is a second language, helping clinicians communicate and collaborate to improve patient care globally.
ChatGPT Still Has Room to Grow Before Transforming Healthcare
While ChatGPT is an exciting new advancement in generative AI, it is not likely to change patient communication in healthcare overnight. ChatGPT’s “black box” architecture means that its answers often lack source material and can produce harmful misinformation. For example, when asking ChatGPT standard medical questions, as a patient might, some researchers found that the algorithm returned incorrect information citing fake scientific articles. Further, ChatGPT was trained on data only through September 2021, so it cannot yet be updated with current medical best practices and crucial advances in clinical research. As medical misinformation can cost lives, more quality control measures are likely necessary before unleashing ChatGPT – or ChatGPT-assisted translations – directly on patients.
Additionally, generative AI like ChatGPT has the same drawback as more traditional predictive AI – it is only as good as the data it learns from. Biases in the healthcare data used to train algorithms are perpetuated by those algorithms, as seen in the racial bias found in existing healthcare algorithms. Using ChatGPT translations, or any AI algorithm, without addressing the racist, sexist, ageist, fatphobic, and other prejudices inherent to the online data used to train the model does not promote equitable access or effective patient communication.
CSOFT Health Sciences is Committed to the Highest Standards for Technology
CSOFT Health Sciences, leaders in medical translation, uses the latest translation and content management technologies extending to every stage of the medical translation process, with particular emphasis on translation memory and terminology management. We are certified in ISO 17100:2015, ISO 9001:2015, and ISO 13485:2016, and our operations leverage best practices of ISO 27001 to ensure our customized solutions meet both global regulatory requirements and patient communication needs. Visit lifesciences.csoftintl.com to learn more about our certified translations.