Reviewing a patient’s medical record to assess their health, make a diagnosis, or plan treatment is one of the most time-consuming parts of a medical appointment. This frustrates many healthcare professionals, who often have to click through multiple screens to trace the timeline of a particular parameter in the test results or to read another clinician’s findings. The diverse use of medical jargon adds to the complexity.
Even doctors who are well-versed in IT systems need time to move from tab to tab, study other doctors’ or nurses’ notes, and manually compare numbers and analyze data in their heads. Finding the needed information, or arriving at a diagnosis that is already on file, eats up valuable minutes of each doctor’s visit.
Healthcare software vendors are trying to solve this problem by introducing dashboards that provide a transparent view of critical data. Such dashboards present standardized data, such as lab test results, as clear graphs, while recently prescribed medications and their interactions appear in a separate table.
Unfortunately, much of that valuable knowledge remains trapped in doctors’ free-text notes. Digitization has improved their legibility, but only their visual clarity. Every doctor has their own way of taking notes, and hasty note-taking produces numerous grammatical errors, abbreviations, and specialist jargon. Paradoxically, it is these notes that contain important nuances, and hardly anyone reads them.
Challenge for AI
Computers and artificial intelligence (AI) are very good at analyzing aggregate data, but understanding free-text notes is a challenge even for the most sophisticated algorithms. Models used in one hospital often fail in another, so a universal AI model would be the ideal solution.
That’s what researchers at the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) are working on. They aim to create language models that extract important information from free-text notes full of abbreviations, jargon, and acronyms. The researchers believe this would allow doctors to make better use of the data collected in these rarely read, hard-to-interpret notes.
A medical natural language processing system must be highly accurate and robust to the enormous diversity of health-related datasets. Current AI models reach about 86 percent accuracy in correctly reading acronyms; the MIT team has devised additional methods that raise that figure to 90 percent.
Commonly used jargon
There are many commonly used abbreviations in medical jargon: the current edition of the Dictionary of Medical Abbreviations and Acronyms contains as many as 600,000 entries. When several abbreviations occur in a row, the AI model links them together, much as the human brain processes information; hence the name of the technology, natural language processing (NLP). The result, in the form of ‘translated’ sentence structures, is then analyzed and its meaning verified, before being arranged into a clear interpretation. This phase is called post-processing.
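The abbreviation-expansion step described above can be sketched in a few lines of Python. This is a minimal illustration, not the MIT system: the dictionary entries, context hints, and example note are invented for demonstration, and real systems use far larger dictionaries and learned disambiguation.

```python
# Toy dictionary-based abbreviation expansion with crude context
# disambiguation. All entries below are illustrative examples, not
# drawn from any real medical abbreviation database.
ABBREVIATIONS = {
    "pt": ["patient"],
    "hx": ["history"],
    "mi": ["myocardial infarction", "mitral insufficiency"],
    "bp": ["blood pressure"],
}

# Words that hint at which expansion of an ambiguous acronym is meant.
CONTEXT_HINTS = {
    "myocardial infarction": {"chest", "cardiac", "troponin"},
    "mitral insufficiency": {"valve", "murmur", "regurgitation"},
}

def expand(note: str) -> str:
    """Expand known abbreviations; pick ambiguous ones by context overlap."""
    words = note.lower().split()
    out = []
    for w in words:
        candidates = ABBREVIATIONS.get(w.strip(".,"))
        if not candidates:
            out.append(w)
        elif len(candidates) == 1:
            out.append(candidates[0])
        else:
            # Choose the expansion whose hint words overlap most
            # with the rest of the note.
            best = max(candidates,
                       key=lambda c: len(CONTEXT_HINTS.get(c, set()) & set(words)))
            out.append(best)
    return " ".join(out)

print(expand("pt hx of mi, elevated troponin"))
# → "patient history of myocardial infarction elevated troponin"
```

Because “troponin” appears in the note, the ambiguous “mi” resolves to “myocardial infarction” rather than “mitral insufficiency”; production systems replace this overlap count with a trained language model.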
An example: a doctor wants to know why patient X is taking medicine Y and enters the question into the system. To answer it, the AI model can cycle through general data and return the statistically most common reason for taking the drug. Alternatively, a more complex inference pathway can be enforced, one that links general information about the substance to the free-text notes in the patient’s medical record. This second method is far more personalized, since taking drug Y may be related to other concurrent conditions.
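The two inference pathways can be contrasted in a short sketch. This is purely illustrative: the drug name, indications, and note text are made-up examples, and the string matching stands in for what would really be a learned model over the notes.

```python
# Illustrative contrast between a population-level prior (pathway 1)
# and evidence found in the patient's own notes (pathway 2).
# Drug names, indications, and note text are invented examples.
POPULATION_PRIOR = {
    # drug -> indications, ordered from most to least common overall
    "metformin": ["type 2 diabetes", "polycystic ovary syndrome"],
}

def statistical_reason(drug: str) -> str:
    """Pathway 1: return the statistically most common indication."""
    return POPULATION_PRIOR[drug][0]

def personalized_reason(drug: str, notes: str) -> str:
    """Pathway 2: prefer an indication actually mentioned in the notes."""
    for indication in POPULATION_PRIOR[drug]:
        if indication in notes.lower():
            return indication
    # No evidence in the notes: fall back to the population prior.
    return statistical_reason(drug)

notes = "Started metformin for polycystic ovary syndrome; glucose normal."
print(statistical_reason("metformin"))          # type 2 diabetes
print(personalized_reason("metformin", notes))  # polycystic ovary syndrome
```

The point of the contrast: the statistical pathway answers for the average patient, while the personalized pathway lets this patient’s record override the prior.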
Developing suitable algorithms for analyzing texts in different input formats is one possible research direction. Another is structuring notes as they are being written. Here the researchers work with NLP systems that extract data from conversations between doctor and patient; the data is then entered automatically into the electronic patient record (EPD). A similar mechanism can be applied to handwritten text. This allows the AI system to capture formulations in real time, with the doctor only needing to verify them.
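Extracting structured values from a transcribed conversation for the doctor to verify can be sketched with simple pattern matching. The patterns and transcript below are illustrative assumptions; real systems use speech recognition plus trained extraction models rather than regular expressions.

```python
import re

# Toy extraction of structured fields from a transcribed
# doctor-patient conversation, for review before entry into the
# record. Patterns and transcript are illustrative only.
PATTERNS = {
    "blood_pressure": re.compile(r"(\d{2,3})\s*over\s*(\d{2,3})"),
    "weight_kg": re.compile(r"(\d{2,3})\s*kilo"),
}

def extract_fields(transcript: str) -> dict:
    """Pull recognizable measurements out of a lowercased transcript."""
    fields = {}
    t = transcript.lower()
    m = PATTERNS["blood_pressure"].search(t)
    if m:
        fields["blood_pressure"] = f"{m.group(1)}/{m.group(2)}"
    m = PATTERNS["weight_kg"].search(t)
    if m:
        fields["weight_kg"] = int(m.group(1))
    return fields  # the doctor verifies these before they are saved

print(extract_fields("Your pressure today is 130 over 85, and you weigh 82 kilos."))
# → {'blood_pressure': '130/85', 'weight_kg': 82}
```

The key design point matches the article: the system proposes values in real time, but nothing enters the record until the doctor confirms it.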
A May 2022 survey by M&I/Partners found that hospitals still have low near-term expectations for AI-based natural language processing. The CIOs and CMIOs surveyed have the highest near-term expectations for image recognition, decision support, and pattern recognition as AI applications; natural language processing, robotics, and process mining are expected to follow only in the longer term.
Read the entire article at ICT&health International.
ICT & Health Congress 2023
On 30 January 2023, ICT&health kicks off the new health year with the annual large and influential health conference on health transformation.
Would you also like to be there? Order your entry ticket quickly.