Taking a ‘natural’ approach could improve the value of electronic medical records

Electronic medical records hold great promise to facilitate efficient healthcare. No longer will nurses squint to decipher doctors’ chicken scratch on paper forms, and patients can have their medical records transferred between healthcare providers with ease. But interpreting all the data in the system can present a challenge.

A simple keyword search in a database pulls up far too many results, many of them irrelevant. Search the medical records for ‘pneumonia’, for example, and you retrieve the cases of pneumonia you were looking for, along with every note that merely mentions the word in passing.

“In an ideal world, all clinical data should be structured and coded, mapped with a standard terminology,” says Stéphane Meystre, a biomedical informaticist at the University of Utah Medical School who was not involved with the research. “But clinicians like to tell things their way, so we need a way to extract data from narrative text.”

One solution to help computers decode the context of all the medical terms in a system might be natural language processing, an artificial intelligence tool that goes beyond a basic keyword search, according to a study out today in the Journal of the American Medical Association. This analytical tool incorporates sentence structure “to take into account the context of what you’re trying to say,” says Harvey Murff, associate professor of medicine at Vanderbilt University and lead author.
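To see why sentence context matters, here is a minimal sketch contrasting a bare keyword search with a context-aware search that skips negated mentions. The sample notes and the negation-cue list are hypothetical, and this simple rule-based filter stands in for the far more sophisticated pipeline the study actually used; it only illustrates the underlying idea.

```python
# Toy comparison: keyword matching vs. context-aware matching.
# The notes and negation cues below are invented for illustration.

NEGATION_CUES = ("no evidence of", "denies", "ruled out", "negative for")

notes = [
    "Chest X-ray consistent with pneumonia; started antibiotics.",
    "No evidence of pneumonia on imaging.",
    "Patient denies fever; pneumonia ruled out.",
    "Family history notable for pneumonia in childhood.",
]

def keyword_search(notes, term):
    """Return every note that mentions the term at all."""
    return [n for n in notes if term in n.lower()]

def context_search(notes, term):
    """Return notes mentioning the term, skipping negated mentions."""
    hits = []
    for n in notes:
        text = n.lower()
        if term in text and not any(cue in text for cue in NEGATION_CUES):
            hits.append(n)
    return hits

print(len(keyword_search(notes, "pneumonia")))  # 4 — every mention
print(len(context_search(notes, "pneumonia")))  # 2 — negated mentions dropped
```

Even this crude filter shows both the gain and the limits of the approach: the negated mentions are correctly discarded, but the family-history note still slips through, because ruling it out requires understanding the sentence, not just scanning for cue phrases.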

Murff and his colleagues used natural language processing to search the medical records of nearly 3,000 patients at six Veterans Health Administration medical centers for adverse events following surgery, including renal failure, sepsis, heart attack and pneumonia. They compared their results with patient reviews done by reading straight from the paper charts, which Murff considers the gold standard. Pooling the data across all complications, natural language processing identified 77% of the adverse events found in the paper-based review. By comparison, a traditional keyword search identified only 42% of post-surgical complications.

The unfortunate reality is that the system still cannot decipher mis-entered information. Clinicians sometimes rush the process of entering notes into the computer, leading to typos and misspellings, or use non-standard abbreviations. Additionally, “the meanings [of terms] changes from specialty to specialty, hospital to hospital, even in the same documents,” Meystre says.

Murff says that even though his language analysis tool can help sort through the information in a system, it is not perfect. “It’s unlikely that you’d have a tool like this that would ever be 100% accurate,” says Murff. “After all, it’s people; we take shortcuts.”

Natural language processing is one tool, but it’s likely not the final word.

Image: flickr user southerntabitha via Creative Commons
