EHR Data Not Ready for Prime Time, Studies Show

by Ken Terry, iHealthBeat Contributing Reporter

Two new studies cast doubt on whether the data in electronic health records are reliable enough to be used as the basis for publicly reported quality measurements and performance-based payments. A third study shows that EHR data on cervical cancer screening may be dependable, but only under certain circumstances.

Taken together, the studies -- all published in the Journal of the American Medical Informatics Association -- provide a snapshot of how well U.S. physicians are documenting preventive services and other clinical data in EHRs. This is important because public and private payers are beginning to require EHR-derived data to support programs aimed at lowering costs and improving the quality of care.

For example, Stage 1 of the meaningful use incentive program requires physicians to provide specific quality data through attestation. As early as 2013, they will have to submit the data electronically to CMS. Physicians already have the option of sending EHR data to Medicare's Physician Quality Reporting System.

Starting in 2015, CMS will use PQRS data to calculate a portion of physicians' Medicare payments under its value-based purchasing program. The ability of health care providers who join accountable care organizations to share in Medicare savings also will depend partly on electronically submitted quality data. And it's likely that private insurers will follow suit in their own ACO programs.

A lot is riding on the reliability of EHR data. But, in regard to the CMS programs, "we're not ready" to use this data, said Eric Schneider, distinguished chair in health care quality at the RAND Corporation. Moreover, he noted, "Until we get the EHR fully operational, we're pretty limited in the types of quality measures we can produce."

Structured Data Are Incomplete

In a study of New York City primary care practices that used the same publicly subsidized EHR, researchers assessed the accuracy of the structured data used for quality measurement. Structured data are computable information entered in discrete fields of the EHR. Researchers manually reviewed electronic charts to identify diagnoses related to preventive care measures anywhere within the record, including free text. According to the researchers, "the average practice missed half of the eligible patients for three of the 11 quality measures."

Because many preventive services were not documented as discrete data, the study also found that practices underreported the services their doctors provided on six of the 11 measures.
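
The gap between what a structured-field query can see and what a manual chart reviewer can find is easy to illustrate. Below is a minimal Python sketch of this failure mode; the field names and the ICD-9 diabetes example are assumptions for illustration, not details from the study.

```python
# Hypothetical sketch of why structured-field queries undercount
# compared with manual chart review. Field names and codes are
# illustrative assumptions, not the systems used in the study.

patients = [
    {"id": 1, "problem_list": ["250.00"], "note_text": "DM2, on metformin"},
    {"id": 2, "problem_list": [], "note_text": "longstanding type 2 diabetes"},
]

DIABETES_CODES = {"250.00"}  # ICD-9 code for type 2 diabetes (illustrative)

# An automated quality report can only see the discrete problem list...
eligible_structured = [p for p in patients
                       if DIABETES_CODES & set(p["problem_list"])]

# ...while a manual reviewer also reads the free-text note.
eligible_manual = [p for p in patients
                   if DIABETES_CODES & set(p["problem_list"])
                   or "diabetes" in p["note_text"].lower()]

print(len(eligible_structured))  # 1 -- patient 2 is invisible to the query
print(len(eligible_manual))      # 2 -- the chart reviewer finds both
```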

Another study -- conducted in a primary care network affiliated with Brigham & Women's Hospital in Boston -- focused on a clinical decision support tool designed to improve the completeness of EHR diagnosis lists, also known as "problem" lists. The program combed through lab, medication and billing data to find hints of missing diagnoses. Physicians who received prompts about these diagnoses through the EHR system added nearly three times as many old and new diagnoses to problem lists as doctors in the control group did.
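
The article does not describe the tool's actual rules, but the general shape of the inference is straightforward to sketch. The Python below assumes an invented rule table linking medications and lab values to candidate diagnoses; it shows the idea, not the Brigham system's real logic.

```python
# Minimal sketch of the inference idea behind such a tool: scan
# medication and lab data for hints of a diagnosis that is absent
# from the problem list. The rule table and thresholds are invented
# for illustration.

HINTS = {
    "diabetes": {
        "meds": {"metformin", "insulin"},
        "lab_check": lambda labs: labs.get("hba1c", 0.0) >= 6.5,
    },
    "hypertension": {
        "meds": {"lisinopril", "amlodipine"},
        "lab_check": lambda labs: False,  # no single defining lab value
    },
}

def missing_diagnosis_prompts(problem_list, active_meds, labs):
    """Return diagnoses hinted at by other data but absent from the list."""
    prompts = []
    for diagnosis, rule in HINTS.items():
        if diagnosis in problem_list:
            continue
        if rule["meds"] & active_meds or rule["lab_check"](labs):
            prompts.append(diagnosis)
    return prompts

# A patient on metformin with an elevated HbA1c but an empty problem
# list would generate a diabetes prompt for the physician to review.
print(missing_diagnosis_prompts(set(), {"metformin"}, {"hba1c": 7.2}))
```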

The authors pointed out that in their prior research, a large portion of diagnoses had been missing from problem lists. For instance, only 51% of hypertension and 62% of diabetes diagnoses had been included. "Other institutions have found similar results," they added.

The third JAMIA study looked at whether EHR data could be used to detect overutilization of cervical cancer screening tests, known as Pap tests. Comparing manual e-chart reviews with the results of EHR queries, the researchers found that EHR data could be used to accurately measure the overuse of Pap tests among low-risk women.

Jason Matthias -- the lead author and a research fellow at Northwestern University's Feinberg School of Medicine -- said he was confident that every Pap test ordered during the study period had been documented as structured data. The EHR system had a lab interface, and "any results that returned from the pathologists were captured automatically," he said, adding, "If you didn't have results and you didn't have an order, the test hadn't been done."

Consequently, he said, the data would be adequate for a quality measure. However, he added that similar information probably would be less accurate in a practice that had recently adopted an EHR system than in the university-affiliated clinic he studied. In a practice that was new to the technology, he said, it's likely that physicians would be less aware of the importance of problem lists and other discrete data.
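
The completeness logic Matthias describes, plus a simple overuse check, can be sketched in a few lines of Python. The record fields and the three-year screening interval for low-risk women are assumptions for illustration; the study's exact criteria are not given in this article.

```python
from datetime import date

def pap_test_done(record):
    """With a lab interface, every test leaves an order or an auto-filed
    result, so the absence of both means the test was not performed."""
    return bool(record.get("pap_orders")) or bool(record.get("pap_results"))

def pap_overused(result_dates, low_risk, min_interval_years=3):
    """Flag a low-risk woman re-screened inside the guideline interval."""
    if not low_risk:
        return False
    dates = sorted(result_dates)
    return any((later - earlier).days < min_interval_years * 365
               for earlier, later in zip(dates, dates[1:]))

# Two results 14 months apart for a low-risk woman count as overuse.
print(pap_overused([date(2010, 1, 5), date(2011, 3, 10)], low_risk=True))
```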

Paper Habits Die Hard

The New York City physicians had been using their EHR system for at least a year or two at the time they were studied. The city Department of Health and Mental Hygiene, which performed the study, had provided these doctors with no-cost EHR software in return for their promise to provide quality data. The city chose the EHR system and worked with its vendor to develop critical features for quality improvement.

The researchers found that most of the required data on vital signs, vaccinations and medications were in the correct fields. That was because it was obvious where the data were supposed to be entered, explained Amanda Parsons, lead author and deputy commissioner for health care access and improvement at the city's health department.

In contrast, data on smoking cessation counseling were found in several different places because it wasn't clear where they should be documented, Parsons said. While this can be partly attributed to EHR design, she added that physicians weren't used to recording this information in a particular place in their paper charts.

Similarly, some doctors declined to enter more than four diagnoses for a patient because that was the limit on the paper billing forms they were used to. "We often see transfers of [paper] workflows that are not appropriate to the EHR setting," she said.

Better Problem Lists

In many cases, diagnoses such as obesity and hyperlipidemia were missing from problem lists. But overall, the New York City physicians maintained more complete lists than the Brigham & Women's doctors did.

Parsons attributed that mainly to the "chronic flags" in the EHR system that the New York City doctors used. When a physician flags a patient's chronic condition, the diagnosis goes into the problem list and is added to the visit note automatically on every subsequent visit. The city's health department preset the most common conditions in the EHR system, increasing the completeness of the problem lists.
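
The chronic-flag behavior Parsons describes amounts to a simple carry-forward rule, sketched below in Python. The class and method names are assumptions for illustration, not the vendor's actual design.

```python
# Sketch of the "chronic flag" behavior: flagging a chronic condition
# files it on the problem list and carries it onto every later visit
# note automatically.

class Chart:
    def __init__(self):
        self.problem_list = set()
        self.chronic_flags = set()

    def flag_chronic(self, diagnosis):
        self.chronic_flags.add(diagnosis)
        self.problem_list.add(diagnosis)  # flagging also files the diagnosis

    def new_visit_note(self, chief_complaint):
        # Chronic conditions are pre-populated on each visit note, so the
        # structured record stays complete without re-entry by the doctor.
        return {"complaint": chief_complaint,
                "diagnoses": set(self.chronic_flags)}

chart = Chart()
chart.flag_chronic("hypertension")
print(chart.new_visit_note("follow-up"))  # hypertension carried forward
```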

However, the EHR system underreported data on preventive measures that depended on mammography or lab results, such as breast cancer screening, control of cholesterol and control of hemoglobin A1c in patients with diabetes.

The EHR system lacked interfaces with the imaging facilities that performed mammograms, Parsons noted. So mammography reports were usually faxed back to the practices, which would scan them into the system but often would fail to enter the results as discrete data. If a test had been ordered through the EHR, the system didn't recognize that the test had been performed unless the results were in the proper field. That explains why fewer than 11% of the mammograms the practices' patients received were included in quality reports.
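
In sketch form, the reason a scanned fax is invisible to quality reporting is that only a populated discrete result field counts, not an attached document. The Python below assumes invented field names for illustration.

```python
# Only a discrete result field counts toward the quality report;
# an attached scanned document does not. Field names are assumptions.

def mammogram_counted(record):
    return record.get("mammo_result") is not None  # discrete field only

records = [
    {"mammo_result": "BI-RADS 1", "scanned_docs": []},          # via interface
    {"mammo_result": None, "scanned_docs": ["mammo_fax.pdf"]},  # faxed, scanned
]

counted = sum(mammogram_counted(r) for r in records)
print(f"{counted} of {len(records)} completed mammograms visible to reporting")
```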

Similarly, despite the city's agreements with some reference labs to provide no-cost interfaces to the participating practices' EHR systems, much of the requisite lab data were not in structured form. Practices with robust lab interfaces can generate good data on related quality measures, Parsons said, but the availability of those interfaces varies greatly.

Parsons stressed that she and her colleagues are strong supporters of performance measurement and the meaningful use program. "We're just trying to add to the intelligence around the quality measure discussion, so we can be thoughtful about how we use this data and how we work with providers and patients to improve it," she said.

David Winn
It is not that EHR data are not ready for prime time, but rather that certain vendors are not ready. Many mainstream, large vendors do not fully understand medical informatics, and these same vendors still rely on claims data (like ICD-9 or ICD-10 codes) for their clinical data. That is patently dangerous. A medical vocabulary must be unambiguous, yet certain ICD-9 codes, for instance, map to very different clinical concepts. Clinical vocabularies like Medcin and SNOMED are designed to resolve this kind of ambiguity. Until vendors standardize on such vocabularies, semantic interoperability will remain impossible and meaningful use reporting will be difficult (and suspect) at best. Also, problem lists should be built automatically off of assessments, with auto-aging... That would have prevented the missing data elements described in the article above.
Michael Milne
There is a problem with EMRs gathering all this discrete data. Take problem lists: if every problem is added to the list, it will become too large to be useful. We are a long way from computers making medical decisions; a doctor still has to decide what is important and what is not. Adding everything, without physician judgment, is as bad as keeping incomplete records. Computers are not shortcuts to good medicine. Good EMR design allows doctors to be more efficient by giving them what is needed. This is why "meaningful use" is an oxymoron in describing current standards of EMR design. We are decades away from even a rudimentary Star Trek tricorder for scanning, data analysis, and recording data.
Arjen Westerink
This does not seem too surprising, as real growth in EHR adoption did not take off until recently. I think it's fair to say that, with ongoing adoption and meaningful use, the data will become increasingly dense and have the potential to be aggregated to reveal all kinds of things.
