The art and science of chart review.

Publication types: Comparative Study; Journal Article; Research Support, Non-U.S. Gov't; Research Support, U.S. Gov't, P.H.S.

BACKGROUND: Explicit chart review was an integral part of an ongoing national cooperative project, "Using Achievable Benchmarks of Care to Improve Quality of Care for Outpatients with Depression," conducted by a large managed care organization (MCO) and an academic medical center. Many investigators overlook the complexities involved in obtaining high-quality data. Given a scarcity of advice in the quality improvement (QI) literature on how to conduct chart review, the process of chart review was examined and specific techniques for improving data quality were proposed.

METHODS: The abstraction tool was developed and tested in a prepilot phase; perhaps the greatest problem detected was abstractor assumption and interpretation. The need for a clear distinction between symptoms of depression or anxiety and physician diagnosis of major depression or anxiety disorder also became apparent. In designing the variables for the chart review module, four key aspects were considered: classification, format, definition, and presentation. For example, issues in format include use of free-text versus numeric variables, categoric variables, and medication variables (which can be especially challenging for abstraction projects). Quantitative measures of reliability and validity were used to improve and maintain the quality of chart review data. Measuring reliability and validity offers assistance with development of the chart review tool, continuous maintenance of data quality throughout the production phase of chart review, and final documentation of data quality. For projects that require ongoing abstraction of large numbers of clinical records, data quality may be monitored with control charts and the principles of statistical process control.
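
To make the reliability measures concrete, here is a minimal sketch of how interrater agreement might be quantified for a single abstracted variable. The abstract reports percent agreement; Cohen's kappa is shown alongside it only as a common chance-corrected alternative, not as the authors' method. All variable names and sample codings below are invented for illustration.

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of charts on which two abstractors recorded the same value."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for one categorical variable."""
    n = len(a)
    p_obs = percent_agreement(a, b)
    freq_a, freq_b = Counter(a), Counter(b)
    # Expected agreement if both abstractors coded independently at their observed rates.
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(a) | set(b))
    return (p_obs - p_exp) / (1 - p_exp)

# Invented example: "physician documented a diagnosis of major depression?"
# coded independently by two abstractors on the same 10 charts.
abstractor_1 = ["yes", "no", "yes", "yes", "no", "no", "yes", "no", "yes", "yes"]
abstractor_2 = ["yes", "no", "yes", "no",  "no", "no", "yes", "no", "yes", "yes"]

print(f"percent agreement: {percent_agreement(abstractor_1, abstractor_2):.0%}")  # 90%
print(f"Cohen's kappa:     {cohens_kappa(abstractor_1, abstractor_2):.2f}")       # 0.80
```

Percent agreement is simple to track chart by chart during production abstraction, while kappa guards against agreement that arises only because one response category dominates.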

RESULTS: The chart review module, which contained 140 variables, was built using MedQuest software, a suite of tools designed for customized data collection. Overall interrater reliability increased from 80% in the prepilot phase to greater than 96% in the final phase, which included three abstractors and 465 unique charts. The mean abstraction time per chart was calculated for each abstractor; the largest per-abstractor mean was 13.7 ± 13 minutes.
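
The methods describe monitoring ongoing abstraction with control charts and statistical process control, but the abstract does not say which chart type was used. The sketch below assumes a p-chart on per-batch agreement proportions from periodic re-abstraction samples, in the spirit of the 96% final agreement figure; all batch sizes and counts are invented.

```python
from math import sqrt

def p_chart_limits(agree_counts, batch_sizes):
    """Center line and 3-sigma limits for a p-chart of per-batch agreement rates."""
    p_bar = sum(agree_counts) / sum(batch_sizes)  # overall agreement proportion
    limits = []
    for n in batch_sizes:
        sigma = sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma), min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

# Invented re-abstraction samples: number of charts re-reviewed each period and
# how many of them matched the original abstraction.
batch_sizes  = [25, 25, 30, 25]
agree_counts = [24, 23, 29, 17]

p_bar, limits = p_chart_limits(agree_counts, batch_sizes)
print(f"center line: {p_bar:.0%}")
for agree, n, (lo, hi) in zip(agree_counts, batch_sizes, limits):
    p = agree / n
    status = "in control" if lo <= p <= hi else "OUT OF CONTROL"
    print(f"agreement {p:.0%} (n={n}), limits [{lo:.0%}, {hi:.0%}] -> {status}")
```

A point falling below the lower control limit would typically prompt retraining or clarification of variable definitions before further abstraction proceeds.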

CONCLUSIONS: In general, chart review is more difficult than it appears on the surface. It is also project specific, making a "cookbook" approach difficult. Many factors, such as imprecisely worded research questions, vague specification of variables, poorly designed abstraction tools, inappropriate interpretation by abstractors, and poor or missing recording of data in the chart, may compromise data quality.
