Simulating an integrated critiquing system

M M Kuilboer et al. J Am Med Inform Assoc. 1998 Mar-Apr;5(2):194-202.
doi: 10.1136/jamia.1998.0050194.

Abstract

Objective: To investigate factors that determine the feasibility and effectiveness of a critiquing system for asthma/COPD that will be integrated with a general practitioner's (GP's) information system.

Design: A simulation study. Four reviewers, playing the role of the computer, generated critiquing comments and requests for additional information on six electronic medical records of patients with asthma/COPD. Three GPs who had treated the patients, playing the role of users, assessed the comments and provided missing information when requested; when requested information was unavailable, the GPs were asked why. The reviewers then reevaluated their comments after receiving the requested information.

Measurements: Descriptions of the number and nature of critiquing comments and requests for missing information. Assessment by the GPs of the critiquing comments in terms of agreement with each comment and judgment of its relevance, both on a five-point scale. Analysis of the reasons requested missing information was or was not available. Assessment of the impact of missing information on the generation of critiquing comments.

Results: Four reviewers provided 74 critiquing comments on 87 visits in six medical records. Most concerned prescriptions (n = 28) and the GPs' workplans (n = 27). The GPs valued comments about diagnostics the most. The correlation between the GPs' agreement and relevance scores was 0.65. However, the GPs' agreement with prescription comments (complete disagreement, 31.3%; disagreement, 20.0%; neutral, 13.8%; agreement, 17.5%; complete agreement, 17.5%) differed from their judgment of these comments' relevance (completely irrelevant, 9.0%; irrelevant, 24.4%; neutral, 24.4%; relevant, 32.1%; completely relevant, 10.3%). The GPs were able to answer 64% of the 90 requests for missing information. Reasons that available information had not been recorded were: the GPs had not recorded the information explicitly; they had assumed it to be common knowledge; or it was available elsewhere in the record. Reasons that information was unavailable were: the decision had been made by someone else, or the GP had not recorded the information. The reviewers left 74% of their comments unchanged after receiving the requested missing information.
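As an aside for readers who want to compute a correlation of this kind on their own ratings, here is a minimal Python sketch. The scores are hypothetical values on the same five-point (-2 to +2) scale; the abstract reports only the summary correlation of 0.65, not the raw paired ratings.

    import numpy as np

    # Hypothetical paired ratings on the study's five-point scale (-2 .. +2):
    # each pair is one GP's (agreement, relevance) score for one comment.
    # These values are illustrative only, not the study's data.
    agreement = np.array([-2, -2, -1, 0, 0, 1, 1, 2, 2, -1])
    relevance = np.array([-1, 0, 0, 0, 1, 1, 2, 2, 1, -1])

    # Pearson correlation between the paired scores
    # (the study reports r = 0.65 over its 424 paired ratings).
    r = np.corrcoef(agreement, relevance)[0, 1]
    print(f"Pearson r = {r:.2f}")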

Conclusion: Human reviewers can generate comments based on information currently available in the electronic medical records of patients with asthma/COPD. The GPs valued comments regarding the diagnostic process the most. Although they judged prescription comments relevant, they often strongly disagreed with them, a discrepancy that poses a challenge for the presentation of critiquing comments in the future critiquing system. Additional information provided by the GPs on request led to few changes in the comments. Therefore, faced as system developers with the choice between building an integrated, non-inquisitive critiquing system and an inquisitive one, the authors chose the former.


Figures

Figure 1
Four reviewers analyzed six medical records. The reviewers generated comments and requested further information when needed. The general practitioners rated these comments and provided the missing information. When information was not available, they were asked to explain why. Finally, the reviewers updated their comments, taking the additional information into account.
Figure 2
Summary of information the reviewers found missing in six electronic medical records. Three categories of missing information could be identified: Factual patient data (n = 44), any request for additional information related to a patient's medical history, physical examination, diagnosis, or additional tests; Factual therapeutic data (n = 22), requests asking the physician about his or her therapeutic strategy; Motivation (n = 24), requests asking for the physician's motivation for his or her interventions.
Figure 3
Distribution of the individual agreement scores and relevance scores (n scores = 424) of three general practitioners for comments (n comments = 74) generated by reviewers. The vertical axes show the range of scores the general practitioners could assign (-2, complete disagreement, to +2, complete agreement; and -2, completely irrelevant, to +2, completely relevant, respectively). The horizontal axes show the percentages with which each score was assigned.
Figure 4
Agreement scores of general practitioners (N = 213) for comments (n = 74) generated by reviewers. The results are shown by the four categories of comments: Diagnostics (n = 13), Workplan (n = 27), Prescription (n = 28), and Follow-up (n = 6). For each category, the distribution of the agreement scores is shown by the horizontal bars. The vertical axes show the ranges of the scores that the general practitioners could assign (-2 representing complete disagreement to +2 representing complete agreement). The horizontal axes show the frequencies with which the scores were given.
Figure 5
Relevance scores of general practitioners (N = 211) for comments (n = 74) made by reviewers. The results are shown by the four categories of comments: Diagnostics (n = 13), Workplan (n = 27), Prescription (n = 28), and Follow-up (n = 6). For each category, the distribution of the relevance scores is shown by the horizontal bars. The vertical axes show the ranges of the scores that the general practitioners could assign (-2 representing completely irrelevant to +2 representing completely relevant). The horizontal axes show the frequencies with which the scores were given.


