KSL-90-69

Graphical Access to Medical Expert Systems: IV. Experiments to Determine the Role of Spoken Input

Reference: Isaacs, E.; Wulfman, C. E.; Rohn, J. A.; Lane, C. D.; & Fagan, L. M. Graphical Access to Medical Expert Systems: IV. Experiments to Determine the Role of Spoken Input. April 1993.

Abstract: Our goal is to design improved interfaces for medical expert systems. We have previously explored the use of graphical techniques to improve clinicians' acceptance of the user interface. Now that devices that accept spoken input are available, we wish to design interfaces that take advantage of this potentially more natural modality for interaction. To understand how clinicians might want to speak to a medical decision-support system, we carried out an experiment that simulated the availability of a spoken interface to the ONCOCIN medical expert system. ONCOCIN provides therapy advice for patients on complex cancer therapy protocols, based on a description of the patient's current medical status and laboratory-test values. In the experiment, oncologists presented a clinical case while observing the ONCOCIN flowsheet display. A project member listened to each presentation and filled in values on the flowsheet, and also introduced purposeful misunderstandings of the input. The results suggest that each individual developed a stereotypical grammar for communicating with the program. Our experience with the purposeful miscommunications suggests particular ways to tailor requests for repetition based on which part of an utterance was not understood.
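To make the tailoring idea concrete, the sketch below (not taken from the report) shows one way a dialogue component might phrase a request for repetition depending on whether the flowsheet item, its value, or both were missed. The function name, prompts, and the "platelet count" example are hypothetical illustrations, not ONCOCIN's actual behavior.

```python
# Illustrative sketch only: tailor the repetition request to whichever
# part of a spoken flowsheet entry was not understood. Names, prompts,
# and example data are hypothetical, not drawn from ONCOCIN.

from typing import Optional


def clarification_request(item: Optional[str], value: Optional[str]) -> Optional[str]:
    """Return a targeted prompt for the part of the utterance that was missed."""
    if item is None and value is None:
        return "I didn't catch that. Could you repeat the item and its value?"
    if item is None:
        return f"Which flowsheet item should be set to {value}?"
    if value is None:
        return f"What value should I record for {item}?"
    return None  # Both parts were understood; no clarification is needed.


# Example: the item was recognized but its value was not.
print(clarification_request(item="platelet count", value=None))
# -> "What value should I record for platelet count?"
```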

