Shaderkin I.A. – PhD, Head of the Laboratory of Electronic Health, Institute of Digital Medicine, Sechenov University; Moscow, Russia; https://orcid.org/0000-0001-8669-2674
Introduction. In recent years, a large number of intelligent systems for supporting medical decision-making, commonly referred to as "artificial intelligence in medicine," have begun to appear.
Material and methods. The author also works on medical decision support and, being a practicing physician, has encountered in the course of his work a number of important issues that he considers necessary to share with the professional community.
Results. In some cases, software demonstrates its declared characteristics (sensitivity, specificity) only in the "reliable hands" of its developers and on the data on which it was built. When performance is tested in real clinical situations, the claimed characteristics are often not achieved, so the clinical community expected to use such AI-based solutions does not always form a favorable opinion of them. The author considers several types of errors that can be fatal in clinical decision-making: distortion of primary medical knowledge, absent or inaccurate knowledge of the subject area, and social biases.
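The sensitivity and specificity mentioned above are simple ratios derived from a binary confusion matrix. A minimal illustrative sketch follows; the function and the numbers in it are hypothetical and are not taken from the article, but they show how a model can look strong on the developers' own data while degrading on an external clinical dataset.

```python
# Illustrative sketch (hypothetical values, not from the article):
# sensitivity and specificity computed from a binary confusion matrix.

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical results on the developers' own data ...
print(sensitivity_specificity(tp=95, fn=5, tn=90, fp=10))   # (0.95, 0.90)
# ... and noticeably weaker results on an external clinical dataset.
print(sensitivity_specificity(tp=70, fn=30, tn=75, fp=25))  # (0.70, 0.75)
```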
Conclusions. When developing AI-based solutions, it seems important for both developers and users to keep the above points in mind.
Conflict of interest: The author declares no conflict of interest.