Ethical Risks of Artificial Intelligence and Prospects for Joint Decision-making in Medicine
DOI:
https://doi.org/10.31857/S0236200724030065

Keywords:
artificial intelligence, joint decision-making, medicine, autonomy, confidentiality, informed consent

Abstract
The article examines the ethical problems of joint decision-making that arise from the active involvement of artificial intelligence systems in medical practice. Particular attention is paid to the influence of artificial intelligence systems on the principle of respect for patient autonomy, and an ethical assessment is given of the main criteria of a patient's autonomous action: voluntariness, awareness, and competence. It is argued that the ethical values declared for the implementation of artificial intelligence in medicine cannot, at the current stage of its development, fully ensure the standard of joint informed decision-making. Alongside the extensive capacity of artificial intelligence systems to increase patients' competence and responsibility for their own health, ethically ambiguous issues remain concerning the awareness and voluntariness of patient choice. The technological features of artificial intelligence create obstacles to building trust between doctor and patient: they complicate the process of informing the patient and hinder the patient's voluntary choice of a preferred treatment algorithm, which may impede compliance with the principle of respect for patient autonomy. Studying the trigger points in the mechanism of ethical governance of artificial intelligence in the healthcare system is a promising task for the creation of trustworthy medical artificial intelligence.