The area of conflict between AI and ethics




Artificial intelligence (AI) is finding its way into medicine and medical technology design. The new technology has great potential, but it also gives rise to some justified concerns. To a certain extent, AI therefore sits in a field of tension between progress and ethics, and the two must be weighed against each other as part of a broader social discourse. In the following, we give you an overview of the opportunities and risks of this new technology and explain the tension between AI and ethics in more detail.


Artificial intelligence and AI ethics - A definition


Artificial intelligence is a collection of different technologies that enable machines to learn, understand and act on the basis of human-like intelligence. In medicine, it can be used, among other things, to analyze large studies more thoroughly, to support the treating physicians in diagnostics, and to evaluate data or images independently. Based on these findings, the AI then makes suggestions for treatment. AI ethics examines the use of artificial intelligence on humans from a moral and anthropological point of view.


AI ethics - Opportunities of new technology


In order to discuss the tension between AI and ethics in more detail, it makes sense to weigh the established medical values and principles against each other.

A central principle in the discussion surrounding AI and ethics is the potential benefit to patients. This benefit is ultimately the source of legitimacy for every new medical device. The great promise of AI lies in the hope that, based on huge amounts of data, an AI system will be able to reach the right diagnosis and select the best possible treatment much sooner than the treating doctor. Diseases could then be recognized and diagnosed more precisely, which would greatly increase the quality of healthcare. One example of this is the early detection of breast cancer. The use of assistance robots is another interesting application of AI: they are intended to help people in need of care look after themselves independently to a much greater extent. The use of robots can also help to better preserve the patient's privacy. When developing such technology, AI ethics should also take ergonomics, usability and user experience into account, because these technologies must be both practical and user-friendly in order to really help people.



AI ethics - Issues with new technologies


The crucial question about AI and ethics is to what extent an actual benefit is generated rather than merely promised. The current evidence is often ambiguous. It may well be that AI can detect breast cancer better overall. At the same time, however, it may also identify clinically less relevant cancers that have no impact on health, and thus encourage overtreatment.

While AI undoubtedly holds great potential, there are certain risks and downsides that need to be carefully considered. A central discussion in AI ethics is the extent to which reduced contact with the patient leads to a loss of solidarity in the long term. Excessive trust in AI can also become problematic. Even a well-trained artificial intelligence could, for example, fail to recognize variations of a disease because of insufficient data and thus deliver incorrect results. This problem is aggravated by the fact that, as in many areas of AI ethics, only little data on such misdiagnoses has been collected, so reliable statements are hard to make. It is therefore not entirely clear how often such deviations occur and how they can be remedied.

In addition, there is always the danger of external interference in the digital sector, for example in the form of hacking. In the discussion of AI ethics, data protection in particular plays a major role. After all, who owns personal data and how it is collected and processed is primarily a question of privacy. Because of the large amounts of data involved, private companies are usually needed to manage it. This, however, creates the unwelcome risk of commercialization when private data is passed on and sold by companies. Proponents of AI emphasize that the expected benefit from these amounts of data outweighs the possible harm. In AI ethics, too, the question thus arises whether and to what extent the end justifies the means.

With regard to patient self-determination, AI ethics faces the problem that an AI can hardly take individual medical preferences into account. Patients also still have no access to the data or the algorithm, which makes the entire system largely opaque. It must be noted, however, that the decisions made by doctors often cannot be fully understood from the patient's point of view either, which is why AI, at least for now, is not the worse alternative when it comes to a decision on an equal footing.


There is a lot left to solve


AI ethics is an exciting and at the same time highly polarizing field. On the one hand, it would be irresponsible not to use new technologies such as AI if they could significantly improve the healthcare system. On the other hand, there is the fundamental question of the extent to which AI really does so and whether possible side effects (patient alienation, data commercialization, misdiagnosis, etc.) might predominate. AI ethics therefore has an important role to play for medicine in the 21st century. Questions that go beyond medical ethics must not be left unconsidered: How do we imagine a healthcare system in which AI plays a crucial role? What does this do to our understanding of freedom and self-determination? And how do we view AI in terms of ethics and morality? AI ethics must answer these questions in order to give the new technology social legitimacy.

If you have any further questions about AI ethics, please feel free to contact us at any time. We look forward to your inquiry.
