Anastasiya Kiseleva will defend her PhD thesis in Law and Computer Science, titled 'Transparent Development and Deployment of Artificial Intelligence in Healthcare: A Multilayered Accountability Framework Integrating Law and Computer Science to Address Technological and Human Opacity', on Tuesday 21 January at 14:30. The defence will take place in the Promotion Room at the VUB's Etterbeek campus or online via MS Teams.
The invitation, abstract and registration information can be found here.
Abstract
A lack of transparency is one of the most pressing and fundamental issues of AI. Ever since the first promising results of using AI, the opacity of how AI reaches its decisions has been a central obstacle to its successful application. In healthcare, this issue is especially acute because people's lives and health are at stake. Clinical decision-making is intended to ensure the safety and efficacy of patient treatment to the greatest extent possible. Algorithmic opacity, however, creates an inherent technical challenge in explaining AI, a challenge faced primarily by the creators of the technology but one whose consequences ultimately fall on healthcare providers and patients.
This interdisciplinary research investigates the legal landscape and the technical constraints surrounding the development and deployment of AI in healthcare, with a view to achieving sufficient transparency for the main groups of stakeholders involved: patients, healthcare providers, and developers of the technology. The research describes and classifies AI transparency in law and in computer science, integrates these insights into an interdisciplinary taxonomy, and translates that taxonomy into a model for designing transparency as a multilayered system of accountabilities among the stakeholders involved. This model guides the interpretation of the relevant legal frameworks at the corresponding layers of the transparency system: the requirement of informed medical consent (external transparency), and the Medical Devices Framework and the AI Act (transparency at the internal and insider layers).
Through the lens of this accountability methodology, the analysis of the applicable legal frameworks yields practical recommendations on how to ensure that the development and implementation of AI in healthcare are sufficiently transparent for patients, healthcare providers, and AI developers.

To empower patients, the thesis proposes three minimum requirements for informing patients about AI-assisted diagnosis and treatment: 1) communication (disclosing the fact that AI is used); 2) information provision covering the nature of the technology and its features, the purposes and benefits of AI, and the consequences and risks of implementing an AI recommendation in diagnosis or treatment; and 3) explanation (in the sense that the physician explains why he or she believes the AI recommendation to be correct and beneficial to the patient).

To empower healthcare providers, the thesis makes two recommendations. First, it proposes that the informational materials, instructions, and explanations provided with an AI medical device be evaluated by healthcare professionals. Second, it calls for clarifying the role of healthcare organisations in the AI lifecycle, distinguishing it from that of healthcare professionals.

Finally, to facilitate internal and external control of AI providers, the thesis puts forward an innovative approach that addresses the opacity of AI within a risk management framework. In contrast to a requirement-based approach, managing opacity as a risk provides more flexibility, ensures ongoing management of the risk, and helps normalise the absence of absolute algorithmic transparency.