On 31 October 2024, 13:00h - 14:00h CET, LSTS visiting scholar Stefano Tramacere (PhD Candidate at Sant’Anna School of Advanced Studies of Pisa) will present his work-in-progress, entitled ‘Enhancing transparency and accountability: operational rules for the design of high-risk AI systems’, followed by discussion.
This event will be in a hybrid format (both on-campus and online). Interested participants wishing to take part can register by sending an email to Pablo.Rodrigo.Trigo.Kramcsak@vub.be.
Abstract
AI systems can operate with varying levels of autonomy and adaptability once deployed, influencing both physical and virtual environments. Their use raises significant ethical, social, and legal concerns regarding the protection of fundamental rights, particularly in high-risk domains such as healthcare. Although AI systems can generate accurate predictions, they often lack transparency in their internal decision-making processes and final outputs, as well as reliability in real-world applications. This doctoral research examines, from an interdisciplinary perspective, the relationship between the legal concepts of transparency and accountability and explainability techniques for the development and deployment of AI systems in high-risk domains. The research applies a legal methodology to ascertain whether AI explainability techniques can enhance comprehension of the research design inherent in AI models, thereby facilitating the conveyance of more information to deployers and final users. This enables deployers to interpret and justify decisions more effectively, allowing for better system oversight, while providing persons affected by decisions generated by high-risk AI systems with the ability to contest them in a judicial forum.