
Brussels Privacy Hub Doctoral Seminar with Nikolaos Ioannidis on 'Impact assessments for artificial intelligence - Towards a tool for accommodating diverse fundamental rights'

Date: 4 May 2021
Location: Teams

The Brussels Privacy Hub (BPH) is organising a Doctoral Seminar series to give Ph.D. candidates working on privacy and data protection topics at the Law, Science, Technology and Society (LSTS) research group the opportunity to present and discuss their work in progress. The aim of the series is to offer Ph.D. students at all research stages a training ground to refine and practise debating their scientific work, and to receive qualified feedback and questions from their peers and from privacy and data protection experts. To this end, each seminar will include a short presentation by the Ph.D. candidate, followed by an open discussion with the audience. Seminars are also open to external participants.

On 4 May 2021, Nikolaos Ioannidis will be discussing his PhD project on 'Impact assessment for artificial intelligence: Towards a tool for accommodating diverse fundamental rights'.

The link to the event will be sent out in due time via the LSTS mailing list. Interested participants who are not on that mailing list can register by sending an email to laura.drechsler@vub.be.

Abstract: Algorithms are increasingly being adopted for decision-making, at the expense of human agency. This is already visible in online advertising, social media, and welfare distribution, to name a few examples. Such algorithms rely on data processing, profiling, and inference-drawing, supported by artificial intelligence (AI) and machine learning. While complex algorithms facilitate decision-making, they also bring to the fore a number of challenges for individuals and society at large: they can, for example, introduce stereotypes and perpetuate societal biases, resulting in risks to the rights and freedoms of natural persons. Data subjects effectively end up being data objects. To remedy this, the research will investigate whether, and to what extent, personal data protection law, and more concretely the data protection impact assessment (DPIA), could be an adequate tool to protect fundamental rights in the context of artificial intelligence, or whether a more adapted approach is warranted. This is examined taking into consideration the proposal for a Regulation on a European approach for artificial intelligence, which introduces the concept of conformity assessment for high-risk AI systems (under Article 43 of the Proposal).