Doctoral researcher Nikolaos Ioannidis (VUB, LSTS, CDSL) proposes a research agenda for ‘impact assessment considerations in artificial intelligence applications’, identifying commonalities between the proposed conformity assessment procedure, the data protection impact assessment process and the newly introduced fundamental rights impact assessment. He further suggests how the current proposal for an AI regulation might need to be re-assessed to prioritize fundamental rights in the equation.
Introduction
Algorithms are increasingly being adopted for decision-making, at the expense of human agency. This is already visible in online advertising, social media, and welfare distribution, among other fields. Such algorithms work through data processing, profiling, and inference-drawing, supported by the utilization of artificial intelligence (AI), bringing a number of challenges to the fore; the perpetuation of stereotypes and societal biases are some of these risks (harms). Data subjects often end up being treated as data objects. An interrelation of data protection law and artificial intelligence law has recently emerged, expressed, among others, in the following question: how could personal data protection law, and more concretely a data protection impact assessment (DPIA) process, be an adequate means to protect fundamental rights in the context of artificial intelligence? Designing a specific impact assessment architecture for artificial intelligence applications, while ensuring the safeguarding of fundamental rights, has been one of the highest priorities since the European Commission (EC) expressed its intention to regulate AI in the EU in its White Paper.
Challenges of algorithmic decision-making to data protection law
Data protection law is undeniably a rich ‘machinery’. It has been validated as fit for purpose for regulating the responsible deployment of algorithms for decision-making. The European data protection framework, as reformed, re-establishes several principles, among which is the accountability principle. It obliges data controllers to comply with the other principles in an effective manner and to demonstrate, upon request, that compliance. In addition, the GDPR introduces the risk-based approach, under which tiered obligations arise depending on the level of risk. Criteria for determining the risk include the nature of the personal data processing operations, the complexity and scale of processing, the sensitivity of the data processed, and the protection required for those data.
Challenges are not only theoretical, but also perceptible and observable in practice. The reformed framework, however, seems to regulate algorithmic activity at a high level, without always ensuring a consistent and adequate level of personal data protection (despite the pro-fundamental-rights articulation in Recitals 1-13 GDPR). Concrete challenges concern discrimination (Barocas & Selbst, 2016), explicability (Edwards & Veale, 2018; Wachter et al., 2017), or trade secret defences (Malgieri, 2016). Other, more fundamental challenges pertain to the transparency and fairness of algorithmic systems (Butterworth, 2018), their accountability (Kroll et al., 2017) or civil liability (Bertolini, 2020). The consequence of the complexity of these challenges is usually poor compliance with the law.
Data protection impact assessment and accountability
When data controllers use algorithms, they are specifically obliged to put in place the respective technical and organisational measures so as to demonstrate their compliance when requested. ‘Algorithmic accountability’ implies that a decision-making system is properly documented and that mitigating mechanisms for personal data processing, such as transparency, audit controls or sanctions, apply (Alhadeff et al., 2012). This may include an obligation to report, explain, or justify algorithmic decision-making, as well as to mitigate any negative social impacts or harms.
Within the accountability principle, of great importance is the process of impact assessment. The GDPR requires that a DPIA be carried out, inter alia, in case of a ‘systematic and extensive evaluation of personal aspects […] based on automated processing, including profiling, and […] produce legal effects’. The obligation to conduct a DPIA is triggered when a type of processing using new technologies is likely to result in a high risk to the rights and freedoms of natural persons. The assessment should contain a systematic description of the envisaged processing operations, the (justification of the) necessity and proportionality of such processing, and an assessment of the risks to the rights and freedoms of data subjects, along with mitigating measures. The DPIA might be the most prominent, far-reaching, and comprehensive tool for algorithmic accountability, as per Article 35 GDPR (Kloza et al., 2019). As a meta-regulatory tool (Binns, 2017), it obliges the data controller to carry out a multifaceted, systematic, techno-legal self-assessment before initiating the processing of personal data. This obligation epitomizes the enshrined data protection principles and encapsulates the risk-based approach, while strengthening the principle of accountability.
Hard-law regulation of artificial intelligence
The EC has recently proposed a Regulation laying down harmonized rules for artificial intelligence (European Commission, 2021), introducing a risk-based approach to AI with two cumulative criteria: (a) the applicable sector, and (b) the impact on the affected parties. For instance, some AI applications, such as those involving remote biometric identification or other intrusive surveillance technologies, should always be considered ‘high risk’ by default. To ensure compliance with these requirements, the EC proposes the procedure of conformity assessment (Veale & Borgesius, 2021), addressing requirements for AI and its ‘supply chain’. The EC, considering the guidelines of the High-Level Expert Group, requires that (high-risk) AI applications conform to requirements on data governance, documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy, and security.
The conformity assessment in the proposed AI Regulation essentially deviates from the DPIA of the GDPR, as expressed in an earlier work, following a preliminary linguistic comparison between the terms ‘impact’ and ‘conformity’. Impact, in general, may mean the effective action of one thing or person upon another: the effect of an action. In data protection law, the DPIA process emphasizes the estimated impact stemming from the high risks of the processing (Gellert, 2021). The choice of the term ‘impact’ focuses exactly on the uncertain future and the envisaged consequences (perceived effects) for the fundamental right to data protection or other interrelated rights. Conformity assessment, by contrast, is (mostly) an internal procedure for providers of high-risk systems and constitutes a formalistic ex ante tool. Combined with the CE marking, the choice of the term ‘conformity’ alludes to an anterior control mechanism, which can hardly be updated as the conforming AI system adapts itself (e.g., self-learning AI algorithms).
Impact assessment towards safeguarding fundamental rights
Data protection law and the proposed artificial intelligence law both act as umbrella mechanisms for several fundamental rights. Typically, such rights are: (a) the right to non-discrimination, (b) the freedom of thought, conscience, and religion, (c) the freedom of expression and information, (d) the right to an effective remedy and to a fair trial, and (e) the respect for private and family life, home, and communications (right to privacy). Case law from the Court of Justice of the EU (CJEU) and the European Court of Human Rights (ECtHR) often involves two or more affected rights. For instance, the right not to be subject to automated decision-making is usually interfered with alongside the freedom of expression and/or information in the case of filter bubbles, or alongside the right to non-discrimination in credit scoring.
Taking a step further, fundamental rights in an artificial intelligence environment could be more comprehensively protected if a dedicated impact assessment process were to be proposed. Towards this, the DPIA process and the conformity assessment procedure, combined, could contribute to the concept of an algorithmic impact assessment or artificial intelligence impact assessment. This should enable assessors to identify and mitigate risks to the fundamental rights involved before they employ an AI system (Hallinan & Martin, 2020; Janssen, 2020; Mantelero, 2022). While this concept has already gained importance in some jurisdictions outside Europe (Borgesius, 2018; Kaminski & Malgieri, 2019; Koshiyama, 2019; Metcalf et al., 2021; Reisman et al., 2018; Yeung, 2021), there is still a research gap vis-à-vis the EU legal order. Specifically, more elucidation is necessary on its societal significance (rationale), on how to efficiently conduct it (methodology), and on its interrelation with other assessment tools (DPIA, risk assessment). Similarities, differences and bridging practices between the established DPIA and the conformity assessment for artificial intelligence have recently been addressed (Demetzou, 2022). Indeed, a comparison of the scope, the content and the conditions of each would reveal that parallels could be drawn between them.
Further research and conclusions
How to accommodate ‘societal’ fundamental rights thinking and ‘corporate’ compliance together, i.e., how to combine all these different kinds of assessments, is a central question, accompanied by auxiliary ones:
- what are the persistent challenges and risks of artificial intelligence, and how to systematize them,
- how is the data protection impact assessment currently being employed vis-à-vis artificial intelligence applications,
- what is the role of the conformity assessment, and how do the requirements stemming from it support its effectiveness,
- what is the objective of a fundamental rights impact assessment in the artificial intelligence realm.
Taking into consideration the abovementioned (legal) novelties, it seems that the proliferation of AI applications requires a sophisticated approach, not yet encountered in law or legal practice. Although literature and policy makers have thus far created an extensive corpus on the subject matter of both data protection and artificial intelligence, fundamental rights might still need a specific impact assessment process to be adequately promoted in the algorithmic realm. The methodology is expected to be built on consistently used practices for the DPIA process, on the one hand, and on the notions of the recently proposed conformity assessment and fundamental rights impact assessment, on the other. Finally, once designed appropriately, the methodology would address a persistent research and compliance gap among smaller-sized data controllers, who use AI but often lack the resources to elaborate a comprehensive AI impact assessment. By enabling such data controllers with directed and compliance-oriented instructions, abidance by the law is expected to be considerably higher.
5 October 2022
Acknowledgment:
This blog post has benefited from feedback received through discussions with Prof. Vagelis Papakonstantinou, within the CDSL's research agenda.