The latest issue of the European Health and Pharma Law Review (EHPL) has published the paper by Anastasiya Kiseleva, written together with Professor Paul Quinn, "Are You AI's Favourite? EU Legal Implications of Biased AI Systems in Clinical Genetics and Genomics".
In the article, the authors look at bias from two perspectives: societal and statistical. Anastasiya and Paul define and classify biases within these perspectives and explore three negative consequences of bias in AI systems applied in genetics and genomics: discrimination and stigmatization (the more societal concepts) and inaccuracy of AI's decisions (more related to the statistical perception of bias). Each of these consequences is analysed within the framework it corresponds to.
Recognizing inaccuracy as a harm caused by biased AI systems is one of the article's most important contributions. It argues that, once identified, bias in an AI system indicates possible inaccuracy in its outcomes. The authors demonstrate this through an analysis of the medical devices framework: whether it applies to AI applications used in genomics and genetics, how it defines bias, and what requirements it imposes to prevent it. The paper also examines how this framework can work together with anti-discrimination and anti-stigmatization rules, especially in light of the upcoming general legal framework on AI. It submits that all these frameworks should be considered in the fight against bias in AI systems, because they reflect different approaches to the nature of bias and thus provide a broader range of mechanisms to prevent or minimize it.