On Monday 15 February, Prof. Mireille Hildebrandt will give an informal talk at the Stokes Society (Pembroke College, University of Cambridge), where she will briefly introduce her views on ‘Robust AI and Robust Law’, explaining that the notion of ‘robust AI’ offers an interesting perspective on what it would mean to speak of ‘robust law’ and of ‘robust legal technologies’.
What is robust AI? Besides being the name of a company, the term usually refers to computing systems that have been verified (by mathematical proof) and validated (by empirical testing) to ensure safety and functionality (and, hopefully, compatibility with fundamental rights, though this has not yet become part of the definition of ‘robust’ in AI research).
What is robust law? Hildebrandt would say this refers to a legal system that incorporates the checks and balances of the rule of law and is capable of resisting attempts to corrupt its mode of existence (e.g. the colonisation of law by methodologies befitting other disciplines or practices).
What is ‘robust legal tech’? This is where things get complicated. If we do not want law to be colonised by the assumptions inherent in code- or data-driven technologies, we need to pay keen attention to how such legal tech is integrated into legal practice or legal research. At the same time, we want high-quality tech that complies with the requirements of its own discipline (computer science, engineering), meaning that we need to properly understand and navigate both understandings of ‘robust’. This may, for instance, imply deferring quality control to software developers, which raises many questions (check the COHUBICOL website, as Hildebrandt has promised to write a blog on the matter in February or March 2021).