Reinforcement Learning (RL) is a rapidly growing branch of AI research, with the capacity to learn to exploit our dynamic behavior in real time. From YouTube’s recommendation algorithm to post-surgery opioid prescriptions, RL algorithms are poised to permeate our daily lives. The ability of the RL system to tease out behavioral responses, and the human experimentation inherent to its learning, motivate a range of crucial policy questions about RL’s societal implications that are distinct from those addressed in the literature on other branches of Machine Learning (ML).
This NeurIPS 2021 workshop addresses these implications under the heading of the Political Economy of Reinforcement Learning (PERLS).
Renowned UC Berkeley AI scientist Stuart Russell and our own Mireille Hildebrandt will set the tone with kick-off talks. Mireille will speak on the foundational incomputability of things that matter, and on the implications of vetting the so-called 'reward function' that is key to reinforcement learning. In short: what are the real-life (RL) implications of reinforcement learning (RL), depending on who gets to decide, and how, about the goals and the means? And what could be the role of Question Zero, i.e. the question that should always precede any cost-benefit analysis (CBA): what problem is a specific type of AI solving, what problems does it NOT solve, and what problems does it CREATE?
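To make the stakes of the reward function concrete, here is a minimal sketch, with all states, actions, and reward numbers invented purely for illustration: the same off-the-shelf learning algorithm (tabular Q-learning), run on the same toy environment, arrives at opposite policies depending solely on which reward function its designers chose. Nothing in this sketch comes from the workshop itself; it only illustrates that the goals are encoded in the reward.

```python
import random

# Toy recommender environment, invented for illustration.
# States: 0 = "browsing", 1 = "hooked".
# Actions: 0 = show calm content, 1 = show provocative content.

def step(state, action):
    """Deterministic toy dynamics: provocative content moves the user to 'hooked'."""
    return 1 if action == 1 else 0

def engagement_reward(state, action, next_state):
    """One possible goal: maximise watch time; pays most for a 'hooked' user."""
    return 2.0 if next_state == 1 else 0.5

def wellbeing_reward(state, action, next_state):
    """A different goal: penalise the 'hooked' state, reward calm browsing."""
    return -1.0 if next_state == 1 else 1.0

def q_learning(reward_fn, steps=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Standard tabular Q-learning; only the reward function varies."""
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    state = 0
    for _ in range(steps):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.choice([0, 1])
        else:
            action = max((0, 1), key=lambda a: q[(state, a)])
        next_state = step(state, action)
        r = reward_fn(state, action, next_state)
        best_next = max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (r + gamma * best_next - q[(state, action)])
        state = next_state
    # Return the greedy policy: best action in each state.
    return {s: max((0, 1), key=lambda a: q[(s, a)]) for s in (0, 1)}

print("policy under engagement reward:", q_learning(engagement_reward))
print("policy under wellbeing reward: ", q_learning(wellbeing_reward))
```

Run as written, the first policy keeps showing provocative content while the second never does: same algorithm, same environment, different goals. Deciding which reward function ships is precisely the kind of choice the workshop asks about.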