11/02/2021 by Preligens
Aurélie Jean, PhD, scientist and entrepreneur, answers our three questions following the release of her book Les algorithmes font-ils la loi ? ("Do Algorithms Rule?").
It is important to maintain a significant role on the international artificial intelligence scene, so as to leave our mark on the discipline. This will allow our citizens to benefit from future advances, and it will also let us influence how these algorithms are developed and used. In this respect, the GDPR has successfully influenced how many nations view the collection of personal data, starting with the United States and the CCPA (California Consumer Privacy Act).
This is a multi-scale subject. It starts at school, from kindergarten onwards, by developing in every child an appetite for science, so that each of them becomes curious and enlightened and able to ask the right questions. I still come across far too many adults who are 'traumatised' (their word) by mathematics, or by science in general. Beyond school, companies need to offer their employees continuous training in scientific culture and the scientific method, while scientists and engineers, for their part, must share their knowledge and know-how with the general public. Technology companies also have a moral obligation to ensure that their users understand, at least in broad terms, how their tools work: what data is collected, and for what purpose (content suggestion, matchmaking, testing a tool's effectiveness, and so on). This is a subject in which social cohesion and individual responsibility play a predominant role.
You can't regulate an algorithm itself, for the simple reason that you often can't fully evaluate it. That said, and as I explain in my book, we can and should regulate the development, testing and use of algorithms; this is also referred to as algorithmic governance. This is why the notion of algorithmic explainability, one of the pillars of the book, is at the heart of upcoming legal frameworks. It is fundamental to require stakeholders to apply explainability methods, to ensure they have as much control as possible, even if only partial, over the operating logic of their algorithms. In this way, errors, bugs, and even technological discrimination linked to algorithmic biases, arising for example from biases in the training data sets, can be anticipated. In my book I present, in a non-exhaustive way, explainability methods that can be applied before training (on the data sets), during training, or afterwards, once the algorithm is trained.
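To make the idea of a post-training explainability method concrete, here is a minimal sketch of permutation feature importance, one common technique applied after an algorithm is trained. This example is illustrative only and is not taken from the book: the synthetic data, the least-squares "model", and all names are assumptions. The idea is to shuffle one feature at a time and measure how much the model's error degrades; features whose shuffling hurts the most are the ones the model actually relies on.

```python
import numpy as np

# Hypothetical illustration (not from the interview): permutation
# feature importance, a post-hoc explainability technique.
rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on feature 0 and not on feature 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=500)

# A trivial stand-in for a "trained model": ordinary least squares.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda data: data @ coef

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

baseline = mse(y, predict(X))

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    # Shuffling column j breaks its link to the target; the resulting
    # rise in error measures how much the model depends on it.
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(y, predict(Xp)) - baseline)

print(importances)  # feature 0 should dominate feature 1
```

Because the method only needs predictions, it treats the model as a black box, which is exactly why such techniques remain usable when the operating logic can only be recovered partially.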