3 questions to Laurence Devillers


04/26/2021 by Preligens

Laurence Devillers is a researcher at LIMSI-CNRS and a professor of Computer Science and Artificial Intelligence (AI) at Sorbonne Université. She is an expert in language processing, machine learning, human-computer dialogue, affective computing and ethics applied to AI. She conducts research on emotion detection and on the affective and social dimensions of spoken interactions, particularly with robots. She works on the BAD-nudge BAD-robot project of the DATAIA Institute. She is a member of Allistène's think tank on research ethics in digital science and technology (CERNA), where she has worked in particular on biases in machine learning. She is the author of more than 150 scientific publications as well as several books, including "Emotional Robots" and "Digital Sovereignty in the Post-Crisis World", both published in 2020 by Éditions de l'Observatoire.

What is your analysis of the impact of the health crisis on digital sovereignty at the European level?

The health crisis has reaffirmed the importance of digital technology, which was essential to maintaining activity during lockdown, whether for work, health, education or culture. The COVID-19 crisis has also made the general public aware of what is at stake in health, food, industrial and digital sovereignty. The power of the American digital giants (GAFAM: Google, Apple, Facebook, Amazon, Microsoft) has become indecent in this period of economic recession caused by the pandemic. In December, the European Commission presented two draft regulations establishing a new regulatory framework for the digital giants: the Digital Services Act (DSA) and the Digital Markets Act (DMA). Defending European digital sovereignty has become everyone's concern.

Do you think that the upcoming European regulations on ethics and the use of artificial intelligence will be a brake on innovation or, on the contrary, a new virtuous framework for the European Union that could inspire its partners?

Yesterday, the European Commission published a draft regulation on the use of AI, intended to restore trust. This legal framework is necessary to foster the development and adoption of AI that offers a high level of protection of public interests, including health, safety and the fundamental rights and freedoms of people in our societies. We already had the European framework for the protection of personal data, the GDPR, but we now also have a regulatory framework that sets limits on uses and opens up questions about ethics. Ethics aims to determine the right thing to do. Ethical tensions are inevitable in many fields, and anticipating the impact of AI is necessary. This new step is important and political, because this legal framework enshrines a European approach based on our values, those of human rights and ethics, including respect for human dignity, freedom, justice, democracy, equality, solidarity, fairness and transparency. The principle that the human being must remain at the center should facilitate the acceptance of AI in all its developments. These European values must be omnipresent in the development of AI in the European Union, and we must defend them at the global level together with all democracies.

Given the prospects for the development of AI research and its applications, what should Europe's priorities be to strengthen its sovereignty in the face of the Chinese and American powers?

Artificial intelligence should not be an end in itself, but a tool in the service of humans, used in countless applications to enhance the well-being of individuals and of our community. AI rules should not unnecessarily constrain technological development. While it is not possible to anticipate every possible use or application of AI that may arise in the future, the tensions must be understood. AI that integrates ethical dimensions such as fairness, transparency and explainability also poses many research challenges. The development of AI research and its applications on these subjects must be amplified in Europe.

Europe has more than 512 million Internet users with high purchasing power, who are also of interest to GAFAM. It is in the European Union's interest to preserve our technological lead and to ensure that Europeans can benefit from new technologies developed according to their values and principles. In addition to the legal framework presented by the Commission, sharing our data is necessary to counterbalance the power of GAFAM. A debate must also take place, in society and in companies, both those that build AI and those that use it, on the limits and possibilities of these machines and on their usefulness. The objective is to enable the best possible cooperation between AI and humans. We need to develop a dual attitude of trust and doubt in order to preserve our free will in the face of the machine.

To summarize, our strength in Europe is to develop a distinctive vision of AI based on our values and principles. The three keys to trustworthy AI are (1) respecting a legal framework, (2) anticipating ethical risks and (3) verifying the robustness of systems. The GDPR, which governs the protection of our personal data, was criticized at first; it has since become an example for other countries. This new legal framework on the use of AI systems, respectful of humans and society, could in time become the envy of the world and boost our economy.

Laurence Devillers, Professor of Artificial Intelligence 

Twitter: @lau_devil

  • Research at LISN-CNRS - CNPEN

https://www.ccne-ethique.fr/fr/actualites/cnpen-les-enjeux-ethiques-des-agents-conversationnels

  • GPAI on the future of work

https://www.gpai.ai/fr/projets/avenir-du-travail/#:~:text=Group%20of%20work%20on%20workers%20and%20increasing%20the%20productivity%C3%A9

  • Author of Les robots émotionnels (Éditions de l'Observatoire, 2020)

https://www.editions-observatoire.com/content/Les_robots_%C3%A9motionnels

Photo © Olivier Ezratty
