Ethics has become a fashionable word. In today's hyper-technological world, it is increasingly seen as essential to the design of robots and intelligent systems. But that is only the theory. In practice, moving from façade ethics to ethics of substance is proving increasingly difficult.

This content was published on March 19, 2021 - 06:00
According to a study by the Massachusetts Institute of Technology (MIT), there is a substantial gap between how artificial intelligence (AI) can be used and how it should be used. The study highlights that as many as 30% of large US companies are actively implementing AI, yet few have a concrete plan to ensure that their solutions are ethically sound.
This statistic concerns all of us. Most of the world's technology giants are concentrated on the Pacific coast of the United States, between Silicon Valley (think Google, Facebook and Apple) and Seattle (Microsoft and Amazon). They are the so-called GAFAM, the magical five “musketeers” of the technological revolution, who increasingly influence our lives on and off the web.
“Artificial intelligence is everywhere and is advancing at a fast pace; nevertheless, very often developers of AI tools and models are not really aware of how these will behave when they are deployed in complex real-world settings,” Alexandros Kalousis, Professor of Data Mining and Machine Learning at the University of Applied Sciences of Western Switzerland, told me in an interview. “The realisation of harmful consequences comes after the fact, if at all,” he added.
AI is a powerful tool with far-reaching implications for the real world, society and individuals. We are all already subject to recommendations and profiling based on our online behaviour. The ubiquity of AI is now well established. “How AI systems change our future depends on the people and policies that guide their implementations,” says ethical AI researcher Aparna Ashok in a portrait about her work.
To predict and mitigate the risks of new technologies, we also need unbiased research that is free from business interests. But often, research is financed by the very corporations that like to advertise the importance of moral principles while looking after their own commercial interests.
The case of Timnit Gebru, the distinguished AI ethics researcher who was abruptly fired by Google after co-authoring a paper criticising the heart of the company's business - its search engine - is a case in point.
In an article that will soon be published on swissinfo.ch, I will look more closely at Gebru's case and the so-called practice of “ethics washing”, i.e. façade ethics, with experts and Googlers from Switzerland.
Are you afraid of the power of the technology giants? How do you deal with these questions in your everyday life? What are your experiences? Let's talk about it! Write me your comments.
Ethics and robotics: what values?
For this edition of the Swiss Science Watch newsletter, our collaboration with NCCR Robotics - the National Centre of Competence in Research in robotics - has led us to explore the question of ethics in the world of robotics.
We spoke to Aude Billard, Professor of Machine Learning and Robotics at the Swiss Federal Institute of Technology in Lausanne (EPFL):
SWI swissinfo.ch: Professor, what are the risks and benefits of using robots on a large scale?
Aude Billard: That's a very bold question. It all depends on what you mean by benefits and risks. With regard to applications in the medical field, such as prostheses and robotic wheelchairs, I mainly see benefits. These devices allow people to return to a normal life. But even a robotic wheelchair can raise problems, such as the analysis of the surrounding environment, which relies on personal data.
As for the use of robots in the military, I personally see only risks. One might think that the use of robots in armies would reduce the number of people killed. But in reality, a machine could kill more frequently and more precisely.
Then there is the question of safety: if we used drones to deliver goods to people's homes, we would have less traffic on our roads, but there would also be a risk of a drone hitting a human and someone getting hurt. The ethical and political question is: how do we find a compromise between safety and comfort in our society? I think we need to discuss how to balance these two different values.
Do you think that ethics washing is also an issue in robotics?
There are guidelines in robotics. Through the “P7000” series of projects, the IEEE [the world's largest technical professional organisation dedicated to advancing technology for the benefit of humanity] is trying to create standards to certify the ethics of robotic devices at the design stage. While this is good on the one hand, the concern is that robots are already navigating our world: we needed an ethical standard long ago, one that also takes into account the impact of robots on the human environment. But I don't think any such guidelines will be produced in the short term.
What ethical issues should be addressed most urgently in the field of robotics?
It is very important that society come together to define its own ethical values. At the moment there are many contradictions; just look at the military sector: everyone agrees that killing is unethical, yet states train soldiers to kill and support acts of war.
In robotics, the question is the same: we need to agree, at a European level, on what the most important reference values are. Here, too, there are contradictions: we do not want robots to cause harm, yet we deploy them knowing they will harm someone (I am thinking, for example, of autonomous vehicles). But how do we ascribe responsibility for the damage caused by a robot? That is why we need to strike a balance between what is ethically unacceptable and the risks we are prepared to take as a society.
In your opinion, is it “ethical” to expect perfection from robots?
I don't know if it's ethical or not. It's certainly not realistic. The more complex the system, the greater the chance that something will not work as it should, and the more difficult it becomes to identify the problem. This is why we need very precise guidelines for the actual design of, for example, autonomous vehicles and autonomous wheelchairs.
Do you have an opinion on this? Let's talk about it over a (virtual) coffee.
Upcoming events not to be missed
Touch-free interactions thanks to AI
Want to learn more about how machine learning can enable citizen interactions in the post-pandemic era? Then don't miss the event "Untouched interaction through Machine Learning" organised by the Swiss-Korean Science Club, a platform created by the Office of Science and Technology of the Swiss Embassy in South Korea to showcase the latest developments in research projects between Switzerland and South Korea.
I will be present at the event as a moderator and I warmly invite you to participate!
When? 24 March, 2021 - 9.00 am CET
Where? Online via Zoom
Robots in space... and what about people?
Following the landing of the American Perseverance rover on Mars, the question of the “Martian dream” - the habitability of the red planet by humans - has returned to the fore. Experts are divided between those who believe that humans will one day be able to live on Mars and those who see too many obstacles on the horizon.
Save the date – on April 15, SWI swissinfo.ch will host a live debate on this topic with experts Sylvia Ekström, Javier G. Nombela and Pierre Brisson. If you have questions for them you want us to raise in the debate, send them to me! We’ll give many more details in a future edition of this newsletter, and on swissinfo.ch.
In compliance with the JTI standards