The Catholic Register

Laws, ethical principles can ensure AI benefits all, speakers say

Franciscan Father Paolo Benanti, an expert on artificial intelligence and professor of moral theology at the Pontifical Gregorian University, speaks at a conference at LUMSA university in Rome March 7, 2024.

CNS photo/Justin McLellan

Carol Glatz
Catholic News Service

Rome

While the potential benefits of AI may outnumber its feared risks, a group of experts meeting in Rome insisted that global legislation and ethical principles must be developed and applied now to protect human rights and to curb or prevent harm.

"Voluntary compliance with ethical principles and standards is no longer sufficient for high-risk AI applications," said Marcus Wu, chargé d'affaires at the Australian Embassy to the Holy See, as he introduced a panel of experts to discuss "AI, Ethics and Human Rights" at the ambassador's residence Feb. 13.

"Like many countries, Australia is grappling with how to harness the potential AI benefits while mitigating the risks, to ensure AI is developed and used in a manner that is safe, human-centric, trustworthy and responsible," he said.

"Australia is looking to develop mandatory ground rules to shape the use of AI in high-risk settings," he said. "This includes assessing AI use, the risk it poses to people's physical and mental health and to human rights, and how severe the impacts of this might be."

To help inform their considerations, embassy staff invited five experts -- representing the fields of theology, economics, technology, public policy and law -- to discuss what they saw as the most pressing concerns requiring immediate oversight, preventative measures and ethical guidelines.

Luigi Ruggerone, a professor of economics at Milan's Catholic University of the Sacred Heart and director of business and innovation research at Intesa Sanpaolo, Italy's largest bank, said wage inequalities will very likely continue.

Just as increased mechanization boosted worker productivity after the Industrial Revolution, artificial intelligence "is certainly helping a lot in increasing the productivity of labor," he said. Economic theory holds that workers' wages should increase hand-in-hand with increased productivity.

"However, we have a problem here," he said. "In the last 70 years, 99.9% of those who receive wages and income have not seen their wages increase by 1.5% or 2% a year," the increase needed to keep pace with the estimated annual productivity increases.

In fact, "Only 0.01% of people, so basically the managers, those who decide on their own salary, only they have received an increase," he said, and their salaries have grown much faster than the increase in their productivity.

So, while AI is expected to offer huge increases in productivity, he said, it appears those who own the capital will get "much richer" and the people providing the labor will be "much, much poorer."

Franciscan Father Paolo Benanti, an expert on artificial intelligence and professor of moral theology at Rome's Pontifical Gregorian University, said the "attention economy," in which tech companies vie for people's attention on their platforms, may be giving way to the "intention economy" thanks to large language models. These models can be trained to predict and propose products and services to users, making people's "intentions" a possible new commodity to be bought and sold.

Father Benanti asked what kind of impact that would have on the rights of human beings who will be pushed to interact through a kind of "software-defined reality."

Diego Ciulli, head of government affairs and public policy at Google Italy, said, "With AI, it will be technically possible to monitor and control what everybody, everywhere does" and says.

Before AI, it was technically impossible to police the vast amounts of content on YouTube, he said. But the same technology developed to detect cyber pornography and terrorist content "could be available to monitor everything we say online and offline and to control freedom of speech."

Edward Santow, a lawyer and founder and co-director of the Human Technology Institute at the University of Technology Sydney, said one of the biggest concerns from a human rights perspective is freedom of speech.

Freedom of speech involves two rights, he said: the right to be free to express oneself or transmit information, and "the right to receive information in a way that is unpolluted."

"It is increasingly difficult to form opinions freely, be it on a matter of religion or on a matter of politics or whatever it happens to be, if your intellectual diet is mediated primarily by some of the social media platforms," he said.

"We're in an era right now where there is sometimes an exclusive focus" on freedom of expression, with increasing pressure to have no restrictions at all and to let people speak as they want, he said.

But, he said, "we have had for many, many years laws on defamation or libel that restrict people from saying things that are false and defamatory about other people," as well as privacy law protections and intellectual property considerations where "you can't just reveal people's trade secrets. So we know that that could not possibly be an absolute right."

While most of the panelists agreed that the opportunities offered by AI, especially in broadening everyone's access to education and health care, are greater than the risks, Santow cautioned that this may not be "the right calculus."

He recalled having a client who had been caught stealing but wanted leniency from the judge because, as the client said, "for each of the last six days, I was going to steal and I didn't. Surely, I get credit for that."

"That's just not how the law, or even frankly morality, works," Santow said. The same thing goes for AI and tech companies. "You don't get credit in human rights or ethical terms for all of the good uses of artificial intelligence if you are also causing human rights harm. You have to avoid human rights harm anyway."
