The Catholic Register

AI needs some Christian ethics, UBC prof says

Photo: Unsplash

Nicholas Elbers
Canadian Catholic News


Vancouver

The tech world isn’t known for its moral character. Its pursuit of efficiency and technological progress rarely demonstrates a deeper appreciation of spiritual or moral considerations.

Now the same companies that have consolidated soft political power through social media and search result manipulation are integrating so-called artificial intelligence into every facet of the world, including, troublingly, warfare.

At a recent talk titled "AI & Faith: A Christian Contribution to Ethical AI," University of British Columbia law professor Benjamin Perrin said AI development and implementation must be directed by principles beyond legal compliance.

“What is legal can change,” he told attendees.

AI is fundamentally different from existing technology and tools, he said. Unlike the internet, a technology with a similarly revolutionary impact, AI “learns” and “is taught,” because “what it’s fed will guide it.”

He stressed how that reality creates powerful and sometimes concerning feedback loops. For one, humans have a demonstrated bias toward privileging answers derived from machines. If the data fed into the machines is corrupt, incomplete or ideologically biased, the problem and its result will be masked by the AI’s illusion of objectivity.

Perrin expressed serious reservations about creating so-called embodied AI systems — AI embedded in robots — especially those being developed for warfare. Policy alone cannot direct the development of such systems, he said. Strong principles are needed to ensure embodied AI technologies are used to help, not harm, society.

Whether these technologies should be built is irrelevant at this point. The genie is out of the bottle. All that’s left is to minimize the damage these autonomous systems can cause and ensure human oversight and control are prioritized.

For Perrin, there’s a need to add the Christian voice to the growing number of perspectives on AI. Queer studies, women’s studies and Indigenous perspectives, among others, have already weighed in on the topic, and Perrin believes Christians can contribute something concrete and valuable to an ethical AI framework.

He hopes such an ethical framework, built on a foundation he took from Tonye Brown of FaithGPT Blog fame, will help Christians better understand the technology, its limits and, perhaps most importantly, its potential to positively affect human lives.

The core principles of the framework cover a range of topics, including AI systems affirming the dignity of humans as made in the image and likeness of God, maintaining human agency and focusing on sustainability and stewardship of the environment.

In a follow-up email, Perrin said it’s “empowering for followers of Jesus” that even though the world faces major challenges, “the timeless values, teachings and principles in the Bible offer something compelling, rich and meaningful to address them.”

Philosophically, he hopes those who interact with and create AI maintain humility in the face of the technology’s potential, being careful to understand and accept its limitations.

There can be a temptation with data-driven technology to prioritize knowledge and computing power, he said. Integrating a Biblical emphasis on wisdom into their work will help creators of technology to promote the common good rather than raw efficiency.

Ultimately, rejection of the illusion of objectivity in AI systems and the claim that they are value-neutral in their orientation and programming must be at the core of a Christian understanding of the technology.

Perrin highlighted developing AI technologies that are especially concerning, such as autonomous weapons systems. The amount and quality of human control needed for such weapons is hotly debated, with countries including China, the United States and Russia pushing the boundaries of what technology can do. When weapons can make decisions at a staggering pace, human intervention becomes more difficult.

Data-driven AI can also exacerbate existing systemic problems, Perrin said, pointing to poorly trained police profiling software that produced racially biased outcomes. Police and governments, he argued, must disclose their use of AI systems to ensure accountability and prevent abuse or negligence.

At the same time, AI has its benefits. Perrin pointed to large language models (LLMs) like ChatGPT, which can converse in and translate passable Cree. Such models can help preserve endangered languages and support under-resourced communities in maintaining their linguistic heritage.

In the medical field, AI’s data-analytic capabilities have reduced patient mortality by up to 25 per cent at one Toronto hospital. By analyzing patient data, AI can identify high-risk patients even when conventional medicine deems them stable.

A version of this story appeared in the April 20, 2025, issue of The Catholic Register with the headline "AI needs some Christian ethics, UBC prof says".
