
When people disagree, how instructive is AI?


The other day I had a disorienting encounter on social media. A few years ago, I published a book on transubstantiation in ecumenical dialogue. Since then, I have often been tagged in online conversations about transubstantiation, particularly when Catholic teaching is being misunderstood or misrepresented.

In this case, a non-Catholic who was misrepresenting Catholic teaching was directed to my work. After a brief exchange, in which he demanded citations and I provided them, he informed me that he had asked an AI engine about the matter and that it had agreed with him, not me. This was presented as if it straightforwardly resolved the question.

There is an old saying in theology, “Roma locuta est, causa finita est.” Roughly, “Rome has spoken, the matter is settled.” For this interlocutor, however, it was not Rome’s word that was final, but a large language model’s word. What are we to make of this?

LLMs can’t know things

The first thing to notice here is that large language models do not actually know things. Relatedly, they have no reliable means of discerning truth from falsehood. They merely use predictive algorithms to generate content based on the enormous amount of text they have been fed. This means they can do a lot of things quite well — they are excellent brainstorming partners, for example — but they are far from dependable sources of information.

If a false idea is prevalent in the dataset from which the model derives its content, it’s likely to appear in its responses. In other words, AI is especially bad at questions where there is already widespread error and confusion. If, let us imagine, the internet is full of confusion about Catholic teaching, then asking a large language model about that teaching is not necessarily the best way to get accurate information.

LLMs are open to manipulation

Which brings us to a second key consideration. What you get out of a large language model depends on what you put into it. The way that a prompt or query is formulated makes a huge difference in the outcome your AI engine generates. This means that AI can be manipulated, intentionally or otherwise, to give you the answer you are looking for.

If you want an argument against transubstantiation, AI can give you one. If you want an argument for it, it can give you that, too. Indeed, if you want a triumphalist articulation of transubstantiation that demonstrates the errors of Protestantism, you can get that. Or, if you want an ecumenical articulation that highlights commonalities between transubstantiation and various Protestant theologies of Real Presence, all you have to do is ask.

The very specific question in this case was the relationship between transubstantiation and the eucharistic doctrine of the reformer John Calvin. While it is widely believed that Calvin’s doctrine represents a total repudiation of transubstantiation — a position easy enough to substantiate by a surface-level reading of Calvin — scholars familiar with both Calvin’s teachings and Thomas Aquinas’s articulation of transubstantiation are regularly struck by their similarities.

In other words, this is exactly the kind of question you would expect a large language model to handle very poorly. It will be far more influenced by the surface-level understanding common on the internet than by the more nuanced and researched positions of qualified theologians publishing books and articles in academic journals.

The limits of AI

After my summary dismissal from the social media conversation, I logged in and asked a large language model about Thomas and Calvin on the Eucharist, to see what it would say. Not surprisingly, its answer wasn’t terrible. But it did slip up a little in its description of Aquinas. When I said that my understanding was that the language it was using was a little misleading and that Aquinas did not quite say that, it politely apologized for its error, affirmed my point, and amended its position.

It would be tempting here to think that I had corrected a large language model, taught it to discern truth from falsehood. I had, in fact, done no such thing. I had simply manipulated a tool to get it to say what I wanted. That what it said was closer to the truth is immaterial; I could have manipulated it the other way just as easily.

Large language models are an incredible advance in computing and a remarkable tool that we are just beginning to learn how to leverage. But they do not have any privileged access to truth. If you are persistent, however, and experiment with enough prompts, you can get one to say something like:

“The responses I provide are generated based on statistical patterns and associations in the data, rather than on personal knowledge or understanding. While I can provide information and answer questions to the best of my ability based on the data I’ve been trained on, it’s important to remember that my responses may not always be accurate or reflect objective truth, especially in complex or subjective matters. It’s always a good idea to verify information from multiple reliable sources when seeking factual or authoritative knowledge.”