Today's Large Language Models are Essentially BS Machines

LLMs have no way to determine whether the responses they generate are factually correct or make logical sense.

By Ryan McGreal

Posted September 07, 2023 in Blog (Last Updated September 07, 2023)

Contents

1 Introduction
2 Reasonable-Sounding Nonsense
3 Understanding BS
4 LLMs are BS Generators
5 Ripe for Abuse
6 What’s Next

1 Introduction

A large language model (LLM) is a type of artificial intelligence (AI) in which a specialized computer system called a neural network consumes a very large corpus of written language and learns to identify patterns in how the individual words it has ingested relate to each other and to larger structures of written language. The model is then fine-tuned with additional training - often by teams of human trainers who provide feedback on its responses - to produce the desired kinds of responses to prompts.

When you prompt an LLM with a question, it uses predictive logic to generate a response, successively predicting the next word in its response based on its expansive model of how words relate to each other.
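To make the idea of successive next-word prediction concrete, here is a deliberately tiny sketch in Python. The hand-written probability table and the generate() helper are my own illustrative inventions - a real LLM learns its word relationships from billions of documents and conditions on the whole preceding context with a neural network - but the generation loop is conceptually the same: pick a statistically likely next word, append it, repeat.

```python
import random

# Toy stand-in for a language model: for each word, the probabilities of the
# words that can follow it. A real LLM learns these relationships (over tokens
# and long contexts) from its training data rather than from a hand-made table.
NEXT_WORD_PROBS = {
    "<start>":  {"the": 0.6, "a": 0.4},
    "the":      {"city": 0.5, "future": 0.5},
    "a":        {"city": 0.7, "book": 0.3},
    "city":     {"is": 0.6, "<end>": 0.4},
    "future":   {"is": 0.5, "<end>": 0.5},
    "book":     {"is": 0.4, "<end>": 0.6},
    "is":       {"growing": 0.5, "changing": 0.5},
    "growing":  {"<end>": 1.0},
    "changing": {"<end>": 1.0},
}

def generate(max_words: int = 10) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    word = "<start>"
    output = []
    for _ in range(max_words):
        choices = NEXT_WORD_PROBS[word]
        # Sample the next word in proportion to its probability. Nothing here
        # checks whether the resulting sentence is true - only whether each
        # word is statistically likely to follow the previous one.
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            break
        output.append(word)
    return " ".join(output)

print(generate())  # e.g. "the city is changing"
```

Note that nothing in that loop consults any source of facts. Likelihood, not truth, is the only criterion - and that property carries over, at vastly greater scale and fluency, to the real thing.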

This approach can produce extraordinary outputs, but it's important to understand what is not happening. The LLM does not "understand" what it is saying. It has a model of how words relate to each other, but does not have a model of the objects to which the words refer.

It engages in predictive logic, but cannot perform syllogistic logic - reasoning to a logical conclusion from a set of propositions that are assumed to be true - except insofar as a rough approximation of syllogism tends to emerge from predicting each word in a response based on a large corpus of training data.

In addition, today's LLMs cannot independently fact-check their own responses against some kind of knowledge base.

What they can do is generate text that sounds reasonable and persuasive, especially to a reader who is not particularly well-versed in the material the LLM is generating text about.

2 Reasonable-Sounding Nonsense

In preparation for this article, I asked the Bing Chatbot, which is powered by OpenAI's GPT-4, from the same family of models that powers ChatGPT, "Who is Ryan McGreal?" From Bing's response, I learned that I have written for the New York Times, authored three books, and hosted podcasts about technology and society.

I asked more about the NY Times article, and Bing told me the article is titled, "How to build a better city" and was published as part of a series called "The Future of Cities".

I asked about the three books I authored. Bing told me the first book was a collection of essays called "Urbanicity: The Book", published in 2010, and that I contributed several articles to the book.

The second book was called "Code: Debugging the Gender Gap", published in 2016, a companion book to the 2015 documentary of the same name. In that book, I wrote a chapter sharing my personal and professional experiences as a programmer and advocate for women in tech.

The third book was called "The Future of Cities", published in 2019, and was compiled from the NY Times series to which I had contributed an essay.

The responses all came with citations and links to sources for the fact claims. And the responses themselves all sound entirely reasonable.

They are also entirely made up.

I have contributed an essay to a published collection, but the book was called "Reclaiming Hamilton" and it was published in 2020. I've never written for the New York Times, and as far as I can determine, the three books to which Bing claims I contributed chapters don't even exist.

But its responses were so reasonable-sounding that I actually had to do an independent search to see whether I had perhaps forgotten having written these things.

3 Understanding BS

In 1986, American philosopher Harry G. Frankfurt wrote an important essay titled "On Bullshit", in which he presented a theory of BS that has become ever more relevant in the Internet age, and especially the social media age.

Frankfurt draws a sharp distinction between lies and BS. With a lie, the person making the claim is specifically trying to convince you that their false claim is true. The liar cares about what is true, because they want you to believe something that is specifically not true.

With BS, by contrast, the person making the claim is not trying to contradict the truth. Rather, they are entirely indifferent as to whether what they are saying is true or not. As Frankfurt puts it:

When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose.

The absolute indifference to what is true is what distinguishes BS from a lie. In that sense, as Frankfurt warns, bullshitters are actually more harmful to the truth than liars.

4 LLMs are BS Generators

LLMs are trained not to produce answers that meet some kind of factual threshold, but rather to produce answers that sound reasonable. As currently designed, they have absolutely no way to determine whether a generated response is true or not, or whether its conclusions logically follow from its propositions.

If one were inclined to anthropomorphize these models, one might say that they are indifferent to whether what they are producing is true or even makes logical sense.

However, we must be careful not to ascribe intent to LLMs. After all, they are not in any way conscious, let alone malicious. They are merely algorithms predicting a series of words in response to a prompt based on the patterns they identified in their training data.

Nevertheless, the practical effect of how they operate is that they function as generators of BS. As LLMs get embedded in more and more systems that interact with humans, and particularly as they get smaller and more portable, this property should make everyone who genuinely cares about the truth feel a little bit queasy.

5 Ripe for Abuse

LLMs provide bad-faith actors with an incredibly prolific tool to generate mountains of persuasive-sounding nonsense to flood the public discourse and erode the very concept of a shared understanding of reality.

Indeed, the role of sheer volume in attacking civil society through disinformation deserves its own essay. As fascist chaos agent Steve Bannon famously put it, "The real opposition is the media. And the way to deal with them is to flood the zone with shit."

The goal of flooding the zone is not so much to convince people to believe in a specific lie, though lies - and especially Big Lies - are part of it. Rather, the goal is to introduce so much confusion, exhaustion and cynicism about what is and isn't true - that is, to produce so much BS - that the broad civic and political consensus which underpins every movement for justice becomes impossible to sustain.

This tactic is perhaps most effectively used in the Russian "firehose of falsehood" propaganda model:

We characterize the contemporary Russian model for propaganda as "the firehose of falsehood" because of two of its distinctive features: high numbers of channels and messages and a shameless willingness to disseminate partial truths or outright fictions. In the words of one observer, "[N]ew Russian propaganda entertains, confuses and overwhelms the audience." Contemporary Russian propaganda has at least two other distinctive features. It is also rapid, continuous, and repetitive, and it lacks commitment to consistency.

Whatever other socially benign services they might provide, LLMs also make the task of flooding the zone vastly easier.

6 What’s Next

The companies and engineers who are building the current generation of LLMs have bet big on the scale of the training data - the first L in LLM. So far, increases in the size of the training set have, indeed, translated into more impressive performance. It’s an open question whether and for how long this trend can continue.

Evangelists like OpenAI CEO Sam Altman - whose company built the GPT models behind ChatGPT and the Bing chatbot - have stated their belief that scaling the training set far enough can lead to artificial general intelligence, or AGI, which AI researchers regard as the Holy Grail of the field. They may be right, or we may be approaching a local maximum at which further increases in data scale hit diminishing returns.

And there are other headwinds. Large-scale copyright holders have begun pushing back on AI companies consuming their content without permission or compensation. This is likely to lead to protracted litigation, and the best outcome for all concerned is probably some sort of licensing framework. But it is definitely a risk.

Another, potentially more serious problem is the rise of LLM-generated content itself. As a progressively larger share of the internet's total content is generated by LLMs, a progressively larger share of the training data for future generations of LLMs will be the output of previous generations. This is a problem because, when used as input data, the output of an LLM is like already-digested food.

Indeed, a fascinating research paper released this year found that feeding LLM-generated content into an LLM leads to “model collapse”. As the authors write:

We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in … LLMs.
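The paper's experiments are with real language models, but the underlying dynamic is easy to simulate with a much simpler model. The sketch below is my own toy analogy, not the authors' method: it repeatedly fits a normal distribution to data sampled from the previous generation's fit, the way a future LLM might be trained on a previous LLM's output. The sample size and generation count are arbitrary assumptions chosen to make the effect visible.

```python
import random
import statistics

# Toy analogy for model collapse: each "generation" is fitted only on
# content sampled from the generation before it.
SAMPLES_PER_GENERATION = 20
GENERATIONS = 100

mean, stdev = 0.0, 1.0  # generation 0 stands in for the original human-written data

for gen in range(1, GENERATIONS + 1):
    # "Publish" content by sampling from the current model...
    data = [random.gauss(mean, stdev) for _ in range(SAMPLES_PER_GENERATION)]
    # ...then fit the next generation's model to that synthetic content only.
    mean, stdev = statistics.fmean(data), statistics.pstdev(data)
    if gen % 20 == 0:
        print(f"generation {gen:3d}: estimated stdev = {stdev:.3f}")
```

Run it a few times and the estimated spread almost always shrinks as the generations pass: rare "tail" values get sampled less and less often, and eventually the fitted model stops producing them at all. It is a crude analogy, but it captures the already-digested-food problem - each generation can only learn what the previous generation happened to reproduce.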

Since there is currently no reliable way to test whether a given piece of content was written by a human or an LLM, this threatens to be an increasingly thorny challenge.

Of course, all of these challenges may eventually be surmountable. The ceiling on performance as a function of data size may be far off. The addition of logic and fact-checking modules may already be in the works. Licensing arrangements may solve the copyright problem.

But for the time being, today’s LLMs remain plagued by the BS problem - and calling the plausible nonsense they generate “hallucinations” does not make it any less troublesome for people who value increasing the amount of truth going out into the world.