The Bias Of Being Wrong


One of my favourite Norm Macdonald bits is when he ponders, “Imagine if you woke up and you realised you were wrong about everything”:

“You just woke up, and you go, God damn, I’ve been wrong about every single thing I’ve ever believed. Huh.”

After a pause, Macdonald recommends, “Then it’s time to go down to the rope store—because it’s not going to get better.”

I like the bit because the scenario is absurd and yet connected to real anxieties.

Why couldn’t you be wrong about everything? I don’t mean this in the way philosophers raise annoying sceptical scenarios about whether an external world exists or whether the universe popped into existence five minutes ago. I mean, when it comes to our deep-rooted convictions—what most people have in mind when they use the word “belief”—getting things completely wrong seems like a live possibility. If you have strong beliefs, you must think those you disagree with are wrong. But if they can be totally wrong, why can’t you?

It’s alarming to contemplate this possibility. We organise much of our lives around our beliefs. We identify with them. We stake our reputations on them. It would be devastating to wake up and discover you had been wrong about everything. Going to the rope store might seem appropriate.

That’s probably one reason why most people don’t spend much time contemplating the possibility of radical error.

Anyway, lately I’ve been wondering whether I’m wrong about everything.

In domains where I have strong opinions, I spend a lot of time confidently arguing for those opinions and criticising views I think are mistaken. I spend much less time contemplating the possibility I’m deeply mistaken. Even when people tell me I’m wrong—a common occurrence, especially on Substack—I typically react to such criticisms by thinking, “How can I find a way to make this confused person see that their bad objections are mistaken?” rather than, “Are they right?”

This might be a general feature of human psychology, or maybe I’m just unusually dogmatic. Whichever is true, I want to spend more time seriously exploring the possibility I’m wrong about fundamental things.

One thing I’ve become especially concerned with recently is that my writings and the broader political and philosophical outlook they reflect fall prey to what might be called the “everyone is biased” bias.

The “everyone is biased” bias isn’t merely the conviction that everyone is biased. That conviction is true: everyone is biased when forming beliefs about the world, or at least about those features of the world that lie beyond immediate experience.

First, we tend to think and reason about reality in ways distorted by goals like emotion regulation, self-interest, and social signalling. When we’re driven to adopt a belief that would make us feel good or promote our interests, we become motivated reasoners, adjusting how we seek out and process information to reach congenial conclusions rather than correct ones. Only “such truth, as opposeth no man’s profit, nor pleasure, is to all men welcome,” wrote Hobbes. And when truth is unwelcome, we’re skilled at interpreting reality in ways that protect us from it.

Second, even when we can’t directly convince ourselves of falsehoods, we still communicate and self-censor in ways shaped by status competition, reputation management, and social pressures. As evolved social primates, we often care more about getting along and getting ahead than being honest or intellectually courageous. Over time, this skewed but socially adaptive communication shapes how and what we think.

Moreover, precisely because such behaviour is itself an unwelcome truth—because caring too much about what others think of us makes others think less of us—we tend to deny such tendencies, even to ourselves. To maintain a self-image as honest truth-tellers, we internalise those views and perspectives that are socially rewarded in our political or cultural tribes so that their expression feels sincere.

Third, reality is inherently difficult to understand, and our access to it is typically highly indirect. In forming beliefs about complex or distant matters, we rely on the partial, fallible, and occasionally deceptive information we acquire from others. And we’re forced to interpret that information through low-resolution mental models, categories, and explanatory narratives that simplify and distort reality; these interpretive systems, too, are primarily inherited from our culture and favoured subcultures.

As Jeffrey Friedman observes, these processes of interpretation and belief revision are highly path-dependent:

“The mix of truths and errors in which each of us believes forms an interconnected web that, as it grows in breadth and depth over a lifetime, comes to function increasingly like… a self-perpetuating worldview. The self-perpetuation stems from the fact that we continually screen candidates for entry into each of our webs of belief, and the primary screening criterion is whether the candidates seem plausible in light of what we already believe. Those candidates that do not seem plausible, or even legible, are rejected or ignored.”

Finally, we tend to be oblivious to the partiality and fallibility of our beliefs. That is, we’re not just profoundly and unavoidably biased in our understanding of reality; we’re typically ignorant of the deep and ineliminable nature of this bias. We’re instinctive “naive realists”, treating our convictions as direct reflections of self-evident facts rather than partial and fallible interpretations of shadows on a cave wall.

Anyone who reads this blog or my research will be familiar with these observations, which I think are widely under-appreciated. Because most people—including otherwise sophisticated pundits and academics—are naive realists, I spend (too) much time explaining why naive realism is mistaken.

And it’s not just mistaken; it’s harmful.

At the individual level, naive realism breeds intellectual complacency and arrogance.

At the collective level, it drives polarisation, animosity, and a sense of mutual alienation, hostility, and terror as the self-evident truths affirmed by different political tribes come apart. If reality is transparent—if the truth is manifest—why don’t others see the same reality you do? They must be lying, crazy, or stupid. There must be sinister forces conspiring to obscure or hide the facts.

For this reason, naive realism doesn’t just blind us to our own biases. It leads us to exaggerate the biases of our rivals and enemies. Worse, by driving us to explain other people’s mistakes in terms of malevolence, it conjures rivals and enemies out of mere disagreement.

Given this, it’s important to point out that naive realism is itself a mistake.

Everyone is biased.


The deep and unavoidable roots of political bias

Nevertheless, it’s possible to focus excessively on the universality of bias in ways that are also intellectually and socially harmful. This is to succumb to the “everyone is biased” bias.

I’m worried that some of my writings and broader intellectual outlook are often biased in just this way.

For example, in recent years, I’ve been highly critical of elite discourse surrounding topics like “misinformation”, “disinformation”, “fake news”, and “post-truth”. As is well-documented, such buzzwords became highly prominent in 2016 after the Brexit referendum result and Donald Trump’s first presidential election. Surprised and disgusted by these events and the growing popularity of similar right-wing populist movements worldwide, many highly educated professionals—the sorts of people right-wing populists malign as “liberal elites”—developed a new piece of conventional wisdom: that these movements are distinguished by their uniquely hostile attitude towards truth, facts, and honesty.

Whereas political disagreement once occurred against a backdrop of shared facts, right-wing populists embrace “alternative facts”. Whereas politicians and pundits once paid respect to truth, right-wing populists are “post-truth”. Whereas the public once listened to experts, right-wing populists have “had enough of experts”.

One reason I’ve been critical of such narratives is their association with prominent ideas and research that I find philosophically or scientifically objectionable. However, I’ve also objected because such narratives are typically highly biased.

First, they often obscure the possibility that one reason for anti-establishment backlash is a long history of establishment failures, including shocking epistemological catastrophes. In the UK and US, for example, these include the Iraq War, the financial crisis, and many aspects of the COVID-19 pandemic response. And these are just the most striking examples amidst a wide range of persistent failures by elites and establishment institutions over decades.

Moreover, much of the discourse surrounding “misinformation” and “post-truth” exhibits a naively realistic attitude to the orthodoxies that prevail among highly educated liberals and progressives. Not only is it absurd to imagine a golden age of objectivity or “truth era” preceding 2016, but there is plenty of “misinformation”—falsehoods, lies, propaganda, cherry-picking, omission of context, alarmism, biased and misleading narratives, and so on—among those who sanctimoniously pontificate about the dangers of misinformation.

This is true at the level of popular belief systems and narratives. However, it’s also true at the level of the most prestigious liberal institutions throughout Western democracies. Elite outlets like the New York Times and the BBC don’t merely report on “the facts”. They frequently select, omit, frame, package, and contextualise facts in ways skewed towards supporting specific narratives. And at the same time as liberal discourse has exploded with concerns about widespread anti-science attitudes (“science denial”), we’ve become aware of just how much modern science is unreliable, ungeneralisable, overhyped, and frequently outright fraudulent.

I stand by the literal content of everything I’ve just written. And yet, I have to admit that after witnessing the behaviour of Elon Musk and the broader right-wing media ecosystem in the US over the past year or so, I’m worried that this kind of analysis misses the forest for the trees.

Yes, everyone—including highly educated professionals who “believe in science” and “trust experts” and read broadsheet newspapers and go to farmers markets and so on—is biased, but “bias” doesn’t begin to describe somebody like Elon Musk.

Almost everything that Musk communicates or amplifies on X—and he posts and reposts a staggering quantity of content to a vast audience—is an outright lie, half-truth, or flagrant propaganda. And, of course, the only reason he can do this is that he inhabits a right-wing ecosystem in America which has wholly abandoned even the pretence of caring about truth, reality, or objectivity.

This ecosystem is increasingly dangerous, including in the way it is opening up intellectual space for repugnant anti-semitic conspiracy theorising. And given the extent to which America influences other societies and the American right is explicitly allied with right-wing populist parties elsewhere, there’s a significant risk that this radical epistemic dysfunction will have a broader global significance in the years to come.

I know many people will point out that this is simply a continuation of Donald Trump’s behaviour. Maybe they’re right. Seeing where we are now, I have to admit that many of those who characterised Trump in what struck me as extremely alarmist ways had better judgement than me. However, I’ve always viewed Trump as a run-of-the-mill ignoramus and bullshitter whose absurd exaggerations and fabrications are so obvious that they’re priced in by audiences who care more about authenticity and triggering libs than factual correctness.

Musk seems qualitatively different. Contrary to progressive wishful thinking, he’s clearly highly intelligent, and he can easily acquire accurate information about anything he wants, including from his own large language model, Grok, which will readily inform users that Musk is the most prolific source of misinformation on his own platform. And Musk … simply doesn’t care.

When confronted with this behaviour, the observation that everyone is biased obscures more than it illuminates.

More generally, an excessive focus on the universality of bias encourages at least two kinds of errors:

  1. The problem of flattening
  2. The problem of intellectual defeatism

I’m working on a book called “Why it’s OK to be cynical” (subtitle: “but not too cynical”). The book argues for a tragic or “constrained” view of human nature that emphasises the ineradicable roles of self-interest, social competition, hypocrisy, self-deception, and sharp moral limitations in human psychology.

Of course, I don’t deny that typical humans have genuine prosocial instincts and an intuitive sense of fairness or that we form close and genuine bonds with romantic partners, family members, and friends. (Hence the book’s subtitle). Nevertheless, the book argues that cooperation outside such close-knit relationships is exceptionally challenging. It requires significant social and institutional scaffolding that punishes (and hence disincentivises) self-interest or channels social competition in beneficial directions.

I’m writing the book because I think a moderately cynical view of human nature is well-supported by a wide range of evidence and that the alternative view—a heart-warming and often moralised picture of humanity that’s especially appealing to progressives—is not just mistaken but harmful.


The Survival of the Friendliest?

Nevertheless, cynicism gives rise to a problem of moral flattening. Even if everyone is highly limited in their propensities towards altruism, some are more limited than others. And even if cynicism is correct at one level of abstraction, sweeping generalisations about human nature inevitably obscure such individual differences. This is a problem from the perspective of accuracy. However, it’s also a problem because tracking individual differences in motivations is necessary for rewarding and, hence, encouraging good behaviours and detecting and punishing bad ones.

Similar lessons apply to the conviction that everyone is biased, which involves a kind of epistemic cynicism.

Yes, everyone is biased, but some are much more biased than others. There are significant individual differences in how intellectually virtuous people are, and some people—Musk included—are outright sociopathic in their attitude towards truth and honesty.

Moreover, profound differences exist in the social norms and procedures enforced in different communities and institutions. For all the genuine flaws of elite knowledge-generating institutions like science, academia, legacy media, and so on, they implement norms designed to encourage conformity to Enlightenment values and responsible intellectual conduct (fact-checking, peer review, sourcing claims, issuing formal corrections, etc.), even if they often fall short of this goal.

Focusing excessively on the universality of bias obscures these individual and social-institutional differences. Moreover, this flattening can have deleterious consequences. As with all cooperation, epistemic cooperation depends on identifying and rewarding good intellectual conduct—honesty, reasonableness, thoughtfulness, and so on—and detecting and discouraging their opposites. This requires sensitivity to individual differences and the ability to distinguish run-of-the-mill human bias from the intellectual equivalents of Ted Bundy.

An excessive focus on the universality and unavoidability of bias can also undermine one’s own motivation to participate fully in political discourse and debate.

This can happen for two reasons.

First, you can become so preoccupied with identifying other people’s biases—with pointing out how others’ perspectives on reality are selective, partial, and distorted by grubby motives—that you avoid taking a stance on reality yourself. This buys a feeling of intellectual superiority at the expense of achieving anything of substantial intellectual value.

Second, just as naive realism drives intellectual complacency, overconfidence, and arrogance, the rejection of naive realism can easily give rise to the opposite vices of intellectual cowardice and defeatism. If you’re too paranoid about the inevitability of bias—if you’re too focused on the unavoidable subjectivity and partiality of political judgement—you can become excessively intellectually risk averse.

The easiest way to avoid bias is simply to avoid taking any stand at all.

As friend of the blog and consistently excellent commenter Chris Schuck puts it in response to one of my articles:

“It is important to sometimes be able to believe things strongly, and have faith in the grounds for those particular beliefs. Otherwise, we are screwed. Not having enough confidence in your own sound judgment and available evidence, failure to make your best shot at truth and stand behind this, can be just as pernicious as having too much faith in your flawed judgment or faulty evidence. This is the flipside of ignorance as overreach, self-deception and arrogance: ignorance as weakness and misplaced humility.”

Looking at how most people approach politics, intellectual arrogance seems a much more widespread problem than misplaced humility. Nevertheless, a rare vice is still a vice—and one worth taking seriously.

For example, I’ve spent much more time scrutinising biases in prominent discourse about “misinformation” than directly condemning misinformation itself. Although I’d like to believe this scrutiny has had value, in the current political climate—and perhaps in all political climates—this disproportionate focus itself reveals a problematic bias.


