Balloons and Ballots I: The Balloon Problem

Reading Time: 4 minutes

Why do people vote against their own interests?

The inspiration to write this article series came from, as many of my honest ideas do, a rant.

A friend had posted a series of messages on her WhatsApp story – frustrated, almost incredulous – about how difficult it is to understand why poor people continue to support a government that clearly does not serve their interests. Her argument was simple: at least the people on the other end of the spectrum – the wealthy, or those connected to power – might benefit from patronage, access, or proximity. But for everyone else, the support seemed irrational.

I found myself unable to dismiss her frustration. It is a question many people have asked, often in quiet moments: why do people make political choices that appear to work against them?

It is tempting to answer this question with moral judgement and simplicity; to say people are uninformed, indifferent, or even complicit. But the more one thinks about it, the less satisfying those explanations become. The reality is harder to ignore: information is not scarce. News, analysis, social commentary, and data circulate more widely today than ever before. People are exposed to more political information now than at any other time in history.

And yet, the outcomes often look the same.

This tension – between what people know and how they act – is what I have come to think of as the balloon problem. Not the other “balloon problem” you might be familiar with. This one was inspired by a simple observation of a balloon drifting in my compound.

It floated lightly, without a care in the world, carried by gusts of wind it could not perceive. It did not concern itself with the intensity of the sun above it or the sharpness of the surface below it. It simply drifted. The danger was present, but the balloon was unaware and paid it no mind – until something happened.

Modern democracies rest on a fragile assumption: that citizens can evaluate information rationally and make decisions aligned with their long-term interests. The expectation is that voters will weigh competing claims and arrive at choices that reflect both reason and self-interest.

But electoral outcomes across contexts suggest that this assumption does not always hold.

The work of Amos Tversky and Daniel Kahneman offers a useful lens for understanding why. Their research challenged the long-standing belief that human beings are consistently rational decision-makers. Through their studies, they showed that much of our thinking relies on heuristics – mental shortcuts that allow us to make quick judgements without expending much effort.

These shortcuts are not inherently flawed. In many situations, they are efficient and even necessary. But they are not designed for navigating complex, abstract systems like modern politics.

When faced with difficult questions about policy, governance, or long-term economic outcomes, the mind often substitutes them with easier ones. Instead of asking, “What will this decision mean in five years?”, we ask, “Do I trust this person? Do they feel familiar? Do they represent people like me?”

These are not analytical questions. They are intuitive ones.

And intuition, while powerful, is not always reliable.

This becomes especially important when we consider how people process risk.

Imagine being told that a political candidate’s proposed policy could lead to inflation, reduced access to public services, or institutional decline over time. These are serious outcomes, but they are also abstract. They do not produce immediate, tangible consequences. They require interpretation, projection, and a willingness to think beyond the present moment.

Now compare that to touching a hot stove. The lesson is immediate. The consequence is unmistakable. No analysis is required.

Human beings are far better at learning from the second kind of experience than from the first.

This is where the gap begins to emerge.

Political decision-making often depends on interpreting information about future consequences. But the human mind is wired to respond more strongly to experience than to abstraction.

So even when people are exposed to relevant information, it does not always translate into meaningful behavioural change.

To understand this more intuitively, return to the image of the balloon drifting through the air – unaware of the danger until something happens. And when it does, the lesson arrives too late.

Human behaviour, particularly in collective decision-making like voting, often mirrors this drift. People move through environments filled with signals – warnings, analyses, projections – but those signals do not always translate into action. Not because people are incapable of understanding them, but because the mind does not naturally process abstract risk in the same way it processes lived experience.

So the balloon keeps drifting.

But the question remains: If people are not necessarily ignorant, and if information is not necessarily lacking, then what exactly is going wrong?

That is where the real problem begins – and where the next part of this series will continue.

Note: AI-generated images appear in this post.
