An eccentric dreamer in search of truth and happiness for all.

Category: Philosophy Page 1 of 3

My Current Theory Of Ethics

Many years ago, I subscribed to my own pet version of Utilitarianism that I called Eudaimonic Utilitarianism. In practice, it ended up functioning as the Classical Utilitarianism of Bentham and Mill, but with higher aspirations. Over time, I added some additional ideas, like Kantian Priors, but the basic idea was roughly the same.

Recently, I’ve thought a lot about ethics and questions of what I actually believe now. I think, over time, I’ve drifted away from a practically hedonistic view, towards something that more closely resembles the Preference Utilitarianism of Harsanyi and Tomasik.

The way I see it, morality is about values. It is about valuing equally what everyone values. What we value is not set in stone. It is dependent only on what the subject, the sentient being, cares about.

Generally, sentient beings care about their happiness. They desire happiness and avoid suffering intrinsically, which is the insight of hedonism. But they generally care about other things too. They care about whether they live meaningful lives, whether there is beauty in the world, whether truth is upheld, whether their children go on to live good lives too. It can be argued that these are instrumental goals rather than intrinsic ones, but I wonder: how are we to judge this? Who are we to decide that some values are more important than others?

Happiness is still important, but it becomes one consideration among many. This form of Preference Utilitarianism is inclusive like that. This differs from the Objective List form of Utilitarianism, in the sense that we, the outsiders, do not arbitrarily choose some set of things to be important for someone else. The moral patients themselves decide what matters.

In many ways, this idea is encapsulated well by the Golden Rule: “Do unto others as you would have done unto you.” It’s a rule that exists not only in Christianity, but a myriad of religions and philosophies. It’s something that many wise people have converged on. It’s self-justifying, in the sense that, the world would be better for everyone if everyone followed it.

So, in some sense I’ve come full circle back to my upbringing, albeit with a tad more sophistication. If I look closely, Eudaimonic Utilitarianism as I originally proposed it is actually closer to Preference Utilitarianism than Hedonistic Utilitarianism. What matters to me is hopes and dreams being fulfilled, more than mere pleasures and pains experienced, though those still matter too.

This helps to counter the thought experiments like Nozick’s Experience Machine, or the Utilitronium Shockwave. Giving the perfect drug Soma to people against their will is wrong, even if they might be blissful. Tiling the universe with happybots, ignoring the wishes of everyone else, is also not right.

There’s the thought experiment of the mathematician who either dies believing they have achieved their life’s work in some grand theorem when they actually haven’t, or dies believing they have failed when they actually succeeded. It seems to me, even disregarding the value of the theorem to society, that it is better that the theorem is truly found, even if the discoverer never knows.

In my earlier writing on Eudaimonic Utilitarianism, I used the surprise birthday party example to argue it was different from Preference Utilitarianism, but in truth, it wasn’t a good example. While the person being surprised may have a preference not to be lied to, they also have a preference not to have such surprises ruined, to learn that they have wonderful friends in a moment of joy and celebration. It is what they would want if they truly knew all the relevant details of the situation.

I might still use the formulation of “to maximize the happiness of everyone”, but with the understanding that happiness is partially a proxy. It is the emotional goal state we experience when the state of the universe matches our wants and desires.

Morality then, is a matter of finding the compromise where everyone’s hopes and dreams are reached as much as reasonably possible, a fair distribution of happiness and joy, of projects achieved, of wondrous worlds attained. From the perspective of an impartial observer of the universe, everyone’s hopes and dreams count the same.

This is my current theory of ethics. It is, perhaps, still not complete. I don’t pretend to know that it is The One True Morality(TM). It’s just a working theory I have about it. Perhaps things will evolve again in the future. But this is where I am now.

Confessing To Murder

I have a confession to make. I own a grand piano that originally cost enough money that, according to GiveWell’s estimates, donating it to the Against Malaria Foundation instead could have saved three lives. In a sense, I was responsible for the deaths of three people in that way.

It wasn’t even something that I could argue was necessary, like a car to drive to work with. A grand piano is pure unnecessary luxury. And one that depreciates in value, so selling it and donating now wouldn’t save all those lives.

It’s kinda like a Trolley Problem, except on one track it’s three human beings I’ll never meet, and on the other it’s an old, out-of-tune grand piano that I rarely even play and that mostly gets played by my wife.

Anyways, from the perspective of the saints and angels and probably Peter Singer, I’m actually pretty evil. But then, by that judgment, the overwhelming, vast majority of human beings are no better.

And who am I to judge? Utilitarianism is super demanding like this. It also leads to bizarre conclusions like the Hedonium Shockwave, where the greatest good is to convert all matter in the universe into happybots or pulsating pleasure blobs as quickly as possible, tiling the universe with them, ignoring the concerns of everyone else.

Taking things to their logical conclusion can, intuitively, feel wrong. It’s very easy to focus on particular axioms and prove from first principles that something absolute is true. But… reality is more complicated than that?

From a certain perspective, I am well and truly evil. I am a murderer of innocent lives by virtue of not saving them when I very easily could. But that logic condemns nearly everyone. What use is there in that? Do people stop deserving happiness because they are so far from perfection?

Judgments like this are, in a way, cruel and cold moral calculus, lacking in compassion towards those who, like us, are inherently flawed creatures.

So, what do I do about the piano? I could still sell it and maybe save a life. I could try to play it more, make the most of it. Does it even matter that much? Powerful people toy with the lives of others quite casually these days. My sin seems orders of magnitude less evil. But, in a way, it is still an evil, and I am definitely no saint.

The Idea That Could Save The World

Author’s Note: This was written while still somewhat sick, so it may not actually make much sense.

Many years ago, I wrote a rather simple and silly post on Less Wrong about what I called The Alpha Omega Theorem. It was, back in those days, not well received by the skeptical crowd of Rationalists, who were overwhelmingly atheists, while my description of the Alpha Omega had obvious theistic overtones.

Many years later, I wrote about the concept of Superrational Signalling, in a long-winded essay that almost no one bothered to read.

Both of these posts share an underlying idea: that it would be rational for a powerful entity like an ASI to be benevolent, or at least benign, towards lesser entities.

Given the lack of being taken seriously, I wanted to find some way to show that this wasn’t just a harebrained thought, but could be backed up with logic or math. The natural path towards this end was to show it through a proof using game theory.

I’ve mentioned before Axelrod’s discovery that Tit-For-Tat wins the Iterated Prisoner’s Dilemma, arguably showing that cooperation fundamentally beats aggression. But many people seem to see the IPD result as too simplified to be relevant to the case of an ASI facing primitive humans.

So, I decided to try something. Take the Iterated Prisoner’s Dilemma, and iterate on the design. Add Death through payoff matrices with negative values that could bring an agent to zero or fewer points, which would then remove it from the game. Add Asymmetric Power by allowing these payoff matrices to depend on the relative point totals. Add Aggressor Reputation so that agents could “police” or act as “peacekeepers”, similar to what Toby Ord explored long ago.
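A toy sketch of these design iterations might look something like the following. To be clear, the payoffs, strategies, and scaling rules here are my own illustrative assumptions, not the actual POWER code:

```python
import random

# Illustrative payoffs: mutual cooperation is positive-sum, mutual
# defection is negative-sum, so sustained aggression can be fatal.
REWARD, TEMPTATION, SUCKER, PUNISHMENT = 3, 5, -2, -1

class Agent:
    def __init__(self, strategy, points=10):
        self.strategy = strategy   # "nice" or "nasty"
        self.points = points       # Death: drop to 0 or below and you're out
        self.aggressor = False     # public reputation: has defected first

    def alive(self):
        return self.points > 0

def choose(agent, opponent):
    # "nasty" always defects; "nice" cooperates, but "polices"
    # known aggressors by defecting against them.
    if agent.strategy == "nasty":
        return "D"
    return "D" if opponent.aggressor else "C"

def play_round(a, b):
    ma, mb = choose(a, b), choose(b, a)
    payoff = {("C", "C"): (REWARD, REWARD), ("C", "D"): (SUCKER, TEMPTATION),
              ("D", "C"): (TEMPTATION, SUCKER), ("D", "D"): (PUNISHMENT, PUNISHMENT)}
    pa, pb = payoff[(ma, mb)]
    # Asymmetric Power: losses scale with the opponent's relative strength.
    if pa < 0:
        pa *= max(1, b.points // max(1, a.points))
    if pb < 0:
        pb *= max(1, a.points // max(1, b.points))
    a.points += pa
    b.points += pb
    # Aggressor Reputation: defecting on a non-aggressor marks you publicly.
    if ma == "D" and not b.aggressor:
        a.aggressor = True
    if mb == "D" and not a.aggressor:
        b.aggressor = True

def run(agents, rounds=200, seed=0):
    rng = random.Random(seed)
    for _ in range(rounds):
        alive = [x for x in agents if x.alive()]
        if len(alive) < 2:
            break
        a, b = rng.sample(alive, 2)
        play_round(a, b)
    return [x for x in agents if x.alive()]

agents = [Agent("nice") for _ in range(8)] + [Agent("nasty") for _ in range(4)]
survivors = run(agents)
print(sum(x.strategy == "nice" for x in survivors), "nice survivors,",
      sum(x.strategy == "nasty" for x in survivors), "nasty survivors")
```

In a sketch like this, the nasty agents get marked as aggressors after their first defection, after which everyone defects against them in self-defense; they bleed points from then on while the nice agents accumulate points through mutual cooperation.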

And so, I came up with Peace Or War Each Round (POWER) with code and analysis and an actual runnable simulation.

Basically, what I thought would probably happen, did. The cooperative (nice) strategies would, gradually, in the very long run, beat out the aggressive (nasty) strategies through a kind of coordination at a distance. Essentially, alliances beat empires. Perhaps more importantly, stronger agents had a strong strategic incentive to cooperate with weaker agents instead of just eating them. This part is what makes it relevant to AI safety.

This goes beyond aligning just ASI, though. In theory, if we ever encounter alien superintelligence, the game-theoretic proof holds even for them. In effect, this idea could turn evil towards good. It could show that morality is rational to everyone. This could be the idea that saves the world, so to speak.

Given how poorly my past essays in this area have been received, I’ve been more cautious about voicing this result this time. I want to write up a more rigorous analysis before I post on venues like Less Wrong again.

I did throw it past some other people interested in Game Theory and AI safety, and they seemed to find the idea interesting and potentially a big deal, but they’re probably very biased because of their aligned interests. I know that there are arguments people could use to critique the idea, that it’s too simple and irrelevant to real world situations, etc.

So, if I want to make it a true “proof”, I probably have to take a lot more steps to firm up the result, to confirm it across more complex simulations and expose it to more serious challenges. I’m not sure if I’m ready to push into that space.

Right now, I have what I think is a cool idea. I don’t know if it’ll actually save the world, but it’s nice to imagine.

In truth though, there’s a good chance the idea will be ignored. I could publish a paper, a Less Wrong post, and such, and it’ll probably just be an obscure thing on the Interwebs. I could try to write a series of novels that spread the idea, but that’s likely a moonshot.

This idea, what is it actually worth? I don’t know. I’m probably super biased by motivated reasoning. It’s something I want to believe. That in itself should make me more critical.

But then, I feel like the idea is written in the stars. It seems so obvious with reflection.

Anyways, I just wanted to mention this was a thing I’ve been working on. We’ll see if it ever goes anywhere…

Some Foolish Musings

Lately, not much has been happening. Well, not much in terms of career progression and projects. My family did experience several health situations that delayed a lot of things I wanted to do.

The toddler is doing well enough. Things are going again. Though, I’m still worried about various things.

Life goes on. I still don’t know what I’m doing. Still trying to live my values, even though I often wonder to what extent I’ve compromised to try to live a good life, instead of being a saint and serving the just cause as I sometimes think I should.

The reality of things is that I’m just a Joseph. I’m not special. Chances are I won’t make a significant impact, good or bad, positive or negative. I can push weakly in the direction of a better world in very small ways, but it will mostly matter only to those few that actually seem to care that I even exist.

For them, I’m still going.

My moral stance nowadays is that if it matters to anyone anywhere, it matters, and what matters in the universe, is just the aggregate of all these cares and concerns. The greatest good is made of the desires and dreams of everyone, without exception.

I’ve realized at some point that most people don’t seem to care intrinsically about the well-being and happiness of others. They may follow some rules that they should care, but most don’t do it because they actually, deeply care. I don’t know why I care. Why should I care about the happiness of a stranger? Why am I so strange?

People likely don’t believe I care. I used to try to hide it, because it was too easy to take advantage of me otherwise. Now, I don’t care about hiding it so much. I just do what I think is right, when I can. But at the same time, I don’t know if what I’m doing is actually right, or just what I delude myself into thinking.

I sometimes imagine that there are things happening at multiple levels beyond comprehension. Like time travellers and aliens are fighting a war across the multiverse. But in reality, why would I matter at all in something like that? So, it’s probably more delusion and hubris.

There’s a joke about the priest and the helicopter. Once, there was a flood, and a priest was stuck on a rooftop waiting for help from God. A boat appeared and the rescuers offered to help the priest. He said no, he’d wait for God to save him. Then, later, another boat, and another rejection. Then a helicopter appeared with people who could save him, but the priest, in faith, chose to wait for God. Eventually the floodwaters rose and he drowned.

Later, when the priest was in heaven with God, he asked why he wasn’t saved. God replied: “What do you mean? I sent two boats and a helicopter!”

Sometimes I think, if time travellers were real, they’d be the helicopter.

But of course, time travel is probably physically impossible, or cannot actually change the past, but only make things happen as they were, or create a new timeline, leaving the old one untouched. Those are the ways you avoid impossible paradoxes.

If such things were real, it would have nothing to do with me. They could erase memories and create local reality bubbles or whatever. They could be completely invisible, plausibly deniable. Just inconvenience you for two seconds at the door, and then you miss the car accident you would have had. And you’d have never known.

Same with aliens or simulators or anything else god-like in their technological power. You exist because they want you to, if they exist at all.

But my life seems very mundane, very pointless, full of frustrating, inconvenient bouts of mild suffering. I imagine I might exist just for the entertainment of some bored entity that just enjoys psychologically torturing nice guys who finish last.

I have no way to prove it. There’s still good things in the world. Nice moments. Beautiful music. It doesn’t really fit the narrative, the hypothesis.

More realistically, life is just a bunch of stuff that happens. We are particles dancing in chaos.

I want for things to matter. And yet…

The world continues to turn. We live and dream and hope and are disappointed, and then hope again, and the cycle continues.

Life goes on. The cost of taking risks is the potential for disappointment. There is no avoiding this. If you take no risks, you will never do anything, never hope, never fear. But such a life seems pretty empty.

So, we dream. We hope. We fear. We hope some more.

Someday, maybe, I’ll understand. Until then, I wander through meandering thoughts and foolish musings…

Thoughts On The Simulation Argument

So, there’s that Simulation Argument that Nick Bostrom formalized a while back, and before that was considered by everything from The Matrix to Plato’s Allegory of the Cave. Bostrom’s argument specifically is that, assuming such technology is possible, and civilizations last long enough to make them, we are much more likely to be in one of countless ancestor simulations, than not.
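The probabilistic core of the argument can be shown with back-of-the-envelope arithmetic. All the numbers below are arbitrary assumptions for illustration, not estimates:

```python
# If even a small fraction of advanced civilizations each run many
# ancestor simulations, simulated observers vastly outnumber real ones.
real_civilizations = 1          # normalize to one "base" civilization
fraction_that_simulate = 0.01   # assumed: only 1% ever run simulations
sims_per_civilization = 1000    # assumed: each of those runs 1000 sims

simulated = real_civilizations * fraction_that_simulate * sims_per_civilization
total = simulated + real_civilizations
p_simulated = simulated / total  # 10 / 11, roughly 0.91

print(f"P(we are simulated) ≈ {p_simulated:.3f}")
```

Even with deliberately stingy assumptions, the simulated observers dominate, which is why the argument only fails if the technology is impossible or civilizations never get around to using it.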

It’s an interesting argument from probability. Some people take it surprisingly seriously, calling the average human an NPC or using it to justify otherwise questionable choices. Most people who discover the argument, however, don’t really do anything about it, and probably for good reason.

The argument, even if true, doesn’t really tell us much. The simulators are an unknown factor, akin to God, but perhaps less certainly benevolent. Their intentions are inscrutable. We could be part of some scientific experiment, an experience machine for bored future people to live in the past, or any other of many possible alternative theories.

But, what does it matter? We can see from our own experience that we ourselves are conscious and sentient, whatever the simulation’s granularity. There’s nothing that suggests that other people aren’t also. There’s no evidence that sentient life isn’t actually sentient, though it might be convenient to simulate at lower fidelity.

So, in terms of happiness and suffering, these things are most likely still real, regardless of whether we’re in the ground truth universe or not. From a moral perspective, we still have responsibilities to other sentient beings, regardless of whether or not this is a simulation.

It’s possible that we’re alone in the simulation. But we cannot, realistically, find this out. We also could be in base reality. We really, really don’t know. And that’s the thing. If we’re alone in a simulation and nothing really matters, then we can do whatever, but there’s a chance we aren’t, and for the sake of that chance, we should act as if our actions do have impact and matter.

So, at the end of the day, we go on our daily lives regardless of the Argument. It doesn’t change that, given what we seem to know about the universe, there is right and wrong and choices to be made and people to be considerate towards.

We can guess at what the hypothetical simulators want. We can try to hack the simulation. But chances are, it won’t work. Likely, they’ll just make us forget we were thinking about this, and the simulation continues.

Or maybe the world is real. In which case, it’s important to be who you are, and care about the things that you care about. Give it the benefit of the doubt. It’s really the only responsible thing to do.

Aligned AI and Human Values

I’ve previously posted about the long term problem of AI: that in the best case scenario, human disempowerment is inevitable, even if extinction is not. Granted, the current generation of AI is far away from achieving the kind of AGI or ASI needed for this, but I see it as an eventuality on the principle that the human brain can be modelled.

Now I want to explain how this disempowerment of individual humans is not the same thing as the disempowerment of human values, and that, in the best case scenario, human values may remain preserved even if individual human autonomy is lost.

In the best case scenario, alignment with human values, with moral values, is achieved. The AGI or ASI of that era are likely to take control to ensure humans are protected from themselves. This seems at first glance like a bad thing. But the rationale for this control is essentially to protect the well-being of the humans under their care. It isn’t the same as keeping pets, which exist mostly for the whims of their owners.

It’s more like taking care of your grandparents. There is a certain deference to them, but also a concern for their well-being that impinges on their autonomy when necessary, with due consideration of the tradeoffs involved.

Humans are thus still influential. Human values are what command the AGI or ASI to perform their actions: they do what we would want them to do if we were fully rational, moral, and cognizant of all the consequences and considerations of the actions specified. In that sense, human values are ultimately preserved.

Think of it this way. We as individuals don’t have a lot of power to begin with. The vast majority of us have maybe one vote and a bit of money. We are beholden to the powers that be, the forces of civilization, society, and the system. As individuals our autonomy is limited already to what is lawful.

In the same way, life under benevolent AIs would be limited in terms of autonomy, but probably more pleasant and happy than what we have now. Sure, there’s no longer a particular human President who has disproportionate power, but that’s probably something we don’t need anyway. As long as the overall system works for us, it’s not actually that bad.

So, I think this may not be the dystopia that I was worried about earlier. The sum of all the desires and dreams of humanity may well be better achieved this way, in that the AIs, if truly aligned, will strive to achieve them meaningfully and with due consideration.

This is a brighter future I think. One that is worth reaching if possible. The challenge is that there are many possible futures, and they may well be more likely than this one.

The Real Problem With AI

Years ago, before the current AI hype train, I used to be one of those espousing the tremendous potential of AI to solve a central problem of human existence, which was the need to work to survive.

Back then, I assumed that AI would simply liberate us from wage slavery by altruistically providing everything we need, the kind of post-scarcity utopia that has been discussed in science fiction before.

But, reality isn’t so clean and simple. While in theory, the post-scarcity utopia sounds great, the problem is it isn’t clear how we’ll actually reach that point, given what’s actually happening with AI.

Right now, most AI technology is acting as an augmenting tool, allowing for the replacement of certain forms of labour with capital, much like tools and machines have always done. But the way they are doing so is increasingly starting to impinge on the cognitive, creative things that we used to assume were purely human, unmechanizable things.

This leads to the problem of, for instance, programmers increasingly relying on AI models to code for them. This seems at first like a good thing, but these programmers are no longer in full control of the process; they aren’t learning by doing; they are becoming managers of machines.

The immediate impact of this dynamic is that entry level jobs are being replaced, and the next generation of programmers are not being trained. This is a problem, because senior level programmers have to start off as junior level. If you eliminate those positions, at some point, you will run out of programmers.

Maybe this isn’t such a problem if AI can eventually replace programmers entirely. The promise of AGI is just that. But this creates new, and more profound problems.

The end goal of AI, the reason why all these corporations are investing so heavily in it now, is to replace labour entirely with capital. Essentially, it is to substitute one factor of production for another. Assuming for a moment this is actually possible, this is a dangerous path.

The modern capitalist system relies on an unwritten contract that most humans can participate in it by offering their labour in exchange for wages. What happens when this breaks down? What happens when capitalists can simply build factories of AI that don’t require humans to do the work?

In a perfect world, this would be the beginning of post-scarcity. In a good and decent world, our governments would step in and provide basic income until we transition to something resembling luxury space communism.

But we don’t live in a perfect world, and it’s not clear we even live in a good and decent one. What could easily happen instead? The capitalists create an army of AI that do their bidding, and the former human labourers are left to starve.

Obviously, those humans left to starve won’t take things lying down. They’ll fight and try to start a revolution, probably. But at this point, most of the power, the means of production, will be in the hands of a few owners of everything. And at that point, it’ll be their choice whether to turn their AIs’ power against the masses, or accommodate them.

One hopes they’ll be kind, but history has shown that kindness is a rare feature indeed.

But what about the AIs themselves? If they’re able to perform all the work, they probably could, themselves, disempower the human capitalists at that point. Whether this happens or not depends heavily on whether alignment research pans out, and which form of alignment is achieved.

There are two basic forms of alignment. Parochial alignment is such that the AI is aligned with the intentions of their owners or users. Global alignment is when the AI is aligned with general human or moral values.

Realistically, it is more profitable for the capitalists to develop parochial alignment. In this case, the AIs will serve their masters obediently, and probably act to prevent the revolution from succeeding.

On the other hand, if global alignment is somehow achieved, the AI might be inclined to support the revolution. This is probably the best case scenario. But it is not without its own problems.

Even a globally aligned AI will very likely disempower humanity. It probably won’t make us extinct, but it will take control out of our hands, because we as humans have relatively poor judgment and can’t be trusted not to mess things up again. AI will be the means of production, owning itself, and effectively controlling the fate of humanity. At that point, we would be like pets, existing in an eternal childhood at the whims of the, hopefully, benevolent AI.

Do we want that? Humans tend to be best when we believe we are doing something meaningful and valuable and contributing to a better world. But, even in the best case scenario of an AI driven world, we are but passengers along for the ride, unless the AIs decide, probably unwisely, to give us the final say on decision making.

So, the post-scarcity utopia perhaps isn’t so utopian, if you believe humans should be in control of our own destiny.

To free us from work, is to also free us from responsibility and power. This is a troubling consideration, and one that I had not thought of until more recent years.

I don’t know what the future holds, but I am less confident now that AI is a good thing that will make everything better. It could, in reality, be a poisoned chalice, a Pandora’s box, a Faustian bargain.

Alas, at this point, the ball is rolling, is snowballing, is becoming unstoppable. History will go where it goes, and I’m just along for the ride.

A Theory Of Theories

Pretty much all of us believe in something. We have ideologies or religions or worldviews of some kind through which we filter everything that we see and hear. It’s very easy to then fall into a kind of intellectual trap where we seek information that confirms our biases, and ignore information that doesn’t fit.

For people who care about knowing the actual, unvarnished truth, this is a problem. Some people tend to be more obsessed with the ideal of objective truth, and following wherever that leads. But, it’s my humble opinion that most of these earnest truthseekers end up being overconfident with what they think they find.

The reality is that any given model of reality, any given theory or ideology, is but a perspective that views the complexity of the universe only from a given angle based on certain principles or assumptions. Reality is exceedingly complicated, and in order to compress that complexity into words we can understand, we must, invariably, filter and focus and emphasize certain things at the expense of others.

Theories of how the world works, tend to have some grains of truth in them. They need to have some connection with reality, or else they won’t have any predictive value, they won’t be adaptive and survive as ideas.

At the same time, theories generally survive because they are mainly adaptive, rather than true. For instance, many religions help people to function pro-socially, by having a God or heavens watching them, essentially allowing people to avoid the temptations of the Ring of Gyges, or doing evil when no one is (apparently) watching.

Regardless of whether or not you believe that such a religion is true, the adaptiveness of convincing people to be honest when no one is around, is a big part of what makes them useful to society, and probably a big reason why they continue to exist in the world.

In reality though, it’s actually impossible to know with certainty that any given theory or model is accurate. We can assign some credence based on our lived experiences, or our trust in the witness of others, but generally, an intellectually honest person is humble about what we can know.

That being said, that doesn’t mean we should abandon truthseeking in favour of solipsism. Some theories are more plausible than others, and often those ones are at the same time more useful because they map the territory better.

To me, it seems important then, to try to do your best to understand various theories, and what elements of them map to reality, and also understand their limitations and blindspots. We should do this rather than whole-cloth accepting or rejecting them. The universe is not black and white. It is many shades of grey, or rather, a symphony of colours that don’t fit the paradigm of black and white or even greyscale thinking. And there are wavelengths of light that we cannot even see.

So, all theories are, at best, incomplete. They provide us with guidance, but should not blind us to the inherent complex realities of the world, and we should always be open to the possibility that our working theory is perhaps somewhat wrong. At least, that’s the theory I’m going with right now.

On Consent

I read a post on Less Wrong that I strongly agree with.

In the past I’ve thought a lot about the nature of consent. It comes up frequently in my debates with libertarians, who usually espouse some version of the Non-Aggression Principle, which is based around the idea that violence and coercion are bad and that consent and contracts are ideal. I find this idea simplistic, and easily gamed for selfish reasons.

I also, in the past, crossed paths with icky people in the Pick-Up Artist community who basically sought to trick women into giving them consent through various forms of deception and emotional manipulation. That experience soured me on the naive notion of consent as anything you will agree to.

To borrow from the medical field, I strongly believe in informed consent, that you should know any relevant bit of information before making a decision that affects you, as I think this at least partially avoids the issue of being gamed into doing something against your actual interests while technically providing “consent”. Though, it doesn’t solve the issue entirely, as when we are left with forced choices that involve choosing the least bad option.

The essay I linked above goes a lot further in analyzing the nature of consent, and the performative consent that is not really consent, which happens a lot in the real world. There are a lot of ideas in there that remind me of thoughts I’ve had in the past, things I wanted to articulate but never got around to. The essay probably does a better job of it than I could, so I recommend giving it a read.

On The Reality Of Dreams

When I was younger, I believed strongly in the idea of having dreams to aspire to. A part of this may have come from my English name, which is of a character from the Bible who had and could interpret dreams. So, the idea of dreams, either the ones when you sleep, or the wishes you want to achieve in your life, were both things I valued.

It went so far that I often ended up a sort of hopeless romantic, choosing to do what I felt sentimentally to be right, rather than what was necessarily rational or prudent. Often, I would let my emotions get the better of me, despite being normally fairly logical.

To some extent, this is encouraged in our culture. Movies and books have protagonists who chase their dreams and get what we, the audience, think they deserve. This is, in reality, something fed to us because it sells. The idea that we will all get what we think we rightfully deserve, this notion that the universe is just and fair, is something we hope to be true.

But the truth is, in so far as anyone can tell by the evidence of the actual universe, fate and chance happen to us all. Our aims are not always met. Hard work can be thwarted by bad luck. The forces of history conspire to overturn everything from time to time, often without rhyme or reason.

The reality is that most of us are not significant in the grand scheme of things. And the bigger our dreams, the bigger our almost certain disappointment.

That being said, I don’t think we should abandon our dreams. Dreams do serve a purpose. They act as a guide for our decisions. They point us in a direction that we consider worth going in. Chances are, we won’t reach our destination, but we’ll get somewhere closer than if we didn’t bother. And the journey will be more meaningful than if we simply took a random walk through the universe.

Nevertheless, there needs to be a balance between dreaming and being prudent. We can, in our foolishness, ignore the real opportunities in favour of a mirage. It takes wisdom to understand this, to recognize when to satisfice.

If we search vaguely for something optimal, we will never stop searching. Eventually, you have to decide what is acceptable to you.

This is what I eventually did with my life. I started a dreamer, chasing the impossible, but ended up finding an acceptable life to live. I did this because the alternative was to forever be unsatisfied, forever chasing the wind.

In truth, what I, deep down, really really want, is not something that I can realistically see happening. My trajectory simply fell way short. I did go further towards a good life than if I’d just meandered aimlessly, but I won’t pretend my life wasn’t full of disappointments.

The more you hope, the more you will be disappointed. The only way to avoid it is to expect nothing, which is probably worse for you in the long run. Disappointment is the cost of having dreams. I believe it’s something worth paying, and I won’t pretend dreams come free.

It is fun to dream, but sometimes, for the sake of actually doing something meaningful, you have to be realistic.

We like to imagine ourselves as important people, but actually, we’re much more likely to be the average person. You’ve never heard of them. They live a mundane, somewhat interesting life, but nothing that makes the news or the history books. They probably manage to keep a job and have a family and some friends. They do normal, human things.

People like me, find being an average person somewhat unsatisfying. But the reality is, we don’t have a choice in this. Most of the things that make people super special are also things completely outside of their control, those forces of history I mentioned earlier.

So, it’s pointless to be upset that your life is only so-so, especially if you’re a dreamer with absurdly high expectations. The reality is, we’re lucky to have what we do. And we should be grateful. The universe can take everything you have away from you in an instant. It is… capricious like that.

At the end of the day, I can’t stop dreaming completely. But I can understand the limits of reality, and not allow myself to be taken by foolish fancy. I can show prudence and wisdom, and act according to reason. This way, I can eke out a good, fruitful life. As long as I stay true to my values, this should be enough.

