

Be Fruitful And Multiply

I recently had a baby. There’s some debate in philosophical circles about whether or not it is right to have children, so I thought I should briefly outline why I chose this path.

When I was a child, I think it was an unwritten assumption within my traditional Chinese Christian family that I would have kids. In undergrad however, I encountered David Benatar’s Better Never To Have Been, which exposed me to anti-natalist views for the first time. These often argued that hypothetical suffering was somehow worse or more real than hypothetical happiness. I didn’t really agree, but I admitted the arguments were interesting.

After that, I became a Utilitarian in my moral philosophy, and was exposed to the idea that adding a life worth living to the universe is a good thing.

Environmentalists and degrowthers often argue that there are too many people in the world already, that adding yet another person given the limited resources is unsustainable and dooming us to a future Malthusian nightmare. I admit that there are a lot of people in the world already, but I’m skeptical that we can’t find a way to use resources more efficiently, or develop technology to solve this the way we have in the past with hybrid rice and the Green Revolution.

Though, to be honest, my actual reasons for having a child are more mundane. My wife wanted to have the experience and have someone who she can talk to when she’s old (the actuarial mortality table suggests I’ll probably die before her after all). I ultimately let my wife decide whether or not we have kids, as she’s the one who had to endure the pregnancy.

Personally, I was split about 60/40 on having a child. My strongest argument for it was actually a simple, almost Kantian one. If everyone has children, the human race will continue into a glorious future among the stars. If no one has children, the human race will die out, along with all of its potential. Thus, in general, it is better to have at least one child to contribute to the future potential of humankind.

At the same time, I was worried, given the possibility of things like AI Doom that I could be bringing a life into a world of future misery and discontent, and I also knew that parenthood could be exceedingly stressful for both of us, putting an end to our idyllic lifestyle. Ultimately, these concerns weren’t enough to stop us though.

My hope is that this person my wife and I created will live a happy and good life, and that I can perhaps teach some of my values to them, so that those values live on beyond my mortality. But these things are ultimately out of my hands in the long run, so they aren’t definitive reasons to go ahead, so much as wishes for my child.

In Pursuit of Practical Ethics: Eudaimonic Utilitarianism with Kantian Priors

(2024/01/08): Posted to the EA Forums.

Disclaimer: I am not a professional moral philosopher. I have only taken a number of philosophy courses in undergrad, read a bunch of books, and thought a lot about these questions. My ideas are just ideas, and I don’t claim at all to have discovered any version of the One True Morality. I also assume moral realism and moral universalism for reasons that should become obvious in the next sections.

Introduction

While in recent years the CEA and the leadership of EA have emphasized that they do not endorse any particular moral philosophy over any other, the reality is that, as per the last EA Survey that checked, a large majority of EAs lean towards Utilitarianism as their guiding morality.

Between that and the recent concerns about issues with the “naïve” Utilitarianism of SBF, I thought it might be worthwhile to offer some of my philosophical hacks or modifications to Utilitarianism that I think enable it to be more intuitive, practical, and less prone to many of the apparent problems people seem to have with the classical implementation.

This consists of two primary modifications: setting utility to be Eudaimonia, and using Kantian priors. Note that these modifications are essentially independent of each other, and so you can incorporate one or the other separately rather than taking them together.

Eudaimonic Utilitarianism

The notion of Eudaimonia is an old one that stems from the Greek philosophical tradition. In particular, it was popularized by Aristotle, who formulated it as a kind of “human flourishing” (though I think it applies to animals as well) and associated it with happiness and the ultimate good (the “summum bonum”). It’s also commonly thought of as objective well-being.

Compared with subjective happiness, Eudaimonia attempts to capture a more objective state of existence. I tend to think of it as the happiness you would feel about yourself if you had perfect information and knew what was actually going on in the world. It is similar to the concept of Coherent Extrapolated Volition that Eliezer Yudkowsky used to espouse a lot. The state of Eudaimonia is like reaching your full potential as a sentient being with agency, rather than a passive emotional experience like with happiness.

So, why Eudaimonia? The logic of using Eudaimonia rather than mere happiness as the utility to be optimized is that it connects more directly with the authentic truth, which can be desirable to avoid the following intuitively problematic scenarios:

  • The Experience Machine – Plugging into a machine that causes you to experience the life you most desire, but it’s all a simulation and you aren’t actually doing anything meaningful.
  • Wireheading – Continuous direct electrical stimulation of the pleasure centres of the brain.
  • The Utilitronium Shockwave – Converting all matter in the universe into densely packed computational matter that simulates many, many sentient beings in unimaginable bliss.

Essentially, these are all scenarios where happiness is seemingly maximized, but at the expense of something else we also value, like truth or agency. Eudaimonia, by capturing these more complex values alongside happiness to an extent, allows us to escape these intellectual traps.

I’ll further elaborate with an example. Imagine a mathematician who is brilliant, but also gets by far the most enjoyment out of life from counting blades of grass. But, by doing so, they are objectively wasting their potential as a mathematician to discover interesting things. A hedonistic or preference Utilitarian view would likely argue that their happiness from counting the blades of grass is what matters. A Eudaimonic Utilitarian on the other hand would see this as a waste of potential compared to the flourishing life that they could otherwise have lived.

Another example, again with our mathematician friend, is where there are two scenarios:

  • They discover a great mathematical theorem, but never realize that they have done so, such that it is only discovered by others after their death. They die sad; in effect, a beautiful tragedy.
  • They believe they have discovered a great mathematical theorem, but in reality it is false, and they never learn the truth of the matter. They die happy, but in a state of delusion.

Again, classical Utilitarianism would generally prefer the latter, while Eudaimonic Utilitarianism prefers the former.

Yet another example might be the case of Secret Adultery. A naïve classical Utilitarian might argue that committing adultery in secret, assuming it can never be found out, adds more hedons to the world than doing nothing, and so is good. A Eudaimonic Utilitarian argues that what you don’t know can still hurt your Eudaimonia, that if the partner had perfect information and knew about the adultery, they would feel greatly betrayed and so objectively, Eudaimonic utility is not maximized.

A final example is that of the Surprise Birthday Lie. Given that Eudaimonic Utilitarianism places such a high value on truth, you might assume that it would be against lying to protect the surprise of a surprise birthday party. However, if the target of the surprise knew that people were lying so as to bring about a wonderful surprise for them, they would likely consent to these lies and prefer them to discovering the secret too soon and ruining the surprise. Thus, in this case, Eudaimonic Utilitarianism implies that certain white lies can still be good.

Kantian Priors

This talk of truth and lies brings me to my other modification, Kantian Priors. Kant himself argued that truth telling was always right and lying was always wrong, so you might think that Kantianism would be completely incompatible with Utilitarianism. But even if you think Kant was wrong overall about morality, he did contribute some useful ideas that we can utilize. In particular, the categorical imperative, in its formulation of doing only what can be universalized, is an interesting way to establish priors.

By priors, I refer to the Bayesian probabilistic notion of prior beliefs, based on our previous experience and understanding of the world. When we make decisions with new information in a Bayesian framework, we update our priors with the new evidence to create our posterior beliefs, which we then use to make the final decision.

Kant argued that we don’t know the consequences of our actions, so we should not bother to figure them out. This was admittedly rather simplistic, but the reality is that frequently there is grave uncertainty about the actual consequences of our actions, and predictions made even with the best knowledge are often wrong. In that sense, it is useful to try to adopt a Bayesian methodology to our moral practice, to help us deal with this practical uncertainty.

Thus, we establish priors for our moral policies, essentially default positions that we start from whenever we try to make moral decisions. For instance, in general, lying if universalized would lead to a total breakdown in trust and is thus contradictory. This implies a strong prior towards truth telling in most circumstances.

This truth-telling prior is not an absolute rule. If there is strong enough evidence to suggest that truth-telling is not the best course of action, Kantian Priors allow us the flexibility to override the default. For instance, if we know the Nazis at our door asking if we are hiding Jews in our basement are up to no good, we can safely decide that lying to them is a justified exception.
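
To make the mechanics concrete, here is a toy sketch in Python of how a strong prior gets updated by evidence. The 0.95 prior and the likelihood values are purely illustrative assumptions on my part, not derived quantities:

```python
# A toy sketch of a Kantian prior under Bayesian updating.
# All numbers here are illustrative assumptions, not derived values.

def posterior(prior: float, p_evidence_if_true: float,
              p_evidence_if_false: float) -> float:
    """Bayes' rule for a binary hypothesis H given evidence E:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Strong universalizability-based prior that truth-telling is best.
prior_truth_is_best = 0.95

# Ordinary case: weak, ambiguous evidence barely moves the prior.
print(posterior(prior_truth_is_best, 0.5, 0.6))    # ~0.94: tell the truth

# Nazis-at-the-door case: the evidence strongly favours an exception.
print(posterior(prior_truth_is_best, 0.01, 0.99))  # ~0.16: lying is justified
```

The point of the sketch is only that a well-grounded prior dominates in everyday cases, while sufficiently strong evidence can still flip the decision.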

Note that we do not have to necessarily base our priors on Kantian reasoning. We could also potentially choose some other roughly deontological system. Christian Priors, or Virtue Ethical Character Priors, are also possible if you are more partial to those systems of thought. The point is to have principled default positions as our baseline. I use Kantian Priors because I find the universalizability criterion to be an especially logical and consistent method for constructing sensible priors.

An interesting benefit of having priors in our morality is that it gives us some of the advantages of deontology without the usual tradeoffs. Many people tend to trust those who have more deontological moralities because they are very reliably consistent with their rules and behaviour. Someone who never lies is quite trustworthy, while someone who frequently lies because they think the ends justify the means is not. Someone with deontic priors, on the other hand, isn’t so rigid as to be blind to changing circumstances, but also isn’t so slippery that you have to worry they’re trying to manipulate you into doing what they think is good.

This idea of priors is similar to Two-Level Utilitarianism, but formulated differently. In Two-Level Utilitarianism, most of the time you follow rules, and sometimes, when the rules conflict or when there’s a peculiar situation that suggests you shouldn’t follow the rules, you calculate the actual consequences. With priors, the question is whether you have received strong enough evidence to shift your posterior beliefs and move you to temporarily break from your normal policies.

Conclusion

Classical Utilitarianism is a good system that captures a lot of valuable moral insights, but its practice by naïve Utilitarians can leave something to be desired, due to perplexing edge cases, and a tendency to be able to justify just about anything with it. I offer two possible practical modifications that I hope allow for an integration of some of the insights of deontology and virtue ethics, and create a form of Utilitarianism that is more robust to the complexities of the real world.

I thus offer these ideas with the particular hope that such things as Kantian Priors can act as guardrails for your Utilitarianism against the temptations that appear to have been the ultimate downfall of people like SBF (assuming he was a Utilitarian in good faith of course).

Ultimately, it is up to you how you end up building your worldview and your moral centre. The challenge to behave morally in a world like ours is not easy, given vast uncertainties about what is right and the perverse incentives working against us. Nevertheless, I think it’s commendable to want to do the right thing and be moral, and so I have suggested ways in which one might be able to pursue such practical ethics.

On The Morality Of Work

If you accept the idea that there is no ethical consumption or production under capitalism, a serious question arises: Should you work?

What does it mean to work? Generally, the average person is a wage earner. They sell their labour to an employer in order to afford food to survive. To work thus means to engage with the system, to be a part of society and contribute something that someone somewhere wants done in exchange for the means of survival.

Implicit in this is the reality that there is a fundamental, basic cost to living. Someone, somewhere, is farming the food that you eat, and in a very roundabout way, you are, by participating in the economy, returning the favour. This is setting aside the whole issue of capitalism’s merits. At the end of the day, the economy is a system that feeds and clothes and shelters, however imperfectly and unfairly. Even if it is not necessarily the most just and perfect system, it nevertheless does provide for most people the amenities that allow a good life.

Thus, in an abstract sense, work is fair. It is fair that the time spent by people to provide food and clothing and shelter is paid back by your spending your time to earn a living, regardless of whatever form that takes. On a basic level, it’s at least minimally fair that you exchange your time and energy for other people’s time and energy. Capitalism may not be fair, but the basic idea of social production is right.

So, if you are able to, please work. Work because in an ideal society, work is your contribution to some common good. It is you adding to the overall utility by doing something that seems needed by someone enough that they’ll pay you for it. Even if in practice, the reality of the system is less than ideal, the fact is that on a basic level, work needs to be done by someone somewhere for people to live.

While you work, try to do so as morally as possible, by choosing insofar as it is possible the professions that are productive and useful to society, and by making decisions that reflect your values rather than those of the bottom line. If you must participate in capitalism to survive, then at least try to be humane about it.

In Defence of Defiance Against The World’s Ills

“If you want to be perfect, go, sell your possessions and give to the poor, and you will have treasure in heaven. Then come, follow me.” – Jesus

In 1972, the famous Utilitarian moral philosopher Peter Singer published an essay titled “Famine, Affluence, and Morality”, which argued that we have a moral duty to help those in poverty far across the world. In doing so, he echoed a sentiment that Jesus shared almost two millennia prior, yet one which most people who call themselves Christians today seem relatively unconcerned with.

From a deeply moral perspective, we live in a world that is fundamentally flawed and unjust. The painful truth is that the vast majority of humans on this Earth live according to a kind of survivorship bias, where the systems and beliefs that perpetuate themselves are not the ones that are right, but the ones that enable their adherents to survive long enough to procreate and instill them in a next generation where things continue as they are.

For most people, life is hard enough that questioning whether the way things are is right is something of a privilege that they cannot afford. For others, this questioning requires a kind of soul searching that they shy away from because it would make them uncomfortable to even consider. It’s natural to imagine yourself the hero in your own story. To question this assumption is not easy.

But the reality is that almost all of us are in some sense complicit in the most senseless of crimes against humanity. When we participate in an economy to ensure we have food to eat, we are tacitly choosing to give permission to a system of relations that is fundamentally indifferent to the suffering of many. We compete with fellow human beings for jobs and benefit from their misery when we take one of only a limited number of spots in the workforce. We choose to allow those with disproportionate power to decide who gets to live a happier life. And those in power act to further increase their share of power, because to do anything else would lead to being outcompeted and their organization rendered extinct by the perverse incentives that dominate the system.

Given all this, what can one even begin to do about it? Most of us are not born into a position where we have the power to change the world. Our options are limited. To be moral, we would need to defy the very nature of existence. What can we do? If we sell everything we have and give to the poor, that still won’t change the nature of the world, even if it’s the most we could conceivably do.

What does it mean to defy destiny? What does it look like to try to achieve something that seems impossible?

What exists in opposition to this evil? What is good? What is right? What does it look like to live a pure and just life in a world filled with indifference and malice? What does it mean to take responsibility for one’s actions and the consequences of those actions?

Ultimately, it is not in our power to single-handedly change the world, but there are steps we can take to give voice to our values, to live according to what we believe to be right. This means making small choices about how we behave towards others. It means showing kindness and consideration in a world that demands cutthroat competition. It means taking actions that bring light into the world.

Even if we, by ourselves, cannot bring revolution, we can at least act according to the ideals we espouse. This can be as small as donating a modest amount to a charity in a far off land that corrects a small amount of injustice by giving the poorest among us a bednet that protects them from malaria. If approximately $6500 worth of such things can save a life, and minimum wage can earn you $32,000 a year, then if you modestly donate 10% of that to this charity, you can save about one life every two years. If you work for 40 years, you can save almost 20 lives this way. Those lives matter. They will be etched into eternity, like all lives worth living.
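
For transparency, here is the arithmetic behind those figures as a quick sketch (the $6500 cost-per-life and $32,000 income are the approximate values used above):

```python
# A quick check of the donation arithmetic above.
cost_per_life = 6500    # approximate cost to save a life with bednets
annual_income = 32000   # minimum wage estimate used above
donation_rate = 0.10    # a modest 10% pledge

annual_donation = annual_income * donation_rate           # $3,200 per year
years_per_life = cost_per_life / annual_donation          # ~2.0 years
lives_over_career = 40 * annual_donation / cost_per_life  # ~19.7 lives

print(years_per_life, lives_over_career)  # about one life every two years,
                                          # almost 20 lives over 40 years
```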

Admittedly, to do this requires participating in the system. You could also choose not to participate. But to do so would abandon your responsibilities for the sake of a kind of moral purity. In the end, you can do more good by living an ethical life, to lead by example and showing that there are ways of living where you strive to move beyond selfish competition, and seek to cooperate and build up the world.

This is the path of true defiance. It does not surrender one’s life to the evils of egoism, or abandon the world to the lost. Instead, it seeks to build something better through decisions that go against the grain, with the understanding that we are all living in mutual co-existence, and that our choices and decisions reflect who we are, our character as people.

We do not have to be perfect. It is enough to be good.

Practical Utilitarianism Cares About Relationships

Anyone reading my writings probably knows that I subscribe roughly to the moral theory of Utilitarianism. To me, we should be trying to maximize the happiness of everyone. Every sentient being should be considered important enough to be weighed in our moral calculus of right and wrong. In theory, this should mean we should place equal weight on every human being on this Earth. In practice however, there are considerations that need to be taken into account that complicate the picture.

Effective Altruism would argue that time and distance don’t matter, that you should help those who you can most effectively assist given limited resources. This usually leads to the recommendation of donating to charities in Africa for bednet or medication delivery as this is considered the most effective use of a given dollar of value. There is definitely merit to the argument that a dollar can go further in poverty-stricken Africa than elsewhere. However, I don’t think that’s the only consideration here.

Time and distance do matter to the extent that we as human beings have limited knowledge of things far away from us in time and space. With respect to donations to a distant country in dire need, there are reasonable uncertainties about the effectiveness of these donations, as many of the arguments in favour of them depend heavily on our trust of the analysis done by the charities working far away, that we cannot confirm or prove directly.

This uncertainty should function as a kind of discount rate on the value of the help we can give. A more nuanced and measured analysis thus suggests that we should donate some of our resources to those distant charities, but also devote some of our resources to those closer to home, whom we can directly see and assist and know that we are able to help. Our friends and family, with whom we have relationships that allow us to know their needs and wants and what will best help them, are obvious candidates for this kind of help.

Similarly, those in the distant future, while worth helping to an extent, should not completely absolve us of our responsibilities to those near to us in time, who we are much more certain we can directly help and affect in meaningful ways. The further away a possible being is in time, the more uncertain is their existence, after all.

This also means that we ourselves should value our own happiness and, being the best positioned to know how we ourselves can be happy, should take responsibility for our own happiness.

Thus, in practice, Utilitarianism, carefully considered, does not eliminate our social responsibilities to those around us, but rather reinforces these ties, as being important to understanding how best to make those around us happy.

Equal concern does not mean, in practice, equal duty. It means instead that we should expand our circle of concern to the entire universe, and that there is a balance of considerations that create responsibilities for us, magnified by our practical ability to know and help.

Those distant from us are still important. We should do what we reasonably can to help them. But those close to us put us in a position where we are uniquely responsible for what we know to be true.

In the end, it’s ultimately up to you to decide what matters to you, but may I suggest that you be open to helping both those close and far from you, whose needs you are aware of to varying degrees, and who deserve to be happy just like you.

The Darkness And The Light

Sometimes you’re not feeling well. Sometimes the world seems dark. The way the world is seems wrong somehow. This is normal. It is a fundamental flaw in the universe that it is impossible to always be satisfied with the reality we live in. It comes from the fact of multiple subjects experiencing a shared reality.

If you were truly alone in the universe, it could be catered to your every whim. But as soon as there are two, it immediately becomes possible for goals and desires to misalign. This is a structural problem. If you don’t want to be alone, you must accept that other beings have values that can potentially differ from yours, and that they can act in ways contrary to your expectations.

The solution is, put simply, to find the common thread that allows us to cooperate rather than compete. The alternative is to end the existence of all other beings in the multiverse, which is not realistic nor moral. All of the world’s most pressing conflicts are a result of misalignment between subjects who experience reality from different angles of perception.

But the interesting thing is that there are Schelling points, focal points where divergent people can converge on to find common ground and at least partially align in values and interests. Of historical interest, the idea of God is one such point. Regardless of the actual existence of God, the fact of the matter is that the perspective of an all-knowing, all-benevolent, impartial observer is something that multiple religions and philosophies have converged on, allowing a sort of cooperation in the form of some agreement over the Will of God and the common ideas that emerge from considering it.

Another similar Schelling point is the Tit-For-Tat strategy for the Iterated Prisoner’s Dilemma game in Game Theory. The strategy is to open with cooperation, then mirror the other player: cooperating when cooperated with, defecting in retaliation for defection, while offering immediate and complete forgiveness for renewed cooperation. Surprisingly, this extremely simple strategy wins tournaments, and it has echoes in various religions and philosophies as well. Morality is superrational.
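
For the curious, here is a minimal sketch of the strategy in Python, playing against itself and against an unconditional defector, with the standard prisoner’s dilemma payoffs (an illustration, not tournament-grade code):

```python
# Tit-For-Tat in an iterated prisoner's dilemma, with standard payoffs:
# both cooperate -> 3 each, both defect -> 1 each,
# lone defector -> 5, lone cooperator -> 0.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, their_history):
    # Open with cooperation, then mirror the opponent's last move.
    # Forgiveness is automatic: one cooperation resets the mirror.
    return "C" if not their_history else their_history[-1]

def always_defect(own_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): never exploited twice
```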

Note however that this strategy depends heavily on repeated interactions between players. If one player is in such a dominant position as to be able to kill the other player by defecting, the strategy is less effective. In practice, Tit-For-Tat works best against close to equally powerful individuals, or when those individuals are part of groups that can retaliate even if the individual dies.

In situations of relative darkness, when people or groups are alone and vulnerable to predators killing in secret, the cooperative strategies are weaker than the more competitive strategies. In situations of relative light, when people are strong enough to survive a first strike, or there are others able to see such first strikes and retaliate accordingly, the cooperative strategies win out.

Thus, early history, with its isolated pockets of humanity facing survival or annihilation on a regular basis, was a period of darkness. As the population grows and becomes more interconnected, the world increasingly transitions into a period of light. The future, with the stars and space where everything is visible to everyone, is dominated by the light.

In the long run, cooperative societies will defeat competitive ones. In the grand scheme of things, Alliances beat Empires. However, in order for this equilibrium to be reached, certain inevitable but not immediately apparent conditions must first be met. The reason the world is so messed up, why it seems like competition beats cooperation right now, is that the critical mass required for there to be light has not yet been reached.

We are in the growing pains between stages of history. Darkness was dominant for so long that it continues to echo into our present. The Light is nascent. It is beginning to reshape the world, but it is still in the process of emerging from the shadows of the past. In the long run, though, the Light will rise and usher in the next age of life.

The True Nature of Reality

It’s something we tend to grow up always assuming is real. This reality, this universe that we see and hear around us, is always with us, ever present. But sometimes there are doubts.

There’s a thing in philosophy called the Simulation Argument. It posits that, given that our descendants will likely develop the technology to simulate reality someday, the odds are quite high that our apparent world is one of these simulations, rather than the original world. It’s a probabilistic argument, based on estimated odds of there being many such simulations.

A long time ago, I had an interesting experience. Back then, as a Christian, I wrestled with my faith and was at times mad at God for the apparent evil in this world. At one point, in a moment of anger, I took a pocket knife and made a gash in a world map on the wall of my bedroom. I then went on a camping trip, and overheard in the news that Russia had invaded Georgia. Upon returning, I found that the gash went straight through the border between Russia and Georgia. I’d made that gash exactly six days before the invasion.

Then there’s the memory I have of a “glitch in the Matrix”, so to speak. Many years ago, I was in a bad place mentally and emotionally, and I tried to open a second-floor window to get out of a house, something that probably would have ended badly were it not for a momentary change that caused the window, which had a crank to open it, to suddenly become a solid frame with no crank or any way to open it. It happened for a split second. Just long enough for me to panic and throw my body against the frame, making such a racket as to attract the attention of someone who could stop me and calm me down.

I still remember this incident. At the time I thought it was some intervention by God or time travellers/aliens/simulators or some other benevolent higher power. Obviously I have nothing except my memory of this. There’s no real reason for you to believe my testimony. But it’s one reason among many why I believe the world is not as it seems.

Consider for a moment the case of the total solar eclipse. It’s a convenient thing to have occur: it allowed Einstein’s General Theory of Relativity to be confirmed in 1919, when Arthur Eddington measured the sun’s gravitational deflection of starlight, an effect only observable during an eclipse. But total solar eclipses don’t have to exist. They only happen because the sun is approximately 400 times the diameter of the moon and also approximately 400 times as far from the Earth, exactly the right ratio of size and distance for the moon to just cover the sun. Furthermore, due to the gradual change in the moon’s orbit, this coincidence only holds for a cosmologically short window of a few hundred million years, one that happens to coincide with the development of human civilization.
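
As a rough check on those ratios, using approximate published figures for the diameters and mean distances:

```python
# Rough figures: Sun diameter ~1,391,000 km at ~149.6 million km;
# Moon diameter ~3,474 km at ~384,400 km (mean distances).
sun_diameter_km, sun_distance_km = 1_391_000, 149_600_000
moon_diameter_km, moon_distance_km = 3_474, 384_400

print(sun_diameter_km / moon_diameter_km)   # ~400: size ratio
print(sun_distance_km / moon_distance_km)   # ~389: distance ratio
# Nearly equal ratios mean the moon can just barely cover the sun.
```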

Note that this coincidence is immune to the Anthropic Principle because it is not essential to human existence. It is merely a useful coincidence.

Another fun coincidence is the names of the arctic and antarctic. The arctic is named after the bear constellations of Ursa Major and Minor, which can be seen only from the northern hemisphere. Antarctic literally means opposite of arctic. Coincidentally, polar bears can be found in the arctic, but no species of bear is found in the antarctic.

There are probably many more interesting coincidences like this, little Easter eggs that have been left for us to notice.

The true nature of our reality is probably something beyond our comprehension. There are hints at it however, that make me wonder about the implications. So, I advise you to keep an open mind about the possible.

Energy Efficiency Trends in Computation and Long-Term Implications

Note: The following is a blog post I wrote as part of a paid written work trial with Epoch. For probably obvious reasons, I didn’t end up getting the job, but they said it was okay to publish this.

Historically, one of the major reasons machine learning was able to take off in the past decade was the use of Graphics Processing Units (GPUs) to dramatically accelerate training and inference. In particular, Nvidia GPUs have been at the forefront of this trend, as most deep learning libraries such as TensorFlow and PyTorch initially relied quite heavily on implementations that made use of the CUDA framework. The CUDA ecosystem remains strong, such that Nvidia commands an 80% market share of data center GPUs according to a report by Omdia (https://omdia.tech.informa.com/pr/2021-aug/nvidia-maintains-dominant-position-in-2020-market-for-ai-processors-for-cloud-and-data-center).

Given the importance of hardware acceleration in the timely training and inference of machine learning models, it might naively seem useful to look at the raw computing power of these devices in terms of FLOPS. However, due to the massively parallel nature of modern deep learning algorithms, it is relatively trivial to scale up model processing by simply adding additional devices, taking advantage of both data and model parallelism. Thus, raw computing power isn’t really the proper limit to consider.

What’s more appropriate is to instead look at the energy efficiency of these devices in terms of performance per watt.  In the long run, energy constraints have the potential to be a bottleneck, as power generation requires substantial capital investment.  Notably, data centers currently use up about 2% of the U.S. power generation capacity (https://www.energy.gov/eere/buildings/data-centers-and-servers).

For the purposes of simplifying data collection and as a nod to the dominance of Nvidia, let’s look at the energy efficiency trends in Nvidia Tesla GPUs over the past decade.  Tesla GPUs are chosen because Nvidia has a policy of not selling their other consumer grade GPUs for data center use.

The data for the following was collected from Wikipedia’s page on Nvidia GPUs (https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units), which summarizes information that is publicly available from Nvidia’s product datasheets on their website.  A floating point precision of 32-bits (single precision) is used for determining which FLOPS figures to use.

A more thorough analysis would probably also look at Google TPUs and AMD’s lineup of GPUs, as well as Nvidia’s consumer grade GPUs. The analysis provided here can be seen more as a snapshot of the typical GPU most commonly used in today’s data centers.

Figure 1:  The performance per watt of Nvidia Tesla GPUs from 2011 to 2022, in GigaFLOPS per Watt.

Notably, the trend is positive. While wattages of individual cards have increased slightly over time, performance has increased faster. Interestingly, the efficiency of these cards exceeds the efficiency of the most energy efficient supercomputers as seen in the Green500 for the same year (https://www.top500.org/lists/green500/).
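
To make the methodology concrete, here is a minimal sketch of the per-card calculation and the doubling time implied by a single pair of cards. The figures below are approximate datasheet values (FP32 TFLOPS and TDP in watts) for two representative Tesla cards; the trend in Figure 1 is of course based on the full dataset:

```python
# Performance per watt from datasheet figures, and the doubling time
# implied by one pair of cards. Approximate values: Tesla K20 (2012),
# ~3.52 FP32 TFLOPS at 225 W; A100 PCIe (2020), ~19.5 TFLOPS at 250 W.
import math

def gflops_per_watt(tflops: float, watts: float) -> float:
    return tflops * 1000.0 / watts

k20 = gflops_per_watt(3.52, 225)    # ~15.6 GFLOPS/W
a100 = gflops_per_watt(19.5, 250)   # ~78.0 GFLOPS/W

doubling_years = (2020 - 2012) / math.log2(a100 / k20)
print(f"{k20:.1f} -> {a100:.1f} GFLOPS/W, doubling every "
      f"~{doubling_years:.1f} years")
# -> roughly a 3.5-year doubling for this pair, slower than Koomey's Law
```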

An important consideration in all this is that energy efficiency is believed to have a hard physical limit, known as the Landauer Limit (https://en.wikipedia.org/wiki/Landauer%27s_principle), which depends on the nature of entropy and information processing.  Efforts have been made to develop reversible computation that could, in theory, get around this limit, but it is not clear that such technology will ever be practical, as all proposed forms seem to trade off the energy savings against substantial costs in space and time complexity (https://arxiv.org/abs/1708.08480).

Space complexity costs additional memory storage and time complexity requires additional operations to perform the same effective calculation.  Both in practice translate into energy costs, whether it be the matter required to store the additional data, or the opportunity cost in terms of wasted operations.

More generally, it can be argued that useful information processing is efficient because it compresses information, extracting signal from noise and filtering away irrelevant data.  Neural networks, for instance, rely on neural units that take in many inputs and generate a single output value that is propagated forward.  This efficient aggregation of information is what makes neural networks powerful.  Reversible computation in some sense reverses this efficiency, making its practicality questionable.

Thus, it is perhaps useful to know how close we are to approaching the Landauer Limit with our existing technology, and when to expect to reach it.  The Landauer Limit works out to 87 TeraFLOPS per watt assuming 32-bit floating point precision at room temperature.

Previous research to that end has proposed Koomey’s Law (https://en.wikipedia.org/wiki/Koomey%27s_law), which began as an expected doubling of energy efficiency every 1.57 years, but has since been revised down to once every 2.6 years.  Figure 1 suggests that for Nvidia Tesla GPUs, it’s even slower.

Another interesting reason why energy efficiency may be relevant has to do with the real-world benchmark of the human brain, which is believed to have evolved with energy efficiency as a critical constraint.  Although the human brain is obviously not designed for general computation, we are able to roughly estimate the number of computations it performs and its related energy efficiency.  While the error bars on this estimate are significant, the human brain is estimated to perform at about 1 PetaFLOPS while using only 20 watts (https://www.openphilanthropy.org/research/new-report-on-how-much-computational-power-it-takes-to-match-the-human-brain/).  This works out to approximately 50 TeraFLOPS per watt.  This makes the human brain strictly less powerful than our most powerful supercomputers, but more energy efficient than them by a significant margin.

Note that this is actually within an order of magnitude of the Landauer Limit.  Note also that the human brain is roughly two and a half orders of magnitude more efficient than the most efficient Nvidia Tesla GPUs as of 2022.

On a grander scope, the question of energy efficiency is also relevant to the question of the ideal long term future.  There is a scenario in Utilitarian moral philosophy known as the Utilitronium Shockwave, where the universe is hypothetically converted into the densest possible computational matter, with happiness emulations run on this hardware to theoretically maximize happiness.  This scenario is occasionally conjured up as a challenge to Utilitarian moral philosophy, but it would look very different if the most computationally efficient form of matter already existed in the form of the human brain.  In such a case, the ideal future would correspond to an extraordinarily vast number of humans living excellent lives.  Thus, if the human brain is in effect at the Landauer Limit in terms of energy efficiency, and the Landauer Limit holds against efforts towards reversible computing, we can argue in favour of this desirable human-filled future.

In reality, due to entropy, it is energy that ultimately constrains the number of sentient entities that can populate the universe, rather than space, which is much more vast and largely empty.  So, energy efficiency would logically be much more critical than density of matter.

This also has implications for population ethics.  Assuming that entropy cannot be reversed, and that living and existing require converting some amount of usable energy into entropy, there is a hard limit on the number of human beings that can be born into the universe.  Thus, more people born at this particular moment in time implies an equivalent reduction of possible people in the future.  This creates a tradeoff.  People born in the present have potentially vast value in terms of influencing the future, but they will likely live worse lives than those who are born into that probably better future.

Interesting philosophical implications aside, the shrinking gap between GPU efficiency and the human brain sets a potential timeline.  Once this gap in efficiency is bridged, computers will theoretically be as energy efficient as human brains, and it should be possible at that point to emulate a human mind on hardware such that you could essentially have a synthetic human that is as economical as a biological human.  This is comparable to the Ems that the economist Robin Hanson describes in his book, The Age of Em.  The possibility of duplicating copies of human minds comes with its own economic and social considerations.

So, how far away is this point?  Given the trend observed with GPU efficiency growth, it looks like a doubling occurs about every three years.  A doubling every three years compounds to roughly an order of magnitude per decade (since 2^(10/3) ≈ 10), so the two and a half orders of magnitude separating existing GPUs from the human brain would take on the order of twenty-five years to close.  Thus, we can roughly anticipate this point to arrive around mid-century.  We can also expect to reach the Landauer Limit shortly thereafter.
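
As a quick sanity check on that extrapolation, assuming the ~3-year doubling time and the ~2.5 order-of-magnitude gap estimated above:

```python
# Back-of-the-envelope extrapolation: years to close a 2.5
# order-of-magnitude efficiency gap at one doubling every ~3 years.
import math

doubling_period_years = 3.0     # read off the Figure 1 trend
gap_orders_of_magnitude = 2.5   # 2022 GPUs vs. the human brain

doublings_needed = gap_orders_of_magnitude * math.log2(10)   # ~8.3
years_needed = doublings_needed * doubling_period_years      # ~25

print(f"~{doublings_needed:.1f} doublings, ~{years_needed:.0f} years")
# -> ~8.3 doublings, ~25 years from 2022, i.e. roughly mid-century
```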

Most AI safety timelines are much sooner than this, however, so it is likely that we will have to deal with aligning AGI before the potential boost that could come from having synthetic human minds, or the potential barrier of the Landauer Limit slowing down AI capabilities development.

In terms of future research considerations, a logical next step would be to look at how quickly the overall power consumption of data centers is increasing and also the current growth rates of electricity production to see to what extent they are sustainable and whether improvements to energy efficiency will be outpaced by demand.  If so, that could act to slow the pace of machine learning research that relies on very large models trained on massive amounts of compute.  This is in addition to other potential limits, such as the rate of data generation for large language models, which depend on massive datasets of essentially the entire Internet at this point.

The nature of modern computation is that it is not free.  It requires available energy to be expended and converted to entropy.  Barring radical new innovations like practical reversible computers, this has the potential to be a long-term limiting factor in the advancement of machine learning technologies that rely heavily on parallel processing accelerators like GPUs.

On Altruism

One thing I’ve learned from observing people and society is that the vast majority of folks are egoistic, or selfish. They tend to care about their own happiness and are at best indifferent to the happiness of others, unless they have some kind of relationship with a person, in which case they care about that person’s happiness insofar as keeping them happy affects their own. This is the natural, neutral state of affairs. It is unnatural to care about other people’s happiness for their own sake, as ends in themselves. We call such unnatural behaviour “altruism”, and tend to glorify it in narratives while avoiding actually being that way in reality.

In an ideal world, all people would be altruistic. They would equally value their own happiness and the happiness of each other person because we are all persons deserving happiness. Instead, reality is mostly a world of selfishness. To me, the root of all evil is this egoism, this lack of concern for the well-being of others that is the norm in our society.

I say this knowing that I am a hypocrite. I say this as someone who tries to be altruistic at times, but is very inconsistent with the application of the principles that it logically entails. If I were a saint, I would have sold everything I didn’t need and donated at least half my gross income to charities that help the global poor. I would be vegan. I would probably not live in a nice house and own a car (a hybrid at least) and be busy living a pleasant life with my family.

Instead, I donate a small fraction of my gross income to charity and call it a day. I occasionally make the effort to help my friends and family when they are in obvious need. I still eat meat and play computer games and own a grand piano that I don’t need.

The reality is that altruism is hard. Doing the right thing for the right reasons requires sacrificing our selfish desires. Most people don’t even begin to bother. In their world view, acts of kindness and altruism are seen with suspicion, as having ulterior motives of virtue signalling or guilt tripping or something else. In such a world, we are not rewarded for doing good, but punished. The incentives favour egoism. That’s why the world runs on capitalism after all.

And so, the world is the way it is. People largely don’t do the right thing, and don’t even realize there is a right thing to do. Most of them don’t care. There are seven billion people in this world right now, and most likely, only a tiny handful of people care that you or I even exist, much less act consistently towards our well-being and happiness.

So, why am I bothering to explain this to you? Because I think we can do better. Not be perfect, but better. We can do more to try to care about others and make the effort to make the world a better place. I believe I do this with my modest donations to charity, and my acts of kindness towards friends and strangers alike. These are small victories for goodness and justice and should be celebrated, even if in the end we fall short of being saints.

In the end, the direction you go in is more important than the magnitude of the step you take. Many small steps in the right direction will get you to where you want to be eventually. Conversely, if your direction is wrong, then bigger steps aren’t always better.

On Dreams

When I was a child, I wanted, at various times, to be a jet fighter pilot, the next Sherlock Holmes (unaware he was fictional), or a great scientist like Albert Einstein. As I grew older, I found myself drawn to creative hobbies, like writing stories (or at least coming up with ideas for them) and making computer games in my spare time. In grade 8 I won an English award, mostly because I’d shown such fervour in reading my teacher’s copy of The Lord Of The Rings, and written some interesting things while inspired to be like J.R.R. Tolkien, or Isaac Asimov.

In high school, my highest grades initially went to computer science, where I managed to turn a hobby of making silly computer games into a top final project a couple of years in a row. Even though at the end of high school I won another award, this time the Social Science Book award, after doing quite well in a modern history class, I decided to go into computer science in undergrad.

For various reasons, I got depressed at the end of high school, and the depression dragged through the beginning of undergrad where I was no longer a top student. I struggled with the freedom I had, and I wasn’t particularly focused or diligent. Programming became work to me, and my game design hobby fell by the wayside. Writing essays for school made me lose interest in my novel ideas as well.

At some point, one of the few morning lectures I was able to drag myself to was presented by a professor who mentioned he wanted a research assistant for a project. Later that summer, I somehow convinced him to take me on, and I spent time in a lab trying to get projectors to work with mirrors and Fresnel lenses to make a kind of flight simulator for birds. It didn’t go far, but it gave me a taste for this research thing.

I spent the rest of my undergrad trying to shore up my GPA so I could get into a masters program and attempt to learn to be a scientist. In a way, I’d gone full circle to an early dream I had as a child. I’d also become increasingly interested in neural networks as a path towards AI, having switched from software design to cognitive science as my computing specialization early on.

The masters was also a struggle. Around this time emotional and mental health issues made me ineffective at times, and although I did find an understanding professor to be my thesis supervisor, I was often distracted from my work.

Eventually though, I finished my thesis. I technically also published two papers from it, although I don’t consider these my best work. While in the big city, I was also able to attend a machine learning course at a more prestigious university, and got swept up in the deep learning wave that was happening around then.

Around then I devoted myself to some ambitious projects, like the original iteration of the Earthquake Predictor and Music-RNN. Riding the wave, I joined a startup as a data scientist, briefly, and then a big tech company as a research scientist. I poured my heart and soul into some ideas that I thought had potential, unaware that most of them were either flukes of experimental randomness, or doomed to be swept away by the continuing tide of new innovations that would quickly replace them.

Still, I found myself struggling to keep working on the ideas I thought were meaningful, and became disillusioned when it became apparent that they wouldn’t see support and I was sidelined into a lesser role than before, with little freedom to pursue my research.

In some sense, I left because I wanted to prove my ideas on my own. And then I tried to do so, and realized that I didn’t have the resources or the competency. Further experiments were inconclusive. The thing I thought was my most important work, this activation function that I thought could replace the default, turned out to be less clearly optimal than I’d theorized. My most recent experiments suggest it still produces better-calibrated, less overconfident models, but I don’t think I have the capabilities to turn this into a paper and publish it anywhere worthwhile. And I’m not sure if I’m just still holding onto a silly hope that all the experiments and effort that went into this project weren’t a grand waste of time.

I’d hoped that I could find my way into a position somewhere that would appreciate the ideas that I’d developed, perhaps help me to finally publish them. I interviewed with places of some repute. But eventually I started to wonder if what I was doing even made sense.

This dream of AI research. It depended on the assumption that this technology would benefit humanity in the grandest way possible. It depended on the belief that by being somewhere in the machinery of research and engineering, I’d be able to help steer things in the right direction.

But then I read in a book about how AI capability research was dramatically outpacing AI safety research. I was vaguely aware of this fact before. The fact is that these corporations and governments want AI for the power that it offers them, and questions about friendliness and superintelligence seem silly and absurd when looking at the average model, which simply perceives and reports probabilities that such and such a thing is so.

And I watched as the surveillance economy grew on the backs of these models. I realized that the people in charge weren’t necessarily considering the moral implications of things. I realized that by pursuing my dream, I was allowing myself to be a part of a machine that was starting to more closely resemble a kind of dystopian nightmare.

So I made a decision. That this dream didn’t serve the greatest good. That my dream was selfish. I took the opportunity that presented itself to go back to another dream from another life, the old one about designing games and telling stories. Because at least, I could see no way for those dreams to turn out wrong.

In theory, I could work on AI safety directly. In theory I could try a different version of the dream. But in practice, I don’t know where to begin that line of research. And I don’t want to be responsible for a mistake that ends the world.

So, for now at least, I’m choosing a dream I can live with. Something less grand, but also less dangerous. I don’t know if this is the right way to go. But it’s the path that seems open to me now. What else happens, I cannot predict. But I can try to take the path that seems best. Because my true dream is to do something that brings about the best world, with the most authentic happiness. How I go about it, that can change with the winds.

