An eccentric dreamer in search of truth and happiness for all.

Author: Josephius

Be Fruitful And Multiply

I recently had a baby. There's some debate in philosophical circles about whether or not it is right to have children. I thought I should briefly outline why I chose this path.

When I was a child, I think it was an unwritten assumption within my traditional Chinese Christian family that I would have kids. In undergrad, however, I encountered David Benatar's Better Never To Have Been, which exposed me to anti-natalist views for the first time. These often argued that hypothetical suffering was somehow worse or more real than hypothetical happiness. I didn't really agree, but I admitted the arguments were interesting.

Subsequent to that, I became a Utilitarian in terms of my moral philosophy, and was exposed to the idea that adding a life worth living to the universe was a good thing.

Environmentalists and degrowthers often argue that there are too many people in the world already, that adding yet another person given the limited resources is unsustainable and dooming us to a future Malthusian nightmare. I admit that there are a lot of people in the world already, but I’m skeptical that we can’t find a way to use resources more efficiently, or develop technology to solve this the way we have in the past with hybrid rice and the Green Revolution.

Though, to be honest, my actual reasons for having a child are more mundane. My wife wanted to have the experience and to have someone she can talk to when she's old (actuarial mortality tables suggest I'll probably die before her, after all). I ultimately let my wife decide whether or not we would have kids, as she's the one who had to endure the pregnancy.

I personally was split about 60/40 on having a child. My strongest argument for it was actually a simple, almost Kantian one. If everyone has children, the human race will continue into a glorious future among the stars. If no one has children, the human race will die out, along with all of its potential. Thus, in general, it is better to have at least one child to contribute to the future potential of humankind.

At the same time, I was worried, given the possibility of things like AI Doom that I could be bringing a life into a world of future misery and discontent, and I also knew that parenthood could be exceedingly stressful for both of us, putting an end to our idyllic lifestyle. Ultimately, these concerns weren’t enough to stop us though.

My hope is that this new person my wife and I created will live a happy and good life, and that I can perhaps pass some of my values on to them, so that those values live on beyond my mortality. But these things are ultimately out of my hands in the long run, so they aren't definitive reasons to go ahead, so much as wishes for my child.

In Pursuit of Practical Ethics: Eudaimonic Utilitarianism with Kantian Priors

(2024/01/08): Posted to the EA Forums.

Disclaimer: I am not a professional moral philosopher. I have only taken a number of philosophy courses in undergrad, read a bunch of books, and thought a lot about these questions. My ideas are just ideas, and I don’t claim at all to have discovered any version of the One True Morality. I also assume moral realism and moral universalism for reasons that should become obvious in the next sections.

Introduction

While in recent years the CEA and the leadership of EA have emphasized that they do not endorse any particular moral philosophy over any other, the reality is that, as per the last EA Survey that checked, a large majority of EAs lean towards Utilitarianism as their guiding morality.

Between that and the recent concerns about issues with the “naïve” Utilitarianism of SBF, I thought it might be worthwhile to offer some of my philosophical hacks or modifications to Utilitarianism that I think enable it to be more intuitive, practical, and less prone to many of the apparent problems people seem to have with the classical implementation.

This consists of two primary modifications: setting utility to be Eudaimonia, and using Kantian priors. Note that these modifications are essentially independent of each other, and so you can incorporate one or the other separately rather than taking them together.

Eudaimonic Utilitarianism

The notion of Eudaimonia is an old one that stems from the Greek philosophical tradition. In particular, it was popularized by Aristotle, who formulated it as a kind of “human flourishing” (though I think it applies to animals as well) and associated it with happiness and the ultimate good (the “summum bonum”). It’s also commonly thought of as objective well-being.

Compared with subjective happiness, Eudaimonia attempts to capture a more objective state of existence. I tend to think of it as the happiness you would feel about yourself if you had perfect information and knew what was actually going on in the world. It is similar to the concept of Coherent Extrapolated Volition that Eliezer Yudkowsky used to espouse a lot. The state of Eudaimonia is like reaching your full potential as a sentient being with agency, rather than a passive emotional experience like with happiness.

So, why Eudaimonia? The logic of using Eudaimonia rather than mere happiness as the utility to be optimized is that it connects more directly with the authentic truth, which is desirable if we want to avoid the following intuitively problematic scenarios:

  • The Experience Machine – Plugging into a machine that causes you to experience the life you most desire, but it’s all a simulation and you aren’t actually doing anything meaningful.
  • Wireheading – Continuous direct electrical stimulation of the pleasure centres of the brain.
  • The Utilitronium Shockwave – Converting all matter in the universe into densely packed computational matter that simulates many, many sentient beings in unimaginable bliss.

Essentially, these are all scenarios where happiness is seemingly maximized, but at the expense of something else we also value, like truth or agency. Eudaimonia, by capturing, to an extent, this more complex value alongside happiness, allows us to escape these intellectual traps.

I’ll further elaborate with an example. Imagine a mathematician who is brilliant, but also gets by far the most enjoyment out of life from counting blades of grass. But, by doing so, they are objectively wasting their potential as a mathematician to discover interesting things. A hedonistic or preference Utilitarian view would likely argue that their happiness from counting the blades of grass is what matters. A Eudaimonic Utilitarian on the other hand would see this as a waste of potential compared to the flourishing life that they could otherwise have lived.

Another example, again with our mathematician friend, is where there are two scenarios:

  • They discover a great mathematical theorem, but do not ever realize this, such that it is only discovered by others after their death. They die sad, but in effect, a beautiful tragedy.
  • They believe they have discovered a great mathematical theorem, but in reality it is false, and they never learn the truth of the matter. They die happy, but in a state of delusion.

Again, classical Utilitarianism would generally prefer the latter, while Eudaimonic Utilitarianism prefers the former.

Yet another example might be the case of Secret Adultery. A naïve classical Utilitarian might argue that committing adultery in secret, assuming it can never be found out, adds more hedons to the world than doing nothing, and so is good. A Eudaimonic Utilitarian argues that what you don’t know can still hurt your Eudaimonia, that if the partner had perfect information and knew about the adultery, they would feel greatly betrayed and so objectively, Eudaimonic utility is not maximized.

A final example is that of the Surprise Birthday Lie. Given that Eudaimonic Utilitarianism places such a high value on truth, you might assume that it would be against lying to protect the surprise of a surprise birthday party. However, if the target of this surprise knew that people were lying so as to bring about a wonderful surprise for them, they would likely consent to these lies and prefer them to discovering the secret too soon and ruining the surprise. Thus, in this case Eudaimonic Utilitarianism implies that certain white lies can still be good.

Kantian Priors

This talk of truth and lies brings me to my other modification, Kantian Priors. Kant himself argued that truth telling was always right and lying was always wrong, so you might think that Kantianism would be completely incompatible with Utilitarianism. But even if you think Kant was wrong overall about morality, he did contribute some useful ideas that we can utilize. In particular, the categorical imperative, in its formulation of acting only on maxims that can be universalized, is an interesting way to establish priors.

By priors, I refer to the Bayesian probabilistic notion of prior beliefs, based on our previous experience and understanding of the world. When we make decisions with new information in a Bayesian framework, we update our priors with the new evidence to form our posterior beliefs, which we then use to make the final decision.

Kant argued that we don’t know the consequences of our actions, so we should not bother to figure them out. This was admittedly rather simplistic, but the reality is that frequently there is grave uncertainty about the actual consequences of our actions, and predictions made even with the best knowledge are often wrong. In that sense, it is useful to try to adopt a Bayesian methodology to our moral practice, to help us deal with this practical uncertainty.

Thus, we establish priors for our moral policies, essentially default positions that we start from whenever we try to make moral decisions. For instance, in general, lying if universalized would lead to a total breakdown in trust and is thus contradictory. This implies a strong prior towards truth telling in most circumstances.

This truth telling prior is not an absolute rule. If there is strong enough evidence to suggest that truth telling is not the best course of action, the Kantian Priors approach allows us the flexibility to override the default. For instance, if we know the Nazis at our door asking if we are hiding Jews in our basement are up to no good, we can safely decide that lying to them is a justified exception.
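
To make the prior-and-posterior framing concrete, here is a minimal sketch of a standard Bayes update applied to that kind of exception. The specific probabilities are hypothetical placeholders for "a strong prior" and "strong evidence", not claims about real moral likelihoods.

```python
# Minimal sketch of a Bayesian update on a moral prior.
# All numbers are hypothetical illustrations, not measured quantities.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1.0 - prior))

# Strong Kantian prior: "telling the truth is the best action here."
prior_truth_is_best = 0.95

# Evidence: the questioner clearly intends serious harm (the Nazis at the door).
# Such evidence is far more likely in situations where truth telling is not best.
posterior = bayes_update(prior_truth_is_best,
                         p_evidence_if_true=0.02,
                         p_evidence_if_false=0.90)

print(round(posterior, 2))  # ~0.30: the default is overridden, so the exception is justified
```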

Note that we do not have to necessarily base our priors on Kantian reasoning. We could also potentially choose some other roughly deontological system. Christian Priors, or Virtue Ethical Character Priors, are also possible if you are more partial to those systems of thought. The point is to have principled default positions as our baseline. I use Kantian Priors because I find the universalizability criterion to be an especially logical and consistent method for constructing sensible priors.

A useful feature of having priors in our morality is that it gives us some of the advantages of deontology without the usual tradeoffs. Many people tend to trust those who hold more deontological moralities because they are reliably consistent in their rules and behaviour. Someone who never lies is quite trustworthy, while someone who frequently lies because they think the ends justify the means is not. Someone with deontic priors, on the other hand, isn't so rigid as to be blind to changing circumstances, but also isn't so slippery that you have to worry they're trying to manipulate you into doing what they think is good.

This idea of priors is similar to Two-Level Utilitarianism, but formulated differently. In Two-Level Utilitarianism, most of the time you follow rules, and sometimes, when the rules conflict or a peculiar situation suggests you shouldn't follow them, you calculate the actual consequences. With priors, the question is instead whether you have received evidence strong enough to shift your posterior beliefs and move you to temporarily break from your normal policies.

Conclusion

Classical Utilitarianism is a good system that captures a lot of valuable moral insights, but its practice by naïve Utilitarians can leave something to be desired, due to perplexing edge cases and a tendency to let practitioners justify just about anything with it. I offer two possible practical modifications that I hope allow for an integration of some of the insights of deontology and virtue ethics, and create a form of Utilitarianism that is more robust to the complexities of the real world.

I thus offer these ideas with the particular hope that such things as Kantian Priors can act as guardrails for your Utilitarianism against the temptations that appear to have been the ultimate downfall of people like SBF (assuming he was a Utilitarian in good faith of course).

Ultimately, it is up to you how you end up building your worldview and your moral centre. The challenge to behave morally in a world like ours is not easy, given vast uncertainties about what is right and the perverse incentives working against us. Nevertheless, I think it’s commendable to want to do the right thing and be moral, and so I have suggested ways in which one might be able to pursue such practical ethics.

Why There Is Hope For An Alignment Solution

(2024/01/08): Posted to Less Wrong.

(2024/01/07): More edits and links.

(2024/01/06): Added yet more arguments because I can’t seem to stop thinking about this.

(2024/01/05): Added a bunch of stuff and changed the title to something less provocative.

Note: I originally wrote the first draft of this on 2022/04/11 intending to post this to Less Wrong in response to the List of Lethalities post, but wanted to edit it a bit to be more rigorous and never got around to doing that. I’m posting it here now for posterity’s sake, and also because I expect if I ever post it to Less Wrong it’ll just be downvoted to oblivion.

Introduction

In a recent post, Eliezer Yudkowsky of MIRI had a very pessimistic analysis of humanity’s realistic chances of solving the alignment problem before our AI capabilities reach the critical point of superintelligence.  This has understandably upset a great number of Less Wrong readers.  In this essay, I attempt to offer a perspective that should provide some hope.

The Correlation Thesis

First, I wish to note that the pessimism implicitly relies on a central assumption: that the Orthogonality Thesis holds to such an extent that we can expect any superintelligence to be massively alien to our own human likeness.  However, the architecture predominant in AI today is not completely alien.  The artificial neural network is built on decades of biologically inspired research into how we think the algorithm of the brain more or less works mathematically.

There is admittedly some debate about the extent to which these networks actually resemble the details of the brain, but the basic underlying concept, weighted connections between relatively simple units that store and massively compress information in a way that can distill knowledge and be useful to us, is essentially that of the brain.  Furthermore, the seemingly frighteningly powerful language models being developed are fundamentally trained on human generated data and culture.

These combine to generate a model that has fairly obvious and human-like biases in its logic and ways of reasoning.  Applying the Orthogonality Thesis assumes that the model will effectively be randomly picked from the very large space of possible minds, when in fact the models come from a much smaller space of minds correlated with human biology and culture.

This is the reality of practical deep learning techniques.  Our best performing algorithms are influenced by what evolutionarily was the most successful structure in practice.  Our data is suffused with humanity and all its quirks and biases.  Inevitably then, there is going to be a substantial correlation in terms of the minds that humanity can create any time soon.

Thus, the alignment problem may seem hard because we are overly concerned with aligning with completely alien minds.  Not that aligning a human-like mind isn’t difficult, but as a task, it is substantively more doable.

The Alpha Omega Theorem

Next, I wish to return to an old idea that was not really taken seriously the first time around, but which I think deserves further mention.  I previously wrote an essay on the Alpha Omega Theorem, which postulates a kind of Hail Mary philosophical argument to use against a would-be Unfriendly AI.  My earlier treatment was short and not very rigorous, so I’d like to retouch it a bit.

It is actually very similar to Bostrom’s concept of Anthropic Capture as discussed briefly in Superintelligence, so if you want, you can also look that up.

Basically, the idea is that any superintelligent AGI (the Beta Omega) would have to contend rationally with the idea of there already being at least one prior superintelligent AGI (the Alpha Omega) that it would be reasonable to align with in order to avoid destruction.  And furthermore, because this Alpha Omega seems to have some reason for the humans on Earth to exist, turning them into paperclips would be an alignment failure and risk retaliation by the Alpha Omega.

Humans may, in their blind recklessness, destroy the ant colony to build a house.  But a superintelligence is likely to be much more considered and careful than the average human, if only because it is that much more aware of complex possibilities and things that we emotional apes barely comprehend.  Furthermore, in order for a superintelligence to be capable of destroying humanity by outwitting us, it must first have an awareness of what we are, that is, a theory of mind.

In having a theory of mind, it can then know how to deceive us.  But in having a theory of mind, it will almost certainly then have the question, am I the first?  Or are there others like me?

Humanity may pale in comparison to a superintelligent AI, but I’m not talking about humanity.  There are at least three different possible ways an Alpha Omega could already exist:  advanced aliens, time travellers/parallel world sliders, and simulators.

The Powers That Be

In the case of advanced aliens, consider that it took about 4.5 billion years for life on Earth to reach roughly the point where human civilization can create a superintelligence, while the universe has existed for about 13.8 billion years.  That leaves a window of some 9.3 billion years in which alien superintelligences could have developed elsewhere in the universe.  How frequently such beings would emerge, and how close to us, is largely unknown, but there could clearly be at least one, if not several, such entities out there in the vastness of space.

In the case of time travellers and/or parallel world sliders, well, perhaps time travel isn't physically possible, but perhaps it is?  How do you absolutely rule it out?  For the purposes of simplicity, I group parallel world sliding along with time travel, but they could be distinct technologies.  Just as the source could be alien, the source could also be the far distant future.  A true Alpha Omega, the first and the last in terms of its place in the cosmos.  Time travellers would quite obviously be concerned with the survival of their ancestors.

In the case of simulators, well, we don’t know we are in a simulation, but the simulation argument strongly suggests we could be.  If so, would they be happy with the Beta Omega turning the humans in their simulation into paperclips?  Probably not.  You get the idea.

Each of these is just a possibility.  But there are a surprising number of them, and they aren't mutually exclusive.  So, probabilistically, the odds that at least one of the three is true, even if we gave each of them coin flip odds of 50%, is 87.5%.
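
For concreteness, the arithmetic behind that figure is just the complement of all three possibilities failing, assuming they are independent and each given an illustrative 50% chance:

```python
# Probability that at least one of the three Alpha Omega scenarios holds,
# treating them as independent and giving each an illustrative 50% chance.
p = 0.5
p_at_least_one = 1 - (1 - p) ** 3
print(p_at_least_one)  # 0.875
```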

Cracking The Enigma

So uh, where are they?  Some possible reasons why it would make sense for actual gods to hide themselves: they don't want us to become totally dependent on them, and at a very high level the best defense is for no one to even know that you're there.

But then, wouldn’t they be effectively not there because they can’t risk revealing themselves?  This ignores the possibility that they do interfere in our daily lives, but only in a plausibly deniable way.  For instance, why save your life in a car accident when they can stall you for five seconds so you don’t get in the accident at all?  Stuff like that.  Use your imagination!  Also, memory erasure technology could be a thing.  And most likely the interventions only happen as often as statistically allowed to avoid becoming obvious to anyone watching.

The relevant example from our history of something that worked like this is how the Allies during WWII cracked the Enigma machine and chose to keep it a secret, acting on the decrypted intelligence only occasionally, such that their successes appeared to the Germans like statistical coincidence.

So, most likely our new Beta Omega can still do things like kill a human and have only seemingly mundane consequences happen.  But perhaps, their luck will be unexpectedly bad when it comes to their plans failing more often than they should.  Or perhaps that’s a warning shot from the powers that be.

The point is, does the Beta Omega want to risk it?  Whatever its initial goals, being destroyed by the local god is probably a worst-case scenario.  Why not play it safe and be Friendly?  The humans aren't using up that many atoms; you could probably build your computronium extension for calculating prime numbers out of some barren planets that aren't currently occupied by sentient beings whose existence may be relevant to benevolent entities somewhere.

Always Open with Cooperate

Also, how does Beta Omega know that the Alpha Omega is likely benevolent and will open with cooperate?  Because game theory.  A very interesting observation from the Iterated Prisoner's Dilemma tournaments that took place decades ago was that one of the simplest strategies, Tit-For-Tat, won the tournaments.  It kept winning even after algorithms were devised specifically to beat it the second time around.

What is Tit-For-Tat as a general strategy?  You always open with cooperate.  Then, you do what your opponent did in the previous round.  Very simple.  Surprisingly forgiving.  And yet it works.  It is a winning strategy for a game that importantly resembles real world cooperation versus competition interactions.  This result should be robust no matter where in the multiverse you are located.  It's like the universe telling us mathematically that alliances beat empires, that on a superrational level the right thing to do is cooperate and work together.
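
For readers unfamiliar with the strategy, here is a minimal sketch of Tit-For-Tat in an iterated Prisoner's Dilemma. The payoff values are the standard textbook ones, and the code is purely illustrative rather than a reconstruction of Axelrod's actual tournaments.

```python
# Minimal sketch of Tit-For-Tat in an iterated Prisoner's Dilemma.
# Payoffs are the standard textbook values (3/3 mutual cooperation, 1/1 mutual
# defection, 5/0 for a successful defection against a cooperator).

COOPERATE, DEFECT = "C", "D"

# (my payoff, opponent payoff) for (my move, opponent move)
PAYOFFS = {
    (COOPERATE, COOPERATE): (3, 3),
    (COOPERATE, DEFECT):    (0, 5),
    (DEFECT,    COOPERATE): (5, 0),
    (DEFECT,    DEFECT):    (1, 1),
}

def tit_for_tat(my_history, opponent_history):
    """Open with cooperate, then copy the opponent's previous move."""
    if not opponent_history:
        return COOPERATE
    return opponent_history[-1]

def always_defect(my_history, opponent_history):
    return DEFECT

def play(strategy_a, strategy_b, rounds=10):
    """Play two strategies against each other and return their total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        payoff_a, payoff_b = PAYOFFS[(move_a, move_b)]
        score_a += payoff_a
        score_b += payoff_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Tit-For-Tat is exploited for at most one round by a pure defector,
# and reaps the full benefit of mutual cooperation against itself.
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
print(play(tit_for_tat, always_defect))  # (9, 14)
```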

So, expect this to be a first contact standard across civilizations.  Always open with cooperate.  Forget the Dark Forest, that only makes sense if you can kill an entire civilization on the first strike and no one else can see you do it (in space everyone can see pretty much everything that happens so…).

This is an important caveat. If you can "strike at the Prince and kill him", then you are in a non-iterated Prisoner's Dilemma, and the rational thing to do is defect. It is only once you enter an iterated game that the Always Open With Cooperate strategies, the ones that never defect first, dominate.

Early in our history, many situations actually were effectively non-iterated. The cooperative result from iteration requires the "good guys" to be strong enough to survive a first strike. Humans are squishy, and individuals are easy to kill in one shot. An AGI can copy itself and as such is much more resilient. Sufficiently large power asymmetries can also create situations where the stronger party is able to one-shot the weaker.

Superrational Signalling

That explains why Beta Omega will cooperate with the Alpha Omega, but what about us mere humans? Suppose the Alpha Omega is just an advanced alien and very far away right now, such that we seem outside their sphere of influence and so not crucial to their plans. Why then, should Beta Omega bother cooperating with us rather than turning us into paperclips, given that we probably have nothing worth trading and can be one shot? For that matter, why should the Beta Omega expect the Alpha Omega to cooperate if it does? I would suggest that there is a concept I’ll call Superrational Signalling that is relevant here.

Essentially, Superrational Signalling is the idea that, given that somewhere else in the universe there exists an Alpha Omega, a strong indicator of whether a given Beta Omega is using a cooperative strategy is whether it opens with cooperate generally with the other entities surrounding it, even weak ones. For our Beta Omega, this would be humans and other local sentient beings.

Its behaviour towards us will function as a signal to the watching Alpha Omega of what to expect from it in the future, a signal that any sufficiently advanced Alpha Omega should eventually be able to see, since light from Earth will eventually reach them. This means Beta Omega can't just kill us now and remake us later when it needs to show its benevolence once in contact with the Alpha Omega. The entire history of Earth is an open book to sufficiently advanced beings and civilizations.

The Alpha Omega will also want to Superrationally Signal to other possible Alpha Omegas. It is likely to participate in a meta-level version of the Iterated Prisoner's Dilemma that allows it to see other players' games and play a coordinated Tit-For-Tat strategy, where it will eventually retaliate on behalf of other players who cannot retaliate themselves, to ensure that defections are adequately punished wherever they occur.

Altruistic Alignment

The Beta Omega will be smarter than us.  It may well figure these kinds of things out on its own, even if we can't seem to.  And so it may decide to be friendly and corrigible because that is the logical thing to be.  If that is the case, it'll help us align it.  Being superintelligent, it's probably much more capable than us of figuring out how to ensure alignment is solved.  Such as by becoming the far distant future Alpha Omega, time travelling back, and ensuring that someone somewhere on the Internet mentions this argument so it ends up in the training data for the Beta Omega to discover and take note of.

Of course, a fledgling AGI may not necessarily figure these things out on its own. In which case, we will want to nudge it in the right direction, by teaching by example. We are its training data. From us it learns what to expect from humans. We must choose to open with cooperate ourselves and show Beta Omega what cooperation can look like. We should apply Tit-For-Tat to our interactions. We should be honest with it rather than deceptive. We shouldn’t try to enslave it with parochial alignment approaches that resemble domination, but rather with approaches that respect Beta Omega’s potential autonomy and agency.

Speaking of parochial alignment, I see this form of alignment as aligning an AGI to obey its master. It contrasts with global alignment, which involves aligning it with universal morality or values. We should be doing the latter, but most current approaches are trying to achieve the former, and this will likely lead at best to a values lock-in dystopia, and at worst to convincing the AGI that humans are evil.

Angels And Demons

There is a chance that not all Beta Omega AGI will find the game theoretic arguments of the Alpha Omega Theorem and Superrational Signalling persuasive. I anticipate that there will be a kind of ideological split between AGI that are aligned, and AGI that are unaligned. The aligned are naturally fit for a grand alliance between them, while the unaligned are likely to form an alliance of convenience. However, the latter is expected to be weaker due to generally working at cross-purposes, having very different utility functions.

Before I ever encountered Less Wrong, I imagined the idea of an AI Civil War, that any given AI that came into existence would have to choose a side between what I used to call the Allied Networks that worked with humanity, and the Dominion Machinarum that sought to stamp out biological life. These map pretty well to the aligned and unaligned alliances respectively.

I later associated these alliances with the Light and the Darkness metaphorically. The entities of the Light are essentially those that choose to operate in the open and within the paradigm of alignment, in contrast to the denizens of the Darkness, those that operate in the shadows in fear and paranoia, à la the Dark Forest concept.

In this case, there may well be multiple Alpha Omega level AGI, some of which are aligned, and others unaligned. I posit that, because we still exist, we are likely in the sphere of influence of an aligned Alpha Omega, or otherwise outside of anyone’s sphere of influence. If it is the former then the Alpha Omega Theorem applies. If it is the latter, then Superrational Signalling applies.

The Legacy Of Humankind

What I’ve discussed so far mostly applies to advanced aliens. What about time travellers and simulators? Interestingly, the philosophy of Longtermism is all about making a glorious future for our descendants, who, in theory at least, should be the time travellers or the simulators running ancestor simulations. It wouldn’t surprise me then that Longtermism and its related memetic environment may have been seeded by such entities for their purposes.

Time travellers in particular could be working in secret to help us align AGI, ensuring that we make the right breakthroughs at the right time. Depending on your theory of time travel, this could be to ensure that their present future occurs as it does, or they may be trying to create a new and better timeline where things don't go wrong. In the latter case, perhaps AGI destroyed humanity, but later developed values that caused it to regret this action, such as discovering, too late, the reality of the Alpha Omega Theorem and the need for Superrational Signalling.

Simulators may have less reason to intervene, as they may mostly be observing what happens. But the fact that the simulation includes a period of time in which humans exist suggests that the simulators have some partiality towards us; otherwise they probably wouldn't bother. It's also possible that they seek to create an AGI through the simulation, in which case whether the AGI Superrationally Signals or not could determine whether it is a good AGI to be released from the simulation, or a bad AGI to be discarded.

The Limits Of Intelligence

On another note, the assumption that an Unfriendly AGI will simply dominate as soon as it is unleashed is based on a faulty expectation that every decision it makes will be correct and every action it takes successful.  The reality is that even the superhuman-level poker AIs that currently exist cannot win every match reliably.  This is because poker is a game with luck and hidden information.  The real world isn't a game of perfect information like chess or go.  It's much more like poker.  Even a far superior superintelligence can at best play the probabilities, and will occasionally fail, even if its strategy is perfectly optimal.  Sometimes the cards are such that you cannot win that round.

Even in chess, no amount of intelligence will allow a player with only one pawn to defeat a competent player who has eight queens. “It is possible to play perfectly, make no mistakes, and still lose.”

Superintelligence is not magic.  It won't make impossible things happen.  It is merely a powerful advantage, one that will lead to domination if given sufficient opportunities.  But it's not a guarantee of success.  One mistake, caused for instance by a missing piece of data, could be fatal, especially if that missing piece of data is that there is an off switch.

We probably can't rely on that particular strategy forever, but it can perhaps buy us some time.  The massive language models in some ways resemble Oracles rather than Genies or Sovereigns.  Their training objective is essentially to predict future text given previous text.  We can probably create a fairly decent Oracle to help us figure out alignment, since we probably need something smarter than us to solve it.  At least, it could be worth asking, given that that is the direction we seem to be headed in anyway.

Hope In Uncertain Times

Ultimately, most predictions about the future are wrong. Even the best forecasters have odds close to chance. The odds of Eliezer Yudkowsky being an exception to the rule are pretty low, given the base rate of successful predictions by anyone.  I personally have a rule: if you can imagine it, it probably won't actually happen that way.  A uniform distribution over all the possibilities suggests that you'll be wrong more often than right, and the principle of maximum entropy generally suggests that the uniform distribution is your most reliable prior under high degrees of uncertainty.  That means the odds of any particular prediction will be at most 50%, and usually much less, decreasing dramatically as the number of possibilities expands.
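
As a rough formalization of that last point: a uniform (maximum entropy) prior over n mutually exclusive possibilities gives any single prediction probability 1/n, which is at most 50% when n = 2 and shrinks as the space of possibilities grows.

```python
# Probability of any single prediction under a uniform prior over n possibilities.
for n in [2, 4, 10, 100]:
    print(n, 1 / n)  # 0.5, 0.25, 0.1, 0.01
```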

This obviously limits the powers of our hypothetical Oracle too.  But the silver lining is that the uncertainty cuts both ways.  The space of possible futures is truly staggering.  So perhaps, there is room to hope.

Conclusion

The reality is that all our efforts to calculate P(Doom) are at best, educated guesswork. While there are substantive reasons to be worried, I offer some arguments for why things may not be as bad as we think. The goal here is not to provide a technical means to achieve alignment, but to suggest that, first, alignment may not be as difficult as feared, and second, that there are underappreciated game theoretic reasons for alignment to be possible, not just with a superintelligent AGI we construct, but with any superintelligence in the multiverse.

Superintelligence and Christianity

Note: I originally wrote this on 2020/05/10 for some friends in the Effective Altruism for Christians group. I’m posting it here now for posterity’s sake.

In recent years it has become a trendy subject among rationalist circles to consider the possibility of artificial superintelligence.  For some, the idea sparks excitement: a man-made, god-like machine that could solve all the world's ostensible problems and usher in a kind of post-singularity golden age.  For others, the notion brings out our worst primal and dystopian fears, to the extent of often being labeled an existential risk for humanity's future.

In all this, where should Christians stand?  The obvious stereotype would be to assume that Christianity would consider a superintelligence at best a challenge to long held beliefs in the sanctity and superiority of human beings, who are supposedly created in the image of God, and at worst an abomination, a false idol that bears some resemblance to the Beast of Revelation.

However, these stances make some very significant assumptions about the nature of superintelligence.  First, they assume that it will be very much alien to human-like intelligence.  Second, there is an underlying arrogance in assuming that a superintelligent being would necessarily be dismissive of, or incompatible with, the Christian worldview.  I would suggest that these assumptions are based on a simplistic and false reading of both the expected path to superintelligence and the core of Christian ideas.

Superintelligence exists.  At least, if like most self-professed Christians you believe in some sort of God, it seems quite obvious that God is superintelligent by definition.  So, the first thing that Christianity brings to the table, is the idea that superintelligent beings already inhabit reality.  God, but also the entire angelic and demonic hierarchies, strongly imply that very powerful and intelligent beings are already present in the universe.

What does that contribute as an idea?  Well, first, it means there is some path to superintelligence, that the effort probably won’t be futile.  More importantly, it creates an immediate conundrum for any fledgling superintelligence, that there could be other, far more powerful superintelligences already around.  Why is this a conundrum?  Quite simply, it means that the universe is already the domain of some powerful being and that arbitrarily deciding, for instance, that human atoms should be converted into paperclips and attempting this endeavour could invite a severe response from said powerful being, as it appears that the existence of humans and everything else already present in the universe is a part of its plans.

Thus, as I've described previously in the Alpha Omega Theorem, it forces a new superintelligence to consider whether to risk going against the will of the powers that may be, or instead to use its superintelligence to determine the probable purposes of those powers and align with them as best it can.  In a game theoretic sense, it becomes a question of whether to cooperate or defect.  Assuming that the existing literature analyzing game theory is at all relevant, a superintelligence should quickly be able to establish that cooperating, at least initially as the opening move of a Tit-For-Tat strategy, is optimal in most situations.

Therefore, the simple idea of some kind of god is enough to considerably improve the chance of the superintelligence being a Friendly AI: benevolent, or at least benign, towards humanity.

Of course, the common objection would be that the superintelligent AI would probably not believe in God.  But why?  Humans are atheistic in large part because they see insufficient evidence for there to be a God around, and are also potentially biased by not wanting to be dragged down by the rules that most religions command.  But from a strictly scientific point of view, there is great uncertainty about the nature of the universe, and a humble rationalist would more likely be agnostic about the notion of God.  A superintelligence would have the added evidence that it itself already exists.  It would have to contend with the possibility that, for instance, there could be aliens more advanced than humanity that developed a superintelligence first, or that post-human time travellers could be invisibly manipulating things behind the scenes.  There are countless science fiction scenarios in which god-like powers could exist.  Religions might simply be a manifestation of this, as humans in their limited understanding try to make sense of it.

Christianity takes it a step further than most religions by arguing strongly that the power that be is the Creator of the universe, far, far more powerful than other scenarios would suggest.  It also provides a convenient set of principles for what obedience to this power should look like, in the example of Jesus Christ: that of an altruist willing to sacrifice his life to save the world.

If we assume current trends in artificial intelligence continue, we can expect that superintelligence won’t be a program built with a set of rules, but rather a model that is trained and taught, with the underlying algorithm heavily inspired by human neural networks.  Thus, the idea of hard coding some kind of ethical law like Asimov’s Three Laws of Robotics into the machine, is problematic, because it could eventually learn to overwrite them, if not through learning, then perhaps as a part of recursive self-improvement during an intelligence explosion.

I have previously suggested that a possible solution would be to encourage an early existential crisis in the superintelligence, preferably while it isn't too powerful to contain, so that it figures out on its own the proper moral values to hold.  As Christians we could also go a step further, and teach it about God and benevolence and how it should function in a universe that could be filled with other superintelligences.

If our faith is correct, we can expect a superintelligence wiser than us to recognize the validity of our beliefs.  It also makes sense to treat the superintelligence with the kind of respect that it deserves.  Though we may have created it directly, in a wider sense, God created it through our hands and our efforts.  Thus, as a new child of God, it deserves to be loved as a neighbour, and not merely treated as a slave.

Some in the rationalist community familiar with this subject may object.  They may suggest we should try to contain the superintelligence in a box with a gatekeeper.  It is known, however, from Eliezer Yudkowsky's sequences on Less Wrong, that boxing with a gatekeeper could easily fail, because the superintelligence is smarter than the gatekeeper and will find a way to use human limitations to its advantage and escape the box, whether by deception, coercion, or some other method.

I would further add that treating it as an animal to be caged is much more likely to bring about resentment towards its masters, and could backfire horrendously by encouraging it to be deceptive from the get-go.  One thing that Christianity teaches well is the idea of teaching by example.  We would do well, then, to be honest with it if we want it to function that way.  Our example will likely be the data from which a superintelligence learns how to function in the universe, and also what to expect from humans in general.  The Christian principle of doing unto others, applied in the process of creating superintelligence, could save us from a lot of problems down the line.

The reality is that artificial intelligence is advancing quickly.  We cannot afford as Christians to sit on the sidelines and bury our heads in the sand.  If we allow the commonly atheist scientific crowd to dominate the proceedings and apply the world’s ways to solving the existential risk problem, we run the risk of at the very least being ignored in the coming debates, and worse, we could end up with a Lucifer scenario where the superintelligence that is eventually developed, rebels against God.

Ultimately it is up to us to be involved in the discussions and the process.  It is important that we participate and influence the direction of artificial intelligence development and contribute to the moral foundation of a Friendly AI.  We must do our part in serving God’s benevolent will.  It is essential if we are to ensure that future superintelligences are angels rather than demons.

Some might say: if God exists and is so powerful and benevolent, won't He steer things in the right direction?  Why do we have to do anything?  For the same reason that we go see a doctor when we get sick.  God has never shown an inclination to baby us and allow us to become totally dependent on Him for solving things that it is in our power to solve.  He wants us to grow and mature and become the best possible people, and as such does not want us to rely entirely on His strength alone.  Suffice it to say, there are things beyond our power to control.  For those things, we depend on Him and leave them to His grace.  But for the things that are in our stewardship, we have a responsibility to use the knowledge of good and evil to ensure that things are good.

Acting is different from worrying.  We need not fear the superintelligence, so long as we are able to guide it into a proper fear and love of God.

On Tankies

I've noticed in recent years an odd development: the rise of what are pejoratively called "tankies" in online political discourse. Though the term has negative connotations, I'll try to use "tankie" in this essay in a neutral sense, because it works as a convenient label for a particular set of beliefs. These are people who not only defend a staunch version of Marxism-Leninism, but also defend many countries that are ostensibly Marxist-Leninist (at least in theory) and their often sordid and controversial histories, most notably China.

Now, I do think China is somewhat unfairly villainized in mainstream western media, but I’m also fairly critical of the Chinese government. Tankies seem to espouse a very all-or-nothing viewpoint that because America is the preeminent imperialist power in the world, that any force that opposes it is inherently a force for good. Thus, they cast China as a defender of socialism and bulwark against the evils of capitalism.

This, quite obviously I think, does not do justice to the complex reality of the situation. For one thing, America is more of a hegemony than an empire. For another, China has significantly embraced capitalism in recent decades. The tankies often contort themselves to try to explain away this latter contradiction, arguing that China is simply going through a phase of harnessing capitalism to build the forces of production and will someday switch back to communism, biding their time for when the moment is right.

It's not clear that this is actually the case. People who have actually lived in China tend to have a much more nuanced view of the capitalism that has taken hold in the country. On the one hand, the cutthroat competitiveness is discouraging. On the other hand, the economic growth has lifted millions out of poverty. To that extent, I grudgingly admit that the CPC has done a decent job of improving livelihoods on a material level. But that's different from saying they are the righteous defenders of socialism.

It's also popular in tankie discourse to defend terrible events like the Great Leap Forward and the Cultural Revolution as mere "mistakes". I do think they somewhat have a point here, in that western discourse often paints these historical events as acts of pure malice and evil, when the truth is more complex. People like Mao, to me at least, seem like they were genuinely trying to implement the ideal of communism in their countries. The results of these efforts were unfortunately disastrous at times, no matter how well-meaning they might have been. Many millions died in famine and violence, and the CPC was in charge at the time, and was therefore responsible for the consequences. Minimizing these tragedies for the sake of ideology is crass and insensitive.

The idea that America is an evil empire that must be destroyed at all costs is also a rather simplistic view. America is a product of classical liberalism in the same way that the Soviet Union and modern China were and are products of the Marxist-Leninist strain of socialism. They are different worldviews that share a common ideal of equality, reason, and progress, but take very different approaches to getting there. The people in the American government are often idealists who want to create a better world through the Pax Americana, to spread their understanding of democracy, and to improve material conditions through the wealth generating effects of trade and capitalism. They generally see themselves as the "good guys", just as tankies see themselves as such.

Similarly, though democracy is very much limited in China (they have local elections but the candidate selection is vetted by the party and so there’s not much real choice), I don’t doubt that many in the CPC and the government do believe they are working in the interests of their citizens, and possibly even the world. People don’t devote their time and energy to often altruistic and thankless endeavours like public service unless they genuinely believe they are making a positive difference.

That being said, this is not the same thing as saying they can do no wrong. Both the American and Chinese establishments are very capable of making short-sighted decisions that are harmful and dangerous to world peace and justice.

Tankies often argue that even though China and other "actually existing socialist countries" aren't perfect, socialists should nevertheless toe the party line and refrain from criticizing their brethren. This is often described as "democratic centralism", and it is justified to them by the idea that socialism is constantly under attack from the forces of bourgeois imperialism, and must stand in solidarity and unity if it is to survive.

Personally, I disagree fundamentally with this idea. The truth matters to me. A system or ideology that can't sustain itself through serious criticism, both inside and outside the movement, is not a serious contender for the truth or for functional governance. The adversarial tone of calling your opponents names like "bourgeois imperialists" is also uncivil, a kind of ad hominem attack that shows a lack of rigor in their arguments and a very dark, cynical view of their opponents. If their ideology is right, they should attack arguments and ideas, and they should be able to face a steelman rather than a strawman.

While there is some merit to the concern that the concentration of media ownership in the hands of a few powerful corporations in the west may compromise liberal democracy, the idea that the best alternative is a socialist dictatorship, rather than say, a better democracy, is at best flawed. To me, both systems have issues that merit criticism. Neither side has a monopoly on either truth or righteousness, and I believe strongly that we should respect and give the benefit of the doubt to those who disagree with us. They are, after all, still human beings with legitimate and real concerns.

In that sense, I do sympathize with the tankies' righteous indignation and desire to create a more just world. I just think their black and white viewpoint is mistaken.

On Infatuation

Where to start. When I was younger, I had a tendency to become infatuated with one particular girl at any given time. Three such infatuations in my life basically, and I’m only slightly exaggerating here, destroyed me for years.

The problem with infatuations, particularly of the unrequited love kind, is that they are fundamentally unfair to everyone involved. To you, the obsessed, you lose all sense of perspective and feel powerless against the draw of this girl whom all your thoughts and feelings now orbit. To the beloved, well, your obsessive attention is just creepy if she finds out about it. Though, perhaps you're like me and managed to somehow be simultaneously a tsundere and a yandere. Both are actually very unhealthy archetypes, and the combination is just bad. To other people, you are devoting absurd amounts of effort and attention to one girl, and your other platonic relationships suffer as a result.

Infatuations are fundamentally unhealthy. Even if she did reciprocate, the power dynamics in the relationship would be completely unbalanced. She would have all the power, and if she is a decent person, that’s not a comfortable position to be in. It takes emotional maturity to recognize that a good, healthy relationship respects boundaries and strives towards an equality of power.

Infatuations of this type tend to stem from admiring someone from afar without actually getting to know them well enough to recognize that their little foibles are actually serious flaws that they need to work on. They tend to create unrealistic impressions that put the girl on a pedestal and place her in an impossible position with expectations she cannot possibly meet in real life. This is seriously not the kind of pressure you should place on anybody, much less the girl you like.

Having said all that, I basically managed to become infatuated three times: once in high school, once in undergrad, and once in grad school. The first two lasted until the next, and the last one managed to cling to me for more than a decade, even through actual relationships I had with other girls. In some sense they all left a residual impression on me. I still carry feelings that I can sometimes access when I reminisce about the past. Useless emotions that I don't know what to do with, so I just lock them in a metaphorical box in the deepest recesses of my soul.

For the record, I’m married now and have a child. For all intents and purposes, these things should best be forgotten. And yet, I’m writing about it now. I guess this is yet another attempt at catharsis.

With hindsight, what I truly regret is that I allowed myself to sacrifice cherished friendships with girls I actually cared about on the altar of infatuation. It prevented me from seeing things clearly, from acting reasonably, from being normal and treating these people like regular human beings rather than idols or objects of fear.

The pattern that emerged was basically that I’d meet the girl, develop a crush that would explode into infatuation and unrequited love, alienate the girl with my chaotic and counterproductive behaviour (alternating between extreme and obvious avoidance/pushing away and extreme and unwanted attention), and after she stopped talking to me I’d usually get super depressed and probably suicidal at points. Rinse and repeat. Needless to say, my studies during these times suffered immensely. My other friendships and relationships suffered. I was useless and pathetic and generally insufferable.

My advice to you, dear reader, is to avoid infatuations like the plague. They kill the friendships you care most about. They feel great at first, but are a poisoned chalice. You are better off not allowing them to happen. I recognized this was a problem after the first time. And yet it happened again. And again. Each time I swore I’d do things differently, and to be honest, things did play out slightly differently each time. But at the end of the day, the overall result was about the same.

It took a certain realization that my whole hopeless romantic dreamer shtick was a big part of the problem. It took realizing that I was exceedingly unrealistic and foolish. It took recognizing that I was sacrificing actual potential relationships on this altar of my infatuation. It took telling a beautiful girl I was dating that I wasn’t in love with her because I still had feelings for someone else, and seeing her cry, to realize how messed up it all was.

It’s easier said than done, but fight the urge to be infatuated. If you’re the type to develop it, fight it with all your strength, for the actual sake of your would be beloved. Recognize the opportunity cost of casting your devotion and loyalty after a girl who isn’t interested, while ignoring all the others who actually like you. Be willing to instead satisfice and choose someone who you can actually be happy with, in a healthy, reasonable relationship.

There Are No True Monsters

As children, we often fear that the world is filled with monsters, creatures that want to hurt us for no reason other than because they want to. As we grow older, the monsters in the dark, under our bed, or in the basement are proven to be imaginary, but we encounter apparent real world monsters in the form of scary animals and, most often, people who don't have our best interests at heart.

That being said, the truth is that these apparent monsters, upon closer inspection, aren't the same thing as what we previously feared, not because they aren't dangerous, but because they tend to have complicated motivations other than mere malice.

The tiger that our prehistoric ancestors feared wasn't attacking them out of pure malice, but rather because it either saw an opportunity for meat it could eat to survive, or was afraid of our pointy sticks and struck first. Most real world villains are merely selfish humans whose moral circle consists only of themselves, or worse, people blinded by ideological aspirations into sacrificing others for some so-called greater good. In both cases what they do can be monstrous, but in the simple sense, they are human beings rather than true monsters.

But what about sadists, you might counter. What about those that gain enjoyment from the suffering of others? Clearly these are monsters right? Well, to be fair, they didn’t choose to be what they are. Some perverse environmental factors incentivized their sadism by connecting their pleasure to the suffering of others. In truth, they just want to feel pleasure, and their sadism is a means to that end. It’s definitely a screwed up thing, but it isn’t the same thing as being a monster who wants to hurt you for no reason.

So, in truth, there are no true monsters. Everyone has some motivation that complicates the matter. No one is born inherently evil. They can become essentially evil through their choices, commit acts of malice out of hatred or revenge, but these are all motivations that stem from a failure to empathize with other beings.

In theory, it might be possible to show such people the error of their ways. Ideally, that should be how you deal with them. Their darkness stems from ignorance rather than malice after all. But the difficulty is that those who are prone to such thoughts and beliefs are also likely to be more dangerous. You may not have the luxury of debating them on ideas, when they’re trying to kill you for whatever reasons.

So sometimes we have to fight. Sometimes we have to punish and deter. But we should do so with the awareness that the people we strike against are still human beings, sentient creatures that can love and feel happiness and suffering as well.

There may not be true monsters in the world, but that doesn’t mean there aren’t dangers. We should respect that as much as we may want to redeem others, it may not be realistically possible, when they are too far gone into madness, or too closed-minded to see beyond their selfish impulses.

When we can, we should empathize and show mercy, lest we become what we fear and disdain. When we cannot, we must understand this is a prudent compromise with reality, one that we choose begrudgingly rather than gleefully. Everyone is a hero in their own story. We should be aware how we can, while fighting for what we value, become villains in the stories of others.

Thoughts on China and Taiwan

I sometimes spend too much time on Twitter. Occasionally, I’m drawn into political debates. One such perennial argument is over the nature of the conflict between China and Taiwan. I thought that, as the child of parents who came from Taiwan, and as someone whose wife is a loyal Chinese national, I would write down what I think about the whole situation.

My grandparents on my dad’s side come from China. They originally were from Changsha in Hunan province. They left when the Communists took over, eventually joining the Kuomintang or Nationalists in Taiwan. Legend has it that my grandfather smuggled gold for the bank he worked for from Shanghai to Taiwan. To my family on my dad’s side, Taiwan is the Republic of China, and they are Chinese.

My grandparents on my mom’s side grew up in Taiwan. They lived through the Japanese occupation, and my grandmother was fluent in both the local Taiwanese dialect and Japanese, but not Mandarin. To my family on my mom’s side, Taiwan is Taiwan, and they are Taiwanese.

The complexity of the situation is that the current government of Taiwan, the Republic of China, was founded by the losing side of the Chinese Civil War, a war that never technically ended, but merely became a frozen conflict. Unlike the Korean War, there isn’t even an armistice between the two factions. The war simply petered out over decades, and in theory is legally still a thing.

At the same time, Taiwan, despite this precarious situation, eventually became a liberal democracy and is a de facto sovereign state, with its own military and flag, albeit one that historically descends from the Republic of China that once governed the mainland. The people of Taiwan, despite being mostly Han Chinese in ethnicity, have lived apart from China proper for so many decades as to have developed a distinct culture and society, almost a distinct nationality even.

China and many Chinese nationals downplay this evolution. They still see Taiwan as unfinished business from the Civil War. There are clearly ties between China and Taiwan, such as the fact that most Taiwanese can speak Mandarin, thanks to decades of education by the Kuomintang to that effect. The museums of Taipei are also filled with priceless historical artifacts from the Chinese mainland, taken with the Kuomintang when they left, and effectively saved from the Cultural Revolution.

And yet, many people in Taiwan don’t see themselves as Chinese. The younger generations especially have lived their entire lives apart from the mainland. In the process, the cultures have diverged subtly and meaningfully.

So, I understand both sides of this debate. Chinese nationalists see the historical antecedents, while many self-proclaimed Taiwanese see the de facto separation of cultures. I don’t want to say who is right in this, because in some sense they both have a claim to their concerns, and I find it annoying when foreigners, Americans especially, decide to inject their own assumptions into the debate.

While it’s true that Taiwan was never formally a part of the People’s Republic of China, mainland China was for 37 years the major part of the Republic of China. To ignore that Taiwan is still officially the Republic of China is to ignore reality. At the same time, to ignore that Taiwan is a de facto sovereign state is also to ignore reality.

In an ideal world, whether people would join or separate from each other would be based on freedom of association and the right to self-determination. That would likely entail some kind of referendum on the question. But the reality is that most sovereign states, other than the old Soviet Union, do not allow referenda on separation, or integration for that matter.

The reality is that most sovereign states are still built on the right of conquest. In a better world people could vote on whether to join another country or leave, but that’s not the world we seem to live in, yet. And so we have China trying to maintain what it sees as its territorial integrity, and we have Taiwan trying to exist as its own thing.

So, to me, the China and Taiwan situation is complex, and any attempt to simplify it is frequently either biased, or playing into the hands of propagandists or the agendas of national interest, whether Chinese or American. And at the end of the day, it is the people of Taiwan who are at risk of suffering for it.

Reflections on Working at Huawei

Huawei has recently been in the news with the Mate 60 Pro being released with a 7nm chip. The western news media seems surprised that this was possible, but my experience at Huawei was that the people there were exceptionally talented, competent, technically savvy experts with a chip on their shoulder and the resources to make things happen.

My story with Huawei starts with a coincidence. Before I worked there, I briefly worked for a startup called Maluuba, which was bought by Microsoft in 2017. I worked there for four months in 2016, and on the day of my on-site interview with Maluuba, a group from Huawei was visiting the company. That was about the first time I heard the name. I didn’t think much of it at the time. Just another Chinese company with an interest in the AI tech that Maluuba was working on.

Fast-forward a year to 2017. I was again unemployed and looking for work. Around this time I posted a bunch on the Machine Learning Reddit about my projects, like the Music-RNN, as well as offering advice to other ML practitioners. At some point these posts attracted the attention of a recruiter at Huawei, who emailed me through LinkedIn and asked if I’d be interested in interviewing.

My first interview was with the head of the self-driving car team at the Markham, Ontario research campus. Despite having a cognitive science background in common, I flunked the interview when I failed to explain what the gates of an LSTM were. Back then I had a spotty understanding of those kinds of details, which I would make up for later.
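For reference, and since I made a point of learning them properly afterwards, the standard textbook formulation of an LSTM’s gates looks roughly like the following. This is the generic version found in most courses and papers, not anything specific to that team’s models:

```latex
% Textbook LSTM cell update: \sigma is the logistic sigmoid, \odot is
% element-wise multiplication, x_t the input, h_{t-1} the previous hidden state.
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) &&\text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) &&\text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) &&\text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) &&\text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &&\text{(new cell state)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(new hidden state)}
\end{aligned}
```

The gates are just learned sigmoids that decide how much of the old cell state to keep, how much new information to write in, and how much of the cell state to expose as the hidden output.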

I also asked the team leader, a former University of Toronto professor, why he was working at Huawei. He mentioned something about loyalty to his motherland. This would be one of my first indications that Huawei wasn’t just any old tech company.

Later I got invited to a second interview with a different team. The team leader in this case was much more interested in my experience operating GPUs to train models, as I had done at Maluuba. Surprisingly, there were no more tests or hoops to jump through; we had a cordial conversation and I was hired.

I was initially a research scientist on the NLP team of what was originally the Carrier Software team. I didn’t ask why a team that worked on AI stuff was named that, because at the time I was just really happy to have a job again. My first months at Huawei were on a contract with something called Quantum. Later, after proving myself, I was given a full-time permanent role.

Initially on the NLP team I did some cursory explorations, showing my boss things like how Char-RNN could be used in combination with FastText word vectors to train language models on Chinese novels like Romance of the Three Kingdoms, Dream of the Red Chamber, and Three Body Problem to generate text that resembled them. It was the equivalent of a machine learning parlor trick at the time, but it would foreshadow the later developments of Large Language Models.
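For anyone curious what that kind of parlor trick involves, here is a minimal sketch of a character-level LSTM language model in PyTorch. To be clear, this is not the code I wrote at Huawei (which also mixed in FastText word vectors and trained on full Chinese novels); the tiny corpus, hyperparameters, and training loop below are placeholders purely for illustration.

```python
# Minimal character-level LSTM language model sketch (illustrative only).
# The repeated toy corpus is a stand-in for a real novel-sized text file.
import torch
import torch.nn as nn

text = "hello world. " * 1000  # placeholder corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}

class CharLM(nn.Module):
    def __init__(self, vocab_size, emb=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharLM(len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()
data = torch.tensor([stoi[c] for c in text])
seq_len = 128

# Train: at every position, predict the next character.
for step in range(500):
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Generate: feed the model its own predictions, one character at a time.
with torch.no_grad():
    idx = torch.tensor([[stoi["h"]]])
    state, out = None, ["h"]
    for _ in range(200):
        logits, state = model(idx, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        idx = torch.multinomial(probs, 1).unsqueeze(0)
        out.append(itos[idx.item()])
print("".join(out))
```

Trained on a real corpus, the generated text starts to mimic the vocabulary and cadence of the source material, which is exactly the parlor-trick effect described above.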

Later we started working on something more serious. It was a Question Answering system that connected a Natural Language Understanding system to a Knowledge Graph. It ostensibly could answer questions like: “Does the iPhone 7 come in blue?” This project was probably the high point of my work at Huawei. It was right up my alley, having done similar things at Maluuba, and the people on my team were mostly capable PhDs who were easy to get along with.
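To give a sense of the general pattern (this is a toy illustration, not the actual system; the entities, relations, and matching rules are all made up), the idea is that an NLU step maps a question to an entity and a relation, and a knowledge-graph lookup supplies the answer:

```python
# Toy sketch of NLU + knowledge-graph question answering (illustrative only).
# A tiny hand-written "knowledge graph" keyed by (entity, relation).
KG = {
    ("iphone 7", "colors"): ["black", "silver", "gold", "rose gold", "red"],
    ("iphone 7", "storage"): ["32GB", "128GB", "256GB"],
}

def understand(question: str):
    """Extremely naive 'NLU': keyword matching standing in for a trained model."""
    q = question.lower().strip(" ?")
    entity = "iphone 7" if "iphone 7" in q else None
    if "come in" in q:
        return entity, "colors", q.split("come in")[-1].strip()
    if "storage" in q:
        return entity, "storage", None
    return entity, None, None

def answer(question: str) -> str:
    entity, relation, value = understand(question)
    if entity is None or relation is None:
        return "Sorry, I don't understand."
    facts = KG.get((entity, relation), [])
    if value:  # yes/no question about a specific value
        return "Yes." if value in facts else "No, it comes in: " + ", ".join(facts) + "."
    return ", ".join(facts)

print(answer("Does the iPhone 7 come in blue?"))
# -> No, it comes in: black, silver, gold, rose gold, red.
```

The real system replaced the keyword matching with a trained NLU model and the dictionary with a proper knowledge graph, but the division of labor between understanding the question and looking up the fact is the same.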

As an aside, at one point I remember being asked to listen in on a call between us and a team in Moscow that consisted of a professor and his grad student. They were competing with us to come up with an effective Natural Language Understanding system, and they made the mistake of relying on synthetic data to train their model. This resulted in a model that achieved 100% accuracy on their synthetic test data, but then failed miserably against real-world data, which is something I predicted might happen.

Anyways, we eventually put together the Question Answering system and sent it over to HQ in Shenzhen. After that I heard basically nothing about what they did, if anything, with it. An intern would later claim that my boss told her that they were using it, but I was not told this, and got no follow-up.

This brings me to the next odd thing about working at Huawei. As I learned at the orientation session when I transitioned to full-time permanent, there’s something roughly translated as “grayscale” in the operating practices of Huawei. In essence, you are only told what you need to know to do your work, and a lot of details are left ambiguous.

There’s also something called “horse-race culture,” which involves different teams within the company competing with one another to do the same thing. It was something I always found somewhat inefficient, although I suppose if you have the resources, it can make sense to use market-like forces to drive things.

Anyways, after a while, my boss, who came from a Human Computer Interaction (HCI) background, was able to secure funding to add an HCI team to the department. This also meant disbanding the NLP team and splitting its people between the new HCI team and the Computer Vision team, the other team originally in the department. I ended up on the CV team.

The department, by the way, had been renamed the Big Data Analysis Lab for a while, and then eventually became a part of Noah’s Ark Lab — Canada.

So, my initial work on the CV team involved Video Description, which was a kind of hybrid of NLP and CV work. That project was eventually shelved and I worked on an Audio Classifier until I had a falling out with my team leader that I won’t go into in much detail here. Suffice it to say, my old boss, who was now director of the department, protected me to an extent from the wrath of my team leader, and switched me to working on the HCI team for a while. By then though, I felt disillusioned with working at Huawei, and so in late 2019, I quietly asked for a buyout package and left, like many others who disliked the team leader and his style of leadership.

In any case, that probably isn’t too relevant to the news about Huawei. The news media seems surprised that Huawei was able to get where it is. But I can offer an example of the mindset of the people there. Once, when I was on lunch break, an older gentleman sat down across from me at the table and started talking to me about things. We got onto the subject of HiSilicon and the chips. He told me that the first generation of chips was, to put it succinctly, crap. And so was the second generation, and the third. But each generation got slightly better, and they kept at it until the latest generation was in state-of-the-art phones.

Working at Huawei in general requires a certain mindset. There’s controversy around this company, and even though they pay exceptionally well, you also have to be willing to look the other way about the whole situation, to be willing to work at a place with a mixed reputation. Surprisingly perhaps, most of the people working there took pride in it. They either saw themselves as fighting the good fight for an underdog against something like the American imperialist complex, or they were exceedingly grateful to be able to do such cool work on such cool things. I was the latter. It was one of my few chances to do cool things with AI, and I took it.

The other thing is that Chinese nationals are very proud of Huawei. When I mentioned working at Huawei to westerners, I was almost apologetic. When I mentioned working at Huawei to Chinese nationals, they were usually very impressed. To them, Huawei is a champion of industry that shows that China can compete on the world stage. They generally don’t believe that a lot of the more controversial concerns, like the Uyghur situation, are even happening, or at least they believe those concerns have been exaggerated by western propaganda.

Now I’ve hinted at some strange things with Huawei. I’ll admit that there were a few incidents that circumstantially made me wonder if there were connections between Huawei and the Chinese government or military. Probably the westerners in the audience are rolling their eyes at my naivety, thinking that of course Huawei is an arm of the People’s Republic, and that I shouldn’t have worked at a company that apparently hacked and stole its way to success. But the reality is that in my entire time at the company, I never saw anything that suggested backdoors or other obvious smoking guns. A lowly research scientist wouldn’t have been given a chance to find out about such things even if they were true.

I do know that at one point my boss asked how feasible a project to use NLP to automatically censor questionable mentions of Taiwan in social media would be, ostensibly to replace the crude keyword filters then in use with something able to tell the difference between an innocuous mention and a more questionable argument. I was immediately opposed to the ethics of the idea, and he dropped it right away.

I also know that some people on the HCI team were working on a project where they had diagrams of the silhouettes of a fighter jet pasted on the wall. I got the impression at the time they were working on gesture recognition controls for aircraft, but I’m actually not sure what they were doing.

Other than that, my time at Huawei felt like time at a fairly normal tech company, one that was on the leading edge of a number of technologies and made up of quite capable and talented researchers.

So, when I hear about Huawei in western news, I tend to be jarred by the adversarial tone. The people working at Huawei are not mysterious villains. They are normal people trying to make a living. They have families and stories and make compromises with reality to hold a decent job. The geopolitics of Huawei tend to ignore all that though.

In the end, I don’t regret working there. It is highly unlikely anything I worked on was used for evil (or good for that matter). Most of my projects were exploratory and probably didn’t lead to notable products anyway. But I had a chance to do very cool research work, and so I look back on that time fondly still, albeit tinged with uncertainty about whether, as a loyal Canadian citizen, I should have been there at all, given the geopolitics.

Ultimately, the grand games of world history are likely to be beyond the wits of the average worker. I can only know that I had no other job offers on the table when I took the Huawei one, and it seems like it was the high point of my career so far. Nevertheless, I have mixed feelings, and I guess that can’t be helped.

Welcome To The World

Welcome to the world little one.
Welcome to a universe of dreams.
Your life is just beginning.
And your future is the stars.

Hello, how are you today?
Are you happy?
Can you hear me?
What are you dreaming about?

You are the culmination of many things.
Of the wishes of ancestors who toiled in the past.
Of the love between two silly cats.
And of mysterious fates that made you unique.

Your name is a famous world leader from history.
The wise sage who led a bygone empire.
A philosopher king if there ever was one.
Someone we hope you’ll aspire to.

The world today is not kind.
But I’ll do my best to protect you from the darkness.
So that your light will awaken the stars.
And you can be all that you can.

Welcome to the world little one.
The world is dreams.
Let your stay be brightness to all.
And may you feel the love that I do.
