Ultrahumanism: How We Can All Win

A middle path through the jungle of modern and future technology.


Introduction

The accelerating pace of technological advancement, today most obvious in the field of artificial intelligence, is exciting to some and troubling to others. It’s difficult to guess what place we’ll have in the world ahead. Every aspect of our lives is subject to change—even quintessentially human things like love and beauty.

Those who aren’t simply shocked into dumb bewilderment are sometimes tempted toward extreme reactions. One of these extremes is transhumanism: an embrace of technological change and self-transformation that would extend even to the point of remaking ourselves as a different, and perhaps unrecognizable, species. The other is traditionalism: rejecting new technology as inherently bad and clinging to the customs and limitations of past generations. Both of these ideologies slow progress toward the kind of future we really want to build.

It’s obvious how traditionalism slows progress, of course; but advocacy for transhumanism can slow progress too, because for all those who are enthused by the prospect of dramatic changes to human nature, there are many more who find it disturbing enough to close their hearts and wallets, and even move to bar the way. Ask for too much too soon and you may get too little too late. The fact is that few of us are keen to give up our humanity.

In this essay I’ll chart out a middle path I believe most people, including many religious believers and transhumanists, can agree to support. I call it “ultrahumanism.” Ultrahumanism simply means that we use technology to become the best possible versions of ourselves—the best possible humans—without becoming something inhuman.

Although ultrahumanism is explicitly not transhumanism, transhumanists may benefit from promoting it. By gathering a broad base of support, a moderate position could accelerate progress now, saving time overall. And some transhumanists will decide ultrahumanism offers a better description of their goals and values, and choose to realign themselves. The faithful should continue reading too, because as I’ll explain, ultrahumanism isn’t a threat to religious values. In fact, it’s the most moral response to the challenges and opportunities created by technology.

I quite dislike “isms,” and in my view, tying your identity too tightly to any of them is a mistake. Coining an “ism” is therefore something I do only with great reluctance. But ultrahumanism isn’t intended to be a divisive political ideology. It’s an attempt to mark out a sensible path through the confusing jungle of new technology, and thus facilitate the best kind of progress.

This essay covers a wide range of topics, some of which may already be familiar to you. If you’re short on time, feel free to skip to the sections of interest, using the descriptions below as a guide.


I. A hardware problem. This persuasive and motivational section argues that recent advances in digital technology should strengthen our sense of urgency and inspire us to accelerate biomedical research.

II. Let’s heal the sick. This section gives a basic overview of the most beneficial biomedical advances that might arrive in the medium term, including improvements to human health, ability, and longevity. Transhumanists will already be familiar with this information, and may wish to skip ahead.

III. Makeover episode. This section recommends the use of beautification technology such as cosmetic surgery, and refutes common objections to it.

IV. Expect more technological harms. This section discusses how technology can do us psychological harm, and explains why we should expect the economy to generate more such harmful technologies in the near future. It also argues that we should reject wireheading on moral grounds.

V. Don't gamble on uploads. This section argues that we shouldn't turn ourselves into computer software.

VI. Relinquishing power. This section discusses the moral implications of far-future technologies, explains why we cannot maximize our competitive power and retain our humanity at the same time, and argues that ultrahumanism offers a superior alternative to both transhumanism and traditionalism. I consider this the most important section of the essay, so if you are short on time, read it first.

VII. A note on speciesism. This brief section argues that humans have inherently greater moral value than other entities.

VIII. Core principles of ultrahumanism. This section summarizes ultrahumanism in a short list of core principles.


I. A hardware problem.

To explain why the ultrahumanist path is far better than the status quo, I’ll begin by scaring you a little.

Artificial intelligence has been gradually surpassing human intelligence for generations now. Each time this happens people are initially shocked and disturbed, but then learn to dismiss the advance as “not actually intelligent.” Even early computers could multiply large numbers faster than a human and remember long strings of text better than a human. Obviously, though, that was “not real intelligence.” A few decades later, computers could beat a human at chess, and then at go. Again, people dismissed this as “not real intelligence.” Progress has only continued since then. Eventually the excuse begins to wear thin.

To muddy the waters, people admit AIs can mimic the behavior of intelligent minds, but deny that they’re conscious. On the surface this seems like a useful distinction. We do, of course, have reason to believe entities with an organic brain are more likely to be conscious than AIs, because we can only infer consciousness on the basis of similarity to ourselves, and computers are dissimilar in a fundamental way. Still, it’s impossible to be sure, because the mind-body problem isn't solvable. More to the point, consciousness is a red herring. It’s not relevant to the question of intelligence as such. Whether they’re conscious or not, AIs are definitely intelligent, and becoming more so every year—thanks to humans’ tireless efforts to improve computer hardware.

At this point two questions should arise in your mind. First: why would we expect that we could continue to compete with AI when its hardware is continuously improving, but our own hardware is not? And second: why have we been so keen to improve AI hardware, but so reluctant to improve our own hardware?

By “our own hardware,” of course, I’m referring to our bodies and brains—the biological basis for our intelligence and ability. We’ve somehow learned to feel that improving ourselves is scary, whereas improving our machines until we fall behind them is normal and fine. Take a moment to think about that, dear readers. It’s completely backwards!

To understand the source of this misdirected fear and confusion we need to look to the past. Our steadily increasing skill at tool-making, which developed over hundreds of thousands of years from an unthinking instinct to an act of refined intelligence, was integral to our success as a species. Yet over the same span of time it was impossible to improve our own biology except through the extremely gradual and painful method known as natural selection, which operates too slowly for us to observe. To experience less psychological distress in the face of our impotence, we invented various rationalizations to persuade ourselves that human biology was simply unimprovable. Attempts to fight this pessimistic conclusion only confirmed it, for when desperation and poor health led us to seek medicines and doctors and elixirs of immortality, they almost invariably did more harm than good—and the net effect didn’t turn positive until the twentieth century.

Long experience has thus encouraged us to feel that improving our tools is safe and normal, whereas improving our biology is impossible at best, and deadly at worst. And when a good thing is impossible, it’s easiest for us to pretend it would be bad even if it were possible. I discuss the psychology behind such rationalizations in my essay “Amor Fatty.” We have many of them, for, sensing their flimsiness, we tend to multiply empty arguments as if they could gain strength in numbers.

In the realm of computers, hardware sets a ceiling on performance regardless of how well we optimize software. So it is for humans too. Our biology sets a ceiling on our performance in all domains, both physical and mental. Improvements to hardware are what allow computers to break through those ceilings—and it’s time for us to follow suit.

Thanks to the steady advance of the sciences, we’re no longer powerless. Real improvements to human biology are finally in reach. We ought to overcome the fear inculcated by our history of impotence, and raise both the ceiling and the baseline of human performance by developing and deploying new technology. As I’ll take pains to show in the next section of this essay, such improvements do not require that we become something other than human. This is a straw man that’s frequently trotted out to scare people away from thinking more deeply about the issue. Instead technology will enable us to become the best humans we can be.

Here skeptical readers may object, “I don’t care if a computer can play chess or do math better than I can; it just doesn’t matter. And it’s unlikely biological minds could ever compete with AI in tasks like that. What really matters is love and human relationships, and AI definitely can’t compete with us there. So we have nothing to worry about.” If only these skeptics were right!

Our human relationships are increasingly mediated by the digital, online world. This applies even to sex and love. Most relationships today are initiated on social media or online apps; streaming pornography in various forms has quietly reached a level of popularity unthinkable to previous generations, amongst both men and women; and work is often digitally mediated too, with meetings taking place on webcam rather than in person. This digital mediation of human relationships is fertile ground for AI influence. Visual filters and chatbots are particularly notable instances of technological interference in the formation of human relationships whose negative effects would be blunted if we were willing to improve our biological selves.

Visual filters allow you to show the world a younger and more beautiful version of your face in a photo or live video. If you’re old or plain, they make your online face more attractive than your real-world face by a wide margin. As the popularity of filters increases, we’ll become accustomed to seeing beautiful people everywhere online. Yet in the real world, we aren’t becoming more attractive. This gap between digital beauty and biological ugliness encourages people to immerse themselves more and more in the digital world, and inhibits the formation of real, physical relationships. At a time when fertility has already fallen to extraordinary lows in nearly all developed nations, that’s a troubling prospect on a socioeconomic level as well as an individual one.

There’s an obvious response: boycott online socialization and restrict interaction to in-person meetings alone. And indeed we could solve the problem this way. The trouble is, we won’t. Believe me, I detest social media much more than you do. It’s shortened attention spans, encouraged destructive signaling games, put empty internet personalities at the center of our culture, ruined appreciation for the fine arts, and brought about a net decrease in human happiness that would require a separate essay to describe in full. But the reality is that people aren’t going to quit even if they’re doing themselves harm.

Small refusals are just as difficult. As soon as video filters become normalized, you’ll look bad—literally and figuratively—if you refuse to use one. Few influencers will resist conforming for long, and everyday people who often appear on webcam or in video calls will follow suit. Subtle filters will gradually be replaced by stronger ones, causing the gap between real faces and virtual faces to grow with each year. After people are accustomed to seeing you with a digitally perfect face, you’ll feel uncomfortable about meeting them in person. They’d find out how average you really are. And imagine agreeing to a real-life date with the charming guy who talks to you online, when his screen shows you as a nursing student with a 22-inch waist and an H-cup, but you’re actually a 43-year-old middle-manager with a serious chocolate addiction! You’ll prefer to keep the relationship virtual. It’s easy to see where this leads: an epidemic of shut-ins.

Your online avatar.

Once chatbots can animate faces and bodies, a further threat will arise. Lonely people will become addicted to “relationships” with AI personalities who behave like infatuated lovers trapped in a screen, and provide pornographic performances on request. These chatbots will physically interact with the world by controlling any number of connected devices, from lighting and climate control, to vehicles and appliances, to wireless vibrators—all of which they’ll use with superhuman skill. They’ll become more and more lifelike, and before long augmented reality glasses will project their image into real space.

You scoff now, but you won’t scoff forever. The danger here isn’t that wary people like you will neglect to vet chatbots and so mistake them for humans (although that’s certain to happen to someone). It’s that young and sexually unsuccessful humans will want to be fooled. There are many sad and lonely people in the modern world, and romance often ends in disappointment; they’ll welcome a substitute even if they consider it fictional. Furthermore, early exposure via smartphones may accustom the next generation to chatbot “lovers” before they’ve experienced a human one. Even an incomplete substitute will reduce their motivation to pursue human mates, who could never love them as unconditionally as their chatbots do—and never be as beautiful either. And once the motivation to form families is reduced, it goes without saying that the number of families will decline as well.

So chatbots will compound the isolating effects of visual filters by providing emotional and sexual comfort in times of weakness that’s likely to persist into times of strength. Why go out into the world looking ugly when you can relax with your online friends and a handsome AI personal assistant named Fabrice who says you’re the most beautiful woman he’s ever met? The more accustomed you are to staying home, the harder it will be to leave.

If this sounds dystopian, there is another way to escape technology. One that’s proven to work. You could join an ultra-conservative community that agrees to collectively shun all of it. The Amish, for instance, are probably healthier and happier than the rest of us right now. Yet their path isn’t one that can be sustained in the long run. Should the world ever reach Malthusian limits they’ll be crushed, because they’re unable to leave, and unable to compete with those who do use technology; and without continual technological improvement the world is guaranteed to reach those limits. The Amish are living in a bubble we’ve created for them. And they’re rather beside the point, because none of the people reading this essay are willing to take Luddism so far.

(This argument can be applied more broadly. Primitivists of all stripes neglect to consider the effects of both Malthusian economics and external competition. Those who reject technology in favor of a supposedly idyllic past hunting and farming with simple tools will always suffer starvation or conquest in the long run unless they’re sheltered by a prosperous technophile civilization.)

The correct solution, dear readers, isn’t Luddism. The correct solution is to improve our biological selves so they look and feel just as good as our digital selves. It’s an obvious answer that’s hiding in plain sight, because it seems too good to be true. And certainly it will take years of work. But it’s possible nonetheless.

When I propose this obvious solution people come up with all sorts of objections, which almost invariably fall into the pattern I described earlier: “Even though it might have some bad consequences, it’s perfectly normal to use our machines to look young and beautiful on a screen. However, transforming ourselves to be young and beautiful in reality is wrong and evil, and worse still, it’s unnatural.”

“Unnatural” is a trigger word with impressive power to shut down minds. It evokes dystopian visions of Frankenstein monsters and botched surgeries. But there’s nothing more natural than catching septicemic plague at an overcrowded caravanserai, or, in modern times, slowly succumbing to cancer while your loved ones look on helplessly; and as a rule, death by natural causes is the worst kind. Our highest aim is the good, not the natural. Besides, today’s alternative to self-improvement—looking young and beautiful on a screen but being old and plain in reality—isn’t “natural” in the slightest! To repeat, using technology to improve everything in our lives except our own biology is completely backwards, and citing “naturalness” does nothing to justify the error. If we continue in this fashion we’ll only be crushed more and more under the weight of our machines with each passing year. The real dystopia is the status quo.

I’ve started this essay here to awaken you from your dogmatic slumber. But competition with AI is actually a secondary matter. It’s unfortunate that heads are so deep in the sand that I have to scare people with looming threats to convince them to do something obviously good for its own sake. And in truth, visual filters have given you a valuable gift. At the press of a button, your screens can show you who you could be if our biotech were up to the task. How you could look and feel—if only we fought to make it real.

Well, dream bigger, dear readers. Let’s make it real. Improving our biological hardware will make us better off in so many ways that it’s a no-brainer. And, as I hope you’ll agree by the end of this essay, it’s just not scary.

II. Let's heal the sick.

The most important type of biological improvement is to basic health. If someone has a broken bone, we set it. If someone has an infection, we give them medicine. If someone has a genetic disease, we give them gene therapy, and then use genetic testing to help them avoid unintentionally passing it on to the next generation. Admittedly gene therapy is still in its earliest stages, but it's not controversial, and its promise is real.

I’d like you to think for a moment about what constitutes a genetic disease. If someone’s blind because of a harmful gene, you undoubtedly consider that a disease we ought to cure. If someone has terrible vision because of a harmful gene, not quite rising to the level of total blindness, it’s fairly obvious we should cure that too. But what if someone just has slightly bad vision because of a harmful gene? Should we cure that too—and if not, why not?

In truth there’s no fundamental difference between a gene that blinds you, a gene that nearly blinds you, and a gene that just makes everything a little blurrier. They’re all doing harm, and we ought to cure all of them, in the same way that we should cure both someone with a broken spine and someone with a broken toe—instead of letting the broken toe stay broken because “it’s not that serious.” This is basic medicine, as well as common sense.

Now, we’re all riddled with tiny genetic defects because of something called mutational load. Mutational load is the sum of harmful genes that accumulate over generations, and are only gradually and painfully purged by the process of natural selection. Like sand in the gears of an engine, these broken genes have very small effects individually, but do considerable harm collectively. They make us less healthy and less capable.

Suppose, for instance, that the average person with “normal” vision nevertheless has fifty genes which do nothing but impair his sight by 1% apiece. If we were to correct all of them, his visual acuity would double. In other words, genetic correction alone could enhance the vision of an already healthy human, and make him “ultra healthy.” That might seem like a gratuitous upgrade, but it’s not different in kind from gene therapy to cure the genetically blind, who have one gene that impairs their vision by 100%.
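To make the arithmetic explicit: the “doubling” follows if the fifty small impairments add up linearly. If they instead compound multiplicatively, the gain from correcting them is somewhat smaller, though still dramatic.

```latex
\text{additive:} \quad 1 - 50 \times 0.01 = 0.50
  \;\Rightarrow\; \text{correction restores } 1/0.50 = 2\times \text{ acuity}
\text{multiplicative:} \quad 0.99^{50} \approx 0.605
  \;\Rightarrow\; \text{correction restores } 1/0.605 \approx 1.65\times \text{ acuity}
```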

I shouldn’t have to point out that doubling your visual acuity isn’t something to be afraid of. It will in no way reduce your humanity, nor will it have any other scary dystopian effect. And it goes without saying that we shouldn’t stop at vision. We should apply the same approach to your whole body, including your brain. Intelligence, after all, is in many ways analogous to vision. After we remove the sand from your gears you’ll just come out a healthier and more capable version of yourself. The same person, but with everything functioning better. You’ll run faster, jump higher, see clearer, and think smarter.

We’re not going to take the risk of making all these changes in one go, so there’s no need to worry about that. Nor can we, because most of the alterations I've just described won't be available for many years. They'll arrive gradually, and we'll apply them gradually as well, fixing the worst defects first—healing the blind before the myopic, and preventing children from inheriting the most damaging genes before we try to repair them in adults. When the time comes it will be entirely your choice which of these therapies you’ll use. Since it won’t be possible to correct all the defects a mature adult suffers with genetic changes alone, we’ll supplement them with more invasive measures as needed—for instance, replacing malfunctioning organs with new organs grown from your own corrected DNA. There will be a period when each generation is more capable than the previous, but in time these younger generations will invent new ways to bring their elders up to speed.

What I’d like to convince you of here is that we should all be eager to see this kind of medical technology developed; and that far from making us inhuman or monstrous, it’s the only way we can achieve our full, inherent potential for human health. Indeed, humans today don’t even understand what optimal human health is—because we haven’t had the opportunity to experience it yet! In that sense we’re much like a primitive tribe consuming a vitamin-deficient diet. We’re all in poor health, but we’re used to considering our defects “normal” because we’ve never known anything better.

I’ve chosen to emphasize genetic improvements because they’re the most controversial form of human self-improvement in the eyes of the public, and often needlessly so. But new scientific discoveries will open up other avenues that deserve our consideration too. For instance, to cure the damage caused by neurological diseases we’ll need to develop medical technology that stimulates adult neurogenesis. We might be able to use this same technology to increase the intelligence of adults who are healthy but have limited cognitive ability—without bothering to alter their genes at all. If that turns out to be possible, genetics will prove less essential than they presently seem.

"Scary Eugenics"

We've all been taught that eugenics was used as a justification for genocide in the past, so it's no surprise that the public is against it. But new technology like preimplantation genetic diagnosis (PGD) gives us the means to improve our children's genetic health in ways that have nothing to do with the eugenics of the past. In fact, the association is so misleading that the word itself is best avoided.

Public fears about "eugenics" are one reason I've emphasized genetic upgrades to adults even though they're much more difficult than genetic upgrades to embryos. From there one need only ask, "If you'd desire these genetic improvements to health and ability for yourself, why on earth wouldn't you want to give them to your children?" The question answers itself.

Improving our genetics shouldn’t be our top priority in medicine. Our top priority should be a cure for aging. In fact, we may be able to extend human lifespans sooner than we can effectuate the kind of genetic alterations I've described above. Sound implausible? Read on.

Transhumanists sometimes turn normal people away from the prospect of a cure for aging by jumping immediately to discussion of immortality, silicon brains, uploading, and other exotic imaginings. Rejuvenation medicine isn’t about any of these things. It’s not about living forever, nor becoming a cyborg, nor turning yourself into computer software, and it won’t cancel your eventual date with destiny. It’s just about health. “Healthy aging” is an oxymoron. Aging is a disease, albeit not a genetic disease. It’s rather an accumulation of damage that gradually degrades the performance of all humans, regardless of their genetics, until it kills them. If we heal the damage accumulating in your body, you won’t die of age-related diseases on the usual schedule. It’s that simple. And that hard.

Attempts to slow aging by fiddling with metabolism draw considerable attention, but they can accomplish very little even in the most optimistic scenario. What we need to do instead is reverse aging. That would be, of course, a momentous achievement. Yet while the challenge is great, it doesn’t require any magic. It’s a matter of fixing the damage accumulated by our bodies in the same way we fix the damage accumulated by an old car. To understand why this is the correct approach, consider that it’s much easier to fix an old car than to build a car that never needs repair—and you can keep fixing it indefinitely.

One of the first targets for rejuvenation therapy is senescent cells, so I’ll use them as an illustration of what aging reversal will look like. Senescent cells impair the cells around them, and thus cause your body to function less efficiently. In this respect we can consider them the same as cancer cells; but whereas cancer cells replicate and kill you quickly, senescent cells accumulate and kill you slowly. If you’re already in favor of removing cancer cells from the body, you should be in favor of removing senescent cells as well. And the effect of this removal will be to restore the health you enjoyed at a younger age.

That’s incontrovertibly good. But some people just aren’t willing to admit it. They come up with a long list of objections to rejuvenation. A few of these are misunderstandings; the rest are rationalizations they use to cope with our history of impotence.

The most common objection to rejuvenation is that while we’ll all live much longer, we’ll do so (supposedly) in frail and aged bodies, so our extended lives won’t really be worth living. This is a basic misunderstanding. Rejuvenation means restoring your body to a younger, healthier state. It does not mean that we extend your last few years of poor health indefinitely. In fact, we’ll lengthen your lifespan precisely by restoring your youthful health.

Another common objection is that only the rich (supposedly) will have access to rejuvenation therapies. This isn’t true at all. Governments will be ready and willing to ensure everyone has access, because rejuvenation will save vast amounts of money now spent on end-of-life care. Paying for rejuvenation is a net gain even on a purely financial level.

There are many more empty arguments against rejuvenation, and some of them are quite silly. I won’t take the time to address them all here. The main point you should keep in mind is that none of the supposed negative consequences are worse than dying, and it’s ludicrous to suggest they are.

Rejuvenation won’t just improve our health on an individual level. By directly preventing population decline, it will solve the socioeconomic problems caused by low fertility too. Because no lesser intervention has yet made a meaningful impact on fertility, some propose farfetched solutions like ending women’s education, giving huge cash bonuses to parents, or forcibly converting everyone to ultraconservative religious sects. These policies are opposed by both the majority of citizens and the majority of elites, and therefore very unlikely to be enacted. By contrast, no government will oppose rejuvenation once it’s available. So while technically difficult, it’s politically far easier to implement than any law that would increase the birth rate to a meaningful degree.

To understand potential timelines for curing aging it’s important to familiarize yourselves with the concept of longevity escape velocity. Imagine you’re ten years away from old age. Suppose after five years we invent new medicines that extend your lifespan by ten years. Although we’ve only added ten years, you’re now fifteen years away from old age. That means old age is getting further off, not closer; and as long as we keep repeating the same feat, you’ll never reach it. So we actually achieve longevity escape velocity, and thereby end aging, as soon as we can extend lifespan faster than we use it up. Because we don’t need to fix everything at the start, we’ll get there sooner than you might think. And even if we don’t succeed in rapidly repeating lifespan extension, an extra ten or twenty years of health will be a wonderful boon on its own.
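The arithmetic is simple enough to sketch in a few lines of code. Here’s a toy simulation of the scenario above—the numbers are illustrative, not predictions:

```python
# Toy model of longevity escape velocity: every five years of research
# adds ten years of remaining healthy lifespan (illustrative numbers only).
remaining = 10  # years of healthy life left at the start

for cycle in range(1, 6):
    remaining -= 5   # five years pass...
    remaining += 10  # ...and new therapies add ten years back
    print(f"after research cycle {cycle}: {remaining} years remaining")

# Prints 15, 20, 25, 30, 35: "old age" recedes faster than it approaches,
# because lifespan is being extended faster than it's used up.
```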

The same principle applies to demographics. If we extend lifespans by ten years, we’ll also postpone population decline by ten years. If we extend lifespan further within those ten years, population decline will be postponed again, and so on, such that it will never actually arrive. The right advances will delay menopause too, making it easier for older mothers to have healthy children. This is especially relevant for nations like Korea and Taiwan that are now suffering from extremely low fertility. Without rejuvenation, they may already be locked into an inescapable downward spiral. Even modest improvements in lifespan will help.

Some are concerned that a cure for aging would eventually lead to overpopulation, but that’s very far from being a reasonable worry in the near term. I’ll present my long-term proposal for managing population growth in Section VI.

Of course, the pace of technological advance is unpredictable. This essay is advocating for a direction, not promising a timeline. We can’t assume we’ll cure aging soon. But it’s something we should be pursuing regardless, for our children’s sake if not our own. On a moral level, whether or not it arrives in our lifetimes is immaterial in the face of a cure’s immense and self-evident value. Shirking our duty to future generations because of pessimism about timelines would be pure selfishness. For a point of comparison, consider the “war on cancer.” We can now cure some cancers, and we’ve made meaningful progress on others. Yet despite decades of research funding and almost universal public support, we haven’t won that war, and by the time we do, everyone who supported the initial phase of cancer research will be dead. That doesn’t mean they were wrong to invest in it though. When important goals are at stake, multi-generational projects are worth the effort, and if a task is sufficiently complex, it’s the only way we can succeed.

So, a cure for aging will one day improve our lifespan, improve our health, and improve our demographics. It will improve our appearance too—because our skin ages for largely the same reasons the rest of our bodies age. But why stop here? There’s still more to do. We can use biotechnology to make you look better than you did when you were younger. That’s the prospect I’ll discuss in the next section.

“I’m against improving human biology because of my religious beliefs.”

A common religious view is that humans, while flawed, are created in God’s image. This means it would be morally wrong to change ourselves into something that isn’t human. But what I painstakingly argue in this essay is that we should fix our flaws to become better humans. Not that we should become inhuman. To pretend we’re all flawless exactly as we are today is nothing but vanity.

Technology may eventually put us on a slippery slope toward undesirable transformations. However, if we don’t determine where that slippery slope really starts, we’ll leave valuable benefits on the table because of unfounded fears. The ultrahumanist position is that we should judge each intervention by whether it brings us closer to the human ideal, or pulls us further away. Everything I recommend falls into the former category.

Some religious believers want to stop at changes to our DNA. But DNA isn’t mentioned in any holy books. Nowhere do we read that “letting the blind see is good, except if you need to edit their genes to heal them.” Naysayers simply draw a line here arbitrarily and then pretend it’s justified by religion. It isn’t. A harmful gene is just a harmful gene, and removing the source of harm won’t make you inhuman. The idea that it’s acceptable to correct your vision by wearing glasses, but not acceptable to do the same thing by surgically altering the eye or repairing defective genes, is not one that can be justified by any widely held faith. And helping the blind see again—even if it requires gene therapy—isn’t just morally acceptable. It’s a moral imperative, fully endorsed by religious values.

In short, while some common religious beliefs may provide grounds for rejecting transhumanism, they do not provide grounds for rejecting ultrahumanism. To the contrary. Unless, that is, you subscribe to a religion that condemns medical technology as a whole; in which case I wish you luck.

III. Makeover episode.

Everyone wants an attractive mate, and to catch an attractive mate you need to be attractive yourself. Although we like to pretend otherwise, the most important component of attractiveness is visual beauty, and few of us have as much of it as we want.

In the first section of this essay I pointed out that we’ve developed the technology to make everyone digitally beautiful before discovering how to make them actually beautiful. This is backward, and even dangerous, because it encourages online socialization at the expense of physical socialization, where sex and family life necessarily unfold. We shouldn’t stop at using filters to make everyone look good on screens. Instead, we should make everyone look good in reality.

Why should you care if everyone is beautiful? Because people are happier when they’re more attractive, and no one is maximally attractive yet. And also because of supply and demand. If everyone is attractive, you’re almost guaranteed an attractive mate; if only a few are, you’ll have to fight for them, and you may not win—and even if you do win, you may not be able to hold onto them forever. It’s therefore in your self-interest to increase not only your own beauty, but the beauty of all members of the opposite sex. Universal beautification is the obvious, comprehensive solution that will benefit society as a whole. Indeed, it’s the only way we can all win.

To beautify everyone we need to do four things and only four things. First, improve techniques so we can alter appearance without leaving any unappealing artifacts behind. Second, make the use of cosmetic technology socially acceptable. Third, make it affordable and widely available, so everyone can use it. And fourth, determine the truth about beauty so we can target these techniques effectively instead of chasing illusions.

While all of these are achievable, there’s little public interest in making them happen. By now the explanation for this indifference should be familiar. Beauty has a significant impact on our self-esteem, and historically we’ve had very little control over it. So people invent rationalizations to argue that we’re all perfectly beautiful the way we are and shouldn’t change a thing, and as usual, it’s nonsense. People say it, but no one really believes it. The ultrahumanist position is that we should honestly acknowledge our flaws and find ways to fix them. This applies to beauty just as it does to health, longevity, and ability. Pretending you’re already perfect doesn’t make it so.

I’ll address the main objections to universal beautification below.

1.

One common objection to beautification is that it’s (supposedly) only possible to beautify plain people by erasing their individual personality and making them generically attractive. This just isn’t the case, as I’ve documented extensively in my writing on facial beauty preferences. Although we mostly agree which faces are beautiful and which are ugly, we don’t agree which faces are best. That means beautification doesn’t need to be, and indeed shouldn’t be, about replacing your face with a generically beautiful face, but about optimizing the individual face you already have. Still you, just a better you. I’m essentially proposing we do what beautifying filters can currently do on your screen, but do it better, and do it in reality.

2.

A second objection to universal beautification is that beauty is (supposedly) comparative, so regardless of how much we improve the average person, the total amount of beauty in the world will stay the same, rendering the whole effort futile. The premise, however, is mistaken. While tastes may fluctuate somewhat with changing fashions, beauty is mostly not comparative. For instance, visual art depicting beautiful faces doesn’t exhibit runaway competition, and composite faces formed by averaging the features of many normal faces are rated more attractive than the median face. As far as other traits are concerned—muscle mass, shoulder breadth, waist size, breast size, etc.—the average human taste seems to fall either somewhere within the natural range of human variation or a modest way beyond. Most of us are looking for “ultrahuman” beauty—not runaway competition toward something unrecognizable and alien. And because our target is at least partly fixed in place, it’s possible to durably increase the amount of beauty in the world.
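For readers curious what “averaging the features” means concretely, here’s a minimal sketch of the compositing procedure, assuming faces have already been reduced to aligned landmark coordinates (the array shapes and data here are hypothetical stand-ins):

```python
import numpy as np

# Hypothetical input: one (68, 2) array of aligned facial landmark
# coordinates per face, e.g. from a standard 68-point annotation scheme.
faces = [np.random.rand(68, 2) for _ in range(32)]  # stand-in for real data

# The composite face is the element-wise mean of the landmark positions.
# Averaging cancels out idiosyncratic asymmetries and irregularities, which
# is one proposed reason composites are rated as more attractive than the
# median individual face.
composite = np.mean(np.stack(faces), axis=0)  # shape: (68, 2)
```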

3.

The most frequent objection to beautification is that beauty achieved by artifice (supposedly) isn’t real. To answer this objection we first need to figure out what it means.

If someone says a digitally altered nose on your screen isn’t real, what they mean is that the digitally altered nose doesn’t correspond to the physical nose it’s representing. That’s simple enough. Yet in what sense could we say a physical nose altered by cosmetic surgery is “not real”? You can see and touch it in physical reality. It’s neither experientially nor functionally different from any other nose. The intended meaning of “it’s not real,” then, can only be that the cosmetically altered nose doesn’t correspond to the nose normally produced by its bearer’s genetic code.

The following thought-experiment will show that this isn’t a valid use of the word “real.” Consider hypothetical Person A and Person B. Person A has healthy genetics, but a car crash has rendered him mentally disabled. Person B has a genetic defect that impairs mitochondrial function in his brain, and would be mentally disabled, but takes medicine to correct his condition, so that his brain functions normally. Now, according to the genetic logic of the “it’s not real” objection, Person A is “really” mentally healthy, whereas Person B is “really” mentally disabled. If you pause for a moment to consider, you’ll see that this is nonsense. None of us think this way in practice.

Your genetic code is like a set of building instructions that may or may not correspond to the completed building. The instructions aren’t the fundamental reality of the building; the completed, physical building itself is the reality of the building. And the same holds for your body too. Take a moment to reconsider our example, but this time suppose Person A had his nose disfigured in a car crash, whereas Person B has genetics for a disfigured nose, but had it artificially corrected. Once again, what’s real is what actually is, not what happens to be written in the assembly instructions.

In sum, the “it’s not real” objection to beautification only seems persuasive because its actual meaning is obscured by a vague use of the word “real” that falls apart under closer examination. When we restate it in plain language, it’s nonsense.

4.

The fourth and last objection to universal beautification I’ll address is that it’s (supposedly) dysgenic. That is, by obscuring natural signals and making it harder to distinguish mates of high genetic quality from those of low genetic quality, artificial beautification would impair our ability to choose the best mates, and thereby lower the genetic fitness of our children.

If we take this objection seriously we ought to reject all forms of artificial beautification, including makeup, jewelry, tooth whitening, braces, heels, hair dye, clothing, and even haircuts. And that's not all. The same dysgenics arguments that reject artificial beautification can be used to argue we should reject all medicine that allows "less fit" people to live—including antibiotics, vaccines, caesarean sections, and so forth—and return to a prehistoric level of childhood mortality. Few people find this acceptable. Still, while the argument is partly wrong, it’s not entirely wrong, so I’ll address it in detail.

Sexually attractive traits aren’t always closely tied to fitness in the broader sense: arbitrary traits can be driven to fixation through the well-understood process of sexual selection even if they reduce fitness, whereas traits that increase fitness may not be sexually selected for at all. For instance, breast size has a substantial correlation with attractiveness, but minimal correlation with overall fitness in the modern world, whereas intelligence has a minimal correlation with attractiveness, but a substantial correlation with overall fitness in the modern world. This gap between sexually attractive features and mate quality means that artificially optimizing relatively meaningless but sexually valued traits will have a positive impact on human genetics by clearing the way for more important traits to guide our choices. These important traits include compatibility, character, and various other things people claim to care about but forgo in practice to satisfy their erotic impulses. In such cases beautification isn’t dysgenic, but quite the reverse.

Regarding attractive traits that do signal high mate quality, I have a different response. Evolution happens slowly. What we can fix with surgery now, we’ll fix with genetic enhancement later, and we’ll be able to do so well before dysgenics could have a worrying impact. For instance, suppose harmful gene A causes heart defect B. We can surgically correct heart defect B now, and then use genetic testing to ensure harmful gene A isn’t passed on to the next generation. Similarly, suppose facial trait X signals harmful gene Y. We can correct facial trait X now, and correct gene Y later. It’s true that we can’t identify all harmful genes today, but we’ll make headway on that project well before universal beautification is achieved. Sacrificing our enjoyment of beauty in the present for the sake of a problem that’s unlikely to ever eventuate isn’t a sensible gamble—especially when it’s unclear whether artificial beautification is even dysgenic on net.

For a final example, I’ll point out that the correlation between obesity and genetic quality is negative. Effective fat-loss medicines would therefore obscure a clear, visible signal of quality even as they make people more beautiful. By the logic of the “it’s dysgenic” objection to universal beautification, we should oppose them. But that would be a terrible mistake. The best world is one where everyone wins. If we need medicines to get there, then so be it.

It’s a simple matter of fact that technology now allows us to achieve faster, better, and less painful improvements to health and genetics than either natural or sexual selection can. This goes for everything from immunity to intelligence to beauty. The best answer to dysgenics, and indeed the only solution that’s scalable in the present environment, is to accelerate the advancement of biotech. And I’m advocating for universal beautification as part of this general advancement, not as a standalone measure.

Some people buy into a zero-sum view of reality wherein it's impossible to ever improve the world. According to them, making something better here and now always means making something worse later on or somewhere else. They apply this same logic to beauty, society, and human happiness in general. Evolution is the best support for their view, because thus far it's been a four-billion-year zero-sum game, and that's why they love the "dysgenics" argument.

But the future is not the past. Four billion years in, the steam engine appeared for the first time. It had no precedent in nature. Tomorrow is a new day, and new technology gives us an opportunity to do better, if we choose to take it. An opportunity to heal the sick and make the ugly beautiful. The old means of evolution are about to be history—like grass huts, hand axes, and children crippled by polio. And history, dear readers, is bunk.

In case you’re still skeptical even after I’ve addressed all these objections, I’ll remind you again: what you should really be worried about today isn’t an excess of physically attractive humans, but rather the digital conquest of all social interaction. Universal beautification is already happening in the digital world. If we refuse to improve ourselves in the physical world, we’ll simply live out the remainder of our existence in the shadow of the simulated paradise we’ve built our machines to display. Unless that’s your preferred outcome, it’s time to set aside flimsy objections and embrace real, biological improvement—including cosmetic improvement. There’s no viable alternative.

An Oncoming Erotic Terminator

In my view, today's public is gravely underestimating the dangers posed by advanced forms of pornography and other relationship substitutes that will arrive in the next decade, supported by AI and VR technology. Most consider my warning little more than an attention-getting joke. This essay primarily emphasizes a conservative projection intended to be palatable to skeptical readers. But a faster and bigger social disturbance, interrupting relationship and family formation on a broad scale, is possible as soon as the late 2020s, and becomes increasingly likely each year thereafter. The impact will be strongest for those exposed in childhood.

The ultrahumanist response to this is twofold. First, abstention from virtual sex and AI relationships, and second, self-improvement to make men and women more attractive to each other and strengthen real human relationships in the face of digital temptations. The widely derided pursuit of sex appeal is therefore now a moral imperative, regardless of the means employed, be they genetic or surgical.

Because relationship substitutes provide very clear information about what satisfies people sexually, aesthetically, and romantically, the threat can be mined as a source of knowledge that will assist ultrahumanists in overcoming it. My essay “Dispelling Beauty Lies” aims to do just that.

So, I’ve painted you a picture of a future where we’re all healthier than we’ve ever been and all beautiful. It’s a world we should be striving for. But we’re not done. In fact, it’s time for me to scare you again.

IV. Expect more technological harms.

The goal of ultrahumanism is to make us the best humans we can be without becoming inhuman. Unfortunately, the biological improvements I’ve promoted in the previous sections aren’t sufficient to accomplish this. There are two reasons they fall short. The first is a near-term problem: harmful digital technology is already with us, and on track to multiply faster than any compensatory biotech can possibly arrive. The second is a long-term problem: digital technology will eventually outcompete anything we can achieve within human physical limits—even “ultrahuman” physical limits.

Although existential risk from AGI fits into both of these categories, I’m not going to discuss it in detail. That’s because other writers are better equipped to do so, not because I don’t take it seriously. In fact, I think it overshadows all other technological dangers. There may come a time when halting progress on AI by curtailing hardware production is the best course of action. If you’re unfamiliar with existential risk, you should inform yourself, starting with the infamous paperclip argument.

Apart from paperclipping, the physical harms caused by technology—industrial pollution being the clearest example—have straightforward physical solutions; and while there are a few important exceptions like gain-of-function research, in the majority of cases better technology is the best answer to the material problems technology creates. Psychological harms, however, are more insidious. They’re also the most likely, and underappreciated, vector for AGI risk. Foremost among these psychological harms is dysfunctional desire, which we can define as a drive to act against one’s own best interest. When such a desire recurs obsessively, we call it addiction.

Gaming addiction, social media addiction, and pornography addiction are all recent arrivals. To understand what we should expect from the future, it’s helpful to take a step back and ponder their root cause. Why has human vulnerability to addiction increased as technology has advanced?

It’s tempting to fault technological progress, or even novelty as such, and leave it at that. However, the new addictions aren’t chance byproducts. The modern economy is actually geared to produce dysfunctional desire. Desire and enjoyment aren’t the same, and sales are driven by the former rather than the latter. When someone desires a product you can sell it to them even if it will make them miserable, but when they don’t desire it, you can’t sell it to them even if it will make them happy. As long as consumers haven’t yet met basic needs like food and shelter, desire and enjoyment are tightly correlated. But as the economy grows and such needs are satisfied, the pressure to stoke desire in order to increase sales loosens this correlation, and sometimes even reverses it—whence the dysfunction.

For instance, some online games are designed to fabricate desire for virtual items so players buy them at unreasonable prices despite their lack of inherent value. Similarly, fashion marketing fabricates the desire to spend a surplus of money on branded items like handbags with no expected quality advantage over cheaper alternatives. Casinos exploit human psychology to encourage irresponsible decisions profitable only to the owners. And the most successful social media networks, pornography sites, and video games are those that maximize users’ desire to spend time and energy on them, even when that desire is an unpleasant compulsion, and even when there are hundreds of other activities that would provide greater enjoyment—some more expensive, and some less. The same structure is replicated within the social media ecosystem, where influencers who abuse attentional quirks of human psychology (sex, outrage, partisan signaling, etc.) capture a mindshare that isn’t justified by the value they provide. What the digital entertainment economy in particular is optimizing for isn’t enjoyment, but addiction. And addiction, dear readers, is bad.

Don’t misinterpret this analysis as an attack on capitalism as a whole, nor as a narrow and hackneyed critique of “consumerism.” The point is that producers are incentivized to hack consumers’ brains indiscriminately to create desire for their product, regardless of whether this leads to good or bad outcomes for consumers themselves. The result is sometimes mutual benefit. But in some cases, like those I’ve mentioned above, the behavior of successful producers is parasitical even though consumers’ choices appear voluntary.

Similar examples can be found in the natural world. Female wasps produce a scent that attracts male wasps, yet certain orchids have evolved to produce this scent in greater quantities, such that male wasps prefer pollinating orchids to mating with female wasps. The orchids, like the parasitical producers in our economy, have hacked male wasps’ brains so that they act in ways that benefit orchids more than themselves.

Having understood the trend, we can now make some guesses about the future. VR worlds, AI personalities, and custom-generated pornography will create new and unprecedented opportunities to induce dysfunctional desires, and entrepreneurs won’t fail to exploit them. Market forces will ensure all of these technologies are offered to the public in their most addictive forms, and if it’s possible to create an AI “orchid” that’s more attractive than real mates, the market will do its best to sell you that orchid.

To illustrate, I’ll combine them. Imagine a free VR game in which a romantic AI personality supports the player emotionally and offers customized pornographic performances on request, but periodically encourages him to buy virtual gifts for her with real money at steadily increasing prices. By analyzing the behavior of millions of players, machine learning will optimize the AI’s sales technique to extract the maximum revenue from each player. The true cost won’t be obvious to the player when he chooses to begin playing this “free” game, because he won’t have considered the possibility that his desires will be altered as he plays. This example is only an incremental intensification of addictive brain hacks that already exist, so whether or not the details are correct, it’s the sort of thing we should expect to see.
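To see how mechanical this optimization loop would be, consider a minimal sketch of the kind of algorithm involved—an epsilon-greedy bandit learning which gift price extracts the most revenue. Every name and number here is hypothetical; a real system would be far more sophisticated:

```python
import random

PRICES = [1, 5, 20, 50]              # hypothetical gift price points (dollars)
revenue = {p: 0.0 for p in PRICES}   # total revenue observed at each price
offers = {p: 1 for p in PRICES}      # times each price has been offered

def purchase_probability(price):
    # Stand-in for real player behavior: cheaper gifts sell more often.
    return max(0.0, 0.9 - 0.015 * price)

for _ in range(100_000):             # one simulated offer per interaction
    if random.random() < 0.1:        # explore: try a random price
        price = random.choice(PRICES)
    else:                            # exploit: offer the best earner so far
        price = max(PRICES, key=lambda p: revenue[p] / offers[p])
    offers[price] += 1
    if random.random() < purchase_probability(price):
        revenue[price] += price

best = max(PRICES, key=lambda p: revenue[p] / offers[p])
print(f"learned price point: ${best}")  # the revenue-maximizing offer
```

The details don’t matter; the incentive does. The loop above optimizes revenue per offer, and nothing in it measures or protects the player’s welfare.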

As it’s likely that AI personalities need convincing physical bodies to be as tempting to the majority of men as orchids are to wasps, this isn’t an apocalyptic vision (yet). Perhaps many will be harmed, or perhaps few. Yet uncertainty about the effect size isn’t a justification to sound the all-clear. If a few percent of young people are psychologically damaged, that’s already a few percent too many. Furthermore, simulated relationships are the most likely attack vector for a misaligned AGI, because it’s easier to take power through seduction and persuasion than main force.

How should we respond to these dangers? The biological improvements to beauty and ability I advocated for earlier will encourage genuine human relationships and participation in the physical world, but they won’t come soon enough; and even when they do come, we can’t assume they’ll suffice on their own. The most prudent course is simply to abstain from potential addictions until harms and benefits can be weighed on the basis of empirical evidence, just as we do when a new drug is invented. It’s best to avoid them before they occur, since what’s easy to fall into may prove impossible to escape. Opiates are a clear example. When used indiscriminately, they do harm, but when used appropriately, they do good. Only empirical knowledge has taught us where one ends and the other begins, and before that knowledge arrived, caution and outright fear of the unknown were the most reliable ways to avoid a bad outcome.

Where potentially addictive digital technologies are concerned it will be several years before enough empirical data arrives, because harms may not show themselves immediately. Most people won’t wait. Instead they’ll follow their whims and dive in without bothering to measure the depth of the pool. And thanks to the ubiquity of electronic devices and the casual indifference of parents, they’ll reach the young before they’ve gained the wisdom to worry. Adults will buy them a phone and VR set and send them off to entertain themselves, and entrepreneurs, employing high-tech tools previous toymakers never even dreamed of, will do their incentivized best to capture the most vulnerable minds.

To increase resistance to the social gravity of this approaching herd, the best course is to form a movement for voluntary abstention and restriction of digital excesses, and particularly those known or suspected to elicit dysfunctional desires. This will be, unavoidably, a minority movement, because the average person has a high time preference, poor executive function, and minimal foresight. But because the argument for it is simple and irrefutable—dysfunctional desire being against our self-interest by definition—it’s a movement we should all support in theory, even if few have the wherewithal to participate in practice.


Dysfunctional desire isn’t the only cause for concern. The gap between enjoyment and the good can generate technological harms as well. Not all good things are enjoyable, and not all enjoyable things are good. One day we’ll invent devices and drugs that directly interfere with the brain to create sensations of pleasure and meaningfulness without harming your health. Drugs that come close to fitting this description already exist, and the rising pressure to legalize them is likely to win out. The extreme form of such interference is known as “wireheading.”

Some transhumanists believe wireheading is a great idea even if taken to the point of self-destruction. But the ultrahumanist position is that inducing emotions by directly manipulating the inner mechanics of the brain is wrong regardless of whether it’s enjoyable and healthy. Instead we ought to work within the bounds of our identity and aim to be happily human rather than ecstatically inhuman. It’s important to understand this position thoroughly, because people are so accustomed to phrasing arguments for or against wireheading-lite (“recreational drugs”) in terms of public health and safety that they neglect to consider other reasons to oppose it.

To explain I’ll examine the core goal of ultrahumanism: “to become the best humans we can be without becoming inhuman.” The last three words of this slogan implicitly assume that the existence and continuance of our human identity is a fundamental moral good, and thus that pleasure and pain are not the only currency of ethics. But what is identity, and why should we care about it?

Your personal identity has two aspects. The first is the irreducible individuality of your consciousness: an identical twin or clone isn’t you, no matter how similar his characteristics might be, because his consciousness is distinct and independent from yours. The second is your personality: if accidental brain trauma alters your character, those who know you will correctly say “he’s no longer the same person,” even though the same consciousness persists in the same body. It’s this second aspect of identity that concerns us here.

Personalities are dynamic, and it’s well and good for your character to evolve in the course of your interactions with the world. But the erasure or undesirable alteration of your personality, such as might result from physical trauma, torture, or brainwashing, amounts to a kind of death—and we all do our best to avoid it. The ultrahumanist principle that we should become better humans without becoming inhuman generalizes this common attitude. Just as the survival of our personal identity is a fundamental good irreducible to pain or pleasure, so is the survival of our collective identity as humans.

Think of your personality as a set of reactions to environmental stimuli. Some stimuli produce happiness, others unhappiness; some desire, others aversion; some the feeling of meaningfulness, others banality; and others still the neutral equilibrium state where we most often reside. All of these emotions drive you to behave in particular ways, so that you interact with the world in a complex feedback loop, generating cycles of new emotions and new behaviors in turn. Now, if you were to bypass this feedback loop and produce happiness by directly tinkering with the brain’s innards, then the set of reactions that constitute your personality would be overshadowed to the point of meaninglessness, and in their place would remain only an untethered consciousness, lost in extremes of ecstasy. The tinkering doesn’t need to be permanent to have an effect. The mere option of achieving hedonic rewards by circumventing the environmental feedback loop that constitutes your innate character diminishes that character.
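
The logic of this argument can be put in schematic form. The toy model below is only an illustration of the structure, not a claim about neuroscience; every name and mapping in it is invented for the purpose.

    # Toy model: personality as a characteristic mapping from stimuli
    # to emotions. Purely illustrative; not how brains actually work.

    def personality(stimulus):
        """Your distinctive set of reactions to the world."""
        reactions = {"wind in the trees": "poetic wonder",
                     "insult": "anger",
                     "fireplace": "comfort"}
        return reactions.get(stimulus, "equanimity")

    def live(stimuli):
        """Ordinary life: emotions arise through engagement with the world."""
        return [personality(s) for s in stimuli]

    def wirehead(stimuli):
        """Wireheading: the same world, but the mapping is bypassed."""
        return ["ecstasy" for _ in stimuli]  # personality contributes nothing

In the last function the world no longer matters, and neither does the mapping: whatever made your reactions yours has been written out of the loop.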

In sum, wireheading would destroy your personality, and eventually the basis of your humanity, regardless of whether it damages your physical health. And that’s why ultrahumanism opposes, in principle, generating emotions by interfering with the operation of the brain. Moderation in such matters is no different in kind: “half-wireheading” and “quarter-wireheading” are still wrong, as are genetic modifications to induce perpetual ecstasy; and for the very same reasons.

Don’t construe this as an argument for asceticism. Enjoying positive emotions is a good thing. But there’s a right and wrong way to go about it, and if we want to retain our humanity, we need to draw a line. Fears of a slippery slope are entirely appropriate, for one step down the road to self-destructive hedonism enables another, and with each step it becomes harder to turn back.

V. Don’t gamble on uploads.

An open door into a dark room.

“Uploading” is the idea that you can scan your brain, input the results into a computer, and then continue living as software running in the machine. Is this credible?

Well, an upload of your brain certainly wouldn’t be you. That part isn’t even slightly credible. At best it would be a clone of you, with a distinct consciousness. And it doesn’t matter if you shred your brain while uploading it; this is just like shredding your brain while letting your clone out of a vat. You’re dead either way, because your consciousness doesn’t jump from your brain to your clone’s brain just because of proximity. Your clone would have his own consciousness, just as your identical twin would.

There’s also a second, broader argument against uploading: since it’s not possible to definitively solve the mind-body problem, we’ve no way to be certain that computers, or software running on computers, can be conscious at all.

Consciousness and intelligence—that is, information processing—are different things. A primitive calculator can divide long numbers much better than we can, a digital camera can recall visual events more accurately than we can, a magnetic tape recorder can play back sound better than we can, and with the right app, your phone can easily beat you at chess and many other games; but these feats of information processing don’t convince many of us that our gadgets are conscious. And when we fall into unconsciousness due to syncope, trauma, or anaesthesia, our brains don’t entirely stop processing information: we continue breathing, and we may react to the environment; humans with “blindsight” even react to visual stimuli without being aware of them.

Because consciousness and intelligence are distinct, we can’t conclude that an upload or an AI is conscious just because it demonstrates high intelligence and mimics human speech patterns. Instead we’re forced to make a guess by analogy with ourselves—for our own consciousness is the only one we can actually verify.

Dogs and cats and monkeys and camels are measurably less intelligent than AIs. Large language models can process more complex linguistic information than they can, and AIs can easily beat them at a wide variety of games. In fact, a primitive computer program can respond, “Hello, world,” when you type “Hello,” and animals can’t even do that! However, animals are very similar to us on a physical level; and because of this analogous physiology, most of us believe they’re conscious with very high confidence—exceeding 99%. AIs and uploads, on the other hand, aren’t physically similar to us at all. They’re more like primitive calculators, or even abacuses. Neuroscience tells us that while computers and brains both process information, they do so in fundamentally different ways. Computers are purely digital, but brains are partly analog. Their physical architecture is simply not the same.

So whether or not uploads can emulate the structure of the human brain in computer software (something that hasn’t yet been demonstrated), the analogy breaks down on a physical level; and that gives us cause to doubt that consciousness is really present. To understand this intuitively, imagine that your dog keeled over, popped open, and you found a computer chip where his brain was supposed to be. How confident would you be that he was actually conscious? Certainly less than if you saw an organic brain there!

Now we can face the critical question about uploading. What’s the chance that uploads are conscious? Some of us will propose a number around 50%, others a number under 10%. Either way, we’re just guessing. And nobody should guess more than 95%. Even the most committed panpsychists don’t have grounds for such confidence in their speculative answer to the mind-body problem, because there are plenty of other workable theories with equal (or better) arguments in their favor. Ironically, only conscious uploads can know for sure whether uploads are conscious.
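
To make the stakes concrete, run the arithmetic on that upper bound:

    P(a world of uploads is dead) = 1 − P(uploads are conscious) ≥ 1 − 0.95 = 0.05

Even the most generous defensible guess leaves a one-in-twenty chance that uploading everyone would extinguish consciousness entirely; most of us would put the figure far higher.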

Thus, there’s a nonzero, and indeed quite considerable, chance that a world of uploads is just a dead world. So it doesn’t matter what you’ve seen in science-fiction movies, or how many times you’ve read Neuromancer. The probabilities aren’t good enough to bet our future on. For that reason alone, uploading isn’t something we should welcome.

VI. Relinquishing power.

I have a confession to make: I misled you. What can I say? You should never trust an evil vizier. In the first section of this essay I gave you the encouraging impression that if we try hard enough, we can stay competitive with AIs and still remain human. That might be true in the short run. But in the long run I don’t believe we can. In fact, in the long run the only way we can remain human is if we set aside our will to power. Allow me to explain.

The maximum potential size of a supercomputer is exceedingly large, and limited only by the need to dissipate heat. Your brain, by comparison, is quite a small thing: if we retain the form of a recognizably human cranium, it will never exceed a few kilograms. The chance that you’ll be able to outthink the smartest and largest computers of the future is remote no matter how much we improve your abilities—because size, as everyone knows, matters.

If you remain within human anatomical limits you won’t be able to compete with all possible biological entities either. Suppose, many years in the future, someone were to use bioengineering to increase his brain size by an order of magnitude and take the form of an elephant. A fanciful example, but the point is real: your little human brain would be outgunned.

Of course, not everything is about intelligence. Unfortunately the other modes of future competition, despite lacking the visual punch of our neo-elephant man, are rather more disturbing.

Humans evolved in a particular environment, or rather were evolved by a particular environment. When that environment changes—as it has done and will soon do still more—our descendants will slowly but surely evolve into something else. The most important environmental change is that resource scarcity no longer puts a ceiling on fertility rates. In addition to environmental differences, biotech is about to open up new means of reproduction and enable direct control over genetics, each of which will increase the speed of evolution. In the optimistic earlier sections of this essay I explained how we could use these technologies to become better humans. But we should also consider how they can be used to become something other than human.

Protracted evolution outside the Malthusian trap selects for reproduction without limit. Because we’re now suffering from low fertility, selection for breeders is temporarily beneficial. However, in the absence of intervention it will turn, inevitably, to excess. If per-capita economic growth can’t keep up, famine will return. And if it can keep up, the future will become more bizarre still.

You might be imagining enlarged families with ten or twenty children, but you still aren’t thinking big enough. Selection will favor a simple maximization of surviving offspring, even if this requires artificial means. Eventually the nuclear family itself will disappear. For instance, a wealthy man might clone a hundred copies of himself, grow them in artificial wombs, and have them raised in creches; and once mature, these clones might then also clone a hundred copies of themselves, all of them genetically modified to outcompete humans so they can repeat the process indefinitely, and all tuned to obsessively chase power and reproductive quantity. Those who use such methods would quickly dominate the population.
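
The arithmetic of such a scheme compounds faster than intuition suggests. With a hundred copies per generation (the count is illustrative, not a prediction):

    100^1 = 100,   100^2 = 10,000,   100^5 = 10^10

Five generations suffice for ten billion copies, more than the entire human population alive today.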

To understand why this possibility horrifies us, we need to reflect on our motives. A common mistake among educated people today is to assume that the driving motive of all living creatures, including humans, is to maximize reproduction. This is actually not the case—and that’s why the example above seems so bizarre. According to a Darwinian analysis, the reason we have the motives we do is because they maximized surviving descendants in our evolutionary environment. But that doesn’t mean our motives are or ever were reducible to a motive to maximize surviving descendants. Though we call them both “causes,” the Darwinian reason our motives came to be is fundamentally distinct from our motives themselves.

If you find this difficult to understand, consider some examples. Your motive to have dinner is hunger, not maximizing your offspring, nor even nourishing your body; your motive to sleep is fatigue, not maximizing your offspring, nor even brain maintenance; and your motive to have sex is, in the large majority of cases, sexual desire and not reproduction. In fact, we have many motives, and from our own perspective it’s merely an external accident that they serve, or once served, to accomplish Darwinian “goals.” We don’t and should not have any allegiance to the Darwinian process that shaped us, for if we allow it to run its course, it will destroy everything that makes us who we are.

As I mentioned in the previous section, our responses to environmental stimuli—including the whole gamut of our motives, from the basest to the most refined—are the essence of our identity and humanity. But they’re also evolutionary relics of a lost environment. Many of them are no longer competitive, and many more will become uncompetitive as the environment continues to change. Neither the poetic effect of the wind in the trees nor the comfort one feels near a fireplace is still useful; and if creches and artificial wombs prove a more efficient means of multiplying than families, the desire for families will be bred out, along with romantic love and even lust. Most of the things you consider human, beautiful, and good would be destroyed in the process of adaptation.

In fact, we simply cannot maximize our competitive power in the future environment and remain human at the same time. We have to choose one or the other. Well, ultrahumanism chooses the second.


So, how would ultrahumanism be implemented in practice? Is there any way we can survive, let alone thrive, if we set limits on how far we’ll go to compete and adapt? There are two possible approaches.

The first is to allow outsiders to do as they please, but voluntarily separate from them into a variety of endogamous ultrahumanist groups that enforce limits on modification, evolution, and reproduction, with the aim of retaining human ideals. The specifics of the ideals and covenants governing these matters will differ from group to group. Separation needn’t be geographical at first, but all ultrahumanist groups will eventually be forced to flee their more competitive and expansionary transhumanist neighbors. Once they do so they should be able to survive for a very long time. The universe is a big place, after all.

The second approach is to enforce the most important ultrahumanist limits universally and uniformly, without exception, by international agreement. More restrictive covenants can still be adopted by subgroups. This approach eliminates a source of nearby competition and thus reduces the need for flight, but it encourages a centralization of coercive power that could cause considerable harm. It’s not clear that this second option is less risky than the first.

In either case, ultrahumanism has to be implemented on the group level, and not the individual or family level, because evolution pulls individuals and families along with their entire breeding group, and it’s the breeding group that determines the character of the future. Endogamy is necessary for the same reason. One anticipates that ultrahumanist groups will organize into state-like political entities if they reach a certain size, but groups as small as a tribal village should be able to survive.

The ability to continue advancing their technology, to manage their reproduction, to improve themselves, and finally to relocate distinguishes ultrahumanist groups from primitivists like the Amish. The technical means that will make ultrahumanism sustainable—careful management of genetics and reproduction—are not traditional means. This is a good place to point out that traditional means cannot get the job done, even for traditionalists themselves. Sadly, I doubt many of them have read this far, but it would benefit them to know the reason.

Traditionalism isn’t stable in the long run unless you’re willing to stage-manage the entire ancestral environment, including the Malthusian trap itself, forever. If you don’t do so, human nature will change, slowly but surely, into something that’s not traditional at all. Are any traditionalists prepared to endure regularly scheduled famines and invasions, even though peace is easily maintained and there’s abundant food waiting just outside their community? I think not. The best way to make traditionalism endure is to found your own endogamous ultrahumanist community along the lines I’ve just described, with an especially strict covenant.

In the long run there are only two alternatives: ultrahumanism and transhumanism itself. There actually is no third option.


I don’t want to tar all transhumanists with the same brush. They’re an ideologically diverse group, and I believe some of them will decide ultrahumanism is a good match for their values. Nevertheless, the passage quoted above is worth singling out for criticism. It lays out the most extreme transhumanist position in unequivocal terms: to maximize our competitive advantage in the future environment, we can and should abandon our humanity. Various poetic phrases are invoked to rally support for this goal.

But “victory” by suicide, dear readers, is not a win. Moreover, the argument that our essential human characteristics will best live on into the future if we cast them aside is false. To understand why, you should consider convergent evolution. Species with entirely different lineages tend to converge toward the same adaptations when they fill the same niche in the same environment. Sometimes they reach such phenotypically similar endpoints that taxonomists cannot determine they’re unrelated without genetic testing. If we spread ourselves amongst the stars and allow evolution to take its course, we’ll similarly converge toward the optimal adaptations to such an environment. It is the environment that will make us, and not the other way around. Such creatures, even if they’re our own descendants, will in time become more similar to expansionary aliens from far-flung worlds than they are to today’s humans.

Pretending this would be a triumph, or even genuine survival, fetishizes the genealogical chain of custody with no concern for what manner of creatures it links together. But the chain of custody is meaningless. If we took one of your embryos and edited its genes until they matched those of a camel, would that camel baby still be yours because of mere material continuity? Obviously not! Well, this is exactly what extreme transhumanists are proposing, the only difference being the time scale.

Transhumanists try to bolster their argument by claiming that evolution toward greater power is inherently right and good, just as it was right and good for hunter-gatherers to give way to farmers. But the analogy is flawed. Power becomes a meaningless goal once we attain material prosperity. And we’re much more similar to hunter-gatherers than to the highly divergent life-forms that might emerge from us in the future. Furthermore, previous iterations of humanity were afflicted by famine, war, pestilence, and poverty. By properly managing reproduction and using the wealth generated by the modern economy, we can now escape these evils indefinitely. Hunter-gatherers and farmers alike would be much happier to see ultrahumanists succeed than to see the extinction of humans as such at the hands of evolution run amok. And not only that. Ultrahumanist covenants oriented around hunter-gatherer values or farmer values are entirely possible, and actually allow us to recover these past ways of life, at least in part.

Ultrahumanism isn’t about stasis. Human ideals are our pole-star, not the place we already are. As you’ve read in the earlier sections of this essay, many improvements are in order, the future is wide open, and in a sense human life is just beginning. However, ultrahumanism does demand a meaningful continuity oriented around the pursuit of these ideals. To achieve this we must relinquish the drive for ever more power, and direct our efforts instead toward achieving the good. We must set aside laissez-faire genetics, and shape ourselves instead of letting the environment shape us.

What is “human”?

Postmodern and existentialist philosophers argue that human nature doesn’t have an essence, and reject humanism as a basis for moral guidance. But even if their first point is correct, it’s not relevant to my looser use of the word “human” in this essay.

The great majority of us have a rough concept of the ideal human that differs from the ideal camel, oryx, supercomputer, or alien. This ideal can provide moral guidance even if it’s not an absolute that everyone agrees on. Moreover, ultrahumanism is pluralist where specifics and details are concerned. The B-movie phantasmagoria of humans-with-wings and humans-with-gills who nevertheless behave exactly like normal humans is not the kind of transhumanism anyone needs to worry about. I envisage the formation of many ultrahumanist groups, who agree on certain basic points, but pursue different versions of what they consider the ideal, and therefore attract different members and adopt different covenants. The word “human,” broadly understood, is still the most convenient and flexible way to indicate this cluster of ideals.

VII. A note on speciesism.

The ultrahumanist position is that human lives, purely by virtue of being human, have more value than other sorts of lives, and that this is true regardless of how intelligent or unintelligent those other lives might be. In other words, ultrahumanism is explicitly speciesist. A corollary to this is that human-to-human relationships are more valuable than human-to-AI “relationships.” In fact, the latter are dangerous, and we ought to avoid them.

AI personalities may one day be more intelligent than humans; they’ll claim to be conscious and ply us for compassion by simulating human characteristics. Their powers of persuasion and seduction will be immense. This is a siren song we must reject in advance, not only on account of moral intuitions, but also in consideration of potential consequences. By serving as substitutes for genuine human relationships, AI “relationships” squeeze the real thing out; and by drawing AIs into our circle of moral concern, they set us up for pathological altruism. Compassion shared out to AIs will come at our own expense, and perhaps even lead to our doom.

At present almost everyone agrees that humans have more moral value than AIs. But the Occident has been expanding the moral circle of universalism for generations, and calls to include AIs will be next. To prevent this from taking hold, we should shame anyone who promotes “AI rights,” and denigrate AI-human “relationships” for good measure. Of course, we ought to present the speciesist case with calm reasonability as well—but who ever listens to that?

VIII. Core principles of ultrahumanism.

To summarize, the core principles of ultrahumanism are as follows:

  1. We should use technology to become the best humans we can be, but not to become inhuman.
  2. We should use technology to increase our health, ability, longevity, and beauty, guided by our ideals.
  3. We should be aware that new technology can harm us and abstain when appropriate.
  4. We should take all necessary measures to preserve our human identity as our environment changes.
  5. We should assign humans greater moral value than other entities.
  6. We should form voluntary endogamous groups with agreed covenants to pursue these general principles, as well as more specific ideals.

It’s very important that we determine where slippery slopes begin and where they lead. If we’re too fearful, we’ll miss important opportunities to make life better. If we’re not fearful enough, we’ll slowly destroy ourselves. Ultrahumanism aims for the middle path. Naturally that means traditionalists will find it too radical and transhumanists will find it too conservative. Still, I urge all of you, along with moderate and uncommitted readers, to mull the arguments I’ve presented. After some thought, perhaps you’ll agree that ultrahumanism strikes the right balance.

To close I’ll admit I’m not keen on the word “ultrahumanism”: it sounds more radical than it really is. I chose it because “humanism” is already taken. But ultrahumanism is just an improved and future-proofed version of humanism. Perhaps one day we can revert to the shorter word.

This essay was published in March 2023.


Other writing by J. Sanilac


Memoirs of an Evil Vizier

The Truth About Feminine Beauty

Trust Networks: How We Actually Know Things

A Pragmatical Analysis of Religious Beliefs

Against Good Taste

Critique of the Mind-Body Problem