Unpacking the flaws of techbro dreams of the future

    Cutaway view of a fictional space colony concept painted by artist Rick Guidice as part of a NASA art program in the 1970s. NASA/Rick Guidice/Flickr

    This story was originally published by Undark and is reproduced here as part of the Climate Desk collaboration.

    Elon Musk once joked: “I would like to die on Mars. Just not on impact.” Musk is, in fact, deadly serious about colonizing the Red Planet. Part of his motivation is the idea of having a “back-up” planet in case some future catastrophe renders the Earth uninhabitable.

    Musk has suggested that a million people may be calling Mars home by 2050 — and he’s hardly alone in his enthusiasm. Venture capitalist Marc Andreessen believes the world can easily support 50 billion people, and more than that once we settle other planets. And Jeff Bezos has spoken of exploiting the resources of the moon and the asteroids to build giant space stations. “I would love to see a trillion humans living in the solar system,” he has said.

    Not so fast, cautions science journalist Adam Becker. In “More Everything Forever,” Becker details a multitude of flaws in the grand designs espoused not only by Musk, Andreessen, and Bezos, but by Sam Altman, Nick Bostrom, Ray Kurzweil, and an array of tech billionaires and future-focused thinkers whose ambitions are transforming today’s world and shaping how we think about the centuries to come.

    Becker targets not only their aspirations for outer space but also their claims about artificial intelligence, their calls for endless growth, their ambitions to eradicate aging and death, and more, as suggested by the book’s subtitle: “AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity.”

    Becker finds the idea of colonizing Mars easy to deflate, explaining that dying may in fact be the only thing humans are likely to do there. “The radiation levels are too high, the gravity is too low, there’s no air, and the dirt is made of poison,” he puts it bluntly. He notes that we have a hard time convincing people to spend any great length of time in Antarctica, a far more hospitable place. “Mars,” Becker says, “would make Antarctica look like Tahiti.”

    The solar system’s other planets (and moons) are equally unwelcoming, and star systems beyond our own are unimaginably distant. He concludes: “Nobody’s going to boldly go anywhere, not to live out their lives and build families and communities—not now, not soon, and maybe not ever.”

    Becker sees space colonization as not only unrealistic but also morally dubious. Why, he asks, are the billionaires so keen on leaving our planet rather than taking care of it? He interviews the astronomer Lucianne Walkowicz, who sees their focus on killer asteroids and rogue AIs, and their apparent indifference to climate change, as an evasion of responsibility. “The idea of backing up humanity is about getting out of responsibility by making it seem that we have this Get Out of Jail Free card,” Walkowicz says.

    Becker’s critique extends beyond the tech gurus to so-called longtermists (who prioritize the flourishing of humans who will live eons from now), rationalists (who believe decision-making should be guided by reason and logic), and transhumanists (who hold a variety of beliefs related to extending human life spans and merging humanity with AI). These groups perceive the future in a multitude of ways, but underlying many of their visions is what Becker sees as a misplaced faith in artificial intelligence: a technology sometimes imagined to be on the verge of blossoming into “AGI” (artificial general intelligence), yet also potentially perilous if its goals diverge from those of humanity (the so-called alignment problem).

    Not everyone shares this fear of AI running amok, and Becker makes a point of speaking with skeptics such as Jaron Lanier, Melanie Mitchell, and Yann LeCun, all of whom are far from convinced that this is a real danger. He also cites the entrepreneur and web developer Maciej Cegłowski, who has described the fear of unaligned superintelligent AI as “the idea that eats smart people.” Still, the book is not mere AI-guru-bashing on Becker’s part: He spells out what these devotees believe before presenting a more skeptical alternative view.

    Becker also notes that computer power may not be destined to increase as quickly as many proponents imagine. He scrutinizes Moore’s law, the notion that the number of transistors in integrated circuits doubles roughly every two years, noting that this growth will inevitably come up against limitations imposed by the laws of physics. Becker points out that Gordon Moore himself estimated in 2010 that the current rate of exponential growth would come to an end in 10 or 20 years—in other words, now or very soon.
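
    To make the arithmetic concrete, here is a minimal sketch, in Python, of what a strict two-year doubling implies. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is a standard industry reference point; none of these figures come from Becker’s book, and the projections are illustrative only.

        # Back-of-the-envelope Moore's law: transistor counts doubling every
        # two years from a 1971 baseline (~2,300 transistors, the Intel 4004).
        # Illustrative assumptions, not figures from Becker's book.

        BASELINE_YEAR = 1971
        BASELINE_TRANSISTORS = 2_300
        DOUBLING_PERIOD_YEARS = 2

        def projected_transistors(year: int) -> float:
            """Transistor count in `year` under a strict two-year doubling."""
            doublings = (year - BASELINE_YEAR) / DOUBLING_PERIOD_YEARS
            return BASELINE_TRANSISTORS * 2 ** doublings

        for year in (1971, 1991, 2011, 2031, 2051):
            print(f"{year}: ~{projected_transistors(year):.1e} transistors")

        # By 2011 the projection (~2.4e9) lands near real chips of that era;
        # by 2051 it demands ~2.5e15 transistors per chip, which is where the
        # physical limits Becker describes get in the way.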

    As Becker sees it, faith in Moore’s law is just one facet of a poorly thought-out commitment to endless growth that some technophiles seem to be advocating. Exponential growth, in particular, is by definition not sustainable. He cites an analogy that inventor and futurist Ray Kurzweil has made about the growth of lily pads in a pond: Every few days, the number of pads will have doubled, and before you know it they’ve covered the whole pond. “That’s true,” Becker writes, “but that’s also where the lily pads’ growth ends, because they can’t cover more than 100 percent of the pond. Every exponential trend works like this. All resources are finite; nothing lasts forever; everything has limits.”

    Becker says that if our energy use keeps growing at its current rate, we’ll be exploiting the full energy output of the sun within 1,350 years, and a bit more than a millennium after that, all the energy emitted by all the stars in the Milky Way, and so on.
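
    Those timescales can be checked with a few lines of arithmetic. The sketch below is a rough reconstruction, not the book’s own calculation; it assumes humanity currently uses about 2 × 10^13 watts, growing 2.3 percent per year (roughly the long-run historical trend), against a solar output of about 3.8 × 10^26 watts and a Milky Way totaling on the order of 10^37 watts. Under those assumptions the crossover times land close to the figures Becker cites.

        import math

        # Rough check of Becker's exponential-growth timescales. All inputs
        # are assumptions: ~2e13 W of current human power use, ~2.3% annual
        # growth, the sun's luminosity, and a very rough ~1e37 W for the
        # combined output of the Milky Way's stars.

        CURRENT_POWER_W = 2e13
        GROWTH_RATE = 0.023            # per year
        SUN_LUMINOSITY_W = 3.8e26
        MILKY_WAY_LUMINOSITY_W = 1e37

        def years_to_reach(target_w: float) -> float:
            """Years until exponential growth lifts power use to `target_w`."""
            return math.log(target_w / CURRENT_POWER_W) / math.log(1 + GROWTH_RATE)

        print(f"Sun's full output:       ~{years_to_reach(SUN_LUMINOSITY_W):,.0f} years")
        print(f"All Milky Way starlight: ~{years_to_reach(MILKY_WAY_LUMINOSITY_W):,.0f} years")

        # Prints roughly 1,345 and 2,400 years, close to the book's 1,350
        # years and "a bit more than a millennium later."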

    Becker also takes issue with the idea at the core of longtermism—that the needs of countless billions or even trillions of future humans are as important as the needs of those alive on Earth today—and perhaps more important, because of their (eventual) vast numbers. (Many of these ideas are spelled out in philosopher William MacAskill’s 2022 book, “What We Owe the Future.”)

    For the longtermists, our actions today ought to be focused on allowing this bountiful future to unfold, even if it means sacrifices in the here and now. The problem, writes Becker, is that we just can’t know what conditions will prevail centuries from now, let alone millennia, so it’s presumptuous to imagine that today’s decisions can be tailored to benefit people who won’t be born for an unfathomably long time.
