
Transformers and Philosophy


  Several remarkable developments are expected in our own technology within the coming century. The ones most important to the subject at hand will be artificial intelligence and nanotechnology. By “nanotechnology” I mean the ability to build machines with atomically precise, molecular-sized parts. By the end of the century (and most likely, by the end of the 2020s), we’ll have robots who can talk and think like people. Not quite as soon, but still comfortably within the century, we’ll get to a level of nanotechnology that will give us broadly general control over the structure of matter at the atomic level. And in roughly the same timeframe, we’ll become a Kardashev Type 1 civilization.

  Here’s a simple way to be a Type 1 civilization, if you have nanotechnology: put a layer of tiny balloons in the high stratosphere, covering the entire Earth. Each one contains a wisp of aerogel that consists of switchable molecular-scale antenna elements. Taken together, they form an optical phased array with an eight-thousand-mile aperture. You have a planet-sized solar power plant, which is incidentally a planet-sized telescope and laser as well.
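
  To put a rough number on “planet-sized solar power plant,” here is a back-of-envelope sketch (the solar constant and present-day power figure are standard values I am supplying, not the author’s): a collector the size of the Earth intercepts a bit under 2 × 10^17 watts, which is the usual benchmark for Kardashev Type 1 and roughly ten thousand times what our civilization uses today.

```python
import math

# Back-of-envelope figures; illustrative assumptions, not the author's numbers.
SOLAR_CONSTANT = 1361.0    # W per square metre of sunlight at Earth's distance
EARTH_RADIUS_M = 6.371e6   # metres

# A planet-covering collector intercepts sunlight over Earth's cross-section, pi * R^2.
intercepted_watts = SOLAR_CONSTANT * math.pi * EARTH_RADIUS_M ** 2
print(f"Planet-sized collector: {intercepted_watts:.2e} W")            # ~1.7e17 W

# Present-day world power consumption is on the order of 2e13 W.
print(f"Multiple of today's civilization: {intercepted_watts / 2e13:,.0f}x")
```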

  Alternatively, you could launch ten million solar-power satellites, each a square kilometer in size, something that nanotechnology also puts within the range of economic feasibility.

  Sending a spaceship to another star, which would involve accelerating it to some decent fraction of lightspeed, would use more energy than our current civilization can produce in a year. But it would be doable for a Type 1 civilization, if still a bit expensive.
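
  A rough, hypothetical calculation makes the point (the ship mass and speed here are my assumptions, not figures from the text): the kinetic energy alone of a modest starship at a tenth of lightspeed is already close to a year of present-day world energy production, and a real mission would need several times that once propellant and inefficiency are counted.

```python
# Illustrative assumptions: a 1,000-tonne probe cruising at 10% of lightspeed.
SHIP_MASS_KG = 1.0e6            # 1,000 tonnes
SPEED_M_S = 0.1 * 3.0e8         # 10% of c (relativistic corrections ignored)

kinetic_energy_j = 0.5 * SHIP_MASS_KG * SPEED_M_S ** 2      # ~4.5e20 J

WORLD_ANNUAL_ENERGY_J = 6.0e20  # rough current world primary energy per year
print(f"Ship kinetic energy: {kinetic_energy_j:.1e} J")
print(f"Years of world energy output: {kinetic_energy_j / WORLD_ANNUAL_ENERGY_J:.2f}")
```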

  So it’s hard to avoid the conclusion that the civilization the visiting robots come from is at least Type 1. Having reasonable predictions that our own civilization will reach Type 1 sometime in the coming century, including fairly detailed technical roadmaps of how we can do it, allows us to make a good estimate of what the originating civilization would be like, at least as regards their technological capabilities. They’ll have true artificial intelligence (unlike our current planetary explorer robots, which are not too smart), and nanotechnology.

  Physical Bodies

  Probably the most compelling feature of the Transformers of fiction is their shape-changing bodies. The bodies combine two different essences that are powerful and, more importantly for the movies, eye-catching. They depict complicated mechanisms with many, many moving parts, and they alter radically in overall form. This makes them seem like living machines.

  We’re used to seeing machines with lots of moving parts, and we know how fast and capable they can be. Watch a printing press sometime: a decent-sized one can produce several books or newspapers per second. Imagine how long it would take you to write out several books by hand. To anyone with a gut-level understanding of technology, a machine with thousands of smoothly interacting moving parts conveys the impression of an almost limitless capacity to do something.

  Our machines, however, don’t change their overall shape much. The most that real cars do is open and shut their doors, with a very few that can also raise and lower their roofs. We humans, and indeed most animals, on the other hand, can’t even move without changing shape. We change shape to walk, sit, pick up things, climb trees, and so forth. Watch a bear change from a big furry cushion, curled up for sleep, into a powerful hunting machine. Watch a moth change from an almost invisible lump amidst the tree bark to a multi-winged flying marvel.

  Put these two notions together and you get the sense of something transcendent, with both the power of the machine and the ineffable spark of the living creature. This is in fact what we can expect of the bodies of our visitors. There’s only one minor caveat.

  For visual appeal, the Transformers are shown as being made of parts about the size of, and moving at about the speeds of, the parts of machines we’re used to. But in reality, our most advanced machines already use parts that are too small for the naked eye to see and that operate at billions of cycles per second. Our own bodies have moving parts, the molecular machines inside the cells, that operate on that scale if not quite at that speed.

  A robot built with nanotechnology would, to an outside view, be as seamless and fluid as a living creature. Internally, it would consist not of thousands, or even millions, of moving parts, but trillions, just as your own body does. But the parts would be thousands of times faster than your body’s proteins, hundreds of times stronger, and capable of operating in a much wider range of temperatures, pressures, and atmospheric compositions (including none at all).

  A human-sized robot, with human strength, built with nanotechnology, could fold up into a package the size and weight of a ballpoint pen. A human-sized nanotech robot with human weight could be stronger than a locomotive and fly faster than a speeding bullet.

  Minds

  Our most advanced machines today are our computers. I’m writing this essay using a computer that is frankly overkill for the task. If you took the technology of just a century ago, and tried to build a computer of equivalent processing power using electromechanical parts such as relays and gears, you’d need more than the budget of the entire United States (then), and the machine would fill a fair-sized city.

  Our computers today are within hailing distance of the biological technology of our brains in size and speed (though they are nowhere near as efficient in power usage yet). We can’t yet build a computer with the processing power of the brain in a package the size of the brain, but such a machine would fit in a bedroom today. Given the historical trend of computing power per unit size, known as Moore’s Law, we should expect computers to match the brain’s power at the brain’s size in about ten years.
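
  As a rough illustration of that extrapolation (the operation counts and doubling time below are common guesses I am supplying, not the author’s figures): if the brain performs something like 10^16 operations per second and brain-sized hardware today manages a hundredth of that, then a Moore’s-Law doubling every eighteen months closes the gap in about ten years; different guesses shift the answer by a few years either way.

```python
import math

# Illustrative guesses, not the author's figures:
BRAIN_OPS_PER_SEC = 1e16       # rough estimate of the brain's processing power
CURRENT_OPS_PER_SEC = 1e14     # what brain-sized hardware might manage today
DOUBLING_TIME_YEARS = 1.5      # a Moore's-Law-style doubling period

gap = BRAIN_OPS_PER_SEC / CURRENT_OPS_PER_SEC
doublings = math.log2(gap)
print(f"{gap:.0f}x gap -> {doublings:.1f} doublings -> ~{doublings * DOUBLING_TIME_YEARS:.0f} years")
```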

  Then add another century of development to that.

  We can reasonably expect our human-sized, human-strength robots to have humanly smart brains that fit into the ballpoint-pen package as well.

  But of course, nobody’s going to stop there. If a century of progress can put a city’s worth of computing into a desktop box, a century from now we’ll have a city’s worth of human thinking in a desktop box. A “big brain” robot the size of, say, a tractor-trailer truck, could have a mind with the intellectual power of the entire current human race.

  We can’t come close to saying what the optimum configuration of these possible bodies and minds would be. The ten-ton supermind would use about as much material as a million ballpoint-pen (human size, human strength) robots. Obviously you’d want some balance between wise, insightful governance on the one hand, and distributed manipulating and sensing capability on the other. The human brain is about two percent of total body mass, but this is quite a bit higher than for most other animals. Let’s suppose the evolutionary trend continues and the percentage of brain goes up in the interstellar robots. Then we could imagine a two-level society, consisting of physical robots that did things with matter and energy and who typically had roughly human intelligence (and a two-percent brain-to-body ratio), and “big brains” which were much less mobile but did the heavy intellectual lifting (taking up, say, another two percent of the overall mass of the robots, for a total of four percent). To some extent our society is already trending in this direction: the “brain” part consists of everybody who works in an office.
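
  A quick sanity check on those mass figures (the pen weight is my own illustrative assumption): a ballpoint pen weighs on the order of ten grams, so a million pen-sized robots come to about ten tonnes, the same material budget as the truck-sized supermind; and budgeting another two percent of total mass for the “big brains” simply doubles the human two-percent brain-to-body ratio.

```python
# Illustrative assumption: a pen-sized, pen-weight robot masses about 10 grams.
PEN_MASS_KG = 0.01
robots = 1_000_000
print(f"A million pen-sized robots: {PEN_MASS_KG * robots / 1000:.0f} tonnes")  # ~10 tonnes

# Mass budget in the imagined two-level society:
onboard_brains = 0.02   # per-robot brains, matching the human ratio
big_brains = 0.02       # shared, less-mobile "big brains"
print(f"Total 'brain' share of the civilization's mass: {onboard_brains + big_brains:.0%}")
```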

  Society

  To all appearances and best estimates, we will build such robots within this century. They will then proceed to build other robots themselves, and so forth, producing a society much more diverse than our current human-only one. There would be a wide range of different body types in the robots who did physical work. There would be insect-sized robots who kept the floor clean, and robots the size of cities who were interstellar spaceships—and everything in between. But will the robots’ home civilization have any humans (or other biological forebears) at all?

  There are at least two reasons to think probably not. First is that given the ability to build robot bodies and brains of the kind we’ve been talking about, the most adventurous and ambitious of people will be the ones to copy their minds over into electronic form—“uploading.” This is particularly true of anyone who wants to go into space, but the prospect of having a bigger, faster brain will be a powerful draw for anyone. Although there will be plenty of people who want to remain in biological human form, they will become essentially the Amish of the civilization as the human uploads, pure AIs, and mixtures of the two continue to improve in productive and intellectual capacity. Nothing nefarious would necessarily happen to them—they would just become a steadily dwindling proportion of the total.

  The second reason is simply that it’s almost certain to be the robots, not biological humans, exploring and settling new worlds and star systems. Thus even if the original civilization retains many organic citizens, their colonies and colonies of colonies and so forth will probably be all-robot. These will be more numerous than the original worlds, and will form the outer and expanding parts of any civilization’s sphere of influence, so they are where we expect our visitors to come from.

  But why expand in the first place? Interstellar travel is expensive, so much so that our current civilization can’t do it at all. Part of the answer is simply evolutionary: given a few civilizations, some with the itch to explore and colonize and others without, come back in a few centuries and the galaxy will be full of the explorer types and not the others. But even if you don’t have that territorial urge, it might be a good idea to settle a buffer zone with gentle, peaceful cultures that are compatible with yours before the noisy, gung-ho types take them all.

  Culture

  It’s tempting to say that a civilization that spent all its time fighting wars within itself wouldn’t have the time or resources to waste on interstellar exploration, and so if we get visitors from the stars, they’ll be from a unified, peaceful culture. But unfortunately history says differently: the great age of exploration and colonization in the sixteenth and seventeenth centuries originated in Europe, where there were numerous countries often at war with each other, while China, though unified, peaceful, and technologically more advanced, sat in self-imposed isolation.

  So we should expect our visitors, if not necessarily out-and-out warlike, to be from a competitive culture. But what else can we say about such a culture? Will it be greatly different from a human one because it’s composed of robots and not biological humans?

  One obvious way robots might differ from humans is in their reproductive arrangements. For example, it might be much more efficient for robots to build factories, which then build robots, which build factories, and so on, than the human method. When species of biological life follow this pattern, it is called alternation of generations. One could think of ant colonies and bee hives as pursuing a similar strategy. The social insects exhibit an enormously higher group loyalty than humans, and also a fairly vicious inter-hive rivalry.

  But there are also pressures that are likely to make the robots more like humans. The most important is that they will be intelligent. Consider the technological advantages the European explorers had over the native Americans in the Age of Exploration: perhaps more important than the guns, germs, and steel was the printing press. In European culture, knowledge propagated, was tested, and improved much faster. The Europeans were smarter, not because of anything about their physical brains but because those brains were populated by more advanced ideas. Ideas evolve in a way that is not necessarily linked to the physical substrate, and it is the evolution of ideas—memetics—that determines the ultimate shape of a culture.

  The printing press was a quantum jump up in communication from manuscripts and word of mouth; the internet is a quantum jump up from the printing press. A robot culture would be connected by communications of such high bandwidth that the only thing we have to compare them with in our experience is the corpus callosum, the connection between the two hemispheres of your brain. (Its bandwidth is just about the same as a current-day high-speed Ethernet connection.) Imagine a society of geniuses who could co-operate as seamlessly as the two halves of your brain do. That’s what a robot society would be like.
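
  A rough, hypothetical check on that comparison (the axon count and per-fibre information rate are assumptions I am supplying, not measurements from the essay): a couple of hundred million callosal fibres carrying a few bits per second each works out to about a gigabit per second, the same order as a fast Ethernet link.

```python
# Illustrative assumptions about the corpus callosum, not figures from the text:
AXONS = 2.0e8                  # roughly two hundred million fibres
BITS_PER_AXON_PER_SEC = 5.0    # a few bits per second of usable information each

bandwidth_bits_per_sec = AXONS * BITS_PER_AXON_PER_SEC
print(f"~{bandwidth_bits_per_sec / 1e9:.0f} Gbit/s, comparable to gigabit Ethernet")
```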

  Technology

  Why would a solar-system-wide civilization of robots—or, more precisely, some faction of such a civilization in competition with the others—send representatives to Earth? There might be many reasons, but given the dynamics of the evolution of interstellar cultures, there’s a very high probability they would be prepared to take any opportunity to reproduce their civilization around the new sun. The galaxy is simply much more likely to be populated by cultures that do that, than by ones that don’t.

  Before proceeding, it’s worth reiterating the difference in capability between our civilization and one that could send starships. It’s difficult to convey the power, speed, and pure volume of effect a combination of artificial intelligence and nanotechnology would bring to bear. Imagine that Columbus had arrived in the West Indies, not with three wooden sailboats, but with a nuclear carrier task force—but one where the technology was aimed at exploration and development instead of fighting another navy. They’d have chainsaws instead of rifles, earth-movers instead of tanks, well-drilling equipment instead of artillery, and prefab factories that built more prefab factories. Now imagine that all of the islands were uninhabited, except that one of the smaller ones was home to a band of monkeys.

  In other words, given the technological mismatch, we couldn’t put up enough resistance for them to be able to “conquer” us in any meaningful sense. It would be more a question of whether the footnote beginning “Incidentally, the indigenous life forms of the third planet were . . .” ended with the word “displaced” or “preserved.”

  Ethics

  Probably the best we could hope for would be that the small island Earth with us monkeys on it would be left alone as a nature preserve. Luckily for us, that turns out to be a fairly minor concession. Science-fiction writers have a propensity to imagine biological visitors who would find the surface of a life-bearing planet useful. But given that the visitors are robots with polymorphic bodies, they will need only matter and energy to rebuild their civilization and send their descendants on to other stars.

  Most of the matter and energy in the solar system is not on Earth. Virtually all of the available energy comes from the Sun, and the Earth gets less than one part per billion of the Sun’s output. Most of the matter (outside the Sun itself) is in the large outer planets, starting with Jupiter. But the low-hanging fruit, so to speak, is in the smaller planets, moons, and asteroids. Mercury is an obvious place to start, with relatively low gravity and close proximity to the Sun for power. Smaller than Earth but closer to the Sun, it picks up about the same amount of solar power as the Earth does: the Kardashev Type 1 level. Or if you disassembled it and used the mass to build solar collectors in its current orbit, it would provide a hundred million times as much energy, well on the way to Type 2.
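
  The “about the same” claim is easy to check with a rough calculation (the planetary radii, orbital distance, and solar output below are standard values I am supplying, not the author’s): intercepted sunlight scales as a planet’s cross-section divided by the square of its distance from the Sun, and for Mercury the two effects almost exactly cancel.

```python
import math

# Standard astronomical figures, supplied for illustration:
SOLAR_CONSTANT_AT_EARTH = 1361.0        # W/m^2
EARTH_RADIUS_M, MERCURY_RADIUS_M = 6.371e6, 2.440e6
MERCURY_ORBIT_AU = 0.387                # mean distance from the Sun, in Earth-distances
SUN_OUTPUT_W = 3.8e26

# Intercepted power scales as radius^2 / distance^2.
earth_power = SOLAR_CONSTANT_AT_EARTH * math.pi * EARTH_RADIUS_M ** 2
mercury_power = earth_power * (MERCURY_RADIUS_M / EARTH_RADIUS_M) ** 2 / MERCURY_ORBIT_AU ** 2
print(f"Mercury/Earth intercepted sunlight: {mercury_power / earth_power:.2f}")   # ~1

# Earth intercepts well under a billionth of the Sun's output; that headroom is
# what a collector swarm built from Mercury's mass could begin to tap.
print(f"Earth's share of the Sun's output: {earth_power / SUN_OUTPUT_W:.1e}")
```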

  So for the robots to avoid wiping us out, they would only have to have a fairly minor moral concern for indigenous biological species. It’s not as if we had anything they really needed.

  Why might they have such moral concerns? It seems likely that they will have descended from biological creatures similar to ourselves—that’s the only way we know that they might exist in the first place. Their ancestors are reasonably likely to have given them at least a human level of ethics—if for no other reason than that many of the robots will originally be uploads (and besides, as robot ethicist Ronald Arkin points out, it’s a pretty low bar).

  Let us think of an ethical scale for civilizations in the same spirit as the Kardashev scale. I’ll modestly call it the Hall scale. The Hall scale rates a civilization’s ethics in terms of how many individuals can co-operate productively without breaking out into physical warfare. For primitive human hunter-gatherers, who lived in tribes of two or three hundred that were constantly at war with each other, the number is on the order of one hundred people. For current Western civilization, it’s about a billion. We’ll simply use the logarithm of the number of co-operating people (or robots!) as our scale, so the hunter-gatherers have Hall Level 2 ethics and our globalized economy has Level 9 (there are more than a billion people on Earth now, but we don’t all co-operate!).
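
  Expressed as a formula, the level is simply the base-ten logarithm of the size of the largest group that co-operates without fighting. A minimal sketch:

```python
import math

def hall_level(cooperating_individuals: float) -> float:
    """Hall-scale ethics level: log10 of the largest productively co-operating group."""
    return math.log10(cooperating_individuals)

print(hall_level(1e2))    # hunter-gatherer band            -> 2.0
print(hall_level(1e9))    # today's globalized economy      -> 9.0
print(hall_level(1e10))   # a whole Kardashev Type 1 world  -> 10.0
```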

  A Kardashev Type 1 civilization probably needs Level 10 ethics or more to sustain the kind of co-operation necessary to launch interstellar probes. Over the past century, in moving from the colonial era to our present world (or at least Western) culture, we’ve moved up from about Level 8 to Level 9. In doing so, our moral concern for things like indigenous peoples, species preservation, and the like has increased dramatically. It is at least not unreasonable to hope that a civilization with Level 10 ethics would be concerned enough about such things to make the minor accommodations necessary to let us survive.

  I can already hear the outraged objections of any philosophers who may ultimately read this. There is a lot more to a system of ethics than the number of people who co-operate within it. Ethical development doesn’t proceed in a straight line, such that by measuring one aspect (number of cooperators) you could say anything about some other one (concern for other lifeforms). Yes, indeed, the Hall Scale is a very crude way of measuring ethics.

  Yet there are some strong overall correlations between the number of co-operators and the other concerns we’re worrying about. In a hunter-gatherer society, there is a major distinction between a member of your own tribe (not to be killed) and someone from another (kill if you can get away with it). The higher the level of ethics on the scale, the more people have to be in the “own tribe” category, and concomitantly, the more kinds of people. People with different accents, who wear different clothes, and eat different kinds of food. People who speak different languages and worship different gods. Even people of different races and sexes fall under the expanding umbrella of inclusiveness.