  6

  Can Metal Be Mental?

  MATTHEW PIKE

  When we think of minds, or of ‘events in the mind’, such as thoughts or feelings, we think of something ephemeral, something wispy and hard to grasp. And yet as science has continued to march on, our views about what a mind is are now intimately connected with what goes on in the brain—a purely physical part of the body.

  Perception, reasoning, and even consciousness are commonly traced to physical goings-on in our central and peripheral nervous systems. Is it possible then that something made not of carbon-based molecules, like our brains, but rather of metal, silicon, or some other non-biological material, could possess what we would call mind?

  Even our most powerful computers, despite their amazing capabilities, do not have minds. They have microprocessors that can do incredible things, ranging from creating immersive games like World of Warcraft to performing state-of-the-art climate modeling and DNA analysis. But they do not think for themselves, feel pain, or reflect on their lives. We might wonder, then, whether it’s even possible for machines to actually have thoughts or feelings.

  If we encountered machines, say, from the planet Cybertron, that seemed to be having these kinds of experiences, how could we know that they actually had mental lives, and were not just programmed to act as if they did? The Transformers provide an excellent example of what machines with minds might be like. Thinking about the Transformers raises some fascinating questions about what it is to be a person and what it means to have a mind. It may also help prepare us to meet something that looks as if it’s intelligent—whether this is something we humans have built or something that shows up one day on our planet, looking for energon cubes or the All-Spark.

  What is it about the Transformers that makes us more inclined to believe that they have thoughts and feelings than that our laptops or cell phones do? What evidence could humans have that Transformers actually do have minds? For that matter, can you ever be sure that anyone besides you has a mind? The problem presented by this last question, known to philosophers as ‘the problem of other minds’, is that it seems that any action, facial expression, or language use that another person or machine might exhibit could be nothing but the programmed response of an unthinking thing. So how do we tell the difference between something that does have a mind and something that doesn’t? To answer this question, we need to have a sense of what a mind is, so that we can know what counts as evidence for or against its presence.

  One of the first things that comes to mind (no pun intended) is to say that a thing has a mind if and only if it acts in certain ways. René Descartes, for instance, thought that mechanical things (things without souls, or minds) could do everything that things endowed with a mind can do, except for using language and solving unexpected or novel problems. Only a special sort of non-physical, God-given thing could do that.

  A very different view of the mind became popular among certain philosophers, known as Logical Behaviorists, in the early and mid-twentieth century. These philosophers maintained that mental things (such as thoughts) just are doing certain things in certain circumstances, or being inclined to do those things. They argued that being in a certain mental state is nothing but having a disposition to engage in certain observable behaviors when presented with certain sense input. So, being happy is the same thing as being likely to smile, saying that you are happy when someone asks how you are doing, having a certain positive tone in your voice when speaking with other people, and so forth. The behaviors that you engage in when something causes you pain are all that pain is. These behaviors can be very complex, though, and, as we know, they can be very unpredictable: different people respond differently, and even the same person responds differently to the same thing in different situations.
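  To see how austere this proposal is, it can help to sketch it in code. The following Python fragment is only an illustration, and every stimulus, behavior, and table entry in it is invented; the behaviorist’s claim is that a mental state like happiness is exhausted by a mapping of this general kind, with nothing further going on inside.

```python
import random

# A deliberately crude behaviorist model: a "mental state" is nothing
# over and above a disposition to produce certain behaviors given
# certain stimuli. All entries here are invented for illustration.
HAPPY_DISPOSITIONS = {
    "greeted by a friend": ["smile", "wave"],
    "asked how you are doing": ["say 'I'm doing great!'"],
    "hears favorite song": ["hum along", "tap feet"],
}

def behave(disposition_table, stimulus):
    """Return a behavior for a given stimulus. On the behaviorist view,
    having the mental state just *is* having this mapping; there is no
    further inner fact to point to."""
    options = disposition_table.get(stimulus, ["do nothing in particular"])
    # random.choice is a crude stand-in for the real-world variability
    # the behaviorists acknowledged: the same stimulus need not always
    # produce the same response.
    return random.choice(options)

print(behave(HAPPY_DISPOSITIONS, "greeted by a friend"))
```

  Of course, no behaviorist thought the mapping was this simple; the point of the sketch is only that, on their view, filling in a vastly larger table of this shape would be filling in everything there is to say about a mind.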

  According to this theory, you know that your roommate has a mind because your roommate does certain things, like talking to you about his or her favorite movie, choosing what to eat for dinner and preparing it, and saying “Ouch!” when stubbing a toe on the coffee table. In the same way, perhaps we can tell that the Transformers have minds because Bumblebee makes grunts and groans when being captured and tormented by the agents of Sector 7, Starscream repeatedly takes actions to try to gain more power and control, and Optimus Prime asks for help from humans in foiling the plots of the Decepticons. These all seem like behaviors that only things with minds can do.

  Testing 1—2—3

  Out of this identification of the mind with behaviors, a test was suggested by Alan Turing (now unimaginatively called the “Turing Test”) that aims at determining what it would take for an objective investigator to judge that a machine was thinking. The test involves a human’s asking questions of another human and of a computer, robot, or some other mechanical being, without knowing which is which (so, no peeking: the idea is to judge on verbal responses alone, as Descartes would have us do, not on looking like a person or otherwise observing them). If the human tester cannot reliably tell which is the human and which is the artificial system based on responses to a series of conversational questions, then, according to Turing, the system has demonstrated intelligence, that is, real thinking.
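  The protocol itself is simple enough to sketch in a few lines of Python. Everything here is an invented placeholder (in particular, the two respondent functions just parrot a stock answer); the hard part, which the sketch cheerfully omits, is writing a machine_respondent that could actually survive the judge’s questioning.

```python
import random

def human_respondent(question):
    # Placeholder: stands in for a real person answering freely.
    return "Well, it depends on what you mean by that..."

def machine_respondent(question):
    # Placeholder: stands in for the program under examination.
    return "Well, it depends on what you mean by that..."

def turing_test(judge, human, machine, questions):
    """One round of the test: the judge sees two transcripts, labeled
    A and B and assigned at random, and must say which is the human.
    Appearance plays no role; only verbal responses are judged."""
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:  # hide which is which
        respondents = {"A": machine, "B": human}
    transcripts = {
        label: [(q, responder(q)) for q in questions]
        for label, responder in respondents.items()
    }
    guess = judge(transcripts)  # the judge returns "A" or "B"
    return respondents[guess] is human  # True if the judge was right

def careless_judge(transcripts):
    # Placeholder judge, no better than chance; a real judge would
    # scrutinize the transcripts for signs of rote response.
    return random.choice(["A", "B"])

trials = 1000
wins = sum(turing_test(careless_judge, human_respondent,
                       machine_respondent, ["What is a mind?"])
           for _ in range(trials))
print(f"judge identified the human in {wins}/{trials} trials")
```

  A machine passes, on Turing’s criterion, when even an attentive judge can do no better than the chance-level guessing shown here.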

  The Turing Test works on the assumption that indistinguishable input and output patterns can be safely presumed to indicate roughly indistinguishable degrees of mind. In other words, if a system acts as if it has a mind, then it must have a mind. This test is by no means universally accepted as satisfactory, but as a quick justification of its value, we should note two things. First, we all seem to intuitively employ what amounts to a modified version of the Turing Test when determining whether we are dealing with other minds. This seems especially likely during our childhoods, in which we at some point learn to distinguish between objects and persons, and then devise (albeit unconsciously, most of the time) a series of conditions that a thing must meet in order to be identified as a person. These conditions will be relative to a prototypical person (most likely one’s mother or father) that then serves as the point of comparison for all future encounters. Granted, by the time that we have performed this evaluation several hundred times, and have noted that each and every “object” we encounter with human shape, appearance, and movement has met the conditions, this process becomes increasingly automatic, but it seems that, at bottom, we are performing something like Turing’s test all the time.

  Second, Turing competitions are still held annually, because this behavioral test is quite a challenge, and the possibility of writing a program that can pass it continues to absorb many an Artificial Intelligence investigator. Although machines have become good at chess, and at other “rational” kinds of thinking, no machine so far has convinced humans set on making the distinction that it is in fact thinking, though programs have come closer over time. If we don’t consider the Turing Test to be a satisfactory test of whether there’s a mind, then we ought to be able to point to some other way of deciding that our fellow humans have minds. It seems unlikely that we will be able to come up with a method that does not depend on observing their behavior.

  In earlier eras, the most popular approach was to judge whether something has a mind based simply on appearance. Only beings of human form, and that very narrowly defined, were believed to possess minds. Famous disputes over “changelings” in the medieval period, and indeed into the Enlightenment, were waged over whether babies born with severe deformities were in fact human, and the treatment of individuals with such diseases as neurofibromatosis (Elephant Man’s disease) reveals that the pervasive attitude was that individuals who did not look sufficiently like the prototype human surely could not think or feel. Remnants of such views indeed still remain, with many people continuing to maintain that primates other than humans, regardless of their problem-solving abilities, tool use, or other complex behaviors (never mind animals such as octopuses, that look so very different from us), cannot really be thinking.

  Surely at least part of the reason for our willingness to accept that Transformers are thinking beings is attributable to the fact that they have roughly the same physical shape as us, speak our language, engage in similar interactions, and even appear to have the same emotions as us. Someone even apparently felt the need to give Optimus Prime lips in the 2007 movie so that his facial expressions could more closely resemble ours. Although this is a natural and common association, we can see that physical appearance is the least sophisticated criterion that one might use for determining whether something can think. At least observing a thing’s behavior broadens the pool of those who might be considered to have minds; with this approach, even entities who do not resemble human beings could be understood to have minds, if they act as though they do.

  The Chinese Room

  There seems to be a rather obvious problem, though, with trying to decide whether something has a mind merely by watching the way it behaves. Despite the fact that it hasn’t yet been done, can’t we conceive that as technology improves, a computer could be specifically programmed to do things that would give every appearance of its having a mind?

  Suppose some brilliant and compulsively industrious programmer were to program in every conceivable bit of information on a particular topic, and successfully anticipate every question that an examiner might ask. (This is, in fact, how the Turing competitions are currently run: the computers being tested are “interviewed” on only a select area of concern, such as baseball or the stock market, since the more general challenge of building a machine that can converse intelligently across all areas of discourse has been set aside for the time being.) Such a computer would be able to pass the test, but it wouldn’t actually be thinking. The reason we would be inclined to this opinion is that we can see that all this inputting and outputting would be done without the computer understanding what any of it meant.

  This is the argument made by philosopher John Searle.1 He illustrated his point by way of a clever thought experiment: he imagined a human being locked in a room where the only opening is a slot through which written messages are passed. The human inside does not speak, read, or understand any components of the Chinese language, and yet is able to successfully process messages passed into the room that are written in Chinese, through the use of a collection of Chinese characters (a database) and a rulebook (a program) which, written in a language that the person does understand, gives step-by-step instructions for what output to send back, given any particular input. The rulebook does not in any way reveal the meanings of any of the Chinese characters; it simply specifies which set of symbols, meaningless to the man inside the room, is to be returned, based on the shapes of the written lines, curves, dots, and squiggles.
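  A toy version of the rulebook makes the situation vivid. The handful of entries below is invented, and a workable rulebook would have to be astronomically larger, but the principle is the same: the program matches symbols purely by shape and returns the prescribed output, and at no point do the symbols’ meanings enter into the process.

```python
# A toy Chinese Room rulebook: pure shape-matching, with no access to
# meanings. The entries are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # matched purely by shape
    "今天天气怎么样？": "今天天气很好。",
}

def chinese_room(message):
    """Follow the rulebook blindly: look up the incoming characters and
    hand back the prescribed reply. Nothing here knows what the
    characters mean, yet the replies look fluent from outside."""
    return RULEBOOK.get(message, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # a fluent-looking reply, zero understanding
```

  From outside the slot, the exchanges may be indistinguishable from conversation with a native speaker; inside, there is only lookup.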

  Searle points out that the room in this thought experiment may be able to pass the Turing Test without the human inside understanding any Chinese, or being aware of any of the content of the “discussion” that has taken place. Searle takes the conceivability of this situation to show that no simple rule-governed manipulation of symbols (as a computer is restricted to doing) is sufficient for meaning.

  It’s the lack of meaning, of true understanding, in this kind of blind instruction following, that accounts for the failure of the machine to think. As Searle famously put it, “syntax is insufficient for semantics.” No computer program, regardless of whom it can fool, Searle insists, is sufficient to give a system the understanding that is essential to having a mind. So, even though a Transformer might be able to perform all the actions and provide all the responses we might expect from a minded being, that fact would, according to Searle, be no proof that it actually possessed a mind.

  The situation that Searle lays out points to one of the reasons why it’s difficult to work with the concepts involved in talk about minds: there are two different perspectives from which we look at minds. First, there’s the external, third-person perspective from which we see other people. We see the way they behave, hear them give verbal reports about what they are thinking, feeling, and so forth, and from this we come to the conclusion that they have minds. We don’t directly experience their minds. All we actually experience are the external results that would come from their having minds (in other words, evidence consistent with their having minds).

  Even when we examine state-of-the-art brain scan results, we are only looking at neurons in action—we are not looking at the mind of the subjects. All of this is very different from the way we come to know that we have minds ourselves: we seem to know this directly. We have an internal, first-person perspective on our own mental states. We seem to have direct, privileged access to our own thoughts, feelings, desires, hopes, fears, beliefs, and consciousness, in a way that no other person can have. It seems that I cannot experience what it feels like for you to smell a rose, or enjoy your favorite Transformers episode. I may have very similar experiences, but they are my experiences, not yours.

  So, since none of us can be a Transformer, we are left with the question of whether we can decide if Optimus Prime has a mind based on our own view of him (what philosophers call an epistemological question), or based on his having the right internal experience, regardless of whether we can ever know it or not (which is what philosophers call a metaphysical question).

  The Spark Within?

  As we have seen, Descartes thought that only beings with minds could perform certain kinds of feats, including using language and solving novel problems. In order to rise above the capabilities that simple mechanical nature accounts for, to the level of language use and reason, Descartes thought that we must have some special element bestowed by God—an immaterial mind, or a soul.

  In the opinion of Descartes, Searle, and countless others, it’s not enough to behave in certain ways, because what makes us willing to say that a human or a Transformer has a mind is the idea that something is going on inside of them. When we see someone in pain, we observe their behaviors, such as taking pain medication, or pulling their hand away from a hot stove, and from this we infer that they are feeling pain. And we seem to know from our own internal experience that there is a host of internal feelings that go along with our behaviors. When you see someone you are in love with, you do not just automatically start behaving in certain ways; rather, you have thoughts and powerful feelings, and these thoughts and feelings cause you to act in the ways that you do.

  One of the reasons that Behaviorism became popular is that it seemed to offer an appealing alternative to the view put forth by Descartes, which came to be known as Substance Dualism. The problems with Substance Dualism were many, and were noted from the moment, back in the seventeenth century, when Descartes published his Meditations on First Philosophy. For one thing, there appears to be no way for mind, a completely non-physical thing, to interact with physical things (like one’s own body), with which it has absolutely nothing in common. For another, there appears to be no way for one to get a mind other than to have it bestowed on him or her by God. This was objectionable for many thinkers in the scientifically revolutionary late nineteenth and early twentieth centuries, when the theory of evolution, along with developments in mathematics and physics, was becoming popular.

  To some philosophers, Behaviorism looked like an improvement over Descartes, but the problem with Behaviorism is that it leaves out altogether the immeasurably important internal mental states that we all recognize in ourselves. By defining mental states as nothing but behaviors and dispositions to behave, it completely fails to recognize the internal thoughts, feelings, and beliefs that many see as central to the nature of mind.

  These apparently non-physical elements that we tend to attribute to minds lead many people to think of something very similar to a ‘soul’, regardless of their enthusiasm for the developments of science. According to this common view, there is something else besides the physical body that composes the individual, and it is that which makes people different from inanimate and mindless things like tables and chairs.

  This special internal something is a candidate for what the Transformers have that our current computers do not. The 2007 movie, Transformers, seems to portray something similar to this special something. When the All-Spark inexplicably brings cell phones, steering wheels, and vending machines to life with a single zap, something happens. What is it, and how does it happen? Just what does the zap do that instills non-living, non-mental gadgets with life and the will to fulfill something like personal desires or goals? Is it some kind of divine magic that gives the Transformers souls and minds, or is it some combination of energy and programmatic information that accomplishes this feat? Whatever it is, the All-Spark obviously imparts something more than the ability to perform as the man in Searle’s Chinese Room does. Once hit with that force, machines are able to ‘think for themselves’.

  Physicalism

  Some of us would prefer the first sort of explanation, on which the All-Spark imparts some combination of programmatic information and physical energy to get things started, to the idea that it bestows something like a soul on the machines it empowers, because at least with the first option there is some hope of finding an explanation for how minds arise. In the second case, the existence of a mind remains a mystery, something about which no more can be said. For that reason, many of us tend to think that the view known as “Physicalism” makes better sense as an explanation of what minds are than does the Cartesian soul interpretation of mind.

  According to Physicalism, having a mind is nothing but having the right kind of physical system (which in our case is our brain, together with other parts of our central nervous system, extending to our sensory receptors). This Physicalist view comes in many different forms. For instance, some philosophers who hold this view think that it doesn’t matter so much what the physical system is made of, or how it is set up, but rather that it is set up so that it performs certain functions. According to these philosophers, it is the function that matters, not the particulars of the physical system. So, whether it is instantiated, or “realized”, in organic matter, silicon, or some as-yet-undiscovered substance, as long as it does the jobs that minds do (believing, perceiving, intending), it is a mind.
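  Programmers may recognize in this “functionalist” version of Physicalism something like the idea of an interface or abstract type: what matters is that the right operations are supported, not what implements them underneath. The sketch below is only an analogy, and every class and method name in it is invented, but it captures the functionalist slogan that mind is multiply realizable.

```python
from abc import ABC, abstractmethod

# A sketch of "multiple realizability": what makes something a mind,
# for the functionalist, is the functional role it plays, not the
# stuff it is made of. All names here are invented for illustration.
class Mind(ABC):
    """Defined entirely by what it does, not by its substrate."""
    @abstractmethod
    def perceive(self, stimulus): ...
    @abstractmethod
    def believe(self, proposition): ...

class CarbonBrain(Mind):
    def perceive(self, stimulus):
        return f"neurons firing in response to {stimulus}"
    def believe(self, proposition):
        return f"synaptic encoding of '{proposition}'"

class CybertronianProcessor(Mind):
    def perceive(self, stimulus):
        return f"sensor array registering {stimulus}"
    def believe(self, proposition):
        return f"memory core storing '{proposition}'"

# For the functionalist, anything that satisfies the interface counts:
for mind in (CarbonBrain(), CybertronianProcessor()):
    print(mind.perceive("a rose"), "|", mind.believe("roses smell sweet"))
```

  On this picture, a silicon Transformer and a carbon-based human could both realize the same mental functions, just as two very different classes can satisfy the same interface.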