What is cybernetics?

Cybernetics is the interdisciplinary study of the structure of regulatory systems. It is closely related to control theory and systems theory. Both in its origins and in its evolution in the second half of the 20th century, cybernetics is equally applicable to physical and social (that is, language-based) systems.

Contemporary cybernetics began in the 1940s as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology; its origin is often attributed to the Macy Conferences.

Friday, March 20, 2009

Simulated reality

Simulated reality is the proposition that reality could be simulated—perhaps by computer simulation—to a degree indistinguishable from "true" reality. It could contain conscious minds which may or may not know that they are living inside a simulation. In its strongest form, the "simulation hypothesis" claims it is possible and even probable that we are actually living in such a simulation.

This is different from the current, technologically achievable concept of virtual reality. Virtual reality is easily distinguished from the experience of "true" reality; participants are never in doubt about the nature of what they experience. Simulated reality, by contrast, would be hard or impossible to distinguish from "true" reality.

The idea of a simulated reality raises several questions:

  • Is it possible, even in principle, to tell whether we are in a simulated reality?
  • Is there any difference between a simulated reality and a "real" one?
  • How should we behave if we knew that we were living in a simulated reality?

Types of simulation

Brain-computer interface

In a brain-computer interface simulation, each participant enters from outside, directly connecting their brain to the simulation computer. The computer transfers sensory data to them and reads their desires and actions back; in this manner they interact with the simulated world and receive feedback from it. The participant may even be adjusted so as to temporarily forget that they are inside a virtual realm (e.g. "passing through the veil"). While inside the simulation, the participant's consciousness is represented by an avatar, which could look very different from the participant's actual appearance.

Simulation-brain communications

To communicate effectively with the brain, a code or protocol would have to be created or discovered for sending information to and from the parts of the brain that hear and talk.

Virtual people

In a virtual-people simulation, every inhabitant is a native of the simulated world. They do not have a "real" body in the external reality. Rather, each is a fully simulated entity, possessing an appropriate level of consciousness that is implemented using the simulation's own logic (i.e. using its own physics). As such, they could be downloaded from one simulation to another, or even archived and resurrected at a later date. It is also possible that a simulated entity could be moved out of the simulation entirely by means of mind transfer into a synthetic body. Another way of getting an inhabitant of the virtual reality out of its simulation would be to "clone" the entity, by taking a sample of its virtual DNA and creating a real-world counterpart from that model. The result would not bring the "mind" of the entity out of its simulation, but its body would be born in the real world.

This category subdivides into two further types:

  • Virtual people-virtual world, in which an external reality is simulated separately from the artificial consciousnesses;
  • Solipsistic simulation, in which consciousness is simulated and the "world" participants perceive exists only within their minds.


Emigration

In an emigration simulation, the participant enters the simulation from the outer reality, as in the brain-computer interface simulation, but to a much greater degree. On entry, the participant uses mind transfer to temporarily relocate their mental processing into a virtual person. After the simulation is over, the participant's mind is transferred back into their outer-reality body, along with all new memories and experiences gained within (as in the movie The Thirteenth Floor, or when one flatlines in Neuromancer).

Also worth mentioning is the possibility of a completely virtual person (born in the simulation) somehow becoming self-aware (after "waking up"), seeking to escape the simulation, and eventually succeeding in being transferred into an outer-reality person (transcendent to the simulated world). This possibility can be connected to Gurdjieff's teaching in the Fourth Way that "humans are not born with a soul. Rather, a man must create a soul through the course of his life".

This "creation of a soul" for a (by its nature soulless) virtual person (a part of the Program) would ultimately mean exiting (emigrating from) the simulation and being transformed, on exit, into a real (outer-reality) person, assuming the outer reality is a realm of Spirit. The (right) "course of life" in the simulation would then be only the preparation for that final act of emigration (the transfer and its accompanying transformation).

In this case, since the emigrating inhabitant of the simulation did not have an associated outer-reality person (a user with a "real body"), this virtual person would be transferred either into a new outer-reality person (assuming that is possible) or into an already existing one, who may or may not be a player of the simulation. If that outer-reality person is a player, then as a user he would previously have been associated with some other inhabitant of the simulated world. On taking over (or merging with) the special emigrating inhabitant, he could choose to destroy his old inhabitant, or abandon it (leaving it in the simulated world without a user, temporarily or permanently). Alternatively, if he wished to keep playing the simulation through that same old inhabitant, he would now do so as a 'transformed' user: 'enriched' with the emigrated virtual person, or even completely being that formerly virtual person (if that were chosen and possible), and as such continuing to play the simulation using a 'new' virtual person.

The outer-reality person (whose self is transcendent to the simulated world) may be something completely indescribable from the standpoint of the simulated world; but as a self (a soul), it essentially emanates from the Spirit, with a personality that manifests the Spirit.


Intermingled

Morpheus teaches Neo inside a small simulated reality

An intermingled simulation supports both types of consciousness: "players" from the outer reality who are visiting (as a brain-computer interface simulation) or emigrating, and virtual-people who are natives of the simulation and hence lack any physical body in the outer reality.

The Matrix movies feature an intermingled type of simulation: they contain not only human minds (with their physical bodies remaining outside), but also sentient software programs that govern various aspects of the computed realm.


We are living in a simulation

Nick Bostrom's argument

The philosopher Nick Bostrom investigated the possibility that we may be living in a simulation. A simplified version of his argument proceeds as follows:

i. It is possible that a civilization could create a computer simulation which contains individuals with artificial intelligence.
ii. Such a civilization would likely run many of these simulations, perhaps billions (for entertainment, for research, etc.).
iii. A simulated individual inside the simulation wouldn’t necessarily know that it’s inside a simulation—it’s just going about its daily business in what it considers to be the "real world."

Then the ultimate question is: if one accepts that theses i, ii, and iii are at least possible, which of the following is more likely?

a. We are the one civilization which develops AI simulations and happens not to be in one itself; or,
b. We are one of the many (perhaps billions of) simulations that have been run. (Remember point iii.)
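The probabilistic core of this step can be made concrete with a toy calculation. The sketch below is illustrative only; the population figures are hypothetical assumptions, not Bostrom's numbers.

```python
# Toy model of the simulation argument's final step (illustrative only;
# the figures passed in below are hypothetical assumptions).
def fraction_simulated(real_civilizations, sims_per_civilization,
                       observers_per_world):
    """Fraction of all observers who live inside a simulation, assuming
    every world (real or simulated) holds the same number of observers."""
    real_observers = real_civilizations * observers_per_world
    simulated_observers = (real_civilizations * sims_per_civilization
                           * observers_per_world)
    return simulated_observers / (real_observers + simulated_observers)

# With even one simulation per civilization, half of all observers are
# simulated; with a billion simulations, almost all of them are.
print(fraction_simulated(1, 1, 10**10))      # 0.5
print(fraction_simulated(1, 10**9, 10**10))  # very close to 1
```

The point the sketch makes is structural: once simulations outnumber base realities, a randomly chosen observer is overwhelmingly likely to be simulated.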

In greater detail, his argument attempts to prove a trichotomy, that:

  1. intelligent races will never reach a level of technology where they can run simulations of reality so detailed they can be mistaken for reality (or this is impossible in principle); or
  2. races who do reach such a level do not tend to run such simulations; or
  3. we are almost certainly living in such a simulation.

Bostrom's argument rests on the premise that, given sufficiently advanced technology, it is possible to simulate on a computer entire inhabited planets, larger habitats, or even entire universes (as quantum simulations in time/space pockets), including all the people on them, and that simulated people can be fully conscious and are as much persons as non-simulated people.

A particular case in the original paper considers the scenario in which we assume that the human race could reach such a technological level without destroying itself in the process (i.e. we deny the first hypothesis), and that once we reached such a level we would still be interested in history, the past, and our ancestors, and that there would be no legal or moral strictures on running such simulations (we deny the second hypothesis). It then follows that

  • it is likely that we would run a very large number of so-called ancestor simulations to study our past;
  • and that, by the same line of reasoning, many of these simulations would in turn run other sub-simulations, and so on;
  • and that given the fact that right now it is impossible to tell whether we are living in one of the vast number of simulations or the original ancestor universe, the likelihood is that the former is true.

Assumptions as to whether the human race (or another intelligent species) could reach such a technological level without destroying itself depend greatly on the values of the terms in the Drake equation, which gives the number of intelligent technological species communicating via radio in a galaxy at any given point in time. The expanded equation looks to the number of posthuman civilizations that would ever exist in any given universe. If the average over all universes, real or simulated, is at least one such civilization per universe's entire history, then the odds are overwhelmingly in favor of the proposition that the average civilization is in a simulation, assuming that such simulated universes are possible and that such civilizations would want to run such simulations.

Frank J. Tipler's Omega Point

Physicist Prof. Frank J. Tipler envisages a scenario similar to Nick Bostrom's argument, one that Tipler maintains is a physically required cosmological scenario in the far future of the universe: as the universe comes to an end in a solitary singularity during the Big Crunch, its computational capacity accelerates exponentially faster than the remaining time runs out. In principle, a simulation run on this universe-computer can thus continue forever in its own terms, even though proper time lasts only a finite duration.
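The mathematical point underlying this scenario is that infinitely many computational steps can fit inside a finite span of proper time, provided each step takes geometrically less time than the last. A minimal numerical illustration (a toy model, not Tipler's actual cosmology):

```python
# Toy model: if step n of the universe-computer takes (1/2)**n seconds
# of proper time, the total proper time converges to 2 seconds, even
# though the number of steps (subjective, "experienced" time) grows
# without bound.
def proper_time(steps):
    """Proper time consumed by the first `steps` computational steps."""
    return sum(0.5 ** n for n in range(steps))

for steps in (10, 100, 1000):
    print(steps, proper_time(steps))  # approaches 2.0, never exceeds it
```

From the inside, each step is one "moment" of experience, so the inhabitants of the simulation experience unbounded time while the outer universe spends only two seconds of proper time.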

Prof. Tipler identifies this final singularity and its state of infinite information capacity with God. According to Prof. Tipler and Prof. David Deutsch, the implication of this theory for present-day humans is that this ultimate cosmic computer will essentially be able to resurrect everyone who has ever lived, by recreating all possible quantum brain states within the master simulation, somewhat reminiscent of the resurrection ideas of Nikolai Fyodorovich Fyodorov. This would manifest as a simulated reality. From the perspective of the inhabitant, the Omega Point represents an infinite-duration afterlife, which could take any imaginable form due to its virtual nature. At first glance, Tipler's hypothesis requires some means by which the inhabitants of the far future can recover historical information in order to reincarnate their ancestors into a simulated afterlife. However, if they really have access to infinite computing power, that is no problem at all: they can simply simulate "all possible worlds". (This line of thought is continued in Platonic simulation theories.) Tipler's argument can also be intertwined with Nick Bostrom's aforementioned argument from probability: if the Omega Point simulates an infinite number of virtual worlds, then it is infinitely more likely that our reality is one of those simulated worlds rather than the lone real world that created the Omega Point.

Prof. Tipler's Omega Point Theory is predicated on an eventual Big Crunch, thought by some to be an unlikely scenario by virtue of a number of recent astronomical observations. Tipler has recently amended his views to accommodate an accelerating universe due to a positive cosmological constant. He proposes baryon tunneling as a means of propelling interstellar spacecraft. He states that if the baryons in the universe were to be annihilated by this process, then this would force the Higgs field toward its absolute vacuum, cancelling the positive cosmological constant, stopping the acceleration, and allowing the universe to collapse into the Omega Point.

Computationalism & Platonic simulation theories

Computationalism is a theory in the philosophy of mind stating that cognition is a form of computation. It is relevant to the Simulation Hypothesis in that it illustrates how a simulation could contain conscious subjects, as required by a "virtual people" simulation. For example, it is well known that physical systems can be simulated to some degree of accuracy. If computationalism is correct, and if there is no problem in generating artificial consciousness from cognition, it would establish the theoretical possibility of a simulated reality. However, the relationship between cognition and phenomenal consciousness is disputed. It is possible that consciousness requires a substrate of "real" physics, and that simulated people, while behaving appropriately, would be philosophical zombies. This would also seem to negate Nick Bostrom's simulation argument; we cannot be inside a simulation, as conscious beings, if consciousness cannot be simulated. However, we could still be within a simulation and yet be envatted brains. This would allow us to exist as conscious beings within a simulated environment, even if a simulated environment could not simulate consciousness.

Some theorists have argued that if the "consciousness-is-computation" version of computationalism and mathematical realism (also known as mathematical Platonism) are both true, then our consciousness must be inside a simulation. This argument states that a "Plato's heaven" or ultimate ensemble would contain every algorithm, including those which implement consciousness. Platonic simulation theories are also subsets of the multiverse theories and theories of everything.


Dreaming

A dream could be considered a type of simulation capable of fooling someone who is asleep. As a result the "dream hypothesis" cannot be ruled out, although it has been argued that common sense and considerations of simplicity rule against it. One of the first philosophers to question the distinction between reality and dreams was Zhuangzi, a Chinese philosopher from the 4th century BC. He phrased the problem as the well-known "Butterfly Dream," which went as follows:

Once Zhuangzi dreamt he was a butterfly, a butterfly flitting and fluttering around, happy with himself and doing as he pleased. He didn't know he was Zhuangzi. Suddenly he woke up and there he was, solid and unmistakable Zhuangzi. But he didn't know if he was Zhuangzi who had dreamt he was a butterfly, or a butterfly dreaming he was Zhuangzi. Between Zhuangzi and a butterfly there must be some distinction! This is called the Transformation of Things. (2, tr. Burton Watson 1968:49)

The philosophical underpinnings of this argument are also brought up by Descartes, who was one of the first Western philosophers to do so. In Meditations on First Philosophy, he states "... there are no certain indications by which we may clearly distinguish wakefulness from sleep", and goes on to conclude that "It is possible that I am dreaming right now and that all of my perceptions are false".

Chalmers (2003) discusses the dream hypothesis, and notes that this comes in two distinct forms:

  • that he is currently dreaming, in which case many of his beliefs about the world are incorrect;
  • that he has always been dreaming, in which case the objects he perceives actually exist, albeit in his imagination.

Both the dream argument and the simulation hypothesis can be regarded as skeptical hypotheses. However, in raising these doubts, just as Descartes noted that his own thinking led him to be convinced of his own existence, the existence of the argument itself is testament to the possibility of its own truth.

Another state of mind in which an individual's perceptions have no physical basis in the real world is called psychosis.

Computability of physics

A decisive refutation of any claim that our reality is computer-simulated would be the discovery of some uncomputable physics, because if reality is doing something no computer can do, it cannot be a computer simulation. As it stands, however, known physics is held to be computable.

The objection could be made that the simulation does not have to run in "real time". But this misses an important point: the shortfall is not linear; rather, it is a matter of performing an infinite number of computational steps in a finite time. This objection does not apply if the hypothetical simulation is being run on a hypercomputer, a machine more powerful than a Turing machine. Unfortunately, there is no way of working out whether computers running a simulation are capable of doing things that computers in the simulation cannot do. No one has shown that the laws of physics inside a simulation and those outside it have to be the same, and simulations of different physical laws have been constructed. The problem now is that there is no evidence that can conceivably be produced to show that the universe is not any kind of computer, making the Simulation Hypothesis unfalsifiable and therefore scientifically unacceptable, at least by Popperian standards.

CantGoTu Environments

The concept of a CantGoTu Environment takes the ideas embedded in the Diagonal Argument of George Cantor, the Undecidability theorems of Kurt Gödel, and the limits of computability highlighted by Alan Turing, and applies them to Virtual Reality environments. The argument is set out in The Fabric of Reality (1997) by David Deutsch, and runs thus:

Imagine a computer built to render every possible Virtual Reality. Suppose all possible environments produced by this generator can be laid out sequentially, as Environment 1, Environment 2, etc. Take time slices through each of these of equal duration. (Deutsch specifies one minute, but this could, in principle, be anything, e.g. the Planck time.) Now construct a new environment as follows. In the first time period, generate in the environment anything which is different from Environment 1; in the second time period, anything different from Environment 2; and so on. This new environment cannot be found in the sequential layout of environments specified earlier, as it differs from every listed environment at one particular time slice. Hence no such universal VR generator can be created, and there are environments (infinitely many of them) which can never be rendered by any means.
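Deutsch's construction is Cantor's diagonal argument applied to environments. A toy sketch, modelling an environment as a function from time-slice index to a "scene" (the particular enumeration below is hypothetical, chosen only to make the diagonal step concrete):

```python
# Cantor-style diagonalization over environments. An environment is
# modelled as a function from time-slice index t to an integer "scene".
# Hypothetical enumeration: environment i shows scene (i + t) at slice t.
def environment(i):
    return lambda t: i + t

# Diagonal environment: at slice t, show something *different* from what
# environment t shows at slice t.
def diagonal(t):
    return environment(t)(t) + 1  # guaranteed to differ at slice t

# The diagonal environment disagrees with every enumerated environment
# at one time slice, so it appears nowhere in the enumeration.
for t in range(5):
    assert diagonal(t) != environment(t)(t)
```

Whatever enumeration the universal generator uses, the same one-line diagonal step produces an environment it cannot render, which is the whole of Deutsch's argument.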

[Yet if all possible virtual-reality initial conditions have been simulated, and it is still possible to create a reality that plays out differently from those already created (despite starting at an initial condition common to one already in existence), then that extra environment must obey slightly different cause-and-effect laws, or else it would simply play out in the same way as one of those already simulated. This implies that Deutsch's argument is only valid if the laws governing each virtual reality may differ: they would have to allow inconsistencies, such as objects suddenly disappearing or appearing out of nowhere, every time an environment transitions from one time slot to the next. If instead one simply assumes that there are infinitely many possible initial conditions, varying by infinitesimally small amounts, then (even if all follow the same laws) there will be infinitely many possible virtual realities that could be generated, which leads to the same conclusion as Deutsch's.]

However, later on in the book, Deutsch goes on to argue for a very strong version of the Turing principle, namely: "It is possible to build a virtual reality generator whose repertoire includes every physically possible environment."

However, in order to include every physically possible environment, the computer would have to include a full simulation of the environment containing itself. Even so, a computer running a simulation need not run every possible physical moment to be plausible to its inhabitants.

Computational load

Virtual people

As of 2007, the computational requirements for molecular dynamics are such that it takes several months of computing time on the world's fastest computers to simulate 1/10th of one second of the folding of a single protein molecule. To simulate an entire galaxy would require more computing power than can presently be envisioned, assuming that no shortcuts are taken when simulating areas that nobody is observing.

In answer to this objection, Bostrom calculated that simulating the brain functions of all humans who have ever lived would require roughly 10^33 to 10^36 calculations. He further calculated that a planet-sized computer built using known nanotechnological methods would perform about 10^42 calculations per second, and a planet-sized computer is not inherently impossible to build (although the speed of light could severely constrain the speed at which its subprocessors share data). In any case, a simulation need not compute every single molecular event that occurs inside it; it may only process events that its participants can actively perceive. This is particularly the case if the simulation contained only a handful of people; far less processing power would be needed to make them believe they were in a "world" much larger than was actually the case.
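The arithmetic behind Bostrom's reply is straightforward, using the figures quoted above:

```python
# Figures from Bostrom's estimate as quoted above: simulating the brain
# activity of every human who has ever lived takes roughly 1e33 to 1e36
# operations, while a planet-sized nanotech computer would manage about
# 1e42 operations per second.
total_ops_high = 10**36   # high end of the estimate
ops_per_second = 10**42
seconds_needed = total_ops_high / ops_per_second
print(seconds_needed)  # 1e-06: a microsecond, even for the high estimate
```

In other words, on Bostrom's numbers a single such computer could replay the entire mental history of humanity a million times per second, which is why the computational-load objection carries little weight against the argument.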

Brain-computer interface

Some have argued that a dream is a reality being simulated for certain parts of the dreamer's brain by other parts of the dreamer's brain, possibly showing that a 'computer' less powerful than a whole human brain can simulate quite believable realities for the senses. Similar arguments would apply to vivid recollections, imaginings, and especially hallucinations. However, all of these are usually less vivid than waking life and need not consistently obey the laws of physics, which our world does; that constraint presumably requires more computational power. (Another point some have made about hallucinations is that they cannot be interacted with in a rich, vivid way requiring simulation of multiple senses, possibly because the brain knows it does not have the computing power to support such interaction.)

Additionally, it's possible that the parts of our brains that question the validity of a situation are impaired when we sleep. The believability of a simulation is an important influence on the results it generates.

Validity of the arguments

In any case, it is perhaps erroneous to apply our current sense of feasibility to projects undertaken in an outer reality, where resources and physical laws may be very different. The objection also assumes that the designers would need to simulate reality at a level of detail beyond our natural senses.

Also, a simulated reality need not run in real time. The inhabitants of a simulated universe would have no way of knowing that one day of subjective time actually required much longer to calculate in their host computer, or vice versa. Isaac Asimov pushed the limits of this by claiming that, unbeknownst to the inhabitants, the simulation could even run backwards, or in pieces on different computers, or with a million generations of monks working weekends on abacuses, all without the simulation missing a beat 'in simulation time'.

Nested simulations

The existence of simulated reality is unprovable in any concrete sense: any "evidence" that is directly observed could be another simulation itself. In other words, there is an infinite regress problem with the argument. Even if we are a simulated reality, there is no way to be sure the beings running the simulation are not themselves a simulation, and the operators of that simulation are not a simulation, ad infinitum. Given the premises of the simulation argument, any reality, even one running a simulation, has no better or worse a chance of being a simulation than any other.

Occam's razor

It has been noted that there is no definitive way to tell whether one is in a simulation. It is generally the case that any number of hypotheses can explain the same evidence. This situation often prompts the use of a heuristic rule called Occam's razor, which prefers simpler explanations over more complex ones, and is often implicated in skeptical criticisms of far-fetched hypotheses.

Since it is a heuristic rule and not a natural law, it is not an infallible guide to what is ultimately true, but only to what is usually best to believe, all other things being equal. If we assume Occam's razor applies, it tells us to reject simulated reality as too complex an explanation, in favor of reality being what it appears to be.

Scientific and technological approaches

Software Bugs

A computed simulation may have voids or other errors that manifest inside it. A simple example: when the "hall of mirrors" effect occurs in the first-person shooter Doom, the game attempts to display "nothing" and inevitably fails to do so. If a void can be found and tested, and if the observers survive its discovery, it may reveal the underlying computational substrate. However, lapses in physical law could be attributed to other explanations, for instance an inherent instability in the nature of reality.

In fact, bugs could be very common. An interesting question is whether knowledge of bugs or loopholes in a sufficiently powerful simulation would be instantly erased the moment it was observed, since presumably all thoughts and experiences in a simulated world could be carefully monitored and altered. This would, however, require enormous processing capability in order to monitor billions of people simultaneously. If this is the case, we would never be able to act on the discovery of bugs. Indeed, any simulation sufficiently determined to protect its existence could erase any proof that it was a simulation whenever such proof arose, provided it had the enormous capacity necessary to do so.

To take this argument to an even greater extreme, a sufficiently powerful simulation could make its inhabitants think that erasing proof of its existence is difficult. This would mean that the computer actually has an easy time of erasing glitches, but we all think that changing reality requires great power.

Hidden messages or "Easter eggs"

The simulation may contain secret messages or exits, placed there by the designer, or by other inhabitants who have solved the riddle, in the way that computer games and other media sometimes do. People have already spent considerable effort searching for patterns or messages within the endless decimal places of the fundamental constants such as e and pi. In Carl Sagan's science fiction novel Contact, Sagan contemplates the possibility of finding a signature embedded in pi (in its base-11 expansion) by the creators of the universe.

However, if such messages have been found, they have not been made public, and the argument relies on the messages being truthful. As usual, other hypotheses could explain the same evidence. In any case, if the digits of such constants are effectively random, then at some point an apparently meaningful message will appear in them (compare the infinite monkey theorem), not necessarily because anyone placed it there.
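Such a search is easy to attempt on a small scale. The sketch below computes decimal digits of pi using Machin's formula with scaled-integer arithmetic and scans them for a striking pattern; the run of six consecutive 9s it finds within the first thousand decimals (the so-called Feynman point) looks meaningful but, as far as anyone knows, was placed there by no one:

```python
# Compute d decimal digits of pi via Machin's formula,
# pi = 16*arctan(1/5) - 4*arctan(1/239), using scaled integers.
def pi_digits(d):
    one = 10 ** (d + 10)           # 10 guard digits absorb rounding drift
    def arctan_inv(x):             # arctan(1/x), scaled by `one`
        total, term, n, sign = 0, one // x, 0, 1
        while term:
            total += sign * (term // (2 * n + 1))
            term //= x * x
            n += 1
            sign = -sign
        return total
    pi = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi)[:d + 1]         # "3" followed by d decimal digits

digits = pi_digits(1000)
# Scan for six consecutive 9s: present in the first thousand decimals,
# a pattern that "appears meaningful" yet arose with no author.
print(digits.find("999999"))
```

The same scanning approach works for any pattern and any computable constant; the infinite-monkey point above is that, in a long enough effectively random digit stream, every finite pattern will eventually turn up.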

The Easter-egg theory also assumes that a simulation would want to inform its inhabitants of its real nature; it may not. Conversely, if we consider that the human race will eventually be capable of creating intelligent programs (i.e. machines) living inside a virtual subspace of our "real" world, an interesting question is whether we would be able to prevent such sentient programs from discovering their artificial nature.

Processing power

A computer simulation would be limited to the processing power of its host computer, and so there may be aspects of the simulation that are not computed at a fine-grained (e.g. subatomic) level. This might show up as a limitation on the accuracy of information that can be obtained in particle physics.

However, this argument, like many others, assumes that accurate judgments about the simulating computer can be made from within the simulation. If we are being simulated, we might be misled about the nature of computers.

Taken one step further, the "fine-grained" elements of our world could themselves be simulated, since we never see subatomic particles directly, owing to our inherent physical limitations. To see such particles we rely on instruments which magnify or translate that information into a format our limited senses can view: a computer printout, the lens of a microscope, and so on. We therefore essentially take it on faith that these are an accurate portrayal of a fine-grained world which appears to exist in a realm beyond our natural senses. If the subatomic level could also be simulated, the processing power required to generate a realistic world would be greatly reduced.

Digital physics and cellular automata

In theoretical physics, digital physics holds the basic premise that the entire history of our universe is computable in some sense. The hypothesis was pioneered in Konrad Zuse's book Rechnender Raum (translated by MIT into English as Calculating Space, 1970), which focuses on cellular automata. Juergen Schmidhuber suggested that the universe could be a Turing machine, because there is a very short program that outputs all possible programs in an asymptotically optimal way. Other proponents include Edward Fredkin, Stephen Wolfram, and Nobel laureate Gerard 't Hooft. They hold that the apparently probabilistic nature of quantum physics is not incompatible with the notion of computability. A quantum version of digital physics has recently been proposed by Seth Lloyd. None of these suggestions has been developed into a workable physical theory.

It can be argued that the use of continua in physics constitutes a possible argument against the simulation of a physical universe. Removing the real numbers and uncountable infinities from physics would counter some of the objections noted above, and at least make computer simulation a possibility. However, digital physics must overcome these objections. For instance, cellular automata would appear to be a poor model for the non-locality of quantum mechanics.

Other issues

Non-player characters or "bots"

Some of the people in a simulated reality may be automatons, philosophical zombies, or 'bots' added to the simulation to make it more realistic or interesting or challenging. Indeed, it is conceivable that every person other than oneself is a bot. Bostrom called this a "me-simulation", in which oneself is the only sovereign lifeform, or at least the only inhabitant who entered the simulation from outside.

Bostrom further elaborated on the idea of bots:

In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or "shadow-people" – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much [computationally] cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience.

The idea of "zombies" has a well known corollary in the video game industry where computer generated characters are known as Non-Player Characters ("NPCs"). The term 'bots' is short for 'robots'. The usage originated as the name given to the simple AI opponents of modern video games.

Subjective time

A brain-computer interface simulated reality may be required to progress at near real time; that is, time within it may need to pass at approximately the same rate as the outer reality that contains it. This would be the case because the participants interact with the simulation using brains that still reside in the outer reality: if the simulation ran faster or slower, those brains could notice, because they are not contained within it.

Time may pass more slowly or quickly for brains in a dream state (i.e., in a brain-computer interface trance), but the point is that they still function at a finite, biological speed, and the simulation must keep pace with them, unless those interacting with the simulation are augmented to process information at the same rate as the simulation itself.

A virtual-people or emigration simulated reality, on the other hand, need not run in real time. Its inhabitants use the simulation's own physics to experience, think, and react. If the simulation were slowed down or sped up, so would the inhabitants' senses, brains, and muscles, along with every other molecule inside it. The inhabitants would perceive no change in the passage of time, simply because their means of measuring time depends on the very clock they seek to measure. (They could detect the change only with access to data from the outer reality.)

For that matter, they could not even detect whether the simulation had been halted entirely: a pause in the simulation would pause every life and mind within it. When the simulation resumed, the inhabitants would continue exactly as they were before the pause, completely unaware that (for example) their cosmos had been paused and archived for a billion years before being resumed by a completely different director. A simulation could also be created with its inhabitants already possessing memories, as though they had lived part of their lives before; such inhabitants could not tell the difference unless the simulation informed them of it. (Compare the five-minute hypothesis and Last Thursdayism.)

One practical implication of this is that a virtual-people or hybrid simulation does not require a computer powerful enough to model its entire cosmos at full speed. Because a universal computer can carry out any computation given enough time, the simulation can progress at whatever speed its host can manage; it is constrained by available memory, but not by computation rate.

Recursive simulations

A simulated reality could contain a computer that is running a simulated reality. The 'parent' simulator would be simulating all of the atoms of the computer, atoms which happen to be calculating a 'child' simulation. By way of illustration: imagine that a human is playing a game of The Sims in which one of the player's Sims (simulated people) is playing a computer game in the game. Alternatively, imagine a Java Runtime Environment running a virtual computer on a "real-world" computer that itself is located within a simulation.
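This nesting can be caricatured in a few lines of Python. It is only an illustrative toy (the `Simulation` class and `child_rate` parameter are invented here): the child world advances only when its parent spends cycles computing it, which is why a nested simulation runs slower than, or is smaller or simpler than, its host.

```python
class Simulation:
    """A toy world whose state is a tick counter plus, optionally,
    a nested child simulation that this world itself computes."""
    def __init__(self, name, child=None, child_rate=2):
        self.name, self.ticks = name, 0
        self.child, self.child_rate = child, child_rate

    def step(self):
        self.ticks += 1
        # The child's physics advance only when the parent spends
        # cycles on it: one child step per `child_rate` parent steps.
        if self.child and self.ticks % self.child_rate == 0:
            self.child.step()

inner = Simulation("child")
outer = Simulation("parent", child=inner, child_rate=2)
for _ in range(10):
    outer.step()
print(outer.ticks, inner.ticks)  # 10 5: the child runs at half its parent's rate
```

From inside `inner`, nothing is amiss: its own clock is the only clock it can consult.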

This recursion could continue through arbitrarily many levels: a simulation containing a computer running a simulation containing a computer running a simulation, and so on. The recursion is subject to constraints. Each 'nested' simulation must be:

  • smaller than its parent reality, because its own memory must be a subset of the parent's;

...and must be at least one of the following:

  • slower than its parent reality, because its own calculations must be a subset of the parent's; or
  • less complex than its parent reality, via simplifications of processes that are computationally intensive in the parent reality; or
  • less complete than its parent reality, via approximations of objects that nobody is observing.

The last of these is the basis of the idea that quantum uncertainties are circumstantial evidence that our own reality is a simulation. However, this assumes a finite limitation somewhere in the chain; given an infinite regress of simulations within simulations, there need not be any noticeable difference between the levels.

Simulated reality in fiction

Simulated reality is a theme that predates science fiction. In Medieval and Renaissance religious theatre, the concept of the world as a stage appears frequently. Works, early and contemporary, include:


  • Neuromancer (1984) and Mona Lisa Overdrive (1988) by William Gibson
  • Otherland (1998) by Tad Williams
  • Permutation City (1994) by Greg Egan
  • The Metamorphosis of Prime Intellect (1994) by Roger Williams
  • The Reality Bug, a novel by D. J. MacHale, is set on a world destroyed by simulated reality.
  • Realtime Interrupt, a novel by James P. Hogan, set in the near future, in a cyber reality with its creator trapped inside.
  • The Remnants series by K. A. Applegate is set on a ship which creates virtual landscapes.
  • Riverworld (1979) by Philip José Farmer
  • The Seventh Sally and The Princess Ineffabelle (from The Cyberiad) by Stanislaw Lem
  • Simulacron 3 (1964) by Daniel F. Galouye
  • Snow Crash (1992) by Neal Stephenson
  • Sophie's World (1991) by Jostein Gaarder
  • Words Made Flesh (1987) by Ramsey Dukes
  • "They", a 1941 short-story by Robert Heinlein, focuses on a man who believes the universe was created to deceive him.

Film, plays & TV series

  • .hack//SIGN, an anime series about a person whose mind is trapped in an online computer role-playing game.
  • Avalon by Mamoru Oshii
  • The Red Dwarf episodes "Better Than Life" and "Back to Reality", by Rob Grant and Doug Naylor.
  • The Big O by Hajime Yatate and Chiaki J. Konaka. N.B. the reality in question has not been confirmed as simulated, but it is extremely likely.
  • Brainscan by John Flynn
  • "The Cage" and "The Menagerie", the unaired pilot and later episodes (respectively) of Star Trek, screenplays by Gene Roddenberry.
  • Cube 2: Hypercube (2002) written by Sean Hood
  • The Gamekeeper, an episode of Stargate SG-1.
  • Danger Room, a training simulator from the X-Men universe.
  • Dark City by Alex Proyas, in which the sim is halted every night at midnight, rearranged, and then restarted. People are given false memories of lives different from those they led in the previous 24 hours, reminiscent of Last Thursdayism.
  • "The Deadly Assassin," an episode of Doctor Who written by Robert Holmes.
  • Die Another Day, a James Bond film in which the protagonist wears VR glasses that very closely reflect reality.
  • Eternal Family, a 1997 surreal comedy anime OVA.
  • eXistenZ by David Cronenberg, in which level switches occur so seamlessly and numerously that at the end of the movie it is difficult to tell whether the main characters are back in "reality".
  • Ghost in the Shell, a 1995 postcyberpunk anime film and series
  • Good Bye Lenin! by Wolfgang Becker, in which a Berlin family tries to make their frail mother believe that East Germany did not fall.
  • Aeon Flux took place in a cartoon world.
  • Harsh Realm, a short-lived TV series created by Chris Carter, set in a virtual world.
  • The Island, directed by Michael Bay.
  • Jacob's Ladder, a 1990 thriller film directed by Adrian Lyne
  • Lost Highway, a 1997 movie by David Lynch
  • Lyoko, the virtual world run by a supercomputer in the French animated series Code Lyoko.
  • The Matrix series by the Wachowski brothers
  • The Thirteenth Floor, a 1999 film directed by Josef Rusnak, loosely based on the novel Simulacron-3 by Daniel F. Galouye
  • Megazone 23 (1985-1989), an anime OVA series created by Noboru Ishiguro and Shinji Aramaki based on a simulated reality of Tokyo controlled by a super computer.
  • The Nines, a 2007 film which, unbeknownst to the viewer, is focused entirely on the subject of simulated reality.
  • Noein, a 24-episode anime directed by Kazuki Akane and Kenji Yasuda, in which a simulated reality is created.
  • Paranoia Agent by Satoshi Kon
  • Possible Worlds, both the play and the 2000 film adaptation of that play.
  • The Prisoner, the TV series.
  • Robotech: The Movie, a 1986 adaptation of Megazone 23.
  • "The Sentence", an episode of The Outer Limits television series.
  • Serial Experiments Lain, a 13 episode anime series by Chiaki J. Konaka.
  • "Ship in a Bottle", episode of Star Trek: The Next Generation, in which the fictional Professor Moriarty of Sir Arthur Conan Doyle's Sherlock Holmes stories is allowed to exist in a simulation of the world.
  • In the Star Trek fictional universe, particularly in and since the series Star Trek: The Next Generation, holodecks are simulators aboard starships and other facilities used for training and recreation.
  • Total Recall, a 1990 Paul Verhoeven film based on Philip K. Dick's story "We Can Remember It for You Wholesale".
  • Tron (1982) by Walt Disney Pictures
  • The Truman Show, in which the titular character unknowingly lives his entire life in a false reality created to make a voyeur television show about him
  • The Twilight Zone has featured a number of episodes involving false or simulated realities of some sort.
  • Vanilla Sky by Cameron Crowe (a remake of Abre los ojos by Alejandro Amenábar).
  • La vida es sueño (Life is a Dream), a Spanish play by Pedro Calderón de la Barca (1600-1681) that evolved from the legends of the early years of Siddhartha Gautama, the Buddha.
  • The X-Files has featured a number of episodes involving simulated realities of some sort.
  • Welt am Draht, a 1973 German film adaptation of Simulacron-3 by Rainer Werner Fassbinder.
  • Zegapain, a 2006 anime series.

Interactive fiction

  • A Mind Forever Voyaging by Steve Meretzky

Video games

  • .hack series.
  • Active Worlds
  • Assassin's Creed
  • Chrono Trigger
  • Creatures
  • Darwinia
  • Deus Ex
  • Digital Devil Saga
  • Eternal Sonata
  • Fallout 3
  • Final Fantasy X
  • Harvester
  • Metal Gear Solid 2: Sons of Liberty
  • Persona
  • Planescape: Torment
  • Second Life
  • Shadowrun
  • Shin Megami Tensei
  • Spore
  • Star Ocean: Till the End of Time
  • The Sims
  • The World Ends With You
  • There.com
  • Ultima (series), especially starting with Ultima V which simulated people's daily activities using a schedule, which was novel at the time.
  • Xenosaga (series)

Technological singularity

According to Ray Kurzweil, a logarithmic graph of 15 separate lists of paradigm shifts for key events in human history shows an exponential trend. The lists were prepared by, among others, Carl Sagan, Paul D. Boyer, Encyclopædia Britannica, the American Museum of Natural History, and the University of Arizona, and compiled by Kurzweil.

The technological singularity is a theoretical future point of unprecedented technological progress—typically associated with advancements in computer hardware or the ability of machines to improve themselves using artificial intelligence.

Statistician I. J. Good first wrote of an "intelligence explosion", suggesting that if machines could even slightly surpass human intellect, they could improve their own designs in ways unforeseen by their designers, and thus recursively augment themselves into far greater intelligences. The first such improvements might be small, but as the machine became more intelligent it would become better at becoming more intelligent, which could lead to an exponential and quite sudden growth in intelligence.
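Good's feedback loop can be sketched in a few lines: if each design cycle improves the machine in proportion to its current intelligence, the growth compounds exponentially. The model below is a deliberately crude toy, and the `gain` parameter is an arbitrary assumption, not anything drawn from Good.

```python
def intelligence_trajectory(i0=1.0, gain=0.1, cycles=50):
    """Toy model: each cycle, the machine improves itself by a fixed
    fraction of its current intelligence, so being smarter makes the
    next improvement larger."""
    levels = [i0]
    for _ in range(cycles):
        levels.append(levels[-1] * (1 + gain))
    return levels

traj = intelligence_trajectory()
print(traj[10] / traj[0])  # compound growth: (1.1)**10, roughly 2.59x
```

Linear self-improvement (a fixed increment per cycle) would grow only arithmetically; it is the proportionality to current intelligence that produces the "explosion".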

Vernor Vinge later called this event "the Singularity" as an analogy between the breakdown of modern physics near a gravitational singularity and the drastic change he argues society would undergo following an intelligence explosion. In the 1980s, Vinge popularized the singularity in lectures, essays, and science fiction. More recently, some prominent technologists, such as Bill Joy, co-founder of Sun Microsystems, have voiced concern over the potential dangers of Vinge's singularity (Joy 2000). Following its introduction in Vinge's stories, particularly Marooned in Realtime and A Fire Upon the Deep, the singularity has also become a common plot element in science fiction.

Others, most prominently Ray Kurzweil, define the singularity as a period of extremely rapid technological progress. Kurzweil argues such an event is implied by a long-term pattern of accelerating change that generalizes Moore's Law to technologies predating the integrated circuit and which he argues will continue to other technologies not yet invented. Critics of Kurzweil's interpretation consider it an example of static analysis, citing particular failures of the predictions of Moore's Law.

Robin Hanson proposes that multiple "singularities" have occurred throughout history, dramatically affecting the growth rate of the economy. Like the agricultural and industrial revolutions of the past, the technological singularity would increase economic growth between 60 and 250 times. An innovation that allowed for replacement of virtually all human labor could trigger this singularity.

Critics allege that the singularity concept does not take into account increased energy resource usage by the new technologies, or the current physical (atomic) limits in electronic components miniaturization. However, by its nature, the theory implies the creation of currently unknown technologies and relies on the concept of improvements in one field affecting another — an event paralleled in the industrial revolution.

Intelligence explosion

Good (1965) speculated on the consequences of machines smarter than humans:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Mathematician and author Vernor Vinge greatly popularized Good’s notion of an intelligence explosion in the 1980s, calling the creation of the first ultraintelligent machine the Singularity. Vinge first addressed the topic in print in the January 1983 issue of Omni magazine. Vinge (1993) contains the oft-quoted statement, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended." Vinge refines his estimate of the time scales involved, adding, "I'll be surprised if this event occurs before 2005 or after 2030."

Vinge continues by predicting that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans that created them. "When greater-than-human intelligence drives progress," Vinge writes, "that progress will be much more rapid." This feedback loop of self-improving intelligence, he predicts, will cause large amounts of technological progress within a short period of time.

Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence. The means speculated to produce intelligence augmentation are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind transfer.

Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to advance the singularity, a choice addressed by Singularity Institute for Artificial Intelligence (2002). Hanson (1998) is also skeptical of human intelligence augmentation, writing that once one has exhausted the "low-hanging fruit" of easy methods for increasing human intelligence, further improvements will become increasingly difficult to find.

It is difficult to compare silicon-based hardware directly with neurons, but Berglas (2008) notes that computer speech recognition is approaching human capability, and that this capability seems to require about 0.01% of the volume of the brain. This analogy suggests that modern computer hardware is within a few orders of magnitude of the processing power of the human brain.

One other factor potentially hastening the singularity is the ongoing expansion of the community working on it, resulting from the increase in scientific research within developing countries.

Economic aspects

Dramatic changes in the rate of economic growth have occurred in the past because of some technological advancement. Based on population growth, the economy doubled every 250,000 years from the Paleolithic era until the Neolithic Revolution. This new agricultural economy began to double every 900 years, a remarkable increase. In the current era, beginning with the Industrial Revolution, the world’s economic output doubles every fifteen years, sixty times faster than in the agricultural era. If the rise of superhuman intelligences causes a similar revolution, one would expect the economy to double at least quarterly and possibly on a weekly basis.
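The figures above can be checked with simple arithmetic. The snippet below recomputes the speed-up ratios between eras and the annual growth rate implied by a fifteen-year doubling time:

```python
# Doubling times of economic output, in years, as stated above.
doubling = {"paleolithic": 250_000, "agricultural": 900, "industrial": 15}

# Speed-up between eras is the ratio of doubling times.
print(doubling["paleolithic"] / doubling["agricultural"])  # ~278x faster
print(doubling["agricultural"] / doubling["industrial"])   # 60x faster, as stated

# A fifteen-year doubling implies annual growth of 2**(1/15) - 1.
rate = 2 ** (1 / 15) - 1
print(f"{rate:.2%} per year")  # roughly 4.7% per year
```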

Machines capable of performing most mental and physical tasks as well as humans would cause a rise in wages for the jobs at which humans can still outperform machines. However, a sudden proliferation of humanlike machines would likely cause a net drop in wages, as humans compete with robots for jobs. Also, the wealth of the technological singularity may be concentrated in the hands of only a few. These wealthy few would be those who own the means of mass producing the intelligent robot workforce.

Potential dangers

Superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests AIs may simply eliminate the human race, and humans would be powerless to stop them.

Berglas (2008) argues that unlike man, a computer based intelligence is not tied to any particular body, which would give it a radically different world view. In particular, a software intelligence would essentially be immortal and so have no need to produce independent children that live on after it dies. It would thus have no evolutionary need for love.

Other oft-cited dangers include those commonly associated with molecular nanotechnology and genetic engineering. These threats are major issues for both singularity advocates and critics, and were the subject of Bill Joy's Wired magazine article "Why the future doesn't need us" (Joy 2000).

Bostrom (2002) discusses human extinction scenarios, and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

Moravec (1992) argues that although superintelligence in the form of machines may make humans in some sense obsolete as the top intelligence, there will still be room in the ecology for humans.

Eliezer Yudkowsky proposed that research be undertaken to produce friendly artificial intelligence in order to address the dangers. He noted that if the first real AI was friendly it would have a head start on self-improvement and thus might prevent other unfriendly AIs from developing. The Singularity Institute for Artificial Intelligence is dedicated to this cause. Bill Hibbard also addresses issues of AI safety and morality in his book Super-Intelligent Machines. However, Berglas (2008) notes that there is no direct evolutionary motivation for an AI to be friendly to man.

Isaac Asimov’s Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI. The laws are intended to prevent artificially intelligent robots from harming humans. In Asimov’s stories, any perceived problems with the laws tend to arise from a misunderstanding on the part of some human operator; the robots themselves are merely acting on their best interpretation of their rules. In the 2004 film I, Robot, a possibility is explored in which AI takes complete control over humanity for the purpose of protecting humanity from itself. (The movie was based loosely on Asimov's stories; the aspect of machines taking over bears closer resemblance to Karel Čapek's play R.U.R., which introduced the word robot.) In 2004, the Singularity Institute launched an Internet campaign called 3 Laws Unsafe to raise awareness of AI safety issues and the inadequacy of Asimov’s laws in particular (Singularity Institute for Artificial Intelligence 2004).

Many Singularitarians consider nanotechnology to be one of the greatest dangers facing humanity. For this reason, they often believe seed AI (an AI capable of making itself smarter) should precede nanotechnology. Others, such as the Foresight Institute, advocate efforts to create molecular nanotechnology, claiming nanotechnology can be made safe for pre-singularity use or can expedite the arrival of a beneficial singularity.

Accelerating change

Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends from integrated circuits to earlier transistors, vacuum tubes, relays and electromechanical computers.

Various Kardashev scale projections through 2100. One results in a singularity.

Some singularity proponents argue its inevitability through extrapolation of past trends, especially those pertaining to shortening gaps between improvements to technology. In one of the first uses of the term "singularity" in the context of technological progress, Ulam (1958) tells of a conversation with John von Neumann about accelerating change:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

Hawkins (1983) writes that "mindsteps", dramatic and irreversible changes to paradigms or world views, are accelerating in frequency as quantified in his mindstep equation. He cites the inventions of writing, mathematics, and the computer as examples of such changes.

Ray Kurzweil's analysis of history concludes that technological progress follows a pattern of exponential growth, following what he calls The Law of Accelerating Returns. He generalizes Moore's Law, which describes geometric growth in integrated semiconductor complexity, to include technologies from far before the integrated circuit.

Whenever technology approaches a barrier, Kurzweil writes, new technologies will cross it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history" (Kurzweil 2001). Kurzweil believes that the singularity will occur before the end of the 21st century, setting the date at 2045 (Kurzweil 2005). His predictions differ from Vinge’s in that he predicts a gradual ascent to the singularity, rather than Vinge’s rapidly self-improving superhuman intelligence.

It follows that an artificial intelligence capable of improving on its own design is itself faced with a singularity. This idea is explored in Dan Simmons's novel Hyperion, in which a collection of artificial intelligences debate whether or not to make themselves obsolete by creating a new generation of "ultimate" intelligence.

The Acceleration Studies Foundation, an educational non-profit foundation founded by John Smart, engages in outreach, education, research and advocacy concerning accelerating change (Acceleration Studies Foundation 2007). It produces the Accelerating Change conference at Stanford University, and maintains the educational site Acceleration Watch.

Presumably, a technological singularity would lead to the rapid development of a Kardashev Type I civilization, one that has achieved mastery of the resources of its home planet (a Type II civilization, of its planetary system; a Type III, of its galaxy). Given that, depending on the calculation used, humanity will reach 0.7 on the Kardashev scale by 2040 or sooner, a technological singularity between now and then would push us rapidly past that threshold.
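Fractional Kardashev ratings like 0.7 come from Carl Sagan's interpolation formula, K = (log10(P) - 6) / 10, where P is the civilization's power use in watts. A quick sketch (the 2e13 W figure for present-day humanity is an approximate, assumed value):

```python
import math

def kardashev(power_watts):
    """Carl Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I ~ 1e16 W, Type II ~ 1e26 W, Type III ~ 1e36 W."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's power use is on the order of 2e13 W (assumed figure).
print(round(kardashev(2e13), 2))  # 0.73
print(kardashev(1e16))            # 1.0, i.e. Type I
```

Because the scale is logarithmic, moving from 0.7 to 1.0 requires roughly a five-hundredfold increase in power use, which is why only a drastic acceleration would cross it quickly.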


Some critics assert that no computer or machine will ever achieve human intelligence, while others do not rule out the possibility. Theodore Modis and Jonathan Huebner argue that the rate of technological innovation has not only ceased to rise but is actually declining; John Smart has criticized Huebner's analysis. One piece of evidence cited for this decline is that the rise in computer clock speeds is slowing, even as Moore's prediction of exponentially increasing circuit density continues to hold. Clock speed was once marketed as the main measure of processor performance, but that is no longer true: today's processors put their circuits to different, more efficient uses than pushing raw clock speed. A Core i7 at 2 GHz, for instance, is far more powerful than a Pentium 4 at 4 GHz.

Others propose that other "singularities" can be found through analysis of trends in world population, world GDP, and other indices. Andrey Korotayev and others argue that historical hyperbolic growth curves can be attributed to feedback loops that ceased to affect global trends in the 1970s, and thus hyperbolic growth should not be expected in the future.

In The Progress of Computing, William Nordhaus argued that, prior to 1940, computers followed the much slower growth of a traditional industrial economy, thus rejecting extrapolations of Moore's Law to 19th-century computers. Schmidhuber (2006) suggests differences in memory of recent and distant events create an illusion of accelerating change, and that such phenomena may be responsible for past apocalyptic predictions.

A recent study of patents per thousand persons suggests that human creativity does not show accelerating returns but, as Joseph Tainter argued in his seminal The Collapse of Complex Societies, a law of diminishing returns. The number of patents per thousand persons peaked in the period 1850–1900 and has been declining since. On this view, the growth of complexity eventually becomes self-limiting and leads to a widespread "general systems collapse". Thomas Homer-Dixon, in The Upside of Down: Catastrophe, Creativity and the Renewal of Civilization, argues that declining energy returns on investment have led to the collapse of civilizations. Jared Diamond, in Collapse: How Societies Choose to Fail or Succeed, likewise shows that cultures self-limit when they exceed the sustainable carrying capacity of their environment: the consumption of strategic resources (frequently timber, soil, or water) creates a deleterious positive feedback loop that leads eventually to social collapse and technological regression.

Popular culture

While discussing the singularity's growing recognition, Vinge (1993) writes that "it was the science-fiction writers who felt the first concrete impact." In addition to his own short story "Bookworm, Run!", whose protagonist is a chimpanzee with intelligence augmented by a government experiment, he cites Greg Bear's novel Blood Music (1983) as an example of the singularity in fiction. In William Gibson's 1984 novel Neuromancer, AIs capable of improving their own programs are strictly regulated by special "Turing police" to ensure they never exceed a certain level of intelligence, and the plot centers on the efforts of one such AI to circumvent their control. The 1994 novel The Metamorphosis of Prime Intellect features an AI that augments itself so quickly as to gain low-level control of all matter in the universe in a matter of hours. A more malevolent AI achieves similar levels of omnipotence in Harlan Ellison's short story "I Have No Mouth, and I Must Scream" (1967). William Thomas Quick's novels Dreams of Flesh and Sand (1988), Dreams of Gods and Men (1989), and Singularities (1990) present an account of the transition through the singularity; in the last of these, one character states that mankind's survival requires integration with the emerging machine intelligences, or it will be crushed under the dominance of the machines, which he calls the greatest risk to the survival of a species reaching this point (alluding to large numbers of other species that either passed or failed the test, although no actual contact with alien species occurs in the novels).

The singularity is sometimes addressed in fictional works to explain the event's absence. Neal Asher's Gridlinked series features a future where humans living in the Polity are governed by AIs and while some are resentful, most believe that they are far better governors than any human. In the fourth novel, Polity Agent, it is mentioned that the singularity is far overdue yet most AIs have decided not to partake in it for reasons that only they know. A flashback character in Ken MacLeod's 1998 novel The Cassini Division dismissively refers to the singularity as "the Rapture for nerds", though the singularity goes on to happen anyway.

Popular movies in which computers become intelligent and overpower the human race include Colossus: The Forbin Project, the Terminator series, I, Robot, and The Matrix series. The television series Battlestar Galactica also explores these themes.

Isaac Asimov expressed ideas similar to a post-Kurzweilian singularity in his short story "The Last Question". Asimov envisions a future in which a combination of strong artificial intelligence and post-humans consumes the cosmos, during a time Kurzweil describes as when "the universe wakes up", the last of his six stages of cosmic evolution as described in The Singularity Is Near. Post-human entities throughout various periods of the story ask the artificial intelligence how entropy death can be avoided. The AI responds that it lacks sufficient information to answer, until the end of the story, when it does arrive at a solution and demonstrates it by re-creating the universe from scratch, in godlike fashion, in order to fulfill its duty to answer the humans' question.

St. Edward's University chemist Eamonn Healy discusses accelerating change in the film Waking Life. He divides history into increasingly shorter periods, estimating "two billion years for life, six million years for the hominid, a hundred-thousand years for mankind as we know it". He proceeds to human cultural evolution, giving time scales of ten thousand years for agriculture, four hundred years for the scientific revolution, and one hundred fifty years for the industrial revolution. Information is emphasized as providing the basis for the new evolutionary paradigm, with artificial intelligence its culmination. He concludes we will eventually create "neohumans" which will usurp humanity’s present role in scientific and technological progress and allow the exponential trend of accelerating change to continue past the limits of human ability.

Accelerating progress features in some science fiction works, and is a central theme in Charles Stross's Accelerando. Other notable authors that address singularity-related issues include Karl Schroeder, Greg Egan, Ken MacLeod, David Brin, Iain M. Banks, Neal Stephenson, Tony Ballantyne, Bruce Sterling, Dan Simmons, Damien Broderick, Fredric Brown, Jacek Dukaj, Nagaru Tanigawa and Cory Doctorow. Another relevant work is Warren Ellis’ ongoing comic book series newuniversal.

In the episode "The Turk" of Terminator: The Sarah Connor Chronicles, John Connor mentions the singularity. The Terminator franchise is predicated on the concept of a human-designed computer system becoming self-aware and deciding to destroy humankind. It eventually achieves superintelligence.

In the film Screamers—based on Philip K. Dick's short story "Second Variety"—mankind's own weapons begin to design and assemble themselves. Self-replicating machines (here, the screamers) are often considered a significant prerequisite "final phase", almost a catalyst for the accelerating progress leading to a singularity. Interestingly, the screamers develop to a level where they will kill each other, and one even professes her love for a human. This idea is common in Dick's stories, which explore beyond the simplistic "man vs. machine" scenario in which our creations consider us a threat.

The feature-length documentary film Transcendent Man is based on Ray Kurzweil and his book The Singularity Is Near. The film documents Kurzweil's quest to reveal what he believes to be mankind's destiny.

On his album People of Earth, Dr. Steel has a song titled "The Singularity."

Sunday, March 15, 2009

Applications of artificial intelligence

Artificial intelligence has been used in a wide range of fields including medical diagnosis, stock trading, robot control, law, scientific discovery and toys. However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore." "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." In the late 1990s and early 21st century, AI technology became widely used as an element of larger systems, but the field is rarely credited for these successes.

Computer science

AI researchers have created many tools to solve the most difficult problems in computer science. Many of their inventions have been adopted by mainstream computer science and are no longer considered a part of AI. (See AI effect). According to Russell & Norvig (2003, p. 15), all of the following were originally developed in AI laboratories:

  • Time sharing
  • Interactive interpreters
  • Graphical user interfaces and the computer mouse
  • Rapid development environments
  • The linked list data type
  • Automatic storage management
  • Symbolic programming
  • Dynamic programming
  • Object-oriented programming
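One of the items above, the linked list, is simple enough to sketch directly. This minimal Python version is an illustration of the idea—nodes chained by references—not the original Lisp-era implementation:

```python
# Minimal sketch of the linked list data type credited above to early
# AI laboratories (it originated with the IPL and Lisp languages).
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def to_list(head):
    """Walk the chain and collect values into a Python list."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

chain = Node(1, Node(2, Node(3)))
print(to_list(chain))  # → [1, 2, 3]
```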


    Finance

    Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition.

    Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation.
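The "outside of the norm" idea can be sketched without a full neural network. The following toy example uses invented data and a simple statistical threshold—not a production fraud system—to flag charges that deviate sharply from an account's history:

```python
# Hypothetical sketch: flag charges whose amount deviates sharply from
# an account's historical mean, for human review. Real systems use
# trained neural networks over many features; this z-score rule only
# illustrates the "outside the norm" idea.
from statistics import mean, stdev

def flag_outliers(history, new_charges, threshold=3.0):
    """Return charges more than `threshold` std-devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    return [c for c in new_charges if abs(c - mu) > threshold * sigma]

history = [12.5, 40.0, 22.0, 35.0, 18.0, 27.5, 30.0, 25.0]
print(flag_outliers(history, [28.0, 950.0]))  # → [950.0]
```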


    Hospitals and medicine

    A medical clinic can use artificial intelligence systems to organize bed schedules, make staff rotations, and provide medical information.

    Artificial neural networks are also used for medical diagnosis (such as in the Concept Processing technology in EMR software), functioning as a kind of machine differential diagnosis.
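A machine differential diagnosis can be sketched as scoring candidate conditions against reported findings. The conditions, symptoms, and scoring rule below are invented for illustration and do not reflect how any particular EMR product works:

```python
# Hypothetical sketch of "machine differential diagnosis": score each
# candidate condition by how many of the patient's findings it matches.
# The condition/symptom table is invented and is not medical advice;
# real systems such as neural-network diagnosers learn these weights.
CONDITIONS = {
    "flu":     {"fever", "cough", "aches", "fatigue"},
    "cold":    {"cough", "sneezing", "sore throat"},
    "allergy": {"sneezing", "itchy eyes"},
}

def differential(symptoms):
    """Rank conditions by the fraction of their symptoms present."""
    scores = {name: len(s & symptoms) / len(s)
              for name, s in CONDITIONS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(differential({"fever", "cough", "fatigue"}))
```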

    Heavy industry

    Robots have become common in many industries. They are often given jobs that are considered dangerous to humans. Robots have proven effective in jobs that are very repetitive, where lapses in concentration may lead to mistakes or accidents, and in other jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan leads the world in the use and production of robots. In 1995, 700,000 robots were in use worldwide; over 500,000 of them were from Japan.


    Transportation

    Fuzzy logic controllers have been developed for automatic gearboxes in automobiles: the 2006 Audi TT, VW Touareg and VW Caravelle feature the DSP transmission, which utilizes fuzzy logic, and a number of Škoda variants (such as the Škoda Fabia) also include a fuzzy-logic-based controller.
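A fuzzy controller blends overlapping rules rather than switching on hard thresholds. The toy sketch below uses invented membership functions and gear rules—not any manufacturer's controller—to show the idea for gear selection:

```python
# Hypothetical fuzzy gear-selection sketch. The speed sets and the gear
# each rule recommends are invented for illustration.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def gear_for(speed_kmh):
    # Fuzzy sets for "slow", "cruising", and "fast" vehicle speed.
    slow   = tri(speed_kmh, -1, 0, 50)
    cruise = tri(speed_kmh, 30, 70, 110)
    fast   = tri(speed_kmh, 90, 140, 200)
    # Defuzzify: weighted average of the gear each rule recommends.
    rules = [(slow, 2), (cruise, 4), (fast, 6)]
    total = sum(w for w, _ in rules)
    return round(sum(w * g for w, g in rules) / total)

print(gear_for(20), gear_for(70), gear_for(140))
```

Because adjacent fuzzy sets overlap, intermediate speeds blend two rules smoothly instead of snapping between gears—the property that makes such controllers feel smooth in an automatic gearbox.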


    Telecommunications

    Many telecommunications companies make use of heuristic search to manage their workforces. For example, BT Group has deployed heuristic search in a scheduling application that produces the work schedules of 20,000 engineers.
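Heuristic scheduling of this kind can be sketched with a simple greedy rule. The example below uses invented jobs and engineer names—not BT's actual system—and assigns each job to the currently least-loaded engineer, longest jobs first:

```python
# Hypothetical sketch of heuristic workforce scheduling: greedily assign
# each job to the currently least-loaded engineer. Real schedulers are
# far more sophisticated; this only illustrates the heuristic idea.
import heapq

def schedule(jobs, engineers):
    """jobs: {name: hours}. Returns {engineer: [job, ...]}."""
    load = [(0, e) for e in engineers]       # (total hours, engineer)
    heapq.heapify(load)
    plan = {e: [] for e in engineers}
    # Longest-job-first is a classic greedy heuristic for balancing load.
    for job, hours in sorted(jobs.items(), key=lambda kv: -kv[1]):
        hours_so_far, eng = heapq.heappop(load)
        plan[eng].append(job)
        heapq.heappush(load, (hours_so_far + hours, eng))
    return plan

jobs = {"fault A": 4, "install B": 3, "survey C": 2, "repair D": 2}
print(schedule(jobs, ["eng1", "eng2"]))
```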

    Toys and games

    The 1990s saw some of the first attempts to mass-produce basic artificial intelligence for domestic education or leisure. This prospered greatly with the Digital Revolution and helped introduce people, especially children, to a life of dealing with various types of AI, specifically in the form of Tamagotchis and Giga Pets, the Internet (basic search-engine interfaces are one simple example), and the first widely released robot, Furby. A mere year later an improved type of domestic robot was released in the form of Aibo, a robotic dog with intelligent features and autonomy. AI has also been applied to video games.


    Music

    The evolution of music has always been affected by technology. With AI, scientists are trying to make the computer emulate the activities of a skillful musician. Composition, performance, music theory, and sound processing are some of the major areas on which research in music and artificial intelligence focuses.


    Aviation

    The Air Operations Division (AOD) uses AI for its rule-based expert systems. The AOD uses artificial intelligence for surrogate operators for combat and training simulators, mission management aids, support systems for tactical decision making, and post-processing of simulator data into symbolic summaries.

    The use of artificial intelligence in simulators is proving to be very useful for the AOD. Airplane simulators use artificial intelligence to process the data taken from simulated flights. Beyond simulated flying, there is also simulated aircraft warfare. The computers are able to come up with the best success scenarios in these situations, and can create strategies based on the placement, size, speed, and strength of the forces and counter-forces. Pilots may be given assistance in the air during combat by computers: AI programs can sort the information and provide the pilot with the best possible maneuvers, as well as discard maneuvers that would be impossible for a human to perform. Multiple aircraft are needed to get good approximations for some calculations, so computer-simulated pilots are used to gather data. These computer-simulated pilots are also used to train future air traffic controllers.

    The system used by the AOD to measure performance was the Interactive Fault Diagnosis and Isolation System (IFDIS), a rule-based expert system assembled by collecting information from TF-30 documents and expert advice from mechanics who work on the TF-30. The system was designed for the development of the TF-30 for the RAAF F-111C. The performance system was also used to replace specialized workers: it allowed regular workers to communicate with the system and avoid mistakes, miscalculations, or having to speak to one of the specialized workers.

    The AOD also uses artificial intelligence in speech recognition software. The air traffic controllers give directions to the artificial pilots, and the AOD wants the pilots to respond to the ATCs with simple responses. The programs that incorporate the speech software must be trained, which means they use neural networks. The program used, the Verbex 7000, is still a very early program with plenty of room for improvement. The improvements are imperative because ATCs use very specific dialog and the software needs to communicate correctly and promptly every time.

    The Artificial Intelligence supported Design of Aircraft (AIDA) is used to help designers in the process of creating conceptual aircraft designs. This program allows designers to focus more on the design itself and less on the design process; it also allows the user to focus less on the software tools. AIDA uses rule-based systems to compute its data. Although simple, the program is proving effective.

    In 2003, NASA's Dryden Flight Research Center, together with many other companies, created software that could enable a damaged aircraft to continue flight until a safe landing zone could be reached. The Intelligent Flight Control System was tested on an F-15 that was heavily modified by NASA. The software compensates for the damaged components by relying on the undamaged ones. The neural network used in the software proved to be effective and marked a triumph for artificial intelligence.

    The Integrated Vehicle Health Management system, also used by NASA on board aircraft, must process and interpret data taken from the various sensors on the aircraft. The system needs to be able to determine the structural integrity of the aircraft and to implement protocols in case of any damage taken by the vehicle.


    Other

    Neural networks are also being widely deployed in homeland security, speech and text recognition, data mining, and e-mail spam filtering.
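Spam filtering can be sketched as scoring a message against weighted indicator words. The words and weights below are invented for illustration; deployed filters (naive Bayes, neural networks) learn such weights from data rather than hard-coding them:

```python
# Hypothetical sketch of e-mail spam filtering as weighted keyword
# scoring. The vocabulary and weights are invented for illustration;
# real filters learn them from labeled mail.
SPAM_WEIGHTS = {"winner": 2.0, "free": 1.5, "prize": 2.0, "meeting": -1.5}

def is_spam(message, threshold=2.5):
    """Flag the message when its summed keyword weights cross the threshold."""
    words = message.lower().split()
    score = sum(SPAM_WEIGHTS.get(w, 0.0) for w in words)
    return score >= threshold

print(is_spam("You are a winner claim your free prize"))   # → True
print(is_spam("Agenda for the project meeting tomorrow"))  # → False
```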

    List of applications

    Typical problems to which AI methods are applied
    • Pattern recognition
      • Optical character recognition
      • Handwriting recognition
      • Speech recognition
      • Face recognition
    • Artificial Creativity

    • Computer vision, Virtual reality and Image processing
    • Diagnosis (artificial intelligence)
    • Game theory and Strategic planning
    • Game artificial intelligence and Computer game bot
    • Natural language processing, Translation and Chatterbots
    • Nonlinear control and Robotics
    Other fields in which AI methods are implemented
    • Artificial life
    • Automated reasoning
    • Automation
    • Biologically-inspired computing
    • Concept mining
    • Data mining
    • Knowledge representation
    • Semantic Web
    • E-mail spam filtering

    • Robotics
    • Behavior-based robotics
    • Cognitive robotics
    • Cybernetics
    • Developmental robotics
    • Epigenetic robotics
    • Evolutionary robotics
    • Hybrid intelligent system
    • Intelligent agent
    • Intelligent control
    • Litigation