What is cybernetics?


Cybernetics is the interdisciplinary study of the structure of regulatory systems. It is closely related to control theory and systems theory. Both in its origins and in its evolution in the second half of the 20th century, cybernetics is equally applicable to physical and social (that is, language-based) systems.

Contemporary cybernetics began as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology in the 1940s, often attributed to the Macy Conferences.

Monday, February 23, 2009

Cellular automaton

A cellular automaton (plural: cellular automata) is a discrete model studied in computability theory, mathematics, theoretical biology and microstructure modeling. It consists of a regular grid of cells, each in one of a finite number of states. The grid can be in any finite number of dimensions. Time is also discrete, and the state of a cell at time t is a function of the states of a finite number of cells (called its neighborhood) at time t − 1. These neighbors are a fixed selection of cells relative to the specified cell and do not change (the cell itself may be included in its own neighborhood, though it is not usually counted as one of its neighbors). Every cell has the same rule for updating, based on the values in this neighborhood. Each time the rules are applied to the whole grid, a new generation is created.



Overview

One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a "cell" and each cell has two possible states, black and white. The "neighbors" of a cell are the 8 squares touching it. For such a cell and its neighbors, there are 512 (= 2⁹) possible patterns. For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway's Game of Life is a popular version of this model.

It is usually assumed that every cell in the universe starts in the same state, except for a finite number of cells in other states, often called a configuration. More generally, it is sometimes assumed that the universe starts out covered with a periodic pattern, and only a finite number of cells violate that pattern. The latter assumption is common in one-dimensional cellular automata.

[Image: a torus, a toroidal shape]

Cellular automata are often simulated on a finite grid rather than an infinite one. In two dimensions, the universe would be a rectangle instead of an infinite plane. The obvious problem with finite grids is how to handle the cells on the edges. How they are handled will affect the values of all the cells in the grid. One possible method is to allow the values in those cells to remain constant. Another method is to define neighborhoods differently for these cells. One could say that they have fewer neighbors, but then one would also have to define new rules for the cells located on the edges. These cells are usually handled with a toroidal arrangement: when one goes off the top, one comes in at the corresponding position on the bottom, and when one goes off the left, one comes in on the right. (This essentially simulates an infinite periodic tiling, and in the field of partial differential equations is sometimes referred to as periodic boundary conditions.) This can be visualized as taping the left and right edges of the rectangle to form a tube, then taping the top and bottom edges of the tube to form a torus (doughnut shape). Universes of other dimensions are handled similarly. This is done in order to solve boundary problems with neighborhoods. For example, in a 1-dimensional cellular automaton like the examples below, the neighborhood of a cell x_i^t (where t is the time step, drawn vertically, and i is the cell index within one generation, drawn horizontally) is {x_{i-1}^{t-1}, x_i^{t-1}, x_{i+1}^{t-1}}. Problems arise when a neighborhood on the left border references its upper left cell, which is not in the cellular space, as part of its neighborhood.
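As a concrete sketch of the toroidal arrangement (assuming NumPy is available; the function and variable names are my own), the following Python code advances a two-dimensional Game of Life grid by one generation, with np.roll providing the wrap-around edges described above:

import numpy as np

def life_step(grid):
    """One Game of Life generation on a toroidal (wrap-around) grid of 0s and 1s."""
    # Sum the eight neighbors; np.roll wraps around the edges, giving periodic boundaries.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    # A cell is black (1) next step if it has exactly 3 black neighbors,
    # or if it is black now and has exactly 2 black neighbors.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Example: a glider on a 10 x 10 torus; it eventually wraps around the edges.
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(40):
    grid = life_step(grid)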


History

Stanisław Ulam, while working at the Los Alamos National Laboratory in the 1940s, studied the growth of crystals, using a simple lattice network as his model. At the same time, John von Neumann, Ulam's colleague at Los Alamos, was working on the problem of self-replicating systems. Von Neumann's initial design was founded upon the notion of one robot building another robot. This design is known as the kinematic model. As he developed this design, von Neumann came to realize the great difficulty of building a self-replicating robot and the great cost of providing it with a "sea of parts" from which to build its replica. Ulam suggested that von Neumann develop his design around a mathematical abstraction, such as the one Ulam used to study crystal growth. Thus was born the first system of cellular automata. Like Ulam's lattice network, von Neumann's cellular automata are two-dimensional, with his self-replicator implemented algorithmically. The result was a universal copier and constructor working within a CA with a small neighborhood (only those cells that touch are neighbors; for von Neumann's cellular automata, only orthogonal cells), and with 29 states per cell. Von Neumann gave an existence proof that a particular pattern would make endless copies of itself within the given cellular universe. This design is known as the tessellation model, and is called a von Neumann universal constructor.

In the 1970s a two-state, two-dimensional cellular automaton named Game of Life became very widely known, particularly among the early computing community. Invented by John Conway, and popularized by Martin Gardner in a Scientific American article, its rules are as follows: If a black cell has 2 or 3 black neighbors, it stays black. If a black cell has fewer than 2 or more than 3 black neighbors, it becomes white. If a white cell has 3 black neighbors, it becomes black. Despite its simplicity, the system achieves an impressive diversity of behavior, fluctuating between apparent randomness and order. One of the most apparent features of the Game of Life is the frequent occurrence of gliders, arrangements of cells that essentially move themselves across the grid. It is possible to arrange the automaton so that the gliders interact to perform computations, and after much effort it has been shown that the Game of Life can emulate a universal Turing machine. Possibly because it was viewed as a largely recreational topic, little follow-up work was done outside of investigating the particularities of the Game of Life and a few related rules.

In 1969, however, German computer pioneer Konrad Zuse published his book Calculating Space, proposing that the physical laws of the universe are discrete by nature, and that the entire universe is just the output of a deterministic computation on a giant cellular automaton. This was the first book on what today is called digital physics.

In 1983 Stephen Wolfram published the first of a series of papers systematically investigating a very basic but essentially unknown class of cellular automata, which he termed elementary cellular automata (see below). The unexpected complexity of the behavior of these simple rules led Wolfram to suspect that complexity in nature may be due to similar mechanisms. Additionally, during this period Wolfram formulated the concepts of intrinsic randomness and computational irreducibility, and suggested that rule 110 may be universal, a conjecture that his research assistant Matthew Cook proved in the 1990s.

Wolfram left academia in the mid-late 1980s to create Mathematica, which he then used to extend his earlier results to a broad range of other simple, abstract systems. In 2002 he published his results in the 1280-page text A New Kind of Science, which extensively argued that the discoveries about cellular automata are not isolated facts but are robust and have significance for all disciplines of science. Despite much confusion in the press and academia, the book did not argue for a fundamental theory of physics based on cellular automata, and although it did describe a few specific physical models based on cellular automata, it also provided models based on qualitatively different abstract systems.

In his 2005 book, The Lifebox, The Seashell and The Soul, Rudy Rucker expanded upon Wolfram's theories toward a theory of Universal Automatism. This used cellular automata as a model to explain how simple rules can generate complex results.


Simplest

The simplest nontrivial CA would be one-dimensional, with two possible states per cell, and a cell's neighbors defined to be the adjacent cells on either side of it. A cell and its two neighbors form a neighborhood of 3 cells, so there are 2³ = 8 possible patterns for a neighborhood. There are then 2⁸ = 256 possible rules. These 256 CAs are generally referred to using Wolfram notation, a standard naming convention invented by Wolfram. The name of a CA is the decimal number which, in binary, gives the rule table, with the eight possible neighborhoods listed in reverse counting order. For example, below are tables defining the "rule 30 CA" and the "rule 110 CA" (in binary, 30 and 110 are written 11110 and 1101110, respectively) and graphical representations of them starting from a 1 in the center of each image:

[Image: Rule 30 cellular automaton]

current pattern             111   110   101   100   011   010   001   000
new state for center cell    0     0     0     1     1     1     1     0

[Image: Rule 110 cellular automaton]

current pattern             111   110   101   100   011   010   001   000
new state for center cell    0     1     1     0     1     1     1     0

A table completely defines a CA rule. For example, the rule 30 table says that if three adjacent cells in the CA currently have the pattern 100 (left cell is on, middle and right cells are off), then the middle cell will become 1 (on) on the next time step. The rule 110 CA says the opposite for that particular case.
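The rule-number convention translates directly into code. The following Python sketch (the function and variable names are my own) unpacks a Wolfram rule number into its lookup table and evolves the CA from a single black cell, printing the familiar rule 30 triangle:

def make_rule(rule_number):
    """Map each (left, center, right) neighborhood to the new center-cell state."""
    return {(l, c, r): (rule_number >> ((l << 2) | (c << 1) | r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def run(rule_number, width=63, steps=31):
    """Evolve the CA from a single 1 in the middle of a row of 0s (edges held at 0)."""
    table = make_rule(rule_number)
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = [table[(row[i - 1] if i > 0 else 0,
                      row[i],
                      row[i + 1] if i < width - 1 else 0)]
               for i in range(width)]
        history.append(row)
    return history

for row in run(30):
    print("".join("#" if cell else "." for cell in row))

Calling run(110) instead produces the rule 110 picture.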

A number of papers have analyzed and compared these 256 CAs, either individually or collectively. The rule 30 and rule 110 CAs are particularly interesting.

Rule 30 generates apparent randomness despite the lack of anything that could reasonably be considered random input. Wolfram proposed using its center column as a pseudorandom number generator (PRNG); it passes many standard tests for randomness, and Wolfram uses this rule in the Mathematica product for creating random integers. (In particular, a 1990s cryptography survey book claimed that rule 30 was equivalent to a linear feedback shift register (LFSR), but in fact the claim was about rule 90.) Although rule 30 produces randomness on many input patterns, there are also infinitely many input patterns that result in repeating patterns. The trivial example of such a pattern is an input consisting only of zeros. A less trivial example, found by Matthew Cook, is any input consisting of infinite repetitions of the pattern '00001000111000', with repetitions optionally separated by six ones.
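As a toy illustration of the center-column idea (a sketch only, not Mathematica's actual implementation), the bits running down the middle of a rule 30 evolution can be packed into pseudorandom integers:

def rule30_center_bits(n_bits, width=None):
    """Yield successive center-column bits of rule 30 started from a single 1."""
    width = width or 2 * n_bits + 3            # wide enough that the edges never matter
    table = {(l, c, r): (30 >> ((l << 2) | (c << 1) | r)) & 1
             for l in (0, 1) for c in (0, 1) for r in (0, 1)}
    row = [0] * width
    row[width // 2] = 1
    for _ in range(n_bits):
        yield row[width // 2]
        row = [table[(row[i - 1] if i > 0 else 0, row[i],
                      row[i + 1] if i < width - 1 else 0)]
               for i in range(width)]

def rule30_int(n_bits=32):
    """Pack center-column bits into one pseudorandom integer."""
    value = 0
    for bit in rule30_center_bits(n_bits):
        value = (value << 1) | bit
    return value

print(rule30_int())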

Rule 110, like the Game of Life, exhibits what Wolfram calls class IV behavior, which is neither completely random nor completely repetitive. Localized structures appear and interact in various complicated-looking ways. In the course of the development of A New Kind of Science, while working as a research assistant to Stephen Wolfram in 1994, Matthew Cook proved that some of these structures were rich enough to support universality. This result is interesting because rule 110 is an extremely simple one-dimensional system, and one that is difficult to engineer to perform specific behavior; it therefore provides significant support for Wolfram's view that class IV systems are inherently likely to be universal. Cook presented his proof at a Santa Fe Institute conference on cellular automata in 1998, but Wolfram blocked the proof from being included in the conference proceedings, as Wolfram did not want it announced before the publication of A New Kind of Science. In 2004, Cook's proof was finally published in Wolfram's journal Complex Systems (Vol. 15, No. 1), over ten years after Cook came up with it. Rule 110 has since served as the basis for some of the smallest known universal Turing machines, building on concepts developed in the proof of its universality.


Reversible

A CA is said to be reversible if for every current configuration of the CA there is exactly one past configuration (preimage). If one thinks of a CA as a function mapping configurations to configurations, reversibility implies that this function is bijective.

For one-dimensional CA there are known algorithms for finding preimages, and any 1D rule can be proved either reversible or irreversible. For CA of two or more dimensions, reversibility has been proved undecidable for arbitrary rules. The proof, by Jarkko Kari, is related to the tiling problem for Wang tiles.
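As a rough illustration (this is not the de Bruijn-graph machinery behind the known algorithms), one can gather evidence about reversibility by testing injectivity on small cyclic configurations: finding two distinct configurations with the same successor proves a rule irreversible, while finding none is merely suggestive.

from itertools import product

def elementary_table(rule_number):
    """The 8-entry lookup table of a Wolfram elementary rule."""
    return {(l, c, r): (rule_number >> ((l << 2) | (c << 1) | r)) & 1
            for l in (0, 1) for c in (0, 1) for r in (0, 1)}

def step_cyclic(table, config):
    """One update of an elementary CA on a cyclic (wrap-around) row of cells."""
    n = len(config)
    return tuple(table[(config[(i - 1) % n], config[i], config[(i + 1) % n])]
                 for i in range(n))

def find_collision(rule_number, max_size=12):
    """Return two distinct cyclic configurations with the same successor, if any are found."""
    table = elementary_table(rule_number)
    for n in range(1, max_size + 1):
        seen = {}
        for config in product((0, 1), repeat=n):
            image = step_cyclic(table, config)
            if image in seen:
                return seen[image], config      # witness that the rule is not injective
            seen[image] = config
    return None

print(find_collision(30))    # rule 30 is irreversible, so a collision turns up quickly
print(find_collision(204))   # rule 204 (the identity rule) is reversible: no collision found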

Reversible CA are often used to simulate such physical phenomena as gas and fluid dynamics, since they obey the laws of thermodynamics. Such CA have rules specially constructed to be reversible. Such systems have been studied by Tommaso Toffoli, Norman Margolus and others.

For finite CAs that are not reversible, there must exist patterns that have no previous state. These patterns are called Garden of Eden patterns. In other words, no configuration ever evolves into a Garden of Eden pattern.

Several techniques can be used to explicitly construct reversible CA with known inverses. Two common ones are the second order technique and the partitioning technique, both of which involve modifying the definition of a CA in some way. Although such automata do not strictly satisfy the definition given above, it can be shown that they can be emulated by conventional CAs with sufficiently large neighborhoods and numbers of states, and can therefore be considered a subset of conventional CA.
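Here is a minimal sketch of the second-order idea for a binary rule (one common way the technique is presented; details vary by author): the new row is the ordinary CA output XORed with the row from two steps back, which makes the update trivially invertible.

def second_order_step(table, prev_row, cur_row):
    """Second-order update on a cyclic row: new cell = rule output XOR cell two steps back."""
    n = len(cur_row)
    return tuple(table[(cur_row[(i - 1) % n], cur_row[i], cur_row[(i + 1) % n])] ^ prev_row[i]
                 for i in range(n))

# Rule 30's ordinary lookup table, used here as the "f" inside the second-order scheme.
table = {(l, c, r): (30 >> ((l << 2) | (c << 1) | r)) & 1
         for l in (0, 1) for c in (0, 1) for r in (0, 1)}

a = (0, 0, 1, 0, 0, 1, 1, 0)
b = (0, 1, 0, 0, 1, 0, 1, 1)
c = second_order_step(table, a, b)           # run forward one step
assert second_order_step(table, c, b) == a   # the same rule run backward recovers row a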

Another technique due to K. Morita and M. Harao consists in partitioning each cell into a finite number of parts, each part being devoted to some neighbor. The evolution proceeds by exchanging the corresponding parts between neighbors and then applying on each cell a purely local transformation F depending only on the state of the cell (and not on the states of its neighbors). With such a construction scheme, the cellular automaton is guaranteed to be reversible if the local transformation F is itself a bijection.
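A toy one-dimensional version of this partitioning scheme (an illustrative sketch only; the construction of Morita and Harao is more general): each cell holds a (left, center, right) triple, neighbors exchange their outer parts, and then a bijective local map F is applied, so the whole automaton can be run backwards by undoing each step.

import random

# An arbitrary bijection F on the 8 possible binary (left, center, right) triples.
triples = [(l, c, r) for l in (0, 1) for c in (0, 1) for r in (0, 1)]
shuffled = triples[:]
random.Random(0).shuffle(shuffled)
F = dict(zip(triples, shuffled))
F_inv = {v: k for k, v in F.items()}

def partitioned_step(cells, local_map):
    """Exchange outer parts with the (cyclic) neighbors, then apply the local map."""
    n = len(cells)
    exchanged = [(cells[(i - 1) % n][2], cells[i][1], cells[(i + 1) % n][0])
                 for i in range(n)]
    return [local_map[t] for t in exchanged]

def partitioned_step_back(cells, inverse_map):
    """Invert one step: undo the local map, then send the outer parts back where they came from."""
    n = len(cells)
    undone = [inverse_map[t] for t in cells]
    return [(undone[(i - 1) % n][2], undone[i][1], undone[(i + 1) % n][0])
            for i in range(n)]

rng = random.Random(1)
state = [rng.choice(triples) for _ in range(10)]
assert partitioned_step_back(partitioned_step(state, F), F_inv) == state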


Totalistic

A special class of CAs are totalistic CAs. The state of each cell in a totalistic CA is represented by a number (usually an integer value drawn from a finite set), and the value of a cell at time t depends only on the sum of the values of the cells in its neighborhood (possibly including the cell itself) at time t−1. If the new state of a cell depends separately on its own state at time t−1, in addition to the sum of its neighbors' values, then the CA is properly called outer totalistic, although the distinction is not always made. Conway's Game of Life is an example of an outer totalistic CA with cell values 0 and 1.

A notation exists to describe the rulesets of two-state totalistic CAs; it consists of an initial indicating the neighbourhood of each cell, together with the sums, following the letters S (for survival) and B (for birth), for which those transitions occur. In this notation Conway's Game of Life is M:S23/B3. This notation has been extended to non-totalistic CAs, where a letter or letters follow each sum to indicate which patterns of neighbours cause survival or birth events.
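The following Python sketch (assuming NumPy, and using a 'B3/S23'-style rule string, a widely used variant of the notation above) parses the birth and survival sums and applies one outer totalistic step on a toroidal grid:

import numpy as np

def parse_bs(rule_string):
    """Turn a rule string like 'B3/S23' into (birth, survival) sets of neighbor sums."""
    birth, survival = set(), set()
    for part in rule_string.upper().split("/"):
        target = birth if part.startswith("B") else survival
        target.update(int(digit) for digit in part[1:])
    return birth, survival

def outer_totalistic_step(grid, birth, survival):
    """One step of a two-state outer totalistic CA on a toroidal grid."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    born = (grid == 0) & np.isin(neighbors, list(birth))
    survives = (grid == 1) & np.isin(neighbors, list(survival))
    return (born | survives).astype(int)

birth, survival = parse_bs("B3/S23")                       # the Game of Life
grid = (np.random.default_rng(0).random((20, 20)) < 0.3).astype(int)
grid = outer_totalistic_step(grid, birth, survival)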


Evolving cellular automata using genetic algorithms

Recently there has been keen interest in building decentralized systems, whether sensor networks or more sophisticated micro-level structures designed at the network level and aimed at decentralized information processing. The idea of emergent computation grew out of the need to use distributed systems for information processing at the global level. Although the area is still in its infancy, some researchers have started taking the idea seriously. Melanie Mitchell, professor of computer science at Portland State University and external professor at the Santa Fe Institute, has been working on the idea of using self-evolving cellular arrays to study emergent computation and distributed information processing. Mitchell and colleagues at the SFI use evolutionary computation to program cellular arrays to perform computations. Computation in decentralized systems is very different from that in classical systems, where information is processed at a central location depending on the system's state; in a decentralized system, information processing occurs in the form of global and local pattern dynamics.

The inspiration for this approach comes from complex natural systems such as insect colonies, nervous systems and economies. The focus of the research is to understand how computation occurs in an evolving decentralized system. To model some of the features of these systems and to see how they give rise to emergent computation, Mitchell and collaborators at the SFI have applied genetic algorithms to evolve patterns in cellular automata; their results showed that the GA discovered rules giving rise to sophisticated emergent computational strategies. Mitchell's group used a one-dimensional binary array (each cell can have 2 states) in which each cell has 2 neighbors; the array can be thought of as a circle, with the first and last cells being neighbors of each other. The evolution of the array was tracked through the number of ones and zeros after each iteration, and the results were plotted as space-time diagrams that showed clearly how the network evolved and what sort of emergent computation was visible. One question of particular interest was how regions compete with regard to the density of ones or zeros.
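To make the general approach concrete, here is a deliberately simplified Python sketch (toy parameters of my own choosing, not the published experiments): a genetic algorithm evolves rule tables for a cyclic binary CA on a density-classification task, scoring each rule by how often the lattice settles to all ones when ones start in the majority and to all zeros otherwise.

import random

RADIUS = 1                                   # toy neighborhood of 2 neighbors, as described above
TABLE_SIZE = 2 ** (2 * RADIUS + 1)           # 8 entries for a radius-1 binary rule
LATTICE, STEPS, TRIALS = 39, 50, 20

def run_ca(rule_bits, cells, steps):
    """Iterate a cyclic 1D binary CA whose rule table is a list of 8 bits."""
    n = len(cells)
    for _ in range(steps):
        cells = [rule_bits[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                 for i in range(n)]
    return cells

def fitness(rule_bits, rng):
    """Fraction of random initial lattices classified correctly by majority density."""
    correct = 0
    for _ in range(TRIALS):
        cells = [rng.randint(0, 1) for _ in range(LATTICE)]
        majority = int(2 * sum(cells) > LATTICE)
        final = run_ca(rule_bits, cells, STEPS)
        correct += all(cell == majority for cell in final)
    return correct / TRIALS

def evolve(generations=20, pop_size=30, seed=0):
    """A bare-bones GA: elitist selection, one-point crossover, occasional mutation."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(TABLE_SIZE)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda rule: fitness(rule, rng), reverse=True)
        elite = ranked[:pop_size // 5]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(TABLE_SIZE)
            child = a[:cut] + b[cut:]
            if rng.random() < 0.5:
                child[rng.randrange(TABLE_SIZE)] ^= 1
            children.append(child)
        population = elite + children
    return max(population, key=lambda rule: fitness(rule, rng))

best = evolve()
print(best, fitness(best, random.Random(42)))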

The beauty of complex systems occurring in nature is that the actions of simple components with only local information and communication give rise to coordinated global information processing. Although it is not yet known how such systems give rise to computation on a global scale, models such as cellular automata may uncover many of their hidden features. The motivation for this approach comes from the need to understand natural systems and to engineer decentralized artificial systems that can give rise to emergent computation.

Mitchell's group identified three levels of information processing occurring during the iterations of evolved cellular automata. The first is the storage and transmission of information through particle (cell-pattern) interactions. The second comprises geometric subroutines that implement intermediate-scale computations, for example the size competition between regions of low and high density. The third is global computation over the entire lattice (emergent computation at the global scale).

These results are interesting in that a very simple one-dimensional cellular automaton showed coordination on a global scale, which fits the idea of emergent computation.

Future work in the area may include more sophisticated models using higher-dimensional cellular automata to model complex natural systems. Apart from emergent computation, cellular automata are also used to study other aspects of complex natural systems, including communication and interaction in insect societies, cognition, and other naturally occurring processes.


Cryptography use

Rule 30 was originally suggested as the basis of a possible stream cipher for use in cryptography.

Cellular automata have been proposed for public key cryptography. The one way function is the evolution of a finite CA whose inverse is believed to be hard to find. Given the rule, anyone can easily calculate future states, but it appears to be very difficult to calculate previous states. However, the designer of the rule can create it in such a way as to be able to easily invert it. Therefore, it is apparently a trapdoor function, and can be used as a public-key cryptosystem. The security of such systems is not currently known.


Related automata

There are many possible generalizations of the CA concept.

One way is by using something other than a rectangular (cubic, etc.) grid. For example, if a plane is tiled with equilateral triangles, those triangles could be used as cells.

Also, rules can be probabilistic rather than deterministic. A probabilistic rule gives, for each pattern at time t, the probabilities that the central cell will transition to each possible state at time t+1. Sometimes a simpler rule is used; for example: "The rule is the Game of Life, but on each time step there is a 0.001% probability that each cell will transition to the opposite color."
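As an illustrative sketch of such a probabilistic rule, the following wrapper applies any deterministic step function (for example, the toroidal life_step sketched in the Overview section) and then flips each cell independently; the 1e-5 figure corresponds to the 0.001% in the example above.

import numpy as np

def noisy_step(grid, step, flip_probability=1e-5, rng=None):
    """Apply a deterministic CA step, then flip each cell with a small independent probability."""
    rng = rng or np.random.default_rng()
    new_grid = step(grid)
    flips = rng.random(new_grid.shape) < flip_probability
    return np.where(flips, 1 - new_grid, new_grid)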

The neighborhood or rules could change over time or space. For example, initially the new state of a cell could be determined by the horizontally adjacent cells, but for the next generation the vertical cells would be used.

The grid can be finite, so that patterns can "fall off" the edge of the universe.

In CA, the new state of a cell is not affected by the new state of other cells. This could be changed so that, for instance, a 2 by 2 block of cells can be determined by itself and the cells adjacent to itself.

There are continuous automata. These are like totalistic CA, but instead of the rule and states being discrete (e.g. a table, using states {0,1,2}), continuous functions are used and the states become continuous (usually values in [0,1]). The state of each location is then one or more real numbers. Such automata can, for example, model diffusion in liquids.
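A minimal sketch of a continuous automaton (the particular update rule here is my own choice for illustration): each cell holds a value in [0, 1), and the new value is the neighborhood average plus a constant, keeping only the fractional part.

def continuous_step(row, increment=0.3):
    """One update of a 1D continuous automaton on a cyclic row of values in [0, 1)."""
    n = len(row)
    return [((row[(i - 1) % n] + row[i] + row[(i + 1) % n]) / 3 + increment) % 1.0
            for i in range(n)]

row = [0.0] * 21
row[10] = 1.0
for _ in range(5):
    row = continuous_step(row)
print(["%.2f" % value for value in row])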

Continuous spatial automata have a continuum of locations. The state of a location is a finite number of real numbers. Time is also continuous, and the state evolves according to differential equations. One important example is reaction-diffusion textures, differential equations proposed by Alan Turing to explain how chemical reactions could create the stripes on zebras and spots on leopards. When these are approximated by CA, such CAs often yield similar patterns. MacLennan considers continuous spatial automata as a model of computation.
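When reaction-diffusion systems of this general kind are discretized onto a grid, they behave much like continuous automata. Below is a brief NumPy sketch of the Gray-Scott model (a standard reaction-diffusion system chosen here for illustration; it is not Turing's original formulation), whose v field develops spots and stripes reminiscent of animal coat patterns.

import numpy as np

def laplacian(a):
    """Discrete Laplacian with periodic (wrap-around) boundaries."""
    return (np.roll(a, 1, axis=0) + np.roll(a, -1, axis=0) +
            np.roll(a, 1, axis=1) + np.roll(a, -1, axis=1) - 4 * a)

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, F=0.035, k=0.060):
    """Iterate the Gray-Scott reaction-diffusion model on an n-by-n periodic grid."""
    u = np.ones((n, n))
    v = np.zeros((n, n))
    u[n//2 - 5:n//2 + 5, n//2 - 5:n//2 + 5] = 0.50   # seed a small square of chemical v
    v[n//2 - 5:n//2 + 5, n//2 - 5:n//2 + 5] = 0.25
    for _ in range(steps):
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v
    return u, v

u, v = gray_scott()
print(v.min(), v.max())   # the v field carries the spot/stripe pattern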

There are known examples of continuous spatial automata which exhibit propagating phenomena analogous to gliders in the Game of Life.


Natural biotic types

[Image: Conus textile exhibits a cellular automaton pattern on its shell]

Some living things use naturally occurring cellular automata in their functioning.

Patterns on some seashells, such as those of the genera Conus and Cymbiola, are generated by natural CA. The pigment cells reside in a narrow band along the shell's lip. Each cell secretes pigment according to the activating and inhibiting activity of its neighboring pigment cells, obeying a natural version of a mathematical rule. As the shell grows slowly, the cell band leaves the colored pattern behind on it. For example, the widespread species Conus textile bears a pattern resembling that of the rule 30 CA described above.

Plants regulate their intake and loss of gases via a CA mechanism. Each stoma on the leaf acts as a cell.

Neural networks can be used as cellular automata, too. The complex moving wave patterns on the skin of cephalopods are a good display of corresponding activation patterns in the animals' brain.


Chemical types

The Belousov-Zhabotinsky reaction is a spatio-temporal chemical oscillator that can be simulated by means of a cellular automaton. In the 1950s A. M. Zhabotinsky (extending the work of B. P. Belousov) discovered that when a thin, homogeneous layer of a mixture of malonic acid, acidified bromate and a ceric salt was left undisturbed, fascinating geometric patterns such as concentric circles and spirals propagated across the medium. In the "Computer Recreations" section of the August 1988 issue of Scientific American, A. K. Dewdney presented a cellular automaton whose behavior closely resembles the Belousov-Zhabotinsky reaction. So far, no naturally occurring chemical cellular automata have been observed.


Computer processors

CA processors are physical (not software-only) implementations of CA concepts that can process information computationally. Processing elements are arranged in a regular grid of identical cells. The grid is usually a square tiling, or tessellation, in two or three dimensions; other tilings are possible but have not yet been used. Cell states are determined only by interactions with the small number of adjoining cells. Cells interact and communicate directly only with adjoining (adjacent, neighboring) cells; no means exists to communicate directly with cells farther away.

One such CA processor array configuration is the systolic array.

Cell interaction can be via electric charge, magnetism, vibration (phonons at quantum scales), or any other physically useful means. This can be done in several ways so no wires are needed between any elements.

This is very unlike the processors used in most computers today (von Neumann designs), which are divided into sections whose elements can communicate with distant elements over wires.


Error correction coding

CA have been applied to the design of error-correcting codes in the paper "Design of CAECC - Cellular Automata Based Error Correcting Code" by D. Roy Chowdhury, S. Basu, I. Sen Gupta and P. Pal Chaudhuri. The paper defines a new scheme for building SEC-DED codes using CA, and also reports a fast hardware decoder for the code.

