What is cybernetics?


Cybernetics is the interdisciplinary study of the structure of regulatory systems. It is closely related to control theory and systems theory. Both in its origins and in its evolution in the second half of the 20th century, cybernetics is equally applicable to physical and social (that is, language-based) systems.

Contemporary cybernetics began as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology in the 1940s, often attributed to the Macy Conferences.

Friday, February 27, 2009

Tensegrity

Needle Tower by Kenneth Snelson (1968)

Tensegrity is a portmanteau of tensional integrity. It refers to the integrity of structures as being based in a synergy between balanced tension and compression components.

The term "tensegrity" was first explored by artist Kenneth Snelson to produce sculptures such as his 18 meter high Needle Tower in 1968. The term 'tensegrity' was coined for Snelson by Buckminster Fuller. Fuller is best known for his geodesic domes, which he developed based on concepts explained and demonstrated by Snelson through sculptures. The term "synergetics" may refer more abstractly to synergetic systems of contrasting forces.



Concept

The simplest tensegrity structure. Each of the three compression members is symmetric with the other two, end to end. Each end is connected to three cables, which provide tension and which precisely define the position of that end, in the same way that the three cables of the Skylon tower define the bottom end of that pillar.

A similar structure but with four compression members.

Tensegrity is the exhibited strength that results "when push and pull have a win-win relationship with each other". Tension is continuous and compression discontinuous, such that continuous pull is balanced by equivalently discontinuous pushing forces.

Buckminster Fuller explained that these fundamental phenomena are not opposites but complements that can always be found together. Tensegrity is the name for a synergy between co-existing pairs of fundamental physical laws: push and pull, compression and tension, repulsion and attraction.

If one pushes a ping-pong ball on a smooth table with the point of a sharp pencil, the ball always rolls away from the direction of the push, first rolling one way and then the other: push is divergent. On the other hand, attaching a string to the ping-pong ball with tape and pulling it creates convergence: no matter how other forces might influence the ball to roll away, the string brings it toward you more and more directly. Pull is convergent. Similarly, when a car pulls a trailer uphill, the trailer converges toward the course of the car; if the trailer begins to sway, increasing the pull by accelerating can dampen the swaying motion. Driving downhill, however, causes the trailer to push, and the trailer tends to sway from side to side.

The rubber skin of a balloon can be seen as continuously pulling (inward against the air inside) while the individual molecules of air discontinuously push against the inside of the balloon, keeping it inflated. All external forces striking the external surface are immediately and continuously distributed over the entire system, which is why the balloon is very strong despite its thin material. Similarly, thin plastic membranes such as plastic bags are often stronger when loaded than when unloaded.


Metaphorical tensegrity

Webster's dictionary has simplified the collective meaning to "all things working together". While this may seem an over-simplification in light of the technical underpinnings of the concept, the phrase is legitimate given the concept's universal applicability: it aptly captures the spirit of the two words joined in the neologism, whose inherent meaning is extensive and multi-disciplinary.

For example, the formation of functioning architectures in one particular area of cybernetics research, sometimes referred to as "systems intellect" (the paradigmatic landscapes of the meta-cognitive level), makes use of octahedral matrices for the creation of focuses; the simplest form of tensegrity is the octahedron (first discovered by Theodore Pope). These matrices help to capture the ever-changing, dynamic contextual backdrop quickly. The formation of such functioning architectures, which in theory allows a kind of "auto-epistemological" approach to the creation of "sentience" in the cybernetic environment, thus depends on conceptually based "primitives" held in tension-compression, or complementary, relationships. These architectures also allow ready application of any acquired acumen that has been codified as an expert system (its algorithmic and heuristic content) relative to the perceived contextual backdrop. This would imply that even human thought and reconnoitering dialogues have "tensegrital" underpinnings.


Applications

The idea was adopted into architecture in the 1980s, when David Geiger designed the first significant tensegrity structure, the Seoul Olympic Gymnastics Arena, for the 1988 Summer Olympics. The Georgia Dome, used for the 1996 Summer Olympics, is a large tensegrity structure of similar design.

Theoretically, there is no limitation to the size of a tensegrity. Cities could be covered with geodesic domes. Planets and stars (Dyson sphere) could be contained within them.

"Tensegrity is a contraction of tensional integrity structuring. All geodesic domes are tensegrity structures, whether the tension-islanded compression differentiations are visible to the observer or not. Tensegrity geodesic spheres do what they do because they have the properties of hydraulically or pneumatically inflated structures."

As Harvard physician and scientist Donald Ingber explains:

"The tension-bearing members in these structures – whether Fuller's domes or Snelson's sculptures – map out the shortest paths between adjacent members (and are therefore, by definition, arranged geodesically) Tensional forces naturally transmit themselves over the shortest distance between two points, so the members of a tensegrity structure are precisely positioned to best withstand stress. For this reason, tensegrity structures offer a maximum amount of strength."

The Kurilpa Bridge, scheduled to open in 2009 across the Brisbane River in Queensland, Australia, will be a multiple-mast, cable-stay green bridge based on the principles of tensegrity.


Biology

The concept has applications in biology. Biological structures such as muscles and bones, or rigid and elastic cell membranes, are made strong by the unison of tensioned and compressed parts: the musculoskeletal system is a synergy of muscle and bone, with muscle providing continuous pull and bone discontinuous push. Rick Barrett has theorized that tensegrity has implications for Taijiquan and athletics in general. He claims that high-level athletes and Taijiquan players enter "the zone" and gain access to the tensegrity of their connective-tissue system, which would explain the greater integration, strength, and faster reactions of elite athletes.


Russian claims

Russian artist Viatcheslav Koleichuk claimed that the idea of tensegrity was invented first by Karl Ioganson, a Russian artist of Latvian descent, who contributed works based on this principle to the main exhibition of Russian constructivism in 1921. This claim was backed by Maria Gough in her paper "In the Laboratory of Constructivism: Karl Ioganson's Cold Structures" (October, Vol. 84, Spring 1998, pp. 90-117). Kenneth Snelson, however, denied the claim, insisting that Ioganson's works were more than one step removed from his own concept of tensegrity.



Synergy

Synergy is the term used to describe a situation where different entities cooperate advantageously for a final outcome.

The opposite of synergy is antagonism, the phenomenon where two agents in combination have an overall effect that is less than that predicted from their individual effects.

Synergy can also mean:

  • A mutually advantageous conjunction.
  • A dynamic state in which combined action is favored over the sum of individual component actions.
  • Behavior of whole systems unpredicted by the behavior of their parts taken separately. More accurately known as emergent behavior.
  • The cooperative action of two or more stimuli or drugs.


Drug synergy

Drug synergism occurs when drugs interact in ways that enhance or magnify one or more effects, or side effects, of those drugs. This is sometimes exploited in combination preparations, such as codeine mixed with acetaminophen or ibuprofen to enhance codeine's action as a pain reliever. It is also seen with recreational drugs: 5-HTP, a serotonin precursor often used as an antidepressant, is sometimes taken before, during, and shortly after recreational use of MDMA, allegedly to increase the "high" and ease the "comedown" stages of MDMA use (although most anecdotal evidence points to 5-HTP moderately muting the effect of MDMA). Another example is the use of cannabis with LSD, where the active chemicals in cannabis enhance the hallucinatory experience of LSD.

Negative synergistic effects are a form of contraindication: for instance, combining more than one depressant drug that affects the central nervous system (CNS), such as alcohol and Valium, can cause a reaction greater than the simple sum of the individual effects of each drug used separately. In this particular case, the most serious consequence of drug synergy is exaggerated respiratory depression, which can be fatal if left untreated.

Pest synergy

Pest synergy occurs in a biological host population where, for example, the introduction of parasite A may cause 10% fatalities and parasite B may also cause 10% loss. When both parasites are present, the combined losses would normally be expected to total less than 20%, yet in some cases losses are significantly greater. In such cases the parasites in combination are said to have a synergistic effect.
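
Under the usual assumption that the two parasites act independently, the expected baseline loss can be computed directly; the following sketch (in Python, with the illustrative 10% figures from the text and a hypothetical observed value) shows why the no-synergy expectation is just under 20%.

    # Expected combined mortality if two parasites act independently.
    p_a = 0.10  # mortality from parasite A alone
    p_b = 0.10  # mortality from parasite B alone

    # A host is lost unless it survives both parasites:
    expected = 1 - (1 - p_a) * (1 - p_b)
    print(f"expected combined loss: {expected:.0%}")  # 19%

    # Synergy is suspected when observed losses clearly exceed
    # this independence baseline (observed value is hypothetical).
    observed = 0.35
    print("synergy suspected:", observed > expected)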

Toxicologic synergy

Toxicologic synergy is of concern to the public and regulatory agencies because chemicals individually considered safe might pose unacceptable health or ecological risk when exposure is to a combination. Articles in scientific and lay journals include many definitions of chemical or toxicologic synergy, often vague or in conflict with each other. Because toxic interactions are defined relative to the expectation under "no interaction," a determination of synergy (or antagonism) depends on what is meant by "no interaction." The United States Environmental Protection Agency has one of the more detailed and precise definitions of toxic interaction, designed to facilitate risk assessment. In their guidance documents, the no-interaction default assumption is dose addition, so synergy means a mixture response that exceeds that predicted from dose addition. The EPA emphasizes that synergy does not always make a mixture dangerous, nor does antagonism always make the mixture safe; each depends on the predicted risk under dose addition.
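
To make the dose-addition baseline concrete, here is a minimal sketch of its common hazard-index form, in which each component's exposure is divided by its individually determined reference dose and the ratios are summed; all substance names and numbers are hypothetical, and real EPA risk assessments involve much more than this.

    # Dose addition as the "no interaction" default (hazard-index form).
    # Each entry: (estimated exposure, individual reference dose).
    mixture = {
        "chem_A": (0.02, 0.10),  # hypothetical values
        "chem_B": (0.03, 0.05),
    }

    hazard_index = sum(exp_ / ref for exp_, ref in mixture.values())
    print(f"hazard index under dose addition: {hazard_index:.2f}")

    # A measured mixture response exceeding the dose-addition prediction
    # would be labelled synergy; one falling below it, antagonism.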

For example, a consequence of pesticide use is the risk of health effects. During the registration of pesticides in the US exhaustive tests are performed to discern health effects on humans at various exposure levels. A regulatory upper limit of presence in foods is then placed on this pesticide. As long as residues in the food stay below this regulatory level, health effects are deemed highly unlikely and the food is considered safe to consume.

However, in normal agricultural practice it is rare to use only a single pesticide; during the production of a crop, several different materials may be used. Each of them has a determined regulatory level at which it is considered individually safe. In many cases, a commercial pesticide is itself a combination of several chemical agents, and thus the safe levels actually represent levels of the mixture. In contrast, combinations created by the end user, such as a farmer, are rarely tested as that combination, so the potential for synergy is unknown or estimated from data on similar combinations. This lack of information also applies to many of the chemical combinations to which humans are exposed, including residues in food, indoor air contaminants, and occupational exposures to chemicals. Some groups think that rising rates of cancer, asthma, and other health problems may be caused by these combination exposures; others have other explanations. This question will likely be answered only after years of exposure by the general population and research on chemical toxicity, usually performed on animals.

Human synergy

Human synergy relates to interacting humans. For example, say person A alone is too short to reach an apple on a tree and person B is too short as well. Once person B sits on the shoulders of person A, they are more than tall enough to reach the apple. In this example, the product of their synergy would be one apple. Another case would be two politicians: if each can gather one million votes on his own, but together they can appeal to 2.5 million voters, their synergy has produced 500,000 more votes than they would have won working independently. A song is also a good example of human synergy: it takes more than one musical part and puts them together to create a whole with a much more dramatic effect than each of the parts played individually.

A third form of human synergy is when one person is able to complete two separate tasks with one action. For example, if a person were asked by his teacher and by his boss at work to write an essay on how he could improve his work, that would be considered synergy. A more visual example is a drummer using four separate rhythms to create one drum beat.

Synergy usually arises when two persons with different, complementary skills cooperate. A fundamental example is the cooperation of a man and a woman in a couple; in business, people with organizational and technical skills very often cooperate. In general, the most common reason people cooperate is that it brings synergy; conversely, people tend to specialize precisely so that they can form groups with high synergy.

Corporate synergy

Corporate synergy occurs when corporations interact congruently. A corporate synergy refers to a financial benefit that a corporation expects to realize when it merges with or acquires another corporation. This type of synergy is a nearly ubiquitous feature of a corporate acquisition and is a negotiating point between the buyer and seller that impacts the final price both parties agree to. There are two distinct types of corporate synergies:

Revenue

A revenue synergy refers to the opportunity for a combined corporate entity to generate more revenue than its two predecessor stand-alone companies could generate separately. For example, if company A sells product X through its sales force, company B sells product Y, and company A acquires company B, then the new company can use each salesperson to sell products X and Y, thereby increasing the revenue each salesperson generates for the company.

Management

Synergy in terms of management, and in relation to team working, refers to the combined effort of individuals as participants of the team. Synergy can be positive or negative, and an easy way to interpret the two is:

Positive: 2 + 2 = 5, Negative: 2 + 2 = 3

An example of positive synergy is the effect of 'social facilitation'; the opposite would be 'social loafing'.

Cost

A cost synergy refers to the opportunity for a combined corporate entity to reduce or eliminate expenses associated with running a business. Cost synergies are realized by eliminating positions that are viewed as duplicated within the merged entity: examples include the headquarters office of one of the predecessor companies, certain executives, the human resources department, or other employees of the predecessor companies. This is related to the economic concept of economies of scale.

Computers

Synergy can also be defined as the combination of human strengths and computer strengths. Computers can process data much more quickly than humans, but lack the ability to respond to arbitrary stimuli.

Synergy in the media

In media economics, synergy is the promotion and sale of a product (and all its versions) throughout the various subsidiaries of a media conglomerate, e.g. films, soundtracks or video games. Walt Disney pioneered synergistic marketing techniques in the 1930s by granting dozens of firms the right to use his Mickey Mouse character in products and ads, and continued to market Disney media through licensing arrangements. These products can help advertise the film itself and thus help to increase the film's sales. For example, the Spider-Man films had toys of webshooters and figures of the characters made, as well as posters and games.


Syntegrity

Syntegrity is a formal model presented by Anthony Stafford Beer, a British theorist, in the 1990s, and is now a registered trademark. It is a form of non-hierarchical problem solving that can be used in a small team of 10 to 42 people. It is a business consultation product licensed to consulting firms as a base model for solving problems in a team environment.

"Syntegrity", "Syntegration", "Team Syntegrity", and "Team Syntegration" are all registered trademarks. "Syntegrity" was derived from “synergistic tensegrity".


See also

Synergy and Tensegrity


Software agent


In computer science, a software agent is a piece of software that acts for a user or other program in a relationship of agency. Such "action on behalf of" implies the authority to decide which action, if any, is appropriate. The idea is that agents are not strictly invoked for a task but activate themselves.

Related and derived concepts include Intelligent agents (in particular exhibiting some aspect of Artificial Intelligence, such as learning and reasoning), autonomous agents (capable of modifying the way in which they achieve their objectives), distributed agents (being executed on physically distinct computers), multi-agent systems (distributed agents that do not have the capabilities to achieve an objective alone and thus must communicate), and mobile agents (agents that can relocate their execution onto different processors).



Definition

Nwana's categories of software agents

The term "agent" describes a software abstraction, an idea, or a concept, similar to OOP terms such as methods, functions, and objects. The concept of an agent provides a convenient and powerful way to describe a complex software entity that is capable of acting with a certain degree of autonomy in order to accomplish tasks on behalf of its user. But unlike objects, which are defined in terms of methods and attributes, an agent is defined in terms of its behavior.

Various authors have proposed different definitions of agents; these commonly include concepts such as:

  • persistence (code is not executed on demand but runs continuously and decides for itself when it should perform some activity)
  • autonomy (agents have capabilities of task selection, prioritization, goal-directed behaviour, decision-making without human intervention)
  • social ability (agents are able to engage other components through some sort of communication and coordination, they may collaborate on a task)
  • reactivity (agents perceive the context in which they operate and react to it appropriately).

The agent concept is most useful as a tool for analyzing systems, not as a prescription. The concepts mentioned above relate well to the way we naturally think about complex tasks, and thus agents can be useful for modelling such tasks.
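
To make these properties concrete, here is a minimal, hypothetical agent loop in Python: it runs persistently, manages its own task list (autonomy), reacts to what it perceives, and communicates with collaborators through shared queues (social ability). The class and message names are invented for illustration, not taken from any real agent framework.

    import queue
    import time

    class MinimalAgent:
        """Toy agent illustrating persistence, autonomy, reactivity
        and social ability. All behaviour is illustrative."""

        def __init__(self, inbox: queue.Queue, outbox: queue.Queue):
            self.inbox = inbox        # messages from other agents (social ability)
            self.outbox = outbox
            self.goals = ["monitor"]  # agent-managed task list (autonomy)

        def sense(self):
            """Perceive the context (reactivity), stubbed as a message check."""
            try:
                return self.inbox.get_nowait()
            except queue.Empty:
                return None

        def step(self):
            event = self.sense()
            if event is not None:
                # React: re-prioritize goals based on what was perceived.
                self.goals.insert(0, f"handle:{event}")
            task = self.goals.pop(0) if self.goals else "idle"
            self.outbox.put(f"done {task}")  # report to collaborators

        def run(self, steps: int = 3):
            # Persistence: the loop, not a caller, decides when to act.
            for _ in range(steps):
                self.step()
                time.sleep(0.01)

    inbox, outbox = queue.Queue(), queue.Queue()
    inbox.put("disk_full")
    MinimalAgent(inbox, outbox).run()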

Intelligent software agents

The design of intelligent agents (or intelligent software agents) is a branch of artificial intelligence research. Capabilities of intelligent agents include:

  • ability to adapt
Adaptation implies sensing the environment and reconfiguring in response. This can be achieved through the choice of alternative problem-solving-rules or algorithms, or through the discovery of problem solving strategies. Adaptation may also include other aspects of an agent's internal construction, such as recruiting processor or storage resources.
  • ability to learn
Learning may proceed through trial and error, which implies a capability for introspection and analysis of behaviour and success. Alternatively, learning may proceed by example and generalization, which implies a capacity to abstract and generalize.
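
As a sketch of the trial-and-error case, the following epsilon-greedy loop chooses among alternative problem-solving rules and keeps success statistics, which stand in for the "introspection and analysis of behaviour and success" mentioned above; the rules and their success rates are hypothetical.

    import random

    rules = ["rule_a", "rule_b", "rule_c"]  # alternative problem-solving rules
    stats = {r: {"tries": 0, "wins": 0} for r in rules}

    def succeeded(rule: str) -> bool:
        # Hypothetical environment: rule_b works 80% of the time, others 30%.
        return random.random() < (0.8 if rule == "rule_b" else 0.3)

    def success_rate(rule: str) -> float:
        return stats[rule]["wins"] / max(stats[rule]["tries"], 1)

    def pick_rule(epsilon: float = 0.1) -> str:
        if random.random() < epsilon:
            return random.choice(rules)       # explore: trial
        return max(rules, key=success_rate)   # exploit: analyze past success

    for _ in range(500):                      # trial and error
        rule = pick_rule()
        stats[rule]["tries"] += 1
        stats[rule]["wins"] += succeeded(rule)

    print("best rule found:", max(rules, key=success_rate))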

Autonomous agents

Autonomous agents are software agents that claim to be autonomous: self-contained and capable of making independent decisions and taking actions to satisfy internal goals based upon their perceived environment. In practice, all software agents in important applications are closely supervised by people who start them up, monitor and continually modify their behavior, and shut them down when necessary. The Popek and Goldberg virtualization requirements are a hardware-level approach to the supervision problem, in principle preventing the execution of critical instructions without entering a suitable mode (such as system or super-user mode).

Distributed agents

Since agents are well suited to include their required resources in their description, they can be designed to be very loosely coupled, and it becomes easy to have them executed as independent threads and on distributed processors. Thus they become distributed agents, and the considerations of distributed computing apply. Agent code is particularly easy to implement in a distributed fashion and should scale well.

Multi-agent systems

When several agents (inter)act they may form a multi-agent system, also known as a multiple-agent system. Characteristically, such agents will not have all data or all methods available to achieve an objective (this can be referred to as a "limited viewpoint") and thus will have to collaborate with other agents. Also, there may be little or no global control, so such systems are sometimes referred to as swarm systems. As with distributed agents, data is decentralized and execution is asynchronous. Earlier, related fields include Distributed Artificial Intelligence (DAI) and distributed problem solving (DPS).

Mobile agents

A mobile agent is agent code that moves itself, including its execution state, onto another machine to continue execution there; this is also referred to as mobile code. Mobile agents can be used to gather system information, back up files by copying them in a client-server paradigm, monitor network throughput, check resource availability and moderate a system's resource utilization by checking the services running on it, and analyze network packets.

Fuzzy agents

In computer science, a fuzzy agent is a software agent that implements fuzzy logic. Such a software entity interacts with its environment through an adaptive rule-base and can therefore be considered a type of intelligent agent.
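
To make the "adaptive rule-base" idea concrete, here is a minimal fuzzy-inference sketch: triangular membership functions fuzzify an input, two rules fire to partial degrees, and a weighted average defuzzifies the result. The variable, the fuzzy sets and the rule outputs are all invented for illustration.

    def tri(x, lo, peak, hi):
        """Triangular membership: degree (0..1) to which x is in the set."""
        if x <= lo or x >= hi:
            return 0.0
        return (x - lo) / (peak - lo) if x <= peak else (hi - x) / (hi - peak)

    def fuzzy_fan_speed(temp_c: float) -> float:
        # Fuzzify: how "warm" and how "hot" is it? (set shapes are arbitrary)
        warm = tri(temp_c, 15, 25, 35)
        hot = tri(temp_c, 25, 40, 55)
        # Rule base: IF warm THEN fan = 40%; IF hot THEN fan = 90%.
        # Defuzzify with the weighted average of the fired rules.
        total = warm + hot
        return (warm * 40 + hot * 90) / total if total else 0.0

    print(fuzzy_fan_speed(30))  # partly warm, partly hot -> about 60% speed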


What an agent is not

It is not useful to prescribe what is, and what is not, an agent. However, contrasting the term with related concepts may help clarify its meaning:

Distinguishing agents from programs

Fanklin & Graesser (1997) discuss four key notions that distinguish agents from arbitrary programs: reaction to the environment, autonomy, goal-orientation and persistence.

Intuitively distinguishing agents from objects

  • Agents are more autonomous than objects.
  • Agents have flexible behaviour: reactive, proactive, social.
  • Agents have at least one thread of control but may have more.

Distinguishing agents from expert systems

  • Expert systems are not coupled to their environment.
  • Expert systems are not designed for reactive, proactive behavior.
  • Expert systems do not consider social ability.

Distinguishing intelligent software agents from intelligent agents in artificial intelligence

  • Intelligent agents (also known as rational agents) are not just software programs, they may also be machines, human beings, communities of human beings (such as firms) or anything that is capable of goal directed behavior.

History

The concept of an agent can be traced back to Hewitt's Actor Model (Hewitt, 1977) - "A self-contained, interactive and concurrently-executing object, possessing internal state and communication capability."

More formally, software agent systems are a direct evolution of multi-agent systems (MAS), which in turn evolved from Distributed Artificial Intelligence (DAI), Distributed Problem Solving (DPS) and Parallel AI (PAI), thus inheriting all characteristics (good and bad) from DAI and AI.

John Sculley's 1987 "Knowledge Navigator" video portrayed an idealized relationship between end-users and agents. Because the ideal came first, the field experienced a series of unsuccessful top-down implementations instead of a piece-by-piece, bottom-up approach. The range of agent types has broadened steadily since 1990: WWW agents, search engines, and so on.


Examples

Intelligent software agents

Haag (2006) suggests that there are only four essential types of intelligent software agents:

  1. Buyer agents or shopping bots
  2. User or personal agents
  3. Monitoring-and-surveillance agents
  4. Data Mining agents

Buyer agents (shopping bots)

Buyer agents travel around a network (e.g., the Internet) retrieving information about goods and services. These agents, also known as 'shopping bots', work very efficiently for commodity products such as CDs, books, electronic components, and other one-size-fits-all products. Amazon.com is a good example: the website offers a list of books you might like to buy on the basis of what you are buying now and what you have bought in the past.

Another example is used on eBay: at the bottom of the page is a list of similar products that other customers who ran the same search looked at, on the assumption that user tastes are relatively similar and those users will be interested in the same products. This technology is known as collaborative filtering.
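
A minimal sketch of the co-occurrence flavour of collaborative filtering described above: recommend the items most often viewed by other users who viewed the same item. The browsing data and item names are made up.

    from collections import Counter

    # user -> set of items they looked at (hypothetical browsing data)
    history = {
        "u1": {"camera", "tripod", "sd_card"},
        "u2": {"camera", "sd_card"},
        "u3": {"camera", "lens"},
        "u4": {"tripod", "lens"},
    }

    def also_viewed(item: str, top_n: int = 3):
        """Items most often co-viewed with `item` across all users."""
        counts = Counter()
        for items in history.values():
            if item in items:
                counts.update(items - {item})
        return [it for it, _ in counts.most_common(top_n)]

    print(also_viewed("camera"))  # e.g. ['sd_card', 'tripod', 'lens']

Real deployments weight these counts by user similarity and rating data, but the underlying "customers who viewed this also viewed" logic is the same.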

User agents (personal agents)

User agents, or personal agents, are intelligent agents that take action on your behalf. In this category belong those intelligent agents that already perform, or will shortly perform, the following tasks:

  • Check your e-mail, sort it according to your order of preference, and alert you when important emails arrive.
  • Play computer games as your opponent or patrol game areas for you.
  • Assemble customized news reports for you. There are several versions of these, including newshub and CNN.
  • Find information for you on the subject of your choice.
  • Fill out forms on the Web automatically for you, storing your information for future reference (e.g. newshub).
  • Scan Web pages, looking for and highlighting text that constitutes the "important" part of the information there.
  • "Discuss" topics with you, ranging from your deepest fears to sports.

Monitoring-and-surveillance (predictive) agents

Monitoring and surveillance agents are used to observe and report on equipment and services, usually computer systems. The agents may keep track of company inventory levels, observe competitors' prices and relay them back to the company, watch for stock manipulation by insider trading and rumors, and so on.

For example, NASA's Jet Propulsion Laboratory has an agent that monitors inventory, planning, and scheduling of equipment ordering to keep costs down, as well as food storage facilities. Such agents usually monitor complex computer networks and can keep track of the configuration of each computer connected to the network.

Data mining agents

Data mining agents use information technology to find trends and patterns in an abundance of information from many different sources. The user can sort through this information to find whatever they are seeking.

A data mining agent operates in a data warehouse, discovering information. A data warehouse brings together information from many different sources. Data mining is the process of looking through the data warehouse to find information that can be used to take action, such as ways to increase sales or to keep customers who are considering defecting.

Classification is one of the most common types of data mining: it finds patterns in information and categorizes items into different classes. Data mining agents can also detect major shifts in trends or in a key indicator, and can detect the presence of new information and alert you to it. For example, an agent may detect a decline in the construction industry of an economy; based on this information, construction companies can make intelligent decisions about hiring or firing employees and purchasing or leasing equipment to best suit their firm.
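
As a sketch of the "detect major shifts in trends" behaviour, here is a simple check an agent might run over a key indicator, comparing a recent window against the longer-run baseline; the series and the alert threshold are invented.

    # Toy shift detector for a key indicator (hypothetical monthly values).
    construction_index = [102, 101, 103, 100, 99, 101, 87, 84, 82, 80]

    def detect_shift(series, window=3, threshold=0.10):
        """Flag a drop of more than `threshold` vs the pre-window baseline."""
        recent = sum(series[-window:]) / window
        baseline = sum(series[:-window]) / (len(series) - window)
        change = (recent - baseline) / baseline
        return change, change <= -threshold

    change, alert = detect_shift(construction_index)
    if alert:
        print(f"alert: indicator down {abs(change):.0%} vs baseline")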

Other examples

Some other examples of current Intelligent agents include some spam filters, game bots, and server monitoring tools. Search engine indexing bots also qualify as intelligent agents.

  • User agent - for browsing the World Wide Web
  • Mail transfer agent - for serving e-mail, such as Microsoft Outlook: it communicates with the POP3 mail server without the user having to understand POP3 command protocols, and it even has rule sets that filter mail for the user, sparing them the trouble of doing it themselves.
  • SNMP agent
  • DAML (DARPA Agent Markup Language)
  • 3APL (Artificial Autonomous Agents Programming Language)
  • Web Ontology Language (OWL)
  • daemons in Unix-like systems.
  • In Unix-style networking servers, httpd is an HTTP daemon that implements the Hypertext Transfer Protocol at the root of the World Wide Web.
  • Management agents used to manage telecom devices
  • Crowd simulation for safety planning or 3D computer graphics.

Design issues

Interesting issues to consider in the development of agent-based systems include

  • how tasks are scheduled and how synchronization of tasks is achieved
  • how tasks are prioritized by agents
  • how agents can collaborate or recruit resources
  • how agents can be re-instantiated in different environments, and how their internal state can be stored
  • how the environment will be probed and how a change of environment leads to behavioral changes of the agents
  • how messaging and communication can be achieved
  • what hierarchies of agents are useful (e.g. task execution agents, scheduling agents, resource providers ...)

For software agents to work together efficiently, they must share the semantics of their data elements. This can be done by having the computer systems involved publish their metadata; a minimal sketch follows the list below.

The definition of agent processing can be approached from two interrelated directions:

  • internal state processing and ontologies for representing knowledge
  • interaction protocols - standards for specifying communication of tasks
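
One lightweight way to publish metadata so that agents share semantics, as promised above, is to exchange a machine-readable schema alongside messages and validate against it before acting; the dictionary-based schema and all field names below are invented for illustration.

    # Published metadata: a schema describing the data elements an agent emits.
    PRICE_SCHEMA = {
        "item_id": str,   # catalogue identifier
        "price": float,   # unit price
        "currency": str,  # currency code, e.g. "USD"
    }

    def validate(message: dict, schema: dict) -> bool:
        """Check a message against a published schema before acting on it."""
        return set(message) == set(schema) and all(
            isinstance(message[field], ftype) for field, ftype in schema.items()
        )

    msg = {"item_id": "B-1138", "price": 19.99, "currency": "USD"}
    assert validate(msg, PRICE_SCHEMA)  # both agents agree on the semantics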

Agent systems are used to model real world systems with concurrency or parallel processing.

  • Agent Machinery - Engines of various kinds, which support the varying degrees of intelligence
  • Agent Content - Data employed by the machinery in Reasoning and Learning
  • Agent Access - Methods to enable the machinery to perceive content and perform actions as outcomes of Reasoning
  • Agent Security - Concerns related to distributed computing, augmented by a few special concerns related to agents

The agent uses its access methods to go out into local and remote databases to forage for content. These access methods may include setting up news stream delivery to the agent, retrieval from bulletin boards, or using a spider to walk the Web. The content retrieved in this way is probably already partially filtered by the selection of the newsfeed or the databases that are searched. The agent next may use its detailed searching or language-processing machinery to extract keywords or signatures from the body of the content that has been received or retrieved.

This abstracted content (or event) is then passed to the agent's Reasoning or inferencing machinery in order to decide what to do with the new content. This process combines the event content with the rule-based or knowledge content provided by the user. If this process finds a good hit or match in the new content, the agent may use another piece of its machinery to do a more detailed search on the content.

Finally, the agent may decide to take an action based on the new content; for example, to notify the user that an important event has occurred. This action is verified by a security function and then given the authority of the user. The agent makes use of a user-access method to deliver that message to the user. If the user confirms that the event is important by acting quickly on the notification, the agent may also employ its learning machinery to increase its weighting for this kind of event.
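
The cycle just described (access, abstract, reason, act, learn) can be sketched in a few lines of Python; every method body below is an invented stand-in for the machinery the text names.

    class ForagingAgent:
        """Sketch of the access -> abstract -> reason -> act -> learn cycle."""

        def __init__(self, rules):
            self.rules = rules   # user-provided knowledge content (keywords)
            self.weights = {}    # learned importance per kind of event

        def access(self):
            # Stand-in for polling a news stream, bulletin board, or spider.
            return ["storage array degraded", "lunch menu updated"]

        def abstract(self, item):
            # Extract a keyword signature from the retrieved content.
            return frozenset(w for w in item.split() if w in self.rules)

        def reason(self, signature):
            # Combine event content with rules; arbitrary "good hit" threshold.
            return len(signature) + self.weights.get(signature, 0) >= 2

        def act(self, item, authorized=True):
            if authorized:  # security check before delivery to the user
                print("notify user:", item)

        def learn(self, signature, user_acted_quickly):
            if user_acted_quickly:  # reinforce this kind of event
                self.weights[signature] = self.weights.get(signature, 0) + 1

    agent = ForagingAgent(rules={"storage", "degraded"})
    for item in agent.access():
        signature = agent.abstract(item)
        if agent.reason(signature):
            agent.act(item)
            agent.learn(signature, user_acted_quickly=True)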


Self-organization

Self-organization is a process of attraction and repulsion in which the internal organization of a system, normally an open system, increases in complexity without being guided or managed by an outside source. Self-organizing systems typically (but not always) display emergent properties.


Overview

The most robust and unambiguous examples of self-organizing systems are from physics. Self-organization is also relevant in chemistry, where it has often been taken as being synonymous with self-assembly. The concept of self-organization is central to the description of biological systems, from the subcellular to the ecosystem level. There are also cited examples of "self-organizing" behaviour found in the literature of many other disciplines, both in the natural sciences and the social sciences such as economics or anthropology. Self-organization has also been observed in mathematical systems such as cellular automata.

Sometimes the notion of self-organization is conflated with that of the related concept of emergence. Properly defined, however, there may be instances of self-organization without emergence and emergence without self-organization, and it is clear from the literature that the phenomena are not the same. The link between emergence and self-organization remains an active research question.

Self-organization usually relies on four basic ingredients:

  1. Positive feedback
  2. Negative feedback
  3. Balance of exploitation and exploration
  4. Multiple interactions

History of the idea

The idea that the dynamics of a system can tend by themselves to increase the inherent order of a system has a long history. One of the earliest statements of this idea was by the philosopher Descartes, in the fifth part of his Discourse on Method, where he presents it hypothetically. Descartes further elaborated on the idea at great length in his unpublished work The World.

The ancient atomists (among others) believed that a designing intelligence was unnecessary, arguing that given enough time and space and matter, organization was ultimately inevitable, although there would be no preferred tendency for this to happen. What Descartes introduced was the idea that the ordinary laws of nature tend to produce organization (For related history, see Aram Vartanian, Diderot and Descartes).

Beginning with the 18th-century naturalists, a movement arose that sought to understand the "universal laws of form" in order to explain the observed forms of living organisms. Because of its association with Lamarckism, their ideas fell into disrepute until the early 20th century, when pioneers such as D'Arcy Wentworth Thompson revived them. The modern understanding is that there are indeed universal laws (arising from fundamental physics and chemistry) that govern growth and form in biological systems.

Originally, the term "self-organizing" was used by Immanuel Kant in his Critique of Judgment, where he argued that teleology is a meaningful concept only if there exists such an entity whose parts or "organs" are simultaneously ends and means. Such a system of organs must be able to behave as if it has a mind of its own, that is, it is capable of governing itself.

In such a natural product as this every part is thought as owing its presence to the agency of all the remaining parts, and also as existing for the sake of the others and of the whole, that is as an instrument, or organ... The part must be an organ producing the other parts—each, consequently, reciprocally producing the others... Only under these conditions and upon these terms can such a product be an organized and self-organized being, and, as such, be called a physical end.

The term "self-organizing" was introduced to contemporary science in 1947 by the psychiatrist and engineer W. Ross Ashby. It was taken up by the cyberneticians Heinz von Foerster, Gordon Pask, Stafford Beer and Norbert Wiener himself in the second edition of his "Cybernetics: or Control and Communication in the Animal and the Machine" (MIT Press 1961).

Self-organization as a word and concept was used by those associated with general systems theory in the 1960s, but did not become commonplace in the scientific literature until its adoption by physicists and researchers in the field of complex systems in the 1970s and 1980s. After Ilya Prigogine's 1977 Nobel Prize, the thermodynamic concept of self-organization received some public attention, and scientific researchers began to migrate from the cybernetic view to the thermodynamic view.


Examples

The following list summarizes and classifies the instances of self-organization found in different disciplines. As the list grows, it becomes increasingly difficult to determine whether these phenomena are all fundamentally the same process, or the same label applied to several different processes. Self-organization, despite its intuitive simplicity as a concept, has proven notoriously difficult to define and pin down formally or mathematically, and it is entirely possible that any precise definition might not include all the phenomena to which the label has been applied.

The farther a phenomenon is removed from physics, the more controversial the idea of self-organization as understood by physicists becomes. Also, even when self-organization is clearly present, attempts at explaining it through physics or statistics are usually criticized as reductionistic.

Similarly, when ideas about self-organization originate in, say, biology or social science, the farther one takes the concept into chemistry, physics or mathematics, the more resistance is encountered, usually on the grounds that it implies direction in fundamental physical processes. Note, however, the tendency of hot bodies to get cold (see thermodynamics) and, by Le Chatelier's principle (the statistical-mechanics extension of Newton's third law), to oppose this tendency.

Self-organization in physics

There are several broad classes of physical processes that can be described as self-organization. Such examples from physics include:

  • structural (order-disorder, first-order) phase transitions, and spontaneous symmetry breaking such as
    • spontaneous magnetization, crystallization (see crystal growth, and liquid crystal) in the classical domain and
    • the laser, superconductivity and Bose-Einstein condensation, in the quantum domain (but with macroscopic manifestations)
  • second-order phase transitions, associated with "critical points" at which the system exhibits scale-invariant structures. Examples of these include:
    • critical opalescence of fluids at the critical point
    • percolation in random media
  • structure formation in thermodynamic systems away from equilibrium. The theory of dissipative structures of Prigogine and Hermann Haken's Synergetics were developed to unify the understanding of these phenomena, which include lasers, turbulence and convective instabilities (e.g., Bénard cells) in fluid dynamics,
    • structure formation in astrophysics and cosmology (including star formation, galaxy formation)
    • self-similar expansion
    • Diffusion-limited aggregation
    • percolation
    • reaction-diffusion systems, such as Belousov-Zhabotinsky reaction
  • self-organizing dynamical systems: complex systems made up of small, simple units connected to each other usually exhibit self-organization
    • Self-organized criticality (SOC)
  • spin-foam systems and loop quantum gravity, as proposed by Lee Smolin. The main idea is that the evolution of space in time should be robust in general; any fine-tuning of cosmological parameters weakens the independence of the fundamental theory. Philosophically, it can be assumed that in the earliest times there was no agent to tune the cosmological parameters. Smolin and his colleagues, in a series of works, show that, based on the loop quantization of spacetime, a simple evolutionary model of the very early universe (similar to the sand pile model) behaves as a power-law distribution on both the size and the area of avalanches.
    • Although this model is restricted to frozen spin networks, it exhibits a non-stationary expansion of the universe. It is, however, the first serious attempt toward the ambitious goal of determining cosmic expansion and inflation from a self-organized criticality theory in which the parameters are not tuned, but instead are determined from within the complex system.

Self-organization vs. entropy

Statistical mechanics informs us that large scale phenomena can be viewed as a large system of small interacting particles, whose processes are assumed consistent with well established mechanical laws such as entropy, i.e., equilibrium thermodynamics. However, “… following the macroscopic point of view the same physical media can be thought of as continua whose properties of evolution are given by phenomenological laws between directly measurable quantities on our scale, such as, for example, the pressure, the temperature, or the concentrations of the different components of the media. The macroscopic perspective is of interest because of its greater simplicity of formalism and because it is often the only view practicable.” Against this background, Glansdorff and Ilya Prigogine introduced a deeper view at the microscopic level, where “… the principles of thermodynamics explicitly make apparent the concept of irreversibility and along with it the concept of dissipation and temporal orientation which were ignored by classical (or quantum) dynamics, where the time appears as a simple parameter and the trajectories are entirely reversible.”

As a result, processes in thermodynamically open systems, such as biological processes that constantly receive, transform and dissipate chemical energy (and even the Earth itself, which constantly receives and dissipates solar energy), can and do exhibit properties of self-organization far from thermodynamic equilibrium.

A laser (acronym for "light amplification by stimulated emission of radiation") can also be characterized as a self-organized system, to the extent that normal states of thermal equilibrium characterized by electromagnetic energy absorption are stimulated out of equilibrium in a reverse of the absorption process. "If the matter can be forced out of thermal equilibrium to a sufficient degree, so that the upper state has a higher population than the lower state (population inversion), then more stimulated emission than absorption occurs, leading to coherent growth (amplification or gain) of the electromagnetic wave at the transition frequency."

Self-organization in chemistry

Self-organization in chemistry includes:

  1. molecular self-assembly
  2. reaction-diffusion systems and oscillating chemical reactions
  3. autocatalytic networks
  4. liquid crystals
  5. colloidal crystals
  6. self-assembled monolayers
  7. micelles
  8. microphase separation of block copolymers
  9. Langmuir-Blodgett films

Self-organization in biology

According to Scott Camazine et al.:

In biological systems self-organization is a process in which pattern at the global level of a system emerges solely from numerous interactions among the lower-level components of the system. Moreover, the rules specifying interactions among the system's components are executed using only local information, without reference to the global pattern.

The following is an incomplete list of the diverse phenomena which have been described as self-organizing in biology.

  1. spontaneous folding of proteins and other biomacromolecules
  2. formation of lipid bilayer membranes
  3. homeostasis (the self-maintaining nature of systems from the cell to the whole organism)
  4. pattern formation and morphogenesis, or how the living organism develops and grows. See also embryology.
  5. the coordination of human movement, e.g. seminal studies of bimanual coordination by Kelso
  6. the creation of structures by social animals, such as social insects (bees, ants, termites), and many mammals
  7. flocking behaviour (such as the formation of flocks by birds, schools of fish, etc.)
  8. the origin of life itself from self-organizing chemical systems, in the theories of hypercycles and autocatalytic networks
  9. the organization of Earth's biosphere in a way that is broadly conducive to life (according to the controversial Gaia hypothesis)

Self-organization in mathematics and computer science

As mentioned above, phenomena from mathematics and computer science such as cellular automata, random graphs, and some instances of evolutionary computation and artificial life exhibit features of self-organization. In swarm robotics, self-organization is used to produce emergent behavior. In particular the theory of random graphs has been used as a justification for self-organization as a general principle of complex systems. In the field of multi-agent systems, understanding how to engineer systems that are capable of presenting self-organized behavior is a very active research area.
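
As a concrete instance of the cellular-automaton examples mentioned above, here is a minimal elementary cellular automaton (Rule 110, an arbitrary but well-studied choice): complex global patterns emerge purely from a local three-cell update rule, with no global coordination anywhere in the code.

    RULE = 110  # the local update rule; nothing global is specified

    def step(cells):
        """Update every cell from its own and its two neighbours' states."""
        n = len(cells)
        return [
            (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    row = [0] * 40 + [1] + [0] * 40  # a single seed cell
    for _ in range(20):
        print("".join(".#"[c] for c in row))
        row = step(row)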

Self-organization in cybernetics

Wiener regarded the automatic serial identification of a black box and its subsequent reproduction as sufficient to meet the condition of self-organization. The importance of phase locking, or the "attraction of frequencies" as he called it, is discussed in the second edition of his Cybernetics. Drexler sees self-replication as a key step in nano- and universal assembly.

By contrast, the four concurrently connected galvanometers of W. Ross Ashby's homeostat hunt, when perturbed, until they converge on one of many possible stable states. Ashby used his state-counting measure of variety to describe stable states, and produced the "Good Regulator" theorem, which requires internal models for self-organized endurance and stability.

Warren McCulloch proposed "Redundancy of Potential Command" as characteristic of the organization of the brain and human nervous system and the necessary condition for self-organization.

Heinz von Foerster proposed redundancy, R = 1 - H/Hmax, where H is entropy and Hmax is the maximum possible entropy. In essence this states that unused potential communication bandwidth is a measure of self-organization.
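
A small numeric illustration of von Foerster's measure: take H as the Shannon entropy of a system's observed state distribution and Hmax as the entropy of the uniform distribution over the same states, so that a fully disordered system gives R = 0 and a fully constrained one approaches R = 1. The example distributions are invented.

    import math

    def redundancy(probs):
        """Von Foerster's R = 1 - H/Hmax for a discrete state distribution."""
        h = -sum(p * math.log2(p) for p in probs if p > 0)  # Shannon entropy H
        h_max = math.log2(len(probs))                       # uniform maximum
        return 1 - h / h_max

    print(redundancy([0.25, 0.25, 0.25, 0.25]))  # 0.0: no organization
    print(redundancy([0.97, 0.01, 0.01, 0.01]))  # ~0.88: highly organized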

In the 1970s Stafford Beer considered this condition necessary for autonomy, which identifies self-organization in persisting and living systems. Using variety analyses, he applied his neurophysiologically derived recursive Viable System Model to management. It consists of five parts: the monitoring of performance of the survival processes (1), their management by recursive application of regulation (2), homeostatic operational control (3) and development (4), which together produce maintenance of identity (5) under environmental perturbation. Focus is prioritized by an "algedonic loop" feedback: a sensitivity to both pain and pleasure.

In the 1990s Gordon Pask pointed out that von Foerster's H and Hmax are not independent and interact via countably infinite recursive concurrent spin processes (he favoured the Bohm interpretation), which he called concepts (liberally defined in any medium, "productive and, incidentally, reproductive"). His strict definition of concept, "a procedure to bring about a relation", permitted his theorem "Like concepts repel, unlike concepts attract" to state a general spin-based principle of self-organization. His edict, an exclusion principle, "There are No Doppelgangers", means no two concepts can be the same (all interactions occur from different perspectives, making time incommensurable for actors). This means that, after sufficient duration as differences assert themselves, all concepts will attract and coalesce as pink noise and entropy increases (see also Big Crunch, self-organized criticality). The theory is applicable to all organizationally closed or homeostatic processes that produce endurance and coherence (also in the sense of Rescher's coherence theory of truth, with the proviso that the sets and their members exert repulsive forces at their boundaries) through interactions: evolving, learning and adapting.

Pask's "hard carapace" model of interactions of actors is reflected in some of the ideas of emergence and coherence. It requires a knot-emergence topology that produces radiation during interaction with a unit cell that has a prismatic tensegrity structure. Laughlin's contribution to emergence reflects some of these constraints.

Self-organization in human society

The self-organizing behaviour of social animals and the self-organization of simple mathematical structures both suggest that self-organization should be expected in human society. Tell-tale signs of self-organization are usually statistical properties shared with self-organizing physical systems (see Zipf's law, power law, Pareto principle). Examples such as Critical Mass, herd behaviour, groupthink and others, abound in sociology, economics, behavioral finance and anthropology.

In social theory the concept of self-referentiality has been introduced as a sociological application of self-organization theory by Niklas Luhmann (1984). For Luhmann the elements of a social system are self-producing communications, i.e. a communication produces further communications and hence a social system can reproduce itself as long as there is dynamic communication. For Luhmann human beings are sensors in the environment of the system. Luhmann put forward a functional theory of society.

Self-organization in human and computer networks can give rise to a decentralized, distributed, self-healing system, protecting the security of the actors in the network by limiting the scope of knowledge of the entire system held by each individual actor. The Underground Railroad is a good example of this sort of network. The networks that arise from drug trafficking exhibit similar self-organizing properties. Parallel examples exist in the world of privacy-preserving computer networks such as Tor. In each case, the network as a whole exhibits distinctive synergistic behavior through the combination of the behaviors of individual actors in the network. Usually the growth of such networks is fueled by an ideology or sociological force that is adhered to or shared by all participants in the network.

In economics

In economics, a market economy is sometimes said to be self-organizing. Friedrich Hayek coined the term catallaxy to describe a "self-organizing system of voluntary co-operation" in regard to capitalism. Most modern economists hold that imposing central planning usually makes the self-organized economic system less efficient. By contrast, some socialist economists consider that market failures are so significant that self-organization produces bad results and that the state should direct production and pricing. Many economists adopt an intermediate position and recommend a mixture of market economy and command economy characteristics (sometimes called a mixed economy). When applied to economics, the concept of self-organization can quickly become ideologically imbued (as explained in chapter 5 of A. Marshall, The Unity of Nature, Imperial College Press, 2002).

In collective intelligence

Non-thermodynamic concepts of entropy and self-organization have been explored by many theorists, including Cliff Joslyn and colleagues with their so-called "global brain" projects. Marvin Minsky's "Society of Mind" and the no-central-editor policy of the open-source internet encyclopedia Wikipedia are examples of applications of these principles - see collective intelligence.

Donella Meadows, who codified twelve leverage points that a self-organizing system could exploit to organize itself, was one of a school of theorists who saw human creativity as part of a general process of adapting human lifeways to the planet and taking humans out of conflict with natural processes. See Gaia philosophy, deep ecology, ecology movement and Green movement for similar self-organizing ideals. (The connections between self-organisation and Gaia theory and the environmental movement are explored in A. Marshall, 2002, The Unity of Nature, Imperial College Press: London).



Tuesday, February 24, 2009

System of systems engineering

System-of-Systems Engineering (SoSE) is a set of developing processes, tools, and methods for designing, re-designing and deploying solutions to System-of-Systems challenges.


Overview

System of Systems Engineering (SoSE) methodology is heavily used in Department of Defense applications, but is increasingly being applied to non-defense problems such as architectural design in air and auto transportation, healthcare, global communication networks, search and rescue, space exploration, and many other System-of-Systems application domains. SoSE is more than the systems engineering of monolithic, complex systems: design for System-of-Systems problems is performed under some level of uncertainty in the requirements and the constituent systems, and it involves considerations at multiple levels and in multiple domains. Whereas systems engineering focuses on building the system right, SoSE focuses on choosing the right system(s) and their interactions to satisfy the requirements.

System-of-Systems Engineering and Systems Engineering are related but different fields of study. Whereas systems engineering addresses the development and operations of monolithic products, SoSE addresses the development and operations of evolving programs. In other words, traditional systems engineering seeks to optimize an individual system (i.e., the product), while SoSE seeks to optimize a network of various interacting legacy and new systems brought together to satisfy the multiple objectives of the program. SoSE should enable decision-makers to understand the implications of various choices on technical performance, costs, extensibility and flexibility over time; thus, an effective SoSE methodology should prepare decision-makers for the informed architecting of System-of-Systems problems.

Because of the varied methodologies and application domains in the existing literature, there is no single unified consensus on the processes involved in System-of-Systems Engineering. One proposed SoSE framework, by Dr. Daniel A. DeLaurentis, recommends a three-phase method in which an SoS problem is defined (understood), abstracted, modeled, and analyzed for behavioral patterns. More information on this method and other proposed methods can be found in the SoSE-focused organizations and SoSE literature.



Enterprise systems engineering

Enterprise Systems Engineering (ESE) is a discipline of engineering that focuses on integration of many engineering sub-systems and principles into a complete system.

It accomplishes all of the tasks of "traditional" systems engineering, further informed by an expansive view of the context (political, operational, economic, technological, interacting systems, etc.) in which the system(s) under consideration are being developed, acquired, modified, maintained, or disposed of.

Enterprise Systems Engineering may be required when the complexity being faced (due to scale, uncontrollable interdependencies, and other uncertainties) breaks down the assumptions upon which textbook systems engineering is based, such as requirements being relatively stable and well-understood, a system configuration that can be controlled, and a small, easily discernible set of stakeholders.



Management cybernetics

Sketch for a cybernetic factory, 1959

Management cybernetics is the field of cybernetics concerned with management and organizations. The application of cybernetics to management was first introduced by Stafford Beer in the late 1950s.




Overview

Cybernetics was defined by the mathematician Norbert Wiener in 1947 as the science of communication and control in the animal and the machine. That is to say, cybernetics studies the flow of information around a system and the way in which that information is used by the system as a means of controlling itself; it does this for animate and inanimate systems alike. Cybernetics is an interdisciplinary science, owing as much to biology as to physics, and as much to the study of the brain as to the study of computers; it also owes a great deal to the formal languages of science for providing tools with which the behaviour of all systems can be objectively described.

Management cybernetics is the concrete application of natural cybernetic laws to all types of organizations and institutions created by human beings, and to the interactions within and between them. It is a theory based on natural laws, and it addresses issues that anyone who wants to influence an organization in any way must learn to resolve. The theory is not restricted to the actions of top managers: every member of an organization, and every person who to a greater or lesser extent communicates or interacts with it, is involved.

Management cybernetics was founded and first developed by Stafford Beer in the 1960s. His management theory is not limited to industrial and commercial enterprises; it also relates to the management of all types of organizations and institutions in the profit and non-profit sectors:

  • from individual enterprises to huge multinationals
  • in the private and public sectors
  • in associations and political bodies
  • and lastly in professional and private life

Institutions in the sense of general legal and contractual regulations are also covered.


History

The earliest systems models used in management studied organizations as mechanical systems in equilibrium. The idea of studying social systems in this way was originally derived from Pareto in 1919 and was promoted in the United States by Henderson at Harvard in the 1930s. Henderson saw organizations as made up of parts in mutual interaction. From the 1930s onward three different models of management competed for precedence in organization theory: the traditional approach, human relations theory and systems theory.

In 1948 Wiener published the book Cybernetics, bringing together ideas about control processes. Ashby, in his Introduction to Cybernetics, noted that cybernetics should reveal parallels between machine, brain and society. But it was Beer, with his Cybernetics and Management of 1959, who got managers and management scientists interested. According to Beer, by then several attempts had been made to give a systematic exposition of the science of cybernetics, and these had drawn attention to its relevance to various orthodox fields. Some biologists had been quick to realise the value of cybernetics to them; some engineers, too, were well aware of the importance of the subject to engineering, and to automation in particular. The social sciences were conscious of their need for a formal framework of a cybernetic kind; the essays in this area by the distinguished anthropologist Margaret Mead and by Simon were notable. Economists, too, had seized on a similar point. But Beer's Cybernetics and Management was the first exposition directed at the relevance of cybernetics to industrial management.

Beer was the first to apply cybernetics to management, defining management as the "science of effective organization". Throughout the 1960s Beer was a prolific writer and an influential practitioner. It was during that period that he developed the viable system model, used to diagnose the faults in any existing organizational system. In the same period Forrester invented system dynamics, which held out the promise that the behavior of whole systems could be represented and understood through modeling the dynamic feedback processes going on within them.

Organizations-as-systems thinking gradually developed into the dominant approach of the 1960s and 1970s. Systems people, whether theorists or practitioners, operated from within the same paradigm: summarizing greatly, the assumption was that systems of all types could be identified by empirical observation of reality and could be analyzed by essentially the same methods that had brought success in the natural sciences. Systems could then, if the interest was in practice, be manipulated so as better to achieve whatever purpose they were designed to serve. Systems thinking until the 1970s was therefore dominated by the positivism and functionalism characteristic of the traditional version of the scientific method. We can call this the traditional systems approach. It embraces strands of work such as "organizations as systems", general systems theory, contingency theory, operations research, systems analysis, systems engineering, and management cybernetics.

During the 1970s and 1980s traditional systems thinking came under increasing criticism. As a result, alternative systems approaches were born and flourished, for example "soft systems thinking", "organizational cybernetics" and "critical systems thinking".


Topics in management cybernetics

Control

The concept of control is of fundamental importance to organizations. It has been identified as a significant influence on:

  • the formation of organizational strategy,
  • the design of organizational structure,
  • the selection, socialisation and evaluation of personnel and
  • the ongoing process of leadership and motivation.

The concept of control itself is a subject of scientific reflection in management cybernetics.

Over the years the definition of control has broadened, and according to some its meaning has been diluted. Originally, discussion of control in business organizations referred to the monitoring of employee behavior. Over time a broader definition developed, making control nearly synonymous with the concepts of power and influence.

Decision making

Systems theory and related areas such as computer science, information theory, and management cybernetics have long been devoted to the study of decision-making. A common assumption of these areas is that all organisms are information systems.

The characteristics of a decision situation are:

  • A problem exists.
  • At least two alternatives for action remain.
  • Knowledge exists of the objective and its relationship to the problem.
  • The consequences of the decision can be established and sometimes quantified.

Decision making by management staff can also be practised in computerized business simulators that are made to resemble the ordinary decision environment as closely as possible. Beer's "decision room" or Phrontesterion is an example of such an environment.
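As an illustration only (the class and field names below are our own invention, not drawn from the cybernetics literature), the four characteristics of a decision situation listed above can be encoded as explicit checks on a small data structure:

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class DecisionSituation:
        problem: str                                      # a problem exists
        alternatives: List[str]                           # at least two alternatives remain
        objective: str                                    # knowledge of the objective exists
        consequence_of: Callable[[str], Optional[float]]  # consequences, sometimes quantifiable

        def is_decision(self) -> bool:
            """All characteristics must hold for a genuine decision situation."""
            return bool(self.problem) and len(self.alternatives) >= 2 and bool(self.objective)

        def best_alternative(self) -> str:
            """Prefer the alternative with the best quantified consequence;
            fall back to the first alternative when none can be quantified."""
            scored = [(a, self.consequence_of(a)) for a in self.alternatives]
            quantified = [(a, s) for a, s in scored if s is not None]
            return max(quantified, key=lambda p: p[1])[0] if quantified else self.alternatives[0]

    situation = DecisionSituation(
        problem="excess inventory",
        alternatives=["discount sale", "hold stock"],
        objective="minimise carrying cost",
        consequence_of=lambda a: {"discount sale": -2.0, "hold stock": -5.0}.get(a),
    )
    print(situation.is_decision(), situation.best_alternative())  # True discount sale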

Modelling

Scientific models are neither descriptors of, nor pointers toward, some neutral, objective reality; they are consensual conventions that enable particular understandings and the coordination of activity in a community of observers. Impeccable communication of a model entails making this active, consensual function visible, rather than simply pursuing a more detailed investigation of the phenomenon considered as the source or origin of the model.

The starting point for the management cybernetic model of the organization is the input – transformation – output schema. This is used to describe the basic operational activities of the enterprise. The goal or purpose of the enterprise is, in management cybernetics, invariably determined outside the system (as with a first-order feedback arrangement). If the operations are to succeed in bringing about the goal, they must, because of inevitable disturbance, be regulated in some way. This regulation is effected by management. Management cybernetics attempts to equip managers with a number of tools that should enable them to regulate operations. Chief among these are the black box technique and the use of feedback to induce self-regulation in organizations. The latter is often supplemented by strategic control, based on feed-forward information, and by external control. According to Jackson (2000), management cybernetics makes little use of the more complex, observer-dependent notion of variety, whereas organizational cybernetics makes much more of it. Stafford Beer (1985) confirms variety as fundamental to matching resources to requirements and to the measurement of performance.
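A minimal sketch of this schema, assuming nothing beyond the paragraph above: operations are treated as a black box whose internals management never inspects, the goal is given from outside the system, and feedback of the observed error adjusts a control setting. All names and numbers are illustrative.

    import random

    def operations(inputs: float, setting: float) -> float:
        """The input-transformation-output process, viewed as a black box.
        Output drifts with an uncontrolled environmental disturbance."""
        disturbance = random.uniform(-5.0, 5.0)
        return inputs * setting + disturbance

    def manage(goal: float, steps: int = 50) -> float:
        """First-order feedback regulation: compare output with the externally
        given goal and adjust the control setting in proportion to the error."""
        setting = 1.0
        gain = 0.05            # a small gain keeps the loop stable
        output = 0.0
        for _ in range(steps):
            output = operations(inputs=10.0, setting=setting)
            error = goal - output          # the feedback signal
            setting += gain * error        # corrective managerial action
        return output

    print(f"regulated output: {manage(goal=100.0):.1f}")   # settles near 100

Strategic, feed-forward control would instead adjust the setting from a forecast of the disturbance, before the output has had a chance to deviate.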

Systems

Beer defined a system as anything that consists of parts connected together (1959, p.9).

Viable System Model

Principal functions of the VSM

The Viable System Model is an abstract model of the organisational structure of any viable or autonomous system. A viable system is any system organised in such a way as to maintain its identity in a changing environment; one of the prime features of systems that survive is that they are adaptive. The VSM is a model for a viable system: an abstracted cybernetic description that is applicable to any organisation.
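The recursive character of the model can be suggested in a few lines of code. This is a rough, illustrative sketch (the class and field names are ours, not Beer's notation): each viable system contains five subsystems, and each System 1 operational unit is itself a viable system.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ViableSystem:
        name: str
        system1: List["ViableSystem"] = field(default_factory=list)  # operations: primary activities
        system2: str = "coordination"   # damps oscillation between System 1 units
        system3: str = "control"        # internal regulation of the here-and-now
        system4: str = "intelligence"   # looks outward and forward at the environment
        system5: str = "policy"         # identity; balances System 3 against System 4

        def recursion_depth(self) -> int:
            """Every System 1 unit is itself a viable system, so the model nests."""
            return 1 + max((u.recursion_depth() for u in self.system1), default=0)

    firm = ViableSystem("firm", system1=[
        ViableSystem("division A", system1=[ViableSystem("plant A1")]),
        ViableSystem("division B"),
    ])
    print(firm.recursion_depth())  # 3 levels of recursion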


Closely related fields

Entrepreneurial cybernetics

Similar to management cybernetics, entrepreneurial cybernetics is primarily concerned with applying knowledge gained from general cybernetic theory to everyday business contexts. Rules and methods for establishing and improving regulation, control and communication are focused on helping and supporting small and medium-sized enterprises. Such businesses act as a major driving force in many of today's economies, and it is therefore important that entrepreneurial cybernetics offers them new ways of thinking about and approaching business so that they can survive in increasingly complex and competitive markets.

Organizational Cybernetics

Organizational cybernetics is distinguished from management cybernetics. Both use many of the same terms, but interpret them according to different philosophies of systems thinking. Organizational cybernetics offers a significant break with the assumptions of the hard approach; its full flowering is represented by Beer's Viable System Model.

Organizational Cybernetics (OC) studies organizational design, and the regulation and self-regulation of organizations from a systems theory perspective that also takes the social dimension into consideration. Researchers in economics, public administration and political science focus on the changes in institutions, organisation and mechanisms of social steering at various levels (sub-national, national, European, international) and in different sectors (including the private, semi-private and public sectors; the latter sector is emphasised).

Organizational Cybernetics has contributed to the analysis of what is arguably one of the most remarkable developments in modern societies in the past few decades: the transformation of traditional governing mechanisms (‘government’) and the advancement of new forms of ‘governance’. This development is most obvious in the private, the semi-private and the public sectors and involves the local, regional, national, transnational, and global levels within these sectors.

Sociocybernetics

Sociocybernetics is the science and art of steering societies: an application of general systems theory (GST) and of first- and second-order cybernetics to the social sciences. Sociocybernetics is to a large extent based on second-order cybernetics, which was developed precisely because first-order cybernetics had only limited applicability to the social sciences, where, in contrast with the natural sciences, the researcher forms part of the subject under investigation.


Organizations

Few organisations specialise particularly in management cybernetics. A selection of organizations related to the field:

  • Centre for Systems Studies, Hull University Business School, Hull, United Kingdom: Founded in 1992, the Centre for Systems Studies is an internationally renowned research group, unique in its interdisciplinary research agenda spanning critical systems thinking, information systems, logistics and supply chain management. Mike Jackson, a specialist in systems thinking, management cybernetics and critical systems practice, is one of its core members.
  • Cwarel Isaf Institute, Ceredigion, Wales, United Kingdom.
This institute was founded in 2001 by Stafford Beer and Fredmund Malik to make the life's work of Stafford Beer available to society. Its purpose is the systematic development and application of management cybernetics, the teachings of Stafford Beer. Its central questions are:
What is the nature of complex systems? What are the effects of complexity? What happens if we handle complex systems wrongly, and what if we handle them correctly? When do we call a system complex anyway? What is management cybernetics and what is its "secret"? Even your little toe is a highly complex system; your company or your institution certainly is one. Whether it be your toe or your company, both have a common pattern of working if they are viable, and you can use this pattern in every life situation if you recognise and understand it.
  • Cybernetics College of Technology, Southern Cross University Australia.
  • Institut für Unternehmenskybernetik, Aachen, Germany.
This German research institution, dedicated to advancing the study of entrepreneurial cybernetics and management cybernetics, was founded in 1988. Its aim is to study and optimise business processes in organisations.
  • Institute for Management Research, Radboud University Nijmegen, the Netherlands, which has a research program in organisational cybernetics.
At Radboud University this program studies organizational design, and the regulation and self-regulation of organizations, from the systems theory perspective described under Organizational Cybernetics above; researchers in economics, public administration and political science have recently begun focusing on changes in institutions, organisation and mechanisms of social steering at various levels and in different sectors.

Applications

Some examples:

  • Academic organization: Management cybernetics supplies ideas that may help college and university administrators develop a more coherent and integrated view of the institutions they inhabit. It can help them think in more complex ways about their work and improve their performance. Ideas in this field have been developed by many scholars across a number of disciplines over a period of more than 50 years.
  • Project Cybersyn (Chile, 1970-1973) was the first application of formal cybernetic methods to the government of a country. Stafford Beer applied in Chile the Viable System Model for the management of complex enterprises, developed from his foundational work in management cybernetics. Real-time performance monitoring in actuality, capability and potentiality, variety analysis, algedonic alerting and participatory development modelling were all new then (a sketch of these indices follows this list). The approach came out of Wiener's work on purposeful error correction and the interdisciplinary focus it produced on self-organisation and autonomy. These techniques are now becoming mainstream: cheap, high-performance multimedia computing supporting email, workflow and data mining on the web can realise their potential. Yet company and government accounts, for example, are still produced seasonally, reflecting agricultural practice rather than the real-time needs of a developing "postcode lottery" society unable or unwilling to regulate waste and allocate resources fairly.
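A hedged sketch of the index monitoring mentioned in the Cybersyn item above. It assumes the usual description of Beer's three levels of achievement (actuality, capability and potentiality) and the derived indices productivity = actuality/capability, latency = capability/potentiality and performance = productivity × latency; the fixed alert threshold is our simplification of Cybersyn's adaptive statistical filtering.

    def indices(actuality: float, capability: float, potentiality: float):
        """Beer's triple of achievement levels yields three dimensionless indices."""
        productivity = actuality / capability
        latency = capability / potentiality
        performance = productivity * latency     # equals actuality / potentiality
        return productivity, latency, performance

    def algedonic_alert(performance: float, threshold: float = 0.5) -> bool:
        """Send a 'pain' signal up one level of recursion when performance sags.
        (Cybersyn filtered the time series statistically; a bare threshold is
        an illustrative simplification.)"""
        return performance < threshold

    prod, lat, perf = indices(actuality=60.0, capability=80.0, potentiality=120.0)
    print(f"productivity={prod:.2f} latency={lat:.2f} performance={perf:.2f}")
    print("algedonic alert!" if algedonic_alert(perf) else "within bounds")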

Examples of other applications:


  • Business process redesign
  • City planning
  • Coastal Management
  • Community Operational Research
  • Control of organizations
  • Enterprise process architecture
  • Environmental management
  • Information management
  • Logistic systems
  • Management education and research
  • Organisational Fitness
  • Railway enterprise
  • Software Acquisition
  • Supply chains
  • Total Quality Management

Criticism

Jackson (2000) argued that management cybernetics represents little advance on hard systems thinking and is subject to the same criticisms; there is little to choose between the two. Conventional management scientists are able to take cognizance of its insights and to employ concepts such as feedback in their traditional analyses. Management cybernetics therefore offers no new direction in systems thinking. Whether based on a machine analogy or on a biological analogy, it can be criticized for exactly the same reasons as hard systems thinking: an inability to deal with subjectivity and with the extreme complexity of organizational systems, and an inherent conservatism.


