What is cybernetics?


Cybernetics is the interdisciplinary study of the structure of regulatory systems. Cybernetics is closely related to control theory and systems theory. Both in its origins and in its evolution in the second half of the 20th century, cybernetics is equally applicable to physical and social (that is, language-based) systems.

Contemporary cybernetics began as an interdisciplinary study connecting the fields of control systems, electrical network theory, mechanical engineering, logic modeling, evolutionary biology, neuroscience, anthropology, and psychology in the 1940s, often attributed to the Macy Conferences.

Monday, March 9, 2009

AI-complete

In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.

The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems (Mallery 1988). Early uses of the term appear in Erik Mueller's 1987 Ph.D. dissertation and in Eric Raymond's 1991 jargon file.

To call a problem AI-complete reflects an attitude that it won't be solved by a simple algorithm, such as those used in ELIZA. Such problems are hypothesised to include:

  • Computer vision (and subproblems such as object recognition)
  • Natural language understanding (and subproblems such as text mining, machine translation, and word sense disambiguation)
  • Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.
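To see why a simple algorithm falls short, consider a minimal ELIZA-style responder: it matches surface patterns and echoes fragments back, with no model of meaning. The rules and responses below are illustrative, not ELIZA's actual script.

```python
import re

# Ordered (pattern, response-template) rules, in the spirit of ELIZA.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the captured fragment back without interpreting it.
            return template.format(match.group(1))
    # No rule matched: fall back to a content-free prompt, which is
    # where the absence of any real understanding shows through.
    return "Please go on."

print(respond("I am worried about the exam"))
# Why do you say you are worried about the exam?
```

Anything outside the rule set gets the generic fallback, which is exactly the brittleness the AI-complete label points at.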


Examples

For example, consider a straightforward, limited and specific task: machine translation. To translate accurately, a machine must be able to understand the text. It must be able to follow the author's argument, so it must have some ability to reason. It must have extensive world knowledge so that it knows what is being discussed — it must at least be familiar with all the same commonsense facts that the average human translator knows. Some of this knowledge is in the form of facts that can be explicitly represented, but some knowledge is unconscious and closely tied to the human body: for example, the machine may need to understand how an ocean makes one feel to accurately translate a specific metaphor in the text. It must also model the author's goals, intentions, and emotional states to accurately reproduce them in a new language. In short, the machine is required to have a wide variety of human intellectual skills, including reason, commonsense knowledge and the intuitions that underlie motion and manipulation, perception, and social intelligence. Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

AI systems can solve very simple restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on. (Lenat & Guha 1989, pp. 1-5)


Formalisation

Computational complexity theory deals with the relative computational difficulty of computable functions. By definition it does not cover problems whose solutions are unknown or have not been characterised formally. Since many AI problems have no formalisation yet, conventional complexity theory does not allow the definition of AI-completeness.

To address this problem, a complexity theory for AI has been proposed. It is based on a model of computation that splits the computational burden between a computer and a human: one part is solved by the computer and the other part is solved by the human. This is formalised by a human-assisted Turing machine. The formalisation defines algorithm complexity, problem complexity and reducibility, which in turn allows equivalence classes to be defined.

The complexity of executing an algorithm with a human-assisted Turing machine is given by a pair:

\langle\Phi_{H},\Phi_{M}\rangle

where the first element represents the complexity of the human's part and the second element is the complexity of the machine's part.
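The pair can be made concrete by instrumenting a computation that delegates hard subproblems to a human oracle. The sketch below is a hypothetical illustration (the task, names, and oracle are assumptions, not part of the formalisation): the machine does the bookkeeping while each hard step is answered by the human, and we count the two kinds of work separately.

```python
from dataclasses import dataclass

@dataclass
class Cost:
    human: int = 0    # Phi_H: count of elementary human operations
    machine: int = 0  # Phi_M: count of elementary machine operations

def label_images(images, human_oracle):
    """Label each image by asking a human oracle, tallying both costs."""
    cost = Cost()
    labels = []
    for image in images:
        cost.machine += 1                  # machine work: iterate and record
        labels.append(human_oracle(image)) # hard subproblem delegated to human
        cost.human += 1                    # one human query per image
    return labels, cost

labels, cost = label_images(["img1", "img2", "img3"],
                            human_oracle=lambda img: "cat")
print((cost.human, cost.machine))  # (3, 3): one query and one step per image
```

For n images this runs at cost \langle O(n), O(n) \rangle, matching the image-labelling entry in the results below.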

Results

The complexity of solving the following problems with a human-assisted Turing machine is:

  • Optical character recognition for printed text: \langle O(1), poly(n) \rangle
  • Turing test:
    • for an n-sentence conversation where the oracle remembers the conversation history (persistent oracle): \langle O(n), O(n) \rangle
    • for an n-sentence conversation where the conversation history must be retransmitted: \langle O(n), O(n^2) \rangle
    • for an n-sentence conversation where the conversation history must be retransmitted and the person takes linear time to read the query: \langle O(n^2), O(n^2) \rangle
  • ESP game: \langle O(n), O(n) \rangle
  • Image labelling (based on the Arthur–Merlin protocol): \langle O(n), O(n) \rangle
  • Image classification: human only: \langle O(n), O(n) \rangle , and with less reliance on the human: \langle O(\log n), O(n \log n) \rangle .
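The gap between the two Turing-test entries comes from what must be transmitted per turn. With a persistent oracle each turn sends one new sentence, for n total; when the history must be retransmitted, turn k resends all k sentences so far, and the total is 1 + 2 + ... + n = n(n+1)/2, which is O(n^2). A short sketch of that arithmetic:

```python
def persistent_cost(n):
    # Oracle remembers history: each of the n turns transmits one sentence.
    return sum(1 for _ in range(n))

def retransmit_cost(n):
    # History must be resent: turn k transmits all k sentences so far.
    return sum(k for k in range(1, n + 1))

n = 100
print(persistent_cost(n))   # 100 -> O(n)
print(retransmit_cost(n))   # 5050 = n(n+1)/2 -> O(n^2)
```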
