
Turing Machines and Neural Networks
Who was Alan Turing

Alan Turing was a founder of computer science: a mathematician, philosopher, codebreaker and visionary. He was born on 23 June 1912 in Paddington, London. He studied at Sherborne School from 1926 to 1931, and was then an undergraduate at King's College, Cambridge, from 1931 to 1934. In 1935 he was elected a Fellow of King's College for his work in quantum mechanics, probability and logic. Before the advent of the digital computer, in the 1930s, several mathematicians began to think about what it means to be able to compute a function. Turing's definitive contribution was to relate logical instructions, the action of the mind, and a machine that could in principle be embodied in a practical physical form. Thus, in 1936, he introduced the Turing machine, computability and the universal machine. At the same time the American logician Alonzo Church had been working on the same question and reached a parallel conclusion. Turing's approach, however, was original and different: Church relied upon an assumption internal to mathematics, rather than appealing to operations that could actually be carried out by real things or people in the physical world. Church and Turing thus independently arrived at equivalent conclusions. As we might phrase their common definition now:

A function is computable if it can be computed by a Turing machine.

What is a Turing Machine
Logically, a Turing machine has all the power of any digital computer. Conceptually, a Turing machine operates on an infinite tape divided into squares, any of which may contain a symbol from a finite alphabet, with the restriction that there can be only finitely many non-blank squares on the tape. At any time the machine has a read/write head positioned at some square on the tape, and is in one of a finite number of internal states. The Turing machine is further specified by a set of instructions of the following form:

(current_state, current_symbol, new_state, new_symbol, left/right)
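The instruction format above can be made concrete with a minimal interpreter. The following is an illustrative sketch (not from any of the cited sources): each rule is the quintuple described above, the tape is stored sparsely, and the machine halts when no instruction matches. The example machine, which appends a 1 to a block of 1s (a unary increment), is likewise invented for illustration.

```python
# A minimal Turing machine interpreter (illustrative sketch). Each rule is
# the quintuple described above:
# (current_state, current_symbol) -> (new_state, new_symbol, move).

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run quintuple rules until no rule applies; return the tape contents."""
    tape = dict(enumerate(tape))          # sparse tape; missing squares are blank
    for _ in range(max_steps):
        symbol = tape.get(head, "_")      # "_" stands for a blank square
        if (state, symbol) not in rules:
            break                         # halt: no instruction matches
        state, new_symbol, move = rules[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    cells = [tape[i] for i in sorted(tape)]
    return "".join(cells).strip("_")

# Example machine: append a 1 to a block of 1s (unary increment).
rules = {
    ("start", "1"): ("start", "1", "R"),  # scan right over the 1s
    ("start", "_"): ("halt", "1", "R"),   # write a 1 on the first blank, halt
}
print(run_turing_machine(rules, "111"))   # -> 1111
```

Even this tiny interpreter captures the essential point: the machine's behaviour is entirely determined by a finite table of instructions, which is exactly what lets a universal machine take the table itself as input.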

Turing expanded this idea to show that there exists a "universal" Turing machine: a machine which can compute any computable number or function, given the appropriate instructions. The set of instructions can be thought of as a program for the Turing machine. Turing's paper, On Computable Numbers, was published near the end of 1936. It attracted considerable attention, and the ideas came to the notice of the mathematician John von Neumann. During 1936-38, through work that extended these original ideas, Turing completed a Ph.D. in logic, algebra and number theory at Princeton University.

Reference:
http://www.turing.org.uk/bio/part3.html
http://www.nmia.com/~soki/turing/
http://www.ams.org/new-in-math/cover/turing.html

It turns out that recurrent neural networks are equivalent to Turing machines: they can compute any algorithmically computable function. Heikki Hyötyniemi of the Control Engineering Laboratory, Helsinki University of Technology, has published a paper stating exactly this: 'Turing Machines are Recurrent Neural Networks'.

The paper can be viewed at the following link:
http://www.uwasa.fi/stes/step96/step96/hyotyniemi1/
Explanation: Neural networks can be used for classification tasks, telling whether an input pattern belongs to a specific class or not. It has long been known that one-layer feedforward networks (i.e. a single threshold logic unit) can classify only patterns that are linearly separable. However, the more successive layers there are, the more complex the class distributions can be. A recurrent neural network has two layers: a feedforward layer and a recurrent layer. When feedback is introduced into the network structure, so that perceptron output values are recycled, the number of successive layers becomes, in principle, infinite. Is the computational power thereby enhanced qualitatively? The answer is yes. The paper shows that a recurrent network structure consisting of identical computing elements can accomplish any algorithmically computable function; recurrent neural networks are therefore capable of doing everything a Turing machine can do.
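The claim that a single threshold logic unit handles only linearly separable functions can be checked directly. The sketch below (illustrative, not from the cited paper) brute-forces small integer weights and shows that AND is realizable by one unit while XOR is not; since XOR is not linearly separable, no real-valued weights would work either.

```python
# Illustrative check: a single threshold logic unit (TLU) computes
# step(w1*x1 + w2*x2 + b), so it can only realize linearly separable
# functions. Brute force over small integer weights shows that AND is
# realizable but XOR is not.

from itertools import product

def tlu(w1, w2, b, x1, x2):
    """One threshold logic unit with a hard step activation."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def realizable(target):
    """Can any TLU with weights in -3..3 reproduce the 2-input truth table?"""
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for w1, w2, b in product(range(-3, 4), repeat=3):
        if all(tlu(w1, w2, b, x1, x2) == target[(x1, x2)]
               for x1, x2 in inputs):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(realizable(AND), realizable(XOR))  # -> True False
```

This is the qualitative gap the paragraph describes: adding layers, and ultimately feedback, is what lifts the network past such limitations.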

P and NP Problem
P Problem

Informally, the class P is the class of decision problems that can be solved by some algorithm within a number of steps bounded by some fixed polynomial in the length of the input. We can obtain an exact solution to such problems within a reasonable amount of time. These problems fall in class P, where P stands for polynomial time complexity. Problems for which there is a polynomial-time algorithm are called tractable. To define the class precisely it is necessary to give a formal model of a computer. The Turing machine, introduced by Alan Turing in 1936, is the standard computer model in computability theory. The model was introduced before physical computers were built, but it continues to be accepted as the proper model for the purpose of defining computable functions.
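As a concrete example of a problem in P (chosen here for illustration, not taken from the original text), consider deciding whether an undirected graph is connected. Breadth-first search visits each vertex and edge a bounded number of times, so its running time is polynomial, O(V + E), in the size of the input: the problem is tractable.

```python
# Illustrative example of a problem in P: graph connectivity, decided by
# breadth-first search in O(V + E) time.

from collections import deque

def is_connected(n, edges):
    """Decide connectivity of an n-vertex undirected graph in polynomial time."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, queue = {0}, deque([0])     # start the search from vertex 0
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n             # connected iff every vertex was reached

print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # -> True
print(is_connected(4, [(0, 1), (2, 3)]))          # -> False
```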

NP Problem
There is another class of problems believed to be intractable, for which we currently have no algorithm that will solve them in any reasonable amount of time. These problems fall in the class NP, which stands for nondeterministic polynomial time complexity. The only deterministic algorithms known to solve these problems have a complexity of exponential or factorial order. For some problems the complexity is given by 2^N, where N is the number of input values; in this case, each time we add one additional input value, the amount of time required by the algorithm doubles. Example: if an algorithm performs 512 operations (that is, 2^9) to solve a problem with 9 elements, it would take 1024 operations to solve a problem with 10 elements. Note that NP is not the same as non-P. NP, "nondeterministic polynomial time", was originally defined in terms of nondeterministic machines, that is, machines that have more than one possible move from a given configuration. Thus the class NP contains the problems that can be solved in polynomial time by a nondeterministic Turing machine (a "parallel" Turing machine which can simultaneously take many computational paths, with the restriction that the parallel paths cannot communicate). A P problem (whose solution time is bounded by a polynomial) is always in NP as well. If a problem is known to be in NP, and a solution to the problem is somehow known, then demonstrating the correctness of the solution can always be reduced to a single polynomial-time verification. If P and NP are not equivalent, then the solution of NP problems requires (in the worst case) an exhaustive search: for discrete problems in which no efficient solution method is known, it might be necessary to test each possibility sequentially in order to determine whether it is the solution.
Such exhaustive examination of all possibilities is known as exhaustive search, direct search, or the "brute force" method: checking all the possibilities, as in searching for a Hamiltonian circuit, for which no efficient algorithm is known.
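The Hamiltonian circuit example above can be made concrete with a brute-force search (an illustrative sketch, not from the original text): test every ordering of the vertices and check whether it closes into a cycle. With N vertices there are N! orderings, so the running time grows factorially with the input size, which is exactly the blow-up the paragraph describes.

```python
# An exhaustive ("brute force") search for a Hamiltonian circuit:
# try all N! vertex orderings.

from itertools import permutations

def has_hamiltonian_circuit(n, edges):
    """Brute-force test: does some vertex ordering form a closed cycle?"""
    edge_set = {frozenset(e) for e in edges}
    for order in permutations(range(n)):
        cycle = list(order) + [order[0]]          # close the tour
        if all(frozenset((cycle[i], cycle[i + 1])) in edge_set
               for i in range(n)):
            return True
    return False

# A 4-cycle has a Hamiltonian circuit; a star graph does not.
print(has_hamiltonian_circuit(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> True
print(has_hamiltonian_circuit(4, [(0, 1), (0, 2), (0, 3)]))          # -> False
```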
NP-Hard Problem

A problem is NP-hard (nondeterministic polynomial-time hard) if solving it in polynomial time would make it possible to solve all problems in class NP in polynomial time. That is, a problem is NP-hard if an algorithm for solving it can be translated into one for solving any problem in NP.

NP-Complete Problem

NP-complete problems are the hardest problems in the class NP. They are singled out because if we ever find a deterministic polynomial-time algorithm for one of them, then all of the problems in NP must have deterministic polynomial-time algorithms.

Use of Neural Networks to solve P, NP, NP-Hard and NP-Complete problems

Turing machines can be used to compute P and NP problems, so recurrent neural networks can be used for the same. Both Hopfield and recurrent neural networks have been used for solving problems of P and NP complexity.

The maximum clique problem (MCP) is a classic graph optimization problem with many real-world applications. The problem is NP-complete, and computationally intractable even to approximate within certain absolute performance bounds. A new discrete competitive Hopfield neural network has been used to find near-optimum solutions for the MCP. More information on this topic can be obtained at the following address:
http://portal.acm.org/citation.cfm?id=775532&dl=ACM&coll=GUIDE
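To give a flavour of how a network of simple units can attack the MCP, here is a much-simplified, hypothetical sketch in the spirit of such energy-minimizing networks (NOT the algorithm from the cited paper): treat each vertex as a binary unit, start with all units on, and repeatedly switch off the unit with the fewest connections inside the current set until the remaining vertices are pairwise adjacent, i.e. form a clique.

```python
# Simplified clique heuristic (illustrative only, not the cited paper's
# discrete competitive Hopfield network): peel off low-connectivity
# vertices until the active set is pairwise adjacent.

def greedy_clique(n, edges):
    """Heuristic: drop the weakest active vertex until a clique remains."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    active = set(range(n))
    while True:
        # Count each active vertex's neighbours inside the active set.
        inside = {v: len(adj[v] & active) for v in active}
        if all(d == len(active) - 1 for d in inside.values()):
            return active                           # pairwise adjacent: a clique
        active.remove(min(inside, key=inside.get))  # switch off the weakest unit

# Triangle 0-1-2 plus a pendant vertex 3: the heuristic finds the triangle.
print(sorted(greedy_clique(4, [(0, 1), (1, 2), (0, 2), (2, 3)])))  # -> [0, 1, 2]
```

Like the Hopfield approaches it imitates, such a heuristic finds near-optimum cliques quickly but carries no guarantee of finding the maximum clique, which is exactly what the NP-completeness of the MCP leads us to expect.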

Thus, recurrent neural networks can be used for universal computation and universal approximation; that is, everywhere Turing machines can be used.

Universal Approximation:
It means having the ability to approximate any function to an arbitrary degree of accuracy.

Universal Computation:
It means the ability to compute anything that can be computed in principle; that is, being equivalent in computing power to a Turing machine, the lambda calculus, or a Post production system.