Theory of Computation is a branch of computer science, initially developed in the 1930s, that deals with what computers can do, what problems they can solve, and what their limitations are. Since the field was started by mathematicians, its initial purpose was to form mathematical models of computation that reflected real-world computers. It has since branched out into its own discipline with three major areas of study: Complexity Theory, Computability Theory and Automata Theory.

**Automata Theory**

This is the study of theoretical computational models, otherwise known as abstract machines or automata. It deals with the definitions and properties of these models and how they can be used to solve computational problems. Examples of such models include finite automata, which underlie tools like text processors, and the famous Turing machine. A branch of mathematics closely linked to automata theory is formal language theory, since automata are used to generate and recognize formal languages, and these languages are the preferred medium for describing the problems that must be computed.
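To make "abstract machine" a bit more concrete, here is a minimal sketch of a deterministic finite automaton (DFA) in Python. The machine, state names, and alphabet below are made up for illustration; this particular DFA accepts binary strings containing an even number of 1s.

```python
def make_dfa(alphabet, transition, start, accepting):
    """Build a DFA and return a function that runs it on an input string."""
    def run(word):
        state = start
        for symbol in word:
            if symbol not in alphabet:
                raise ValueError(f"symbol {symbol!r} not in alphabet")
            state = transition[(state, symbol)]
        # The word is accepted iff the machine ends in an accepting state.
        return state in accepting
    return run

# Hypothetical example: accept binary strings with an even number of 1s.
even_ones = make_dfa(
    alphabet={"0", "1"},
    transition={
        ("even", "0"): "even",
        ("even", "1"): "odd",
        ("odd", "0"): "odd",
        ("odd", "1"): "even",
    },
    start="even",
    accepting={"even"},
)
```

For instance, `even_ones("101")` returns `True` (two 1s) while `even_ones("1011")` returns `False` (three 1s). The whole machine is just a transition table plus a loop, which is exactly what makes these models easy to reason about mathematically.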

**Complexity Theory**

Human beings have a certain standard for concepts such as 'difficulty'. For us, a problem may be considered difficult when it takes too much time to solve. Similarly, a 'difficult' problem for a computer may be one that consumes a lot of its resources. However, problems that are difficult for us are not necessarily the same ones that are difficult for computers. Complexity theory deals with these varying degrees of difficulty and focuses on classifying problems into complexity classes accordingly.
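One classic way this classification shows up is the gap between *verifying* a solution and *finding* one. As a hedged sketch (the problem instance below is made up), the subset-sum problem illustrates this: checking a proposed answer takes time proportional to its size, while the obvious search tries exponentially many subsets.

```python
from itertools import combinations

def verify(numbers, target, indices):
    """Check a claimed solution (a list of indices) in linear time."""
    return sum(numbers[i] for i in indices) == target

def brute_force(numbers, target):
    """Search all 2^n subsets for one summing to target -- exponential time."""
    for r in range(len(numbers) + 1):
        for combo in combinations(range(len(numbers)), r):
            if sum(numbers[i] for i in combo) == target:
                return list(combo)
    return None  # no subset sums to the target
```

For `numbers = [3, 7, 1, 8, -4]` and `target = 4`, `brute_force` finds the indices `[0, 2]` (since 3 + 1 = 4), and `verify(numbers, 4, [0, 2])` confirms it instantly. The asymmetry between cheap verification and expensive search is the intuition behind the NP class mentioned in the questions below.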

**Computability Theory (also known as Recursion Theory)**

The main concern of Computability Theory is to figure out whether certain problems can be solved by computational methods at all. Think of the age-old riddle, "Can a room of monkeys on typewriters write the entire works of Shakespeare, given enough time?", and apply it to computers, i.e. "Can a computer solve a given problem given enough time and resources such as RAM and disk space?" An important term to consider here is 'effective computation', which may or may not coincide with the concept of *efficient* computation. This is one place where complexity theory overlaps with computability theory.
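A useful way to feel this distinction: asking whether a program halts *at all* is undecidable in general, but asking whether it halts *within k steps* is trivially decidable, because you can just run it for k steps. As a hedged sketch, the Collatz rule below stands in for an arbitrary program whose halting behavior is, famously, not known to be settled for all inputs.

```python
def collatz_step(n):
    """One step of the Collatz rule: halve evens, map odds to 3n + 1."""
    return n // 2 if n % 2 == 0 else 3 * n + 1

def halts_within(n, budget):
    """Return the step count if n reaches 1 within `budget` steps, else None.

    Bounded halting is decidable: we simply simulate. Whether *every* n
    eventually reaches 1 (unbounded halting for this rule) is an open problem.
    """
    for steps in range(budget + 1):
        if n == 1:
            return steps
        n = collatz_step(n)
    return None
```

For example, `halts_within(6, 100)` returns `8` (the chain 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1), while `halts_within(6, 3)` returns `None` because the budget runs out first. No finite budget, however, can answer the unbounded question for all inputs.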

The Theory of Computation is, in short, the science of computation. Its objective is to observe and understand computational phenomena and to seek out their limits; it also aims to find the most efficient and effective methods for solving problems. Beyond its three main areas of study, the Theory of Computation touches several other fields, such as cryptography, computational learning theory and distributed computing.

**Questions:**

What are the main problems that experts in this field are concerned about?

Why is the Turing machine just a model and what is limiting us from creating a real life Turing machine?

What is the P vs NP problem?

What’s the future of this field?

**Links:**