# Theory of Computation

The Theory of Computation is the study of what problems can be solved on a model of computation, and how efficiently. It has three branches: automata theory and formal languages, computability theory, and computational complexity theory.

Automata are abstract machines that take an input and produce one or more outputs.

An automaton can be defined by a quintuple (Q, S, d, q_0, F), where:

• Q is the set of possible states the automaton can be in,
• S is the alphabet of the automaton, i.e. the symbols it recognizes,
• d is the transition function that describes what the automaton does with each input symbol,
• q_0 is the start state of the automaton, and
• F is the set of accepting states (if the final state is in this set, the input is accepted).
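The quintuple above can be sketched directly in code. This is a minimal, hypothetical example of a deterministic finite automaton that accepts binary strings with an even number of 1s; all the names (`Q`, `S`, `d`, `q0`, `F`, `accepts`) just mirror the definition and are not from any particular library:

```python
# Hypothetical DFA: accepts binary strings containing an even number of 1s.

Q = {"even", "odd"}          # set of states
S = {"0", "1"}               # alphabet (often written Sigma)
d = {                        # transition function (often written delta)
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
q0 = "even"                  # start state
F = {"even"}                 # set of accepting states

def accepts(word):
    """Run the automaton on `word`; accept iff the final state is in F."""
    state = q0
    for symbol in word:
        state = d[(state, symbol)]
    return state in F
```

For instance, `accepts("11")` is accepted (two 1s) while `accepts("1101")` is rejected (three 1s).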

There are many options for each of these five components, which gives rise to many types of automata.

https://en.wikipedia.org/wiki/Automata_theory

A formal language is a subset of the set of all possible strings formed from an alphabet. This subset is usually infinite, so it is described by a collection of rules rather than by an explicit listing (i.e. L={____}).
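As a toy illustration of describing an infinite language by a rule, here is a membership test for the classic language L = { aⁿbⁿ : n ≥ 0 } over the alphabet {a, b}; the function name is just for this sketch:

```python
# L = { a^n b^n : n >= 0 } is infinite, so instead of listing its strings
# we describe it by a rule: a membership test.

def in_language(word):
    """Return True iff `word` is n 'a's followed by exactly n 'b's."""
    n = len(word) // 2
    return len(word) % 2 == 0 and word == "a" * n + "b" * n
```

So "ab" and "aabb" are in L, while "aba" and "abb" are not.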

https://en.wikipedia.org/wiki/Formal_language

Computability Theory studies computable functions, something I was not able to fully understand (probably too high-level for me). It's the type of thing that, when you read it, seems so obvious that you wonder why they make such a fuss about it:

a major breakthrough in the field was the proof that "any function that is computable by an algorithm is a computable function[!!!!]" (Wikipedia).

My mind was blown when I came across this!

It turns out, of course, that the mathematician Gödel was very skeptical about this theorem at first, but then he decided to give it a chance...

https://en.wikipedia.org/wiki/Computability_theory

Computational Complexity Theory is the easiest to understand. It studies the classification of algorithms according to how their running time grows with the size of the input. The most common classification uses big-O notation.

O(2^n) algorithms take exponential time to run.

O(n), O(n^2), etc. algorithms take polynomial time to execute.
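The gap between these two classes grows dramatically. A quick sketch (the numbers are abstract step counts, not measured times):

```python
# Compare how a polynomial step count (n^2) and an exponential one (2^n)
# grow as the input size n increases.

for n in (10, 20, 30):
    print(f"n={n:2d}  n^2={n**2:4d}  2^n={2**n}")
```

At n = 30, n² is only 900 steps, while 2ⁿ is already over a billion.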

https://en.wikipedia.org/wiki/Computational_complexity_theory

And on that note:

A problem that can be solved using an algorithm that runs in polynomial time is called a P-problem. The set of such problems is called P.

A problem for which, given a possible answer, one is able to check whether this answer is right or wrong in polynomial time is called an NP-problem. The set of such problems is called NP.

Notice that if a problem can be solved in polynomial time, an answer for it can also be checked in polynomial time: just solve the problem and compare the answers. So every P-problem is also an NP-problem: P is contained in NP.
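The "check vs. solve" distinction can be sketched with Subset Sum, a well-known NP problem: given a list of numbers and a target, is there a subset that sums to the target? Verifying a proposed answer (a certificate) is fast, while the naive solver below tries all 2ⁿ subsets. The function names are just illustrative:

```python
from itertools import combinations

def verify(numbers, target, certificate):
    """Polynomial-time check: does the proposed subset sum to the target?"""
    return all(x in numbers for x in certificate) and sum(certificate) == target

def solve(numbers, target):
    """Exponential-time search: try every subset until one sums to the target."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

For example, `verify([3, 34, 4, 12, 5, 2], 9, [4, 5])` succeeds instantly, but `solve` may have to examine exponentially many subsets before finding that answer.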

Wiki Article: https://en.wikipedia.org/wiki/Theory_of_computation

P vs. NP: http://www.claymath.org/millennium-problems/p-vs-np-problem

Turing Machines video: https://www.youtube.com/watch?v=dNRDvLACg5Q

## Questions:

Do you think P=NP?

What is the mathematical background used for this area?

Can quantum computers solve exponential problems very quickly?