Since the dawn of humanity, our ability to think critically and solve problems has driven an astounding scale of change, affecting not only ourselves but also the world we live in. By solving more and more problems, mankind developed rapidly in all aspects. Be it the advent of the wheel or the steam engine, inventions were always fuelled by necessity: the necessity to solve a problem. The nature of problems, in most cases, is such that once one is unraveled, several others emerge. Consequently, our inventions gave rise to a vast expanse of new problems. One such class of problems is computational problems, which essentially led to the emergence of the theory of computation. This branch of theoretical computer science deals specifically with whether a problem can be solved at all and how efficiently it can be solved. In my opinion, the theory of computation is the key contributor to the advancement of computer science as a discipline.

Computational problems, given their complex and obscure nature, can be classified into two groups: solvable and unsolvable. Solvable ones can be further broken down into two parts: those that can be solved efficiently and those that cannot. It is extremely important for a computer scientist to make these distinctions, for they guide which problems are worth working on. Obviously, it would be a waste of time trying to solve problems that are mathematically proven to be unsolvable. Yes, such problems exist! One of the most famous is the halting problem, which the renowned computer scientist and mathematician Alan Turing proved to be undecidable. On the other hand, it is important to come up with efficient solutions to solvable problems, or else they become practically impossible to solve even though a solution exists in theory. In other words, with the discovery of efficient solutions, the boundary of practically solvable problems expands, but it will always be limited by the unsolvable ones. Needless to say, efficient programs consume less computing power and memory, making them convenient for real-life applications.
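The unsolvability of the halting problem can be sketched with a short self-referential argument. Suppose a function halts(program, data) could always correctly predict whether running a program finishes; the names below are invented for illustration, and the stand-in decider is deliberately fake, since no real one can exist:

```python
def halts(program, data):
    # Hypothetical decider (cannot actually exist). This stand-in
    # simply guesses True so the contradiction can be demonstrated.
    return True

def paradox(program):
    # Do the opposite of whatever halts() predicts about program(program).
    if halts(program, program):
        return "loops forever"  # a real version would enter an infinite loop
    return "halts"

# halts() predicts that paradox(paradox) halts, yet paradox then
# chooses to loop: whichever answer halts() gives, it is wrong.
prediction = halts(paradox, paradox)
behavior = paradox(paradox)
```

A real paradox() would actually loop forever when halts() predicts halting; here it merely returns a label so the contradiction can be shown without running an infinite loop.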

The theory of computation can be divided into three subfields: automata theory, computability theory, and computational complexity theory. Automata theory deals with solving problems using abstract computational models. Computability theory is concerned with the extent to which a computer can solve a problem at all. The idea is developed further in computational complexity theory, which considers not only whether a problem is computable but also how efficient its solutions can be. Alongside these, computer scientists work with mathematical models of computation in order to conduct thorough studies in this vast field. Some of these models are:

• Turing machine

• Lambda Calculus

• Combinatory logic

• Mu-recursive functions

• Markov algorithm

• Register machine

• P’’
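To make the first model on the list concrete, here is a minimal sketch of how a Turing machine operates: a head reads one tape cell at a time, and a transition table dictates what symbol to write, which state to enter, and which way to move. The transition table below, a toy machine that flips the bits of a binary string, is invented purely for illustration:

```python
def run_turing_machine(tape, transitions, start, accept):
    """Simulate a one-tape Turing machine until it reaches `accept`.

    transitions: {(state, symbol): (new_state, new_symbol, move)}
    move is +1 (right) or -1 (left); cells off the tape read as blank '_'.
    """
    cells = list(tape)
    state, head = start, 0
    while state != accept:
        symbol = cells[head] if 0 <= head < len(cells) else "_"
        state, new_symbol, move = transitions[(state, symbol)]
        if 0 <= head < len(cells):
            cells[head] = new_symbol  # write the new symbol under the head
        head += move
    return "".join(cells)

# Toy machine: flip every bit, then halt on reading a blank.
flip = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("done", "_", +1),
}
result = run_turing_machine("1011", flip, "scan", "done")  # -> "0100"
```

Note that this sketch runs until the accept state is reached; a real Turing machine may also reject or run forever, which is exactly the behavior the halting problem concerns.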

Basic discoveries in theoretical computer science extend into dramatic breakthroughs. Running even very simple programs can produce phenomenal, rich, and sophisticated results. The theory of computation is taking us into a whole new era of science and technology. In fact, Stephen Wolfram has established what he calls a new kind of science. It is extremely novel and, some believe, may help unravel nature's secret of how such a complicated universe came to form. It is about time we start asking: is there somewhere in the computational universe where we can find our physical universe?

Sources:

Professor Christos’s lecture

http://www.youtube.com/watch?v=60P7717-XOQ

http://en.wikipedia.org/wiki/Computer_security

http://www.cse.buffalo.edu/~selman/report/


http://quantumprogress.wordpress.com/2010/11/30/why-students-must-computational-thinking-and-possibly-how-to-teach-it/

Questions:

What does P=NP mean?

Did Watson and Siri beat the Turing test, and could they potentially replace humans?

How does the Turing machine work?