• The City School (pre-school to class IX)
• Generation’s School
o O levels-2011:
English Language (Grade: A), English Literature (Grade: A*), First Language Urdu (Grade: A*), Additional Mathematics (Grade: A), Mathematics (Grade: A*), Economics (Grade: A*), Physics (Grade: A*), Biology (Grade: A*), Chemistry (Grade: A*), Pakistan Studies (Grade: A), Islamiyat (Grade: A*)
Chemistry (Grade: A), Mathematics (Grade: A), Economics (Grade: A)
Chemistry (Grade: A*), Mathematics (Grade: A*), Economics (Grade: A*), General Paper (Grade: A)
o Organized Generation’s School’s first inter-school event (Youth Spark Declamatrics 2012):
IT and Graphics Director in an 11-member team named the Committee Organizing Diverse Events (CODE).
I designed all the visuals, including the posters, banners, digital backdrops, certificates and the intro video.
We are currently working on the second iteration of the event, to be held, tentatively, in early February 2013.
o LUMS PSIFI 2012 (top 4 in Junkyard Wars among 13 teams)
I was selected as part of our school's team for LUMS PSIFI, organized at the Lahore University of Management Sciences.
The events, which included the construction of a functional catapult from scrap materials, a rubber-band-powered car and a helium-balloon
airship, tested the creativity, ingenuity and engineering skills of the participants.
o Participated in the All Pakistan Mathematics Olympiad 2012 organized at the Ghulam Ishaq Khan Institute (received the best school award)
I was selected as part of our school's team for the 2nd All Pakistan Mathematics Olympiad, organized at the Ghulam Ishaq Khan Institute (GIKI).
Our team placed in the top 10 out of a total of about 70 teams in all three events we entered and won the "best participating
school" award. The competitions tested the participants' prior knowledge as well as their ability to learn on the go and to
think logically.
o Model United Nations:
MUN@TIS: 2010 (as delegate of Azerbaijan), 2011 (as delegate of Iraq)
ROTMUN 2011 (as delegate of Bosnia and Herzegovina)
o Designed the cover page of the 2012 edition of the school magazine:
My design was chosen from several prepared by other students to appear as the cover page.
o Instructor for Video and Sound editing workshops:
As president of the computer society I organized a series of workshops on web designing and video/sound editing, serving as the
instructor for the latter.
o ISEO Photography 2011 (topic: Transference)
o Maths Olympiad 2011 organized by Happy Home School (2nd place among 12 schools)
o Documentary competition 2010 at Generation’s School (3rd place among 10 teams)
o Generations CARMA 2011 (1st place in newspaper designing and Discover the Difference among 15 teams)
o Video Editor for the official Generation’s School documentary movie
o LUMS CARMA 2010 (as speaker in the Discover the Difference presentation and video editor)
• Ibn Rushd Computer Society-Vice President 2010-2011
o Part of the organizing team for the annual documentary competition
o Designed the certificates
• Model United Nations Society-Multimedia Coordinator 2010-2012
o Prepared the multi-media presentation for the Model United Nations at Generation’s School
o Prepared the intro video for the event
o Responsible for all the technical elements of the event
• CODE–IT & Graphics Director 2011-2012
• School Prefect- 2012-2013
• Ibn Rushd Computer Society President 2012-2013
• CODE-Managing Director 2012-2013
• The Citizen Foundation 2010 and 2012 (~40 hours)
• Zindagi Trust 2010 (~4 hours)
• Participated in the collection and packing of relief goods for the victims of the 2010 floods
• Volunteered at the Help in a Box foundation to help in the packing of relief goods for the victims of the 2010 floods.
• Private tuition:
o In Economics- July to August 2011
o In Physics, Chemistry and Mathematics- July 2012 to date
Randolph Frederick ‘Randy’ Pausch, born October 23, 1960 in Baltimore, Maryland, USA, was an American computer scientist.
He received his bachelor’s degree from Brown University and his Ph.D. in Computer Science from Carnegie Mellon University.
Pausch’s career spanned many organizations, including Walt Disney Imagineering, Electronic Arts, Google and Xerox’s Palo Alto
Research Center (PARC). He was an assistant and then associate professor at the University of Virginia for nine years before assuming the
office of Associate Professor of Computer Science and Human-Computer Interaction at Carnegie Mellon University.
Pausch is best known for his talk in the “Last Lecture” series at CMU and the eponymous book that followed it. In his lecture he
touched on how he pursued his dreams and gave life lessons to ambitious students in the audience. It was his way of creating his
legacy and explicitly defining what he wanted to be remembered for.
Pausch also co-founded the Entertainment Technology Center (ETC) at CMU. The ETC allowed students from various disciplines to collaborate
on projects in the field of entertainment. It was a unique concept and its graduates were highly sought after.
Pausch was known for his belief that if something isn’t fun, it is probably not worth doing. As a computer scientist he recognized
that programming can seem unexciting to many beginners, particularly girls, so he developed the Alice software to teach programming
indirectly by having students create 3D animations.
Pausch’s Time Management talk was highly informative and enlightening. Below are some of the highlights and important points of the lecture:
• The idea of equating time to money, to help us realize its true value.
• The process of categorizing and prioritizing tasks by their importance and urgency.
• His emphasis on finding the courage to say no, both to others and to yourself.
• His tips on effective delegation techniques.
• His point that technology can both facilitate and hinder time management.
Links related to Randy Pausch:
For decades, computer scientists and engineers have been hard at work securing computer systems,
and they have indeed introduced major breakthroughs in the field of security; more robust encryption
protocols and multi-level security systems are among the most notable. However, at large the
implementation of these security systems is scarce, or insufficient at best, which allows any
competent attacker to steal or destroy information on millions of systems, even simultaneously.
There are two main reasons for this state of affairs. First, people do not buy security systems
because they are expensive and inconvenient. Second, these security systems are extremely
complex, so their code and their setup are themselves very vulnerable to loopholes.
A computer security system can be studied under three headings: policy, mechanism and assurance.
Policy means specifying the required security goals. In an organization this would mean
controlling who gets access to how much information, controlling how information and/or
computing resources are used, ensuring prompt access to information and resources, and
knowing who has accessed information or used company resources, for the purpose of
accountability.
Mechanism is the actual deployment of a security system. It has two components: the
software itself and the setup of security parameters. The job of a security system, quite
obviously, is to defend the computer system from any entity that intends to harm it. A
security system has several techniques at its disposal to accomplish this; which one
is used depends on the computer system’s purpose and the desired level of security.
Assurance refers to making the security system actually work. The working of the security
system is based on the idea of a trusted computing base (TCB): the collection of
hardware, software and setup information that makes up a security system. A technique
used to make TCBs more robust is called defense in depth. This could include, among
other things, using a firewall and sandboxing (running a program in a virtual
environment so that it cannot harm the computer) at the operating-system level,
and authorization checks at the application level.
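The defense-in-depth idea above can be sketched as a request passing through several independent layers before reaching a protected resource. All names, addresses and rules below are hypothetical, invented purely for illustration.

```python
# A toy sketch of "defense in depth": a request must pass every
# independent layer before it reaches a protected resource.

BLOCKED_IPS = {"203.0.113.9"}           # network layer: firewall-style blocklist
SANDBOXED_OPS = {"exec", "write_disk"}  # OS layer: operations confined to a sandbox
PERMISSIONS = {"alice": {"read"}, "bob": {"read", "write"}}  # application layer

def allow_request(ip, user, operation):
    """Return True only if every layer of the defense permits the request."""
    if ip in BLOCKED_IPS:               # layer 1: firewall blocks hostile hosts
        return False
    if operation in SANDBOXED_OPS:      # layer 2: sandbox confines risky operations
        return False
    granted = PERMISSIONS.get(user, set())
    return operation in granted         # layer 3: application authorization check

print(allow_request("198.51.100.7", "alice", "read"))   # True: passes all layers
print(allow_request("203.0.113.9", "alice", "read"))    # False: stopped at the firewall
print(allow_request("198.51.100.7", "alice", "write"))  # False: no write permission
```

The point of the layering is that an attacker who defeats one layer (say, spoofs an allowed IP) is still stopped by the remaining checks.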
In the modern era, computer security systems are becoming ever more essential for any entity,
be it an individual or an organization. Large corporations have had to be wary of cyber-attacks
since the integration of computers with business; however, the pervasiveness of technology has
opened up new avenues for cyber-crime, so even average technology consumers must now be
mindful of the security of their devices to avoid becoming victims of cyber-crimes like
identity theft or extortion.
Lampson B. Computer Security in the Real World [Internet]. 2004 June [cited 2013 Sept 3]. Available from:
TEDxMidAtlantic 2011 - Avi Rubin - All Your Devices Can Be Hacked
Network Security: History, Importance, and Future
1-Is a strong single-layered security system better than a multilayered system?
2-Is cyberwarfare between countries illegal and punishable under international law?
Natural Language Processing (NLP) is part of the branch of computer science known as
human-computer interaction. The main goal of developing natural language processing techniques is
to reduce dependence on programming languages and specialized commands as a means of communicating
with computers, by enabling computers to comprehend and respond to natural human speech. “The choice of
the word ‘processing’ is very deliberate, and should not be replaced with ‘understanding’. For although
the field of NLP was originally referred to as Natural Language Understanding (NLU) in the early days of
AI, it is well agreed today that while the goal of NLP is true NLU, that goal has not yet been accomplished.
A full NLU System would be able to: Paraphrase an input text, translate the text into another language,
answer questions about the contents of the text, Draw inferences from the text.”
Considerable headway has been made in bringing NLP to mainstream consumer goods. Voice recognition has been
around for some time: the first effective speech recognizer was invented in 1952, but effective voice
control systems did not appear until the end of the 20th century. Initially their functionality was very
limited; they could only recognize specific commands and did not work very well with non-American accents.
It wasn’t until 2011, when Apple integrated the Siri mobile assistant into the iPhone, that voice control
systems utilizing NLP became available to the average consumer. The user can speak colloquial English and the
software will recognize and execute the relevant commands. In 2013 Facebook launched its internal
search engine, called Graph Search, which some commentators believe could one day rival Google. Graph Search does
not rely on keywords to frame a search query; rather, it takes its search parameters in the form of simple
English sentences and phrases and gives highly personalized results to the user.
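The sentence-style querying idea can be illustrated with a toy sketch. The data, query form and pattern below are invented for illustration and bear no relation to Facebook's actual implementation, which involves far more sophisticated parsing and ranking.

```python
# A toy illustration of keyword-free, sentence-style querying:
# answer questions of the form "friends of X who like Y" over a tiny
# invented social graph.
import re

FRIENDS = {"alice": {"bob", "carol"}, "bob": {"alice"}}
LIKES = {"bob": {"hiking"}, "carol": {"reading"}}

def graph_query(sentence):
    """Parse a plain-English query and return the matching people."""
    m = re.match(r"friends of (\w+) who like (\w+)", sentence.lower())
    if not m:
        return set()  # sentence does not fit the one pattern we understand
    person, interest = m.groups()
    # Filter the person's friends by the stated interest.
    return {f for f in FRIENDS.get(person, set())
            if interest in LIKES.get(f, set())}

print(graph_query("Friends of Alice who like hiking"))  # {'bob'}
```

Real systems replace the single regular expression with full linguistic analysis, which is exactly what makes them able to handle arbitrary phrasings.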
Indeed, the era presented in the sci-fi movies of the past is not far from becoming a reality.
Very soon the barriers that hinder human-computer communication may become non-existent, and with the
development of more advanced AI systems, more human-like machines may come into existence that learn
on the fly and are not limited by strict syntax rules, allowing communication between them and the
user to become more fluid and natural.
1-How does a computer recognize different contexts of a word?
2-Can a system be designed to learn new words and phrases on the fly like people do?
3- What are the current limitations of NLP?
Programming languages are the unique languages that humans use to communicate with machines. Any commands and instructions that
determine the behavior of a machine must be given in one of the several programming languages if the machine is expected
to process and respond to them.
Ada Lovelace is known as the inventor of computer programming. She worked closely with Charles Babbage on his proposed “Analytical
Engine” and is credited with the creation of the loop, an indispensable element of modern programming languages.
One of the earliest forms of programming was the punch card, developed by Herman Hollerith; punch cards were used to input instructions
and data into machines. One of the earliest formal systems of computation was the lambda calculus, invented in the 1930s by Alonzo Church. The roots
of modern programming languages may be traced back to FORTRAN (FORmula TRANslating system). What set FORTRAN apart was that it combined
the use of standard English words such as “IF”, “AND” and “OR” with programming code. It was also a general-purpose language, i.e., it was
not designed for a specific process but could be used for a multitude of operations. Arguably one of the most influential languages was C,
a general-purpose programming language developed by Dennis Ritchie at AT&T Bell Labs in the early 1970s. Many later languages, including C#, Java
and Python, have borrowed either directly or indirectly from C.
New programming languages are constantly under development. Some are designed for a very specific application, while others are designed
to overcome the faults and limitations of existing languages and to improve efficiency, reliability and speed of execution.
Verification is the process of ensuring that the code written fulfills the specified requirements. Verification is not the same as validation,
in that the former answers the question “Is the program being built right?” as opposed to “Is the right program being built?”. Unlike testing,
verification is done before, during and after the code is written to ensure accuracy and correctness, hence eliminating errors and failures.
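One way to make "fulfills the specified requirements" concrete is a property check: state the specification as a predicate and verify a candidate implementation against it. The sort specification and function names below are invented for this sketch.

```python
# A minimal sketch of checking code against a specification.
# Specification: a sort must return its input's elements in ascending order.

def meets_spec(sort_fn, data):
    """Verify sort_fn against the spec on one input: the output must be
    ordered, and must contain exactly the input's elements."""
    out = sort_fn(list(data))
    ordered = all(out[i] <= out[i + 1] for i in range(len(out) - 1))
    same_elements = sorted(data) == sorted(out)
    return ordered and same_elements

print(meets_spec(sorted, [3, 1, 2]))         # True: built-in sort meets the spec
print(meets_spec(lambda xs: xs, [3, 1, 2]))  # False: identity leaves input unordered
```

Checking both clauses matters: a function returning `[1, 1, 1]` would be ordered but would fail the same-elements clause, so it too is rejected.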
1-What are the limitations of the modern languages?
2-What features would be included in the languages of the future?
For more information, please visit the following links,
A computer network is a group of computers connected to each other so as to allow them to exchange information. Computers use what
is called a protocol to encode data into a format that can be understood by other devices. Each computer connected to a network is assigned a
unique address, called an IP (Internet Protocol) address, by which it can be identified.
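The addressing idea can be seen directly from Python's standard socket library: an IPv4 address is a four-byte value, and the operating system can resolve a host name to such an address.

```python
# A small sketch of IP addressing using Python's standard socket module.
import socket

# Pack a dotted-quad address into its binary form.
packed = socket.inet_aton("192.0.2.1")
print(len(packed))  # 4: an IPv4 address is four bytes

# Resolve a host name to an address; "localhost" is the machine itself.
print(socket.gethostbyname("localhost"))  # typically 127.0.0.1, the loopback address
```

The `192.0.2.1` address is from a range reserved for documentation, so the example never touches a real host.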
Computers in a network are connected using a variety of physical media, including wire, fiber and air. The method employed depends on the specifications
of the network. Wired communication over Ethernet can be employed for short-range networks, such as intra-building networks, while fiber and wireless communication
may be employed for long-range communication. The largest computer network, the Internet, uses thousands of miles of fiber-optic cables laid on the ocean
bed between continents to connect computers from around the world.
With all the boons this connectivity brings, it also entails certain shortcomings. The most pressing among these is that computers connected to a network
are left vulnerable to security threats. Therefore, in any networking environment, ensuring security is of utmost importance.
Physical security refers to monitoring and regulating physical access to machines and network equipment. In fact, ensuring physical security is the most important
aspect of network security. Even with the most advanced security software in place, a hostile element can cause significant damage if it has access to the physical
infrastructure of the network. Secure files, passwords and certificates stored on the servers can be cloned, or physical drives can simply be stolen. An assailant can
even destroy a server farm, causing the loss of a large volume of valuable information.
Digital access to the network must also be controlled. A firewall mechanism is used to monitor the influx and exodus of network traffic. Firewall systems may be
thought of as sieves: they block any activity considered suspicious and allow activities considered safe. If configured correctly they can be a
reasonable form of protection from external threats, including some denial-of-service (DoS) attacks; if not configured correctly, they can be major security holes
in an organization.
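The sieve behaviour described above can be sketched as an ordered rule list with a default-deny fallback. The rule format and port choices are invented for illustration; real firewalls such as iptables offer far richer matching.

```python
# A toy firewall "sieve": rules are checked in order, the first match on
# the port decides, and anything unmatched falls through to default-deny.

RULES = [
    {"port": 80,  "action": "allow"},   # plain web traffic
    {"port": 443, "action": "allow"},   # encrypted web traffic
    {"port": 23,  "action": "deny"},    # telnet: explicitly blocked
]

def filter_packet(port):
    """Return the action for a packet arriving on the given port."""
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]
    return "deny"  # default-deny: the safe stance toward unknown traffic

print(filter_packet(80))    # allow
print(filter_packet(23))    # deny (explicit rule)
print(filter_packet(8080))  # deny (no rule, default applies)
```

The default-deny fallback is the crucial design choice: a misconfigured default-allow firewall is exactly the "major security hole" the text warns about.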
1-Is it possible that data transfer rates over a network could match those experienced during local file transfers on a machine?
The theory of computation aims to develop mathematical models of computation that reflect real-world computers; models of computation are mathematical
abstractions of computers. The central question it tries to answer is whether all mathematical problems can be solved in a systematic way. The theory of
computation consists of three components: complexity theory, computability theory and automata theory.
Complexity theory classifies computational problems according to the resources, such as time and memory, required to solve them. One of the most famous problems in complexity theory,
and in all of computer science, is the P=NP problem. In fact, it is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, which means
that the first person to solve it is entitled to a $1 million prize.
Computer scientists measure the time a computer takes to execute an algorithm in terms of the number of elements it has to manipulate in the process. For simple problems
the time taken to execute the algorithm is proportional to N (the number of elements manipulated) or N raised to some power, a.k.a. polynomial time. However, there are other
operations that are so complex that the time taken to execute them is exponential rather than polynomial. This means that their execution time is proportional to,
for example, 2^N.
To get a sense of the difference between linear, polynomial and exponential time, consider that if a simple algorithm whose execution time is proportional to N takes 1
second to perform calculations involving 100 elements, a more complex algorithm whose execution time is proportional to N^3 will take almost 3 hours, and an even more complex
algorithm with an execution time of the order of 2^N will take hundreds of quintillions of years, on the order of 10^20 years.
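These figures can be checked with a short calculation, assuming (as stated above) that the machine effectively performs 100 steps per second.

```python
# Checking the growth-rate figures: 100 elements, 100 steps per second.

steps_per_second = 100
n = 100

# O(N^3): 100^3 = one million steps.
poly_seconds = n ** 3 / steps_per_second
print(poly_seconds / 3600)               # ~2.78 hours

# O(2^N): 2^100 is about 1.3e30 steps.
exp_seconds = 2 ** n / steps_per_second
years = exp_seconds / (3600 * 24 * 365)
print(f"{years:.1e} years")              # ~4.0e+20: hundreds of quintillions of years
```

The gap between three hours and 10^20 years, from the same 100 inputs, is why exponential-time algorithms are considered intractable regardless of hardware speed.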
P denotes the set of fairly simple problems whose solution time is proportional to a polynomial, while NP denotes the set of problems whose solutions can be verified in
polynomial time. A well-known example of an NP problem is finding the prime factors of a very large number: verifying the factors only requires multiplication, but actually
deriving them requires trial and error over a very large quantity of numbers. P=NP proposes that if the solution to a problem can be verified in polynomial time, then its
solution can also be obtained in polynomial time.
Hardesty L. Explained: P vs. NP. MIT News (Camb, Mass) [Internet]. 2009 Oct 29 [cited 2013 Sept 2]. Available from: http://web.mit.edu/newsoffice/2009/explainer-pnp.html
The technological breakthroughs of the past decade have miniaturized computers very rapidly. An average smartphone now has more computing power than a desktop PC had 10 years ago. The mobilization of such amounts of processing power, and the pervasiveness of such mobile technologies among the general population, has opened a multitude of avenues in several fields, including commerce, education and entertainment.
The rise of this new era of mobile computers has revolutionized e-commerce. Mobile e-commerce (m-commerce) is a term that describes online sales transactions conducted over a wireless network using wireless electronic devices such as hand-held computers, mobile phones or laptops. M-commerce is characterized by ubiquity, convenience, interactivity, personalization and localization.
Mobile banking is one of the earliest and most widely used applications of mobile computing technology. It usually consists of a client application that allows customers to view and manage their accounts using their mobile devices, and it may also allow them to transfer money between accounts. An extension of this concept is the e-wallet. It may be integrated with e-banking, such that a person’s bank account acts as their wallet and payments are debited directly from it, or it can function as an independent application, like Google Wallet, which allows consumers to make purchases up to the amount of money they have credited to their wallet account.
Mobile technologies have also enabled retailers to personalize the shopping experience according to the data collected on individual consumers. Mobile devices equipped with GPS receivers allow retailers to determine the location of the customer quite accurately and then adjust search results accordingly. They may use this information to refer transactions to the nearest outlet of the retailer, hence reducing order-processing and delivery time.
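The localization step described above can be sketched as a nearest-outlet lookup from the customer's GPS coordinates. The outlet names and coordinates below are invented, and a real service would use map data and road distances rather than the straight-line (haversine) distance used here.

```python
# A sketch of GPS-based localization: route the customer to the nearest outlet.
import math

# Hypothetical outlets with (latitude, longitude) coordinates.
OUTLETS = {
    "Clifton branch": (24.814, 67.030),
    "Gulshan branch": (24.917, 67.097),
}

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))  # 6371 km: mean Earth radius

def nearest_outlet(lat, lon):
    """Return the outlet closest to the customer's GPS position."""
    return min(OUTLETS, key=lambda name: haversine_km((lat, lon), OUTLETS[name]))

print(nearest_outlet(24.80, 67.03))  # Clifton branch
```

The same nearest-neighbour lookup underlies the personalized search results mentioned above: once the position is known, every result can be ranked by distance.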