Lectures

As part of my "15-129: Freshman Immigration" course —notably not a class on human immigration, but rather an introduction to Computer Science— I have been fortunate enough to sit in on several wonderful lectures. I've summarized them below for your consumption. They highlight my takeaways from the lectures and present the most important tidbits of the talks.

Before every lecture —in order to gain the most from each one— I also did some research on the topic being presented. I've included that research as well. Enjoy!

Randy Pausch Lecture: Time Management

Randy Pausch's lecture on time management is inspiring! He argues that in order to truly learn how to manage your time, you need to understand what it's worth. You need to truly understand its dollar value, and that you cannot get it back. Money can be earned later, but time, once gone, can never be recovered —use it wisely!

Pausch's 25 nuggets of wisdom:

  1. Learn to have fun. Maximizing time is good, but your goal should be maximizing fun.
  2. Stop trying to do things right, do the right things. Wrong things done right get you nowhere.
  3. Use your time to gain experience. It cannot be faked.
  4. Dream. Do. No shortcuts.
  5. Failing to plan is planning to fail. You've heard it many times, and still choose to ignore it.
  6. Get a todo list, and do the ugliest tasks first!
  7. Figure out what's important and what's due, and plan accordingly.
  8. Clean out your clutter. Clear desk. Clear inbox. Clear mind.
  9. Multiple monitors? Look into it.
  10. Let your computer work for you. Your brain has a lot better things to do.
  11. Shorten your phone calls. It's a win-win for both people in the conversation.
  12. Write thank you notes. Pen to paper. People remember!
  13. Make your workspace comfortable for you, and optionally comfortable for others.
  14. Learn to say no. It's simple but not easy.
  15. Avoid interruptions. They add up.
  16. Focus your energy on the things that matter.
  17. Efficiency does not equal effectiveness. Figure out how to prioritize for either.
  18. Doing things last minute is expensive. Save yourself the stress.
  19. Delegate, but do the dirtiest parts yourself.
  20. Make meetings short, summarize them, and assign tasks discussed. You'll thank yourself later.
  21. Only use technology that helps you. If it's troubling you, it's not worth it.
  22. Don't delete your email. It's your diary.
  23. Kill your television. Or at least tuck it far away.
  24. Never break a promise, but renegotiate if need be. Be true to your word.
  25. Eat, sleep and exercise. Otherwise everything falls apart.

If you're interested in more pieces of advice —and a bit of laughter— watch the full video below:

Kemal Oflazer - History and Future of CS

If you think you know everything, then you know nothing and are too stupid to see it. His words, not mine. In order to truly succeed at computer science, you need to understand the fundamentals: mathematics, theoretical computer science, algorithms, data structures, hardware, and programming. But you also need to be open to learning more, and the more you learn, the more you'll realize you don't know.

If you don't believe Professor Oflazer, listen to the old and wise John Cleese:

The world today is ephemeral and constantly changing, especially in the world of technology. It is by constantly learning more and more about a particular topic that we become experts. His exact words were:

An expert is someone who knows more and more about less and less, until s/he knows everything about nothing

That said, it's just as important to understand where computer science is coming from and where it's going. Computers keep getting exponentially faster memory and processing power, and CS keeps developing better systems and techniques to solve the problems we face in today's world. In order to stay relevant in such a field, a good computer scientist should have an open mind, common sense, and the ability to analyze and communicate. They also need the ability to abstract and synthesize problems. Most importantly, though, they need to be patient. Not at all limited to the field of Computer Science, there's a lot to be learnt and a lot more to be discovered. Put in the time. For more on this, read Peter Norvig's "Teach Yourself Programming in Ten Years."

While the slides alone don't do the talk justice, you can check them out here.

Christos Kapoutsis – Theory of Computation

The Theory of Computation is the scientific study of the properties of computation. It’s a branch of theoretical CS that focuses on the efficiency of algorithms based on a model of computation. More specifically, it involves abstracting “the computer” into mathematical models that accurately approximate real-world computing abilities, which are in turn used to study “natural, man-made or imaginary” computational problems. The most common model of computation is the Turing Machine, as it is robust yet simple and easy to use.

The field in essence defines the fundamental capabilities and limitations of computers, by delineating the state of things today and building on top of it using mathematical proofs and theoretical intuition, as is a common convention with most scientific fields. What is different, however, is that while most other fields involve the study of already existing phenomena, the Theory of Computation also involves the creation of such phenomena. In an ever-evolving technological world, the field concerns itself with probing the feasibility of faster, more persistent and efficient hardware and software, and with the requirements for creating such improved systems.

The field breaks down into three sub-fields, namely: automata theory, computability theory, and complexity theory. Automata theory is the sub-branch of the Theory of Computation that involves abstracting physical machines, in addition to the problems solved on those machines, for scientific study and research. These abstracted “computers” are referred to as automata. They are mathematical representations that take in a string of symbols (an input word) and either accept or reject it. The above-mentioned “Turing Machine” is an example of an automaton.
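
To make the accept/reject idea concrete, here is a minimal sketch of a much simpler automaton than a Turing Machine: a deterministic finite automaton. The example (in Python, with states, alphabet, and test strings of my own choosing, not from the lecture) accepts binary strings that contain an even number of 1s:

```python
# Minimal sketch of a deterministic finite automaton (DFA).
# This illustrative automaton accepts binary strings with an even number of 1s.

# States: "even" (accepting) and "odd"; transitions on the symbols '0' and '1'.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}
ACCEPTING = {"even"}

def accepts(word: str) -> bool:
    """Run the automaton on the input word and report accept/reject."""
    state = "even"  # start state
    for symbol in word:
        state = TRANSITIONS[(state, symbol)]
    return state in ACCEPTING

print(accepts("1011"))  # False: three 1s
print(accepts("1001"))  # True: two 1s
```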

Computability theory, on the other hand, is a conceptual sub-section of the Theory of Computation that seeks to answer whether a problem can be solved. By using theoretical intuition and mathematical proofs, the sub-field abstracts away from physical computing, allowing the feasibility of solutions to be proven or disproven without any actual computation.

Once we know that a problem is solvable, determining the difficulty or ease with which it can be solved is just as important. This is where complexity theory comes in. The sub-field focuses on the efficiency of problem solving, taking into consideration both the time (time complexity) and the memory (space complexity) required to solve the problem.
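
As a rough illustration of the time-complexity idea (my own example, not from the lecture), the sketch below counts the steps a linear search and a binary search take to find the same element in a sorted list:

```python
# Sketch: counting steps to contrast linear search, O(n), with binary
# search, O(log n), on the same sorted data. The data and target are
# made up purely for illustration.

def linear_search(data, target):
    steps = 0
    for i, value in enumerate(data):
        steps += 1
        if value == target:
            return i, steps
    return -1, steps

def binary_search(data, target):
    steps, lo, hi = 0, 0, len(data) - 1
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            return mid, steps
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
print(linear_search(data, 999_999)[1])  # 1,000,000 steps
print(binary_search(data, 999_999)[1])  # about 20 steps
```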

Questions for the Speaker:

  1. Is your research useful if it does not eventually solve the problem you sought out to solve in the first place?

    Prof. Kapoutsis believes that research is important because even if it does not lead you to find what you were looking for, it will still likely lead you to learn something else. Gaining a better understanding of a "difficult" problem is also beneficial not only to you as a researcher, but also to your community, even if the research problem is not solved. Eventually it will be solved (or at least mathematically proven impossible), so you can move on to other things.
  2. As a researcher and professor in the field, what do you think is the most appealing aspect of Theoretical CS?

  3. The buzzword question: Do you think quantum computing will help figure out NP-complete problems?

Sources:

  1. Book: Introduction to Theory of Computation
  2. Article: A Brief Introduction to the Theory of Computation
  3. Video: Introduction to Theory of Computation - YouTube

Ryan Riley - Security

Cybersecurity involves the protection of physical systems and data from damage or theft. Often, it involves securing connected devices or systems in order to prevent external access by unauthorized individuals. With the growing need for networking of devices, ranging from large computing systems such as “the cloud” to small Internet of Things (IoT) devices, the need for more secure systems has never been greater, as the average number of entry points into a network steadily increases.

In order to prevent the vulnerabilities of a computer system from being exploited, it is necessary to secure the hardware, the software, and the data. Regarding hardware, it is paramount to create physical systems that are not vulnerable even when physical contact with the device is obtained. Backdoors into hardware allow unauthorized parties access to software or data. For example, a hacker can make a physical modification to the system, particularly to its circuitry, allowing them to bypass software authentication or access data on its disks.

Furthermore, software implementations should encourage the safe transfer of data using techniques such as encryption and hashing, where the former is reversible and the latter is not. Software also has a role in alerting its users to possible vulnerabilities through prompts and notifications. In other words, it should protect users from their own actions, which would otherwise compromise their security as well as that of others around them.
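
Hashing is what makes safe password storage possible (see the "salted hash" article in the sources below). As a minimal sketch using only Python's standard library (the salt size and iteration count are my own illustrative assumptions, not recommendations from the lecture), storing and checking a salted password hash might look like this:

```python
# Minimal sketch of salted password hashing with Python's standard library.
# Only the salt and the digest are stored, never the password itself.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage."""
    salt = os.urandom(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-hash the attempt with the stored salt and compare digests."""
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```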

On matters of data, an important issue to consider is “big data” and politics, and how they interact. The recent use of Facebook data to influence politics, followed by the inadequate questioning of Facebook CEO Mark Zuckerberg, showed the lack of technical know-how in a significant number of policy-making bodies. How involved the government should be in the computing industry, and how we ensure that it is well versed in the fundamental facts it needs to make decisions, continue to be contentious issues. Policies and regulations need to be passed to ensure that companies that handle users’ data meet a certain standard with regard to information security.

Questions for the Speaker:

  1. What do you think we should do to improve politics and the policies that go with it, when it comes to cyber security?

  2. How do you keep yourself updated with what’s happening in the field?

  3. The buzzword question: Do you think machine learning/AI or quantum computing is an upcoming threat to cyber security?

Sources:

  1. Article: Cybersecurity
  2. Paper: A Primer on Hardware Security: Models, Methods and Metrics
  3. Article: Why salted hash is as good for passwords as for breakfast
  4. Paper: Cybersecurity thoughts and issues from a political perspective

Mohammed Hammoud – Cloud Computing

Cloud computing involves the sharing of computing resources —usually over the internet— in order to achieve economies of scale. Sharing computing resources often means that computing is cheaper and more accessible. In essence, the cloud is a scalable network of interconnected computers or IT resources that are decentralized and can be accessed remotely, usually through a “pay-as-you-go” payment model.

While the internet allows access to publicly shared networked resources, cloud computing is often managed by private entities. Often, third-party cloud services offer computing and storage as a service, allowing other organizations to reduce the initial costs of setting up servers, as well as the burden of maintaining them.

For smaller businesses, cloud computing significantly abstracts away the IT infrastructure components of their operations, allowing them to focus on their core business. That is to say, it reduces the barrier to entry for individuals by making storage and bandwidth accessible.

Other advantages of cloud computing include scalability, international reach, and reliability. Looking at scalability, for example: when needed, you can multiply your server space within a few clicks. If you were running private servers, you would need to plan ahead to purchase hardware, set up the servers, and “merge” them with your current setup. In regards to international reach, cloud computing services also often offer servers in other countries, resulting in faster international access to your online product. They will also usually guarantee uptime, and be faster at responding to any downtime than an in-house IT team would be able to. They’re therefore often more reliable.

Cloud computing services are often categorized into three sub-fields, namely: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). IaaS is the most commonplace category of cloud computing, involving simply renting IT infrastructure (i.e. storage, operating systems, and computing). PaaS takes IaaS further by creating an environment where developers can create, test, and deploy software, thus abstracting away server, storage, and database management. SaaS, on the other hand, is the offering of developed software over the internet rather than through “traditional” downloads. This makes updating and patching easier.

Questions for the Speaker:

  1. As cloud computing becomes less and less of a buzzword, what are some interesting recent innovations in the field?

  2. What does cloud computing at a University look like without the large server facilities and big data at a large company like Google? Isn't such an environment necessary for cloud computing research?

  3. Is edge computing going to replace cloud computing?

Sources and more info:

  1. Microsoft Azure - What is Cloud Computing?
  2. Wikipedia - Cloud Computing
  3. YouTube - Cloud Computing

Giselle Reis - Programming Languages

Programming languages allow programmers to give a machine a set of instructions in order to achieve a desired result or output. Just like any other language, they are a way to communicate (with machines). Furthermore, they use a standard or commonly accepted vocabulary and grammar. These usually vary from one programming language to another, though languages often share similarities in their general structure.

Typically, when talking about programming languages, we’re referring to higher-level languages such as Python, JavaScript, Java, C++, and C. These are built on assembly languages, which are in turn built on machine languages. Higher-level languages are a lot more human-readable than assembly languages. Assembly, however, is easier to program in than machine language, because a programmer can substitute names for numbers. Machine languages, on the other hand, are purely numerical. Some higher-level languages, such as C and C++, give you greater access to lower-level attributes of the machine, such as memory and drivers, than others do, though often at the expense of ease of use and convenience.

Over the years, numerous programming languages have been created, and the number is still growing. These languages need a standard way of “tracking” them. Conventionally, most popular languages have a specification document that allows a standardized implementation of the programming language. Nonetheless, some languages are not standardized but are rather variations or extensions of other languages. An example of this is C++, which was simply an extension of the C programming language before becoming officially standardized.

For a programming language to be run by the machine, it needs to be converted into machine language in one of two ways: compiling or interpreting. A compiler converts a program into machine language before the user runs it, while an interpreter translates the program into machine code in real time during execution.
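
In practice, many languages mix the two approaches. As a small sketch (my own example, not from the lecture), standard Python first compiles source code to an intermediate bytecode, which its virtual machine then interprets; the standard-library dis module makes that intermediate form visible:

```python
# Sketch: Python mixes the two models. Source code is first compiled to
# bytecode, and that bytecode is then interpreted by the Python virtual
# machine. dis.dis() prints the bytecode of a function.
import dis

def add(a, b):
    return a + b

dis.dis(add)      # shows the low-level instructions (names vary by Python version)
print(add(2, 3))  # 5
```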

Questions to the speaker:

  1. Is there a standard formula for creating new programming languages?
  2. As a teacher, is there a common bug in code/misconception that you often find in students’ work?
  3. At what point should you consider yourself an “expert” programmer?

Want to read more on this?

  1. Interpreter vs Compiler
  2. Wikipedia: Higher level programming languages
  3. Standardization of Programming Languages

Saquib Razak – Embedded Systems

An embedded system is a smaller hardware or mechanical system housed in a larger computing machine. Conventionally, it has a single, exclusive function or purpose within the larger system. Embedded systems are typically physically small and therefore have limited memory and processing power, which may make them difficult to use. Nonetheless, in comparison to larger computers or chips with non-exclusive functions, embedded systems require significantly less electrical power, space, and manufacturing cost.

Furthermore, their limited functionality can be overcome by aggregating multiple embedded systems into networks in order to better manage them at the unit and network levels, providing a much better service than any individual embedded system could offer. As they are dedicated systems, it is also much easier to optimize their manufacturing, increasing the dependability of the devices while minimizing the cost of production.

Generally, embedded systems can be broken down into two categories, namely: ordinary microprocessors (μP) and microcontrollers (μC). The difference between the two is that microprocessors use external memory and peripherals, while microcontrollers have them on the chip. As a general rule of thumb, microcontrollers require less power and space, and are cheaper than their counterparts, since their peripherals are on-chip.
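
To make the microcontroller side concrete, here is a minimal sketch (my own, not from the lecture) of the classic “blink an LED” program, assuming a board that runs MicroPython; the pin number is board-specific and purely illustrative:

```python
# Minimal MicroPython sketch for a microcontroller (assumption: a board
# such as an ESP32 running MicroPython; pin 2 is illustrative and
# board-specific). It toggles an LED, the "hello world" of embedded systems.
import time
from machine import Pin  # MicroPython's on-chip peripheral API

led = Pin(2, Pin.OUT)    # configure GPIO pin 2 as a digital output

while True:
    led.value(1)         # LED on
    time.sleep(0.5)
    led.value(0)         # LED off
    time.sleep(0.5)
```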

Today, embedded systems are ubiquitous and are found in nearly all electronic computing machines, ranging from automotive systems to toasters, given that multiple systems need to work together to achieve a given task. Furthermore, as we continue to develop new technologies — particularly in the Internet of Things (IoT) — more and more embedded systems will be needed. In fact, the embedded systems industry, which was valued at 154 billion in 2015, is expected to be worth 259 billion by 2023 (Source 3).

Questions for Speaker

  1. With Moore's law having hit a wall, what's next for embedded systems?
  2. Is there potential for embedded systems to grow smaller?
  3. What sort of jobs are involved in the embedded systems field?

Sources and More info

  1. Tech Target: Embedded System Definition
  2. Tutorials Point: Embedded Systems Overview
  3. Engineer's Garage: Difference Between Microprocessor and Microcontroller
  4. Market Stats: Global Embedded System Market.

Gianni Di Caro – Robotics and AI

The field of robotics involves the creation of automated mechanical and electrical machines that are able to function with autonomy. Artificial intelligence is a computer’s ability to “think” as a human being would. The merging of these two fields creates artificially intelligent robots that can interact with the real-world environment as human beings would.

The robotics field is characterized by the research, development, building, and use of robots, as well as the mechanisms that allow them to function, including but not limited to sensors and computing chips. Artificial intelligence, on the other hand, studies how a computer can take input from its environment in the form of data and act on that data in order to achieve a certain goal. While the field is still in its early years, AIs should eventually also be able to adapt to their environment by altering themselves.

The applications of robotics are diverse! They range from manufacturing to assistance in daily life. When robots interact with people on a daily basis, machine “intelligence” is necessary, as the tasks vary as greatly as the environments in which they’re executed. The computer science field of artificial intelligence therefore tries to create “general” machine intelligence that allows robots to function in a range of diverse and previously unseen environments. To do this, AI research looks into how algorithms can mimic humans’ ability to maneuver through different environments by creating models of computing that resemble the human brain. As we don’t have a deep understanding of how the human brain works, AI research attempts to hypothesize about how and why we learn and think.

As the field of AI continues to mature, robots will certainly play a greater role in our daily lives. In the near future, robotic applications will gradually shift from their current role in industry and science into daily life, similar to how computers did between their creation and today.

Questions for Speaker

  1. What is to come for AI?
  2. What are the prospects for swarm robotics in the near future?
  3. Do you think leaving robots to take care of the elderly is unethical?

Sources and Further Reading:

  1. How Stuff Works - Robotics and AI
  2. Unesco - Ethics of Robotics
  3. Wikipedia - Robotics
  4. Investopedia - AI