Flavio's Media Storage

Lecture 7

Computer Networks

Many people believe that the word "network" refers only to the Internet. However, the concept of Computer Networks involves much more than the Internet alone.

Some may consider Computer Networks (CN) a sub-discipline of electrical engineering, but the concept, and most of the research around it, falls under Computer Science.

In basic terms, CN is the concept of various machines connected to each other and sharing information with one another. Yes, the Internet is a part of it, but it is only a means through which CN happens; it is neither the reason why networking happens nor an explanation of how it works.

"Network" is a very broad term, and networks can be classified in various ways, including by the medium over which data is transported and by the communications protocols used.

For data to be transferred, there must be a medium to transport the information. One of the most widely used options is wired technology: the idea that machines are directly connected to each other through some kind of cable. The type of cable can vary, however.

Wireless technology is also very widely used to transfer data. The best example is Wi-Fi, which most of us use in our daily lives when going to work or to college. This technology works through communication between machines via radio waves: each machine transmits waves carrying certain information, and the machines communicate with each other to share it. Wireless communication can also happen through terrestrial microwaves, satellites, wireless LANs and so on.

Protocols are extremely important because they determine the rules and formats by which data is exchanged. Some of the main protocols used are Ethernet (a hardware and link-layer standard) and the Internet protocol suite (TCP/IP).

Another important way to classify CN is by the scale at which data is shared. Small-scale (personal area) networks occur when the connected devices are very close together and the amount of information shared is very small, such as a computer and a printer in your house.

Local area networks may cover a whole building, for example the wireless internet connection in your university building or in the building where you work. The amount of information transferred is much larger.

Lastly, there are networks that can cover a whole metropolitan area (metropolitan area networks) or even wider spaces (wide area networks). This type of network is available to many people at the same time, and so it transfers a lot of data.

Computer Networks also works closely with computer security and cloud computing in order to share information and secure it.

by Flavio Fenley

---------

Lecture 7

Computer Networks - Revised by Reham Shaikh

Many people believe that the word "network" refers only to the Internet. However, the concept of Computer Networks involves much more than the Internet alone.

Some may consider Computer Networks (CN) a sub-discipline of electrical engineering, but the concept, and most of the research around it, falls under Computer Science.

In basic terms, CN is the concept of various machines connected to each other and sharing information with one another. Yes, the Internet is a part of it, but it is only a means through which CN happens, not the reason why it does or how it does.

Computer Networks also works closely with other areas of Computer Science, such as Computer Security and Cloud Computing, in order to share information and secure it.

"Network" is a very broad term, and networks can be classified in various ways, including by the medium over which data is transported and by the communications protocols used.

For data to be transferred, there must be a medium to transport the information. One of the most widely used options is wired technology: the idea that machines are directly connected to each other through some kind of cable. The type of cable can vary, however.

Wireless technology is also very widely used to transfer data. The best example is Wi-Fi, which most of us use in our daily lives when going to work or to college. This technology works through communication between machines via radio waves: each machine transmits waves carrying certain information, and the machines communicate with each other to share it. Wireless communication can also happen through terrestrial microwaves, satellites, wireless LANs and so on.

Protocols are extremely important because they determine the rules and formats by which data is exchanged. Some of the main protocols used are Ethernet (a hardware and link-layer standard) and the Internet protocol suite (TCP/IP).
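As a rough illustration of two programs agreeing on a protocol, the following Python sketch sends a message over TCP on the local loopback interface and echoes it back (the message contents and function names here are invented for the example):

```python
import socket
import threading

def echo_server(server_sock):
    # Accept one connection and send back exactly what was received.
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Bind a TCP socket to an ephemeral port on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Run the server side in a background thread.
t = threading.Thread(target=echo_server, args=(server,))
t.start()

# Client side: connect, send a message, and read the echo.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, network")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # hello, network
```

Both ends only understand each other because they follow the same rules: TCP's ordering and delivery guarantees, plus our informal "echo" convention. Agreeing on those rules in advance is exactly what a protocol provides.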

Another important way to classify CN is by the scale at which data is shared. Small-scale (personal area) networks occur when the connected devices are very close together and the amount of information shared is very small, such as a computer and a printer in your house.

Local area networks may cover a whole building, for example the wireless internet connection in your university building or in the building where you work. The amount of information transferred is much larger.

Lastly, there are networks that can cover a whole metropolitan area or an even wider space. This type of network is available to many people simultaneously and so can transfer a lot of data.

--------

Questions:

1-How can networks be used against the user?

2-What is some of the research on Computer Networks at CMUQ?

3-What are some of the future modifications for networks?

---------

Resources:

http://en.wikipedia.org/wiki/Computer_network

http://www.tecschange.org/networks/network-syllabus.html

http://www.functionx.com/networking/index.htm

http://compnetworking.about.com/od/basicnetworkingconcepts/Networking_Basics_Key_Concepts_in_Computer_Networking.htm

============================================

 

Lecture 6

Computer Security

In an age in which computers are used for nearly every single task, one question always arises: is it secure? Computers share a lot of information with each other.

Most of this information is quite personal; things that we do not want to share with others, such as bank account and contact information. Because of this security issue, a branch of Computer Science was created to research and come up with methods to increase security when sharing information. This branch is called Computer Security.

The branch of Computer Security mainly targets networks. Its objectives include protecting personal information from theft. It also looks into preventing disasters that could result in the loss of important information.

While trying to offer protection, researchers in this area are aware that they cannot simply lock the information away. It is extremely important that users have access to their own information without any sort of problem.

There are mainly four approaches to this subject. The first is to trust all software to abide by a security policy even though the software itself is not trustworthy. The second is to trust all software to abide by a security policy, where the software has been validated as trustworthy. The third is to trust no software but enforce a security policy with mechanisms that are themselves not trustworthy. The last is to trust no software and enforce an effective security policy with trustworthy mechanisms.

In most cases, the first option is used. This is because most users do not like to spend their time searching for software capable of enforcing effective security policies, or they do not wish to spend money on acquiring such software. The problem, however, is that once users start to share their information on the network, the software that should have been verifying and keeping track of that information fails to do so. Because of that, the user ends up suffering a great loss of important personal data.

Without Computer Security it is impossible to determine whether the place where the user is sharing information is secure, or to track the data. Most importantly, if the data that the user shares online is somehow destroyed, the user would never have access to it again unless Computer Security is involved.

It is always important to keep track of where one shares his or her information. Always.

-----------------

Lecture 6

Computer Security revised by Reham Sheikh

In an age where computers are used for nearly every task, one question always arises – Is this usage secure?

Computers share a lot of information between each other. Most of this information is quite personal; things that we do not want to share with others, such as bank account and contact information. Because of the issue of security, a branch of Computer Sciences was created in order to research and to try to come up with methods to increase security when sharing information. This branch is called Computer Security.

Computer Security mainly targets networks and protecting personal information from theft. It also looks into preventing disasters that could result in the loss of important information.

While trying to offer protection, researchers in this area are aware that they cannot simply lock the information away. It is extremely important that users have access to their own information without any sort of problem.

There are mainly four approaches to this subject. The first is to trust all software to abide by a security policy even though the software itself is not trustworthy. The second is to trust all software to abide by a security policy, where the software has been validated as trustworthy. The third is to trust no software but enforce a security policy with mechanisms that are themselves not trustworthy. The last is to trust no software and enforce an effective security policy with trustworthy mechanisms.

In most cases, the first option is used. This is because most users do not like to spend their time searching for software that is capable of enforcing effective security policies. Also, users do not wish to spend money on acquiring such software. The problem, however, is that once users start to share their information on the network, the software that should have been verifying and keeping track of that information fails to do so. Because of that, the user ends up suffering a great loss of important personal data.
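One concrete way software can "keep track" of information is by fingerprinting it with a cryptographic hash: if the stored fingerprint no longer matches, the data has been altered. A minimal Python sketch (the account data and function name are made up for the example):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 digest: any change to the data changes the fingerprint.
    return hashlib.sha256(data).hexdigest()

original = b"account: 1234, balance: 500"
stored = fingerprint(original)          # saved when the data is first written

tampered = b"account: 1234, balance: 9500"
print(stored == fingerprint(original))  # True: data is intact
print(stored == fingerprint(tampered))  # False: tampering is detected
```

A hash alone does not keep data secret; it only detects changes, which is one small piece of what Computer Security studies.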

Without Computer Security it is impossible to determine whether the place where the user is sharing information is secure, or to track the data. Most importantly, if the data that the user shares online is destroyed, the user would never have access to it again unless Computer Security is involved. It is always important to keep track of where one shares his or her information.

-------------

Questions:

How do you breach this security?

What are some examples of 'natural disasters'? And how do they happen?

Could you tell me some of the research that you do here in CMUQ?

Where did the idea of computer security originate?

--------------

Resources -

http://en.wikipedia.org/wiki/Computer_security

http://www.cert.org/homeusers/HomeComputerSecurity/

http://www.istl.org/02-fall/internet.html

http://onguardonline.gov/stopthinkclick

http://www.armor2net.com/knowledge/computer_security.htm

 

=========================================

 


Lecture 5

Data Mining -

In this week’s lecture Professor Fossati will be covering the topic of Data Mining. Unfortunately, many of us have no knowledge of what Data Mining is.

Data Mining is basically the idea of analyzing different types of information and summarizing all of it. After summarizing, this information is used to increase profit and cut costs. In Computer Science, Data Mining refers to software used to analyze data. This allows users to analyze several sets of data in a very short time, look at the information from various angles, and determine the relationships between the sets.

Data Mining, in Computer Science, is a relatively new interdisciplinary field. It only started being internationally recognized in 1989, at a conference hosted by the Association for Computing Machinery, and since then the same association has produced several annual academic journals.

Although very important, Data Mining is only one of several stages of a much larger process called Knowledge Discovery in Databases (KDD). The whole process is made up of five stages. The stages are, in order: Selection, Pre-processing, Transformation, Data Mining and Evaluation.

Meanwhile, within Data Mining itself, various tasks are performed during the process. The first is called Pre-processing, in which the program analyzes a multivariate data set. This is necessary because, before the data mining algorithms can be used, a target data set must be assembled, and pre-processing makes sure of that.

The other tasks that are performed during Data Mining are the following:

  • Anomaly Detection – Identifying data that needs further analysis
  • Association Rule Learning – Identifies what type of relationship is present between the data
  • Clustering – Checks for groups of similar structure in the data
  • Classification – Generalizing the data into known structures for easier use
  • Regression – Will try to create a function that will represent the data with the least percentage error
  • Summarization – It is basically the report that comes after all the tasks are performed

Data Mining is used in pretty much every aspect of the business world. That is because it allows users to organize their data in an easier and more efficient way. For big enterprises, the less time they spend trying to analyze a set of data, the more time they have to make decisions for the company.

It is also used in the world of science. When carrying out experiments and projects, many scientists make use of Data Mining to analyze their results, thus freeing them to spend their time interpreting those results.

As we can see, Data Mining is an extremely important field in the world of Computer Science that influences all of us on a day-to-day basis.

 

 

Summary Revised by Reham Shaikh:

Data Mining -

Data Mining is the process of analyzing different types of information and summarizing it. After summarizing, this information is used to increase profit and cut costs. In Computer Science, Data Mining software is used to analyze data. This allows users to analyze several sets of data in a very short time and lets them look at the information from various angles in order to determine the relationships between these sets.

Data Mining, in Computer Science, is a relatively new interdisciplinary field. The field only started being internationally recognized in 1989 at a conference hosted by the Association for Computing Machinery. Since then, the same association has produced several annual academic journals.

Although very important, Data Mining is only one of several stages of a much larger process called Knowledge Discovery in Databases (KDD). The whole process is made up of five stages. The stages are, in order: Selection, Pre-processing, Transformation, Data Mining, and Evaluation.

Meanwhile, within Data Mining itself, various tasks are performed during the process. The first is called Pre-processing, where the program analyzes a multivariate data set. This is necessary because, before the data mining algorithms can be used, a target data set must be assembled, and pre-processing makes sure of that.

The other tasks that are performed during Data Mining are the following:

  • Anomaly Detection – Identifying data that needs further analysis
  • Association Rule Learning – Identifies what type of relationship is present between the data
  • Clustering – Checks for groups of similar structure in the data
  • Classification – Generalizing the data into known structures for easier use
  • Regression – Will try to create a function that will represent the data with the least percentage error
  • Summarization – The report that comes after all the tasks are performed
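To make one of these tasks concrete, here is a minimal anomaly detection sketch in Python: it flags any value whose distance from the mean, measured in standard deviations (its z-score), exceeds a threshold. The readings and function name are invented for the example:

```python
import statistics

def zscore_anomalies(values, threshold=2.0):
    # Flag values lying more than `threshold` standard deviations from the mean.
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

readings = [10, 11, 9, 10, 12, 10, 11, 45]
print(zscore_anomalies(readings))  # [45]
```

Real data mining tools use far more sophisticated detectors, but the idea is the same: single out the data that "needs further analysis".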

Data Mining is used in almost every aspect of the business world. That is because it allows users to organize their data in an easier and more efficient way. For big enterprises, the less time they spend trying to analyze a set of data, the more time they have to make decisions for the company. It is also used in science when carrying out experiments and projects. Scientists make use of Data Mining to analyze their results, thus freeing them to spend their time interpreting those results.

As we can see, Data Mining is an extremely important field in the world of Computer Science that influences all of us on a day-to-day basis.

-----------------------

Questions:

- What is the future of data mining?

- What are the disadvantages and limitations of Data Mining?

- Where was the idea of Data Mining first introduced?

- What is some of the research on Data Mining at CMUQ?

-------------------------

Sources:

http://en.wikipedia.org/wiki/Data_mining

http://www.anderson.ucla.edu/faculty/jason.frand/teacher/technologies/palace/datamining.htm

http://www.thearling.com/text/dmwhite/dmwhite.htm

http://www.laits.utexas.edu/~norman/BUS.FOR/course.mat/Alex/

http://www.autonlab.org/tutorials/

http://www.the-data-mine.com/bin/view/Misc/DataMining

 

===============================================

 

Lecture 4

Math in Computer Sciences –


Up until the 20th century, Mathematics was the ultimate subject. It was impossible to perform any type of research or develop new technologies without the use of Mathematics. With the level of technology we have reached nowadays, however, it has become harder and harder to improve what we have through simple Math alone. In order to be able to advance again, the subject of Computer Science was created, and now we are once more capable of developing new systems efficiently.

Contrary to what many people believe, Computer Science did not come to substitute Mathematics. Computer Science is actually used together with Math. As a matter of fact, Computer Science would not exist without Math.

The binary number system is one of the main components of how computers work. This system uses the digits 0 and 1 to represent numerical values. By combining 0s and 1s in a logical order, computers are capable of performing the most basic of tasks. Furthermore, nearly all modern computers rely on this system; 64-bit Windows, for example, carries the number of binary digits it handles at once right in its name.

Now, since computers are capable of performing mathematical calculations better than most humans, it would seem safe to assume that learning math is useless. Wrong. If the programmer has little mathematical knowledge, the quality of the programs he creates will be extremely low. This is because creating a program requires writing algorithms, which the computer follows in order to perform the specific tasks the programmer wants. These algorithms must follow a logical order of thought that, in most cases, is also math based.

One way that Math is 100% associated with CS is Theoretical Computer Science, a subset of CS that focuses mostly on the abstract and mathematical aspects of the subject. Its scope involves the analysis of algorithms and the formal semantics of programming languages. With this, it is then possible to analyze data structures, parallel computation, cryptography and many other topics.

As we can see, Computer Science depends heavily on mathematics. Before creating a program, it is necessary to think about what you want to achieve and then work out how to solve the problem and reach that goal using mathematics. Only then will the programmer be capable of writing code that performs these tasks. Finally, the whole basic structure of computing relies mainly on mathematical terms.

 

 

Version Reviewed by Reham Shaikh -

 

Math in Computer Sciences –

Up until the 20th century, Mathematics was the ultimate subject. It was impossible to perform any type of research or to develop new technologies without the use of Mathematics. With the level of technology that we have reached nowadays, however, it has become harder to improve what we have through the use of simple Math. In order to be able to advance again, Computer Science was created, and now we are capable of developing new systems more efficiently.

However, contrary to popular belief, Computer Science did not come to substitute Mathematics but is actually used together with it. As a matter of fact, Computer Science would not exist without Math.

It is evident that Computer Science depends mostly on mathematics. Before creating a program, it is necessary to think about what you want to achieve, how you would solve the problem, and how you would achieve the goal using logic, most of which comes from mathematical thinking. Only then will the programmer be capable of writing code that orders the computer to perform these tasks. Finally, the whole basic structure of computing relies mainly on mathematical terms.

Now, since computers are capable of performing mathematical calculations better than most humans, it would seem safe to assume that learning math is useless. Wrong! If the programmer has little mathematical knowledge, the quality of the programs he creates will be extremely low. This is because creating a program requires writing algorithms, which the computer follows in order to perform the specific tasks the programmer wants. These algorithms should follow a logical order of thought that, in most cases, also needs to be math based.

The binary number system is one of the main components of how computers work. This system uses the digits 0 and 1 to represent numerical values. By combining 0s and 1s in a logical order, computers are capable of performing the most basic of tasks. Furthermore, nearly all modern computers rely on this system; 64-bit Windows, for example, carries the number of binary digits it handles at once right in its name.
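The repeated divide-by-two procedure behind binary representation can be sketched in a few lines of Python (the function name is our own):

```python
def to_binary(n):
    # Build the bit string by repeatedly dividing by 2 and collecting remainders.
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # newest remainder becomes the leftmost bit so far
        n //= 2
    return bits

print(to_binary(13))   # 1101  (8 + 4 + 0 + 1)
print(int("1101", 2))  # 13    (Python's built-in conversion back to decimal)
```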

One way that Math is 100% associated with CS is Theoretical Computer Science, a subset of CS which focuses mostly on the abstract and mathematical aspects of the subject. Its scope involves the analysis of algorithms and the formal semantics of programming languages. With this, it is then possible to analyze data structures, parallel computation, cryptography and many other topics.

-----

Questions:

How can Mathematics restrict the improvement of Computer Science?

If you wanted to write a program and the mathematical logic available at the time was not enough to produce it, what would be some of the requirements for producing a new mathematical logic?

Is there any other subject that could possibly be used to substitute mathematics in Computer Science? Why is that?

 

------

Sources:

http://wesnerm.blogs.com/net_undocumented/2009/02/unifying-math-and-computer-science.html

http://www.clarku.edu/departments/mathcs/

http://journals.cambridge.org/action/displayJournal?jid=MSC

http://www.webmaster-forums.net/webmasters-corner/how-do-you-rate-importance-mathematics-your-programming-skill

http://developers.slashdot.org/story/11/03/12/1651207/cs-profs-debate-role-of-math-in-cs-education

 

=============================================

 


 

Lecture 3

Programming Languages and Verification

Flavio Fenley
In the world of programming, there are two concepts we must always remember: Programming Languages and Programming Verification. Understanding both of these terms is more than essential for us computer scientists.

A programming language is a set of vocabulary and commands that makes the computer perform certain specific tasks. The term generally refers to high-level programming languages, such as C++, BASIC, COBOL, etc. Each and every one of them has its own specific terms and vocabulary. This means that if one tried to write a C++ program using the COBOL vocabulary, the program would simply not work (the compiler would reject it, so the system would never perform the task).

Languages are so essential to programming that without them there would be no computing. Once you turn your computer on, the system reads a number of different pieces of code at the same time in order to perform various actions. All of this code is written in some programming language, meaning that whatever you do on your computer, a programming language is involved.

With so many programming languages available to us at the moment, and with the importance that these languages have for us, one question always arises: what is the future of programming languages? With computers becoming faster and gaining more RAM, programmers will need more powerful programming languages that can perform different tasks to a higher degree in order to keep pace with this development. Because of that, many programmers and companies are betting their odds on a newer language called C#.

Writing programs is extremely important for the development of new technologies, or just for the usage of the current ones. However, it is definitely not easy to write a program. Various bugs (mistakes that force the program to crash) appear when writing it. Because of that, professionals came up with ways to debug their programs and software in the most efficient way, and that is called "Programming Verification".

Programming Verification is, in simple words, the use of mathematical skills and logical thinking to identify mistakes, all done by a program itself. In the search for methods to find mistakes more easily, programmers and researchers have found several ways to detect programming mistakes very efficiently. One of these methods is based on automata models. Computers use binary codes to specify each and every character; an automaton-based model takes all these binary symbols and reduces them to a simpler form, which then enables the user to analyze the program and prove that it works.

------

Version Revised by Reham Shaikh - Peer Tutor for writing

Flavio Fenley
In the world of programming, there are two points we need to comprehend: Programming Languages and Programming Verification. Understanding both of these terms is more than essential for us Computer Scientists.

A programming language is a set of vocabulary and commands that makes the computer perform specific tasks. The term generally refers to high-level programming languages, such as C++, BASIC, COBOL, etc. Each of them has its own specific terms and vocabulary. This means that if one tried to write a C++ program using the COBOL vocabulary, the program would simply not work (the compiler would reject it, so the system would never perform the task).

Languages are so essential to programming that without them there would be no computing. Once you turn your computer on, the system reads a number of different pieces of code at the same time in order to perform various actions. All of this code is written in some programming language, meaning that whatever you do on your computer, a programming language is involved.

With so many programming languages available to us at the moment, and with the importance that these languages have for us, one question always arises: what is the future of programming languages? With computers becoming faster and gaining more RAM, programmers will need more powerful programming languages that can perform different tasks to a higher degree in order to keep pace with this development. As a result, many programmers and companies are changing their old preferred language to a newer one called C#.

Writing programs is extremely important for the development of new technologies, or just for the usage of the current ones. However, it is definitely not easy to write a program. Various bugs (mistakes that force the program to crash) appear when writing it. Due to this, professionals came up with ways to debug their programs and software in the most efficient way, which is "Programming Verification".

Programming Verification is done by a program itself, and it is the use of mathematical skills and logical thinking to identify mistakes. In the search for ways to find mistakes more easily, programmers and researchers have found several methods for detecting programming errors very efficiently. One of these methods is based on automata models. Computers use binary codes to specify each and every character; an automaton-based model takes all these binary symbols and reduces them to a simpler form, which then enables the user to analyze the program and prove its ability to work.
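A finite automaton of the kind verification methods build on can be written directly in Python. This toy machine (our own example, not a real verification tool) reads a binary string and accepts it exactly when it contains an even number of 1s:

```python
def run_dfa(transitions, start, accepting, s):
    # Walk the machine one symbol at a time; accept iff we end in an accepting state.
    state = start
    for symbol in s:
        state = transitions[(state, symbol)]
    return state in accepting

# Two states over the alphabet {0, 1}: reading a 1 flips even <-> odd.
even_ones = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

print(run_dfa(even_ones, "even", {"even"}, "1001"))  # True: two 1s
print(run_dfa(even_ones, "even", {"even"}, "1011"))  # False: three 1s
```

Verification tools reason about machines like this exhaustively: because the number of states is finite, a property can be checked along every possible path the machine could take.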

------

Question:

In my research, I found out about a specific method based on automata; what does this method do specifically?

How can the creation of new languages improve the understanding of programming and the development of new technologies?

What was the first Program Verification method invented, and how does it differ from the ones used today?

In your own experience, what is the best programming language, and why is it the best compared to others?

If you were to develop a method for program verification, what would it be?

-------

Sources:

http://en.wikipedia.org/wiki/Programming_language

http://www.webopedia.com/TERM/P/programming_language.html

http://cplus.about.com/od/introductiontoprogramming/p/programming.htm

http://www.cs.utexas.edu/~boyer/jar.pdf

http://www.cse.yorku.ca/~franck/teaching/2000-01/3341/

http://www.ams.org/notices/200005/fea-kurshan.pdf

 

 

================================================

 

Lecture 2

Cloud Computing

Flavio Fenley

Cloud computing, nowadays, has various definitions, but one of them stands out: cloud computing is a service used to increase capability and capacity without investing in new infrastructure. In a way, cloud computing simply uses the internet to store data, documents and applications. This allows users and IT personnel to access their information from any computer, as the information is in the "cloud". Thus, cloud computing is simply a hosting system on the internet.

One example of cloud computing that is used in nearly every aspect of our daily lives is e-mail. Services such as Yahoo Mail and Gmail are among the best representatives of cloud computing in our age. Through e-mail, one sends information to the internet, and this information is then stored and made available for access from any given place.

According to one of the sources, cloud computing has three main characteristics which distinguish it from other types of hosting systems. Firstly, it is acquired on demand, often by the minute. Secondly, it is extremely elastic: the user determines how much of the service they need or want. Lastly, the service is fully supervised by the provider, so the user only needs to worry about receiving the information.

There are many advantages to using cloud computing today. One of them, of course, is that users are not required to carry their information around; the service does that for them. Another main advantage is that once users acquire the service, they do not need to worry about buying updates for the software. The provider is responsible for everything that happens within the service: if the provider releases an update, the user gains immediate access to it without any cost. If a cost is required, then the service the user is using is NOT cloud computing.

Another huge advantage of cloud computing is the significant shift in workload. Users, providers and computers are now required to do much less work to gain access to the information and the service itself, because the network handles most of the work. As a result, users in particular need to learn far less about the programs and how to handle them. All they need to know is how to use the cloud computing interface software, that is, a web browser.
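The workload shift described above can be sketched with two hypothetical functions: a "provider" side that does all the storage and processing, and a "client" side as thin as a web browser, which only knows how to request and display. All names and data here are made up for illustration:

```python
# Hypothetical sketch of the workload shift: the provider side does the
# heavy lifting; the client side only requests and displays the result.

CLOUD_DOCUMENTS = {"report": "quarterly results"}  # stored "in the cloud"

def provider_render(document_id):
    """Provider side: storage, processing and updates all live here."""
    raw = CLOUD_DOCUMENTS[document_id]
    return raw.upper()  # stand-in for whatever processing the provider applies

def client_request(document_id):
    """Client side: as thin as a browser; it just asks and shows."""
    return provider_render(document_id)

print(client_request("report"))  # QUARTERLY RESULTS
```

Nothing about the documents or the processing is known to the client, which is exactly why a plain web browser is enough of an interface.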

With cloud computing, gaining access to one’s information is much easier, and thus companies are making much more use of this new system to store and transport their information and documents. More importantly, cloud computing will be so effective that one will be capable of keeping track of one’s service and updating it as fast as one updates one’s information on social networking websites, such as Facebook.

Flavio Fenley

 

Summary Reviewed by Rehan Shaikh, in the ACR office -

Cloud computing, nowadays, has various definitions. Cloud computing is a service used to increase capability and capacity without investing in new infrastructure. In a way, cloud computing simply uses the internet to store data, documents and applications, thus allowing users and IT personnel to have access to their information from any computer. Therefore, cloud computing is simply a hosting system on the internet.

One example of cloud computing that is used in nearly every aspect of our daily lives is e-mail.  Services such as Yahoo and Gmail are among the best representatives of cloud computing in our age. What happens is that through e-mail one sends information to the internet, and this information is then stored, becoming available for access in any given place.

Cloud computing has three main characteristics which distinguish it from other types of hosting systems. Firstly, it is acquired on demand, often billed by the minute. Secondly, it is extremely elastic, which means that the user determines how much of the service they need or want. Lastly, the service is fully managed by the provider, and therefore the user only needs to worry about receiving the information.

There are many advantages to the use of cloud computing in our day. One of the most obvious advantages is that the user is not required to carry their information around; the service does that for the user. Moreover, once the user acquires the service, they do not need to worry about buying updates for the software. The provider is responsible for everything that happens within the service. If an update is made by the provider, the user will have immediate access to it without any cost. If an extra cost is required, then the service the user is using is not cloud computing.

Another huge advantage of cloud computing is the significant shift in workload. Users, providers and computers are now required to do less work to access the information and the service, because the network handles most of the work. Users, especially, are not required to know about the programs or how to handle them; all they need to know is how to use the cloud computing interface software (a web browser).

Because of cloud computing, people have easier access to their information, which is why companies are making more use of this system to store and transport information and documents. In addition, cloud computing will be so effective that one will be capable of uploading and downloading any document as fast as one updates one's information on social networking websites, such as Facebook.

Questions -

What do Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) mean, and what is each one of them used for?

What are the main characteristics of a hybrid cloud?

Compared to the simple service of cloud computing, what are the advantages and disadvantages of using a hybrid cloud?

If cloud computing presents many advantages to users, why would a company choose not to use this system?

Cloud2 is a new system that will allow faster interaction between the user and the data. However, what is the main technical difference between the original idea and Cloud2?

If the cloud computing system breaks down, how would users and providers be capable of retrieving their personal data? Or would they simply lose all of it?

Sources -

http://www.infoworld.com/d/cloud-computing/what-cloud-computing-really-means-031

http://www.wikinvest.com/concept/Cloud_Computing

http://searchcloudcomputing.techtarget.com/definition/cloud-computing

http://www.salesforce.com/cloudcomputing/

http://computer.howstuffworks.com/cloud-computing/cloud-computing.htm

========================

 

Class 3

Randy Pausch's Lecture on Time Management

 

The lecture is about how to manage your time properly in a way that will lead one to success. The following points are very important ideas presented in the lecture:

1) Time and money are very much alike. You should learn how to manage your time just as you manage your money. To make you think about that, calculate how much you are worth per hour.

2) "The time famine" - the simple idea that one never has enough time to do whatever they have to do. However, that is only caused by the fact that one does not know how to manage time properly. In fact, whenever you are going to manage your time, you should always plan as far into the future as possible and not look for a one-day solution.

3) If one keeps one's own material organized, one wastes less time trying to find it and thus ends up having more free time to do something else. Statistics show that an employee wastes around 2 hours a day trying to find their material. Organization goes hand in hand with time management.

4) Doing something the right way is not more important than doing the right thing. It is much more important to do the right thing, for then you won't waste time doing the wrong thing the right way, which will not serve you in the future.

5) Plans are very important, but so is knowing how to change your plan. However, it is impossible to change your plan if you do not even have one. Always have a plan, and always be ready to change it.

6) Grow the people around you. Do not think only about yourself. The boss of a company is the boss because he also spends his time growing his employees by teaching them how to work.

7) Always face the ugliest and hardest thing first, otherwise you won't be able to finish the challenge. For example, if you have to eat a number of animals and three of them are frogs, start with the frogs, and eat the smallest one last.

8) Know how to prioritize your work using Covey's quadrants. Do things that are important and due soon first. Then do things that are important but not due soon. Then do things that are not important but due soon, and finally do things that are not important and not due soon last.
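The ordering in point 8 boils down to a two-part sort key: importance first, then due date. A minimal Python sketch of that rule, with task names and flags invented purely for illustration:

```python
# Covey-quadrant ordering as a sort key: important beats not-important,
# and within each group, due-soon beats not-due-soon.

tasks = [
    {"name": "reply to email",    "important": False, "due_soon": True},
    {"name": "write thesis",      "important": True,  "due_soon": False},
    {"name": "submit lab report", "important": True,  "due_soon": True},
    {"name": "sort old photos",   "important": False, "due_soon": False},
]

# Python sorts tuples element by element, and False sorts before True,
# so negating the flags puts (important, due_soon) tasks first.
ordered = sorted(tasks, key=lambda t: (not t["important"], not t["due_soon"]))

for t in ordered:
    print(t["name"])
# submit lab report
# write thesis
# reply to email
# sort old photos
```

Note that this encodes the counter-intuitive part of the advice as well: important-but-not-due-soon work outranks unimportant-but-urgent work.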

All of the points listed above contain information from Randy Pausch's lecture on time management posted on YouTube.

--------------------------------------------------------------------------------

 

I am Flavio Fenley and I have created this website with the purpose of uploading information on several different topics related to both work and computers.

To contact me, email me on :

fafenley@gmail.com

and to find out when I am free, you can always check my calendar -

<iframe src="https://www.google.com/calendar/embed?src=fafenley%40gmail.com&ctz=Asia/Qatar" style="border: 0" width="800" height="600" frameborder="0" scrolling="no"></iframe>