The Development of Computers


Introduction

The use of computers brought profound changes to business, industry, and even to personal habits during the late 20th century. These changes are still developing, and will most likely bring even more profound changes to human life within the next few decades. Computers have made life much easier and more interesting. While they were once expensive to acquire and difficult to use, computers are now accessible all over the world with many more benefits than drawbacks. Computers help people take online coursework, complete homework, perform research, keep in touch and educate and entertain children. Computers make study convenient and global communication and socialization more interactive. Computers have helped researchers in nursing, medicine and technology to develop cures and innovation and to ensure accuracy. In effect, computers have continually evolved to meet the changing and increasingly complex needs of people and will continue to do so over the next twenty to thirty years.

The need for a computing tool led to devices like the abacus early in human history, and people have continued to look for ways to automate routine calculations. A number of ideas that developed during the 19th century led to the modern computer. For example, in 1801, Joseph Jacquard introduced punched paper cards as a template for looms that automated the weaving of intricate patterns. The Jacquard loom thus provided the important concept of programmability (Hook & Norman, 2001; Wojciech & Ivengar, 2002; Jamsa, 2011).

However, the Jacquard loom would not be the final technological innovation the century would offer. In 1822 Charles Babbage proposed a machine he called the Difference Engine that would be capable of automatically computing numbers and printing the results. Babbage began work on the engine but never completed it because of funding problems; his design was eventually built from his plans by the London Science Museum, with all parts completed in the year 2000. Babbage went on to propose the first general-purpose mechanical computer, the Analytical Engine, which included basic flow control and integrated memory. Again because of funding issues, this machine was not built in Babbage’s lifetime (O’Regan, 2012).

Computer innovation has always been aided by mathematics and by those rare minds who mastered the art and science of applying it. Ada Lovelace was a mathematician who assisted in the analysis of Babbage’s engine, and while working on a translation she provided the first algorithm encoded for processing by the machine. She noted that “the Analytical Engine weaves algebraic patterns just as the Jacquard-loom weaves flowers and leaves” (Fuegi & Francis, 2003, p. 19). She also proposed the idea of computer-generated music. Her contributions have proved enduring (Zhu & Huang, 2007).

By the end of the 19th century, a steady stream of innovations had advanced the foundations of computing. Herman Hollerith invented a method for recording data on a machine-readable medium in the late 1880s. He first tried paper tape but eventually settled on paper cards, much as Jacquard had. Hollerith invented a tabulator and a keypunch machine to prepare these cards, providing the foundation for the modern information processing industry. His punched cards were used to process data from the 1890 United States (US) Census by Hollerith’s company, which later became IBM. Other concepts that would assist in the development of computers also began appearing around this time, including the vacuum tube and Boolean algebra (O’Regan, 2012).

Programmable computers were the next important advance. The first programmable computer was the Z1, built by Konrad Zuse in his parents’ home in Germany between 1936 and 1938. This was the first electro-mechanical binary computer, and the first to be genuinely functional for computing as it is defined today. Around the same time, Alan Turing proposed the Turing Machine, providing the foundation for theories about computers and the fundamentals of how they should work. His theoretical machine printed symbols on paper the way a person would when following logical instructions (Zuse, 1993).

The advent of World War II brought a renewed focus on developing computing power to provide a technological advantage. The Colossus was the first electronic programmable computer, designed to help British code breakers read German encrypted messages; it was developed by Tommy Flowers and first demonstrated in 1943. The first electronic digital computer was the Atanasoff-Berry Computer (ABC), developed between 1937 and 1942 at Iowa State College. It used vacuum tubes to perform computations with binary math and Boolean logic, though it had no central processing unit (CPU). The ENIAC computer was developed by J. Presper Eckert and John Mauchly at the University of Pennsylvania around this same time and was completed in 1946; it used about 18,000 vacuum tubes and weighed about 50 tons. In 1949 the EDSAC became the first computer to store and run electronic programs, and that same year it ran the first computer game (O’Regan, 2012).

From the 1950s onward, computers began moving out of research laboratories and into commercial and eventually personal settings. The first computer company was founded in 1949 by J. Presper Eckert and John Mauchly, the inventors of ENIAC, and went on to release a series of UNIVAC mainframe computers; the UNIVAC 1101, delivered to the US government in 1950, both stored and ran a program from memory. Konrad Zuse’s Z4, begun in 1942, became the first commercial computer when it was sold to the Swiss Federal Institute of Technology in Zurich in 1950. In 1953 IBM introduced its first electric, mass-produced computer, and in 1956 the first transistorized computer was demonstrated at the Massachusetts Institute of Technology (MIT). IBM released the first portable computer in 1975; it weighed 55 pounds and had a CRT display, a processor and 64 KB of RAM. In 1976 Steve Wozniak designed the first Apple computer, and in 1981 IBM released its first personal computer, the IBM PC, which had a processor, 16 KB of memory and MS-DOS as its operating system (Hook & Norman, 2001).

During the period when these major advances were taking place in computer development, changes were also taking place in the business and regulatory landscape of the United States. One important development for the telecommunications industry was the breakup of the Bell System in the 1980s under the Reagan administration. The divestiture had its roots in 1974, when the US Department of Justice filed an antitrust lawsuit against AT&T. The breakup ended the monopoly that had provided local telephone service in the US and opened the way for new, innovative communications technologies, including wireless phones (Frum, 2000).

Before 1973, mobile communications were limited to radio car phones, but in that year Motorola demonstrated the first handheld mobile phone, which operated over a radio system. The first analog cellular system widely deployed in the US was the Advanced Mobile Phone System. In the 1990s, a second-generation (2G) system was developed that was digitally encrypted, reducing some of the security problems of the analog service, and in 1993 IBM released what was likely the first smartphone. 2G technology also allowed media to be used and transmitted through smartphones, and in the mid-2000s the development of 3G allowed mobile devices to access the Internet (Kling, 2010).

The rise of mobile technology revolutionized the development of computers, as the focus shifted to commercial applications for both personal and business communications. The advent of applications that handle business transactions means that present-day business is mobile and can be operated from a website, a mobile phone or a tablet. The rate of change has accelerated along with the commercial possibilities, as companies that manufacture electronic devices attempt to position themselves to take advantage of changes in culture and business, leading researchers to forecast even more profound changes to information technology in the near future (Rajkuman, Broberg, & Goscinski, 2011).

Objectives

The objectives of this paper are to investigate the history of computer technology, its present usage and its future possibilities.

Scope

The scope of this research includes the history of computers, how they are used in the present and what changes computing technology might bring in the near future. The research was conducted through a literature search and analysis that provides conclusions and implications for the future.

Methodology

A number of keyword phrases guided this research. A search for resources was performed using Google, Google Books and Google Scholar for terms including “computer,” “history of computers,” “history of computing,” “wireless communications,” “artificial intelligence” and “technological singularity.” The sources returned by the various searches were evaluated and prioritized according to their likely contributions to the objectives of the research, and a number of sources were discarded after evaluation as having little bearing on those objectives. Once assembled, the remaining sources were reviewed for pertinent information, which was incorporated into the paper along with citations.

Analysis

Current trends and research in computing can be extrapolated to forecast some of the changes that might take place in the future, including cloud computing, artificial intelligence and brain-machine interfaces. "Cloud computing" refers to a collection of concepts that involve computers connected through a real-time network such as the Internet. It is essentially a form of distributed computing over a network, which allows an application to run on one or on many of the connected computers at the same time. The term is also commonly applied to network-based services that appear to run on a virtual server that does not physically exist and can therefore be adjusted without affecting the end user. In practice, cloud computing means that the computer system becomes invisible: when people use a search engine, for example, it accesses a massive computer network that the end user never sees. Other applications are also available through the Cloud, including tools for working on a shared document or similar projects.
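
As a rough, hypothetical illustration of this idea (not any particular provider's API), the following Python sketch shows a caller using a single "virtual" service while a small dispatcher quietly forwards each request to whichever physical worker it selects; the worker names and the VirtualService class are invented for the example.

```python
import itertools

# Hypothetical pool of physical machines standing behind one virtual service.
PHYSICAL_WORKERS = ["worker-eu-1", "worker-eu-2", "worker-us-1"]


class VirtualService:
    """Illustrative stand-in for a cloud service: callers see one service,
    while each request is actually handled by whichever physical worker
    the dispatcher picks next."""

    def __init__(self, workers):
        self._cycle = itertools.cycle(workers)  # simple round-robin selection

    def handle(self, request):
        worker = next(self._cycle)  # the physical host stays invisible to the caller
        return f"{worker} processed: {request}"


if __name__ == "__main__":
    service = VirtualService(PHYSICAL_WORKERS)
    for query in ["search cats", "search dogs", "search birds"]:
        print(service.handle(query))
```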

Cloud computing shifts the focus to maximizing the use of shared resources. In Cloud applications, resources are generally shared by multiple users and dynamically reallocated among them based on demand. For example, a Cloud facility might serve European customers during their peak business hours and then reallocate resources to serve Asian users during the Asian peak. This approach not only maximizes the use of computing power but also conserves both environmental and economic resources.
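
A minimal sketch of this kind of demand-based reallocation, with made-up region names, peak hours and capacities, might look like the following; real schedulers respond to measured load rather than fixed time windows.

```python
# Sketch of demand-based reallocation: shift a fixed pool of 100 servers toward
# whichever region is in its business-hours peak. All numbers are illustrative.


def allocate(hour_utc: int) -> dict:
    """Return a simple split of the server pool for a given UTC hour."""
    if 8 <= hour_utc < 16:   # roughly European business hours
        return {"europe": 70, "asia": 20, "reserve": 10}
    if 0 <= hour_utc < 8:    # roughly Asian business hours
        return {"europe": 20, "asia": 70, "reserve": 10}
    return {"europe": 45, "asia": 45, "reserve": 10}


if __name__ == "__main__":
    for hour in (3, 10, 20):
        print(f"{hour:02d}:00 UTC ->", allocate(hour))
```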

One current area of research and development that is likely to change computing greatly in the future is artificial intelligence (AI). This branch of computer science focuses on developing intelligent hardware and software, with the aim of producing an intelligent agent or system that perceives the environment around it and takes actions based on those perceptions. AI research is divided into subfields along technical lines as well as by cultural and social concerns, and there are a number of different approaches and possible solutions to different problems (Langley, 2011).
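
The perceive-and-act cycle described above can be made concrete with a deliberately simple sketch; the thermostat-style environment, thresholds and rules below are invented for illustration and stand in for far more sophisticated AI techniques.

```python
import random


def perceive(environment: dict) -> float:
    """Read the part of the environment the agent can sense (here, a temperature)."""
    return environment["temperature"]


def decide(temperature: float) -> str:
    """Simple reflex rules that map a percept to an action."""
    if temperature > 24.0:
        return "cool"
    if temperature < 18.0:
        return "heat"
    return "idle"


def act(environment: dict, action: str) -> None:
    """Apply the chosen action back to the environment, plus a little outside noise."""
    if action == "cool":
        environment["temperature"] -= 1.0
    elif action == "heat":
        environment["temperature"] += 1.0
    environment["temperature"] += random.uniform(-0.3, 0.3)


if __name__ == "__main__":
    env = {"temperature": 27.0}
    for step in range(10):
        sensed = perceive(env)
        action = decide(sensed)
        act(env, action)
        print(f"step {step}: sensed {sensed:.1f} C, action={action}")
```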

Research is currently taking place on a number of possible interfaces to connect the human brain with electronic technology, variously called a direct neural interface, a brain-computer interface (BCI), a mind-machine interface (MMI) or a brain-machine interface (BMI). Such an interface is intended to augment, assist or repair human cognition or sensory-motor functions. This research began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation and has focused mainly on neuro-prosthetic applications that might restore damaged human functions such as sight, movement and hearing. As a result, the first neuro-prosthetic devices for implantation began to appear in the mid-1990s (Langley, 2011).

Applications for this kind of technology also include innovations such as neuro-gaming, which uses non-invasive BCI to interact with a game console without traditional controllers. These software applications can use inputs such as heart rate, facial expression, brain waves, pupil dilation and emotional state to effect changes in the game, producing a more realistic gaming experience. The Kinect is a recent introduction to gaming that lets users control games with gestures and movements perceived by the computer system through cameras and infrared detection (Schmorrow & Fidopiastis, 2011). This type of input also offers possibilities through the Cloud, where sensors in a room might record and respond to a user for purposes other than gaming.
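
As an illustration only, a neuro-gaming title might map a few such biosignal readings to an in-game adjustment along the lines of the sketch below; the signal fields, thresholds and difficulty scale are all assumptions rather than any real console or BCI API.

```python
from dataclasses import dataclass


@dataclass
class Biosignals:
    """A few of the player readings mentioned above (illustrative units)."""
    heart_rate_bpm: float
    pupil_dilation_mm: float
    alpha_wave_power: float  # relaxed brain activity, arbitrary 0-1 scale


def adjust_difficulty(current: int, signals: Biosignals) -> int:
    """Raise difficulty when the player seems calm, lower it when stressed.
    Thresholds are made up for the example."""
    stressed = signals.heart_rate_bpm > 100 or signals.pupil_dilation_mm > 6.0
    relaxed = signals.alpha_wave_power > 0.7 and signals.heart_rate_bpm < 75
    if stressed:
        return max(1, current - 1)
    if relaxed:
        return min(10, current + 1)
    return current


if __name__ == "__main__":
    level = 5
    for reading in (Biosignals(110, 6.5, 0.2), Biosignals(70, 4.0, 0.8)):
        level = adjust_difficulty(level, reading)
        print(reading, "-> difficulty", level)
```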

Another application under development is an initiative to create devices that would facilitate telepathic communication. This research focuses on using electrocorticography (ECoG) signals to interpret the vowel and consonant sounds embedded in either spoken or imagined words, which would provide the basis for brain-based communication without the medium of speech. Research on applications that make use of subvocalization is also taking place, building on early experiments in the 1960s that produced Morse code from alpha brain waves. Although communication through an electroencephalograph would be less accurate than through electrodes implanted in the brain, it has the advantage of being non-invasive (Bland, 2008).
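
In the spirit of those early alpha-wave Morse experiments, a toy sketch of the underlying idea might threshold a band-power signal into short and long bursts and read them as dots and dashes; the synthetic samples and thresholds below are invented, and real EEG or ECoG decoding is far more involved.

```python
def bursts_to_morse(power_samples, threshold=0.5, long_burst=3):
    """Convert a sequence of alpha-band power samples into dot/dash symbols.

    A run of consecutive samples at or above the threshold counts as one burst;
    bursts shorter than `long_burst` samples become dots, longer ones dashes.
    """
    symbols, run = [], 0
    for power in list(power_samples) + [0.0]:  # trailing 0 flushes the final burst
        if power >= threshold:
            run += 1
        elif run:
            symbols.append("-" if run >= long_burst else ".")
            run = 0
    return "".join(symbols)


if __name__ == "__main__":
    synthetic = [0.8, 0.9, 0.1, 0.7, 0.8, 0.9, 0.8, 0.1, 0.6, 0.1]
    print(bursts_to_morse(synthetic))  # prints ".-."
```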

Conclusions

The most interesting conclusion that can be drawn from these recent innovations and current research is that the environment in developed countries may soon track, and be ready to anticipate, every move an individual makes. Computerized sensing and broadcasting are already woven into much of the physical environment through smart devices such as watches and phones that people carry everywhere. Data flow into the networks behind these devices through sensors, surveillance cameras, radio-frequency identification (RFID) tags, global positioning systems (GPS), unmanned aerial vehicles and geo-tagged media posts. In the near future, these data will be integrated into services that will greatly change the way developed countries handle both personal and commercial business.

There was a fairly significant tipping point toward this type of environment between 2008 and 2009 when the number of devices connected to the Internet outnumbered the human population for the first time (Carroll, Kotzé, & Van der Merwe, 2012). Data brokerage companies are ready to sift and analyze the available data for the purpose of demographic analysis and commercial recommendation.

Another consideration is the ability of governments to track and monitor citizens. Revelations in 2012 and 2013 about the scope of the US National Security Agency’s (NSA) wire-tapping and surveillance activities have led to concerns about the use of data by the US government. The NSA was found to be monitoring not only the personal phones of foreign state leaders but also the private communications of US citizens. These disclosures have raised questions about privacy and civil rights for the future.

Implications

Advances in artificial intelligence have led to discussion of a technological singularity, often called The Singularity: a theoretical moment at which artificial intelligence surpasses human intelligence. This moment is expected to bring radical changes to human civilization and possibly to the nature of human beings themselves, as such capabilities might be integrated into the human brain (Eden, Moor, Søraker, & Steinhart, 2013).

References

Bland, E. (2008). Army developing ‘synthetic telepathy’. Discovery News.

Carroll, M., Kotzé, P., & Van der Merwe, A. (2012). Securing virtual and cloud environments. In I. Ivanov et al. (Eds.), Cloud computing and services science (Service Science: Research and Innovations in the Service Economy). Berlin: Springer Science.

Eden, A., Moor, J., Søraker, J., & Steinhart, E. (2013). Singularity hypotheses: A scientific and philosophical assessment. New York, NY: Springer.

Frum, D. (2000). How we got here: The '70s. New York, NY: Basic Books.

Fuegi, J., & Francis, J. (2003). Lovelace & Babbage and the creation of the 1843 'notes'. Annals of the History of Computing, 25(4), 18–26.

Hook, D., & Norman, J. (2001). Origins of cyberspace: A library on the history of computing, networking and telecommunications. Norman, CA: History of Science.

Jamsa, K. (2011). Cloud computing. Burlington, MA: Jones & Bartlett Learning.

Kling, A. (2010). Cell phones. Farmington Hills, MI: Lucent Books.

Langley, P. (2011). The changing science of machine learning. Machine Learning, 82(3), 275–279.

O’Regan, G. (2012). A brief history of computing. New York, NY: Springer.

Rajkuman, B., Broberg, J., & Goscinski, A. (2011). Cloud computing: Principles and paradigms. New York, NY: John Wiley & Sons.

Schmorrow, D., & Fidopiastis, C. (2011). Foundations of augmented cognition. Berlin: Springer.

Wojciech, C., & Ivengar, A. (2002). Internet technologies, applications and societal impact. Berlin: Springer.

Zhu, Z., & Huang, T. (2007). Multimodal surveillance: Sensors, algorithms, and systems. Boston, MA: Artech House.

Zuse, K. (1993). The computer – My life. Berlin: Springer-Verlag.