Saturday, February 27, 2010

Digital World Paper

Bolter and Gromala tell us that we are in a digital revolution. Our very perceptions of the world are influenced by the digital activities in which we are engaged. The digital world is not a passive one; we are actively part of our electronic surroundings:
Even when we are performing the mundane tasks of information processing, we always bring part of ourselves to the digital applications with which we interact. Digital applications are never fully transparent; they always reflect the user. (Bolter, 62)

From our choices of music to the software we use, our transactions in this electronic world reveal something about ourselves. Many of our regular activities now pass through the digital world. When we want to get in touch with someone, we can send mail via the virtual world: we type a letter on the computer, and with a click of the "send" button the message reaches the receiver instantly. It is completely different from writing a message on paper, sealing it in an envelope, and possibly waiting days for a response. In the past, letter-writing was considered almost an art form, with time set aside for carefully composing our thoughts before sending them out. The letter writer was conscious that grammar, spelling, and correct punctuation mattered. Nowadays, people scribble out whatever they want in emails filled with abbreviations such as "lol" or "btw." We would appear to have replaced the long task of letter writing with quick, short bursts of email, a process very much like the one Neil Postman described in his book Amusing Ourselves to Death, where television and its constant barrage of short messages erodes our concentration. (This theme is also picked up in Walter Ong's Orality and Literacy, and the "snippet" mentality is addressed in the article "Is Google Making Us Stupid?") Did this happen overnight? Have computers, and by extension the Internet, become such an important part of our society that even political races are now fought in the digital world, where Obama is called the "wired president"? (Griggs)
The Internet was first developed by the U.S. Department of Defense to communicate electronically with its allied countries in the event of an attack from the Soviet Union and other communist nations. It comprised linked computers among the Department of Defense, major industrial corporations, research institutions at universities, and selected overseas sites. Bolter and Gromala state that at the beginning, "The Internet once belonged exclusively to the Structuralist, a community composed mostly of graduate students and professors in computer science, who seldom ventured outside their cubicles" (p. 3), but today it can be found worldwide and serves multiple purposes: it can speed along medical diagnoses, report on upcoming weather, broadcast live entertainment as well as breaking news, let you make purchases online, and lead you to millions of online books, many available for free. The designers were right about one thing: the World Wide Web would be similar to a magazine page with a good amount of information. The computer has now combined several media into one: we can watch clips of TV shows on YouTube and listen to radio shows and music via iTunes; the possibilities appear endless. Again, our wired president can get on YouTube: "…Obama on YouTube suits today's world, in which people want to be in touch with their president, or at least hear and see him, at times of their rather than the president's choosing….In the fast-changing 21st century, the biggest reassurance about information is knowing it's there." (Levinson, 66)
Computers have gone through many stages throughout history, from the abacus to the multi-component machines of today. The abacus, invented about 5,000 years ago in Mesopotamia, was used by merchants to keep records of trading transactions. Consisting of sliding beads, it is still in use today. It revolutionized business by allowing merchants not only to tabulate prices correctly but also to count their merchandise accurately. Thousands of years would pass, however, before the next great step in computation was made.
In 1642, Blaise Pascal invented what he called a numerical wheel calculator to help his father with his tax collecting duties. The device, which came to be called the Pascaline, was designed mechanically so that every time a digit wheel moved ten notches, the wheel to its immediate left moved one notch. The device had a capacity of eight digits. While useful, the Pascaline had its limitations: it could only add.
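The carry action just described, in which a wheel rolling past nine nudges the wheel to its immediate left by one notch, is easy to see in code. Here is a minimal Python sketch, written purely for illustration; the function name and the eight-wheel setup are my own, not from any source cited here.

def pascaline_add(wheels, amount, position=0):
    """Add `amount` to the wheel at `position`, propagating carries leftward."""
    # wheels[0] is the rightmost (ones) wheel, mirroring the machine's dials
    wheels[position] += amount
    while position < len(wheels) and wheels[position] > 9:
        carry, wheels[position] = divmod(wheels[position], 10)
        if position + 1 < len(wheels):
            wheels[position + 1] += carry  # the wheel to the left advances one notch
        position += 1
    return wheels

wheels = [0] * 8             # the Pascaline's eight-digit capacity
pascaline_add(wheels, 7)     # set the ones wheel to 7
pascaline_add(wheels, 5)     # 7 + 5 = 12: the ones wheel wraps and carries
print(list(reversed(wheels)))  # [0, 0, 0, 0, 0, 0, 1, 2]

Adding 5 to a wheel already showing 7 wraps the ones wheel to 2 and carries 1 to the tens wheel, exactly the behavior Pascal's gears produced.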
In 1694, a German mathematician and philosopher, Gottfried Wilhelm von Leibniz, improved the Pascaline by creating a machine that could also multiply. While an improvement, these calculators were not widely used. "It wasn't until 1820, however, that mechanical calculators gained widespread use." (JT)
A Frenchman by the name of Charles Xavier Thomas de Colmar invented what he called the arithmometer, which could perform the four basic mathematical functions: addition, subtraction, multiplication, and division. His device remained in use until World War I.
The real story of computers as we know them today starts with Charles Babbage. In 1822, Babbage theorized that a "Difference Engine," powered by steam, could perform many calculations using a stored program. Babbage worked on the design of the Difference Engine for ten years and then decided to go to the next level: he proposed an "Analytical Engine," again powered by steam and using stored programs. Perforated cards would contain the operating instructions, a "store" of 1,000 numbers of up to 50 digits each would be housed in the engine, and the engine would print out its results. It was never constructed because it involved over 50,000 parts, but the essential features of the intended invention are very similar to what a modern computer does. Babbage's great supporter and collaborator in this project was Augusta Ada King. So influential were her help and creativity that the Department of Defense later named a programming language, Ada, after her. And it is very interesting to note the significant role women eventually played in the development of modern-day computers and programming.
An American inventor named Herman Hollerith found a way to store data on cards, which proved very useful in tabulating data for the U.S. Census. Officials at the Census Bureau were concerned about the time it would take to count the results of the decennial census; the 1880 census had taken seven years to count, and officials feared the 1890 census would take much longer because the population had grown so much. Hollerith devised a method in which a single punch in a card represented one number and any combination of two punches represented one letter; as many as 80 variables could then be stored on a card (what we now call a "punch card"). With this new mechanical tabulating device, the results of the 1890 census were completed in six weeks, compared to seven years for the 1880 census. Not only did the punch cards reduce computational errors, but they also acted as storage devices for future reference. Punch cards became the most common way of processing data in manufacturing and business; in fact, they were still being used in the 1970s.
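The encoding idea, a single punch for a digit and a two-punch combination for a letter, can be sketched in a few lines of Python. The scheme below is a simplified stand-in I invented to match the description above; it is not Hollerith's actual 1890 code.

# Digits use a single punch in rows 0-9.
DIGIT_PUNCHES = {str(d): (d,) for d in range(10)}

# Letters use a "zone" punch (rows 10-12) plus a digit punch, nine letters per zone.
LETTER_PUNCHES = {}
for i, letter in enumerate("ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
    zone, digit = divmod(i, 9)
    LETTER_PUNCHES[letter] = (10 + zone, digit + 1)

def encode_column(char):
    """Return the punch rows for one character in one card column."""
    char = char.upper()
    return DIGIT_PUNCHES.get(char) or LETTER_PUNCHES.get(char)

# One value per column; an 80-column card holds the 80 variables mentioned above.
record = "1890"
card = [encode_column(c) for c in record]
print(card)   # [(1,), (8,), (9,), (0,)]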
The era of modern computing began during World War II, with a race between the Nazis and the Allies to build the first electronic computer. While the Germans used computers to build airplanes, the British invented Colossus, a code-breaking computer that could decode secret German messages. The Allies thus had a huge advantage, since they could track their enemies' moves on the ground, in the air, and beneath the sea. However, Colossus's contribution to the computer industry was limited because it was programmed only to decipher secret codes. "American efforts produced a broader achievement, Howard H. Aiken (1900-1973), a Harvard engineer working with IBM, succeeded in producing an all-electronic calculator." (JT) The Mark I used electromagnetic signals to move its mechanical parts and perform addition and other complex functions. Each calculation took about three to five seconds, and the machine was inflexible: it was used only to produce ballistic charts for the U.S. Navy.
The ENIAC was developed in the 1940s, jointly produced by the U.S. government and the University of Pennsylvania, and was used as a machine to compute problems in science and engineering. A scientist named Alan Turing believed that computers could do more than transform and translate messages; they should be capable of producing their own results. "For Turing and others, who followed him, the computer should not just be a channel for human messages; it should be a thinking machine, capable of producing its own messages." (B&G) He wanted computers to interact with each other instead of acting independently. Turing's ideas gave rise to the Artificial Intelligence (AI) movement, which holds that computers, physical objects, can exhibit qualities of the human mind. Even though this has not turned out to be true, the idea remains very strong. "All the scientific and engineering uses of the computer, the business information systems, the databases and text archives, and more recently the spreadsheets and word processors in personal computers, are expressions of the computer as symbol manipulator." (B&G)
The ENIAC was the first general-purpose computer and could make calculations one thousand times faster than the Colossus or the Mark I; the machine consumed about 160 kilowatts of electricity. The EDVAC (Electronic Discrete Variable Automatic Computer) was the first computer to store its program as well as its data. Its central processing unit (CPU) allowed all computer functions to be controlled from a single source, directing which functions needed to be executed at any given moment. "First Generation computers were characterized by the fact that operating instructions were made-to-order for the specific task for which the computer was to be used." (JT)
The transistor, which came into use in 1948, helped computers become smaller, faster, more reliable, and more energy-efficient. Second-generation computers also had stored programs and programming languages. The stored program made computers more flexible by keeping a set of instructions inside the memory, so the machine could finish one task and move immediately to another. Programming languages, which came into use during this time, had words, sentences, and formulas, so a computer became much easier to program.
The third generation of modern computers ran from 1964 to 1971. An important development during this era was the integrated circuit (IC), which placed electronic components together on a small chip. Later, more and more components were put on a single chip, meaning the computer could store more and perform more functions. An operating system allowed the machines to run many different programs while a central program coordinated the memory.
All these advancements, however, had not yet touched the everyday lives of people. While computers had been necessary to get men on the moon, their benefits on a more personal level had yet to be realized. That would come in later years.
A big development during the fourth generation, from 1971 to today, was the number of components on one chip, which increased exponentially (see the rough calculation after the timeline below). "The Intel 4004 chip, developed in 1971, took the integrated circuit one step further by locating all the components of a computer…on a minuscule chip." (JT) What this meant was that computers were becoming more accessible. A timeline for this generation of computers reads:
1970s – The microcomputer for enthusiasts
1975 – Computers in the office
1980 – The IBM PC
1984 – The Macintosh
The period of the “HOME” computer
And since then…
The INTERNET
For the Millions (Lee, 6)
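To give a feel for what "increased exponentially" means, here is a rough back-of-the-envelope calculation in Python. It assumes the familiar rule of thumb (Moore's law) that the number of components on a chip doubles about every two years; the 1971 figure of 2,300 transistors is the Intel 4004's actual count, while the later numbers are pure extrapolation, not data from any source cited here.

start_year, start_count = 1971, 2300        # Intel 4004 transistor count
for year in range(1971, 2011, 10):
    doublings = (year - start_year) / 2     # one doubling every two years
    count = start_count * 2 ** doublings
    print(f"{year}: ~{count:,.0f} components per chip")

# Output:
# 1971: ~2,300 components per chip
# 1981: ~73,600 components per chip
# 1991: ~2,355,200 components per chip
# 2001: ~75,366,400 components per chip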

Starting with this generation, the computer became a vital part of the office and later the home. It is during this period that the first email systems appeared and the term "personal computing" came into use. The first personal systems were, by our standards, rather primitive: there were no graphics, and the results were all text-based. But, of course, that would change:
Just as those glued to their television sets for six or seven hours a night reasonably prefer sets with PIP (picture in picture), which lets one see the action of more than one channel at once…, so, too, do workers chained to computer monitors… naturally prefer to have a large screen with 16 million possible shades and hues of color, with a number of programs opened at once. (Phelan, 52)

Now that computers have become an essential part of our existence (who would give up what we have?), a lot of questions crossing many societal functions need to be addressed. With computers in charge of everything from traffic lights to bullet trains, how is society being transformed? What questions do we need to address in this digital age?
Just as the printing press of the 15th century became an "agent of change" for Western Europe, how have the computer and, by extension, the Internet become transformers of everyday life? The printing press wrested communication and information from the hands of the few (the nobility and the Church) and disseminated them widely to others outside the control of the elite. So, too, computers have changed the very society in which we live, allowing information to travel freely and instantly to all who have a computer and Internet access. The rise of the "citizen-journalist" is seen again and again, from the plaza of Tiananmen Square to the horrible pictures out of Haiti. Government accountability and public scrutiny have opened up channels of dialog previously unheard of. Look how many members of Congress have Twitter accounts, Facebook pages, or YouTube channels. (Visit the sites of our two New Jersey senators and see how "wired in" they are: Robert Menendez at http://menendez.senate.gov/ and Frank Lautenberg at http://lautenberg.senate.gov/.)
Obviously, this digital world is not going away, but as so often happens with new technology, we develop new tools before we understand their full repercussions. While the ease and instant access of the Internet are beneficial, the quickness with which lies and deceit can be spread also has to be accounted for. Identity theft and cybercrime flourish to such an extent that the FBI has special task forces devoted to them, and instances of both crimes keep growing. How often are we told not to open emails from addresses we do not know? Aren't we constantly warned against giving out personal information over the Internet? Craigslist was even used for a bank robbery! (Levinson, 175-76)
It would seem as if for every good thing that computers/the Internet can give us, there is a dark side as well. But maybe that is just human nature.
No one doubts that business has been transformed by the Internet: instant shipping and billing, the ability to track purchases, the immediate electronic transfer of funds, and less reliance on paper and its attendant costs are among the benefits enjoyed nowadays. It is also true that trying to get a problem corrected in the digital business world is very frustrating. Amazon is known for its vast inventory and shipping abilities, but have you ever tried to reach customer service? For one, Amazon does not even list a customer service number, just a generic "contact us" link, and even when you finally get the number, no one answers at the other end. Many customers are frustrated by this level of service, not just at Amazon but at many large companies. That is why the web site gethuman has proven so popular; it helps you reach a real agent in seconds rather than being bounced around like an electronic ping-pong ball. In building a Web presence, many companies have lost sight of the importance of the human touch; it does not have to be that way in the digital realm.
Another pitfall is that businesses check on job applicants by accessing their Facebook or MySpace accounts. Interviewees have no idea that negative pictures, comments, or postings on what they think are "private" sites are actually being viewed by people who are definitely not their "friends." While this may not be an invasion of privacy in the strict sense, businesses are certainly exploiting the Internet for their own good. I suppose a businessperson could say they were looking out for their company's well-being. Hopefully, people will become more restrained in what they post on web sites. There would appear to be a disconnect between business and people: business uses the Internet to streamline its operations, while people use it to connect with others. And if businesses are making it harder, not easier, to communicate with customers, where is this going to lead us?
Computers have indeed affected all aspects of society, including education. My dad tells me that when he was in school, his "computer" was a slide rule; nowadays, every classroom in every school building in my town is connected to the Internet. The library in my town has waiting lines for people to use its computers for all kinds of purposes. "Online learning" is now an accepted part of the academic experience; FDU requires its undergraduates to take four online courses, and the University of Phoenix now enrolls more students in its online courses than any other university in this country. (Berger) Levinson, in his article "Online Education Unbound," written in 2003, states "…one may wonder why online education has not taken the academic world by storm." (222) In 2010, I think the storm has already arrived. According to an article in the Bergen Record:
Americans have flocked to online courses in the past decade. More than 4.6 million post-secondary students, or 1 in 4, took at least one class via the Internet. That's up 17 percent in just a year…. (Whitley)
More and more students are opting for this form of education, especially working adults who cannot leave at odd hours to attend physical classes. Much online learning is asynchronous: rather than a real-time conversation between students and instructor, the interaction takes place whenever the student gets to it, since the instructor's notes, assignments, and comments await the students in cyberspace and can be accessed at any time. Students can therefore plan more easily when to attend to their classes. Such classes require no travel time, and in bad weather such as we had not long ago, classes in cyberspace are not canceled, unlike their physical counterparts. More and more universities are advertising entire master's or doctoral programs that are totally online (with a few weeks of actually being on campus to satisfy accrediting agencies' rules). This is proving to be a great advance for people who were previously shut out of higher education. In certain instances, you can now get a degree from a major university, not even in your state, by "attending" classes online. At Rutgers, "Online enrollments in 2010 project to be 80 percent higher than in 2009….'It took us ten years to get to 500 students…And three semesters to get to 5,000.'" (Whitley) In England, the Open University is paving the way with its varied programs and innovative offerings; even a U.S. citizen can take classes there. Imagine attending a British or European university without actually having to be there! So widespread is this becoming that YouTube has a separate channel, YouTube.edu, which features thousands of university lectures and courses on a wide variety of topics. Of course, watching these videos does not give you course credit (after all, higher education is still a business and relies on tuition), but it does open up new worlds for you to explore freely. This democratization of higher education will have a great impact on the workforce in the near future. Online higher education is still not available to just anyone; you still have to meet entrance qualifications. But it does allow you, once accepted, to take courses from the convenience of your home. Not everyone lives within an easy commute of a college campus; we in the Northeast tend to forget that.
Another thing we forget is how lucky we are to have access to the digital world. There are computers and Internet connections wherever we go: home, school, or library. There are wireless access points in so many places now; in fact, entire cities are going wireless. We can connect to the Internet via a wide variety of products, from a desktop computer to a cell phone. Nowadays we take this for granted, especially those of us in college, where connecting to computers and the Internet is a given. But not everyone has such ease of access: not everyone has broadband; not everyone even has a computer. That is where the term "digital divide" comes in, the phrase used to separate the haves from the have-nots. It exists not only in other countries but here in the United States as well. It is good to remember that not everyone, even in the most powerful country on Earth, has electronic tools. We assume that poorer countries like Mexico and others in Latin America (Estache) lack access to computers or the Internet. In Mexico, for instance, the government launched a program called e-Mexico, whose goal was to let citizens use the Internet through "telecenters," or cybercafés equipped with modern technologies. (Executive Summary) But have we really considered that this lack is also prevalent in our own country? That digital divides exist is not under question here:
Throughout history, there have been divides between those who adopt and fail to adopt new technology. It took centuries for literacy to become the received position in most societies and it is still not universal. There are still those who have either no or severely restricted access to education, to literacy, to books,… and so on. (Dance, 176)
In certain cases the lack of access is geographic (Vermont), economic (inner cities), religious (communities that reject modern technology, like the Amish), or political (repressive regimes). In all cases, however, these people are disenfranchised; they cannot participate in the global dialog occurring around them. Efforts to bring computer access to all peoples have been slow; of the world's nearly seven billion people, fewer than two billion have access to the Internet. (Internet World Statistics 2010) Most people are concerned with more urgent matters, such as securing clean water and food or finding a safe place to live. Again, we take this digital world so much for granted that we do not realize most of the world is disconnected. That is changing, but it will be years before the entire world can be said to be connected.
Repressive states control the flow of information whether that information is in print or electronic form. For example, in Saudi Arabia:
Journalism is strictly controlled, and journalists must exercise self-censorship in order to avoid government scrutiny and dismissal….the media environment within Saudi Arabia is likely the most tightly-controlled in the region. The kingdom's four TV networks…and its radio stations are operated by the state-owned Broadcasting Service of the Kingdom of Saudi Arabia….Private television and radio stations are prohibited on Saudi soil. (Internet Filtering in Saudi Arabia, 1)
In such an oppressive environment, access to the Internet is severely limited. According to this 2009 report, the 25 ISPs allowed in the country all connect through a national network that is controlled and filtered by a government entity. (p. 2)
A recent article in the Wall Street Journal examines state-controlled Internet access:
It’s fashionable to hold up the Internet as a road
to democracy and liberty in countries like Iran,
but it can also be a very effective tool for
quashing freedom (Morozov)
OpenNet Initiative has documented dozens of countries in which complete and open Internet access is denied. Surely, the people living within these countries fall on the wrong side of the digital divide. Many are educated individuals aware of the benefits of the Internet but are denied its use because of governmental filtering. Even in the United States, filtering of the Internet is allowed, especially when dealing with child pornography. (Internet Filtering in the United States and Canada, 1-9)
Those in the lower socioeconomic brackets cannot afford broadband access to the Internet, or even the price of a computer. Hence schools and public libraries play a major role in giving people access to computers and the Internet. But, as always, demand outstrips resources: there are long lines, and no building is open 24/7. And with increasing pressure on local budgets as tax receipts go down and state aid is cut, the local public library cannot offer the hours of service or the staff expertise to assist people. For example, the State of New Jersey is not providing paper copies of income tax forms to libraries or post offices this year; people will have to download the forms. But what if you don't have access to a computer or the Internet? And even if you do, do you know how to use the keyboard, let alone navigate the web sites? To add insult to injury, libraries, which are always strapped for money, can no longer afford to print these documents for free. The library patron must now pay for this service, and what if you need to print the lengthy booklets that go with the forms? Again, these people are on the wrong side of the digital divide.
We need to remember that just because we take something for granted doesn’t mean everyone has it. There are probably people in my own town who fit the above profile. They are being denied an equal opportunity to avail themselves of a powerful communications/information product.
Another area of concern is the amount and quality of information available via computers and the Internet. A recent report, How Much Information, details the overwhelming amount of information out there; paper and print now make up the smallest share of the information world. (p. 18) Our problem is how to establish which sources of information are valid. No one controls what is placed on the Internet; there is no editor making sure that what we read is actually correct. The Internet is like the Wild West, and there is no marshal in town. (Those who "have not" cannot even access the bad data, let alone the good.) One has to be very careful in determining the validity of information posted online. Levinson devotes an entire chapter to Wikipedia, the online encyclopedia that can be edited by anyone; in it he discusses the pitfalls of information open to manipulation by any and all comers, recounting the Pericles vs. Pickles versions of a single entry. (p. 85) There is an excellent series of articles (Giles), initiated by a review in Nature, on the validity of this tool; these articles compare the accuracy of entries in Wikipedia with the same entries in the Britannica, the error rates, and the response from the editors of the Britannica. It makes for interesting reading, to say the least. In fact, so suspect is the information in Wikipedia that the history department at Middlebury College has banned it as a citable source. (The comments at the end of that piece contain very revealing perspectives from both sides.)
Users of online materials need to be aware of the strengths and weaknesses of the sites they use. Unfortunately, this can be a hit-or-miss affair if one does not have the skills to determine whether online information is correct. When you Google "whitehouse," which should you choose: the site listed as the "White House," or the one high on the hit list titled "Welcome to the White House"? One is real; the other is a spoof. Would a person who did not possess the skill set to tell them apart pick the correct one?
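One mechanical habit that helps with cases like the "whitehouse" one is checking a link's domain before trusting the page. Below is a minimal Python sketch of such a check; the helper name and the example URLs are mine, for illustration only.

from urllib.parse import urlparse

EXPECTED_OFFICIAL = "whitehouse.gov"

def looks_official(url, expected=EXPECTED_OFFICIAL):
    """True only if the URL's host is the expected domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == expected or host.endswith("." + expected)

for url in ["http://www.whitehouse.gov/",
            "http://whitehouse.com/",               # same name, different TLD: a spoof
            "http://whitehouse.gov.example.net/"]:  # embedded-name trick
    print(url, "->", looks_official(url))

A check like this is no substitute for judgment, but it catches the common trick of putting a famous name on the wrong top-level domain.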
Another area of concern is the rapidity with which information can change or disappear in the digital world. An article you read in Wikipedia may not be the same article you go back to a couple of weeks later. In paper format, information is stable and unchanging, and if the correct type of paper is used, it can be read for 500 years. In their landmark study, The Myth of the Paperless Office, Sellen and Harper demonstrate that paper:
…affords rich variegated marks that are persistent and static, also has a variety of different implications for perception and action in work situations….it also means that marks on paper are difficult to modify, transform, or incorporate into other documents….any changes made to a text leave a kind of audit trail of actions that contains information about the history of changes on a document, and who made which marks. (Sellen, 201)
There is no such guarantee in the digital world. So great a problem is this shifting of electronic information that the U.S. government is now in the process of digitally certifying that its documents are the true and final copies, not to be changed.
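The core idea behind certifying a "true and final" copy can be sketched with a cryptographic hash: publish the document's fingerprint, and any later change to the bytes changes the fingerprint. The Python sketch below uses the standard hashlib module; actual government certification relies on full digital signatures, so treat this as the simplest form of the idea, not the real mechanism.

import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 digest of a document's bytes."""
    return hashlib.sha256(data).hexdigest()

official = b"The final text of the regulation."
published_digest = fingerprint(official)   # published alongside the document

# Any reader can recompute the digest; one altered character fails the check.
tampered = b"The final text of the regulations."
print(fingerprint(official) == published_digest)   # True
print(fingerprint(tampered) == published_digest)   # False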
If changing or inaccurate information in the digital world is a societal problem, what about the complete disappearance of web resources? It is hard enough dealing with information that morphs into something different over time, but what about information that has been cited and is now gone? It is estimated that two percent of web sites disappear every week. (McCown) Another recent study found that web references are disappearing at an alarming rate: 40 percent of the web references in 10-year-old publications are missing, while 5-year-old publications are missing about 26 percent of theirs. (Bhat) Hackers, too, can corrupt documents or alter them in subtle ways. Yet another problem with the electronic world is the rapid turnover and obsolescence of the hardware needed to interface with it. More than once I have come across the term "digital dark age." (Bollacker)
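Studies like those cited above come down to a simple mechanical test: request every cited URL and count the ones that no longer answer. Here is a minimal Python sketch of such a link checker; the placeholder URLs stand in for a real publication's reference list.

import urllib.request
import urllib.error

def is_alive(url, timeout=10):
    """Return True if the URL still answers with a non-error HTTP status."""
    # Some servers reject HEAD; a more thorough checker would retry with GET.
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, ValueError):
        return False

references = ["http://example.com/", "http://example.com/gone-missing"]
dead = [url for url in references if not is_alive(url)]
print(f"{len(dead)} of {len(references)} web references are missing")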
The digital world, born of computers, is here to stay. What we do with this newfound medium will radically change the way we view information and ourselves. Some have said that as we become more and more involved with the digital world, we are divorcing ourselves from the "real world," that we are becoming isolated. Yet the report Social Isolation and New Technology says otherwise. As with any technology, there are good points and bad points; let's hope we see more of the former.

BIBLIOGRAPHY

Berger, Noah. "For-profit Colleges Change Higher Education's Landscape." Chronicle of Higher Education 7 Feb. 2010. Web. 22 Feb. 2010.
Bhat, Mohammad H. "Missing Web References - A Case Study of Five Scholarly Journals." Liber Quarterly 19.2 (2009): 131-39. Web. 18 Feb. 2010.
Bollacker, Kurt D. "Avoiding a Digital Dark Age." American Scientist 98.2 (2010): 106-10. Web. 24 Feb. 2010.
Bolter, Jay D., and Diane Gromala. Windows and Mirrors. Cambridge, MA: MIT Press, 2003. Print.
Carr, Nicholas. "Is Google Making Us Stupid?" Atlantic July 2008. Web. 18 Feb. 2010.
"Computers: History and Development." JT: Jones Telecommunications and Multimedia Encyclopedia. N.p., n.d. Web. 5 Feb. 2010.
Dance, Frank. "The Digital Divide." Communication and Cyberspace. Ed. Lance Strate. 2nd ed. Cresskill, NJ: Hampton Press, 2003. Print.
Estache, Antonio. "Telecommunications Reform, Access Regulation, and Internet Adoption in Latin America." World Bank, Mar. 2002. Web. 18 Feb. 2010.
Executive Summary of the e-Mexico National Plan. N.p., 2001. Web. 16 Feb. 2010.
Giles, Jim. "Internet Encyclopedias Go Head to Head." Nature 14 Dec. 2005. Web. 18 Feb. 2010.
Griggs, Brandon. "Obama Poised to Become the First 'Wired' President." CNN.com. N.p., 15 Jan. 2009. Web. 12 Feb. 2010.
"Internet Filtering in Saudi Arabia." OpenNet Initiative. N.p., 2009. Web. 13 Feb. 2010.
"Internet Filtering in the United States and Canada." OpenNet Initiative. N.p., n.d. Web. 19 Feb. 2010.
Internet World Statistics 2010. N.p., 2010. Web. 20 Feb. 2010.
Lee, J. A. "Social Impact of the Computer." Virginia Tech, 17 Nov. 2000. Web. 17 Feb. 2010.
Levinson, Paul. "Online Education Unbound." Communication and Cyberspace. Ed. Lance Strate. 2nd ed. Cresskill, NJ: Hampton Press, 2003. Print.
Levinson, Paul. New New Media. Boston: Allyn & Bacon, 2009. Print.
McCown, Frank, Catherine Marshall, and Michael Nelson. "Why Web Sites Are Lost (and How They're Sometimes Found)." Communications of the ACM 52.11 (2009): 141-45. ACM Digital Library. Web. 19 Feb. 2010.
Morozov, Evgeny. "The Digital Dictatorship." Wall Street Journal 20 Feb. 2010. Web. 23 Feb. 2010.
Phelan, John M. "CyberWalden: The Online Psychology of Politics and Culture." Communication and Cyberspace. Ed. Lance Strate. 2nd ed. Cresskill, NJ: Hampton Press, 2003. Print.
Sellen, Abigail, and Richard Harper. The Myth of the Paperless Office. Cambridge, MA: MIT Press, 2002. Print.
Stelter, Brian. "FCC Takes a Close Look at the Unwired." The New York Times 23 Feb. 2010. Web. 23 Feb. 2010.
Whitley, Brian. "Rutgers University Taps Booming Online Education." The Bergen Record 21 Feb. 2010. Web. 21 Feb. 2010.

8 comments:

  1. Fred, I really enjoyed your journey through the computer age. As a child I can remember when the first handheld calculator came out; educators worried that students would forget their math skills, but we survived. It is the same with the birth of the digital age; I think if we don't embrace it, we will be at a disadvantage. Your discussion of the digital divide was very interesting. Nice paper!

  2. This comment has been removed by the author.

  3. There are a lot of computer applications available that aid our everyday activities, and they have become an essential part of our existence. I agree that whenever we use digital software, we are the ones deciding how we are going to use it: what messages we send to our friends and what information we process. Also, this "digital revolution" has in fact modified written communication, which was once even perceived as an "art." I think that because we can communicate wherever we are using digital software, people do not care to compose messages the best way they can; on the contrary, they just say what they think spontaneously. Gone are the times when written communication was regarded as a form of art; now it has become more informal and spontaneous than before.
    A comprehensive paper, good job.

  4. Fred, excellent paper!!! I really enjoyed it. I especially appreciated how you brought in some of the older history. So often we don't think of "technology" occurring that far back, yet it is a vital thing to consider in the progression of computing. I also enjoyed the information you brought to light about the digital divide. So often we do not stop to think about how many people cannot afford to take part in the new digital society...even in our own country. I know many people who have computers but are too financially strapped to upgrade them or purchase software upgrades, making the technology they own virtually useless. I saw the same thing when I used to work in the public library in Brooklyn. Our building was located not far from the local public school, a school with many less advantaged students. Once classes let out for the day, students flocked by the dozens to use the library's computers because they didn't have their own at home. Amazing to think that in this day and age some people can be shut out of the digital world. Great point, Fred!!!

  5. Very nice paper Fred. You did a very good job in summarizing the bulk of the class as well as the evolution of technology to where we are today. I also thought it was funny that you mentioned the White House in your paper - many people learned the hard way that going to whitehouse.com used to take you directly to a porn site back in the day. It's true that we really do take our technology for granted today, but honestly I think the digital divide will start to shrink in the coming years.

    By the way, good looks on the gethuman site. I'll have to keep that one in mind. I'm tired of listening to options "that have recently been changed."

  6. Fred, you make a good point in your paper! We all live in the digital world, and people have to get used to living with new media. It is necessary for us to do that. And you have many good thoughts about the digital world. New media let us live more easily, but they also bring a lot of problems. I believe it depends on people: if we can figure out what the problem is and find a way to solve it, then we can live with new media without trouble.

  7. Fred,

    The computer has changed so much over the decades; from a set of women each calculating part of a single problem to get the answer quicker, to today's highly innovative tool, the essence of what a computer is and what it is used for has changed dramatically. Its progress, however, has always run parallel to global cultural development.

    I particularly enjoyed the references to Neil Postman and his book Amusing Ourselves to Death. That was a very refreshing reminder of seeing the world through a rather traditional perspective.

    The anamneses of life without a device providing instant gratification keep fading into the background.

    Once again, technology is a tool without specific intentions – good or bad; it is up to each user to define and determine how he is going to implement its powerful nature of spreading information.

    Excellent paper!

    Margaret M. Roidi

  8. Fred,

    I really enjoyed your paper; it was a great recap of all the important themes we have been discussing. I really like the way you implemented links into your blog and connected themes to historical examples. Also, I didn't know about the OpenNet Initiative, or about all the countries that are simply denied Internet access.

    Also, I think it was a great point to bring up the fact that more and more important and official state paperwork is being done, well, paperless. I see it more and more: papers that were formerly always in hard copy are now available only online. Especially in today's job market, almost every company runs its entire hiring process through online databases and applications.

    -Jessica Vanacore
