On The Hysteria Over “The Cloud”
It is as Jerry Holkins and Mike Krahulik wrote (regarding a different situation):
It’s like trying to make fun of a clown. What, are you going to make fun of his tiny car? His floppy shoes? It just doesn’t work.
I would like to point out that (by computer science standards) the cloud is not new and has for some time been considered inevitable. But what is “The Cloud?” What the cloud is depends a bit on what conversation you are being drawn into. If the conversation is about computing, then the cloud is remote computers, software and services like Wikipedia, GMail, SalesForce.com, Google Docs, Amazon EC2/S3 and Google App Engine. If the conversation is about human interaction, then the cloud is ecosystems like Facebook, Twitter and RSS. Each of these is a facet of important longer term trends, but for individual companies and technologies the pendulum is about as fast on the down-swing as it was on the up-swing. At this time we can safely declare a number of recent important players dead: Friendster, AltaVista, WSDL, Usenet, IRC and Web 2.0.
It is true that the network itself is more useful than the computer, but this idea is not new to our third millennium. The people currently getting rich promoting this idea did not invent it; they grew up in its shadow. The early big thinkers on computers had big plans, plans much larger than Tetris, payroll processing, COBOL and punched cards.
Take the article “As We May Think” (by Vannevar Bush, The Atlantic Monthly (1945)). In it Vannevar Bush writes:
Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and, to coin one at random, “memex” will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.
At first this sounds like nothing more than “Danny Dunn and the Homework Machine” (by Jay Williams (1964), Scholastic Press). But later in the same article Bush writes:
Wholly new forms of encyclopedias will appear, ready made with a mesh of associative trails running through them, ready to be dropped into the memex and there amplified. The lawyer has at his touch the associated opinions and decisions of his whole experience, and of the experience of friends and authorities.
Obviously we are reading this with a modern eye, but here we have the antecedents of hypertext and the Wikipedia.
We can trace this thread further forward to “Augmenting Human Intellect: A Conceptual Framework” (by Douglas C. Engelbart (1962)) and the famous 1968 demo.
And we can further trace the ideas passing through: “Literary Machines: The report on, and of, Project Xanadu concerning word processing, electronic publishing, hypertext, thinkertoys, tomorrow’s intellectual revolution, and certain other topics including knowledge, education and freedom” (by Ted Nelson (1981), Mindful Press, Sausalito, California.)
These works were all about knowledge engineering, information storage, networking and communication. There was an extreme urgency in these works. Both Engelbart and Nelson felt we had a limited window in which to gain the ability to organize the world’s information before some catastrophic error or misunderstanding eliminated us all. This feeling of urgency and doom came from another exciting application of real time networked computers: SAGE. SAGE was the “Semi-Automatic Ground Environment,” first made operational in 1959. It involved networked computers and light pen based operator terminals, and it was the system the United States had ready to fight World War III.
This was the era of near infinite budgets, block sized computer complexes, massive mainframes and IT priesthoods that ran the whole show.
The inevitable march was on. Some large fraction of the GDP would be forever dedicated to building and maintaining monument sized networked computing facilities. Your degree of relevance and power in society would be directly determined by how close you could get to these facilities. Then something happened that distracted everyone. The distraction was so immediate and so complete that by the time the inevitable march restarted (block sized Google data centers and a proposed Yahoo data center to be built attached to Niagara Falls) everyone thought it was a new thing.
What happened was the 1958 demonstration of the first successful integrated circuits. This, together with the transistor, started an era of micro-miniaturization that took the world by storm. By 1971 Intel had released a single-chip CPU (the 4004) as a commercial product. This chip implemented the core of a computer in a fingertip-sized package that contained about 2,300 transistors.
From here on everything was desktop calculators, pocket calculators and digital watches. And then the personal computer and the personal computer revolution hit.
IBM kicked the PC revolution into high gear when it pushed into the market in 1981. The personal computer was a supreme distraction that pulled attention away from the monolithic computers for fifteen years. And for a long while networking and shared information were both nearly forgotten. Computers were for spreadsheets, desktop publishing and other non-networked tasks.
However, out of public view the monolithic network continued to develop. The Internet started as ARPAnet in 1969 and grew from then until now by connecting universities and defense contractors. The messaging formats (it is inappropriate to use the more common term “technology” to describe HTTP and HTML) we call “The World Wide Web” were invented (without much fanfare) in 1989. Netscape was founded in 1994 and made the World Wide Web and the Internet available to the PC. And then the Internet hit like a tsunami. Electronic commerce and speculation funded the initial burst. Then on-line advertising took over, and we are back to building new encyclopedias, tracking everyone and once again building city block sized computers (now called data centers).
Once again we are being told our data is too important to be locked in our desk (or PC) and everything is migrating back to the mainframe (now called “the cloud”).
Will the cycle reverse? If applications are moving into the cloud now will they ever move back out?
Moore’s law has a way of shrinking things (a current smartphone outperforms many early mainframes, supercomputers and data centers). Will individual PCs once again be more important than the network? Some of the more useful parts of the Internet (like the Wikipedia) are small enough to put on current PCs. The data centers and networks will not go away any time soon, but excitement and attention could move on to something else. Devices that you could carry everywhere and that have intermittent or expensive connections to the Internet might have an advantage in being able to cache some of the Internet. And excitement follows what is new, so a stable pervasive cloud would likely be taken for granted (like roads, power, telephone and other utilities).
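To make the caching idea a bit more concrete, here is a minimal sketch (in Python; the names fetch_article and CACHE_DIR are hypothetical and not from the original post) of a device that refreshes a local copy whenever it happens to be connected and keeps answering from that copy when it is not:

import json
import os
import urllib.error
import urllib.request

CACHE_DIR = "article_cache"  # hypothetical local store of previously fetched pages


def cache_path(url):
    # Map a URL to a file name in the local cache (simplistic; real code would hash).
    safe = "".join(c if c.isalnum() else "_" for c in url)
    return os.path.join(CACHE_DIR, safe + ".json")


def fetch_article(url, timeout=5):
    """Return page text, preferring the network but falling back to the local cache."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = cache_path(url)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            text = resp.read().decode("utf-8", errors="replace")
        # Connected: refresh the cache so the device keeps working offline later.
        with open(path, "w", encoding="utf-8") as f:
            json.dump({"url": url, "text": text}, f)
        return text
    except (urllib.error.URLError, OSError):
        # Offline (or the link is too expensive or slow): serve the cached copy if we have one.
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as f:
                return json.load(f)["text"]
        raise RuntimeError("no connection and no cached copy of " + url)


# Example use: the first (online) call populates the cache; later calls work offline.
# print(fetch_article("https://en.wikipedia.org/wiki/Memex")[:200])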
Another thing that could migrate applications back out of the cloud (assuming they migrate in) is if access to the user becomes too important to delegate to the cloud. eCommerce applications take user access when they can get it, but many other applications may depend more on immediate access to the user than on grabbing fresh data from the network. For example, a pacemaker is likely to run most of its application from an embedded computer; this computer might talk to the cloud when it can, but the application will be designed to stand alone as long as possible.
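To make the “stand alone as long as possible” design a bit more concrete, here is a minimal sketch (again in Python, with hypothetical names; the sensor, connectivity check and upload are all simulated stand-ins) of a control loop that makes every decision locally and only drains its telemetry backlog to the cloud when a connection happens to be up:

import collections
import random
import time


class StandaloneDevice:
    """Sketch of a device that must keep working with no cloud at all."""

    def __init__(self):
        self.pending = collections.deque()  # telemetry waiting for a connection

    def read_sensor(self):
        # Hypothetical stand-in for reading local hardware.
        return random.gauss(72.0, 3.0)

    def local_decision(self, reading):
        # The critical logic lives on the device, not in the cloud.
        return "adjust" if reading > 80.0 else "steady"

    def connection_available(self):
        # Hypothetical connectivity check; here the link is simply simulated as flaky.
        return random.random() < 0.3

    def upload(self, record):
        # Hypothetical cloud call; a real device would post to its service when online.
        print("uploaded", record)

    def step(self):
        reading = self.read_sensor()
        action = self.local_decision(reading)  # always works, even offline
        self.pending.append((time.time(), reading, action))
        while self.pending and self.connection_available():
            self.upload(self.pending.popleft())  # opportunistically drain the backlog


if __name__ == "__main__":
    device = StandaloneDevice()
    for _ in range(5):
        device.step()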
In the end evangelizing the coming triumph of factory scale computing and networking is pointless. It is already here and has no great need for cheerleaders.
Categories: Computer Science, Expository Writing, Opinion
jmount
Data Scientist and trainer at Win Vector LLC. One of the authors of Practical Data Science with R.
One of the more hysterical things I’ve read is a series of white papers on cloud computing as it relates to “high frequency finance.” Some dude actually thought the scalpers would all hang out in the same cloud. Um, no. Goldie Stix might build a cloud of their own, however.
Still, pretty useful stuff. Beats fiddling with hardware in many instances. Economy of scale and all.
A few years back Bob Metcalfe wrote an article in Wired predicting this return:
Engelbart got left behind because he embodied his visions in the time-shared computers of his day and missed the detour we all took into stand-alone personal computers. With the emergence of the Web, though, he’ll be back.
http://www.wired.com/wired/archive/7.11/visionaries_pr.html
-Buddy Smith

You can find out more about Engelbart’s powerful vision in the new book “The Engelbart Hypothesis: Dialogs with Douglas Engelbart” by Valerie Landau and Eileen Clegg in conversation with Douglas Engelbart.
http://engelbartbook.com
“Devices that you could carry everywhere and that have intermittent or expensive connections to the Internet might have an advantage in being able to cache some of the Internet.”
That last paragraph got me thinking about netbooks, and how trends seem to indicate a slow death of the desktop in favor of smaller, more mobile computers that have the same processing capacity.
I’m not knowledgeable about developments in microchip/computer processor technology, but it just seems more and more likely that we are moving towards human/computer integration at a physical level – most people are already tied to their laptop/iPhone/BlackBerry, which can provide nearly 24/7 access to the Internet.