Getting From Internet1 to Internet2
by mitchipd, Mon May 22, 2006 at 10:14:05 AM EDT
My previous post talked about the self-serving flaws in the telco/cable case against net neutrality and in favor of their proposed network development strategy. This post summarizes an alternative vision laid out in Congressional testimony early this year by Gary Bakula, a VP in the Internet2 organization. What Gary describes happening in the university community can also happen in local communities. Marrying this technical vision to grassroots mobilization locally and netroots progressive mobilization nationally makes good sense to me. I'll have more on this later.
Gary Bakula's testimony at that Senate Commerce Committee hearing reminded me of the vast potential of the Internet--some of it already realized, some being explored by Bakula's Internet2 organization, some just a twinkle in the eye of an inventor or entrepreneur, and much of it not yet imagined. That's been the Internet's amazing story since its beginnings, and, like most Americans, Bakula clearly wants to avoid seeing its vitality and ability to create value sapped by bad public policy.
Bakula described Internet2 as "a not-for-profit partnership of 208 universities, 70 companies and 51 affiliated organizations, including some federal agencies and laboratories [whose] mission is to advance the state of the Internet...primarily by operating...a very advanced, private, ultra-high-speed network called Abilene."
As Bakula put it:
Abilene...enables millions of researchers, faculty, students and staff to "live in the future" of advanced broadband. By providing very high speed pipes - 10,000 times faster than home broadband, in our backbone - we enable our members to try new uses of the network, develop new applications, experiment with new forms of communications, experiencing today what we hope the rest of America will be able to have and use in just a few years.
Noting that "our Abilene network does not give preferential treatment to anyone's bits, but our users routinely experiment with streaming HDTV, hold thousands of high quality two-way video conferences simultaneously, and transfer huge files of scientific data around the globe without loss of packets," Bakula explained how that came to be:
When we first began to deploy our Abilene network, our engineers started with the assumption that we should find technical ways of prioritizing certain kinds of bits, such as streaming video, or video conferencing, in order to assure that they arrive without delay. For a number of years, we seriously explored various "quality of service" schemes, including having our engineers convene a Quality of Service Working Group. As it developed, though, all of our research and practical experience supported the conclusion that it was far more cost effective to simply provide more bandwidth. With enough bandwidth in the network, there is no congestion and video bits do not need preferential treatment. All of the bits arrive fast enough, even if intermingled.
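The tradeoff Bakula's engineers found can be illustrated with a bit of textbook queueing theory (my sketch, not from the testimony): using the standard M/M/1 formula for mean delay, W = 1/(mu - lambda), a link running near capacity sees large delays, while the same traffic on a link with ten times the capacity sees delays so small that no packet needs preferential treatment.

```python
# Illustrative sketch of "more bandwidth beats prioritization," using the
# textbook M/M/1 queue: mean delay W = 1 / (mu - lam), where mu is the
# link's service rate and lam the arrival rate (both in packets/sec).

def avg_delay(service_rate: float, arrival_rate: float) -> float:
    """Mean time a packet spends in an M/M/1 queue (waiting + service)."""
    assert arrival_rate < service_rate, "queue is unstable at or above capacity"
    return 1.0 / (service_rate - arrival_rate)

arrival = 900.0  # offered load, packets/sec

# Link running at 90% of capacity: delays grow, and QoS schemes look tempting.
congested = avg_delay(service_rate=1000.0, arrival_rate=arrival)

# Same load on a link with 10x the capacity: delay nearly vanishes for every
# packet, video and data alike, with no prioritization at all.
overprovisioned = avg_delay(service_rate=10000.0, arrival_rate=arrival)

print(f"near capacity:   {congested * 1000:.2f} ms per packet")   # 10.00 ms
print(f"10x provisioned: {overprovisioned * 1000:.3f} ms per packet")  # 0.110 ms
```

The model is crude, but it captures the conclusion of Internet2's Quality of Service Working Group: past a certain headroom, every class of bits arrives fast enough.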
Bakula suggested that network development should focus on preserving the Internet's simplicity, coupled with investment in bandwidth expansion, including construction of a fiber-based infrastructure that can be upgraded very cost-effectively over time.
We would argue that rather than introduce additional complexity into the network fabric, and additional costs to implement these prioritizing techniques, the telecom providers should focus on providing Americans with an abundance of bandwidth - and the quality problems will take care of themselves.
For example, if a provider simply brought a gigabit Ethernet connection to your home, you could connect that to your home computer with only a $15 card. If the provider insists on dividing up that bandwidth into various separate pipes for telephone and video and internet, the resulting set top box might cost as much as $150. Simple is cheaper. Complex is costly.
Bakula speaks of a future where, once a basic fiber network is in place, bandwidth could be increased by an order of magnitude every five years for a per-port cost of $30-$50 per year.
It does not cost all that much, relatively, to upgrade a network once the basic wiring is in place -- that's the big original cost. For example, a university campus in the Midwest that serves 14,000 students and faculty, recently estimated it would cost about $150 per port (per end user) to replicate their current 100 Mbps network for a five year period, or about $30 a year per user. To upgrade to 1000 Mbps (1 gigabit) it would cost $250, or about $50 per year.
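The arithmetic behind those figures is simple: a five-year per-port cost, spread over five years, gives the annual per-user cost. A quick check of the numbers Bakula cites:

```python
# Back-of-the-envelope check of the per-port costs in Bakula's testimony:
# a five-year cost per port, divided by five years, is the annual figure.

def annual_cost(five_year_cost_per_port: float, years: int = 5) -> float:
    """Annual per-user cost given a multi-year per-port estimate."""
    return five_year_cost_per_port / years

print(annual_cost(150.0))  # replicate 100 Mbps network: 30.0 ($/user/year)
print(annual_cost(250.0))  # upgrade to 1 Gbps:          50.0 ($/user/year)
```

So a tenfold bandwidth jump costs roughly $20 more per user per year once the fiber is in the ground, which is the point of the example.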
Innovation, says Bakula, is another factor that favors the open, standards-based, "neutral-core/intelligent-edge" Internet model. He also points out that very little of the Internet's steady flow of innovation has come from either telephone or cable companies.
A simple design is not only less expensive: it enables and encourages innovation...The original Internet grew so fast, and spurred so many new uses, in part because of the way it was designed. It was designed to have an agnostic, neutral "core" whose job was to pass packets back and forth -- and not to discriminate or examine the packets themselves. This allowed the network to be very cost efficient and economical. It also allowed all of the "intelligence" in the network to be at the "edge," that is, in the hands of the user.
This was very important to the evolution of the Internet. The network provider did not have control, the user did. As long as the user utilized the standardized protocols, he could expect to send and receive packets to anyone else on the network in a completely understandable, predictable manner. That allowed the user to experiment with new programs, new applications, slightly tweaked applications, and even new devices -- and the user would know that the network would treat the packets all exactly alike.
Innovation was possible and could happen very quickly at "the edge" because you didn't have to re-architect or re-build the entire network in order to make a tweak or improvement in an end-user technology (such as improving a web search engine or developing a new video encoding program).
As a result of this remarkable design, sometimes called "end-to-end architecture," an explosion of new Internet technologies were developed over the past decade, many of them on university campuses or by recent graduates. The World Wide Web, the Web browser, the search engine, instant messaging, and many other technologies were innovations by users of the network. Not one of these innovations was developed by telephone or cable companies.
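The "neutral core / intelligent edge" design Bakula describes can be sketched in a few lines (my toy illustration, not Internet2 code): the core's forwarding function reads only the destination in the packet header and never inspects the payload, so a video stream, a web page, and a protocol invented tomorrow are all treated exactly alike.

```python
# Toy sketch of a neutral core: forwarding looks only at the header's
# destination field; the payload is opaque application data that only the
# hosts at the edge ever interpret.

from dataclasses import dataclass

@dataclass
class Packet:
    dst: str        # destination address -- the only field the core reads
    payload: bytes  # application data -- never examined by the core

# Forwarding table: destination -> outgoing link. All knowledge of what the
# payload *means* lives at the edge, in the sending and receiving hosts.
routes = {"host-a": "link-1", "host-b": "link-2"}

def forward(packet: Packet) -> str:
    """Choose the outgoing link purely from the header; payload untouched."""
    return routes[packet.dst]

# An existing application and a brand-new one get identical treatment,
# which is why the edge can innovate without rebuilding the network.
print(forward(Packet(dst="host-a", payload=b"streaming video frame")))  # link-1
print(forward(Packet(dst="host-a", payload=b"some future protocol")))   # link-1
```

Because `forward` cannot even see the payload, deploying a new application requires no change to the core at all, which is the end-to-end argument in miniature.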