Getting From Internet1 to Internet2

This is kind of a long and wonky post, but it touches on some critical pieces of the emerging progressive agenda, and how network architecture works. - Matt

My previous post talked about the self-serving flaws in the telco/cable case against net neutrality and in favor of their proposed network development strategy.  This post summarizes an alternative vision laid out in Congressional testimony earlier this year by Gary Bakula, a VP in the Internet2 organization.  What Gary describes happening in the university community can also happen in local communities.  Marrying this technical vision with local grassroots and national netroots progressive mobilization makes good sense to me.  I'll have more on this later.

Gary Bakula's testimony at the Senate Commerce Committee hearing reminded me of the vast potential of the Internet--some of it already realized, some being explored by Bakula's Internet2 organization, some just a twinkle in the eye of an inventor or entrepreneur, and much of it not yet imagined. That's been the Internet's amazing story since its beginnings, and, like most Americans, Bakula clearly wants to avoid seeing its vitality and ability to create value sapped by bad public policy.

Bakula described Internet2 as "a not-for-profit partnership of 208 universities, 70 companies and 51 affiliated organizations, including some federal agencies and laboratories [whose] mission is to advance the state of the Internet...primarily by operating...a very advanced, private, ultra-high-speed network called Abilene." 

As Bakula put it:

Abilene...enables millions of researchers, faculty, students and staff to "live in the future" of advanced broadband.  By providing very high speed pipes - 10,000 times faster than home broadband, in our backbone - we enable our members to try new uses of the network, develop new applications, experiment with new forms of communications, experiencing today what we hope the rest of America will be able to have and use in just a few years.

Noting that "our Abilene network does not give preferential treatment to anyone's bits, but our users routinely experiment with streaming HDTV, hold thousands of high quality two-way video conferences simultaneously, and transfer huge files of scientific data around the globe without loss of packets," Bakula explained how that came to be:

When we first began to deploy our Abilene network, our engineers started with the assumption that we should find technical ways of prioritizing certain kinds of bits, such as streaming video, or video conferencing, in order to assure that they arrive without delay. For a number of years, we seriously explored various "quality of service" schemes, including having our engineers convene a Quality of Service Working Group. As it developed, though, all of our research and practical experience supported the conclusion that it was far more cost effective to simply provide more bandwidth. With enough bandwidth in the network, there is no congestion and video bits do not need preferential treatment. All of the bits arrive fast enough, even if intermingled.
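A quick back-of-envelope check shows why "just provide more bandwidth" works at Abilene scale.  The bitrates below are my own rough assumptions (a ~20 Mbps HDTV stream, ~1 Mbps home broadband of that era), not figures from the testimony:

```python
# Rough capacity arithmetic for the "more bandwidth beats QoS" argument.
# All bitrates are illustrative assumptions, not from the testimony.

BACKBONE_BPS  = 10_000_000_000  # a 10 Gbps Abilene-class backbone link
HOME_BPS      = 1_000_000       # ~1 Mbps home broadband, circa 2006
HD_STREAM_BPS = 20_000_000      # one HDTV stream at a rough 20 Mbps

# Simultaneous HD streams the link carries with zero queuing delay:
print(BACKBONE_BPS // HD_STREAM_BPS)  # 500

# The "10,000 times faster than home broadband" comparison:
print(BACKBONE_BPS // HOME_BPS)       # 10000
```

With hundreds of full-rate HD streams fitting on one backbone link, there is simply nothing for a prioritization scheme to prioritize.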

Bakula suggested that the focus of network development should be on continuation of the Internet's simplicity, coupled with an investment focus on bandwidth expansion, including construction of a fiber-based infrastructure that can be very cost-effectively upgraded over time.

We would argue that rather than introduce additional complexity into the network fabric, and additional costs to implement these prioritizing techniques, the telecom providers should focus on providing Americans with an abundance of bandwidth - and the quality problems will take care of themselves.

For example, if a provider simply brought a gigabit Ethernet connection to your home, you could connect that to your home computer with only a $15 card. If the provider insists on dividing up that bandwidth into various separate pipes for telephone and video and internet, the resulting set top box might cost as much as $150. Simple is cheaper. Complex is costly.

Bakula speaks of a future where, once a basic fiber network is in place, bandwidth could be increased by an order of magnitude every five years for a per-port cost of $30-$50 per year.

It does not cost all that much, relatively, to upgrade a network once the basic wiring is in place -- that's the big original cost.  For example, a university campus in the Midwest that serves 14,000 students and faculty, recently estimated it would cost about $150 per port (per end user) to replicate their current 100 Mbps network for a five year period, or about $30 a year per user. To upgrade to 1000 Mbps (1 gigabit) it would cost $250, or about $50 per year.
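The campus numbers above reduce to simple per-year arithmetic; spelled out:

```python
# Per-user, per-year cost of the campus network upgrades cited above.
YEARS = 5

cost_100mbps = 150  # dollars per port over five years, 100 Mbps network
cost_1gbps   = 250  # dollars per port over five years, 1 Gbps network

print(cost_100mbps / YEARS)  # 30.0  -> about $30 per user per year
print(cost_1gbps / YEARS)    # 50.0  -> about $50 per user per year
```

Ten times the bandwidth for well under twice the per-port cost is the crux of the "fiber first, then cheap upgrades" argument.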

Innovation, says Bakula, is another factor that favors the open, standards-based, "neutral-core/intelligent-edge" Internet model.  He also points out that very little of the Internet's steady flow of innovation has come from either telephone or cable companies.  

A simple design is not only less expensive: it enables and encourages innovation...The original Internet grew so fast, and spurred so many new uses, in part because of the way it was designed. It was designed to have an agnostic, neutral "core" whose job was to pass packets back and forth -- and not to discriminate or examine the packets themselves. This allowed the network to be very cost efficient and economical. It also allowed all of the "intelligence" in the network to be at the "edge," that is, in the hands of the user.

This was very important to the evolution of the Internet. The network provider did not have control, the user did. As long as the user utilized the standardized protocols, he could expect to send and receive packets to anyone else on the network in a completely understandable, predictable manner. That allowed the user to experiment with new programs, new applications, slightly tweaked applications, and even new devices -- and the user would know that the network would treat the packets all exactly alike.

Innovation was possible and could happen very quickly at "the edge" because you didn't have to re-architect or re-build the entire network in order to make a tweak or improvement in an end-user technology (such as improving a web search engine or developing a new video encoding program).

As a result of this remarkable design, sometimes called "end-to-end architecture," an explosion of new Internet technologies were developed over the past decade, many of them on university campuses or by recent graduates. The World Wide Web, the Web browser, the search engine, instant messaging, and many other technologies were innovations by users of the network.  Not one of these innovations was developed by telephone or cable companies.
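The "end-to-end architecture" Bakula describes can be sketched in a few lines: the network just delivers opaque datagrams, and the meaning of the bytes is defined entirely by the endpoints.  The "HELLO/1.0" protocol below is a made-up toy, which is exactly the point -- no network operator had to approve it:

```python
# Two endpoints invent their own application protocol over plain UDP;
# the network layer neither examines nor discriminates among packets.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The "protocol" exists only at the edge, in the endpoints' code:
sender.sendto(b"HELLO/1.0 greeting", ("127.0.0.1", port))

data, _ = receiver.recvfrom(1024)
print(data.decode())                   # HELLO/1.0 greeting

sender.close()
receiver.close()
```

Swap in a new message format tomorrow and nothing in the network needs to change -- that's innovation at the edge.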

Tags: net neutrality



This sounds a lot like Tad Williams' Otherland

Where the "net" is the communications channel, used for communications and entertainment. There's no difference between TV and streaming video, because it is all the same channel and you can access any program in the world that way. His conception of the future (save perhaps the virtual reality) is pretty interesting.

by MNPundit 2006-05-22 10:33AM | 0 recs
Re: Getting From Internet1 to Internet2

Excellent post!  When does the Congress get to hear about this?  We need to make it widely known that we don't need the stupid telephone company to do our innovation.  The next generation networks already exist, and the telcos are standing in the way of their deployment.

The future, as they say, is already here.  It's just not well distributed.

by jwb 2006-05-22 11:22AM | 0 recs
Re: Getting From Internet1 to Internet2

My favorite part about IPv6:

IP addresses change significantly with IPv6. IPv6 addresses are 16 bytes (128 bits) long rather than four bytes (32 bits). This larger size means that IPv6 supports more than

     300,000,000,000,000,000,000,000,000,000,000,000,000

possible addresses! In the coming years, as an increasing number of cell phones, PDAs, and other consumer electronics expand their networking capability, the smaller IPv4 address space will likely run out and IPv6 addresses will become necessary.
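Both address-space sizes follow directly from the address widths; a quick check:

```python
# Address space sizes implied by 32-bit and 128-bit addresses.
ipv4_addresses = 2 ** 32   # four bytes
ipv6_addresses = 2 ** 128  # sixteen bytes

print(ipv4_addresses)  # 4294967296 (about 4.3 billion)
print(ipv6_addresses)  # 340282366920938463463374607431768211456

# Comfortably more than the 3 x 10^38 figure quoted above:
print(ipv6_addresses > 3 * 10 ** 38)  # True
```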

by Gigadafud 2006-05-22 11:39AM | 0 recs
Re: Getting From Internet1 to Internet2

There's a pioneer of sorts involved with wireless network development in the Champaign-Urbana area named Sascha Meinrath.  As I understand what he says, wireless community networks have basically two layers.  One layer is the infrastructure for relaying bits through the air.  The other layer is the infrastructure of computers receiving and sending the bits.  If a community doesn't have access to the infrastructure for connecting to the Internet, it can still develop connections between computers on a local network.  All that's needed are cheap computer cards (like those described by Bakula) and cheap tin can antennas on every roof (cost is between $3 and $7).  Once this local wireless network is in place, communities can then explore how to expand it to include global network access.

The regional phone monopoly and cable monopoly don't want you and me to understand how cheap it is to gain enormous speeds for HDTV streaming and VoIP calling capability.  Think about it this way.  If you have two computers in a room and connect them with a tin can antenna, moving bits from one computer to the other is almost instantaneous.  If you move the antenna to your roof, and move your second computer down the street, the speed diminishes imperceptibly.  The addition of a WiMax antenna stretches that distance to miles, allowing neighboring communities to connect in the same high-speed manner.  Cost?  Cheap.  Cheap.

Everything that the universities do, our community can do, and the cost is affordable.  Our emergency services, city government services, school programs, concerts, etc., can enjoy state-of-the-art technology at a low cost.

Maybe it's time to rethink how we communicate.  Maybe it's time to gather tin cans and get those tin can antennas on every roof.  I've taken the first steps for our tin can antenna project in my community.  Hope you do too.  

by tompoe 2006-05-23 06:46AM | 0 recs
Re: Getting From Internet1 to Internet2

What Sascha is doing is great and, if I remember correctly, the software is open-source, which also helps reduce costs. There are also a lot of other people and companies working on this front, and a lot of cities have put out RFPs for muni-wireless networks, which is encouraging.

There are, however, performance challenges for citywide deployments based on existing Wi-Fi spectrum (especially if you're talking HD-quality signals, but even for things like VoIP).  I don't think it's clear yet how serious these are, and lots of work is being done to address them, so I'm pretty confident that muni-wireless networks can be viable and valuable.

One of the things I'd like to see Congress do is free up some of the spectrum broadcasters will (FINALLY!) be returning (in 2008, I think) for unlicensed use similar to today's unlicensed Wi-Fi uses.  

This spectrum has much better characteristics for penetrating inside buildings vs. current Wi-Fi spectrum bands, which is important if you're trying to build a communitywide public-access network that serves homes and businesses without having to install outdoor antennas at each location.  

Intel and Microsoft have come out strongly in favor of this use of spectrum (for their own corporate reasons), and several bills have been introduced in Congress (it's even in Stevens' otherwise-pretty-awful telecom bill).  The New America Foundation in DC has been doing good work on this front.

To me, the best option is a "smart build" strategy that combines wireless (quick, cheap, mobile, but relatively limited in bandwidth) and fiber (more costly and slower to deploy, but potentially unlimited bandwidth and relatively low-cost upgrades as bandwidth demands increase).  

The book "America at the Internet Crossroads" presents an argument and a strategy for a "smart build" approach communities could take to create this kind of hybrid network over time.  The book also takes on the FUD arguments presented by the duopoly pipe owners.  I recommend it to local community leaders and anyone else interested in these issues.

More than a dozen states have passed laws (pushed by cable/telco interests) that restrict cities' ability to deploy muni-networks.  We need federal legislation (proposed by Lautenberg-McCain) to prohibit such laws.  Provisions in some bills floating around (COPE, for example) address this issue, some of them badly (including COPE, which might do more harm than good on that score).  

To me, this is as important as, or even more important than, a net neutrality provision, especially as a long-term solution--both technically and in terms of really ensuring we've got an ultra-high-capacity, ubiquitous, "neutral" Internet that's not controlled by interests with strong incentives to "play favorites."  Ultimately, I'd rather we spend money on building that kind of network community-by-community than spend it on lawyers to handle the endless lawsuits that can be expected if we rely on net neutrality rules.

That's what telcos do, and they're good at it.  If they can't successfully lobby Congress and regulators, they sue them into submission and, in the meanwhile, they do pretty much what they want and investors don't invest in businesses that could be at risk if the telcos win.  And, unfortunately, the court that usually gets these cases seems to have a decidedly Reagan-esque view of these issues.

As a point of reference in terms of cost, it might cost somewhere in the neighborhood of $200 billion to deploy a future-proof all-fiber network to every American home and business (including a wireless component as well).  If I'm not mistaken, that's less than the Bush Admin. has spent to bring Iraq to the point of civil war.  It boggles the mind to even think about it.

by mitchipd 2006-05-23 12:07PM | 0 recs

