Software-Defined Internet… So-called “Net Neutrality” would kill that dream for good


In recent weeks, we’ve seen both American and European regulators offer more narrowly confined contexts for their principles of net neutrality. In Europe, lawmakers are eager to codify into law a new definition that allows service providers to innovate to deliver high-quality services, such as the class needed for high-definition videoconferencing, so long as they don’t degrade the quality of service for others. And in the U.S., FCC Chairman Tom Wheeler (albeit with some difficulty getting his message through to the press) drew distinctions between the interconnection or peering agreements that service and content providers may make (such as Comcast and Netflix), and the sacrifices that all providers must make to maintain a discrimination-free Internet.

If everyone ends up agreeing on this–or rather, to borrow a phrase from “The Simpsons,” “everyone who counts“–then this calls into question the guiding principle that then-FCC Chairman Julius Genachowski cited as the basis of all Internet innovation since its very origin: “TCP/IP reflects a so-called ‘end-to-end’ system design, in which the routers in the middle of the network are not optimized toward the handling of any particular application, while network endpoints (the user’s computer or other communicating device) are expected to perform the functions necessary to support specific networked applications.”

It occurred to me that this flies right in the face of the concept of software-defined networking: the idea that the application may influence the schematic of the network over which it provides services. If this is a perfectly acceptable and even preferable model for corporate networks, why does it suddenly become verboten when applied in theory to the Internet at large–to the idea that applications that need more bandwidth should be offered a path of least resistance?
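
To keep that comparison concrete, here is a minimal sketch of the corporate-SDN model in Python: an application asks a central controller for a path with a bandwidth floor, and the controller programs the switches along it. Every name and interface below (the classes, the install_rule signature) is hypothetical, invented for illustration rather than drawn from any real controller’s API.

```python
# A minimal sketch of application-driven networking: the app, not the router,
# shapes the network. A controller holds the candidate paths; an application
# requests a bandwidth floor, and the controller installs matching flow rules.

class Switch:
    def __init__(self, name):
        self.name = name
        self.rules = []

    def install_rule(self, match, action, reserve_mbps):
        # Record a flow rule; a real switch would program its forwarding table.
        self.rules.append({"match": match, "action": action,
                           "reserve_mbps": reserve_mbps})

class Path:
    def __init__(self, switches, capacity_mbps):
        self.switches = switches
        self.capacity_mbps = capacity_mbps
        self.reserved_mbps = 0

    def spare_capacity(self):
        return self.capacity_mbps - self.reserved_mbps

class Controller:
    def __init__(self, paths):
        self.paths = paths  # (src, dst) -> list of candidate Path objects

    def request_path(self, src, dst, min_mbps):
        """Grant the first candidate path with enough headroom."""
        for path in self.paths.get((src, dst), []):
            if path.spare_capacity() >= min_mbps:
                path.reserved_mbps += min_mbps
                for sw in path.switches:
                    sw.install_rule((src, dst), "forward", min_mbps)
                return path
        return None  # nothing qualifies; the flow stays best-effort

# Usage: an HD videoconferencing app asking for a 10 Mbps floor.
s1, s2 = Switch("edge-a"), Switch("edge-b")
ctrl = Controller({("hostA", "hostB"): [Path([s1, s2], capacity_mbps=100)]})
granted = ctrl.request_path("hostA", "hostB", min_mbps=10)
print(granted is not None, s1.rules)
```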

In 2012, a team of researchers from UC Berkeley and the affiliated International Computer Science Institute (ICSI) publicly put forth their concept of a software-defined Internet: essentially, one that decouples Internet architecture from Internet infrastructure.

In their preamble to “Making the Internet More Evolvable” (.pdf), the team states, “Some argue that we require a radically different architecture to enable evolution. To the contrary, we contend that a simple re-engineering of the basic Internet interfaces to make them more modular and extensible–as one would in any software system–is sufficient to produce a far more evolvable Internet.”

Because the infrastructure is so inflexible, these researchers argue, the architecture has become almost impossible to evolve–as evidenced by the still-ongoing transition to IPv6. For their solution, the Berkeley team would put Genachowski’s assertion to the ultimate test. First wiping the blackboard clean, they reassemble the Internet’s data plane as a network core with its own internal address scheme, and a network edge that employs software-based forwarding.
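
A rough sketch of that core/edge split follows, under my own simplifying assumptions–the map-and-encapsulate scheme and all names here are illustrative, not the paper’s actual design. The edge does the protocol-aware work in software, mapping the packet’s real destination onto an internal core address; the core then forwards on that internal address alone, knowing nothing about the end-to-end protocol.

```python
import ipaddress

# Hypothetical internal addressing: which core address reaches which prefix.
CORE_ADDRESS_OF_EDGE = {
    "10.1.0.0/16": "core:7",   # prefixes behind edge router 7
    "10.2.0.0/16": "core:9",
}

def in_prefix(addr, prefix):
    return ipaddress.ip_address(addr) in ipaddress.ip_network(prefix)

def edge_encapsulate(packet):
    """Software forwarding at the edge: wrap the packet in a core header."""
    for prefix, core_addr in CORE_ADDRESS_OF_EDGE.items():
        if in_prefix(packet["dst"], prefix):
            return {"core_dst": core_addr, "payload": packet}
    raise ValueError("no egress edge known for " + packet["dst"])

def core_forward(encapsulated):
    """The core forwards on its internal address alone; the payload is opaque."""
    return encapsulated["core_dst"]

pkt = {"src": "10.2.3.4", "dst": "10.1.5.6"}
assert core_forward(edge_encapsulate(pkt)) == "core:7"
```

Because the core never inspects the payload, the architecture riding on top of it can change–IPv6, or something stranger–without touching the infrastructure in the middle.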

They then delegate the task of defining how packets are forwarded to the edge. This way, it becomes unnecessary for the protocol to define the behavior of routers in the middle of the network, as this behavior is entirely anticipated and verified at the edge. Framed like this, the Berkeley/ICSI team’s network (which they code-named “Omega,” perhaps after the NSA left behind no remaining code-words for anyone else) appears to be the gold standard for Genachowski’s original vision, as embodied in the FCC’s Open Internet guidelines (which, for now, are suspended). It’s end-to-end design in real time, where the routers are essentially slaves.

Then you come across the following passage in “Software-Defined Internet Architecture” (.pdf) [emphasis mine]:

“…One need not specify the forwarding behavior of each box beforehand because as long as two routers talk to the same controller they can be made to interoperate by the controller. This allows us to take a top-down perspective, by which we mean that we focus not on what each box does individually but instead first look at how to decompose Internet service into well-defined tasks, and then consider how to implement those tasks in a modular fashion.”
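
The sketch below illustrates that top-down decomposition as I read it; the topology, link costs, and push interface are all hypothetical. A controller with a global view computes the task (“carry traffic from r1 toward r3”) and pushes per-box forwarding entries, so the two routers interoperate without ever negotiating with each other.

```python
import heapq

TOPOLOGY = {  # the controller's global view: link costs between boxes
    "r1": {"r2": 1, "r3": 4},
    "r2": {"r1": 1, "r3": 1},
    "r3": {"r1": 4, "r2": 1},
}

def shortest_next_hops(src, dst):
    """Dijkstra over the global view; returns each box's next hop toward dst."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in TOPOLOGY[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (dist[v], v))
    # Walk back from dst, recording each box's next hop along the path.
    hops, node = {}, dst
    while node != src:
        hops[prev[node]] = node
        node = prev[node]
    return hops

def push_forwarding_entries(dst_prefix, hops, tables):
    # The "push": the controller writes entries; the boxes do nothing clever.
    for box, next_hop in hops.items():
        tables.setdefault(box, {})[dst_prefix] = next_hop

tables = {}
push_forwarding_entries("10.9.0.0/16", shortest_next_hops("r1", "r3"), tables)
print(tables)  # r1 -> r2, r2 -> r3: the boxes interoperate without negotiating
```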

Uh-oh. The eureka moment here happens when Berkeley’s expertise and Genachowski’s vision coalesce into something that Genachowski might never have anticipated. You see, if the edge is endowed with enough intelligence to steer routing tasks according to application class, then even if the Internet as a whole is greatly improved, it is no longer neutral in any way, shape, or form.

The danger in this will be highlighted at some point, despite the as-yet-unfathomably enormous potential benefits of a software-defined Internet. Someone will raise the specter of evil, and an advocacy group will declare it a conspiracy.

So let’s get it out of the way now, lest we lose the courage to discuss “Omega’s” potential. If software can define routes according to service class, then it will become feasible, and certainly tempting, for service providers to lock down those service classes, carving the maps for their premium Internet services in advance. Indeed, there may be valid engineering reasons for them to do so. But the business reasons will also be there, and they will be given the blanket designation of “innovation.”
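
To make the temptation concrete, here is a deliberately blunt, hypothetical sketch–no provider’s real configuration. Once routes live in software, the entire “fast lane” can be a pre-carved lookup table.

```python
# Two hypothetical paths through a provider's network.
PATHS = {
    "fast":        {"latency_ms": 5,  "hops": ["edge-a", "core-1", "edge-b"]},
    "best_effort": {"latency_ms": 40, "hops": ["edge-a", "core-2", "core-3",
                                               "edge-b"]},
}

# The map is carved in advance: the carrier's own video service and paying
# partners get the fast path; everyone else is routed around it.
CLASS_TO_PATH = {
    "carrier-video":   "fast",
    "partner-premium": "fast",
    "default":         "best_effort",
}

def route(flow_class):
    return PATHS[CLASS_TO_PATH.get(flow_class, "default")]

print(route("carrier-video")["latency_ms"])  # 5
print(route("startup-video")["latency_ms"])  # 40 -- same app, slower lane
```

Note how little machinery the lock-in requires: the discrimination is a dictionary entry, not a network redesign.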

If Comcast or something like it has the power to designate “fast-lanes” for exclusive content provider customers (itself included) entirely in software, then the only remaining reason why a government regulator should prohibit it from doing so is that it can establish artificially high prices for such premium service–whose eventual costs are passed down to consumers. If the Internet gets faster, this passing down will only happen sooner.

The FCC presently lacks the power to regulate commerce at this level. Perhaps another agency has that power. In any event, it will be up to Congress to make that determination, and to give that agency the authority and mandate. And right now, Congress is incapable of deciding the proper way to crack an egg. At some point, there will need to be an open panel of influential people with the intelligence and wherewithal to reason their way through this problem. And right now, there isn’t one. – Scott

Read more about: software-defined Internet, FCC

By Jarrett Neil Ridlinghafer 
CTO of the following –
4DHealthware.com
Synapse Synergy Group
EinDrive.com
HTML5Deck.com
PerfectCapacity.com
CSPComply.com
Chief Technology Analyst, Author & Consultant
Compass Solutions, LLC
Atheneum-Partners
Hadoop Magazine
BrainBench.com
Cloud Consulting International
