Clark, David D. "The Contingent Internet." Daedalus 145, no. 1 (January 2016): 9–17. https://doi.org/10.1162/DAED_a_00361.

This possibility may itself seem surprising: the Internet today is so omnipresent, so much a fixture of our lives that it seems almost as if it “had to be that way.” What might an alternate Internet have looked like? This is an important question, because to recognize that there were multiple options for the early Internet, and that the Internet as we know it is contingent on decisions that could have led to different outcomes, is to recognize that the future of the Internet is itself contingent. Society will meet forks in the road that will determine the future of the Internet, and recognizing these points and discussing the alternatives, rather than later looking back and wondering if we chose the right path, is an opportunity we cannot forgo.

The Internet is a “general purpose” network, designed for a variety of uses. It is suited to sending email, watching video, playing computer games, browsing Web pages, and myriad other applications. To an Internet engineer, the Internet is the system that moves data, and the applications (like a Web browser, which users might lump into the larger concept of “Internet”) run on top of that data-transport service. This modularity, and this generality, seem a natural way to structure a network that hooks computers together: computers are general-purpose devices; since the Internet hooks computers together, it too ought to be general. But this idea was quite alien to the communications engineers of the early-Internet era, who largely worked for telephone companies. They asked what was to them an obvious question: how can you design something if you don’t know what it is for? The telephone system was designed for a known purpose: to carry telephone calls. The requirements implied by that purpose drove every design decision of the telephone system; thus, the engineers from the world of telephone systems were confounded by the task of designing a system without knowing its requirements. The early history of the Internet was therefore written by people who came from a computing background, not a classical network (telephone) background. Most computers are built without a singular purpose, and this mind-set drove the Internet’s design.
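
To make this layering concrete, here is a minimal sketch (in Python; mine, not the article's) of what it means for applications to run “on top of” a general data-transport service: the transport interface is the same set of calls no matter which application protocol the bytes encode. The host, port, and request bytes are illustrative, and the snippet assumes working network access.

```python
import socket

def exchange_over_tcp(host: str, port: int, request: bytes) -> bytes:
    """Send application bytes over TCP and collect the reply.

    The transport layer neither knows nor cares which application
    protocol the bytes encode; it just moves data between endpoints.
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request)
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:                      # peer closed the connection
                break
            chunks.append(data)
        return b"".join(chunks)

# The same general-purpose call carries a Web request ...
reply = exchange_over_tcp(
    "example.com", 80,
    b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n",
)
print(reply.split(b"\r\n", 1)[0])             # e.g. b'HTTP/1.1 200 OK'
# ... and, pointed at a mail server or a game server instead, it would
# carry that application's bytes with no change to the transport code.
```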

But this generality has a price. The service the Internet delivers is almost certainly not optimal for any particular application. Design for optimal performance and design for generality are two distinct objectives. And it may take more effort to design each application in a general network than in a network tailored to that application. Over the decades of the Internet’s evolution, there has been a succession of dominant applications. In its early years, the Internet was equated with email, and to ask people if they were “on the Internet” was to ask if they had an email address. Email is an undemanding application to support, and if the Internet had drifted too far toward exclusively supporting it (as was happening to some degree), the Web might not have been able to emerge. But the Web did succeed, and its presence as a complement to email reminded engineers of the value of generality. The cycle repeats: the emergence of streaming audio and video in the early 2000s tested the generality of an Internet that had drifted toward the presumption that the Web, and not email, was now the application. Today, streaming high-quality video drives the constant reengineering of the Internet, and it is tempting once again to assume that we now know what the Internet is best suited for and to optimize it accordingly. The past teaches us that we should always be alert to protect the generality of the Internet, and allow for the future even when faced with the needs of the present.

There is another aspect of generality: the applications that run over the basic transport service of the Internet are not designed or distributed by the same entity that provides that transport service. This characteristic has been called the “open” Internet, and again, this separation made sense to a computer engineer but did not fit the conceptual model of the telecommunications engineer. The telephone company installed that wire to your house to sell you telephone service, not to enable some other company to sell you theirs. From the telephone company’s perspective, it is expensive to install all those wires; how could it earn a reasonable return on that investment if it were not the exclusive service provider?

In the early days of the Internet, the only way to access the Internet from home was to use a modem to make a dial-up connection to an Internet service provider (ISP). A residential user paid the telephone company for the telephone service, and then paid the ISP for providing access. This seemed then like a minor shift in the business model of the telephone companies. But as the possibility of expanding broadband services to the home emerged in the 1990s, the corporate resistance to an open platform became quite clear. One telephone executive explained to me at the time: “If we don’t come to your party, you don’t have a party. And we don’t like your party very much. The only way you will get broadband to the home is if the FCC forces us to provide it.”

That was a fork in the road, and the Internet certainly might have taken another path. In fact, the force that led the Internet toward residential broadband was, to a considerable extent, the emergence of the cable television industry as a credible and competitive provider of high-speed residential Internet access. We continue to see echoes of this tension between the Internet as an open platform for third-party applications and broadband access as an expensive investment that should work to the advantage of its owner. The current debates around the concept of “network neutrality” are at their heart about whether broadband providers should be regulated to provide a neutral, open platform for third-party services, or whether they have the right to define, and perhaps favor, the services offered over the infrastructure they invested in building.

Another consequence of generality is that the data-transport layer of the Internet has no concept of what the application is trying to do (as opposed to the design of the telephone system, which at all levels reflects the centrality of the telephone call). If the design of the Internet required that the network understand what the application was doing, deploying a new application would require its designer to somehow modify the core of the network to include this knowledge. To the early designers, this was a fork in the road down which they did not want to go. If an application designer had to alter the network before deploying a new application, this would both complicate the process of innovation and create the potential for the network to block one application or another.
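
A short sketch (mine, not the article's) makes this concrete: below, a brand-new, invented application protocol is deployed purely by writing code at the two endpoints. The protocol and its name are made up for illustration; the point is that no router, ISP, or other network element has to be modified, or even informed, for it to run.

```python
import socket
import threading

# A made-up application protocol ("upper/1.0"): the client sends one
# UTF-8 line, the server replies with the same line upper-cased. Only
# the two endpoints know this protocol exists; the network in between
# just moves the bytes.

def serve_once(listener: socket.socket) -> None:
    conn, _addr = listener.accept()
    with conn:
        line = conn.makefile("rb").readline()        # read one request line
        conn.sendall(line.decode("utf-8").upper().encode("utf-8"))

listener = socket.create_server(("127.0.0.1", 0))    # any free local port
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello, contingent internet\n")
    print(client.recv(1024))                         # b'HELLO, CONTINGENT INTERNET\n'

listener.close()
```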

The Internet has been called the “stupid” network, the telephone system being the “intelligent” network; the open-design approach of the Internet makes perfect sense – that is, until things go wrong. If the network itself is impairing the operation of an application, the network cannot always detect or correct this. The network may be able to detect that one of its components has failed, but more complex failures may go undetected, leaving frustrated users who can see that their application is not working but who have no remedy available to them. Had we taken the fork in the road that enabled the network to know more about what each application was trying to do, the network might have been less supportive of easy innovation, but it might also have been less frustrating to use when unexpected problems inevitably arose.
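
One practical consequence can be sketched in code (my illustration, not the article's): because the network reports nothing about application-level trouble, the endpoints must impose their own deadlines and retries, and all they can observe of a silent failure is the absence of a reply. The timeout and retry values here are arbitrary illustrative choices.

```python
import socket
from typing import Optional

def request_with_deadline(host: str, port: int, payload: bytes,
                          timeout_s: float = 3.0, retries: int = 2) -> Optional[bytes]:
    """End-to-end failure handling: the network will not tell the endpoint
    that an application is broken, so the endpoint sets its own deadline."""
    for _attempt in range(retries + 1):
        try:
            with socket.create_connection((host, port), timeout=timeout_s) as sock:
                # create_connection also applies timeout_s to each recv()
                sock.sendall(payload)
                reply = sock.recv(4096)
                if reply:                  # any bytes back counts as success here
                    return reply
        except OSError:
            # Covers refused connections, resets, and timeouts alike: to the
            # endpoint, a misbehaving network looks like silence.
            pass
    return None                            # the caller decides how to surface failure
```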

Finally, the division of responsibility between the provider of the data-transport service and the provider of the application means that responsibility for core requirements like security is divided among several actors. This both makes the objective harder to achieve and creates an incentive for each actor to delegate the task to another. In this way, the design decisions that shaped the Internet as we know it likely did not optimize for secure and trustworthy operation.

