Excerpt from: Galloway, Alexander R. Protocol: How Control Exists After Decentralization. Leonardo. Cambridge, Mass.: MIT Press, 2004, pp. 7ff.


Protocol is not a new word. Prior to its usage in computing, protocol referred to any type of correct or proper behavior within a specific system of conventions. It is an important concept in the area of social etiquette as well as in the fields of diplomacy and international relations. Etymologically it refers to a fly-leaf glued to the beginning of a document, but in familiar usage the word came to mean any introductory paper summarizing the key points of a diplomatic agreement or treaty. However, with the advent of digital computing, the term has taken on a slightly different meaning. Now, protocols refer specifically to standards governing the implementation of specific technologies. Like their diplomatic predecessors, computer protocols establish the essential points necessary to enact an agreed-upon standard of action. Like their diplomatic predecessors, computer protocols are vetted out between negotiating parties and then materialized in the real world by large populations of participants (in one case citizens, and in the other computer users). Yet instead of governing social or political practices as did their diplomatic predecessors, computer protocols govern how specific technologies are agreed to, adopted, implemented, and ultimately used by people around the world. What was once a question of consideration and sense is now a question of logic and physics.

To help understand the concept of computer protocols, consider the analogy of the highway system. Many different combinations of roads are available to a person driving from point A to point B. However, en route one is compelled to stop at red lights, stay between the white lines, follow a reasonably direct path, and so on. These conventional rules that govern the set of possible behavior patterns within a heterogeneous system are what computer scientists call protocol. Thus, protocol is a technique for achieving voluntary regulation within a contingent environment. These regulations always operate at the level of coding—they encode packets of information so they may be transported; they code documents so they may be effectively parsed; they code communication so local devices may effectively communicate with foreign devices. Protocols are highly formal; that is, they encapsulate information inside a technically defined wrapper, while remaining relatively indifferent to the content of information contained within. Viewed as a whole, protocol is a distributed management system that allows control to exist within a heterogeneous material milieu.
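
As a minimal illustration of this formal indifference, consider a toy framing protocol sketched in Python. The functions and the four-byte length header are invented for this example, not drawn from any real standard; the point is only that the wrapper transports anything while never inspecting what it carries:

```python
import struct

def encode_frame(payload: bytes) -> bytes:
    """Wrap an arbitrary payload in a four-byte length header.
    The wrapper is indifferent to what the payload contains."""
    return struct.pack("!I", len(payload)) + payload

def decode_frame(frame: bytes) -> bytes:
    """Strip the wrapper and return the payload untouched."""
    (length,) = struct.unpack("!I", frame[:4])
    return frame[4:4 + length]

message = "any content whatsoever".encode("utf-8")
assert decode_frame(encode_frame(message)) == message
```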

It is common for contemporary critics to describe the Internet as an unpredictable mass of data: rhizomatic and lacking central organization. This position holds that since new communication technologies are based on the elimination of centralized command and hierarchical control, the world is witnessing a general disappearance of control as such. This could not be further from the truth. I argue in this book that protocol is how technological control exists after decentralization. The “after” in my title refers not only to the historical moment after decentralization has come into existence, but also, and more important, to the historical phase after decentralization, that is, after it is dead and gone, replaced as the supreme social management style by the diagram of distribution.

What contributes to this misconception (that the Internet is chaotic rather than highly controlled), I suggest, is that protocol is based on a contradiction between two opposing machines: one machine radically distributes control into autonomous locales; the other focuses control into rigidly defined hierarchies. The tension between these two machines, a dialectical tension, creates a hospitable climate for protocological control.


TCP/IP

Emblematic of the first machinic technology, the one that gives the Internet its common image as an uncontrollable network, is the family of protocols known as TCP/IP. TCP and IP are the leading protocols for the actual transmission of data from one computer to another over the network. TCP and IP work together to establish connections between computers and move data packets effectively through those connections. Because of the way TCP/IP was designed, any computer on the network can talk to any other computer, resulting in a nonhierarchical, peer-to-peer relationship. As one technical manual puts it: “IP uses an anarchic and highly distributed model, with every device being an equal peer to every other device on the global Internet.” (That a technical manual glowingly uses the term “anarchic” is but one symptom of today’s strange new world!)
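
The peer relationship can be sketched with two TCP endpoints on a single machine. This is a minimal, hypothetical illustration using Python’s standard socket module, with the loopback address standing in for any two hosts on the network; “server” and “client” are just roles, not fixed positions in a hierarchy:

```python
import socket
import threading

def serve_once(listener: socket.socket) -> None:
    conn, _addr = listener.accept()
    with conn:
        data = conn.recv(1024)      # TCP delivers the bytes intact and in order
        conn.sendall(data.upper())  # echo back, transformed

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))     # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=serve_once, args=(listener,)).start()

with socket.create_connection(("127.0.0.1", port)) as peer:
    peer.sendall(b"hello")          # IP moves the packets; TCP guarantees arrival
    print(peer.recv(1024))          # b'HELLO'
```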


DNS

Emblematic of the second machinic technology, the one that focuses control into rigidly defined hierarchies, is the DNS. DNS is a large decentralized database that maps network names to network addresses. This mapping is required for nearly every network transaction. For example, in order to visit “www.rhizome.org” on the Internet one’s computer must first translate the name “www.rhizome.org,” itself geographically vague, into a specific address on the physical network. These specific addresses are called IP addresses and are written as a series of four numbers like so: 206.252.131.211. All DNS information is controlled in a hierarchical, inverted-tree structure. Ironically, then, nearly all Web traffic must submit to a hierarchical structure (DNS) to gain access to the anarchic and radically horizontal structure of the Internet. As I demonstrate later, this contradictory logic is rampant throughout the apparatus of protocol.
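
That translation is a single library call on most systems. The sketch below, which assumes a machine with working network access, asks the DNS hierarchy for whatever address currently stands behind the name:

```python
import socket

# Translate a geographically vague name into a specific network address.
# The address returned today will differ from the book's 2004 example.
print(socket.gethostbyname("www.rhizome.org"))
```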


Address Resolution

The process of converting domain names to IP addresses is called resolution. At the top of this inverted tree are a handful of so-called “root” servers holding ultimate control and delegating lesser control to lower branches in the hierarchy. There are over a dozen root servers located around the world in places like Japan and Europe, as well as in several U.S. locations. To follow the branches of control, one must parse the address in reverse, starting with the top-level domain, in this case “org.” First, the root server receives a request from the user and directs the user to another machine that has authority over the “org” domain, which in turn directs the user to another machine that has authority over the “rhizome” subsection, which in turn returns the IP address for the specific machine known as “www.” Only the computer at the end of the branch knows about its immediate neighborhood, and thus it is the only machine with authoritative DNS information. In other words, resolution happens like this: a new branch of the tree is followed at each successive segment, allowing the user to find the authoritative DNS source machine and thus to derive the IP address from the domain name. Once the IP address is known, the network transaction can proceed normally.
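
The tree walk can be modeled schematically. The nested dictionary below is hypothetical stand-in data, not the real root zone, but the traversal mirrors the delegation just described: parse the name in reverse and follow one branch per segment:

```python
# A schematic model of the inverted tree: each level knows only who is
# authoritative for the level directly below it.
tree = {
    "org": {
        "rhizome": {
            "www": "206.252.131.211",  # the book's example address
        },
    },
}

def resolve(name: str) -> str:
    node = tree
    # Parse the address in reverse, starting with the top-level domain.
    for label in reversed(name.split(".")):
        node = node[label]             # follow a new branch at each segment
    return node                        # only the final branch holds the IP

print(resolve("www.rhizome.org"))      # '206.252.131.211'
```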

Because the DNS system is structured like an inverted tree, each branch of the tree holds absolute control over everything below it. 

...
The inventor of the World Wide Web, Tim Berners-Lee, describes the DNS system as the “one centralized Achilles’ heel by which [the Web] can all be brought down or controlled.” If hypothetically some controlling authority wished to ban China from the Internet (e.g., during an outbreak of hostilities), they could do so very easily through a simple modification of the information contained in the root servers at the top of the inverted tree. Within twenty-four hours, China would vanish from the Internet. As DNS renegade and Name.Space founder Paul Garrin writes: “With the stroke of a delete key, whole countries can be blacked out from the rest of the net. With the ‘.’ [root file] centralized, this is easily done. . . . Control the ‘.’ and you control access.” Since the root servers are at the top, they have ultimate control over the existence (but not necessarily the content) of each lesser branch. Without the foundational support of the root servers, all lesser branches of the DNS network become unusable.

Such a reality should shatter our image of the Internet as a vast, uncontrollable meshwork. Any networked relation will have multiple, nested protocols. To steal an insight from Marshall McLuhan, the content of every new protocol is always another protocol. Take, for example, a typical transaction on the World Wide Web. A Web page containing text and graphics (themselves protocological artifacts) is marked up in the HTML protocol. The protocol known as Hypertext Transfer Protocol (HTTP) encapsulates this HTML object and allows it to be served by an Internet host. However, both client and host must abide by the TCP protocol to ensure that the HTTP object arrives in one piece. Finally, TCP is itself nested within the Internet Protocol, a protocol that is in charge of actually moving data packets from one machine to another. Ultimately the entire bundle (the primary data object encapsulated within each successive protocol) is transported according to the rules of the only “privileged” protocol, that of the physical media itself (fiber-optic cables, telephone lines, air waves, etc.). The flexible networks and flows identified in the world economy by Manuel Castells and other anchormen of the Third Machine Age are not mere metaphors; they are in fact built directly into the technical specifications of network protocols. By design, protocols such as the Internet Protocol cannot be centralized.
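
That nesting can be made visible by performing one layer by hand. In the hedged sketch below, the HTTP message is composed as plain text and handed to a TCP connection, while the operating system and the physical medium supply the remaining wrappers; example.com is a placeholder host standing in for any Web server:

```python
import socket

host = "example.com"                 # placeholder host for illustration
request = (                          # the HTTP layer: a plain-text envelope
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n\r\n"
).encode("ascii")

# create_connection supplies the TCP layer; the operating system wraps each
# TCP segment in IP packets, and the network hardware wraps those in whatever
# the physical medium requires. Each layer is indifferent to its contents.
with socket.create_connection((host, 80)) as sock:
    sock.sendall(request)
    reply = sock.recv(4096)

print(reply.split(b"\r\n", 1)[0])    # status line of the HTTP response
```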

...

Centralized and Decentralized Networks

A distributed network differs from other networks such as centralized and decentralized networks in the arrangement of its internal structure. A centralized network consists of a single central power point (a host), to which radial nodes are attached. The central point is connected to all of the satellite nodes, which are themselves connected only to the central host. A decentralized network, on the other hand, has multiple central hosts, each with its own set of satellite nodes. A satellite node may have connectivity with one or more hosts, but not with other nodes. Communication generally travels unidirectionally within both centralized and decentralized networks: from the central trunks to the radial leaves.
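
The two topologies can be rendered as simple adjacency lists. The sketch below is an invented toy model, not taken from the text; its point is only that every path between satellites runs through a trunk:

```python
# Satellites touch only their host, never each other.
centralized = {
    "host": ["a", "b", "c"],
    "a": ["host"], "b": ["host"], "c": ["host"],
}

# Several hosts, each with its own satellites; hosts interconnect.
decentralized = {
    "host1": ["host2", "a", "b"],
    "host2": ["host1", "c", "d"],
    "a": ["host1"], "b": ["host1"],
    "c": ["host2"], "d": ["host2"],
}

# Any path between two satellites must pass through a central trunk.
assert "b" not in centralized["a"]
assert "c" not in decentralized["a"]
```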


Distributed Network

The distributed network is an entirely different matter. Distributed networks are native to Deleuze’s control societies. Each point in a distributed network is neither a central hub nor a satellite node; there are neither trunks nor leaves. The network contains nothing but “intelligent end-point systems that are self-deterministic, allowing each end-point system to communicate with any host it chooses.” Like the rhizome, each node in a distributed network may establish direct communication with another node, without having to appeal to a hierarchical intermediary. Yet in order to initiate communication, the two nodes must speak the same language. This is why protocol is important. Shared protocols are what define the landscape of the network: who is connected to whom. As architect Branden Hookway writes: “[d]istributed systems require for their operation a homogenous standard of interconnectivity.” Compatible protocols lead to network articulation, while incompatible protocols lead to network disarticulation. For example, two computers running the DNS addressing protocol will be able to communicate effectively with each other about network addresses. Sharing the DNS protocol allows them to be networked. However, the same computers will not be able to communicate with foreign devices running, for example, the NIS addressing protocol or the WINS protocol. Without a shared protocol, there is no network.
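
A toy model makes the point about articulation. In the sketch below, the node names and protocol sets are invented; two nodes form a network exactly when their protocol sets intersect:

```python
# Nodes connect directly, peer to peer, but only via a shared protocol.
nodes = {
    "n1": {"dns"},
    "n2": {"dns", "nis"},
    "n3": {"wins"},
}

def can_network(a: str, b: str) -> bool:
    """Two nodes articulate a network only when a protocol is shared."""
    return bool(nodes[a] & nodes[b])

print(can_network("n1", "n2"))   # True: both speak DNS
print(can_network("n1", "n3"))   # False: no shared protocol, no network
```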





