Galloway, Alexander R. Protocol: How Control Exists after Decentralization. Leonardo. Cambridge, Mass.: MIT Press, 2004, pp. 7ff.
Protocol is not a new word. Prior to its usage in computing, protocol referred to any type of correct or proper behavior within a specific system of conventions. It is an important concept in the area of social etiquette as well as in the fields of diplomacy and international relations. Etymologically it refers to a fly-leaf glued to the beginning of a document, but in familiar usage the word came to mean any introductory paper summarizing the key points of a diplomatic agreement or treaty.
However, with the advent of digital computing, the term has taken on a slightly different meaning. Now, protocols refer specifically to standards governing the implementation of specific technologies. Like their diplomatic predecessors, computer protocols establish the essential points necessary to enact an agreed-upon standard of action. Like their diplomatic predecessors, computer protocols are vetted between negotiating parties and then materialized in the real world by large populations of participants (in one case citizens, and in the other computer users). Yet instead of governing social or political practices as did their diplomatic predecessors, computer protocols govern how specific technologies are agreed to, adopted, implemented, and ultimately used by people around the world. What was once a question of consideration and sense is now a question of logic and physics.
To help understand the concept of computer protocols, consider the analogy of the highway system. Many different combinations of roads are available to a person driving from point A to point B. However, en route one is compelled to stop at red lights, stay between the white lines, follow a reasonably direct path, and so on. These conventional rules that govern the set of possible behavior patterns within a heterogeneous system are what computer scientists call protocol. Thus, protocol is a technique for achieving voluntary regulation within a contingent environment.
These regulations always operate at the level of coding—they encode packets of information so they may be transported; they code documents so they may be effectively parsed; they code communication so local devices may effectively communicate with foreign devices. Protocols are highly formal; that is, they encapsulate information inside a technically defined wrapper, while remaining relatively indifferent to the content of information contained within. Viewed as a whole, protocol is a distributed management system that allows control to exist within a heterogeneous material milieu.
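To make the idea of a formally strict but content-indifferent wrapper concrete, here is a minimal sketch in Python of a toy framing protocol. The header format (a 4-byte length prefix) is invented for illustration, not drawn from any real standard; the point is that any payload, whatever it says, is encapsulated identically.

```python
import struct

def encode(payload: bytes) -> bytes:
    """Wrap a payload in a hypothetical 4-byte big-endian length header."""
    return struct.pack(">I", len(payload)) + payload

def decode(frame: bytes) -> bytes:
    """Strip the header and return the payload, whatever it contains."""
    (length,) = struct.unpack(">I", frame[:4])
    return frame[4 : 4 + length]

# The wrapper is indifferent to content: text and raw bytes are treated alike.
for payload in [b"hello, world", b"\x00\x01\x02\x03"]:
    assert decode(encode(payload)) == payload
```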
It is common for contemporary critics to describe the Internet as an unpredictable mass of data—rhizomatic and lacking central organization. This position states that since new communication technologies are based on the elimination of centralized command and hierarchical control, it follows that the world is witnessing a general disappearance of control as such.
This could not be further from the truth. I argue in this book that protocol is how technological control exists after decentralization. The “after” in my title refers not only to the historical moment after decentralization has come into existence, but also—and more important—to the historical phase after decentralization, that is, after it is dead and gone, replaced as the supreme social management style by the diagram of distribution.
What contributes to this misconception (that the Internet is chaotic rather than highly controlled), I suggest, is that protocol is based on a contradiction between two opposing machines: one machine radically distributes control into autonomous locales; the other machine focuses control into rigidly defined hierarchies. The tension between these two machines—a dialectical tension—creates a hospitable climate for protocological control.
Emblematic of the first machinic technology, the one that gives the Internet its common image as an uncontrollable network, is the family of protocols known as TCP/IP. TCP and IP are the leading protocols for the actual transmission of data from one computer to another over the network. TCP and IP work together to establish connections between computers and move data packets effectively through those connections. Because of the way TCP/IP was designed, any computer on the network can talk to any other computer, resulting in a nonhierarchical, peer-to-peer relationship.
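This peer-to-peer premise can be observed from any networked machine. The sketch below uses Python's standard socket library to open a TCP connection directly to another host; example.org stands in for an arbitrary peer, and the example assumes network access and that the peer answers on the conventional HTTP port 80.

```python
import socket

# Any host may initiate a connection to any other host: no central
# switchboard mediates the exchange. TCP provides the ordered, reliable
# byte stream; IP moves the individual packets toward the destination.
with socket.create_connection(("example.org", 80), timeout=5) as conn:
    conn.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
    print(conn.recv(1024).decode("latin-1", errors="replace"))
```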
As one technical manual puts it: “IP uses an anarchic and highly distributed model, with every device being an equal peer to every other device on the global Internet.”11 (That a technical manual glowingly uses the term “anarchic” is but one symptom of today’s strange new world!)
Emblematic of the second machinic technology, the one that focuses control into rigidly defined hierarchies, is the DNS. DNS is a large decentralized database that maps network names to network addresses. This mapping is required for nearly every network transaction. For example, in order to visit “www.rhizome.org” on the Internet one’s computer must first translate the name “www.rhizome.org,” itself geographically vague, into a specific address on the physical network. These specific addresses are called IP addresses and are written as a series of four numbers like so: 206.252.131.211.
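This translation step is visible in a single call. The sketch below asks the operating system's resolver for the address behind a name; assuming the name still resolves today, the numeric answer will generally differ from the 2004-era address quoted above, since IP assignments change over time.

```python
import socket

# Resolve a domain name to its current IP address via the OS resolver.
# The result is a dotted-quad string like the one quoted in the text.
print(socket.gethostbyname("www.rhizome.org"))
```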
All DNS information is controlled in a hierarchical, inverted-tree structure. Ironically, then, nearly all Web traffic must submit to a hierarchical structure (DNS) to gain access to the anarchic and radically horizontal structure of the Internet. As I demonstrate later, this contradictory logic is rampant throughout the apparatus of protocol.
The process of converting domain names to IP addresses is called resolution. At the top of this inverted tree are a handful of so-called “root” servers holding ultimate control and delegating lesser control to lower branches in the hierarchy. There are over a dozen root servers located around the world in places like Japan and Europe, as well as in several U.S. locations.
To follow the branches of control, one must parse the address in reverse, starting with the top-level domain, in this case “org.” First, the root server receives a request from the user and directs the user to another machine that has authority over the “org” domain, which in turn directs the user to another machine that has authority over the “rhizome” subsection, which in turn returns the IP address for the specific machine known as “www.” Only the computer at the end of the branch knows about its immediate neighborhood, and thus it is the only machine with authoritative DNS information. In other words, resolution happens like this: A new branch of the tree is followed at each successive segment, allowing the user to find the authoritative DNS source machine and thus to derive the IP address from the domain name. Once the IP address is known, the network transaction can proceed normally.
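This delegation logic can be modeled in a few lines. The following Python sketch is a toy, not real DNS data: each “zone” knows only its immediate children, and resolution walks the name right to left, mirroring the inverted tree described above.

```python
# A toy model of the inverted tree. Each server knows only its immediate
# delegations; authority flows downward, branch by branch.
ZONES = {
    ".": {"org": "org-server"},
    "org-server": {"rhizome": "rhizome-server"},
    "rhizome-server": {"www": "206.252.131.211"},
}

def resolve(name: str) -> str:
    """Follow delegations from the root, reading the name in reverse."""
    server = "."
    for label in reversed(name.split(".")):
        server = ZONES[server][label]  # each hop delegates to a lower branch
    return server  # the final hop is the authoritative answer: an IP address

print(resolve("www.rhizome.org"))  # -> "206.252.131.211"
```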
Because the DNS system is structured like an inverted tree, each branch of the tree holds absolute control over everything below it. For example, in the winter of 1999, a lawsuit was brought against the Swiss art group Etoy. Even though the basis of the lawsuit was questionable and was later dropped, the courts would have been able to “turn off” the artists’ Web site during the course of the trial by simply removing DNS support for “etoy.com.” (Instead the artists were forced to pull the plug themselves until after the trial was over.) A similar incident happened at The Thing, an Internet service provider based in New York that was hosting some of Etoy’s agitprop. After some of this material was deemed politically questionable by the Federal Bureau of Investigation, the whole server was yanked off the Internet by the telecommunications company that happened to be immediately upstream from the provider. The Thing had no recourse but to comply with this hierarchical system of control.
The inventor of the World Wide Web, Tim Berners-Lee, describes the
DNS system as the “one centralized Achilles’ heel by which [the Web] can
all be brought down or controlled.”12
If hypothetically some controlling authority wished to ban China from the Internet (e.g., during an outbreak of hostilities), it could do so very easily through a simple modification of the information contained in the root servers at the top of the inverted tree. Within twenty-four hours, China would vanish from the Internet.
As DNS renegade and Name.Space founder Paul Garrin writes: “With the
stroke of a delete key, whole countries can be blacked out from the rest of
the net. With the “.” [root file] centralized, this is easily done. . . . Control
the “.” and you control access.”13 Since the root servers are at the top, they
have ultimate control over the existence (but not necessarily the content) of
each lesser branch. Without the foundational support of the root servers, all
lesser branches of the DNS network become unusable. Such a reality should
shatter our image of the Internet as a vast, uncontrollable meshwork.
Any networked relation will have multiple, nested protocols. To steal an insight from Marshall McLuhan, the content of every new protocol is always another protocol. Take, for example, a typical transaction on the World Wide Web. A Web page containing text and graphics (themselves protocological artifacts) is marked up in the HTML protocol. The protocol known as Hypertext Transfer Protocol (HTTP) encapsulates this HTML object and allows it to be served by an Internet host. However, both client and host must abide by the TCP protocol to ensure that the HTTP object arrives in one piece. Finally, TCP is itself nested within the Internet Protocol, a protocol that is in charge of actually moving data packets from one machine to another. Ultimately the entire bundle (the primary data object encapsulated within each successive protocol) is transported according to the rules of the only “privileged” protocol, that of the physical media itself (fiber-optic cables, telephone lines, air waves, etc.). The flexible networks and flows identified in the world economy by Manuel Castells and other anchormen of the Third Machine Age are not mere metaphors; they are in fact built directly into the technical specifications of network protocols. By design, protocols such as the Internet Protocol cannot be centralized.
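The nesting can be made explicit as successive byte-wrapping. In the sketch below the headers are toy labels rather than real wire formats, but the order of encapsulation follows the description above: HTML inside HTTP inside TCP inside IP inside the physical frame.

```python
# Each protocol wraps the layer above it with its own header.
html  = b"<html><body>Hello</body></html>"   # the primary data object
http  = b"HTTP/1.1 200 OK\r\n\r\n" + html    # HTTP serves the HTML object
tcp   = b"[TCP header]" + http               # TCP ensures it arrives in one piece
ip    = b"[IP header]" + tcp                 # IP moves the packet machine to machine
frame = b"[link-layer header]" + ip          # the physical medium carries the bundle

# Unwrapping reverses the nesting, one protocol at a time; the innermost
# content survives intact through every layer.
assert frame.endswith(html)
```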
Protocol’s native landscape is the distributed network. Following Deleuze, I consider the distributed network to be an important diagram for our current social formation. Deleuze defines the diagram as “a map, a cartography that is coextensive with the whole social field.”14 The distributed network is such a map, for it extends deeply into the social field of the new millennium. (I explore this point in greater detail in chapter 1.)
A distributed network differs from other networks such as centralized and decentralized networks in the arrangement of its internal structure. A centralized network consists of a single central power point (a host), to which radial nodes are attached. The central point is connected to all of the satellite nodes, which are themselves connected only to the central host. A decentralized network, on the other hand, has multiple central hosts, each with its own set of satellite nodes. A satellite node may have connectivity with one or more hosts, but not with other nodes. Communication generally travels unidirectionally within both centralized and decentralized networks: from the central trunks to the radial leaves.
The distributed network is an entirely different matter. Distributed networks are native to Deleuze’s control societies. Each point in a distributed network is neither a central hub nor a satellite node—there are neither trunks nor leaves. The network contains nothing but “intelligent end-point systems that are self-deterministic, allowing each end-point system to communicate with any host it chooses.”15 Like the rhizome, each node in a distributed network may establish direct communication with another node, without having to appeal to a hierarchical intermediary. Yet in order to initiate communication, the two nodes must speak the same language. This is why protocol is important. Shared protocols are what define the landscape of the network—who is connected to whom.
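The three diagrams can be written down as adjacency structures. In the sketch below the node names are arbitrary; what differs among the three is simply the shape of who may talk to whom directly.

```python
# Centralized: one hub, satellites connect only to it.
centralized = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}

# Decentralized: several hubs, each with its own satellites.
decentralized = {
    "hub1": ["a", "b", "hub2"], "hub2": ["c", "d", "hub1"],
    "a": ["hub1"], "b": ["hub1"], "c": ["hub2"], "d": ["hub2"],
}

# Distributed: neither trunks nor leaves; every node may reach every other.
distributed = {n: [m for m in "abcd" if m != n] for n in "abcd"}

def can_talk_directly(net, x, y):
    return y in net.get(x, [])

print(can_talk_directly(centralized, "a", "b"))  # False: must appeal to the hub
print(can_talk_directly(distributed, "a", "b"))  # True: direct peer connection
```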
As architect Branden Hookway writes: “[d]istributed systems require for their operation a homogenous standard of interconnectivity.”16 Compatible protocols lead to network articulation, while incompatible protocols lead to network disarticulation. For example, two computers running the DNS addressing protocol will be able to communicate effectively with each other about network addresses. Sharing the DNS protocol allows them to be networked. However, the same computers will not be able to communicate with foreign devices running, for example, the NIS addressing protocol or the WINS protocol.17 Without a shared protocol, there is no network.
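Disarticulation can be demonstrated with two invented framings of the same message: a node parsing with the wrong protocol receives bytes, but not meaning. Both “protocols” here are hypothetical stand-ins, far simpler than DNS, NIS, or WINS.

```python
def frame_newline(msg: str) -> bytes:           # protocol A: newline-delimited
    return msg.encode() + b"\n"

def parse_newline(data: bytes) -> str:
    return data.rstrip(b"\n").decode()

def parse_length_prefixed(data: bytes) -> str:  # protocol B: length-prefixed
    length = data[0]
    return data[1 : 1 + length].decode()

wire = frame_newline("hello, peer")
print(parse_newline(wire))           # shared protocol: the network articulates
print(parse_length_prefixed(wire))   # mismatched protocol: garbled output
```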