Internet: Towards a Holistic Ontology
Clockware or Swarmware?
This chapter presents the arguments of a number of figures who dispute suggestions that the Net is best left to take care of itself, or that a laissez-faire approach aimed at encouraging market competition is the best way to foster the development of the Net. Metcalfe (see Chapter 2), for one, sees limited value in the biological metaphor, preferring to conceptualize the Net in the image of clockwork predictability and controllability, and to place his faith in the Newtonian paradigm in which all machinery is thought of as being logical and overdetermined by human agency. A similar debate arises in the area of electronic commerce (e-commerce), where Rothschild's laissez-faire argument is opposed by evidence of market failure according to Newman and Schneiderman. Following the rhizomorphic argument towards the end of Chapter 1, I will now outline a compromise between these positions: the Net as "clockware" and the Net as "swarmware", to borrow terms from Kelly (1994: 2-f).
The Net as clockwork
Metcalfe, whose credentials include the invention of Ethernet technology and contributions to the ARPANET project, has expressed his scepticism towards the biological metaphor of the Net:
[We have heard] the notion that the Internet is alive and how much it resembles biological creatures or that the anarchy (the notion that nobody's in charge of the Internet) is ... one of its lovely features. And these two ideas are all the current reigning intelligentsia of the Internet believe. ... I think what we need to do is to convince them that the Internet is actually a network of computers and that it's not alive and it's best it not be anarchistic, that it needs to be better managed. ...
I've read Kevin Kelly's book ... Out of Control and ... that's exactly the mindset which I think is inappropriate for the successful management of the Internet. (1996: transcript of interview)
Metcalfe's position exemplifies the Newtonian world view which embraces "[s]implicity, determinism and predictability" as well as linear causality (Marshall & Zohar, 1997: xx). Newton's first law of motion declares that a body maintains its state of rest or uniform motion until acted upon by an external force. In the context of the Net, this linear causality translates into an uncomplicated relationship linking management to operation. The predictability and simplicity components of this notion empower human agency with direct control of cybernetic systems, regardless of scale or complexity. According to Marshall and Zohar, the eighteenth-century physicist Laplace went so far as to state that "[i]f the exact state of the world were known at any moment, ... all past and future states could be exactly calculated" (1997: 84).
This Newtonian world view has been contested by Rothschild who argues that the management of such complex systems as economies may be better served by an understanding of self-regulating market mechanisms than by a paradigm of central control (1990). It seems that when the Clinton administration declared cyberspace a free-trade zone in A Framework for Global Electronic Commerce (Clinton & Gore, 1997), this policy was based on the underlying assumption that market competition and privatization provided the best model for e-commerce.25 However, this assumption is disputed by Newman and Schneiderman, who both point out the failures of the free market when it comes to managing the Net.
Newman criticizes the Framework's faith in market competition and privatization as being misguided on more than one count (1997a). First, he argues that the vast increase in Internet participation between 1992 and 1995 (during which the Net was handed over to private management), while attributed by ISPs to the triumph of free-market competition, was in fact made possible by government-regulated protection of ISPs from market pricing:
The ISPs along with AT&T, Apple Computer, Netscape, Microsoft, Compaq Computer, IBM, and a host of other computer companies demanded and won continued FCC [Federal Communications Commission] intervention to prevent market pricing on local telephone company services used by ISPs to reach their customers in the first place. Since the initial breakup of AT&T back in 1983, the FCC has exempted Internet providers from paying the same kind of per-minute access charges to local phone companies that long distance companies have to pay to connect their customers. This has allowed Internet providers to pay the flat business rate to local phone companies that ordinary local business customers pay which in turn has allowed them to offer flat-rate service for the Internet to their customers. (1997a)26
So strictly speaking, the apparent market competition is more regulated than free; or to be more precise, the limited competition within the Internet service market is enabled by regulatory protection from external competition.
Second, Newman argues that this regulatory protection benefits the Internet service industry at the expense of low-income, non-Internet telephone users, and makes "comprehensive investments for upgrading the overall system nearly impossible" (1997a). In effect, as Newman puts it, "the profits of the private Internet industry have derived substantially from the cannibalization of past and present investments in the local phone infrastructure" (1997a).
Newman concludes that the exaggeration of the role of the free market reflects the hypocrisy of parties that stand to gain most from government subsidies aimed at promoting market competition (1997a). Based on current trends, he predicts that profit-driven ISPs will rapidly drop "low-profit customers" in favour of servicing "higher-income and business users" as soon as "subsidies from the local phone companies" run dry (1997a). Newman also makes the observation that "competition on day-to-day prices undermines long-term investments in infrastructure that have historically been served better by regulated monopoly" (1997a). This single observation, I believe, touches the essence of his entire argument: that compared with regulated monopoly, market mechanisms are poor at managing common utilities, among which one could include the Net.
Schneiderman makes a similar but more general argument against "the myth of Government Bad, Free Enterprise Good" adopted in the Framework. Like Newman, Schneiderman finds the Framework's underlying assumption that "the private sector should lead" to be mistaken. In particular, he singles out Clinton and Gore's assumption that "innovation, expanded services, broader participation, and lower prices will arise in a market-driven arena, not in an environment that operates as a regulated industry" (1997). Schneiderman points out that, historically, the Net was not a private sector initiative but a government one, so "it's hard to see the Net's history as evidence that the private sector must lead" (1997).27
Nevertheless, Schneiderman makes a distinction between initial genesis and subsequent development: just because free enterprise played no part in the birth of the Net does not disqualify it from taking the Net further:
Even if the government did a great job creating the Internet, maybe it's time to turn it over to the free market. The Internet is moving so fast that government bureaucrats may not be able to keep up. That's what the Clinton Administration thinks. According to the Executive Summary, "The Internet should develop as a market driven arena not a regulated industry." (1997)
But Schneiderman sees this statement contradicted by another:
The report also says that "governments must adopt a non-regulatory, market-oriented approach to electronic commerce, one that facilitates the emergence of a transparent and predictable legal environment to support global business and commerce." In other words, the government should butt out, except for one, little, minor task. It must create a vast new infrastructure to make Net commerce work. (1997)
To Schneiderman, this exception represents a fundamental contradiction, one which cannot be resolved through a compromise between regulation and deregulation, as the Framework has attempted to do through regulated competition. He holds that only the government can build the basic elements of Net infrastructure: "intellectual property rights, patents, and copyright protection", "online equivalents of money, signatures (for signing contracts)" and the legal framework to enforce these (1997). The Framework's call to "establish a predictable and simple legal environment based on a decentralized, contractual model of law rather than one based on top-down regulation" (Clinton & Gore, 1997) would only create wasteful bureaucracy or, as Schneiderman puts it, not "less government", but "more lawyers" (Schneiderman, 1997). Furthermore, he continues, the labelling of this regulated competition as a "genuine market" would result in confusion and legal debates; and "many serious Net problems" would arise from companies hastening to implement "new features and create new toys without knowing whether they work well or are even useful" (1997). Schneiderman prefers a more deliberate approach, directed from the regulatory position of government, as against an anarchistic turning over of control to ungovernable forces.
Finally, in disputing the "myth ... that market competition is good for everybody", Schneiderman asserts that free enterprise does not necessarily work in the best interests of the general public, particularly in the areas of privacy, equality and government (1997). A conflict of interests arises when the Framework delegates the task of protecting consumer privacy to the market, as Andrew Leonard points out:
The desire for online privacy runs directly at odds with one of the most attractive aspects of doing business online: the Net's capacity for helping target marketing and advertising efforts directly at specific users. (Quoted in Schneiderman, 1997)
Schneiderman also suggests that increased competition would cause corporations to discriminate against customers who do not generate profitable business, and expresses fears that declaring the Net a "tariff-free zone" would encourage a migration of businesses to cyberspace, thereby penalizing people who cannot afford Internet access and choking off government revenue from sales tax (1997).28 As Newman indicates, the possible loss of revenue due to untaxed sales represents a significant threat to the budgets of local and state governments, "including, ironically, those of Silicon Valley where the computer technologies fueling Internet commerce were created" (1997b).29 In short, Schneiderman presents the case that market competition is poor at managing what Rothschild terms "commons", a collectivistic task traditionally belonging to government.
The role of collectivistic commons
The arguments of Newman and Schneiderman are in stark contrast to Rothschild's criticism of centralized control in Marxism, which "failed because its core elements violate processes essential to the functioning of all living, evolving systems" (1990: 107). These "core elements" (competition, free-market pricing and the pursuit of self-interest) are necessary, he argues, to avoid "stagnation, waste, and bureaucracy" as well as the "tragedy of the commons", in which an individual finds more sense in taking advantage of group resources than in letting the group profit at his or her expense, with the ultimate result that "the community exploits itself" (1990: 109, 112-3). However, bearing in mind that his biological analogy is no more than a metaphor, Rothschild acknowledges that humans "are different from all other creatures" because "we are socially conscious", and that our sense of community demands the existence of a cooperative commons (1990: 114). Hence the need for mechanisms to manage this commons: the agency of government. But instead of channelling all resources through the commons, as advocated by Marxism, Rothschild proposes a more limited role:
The issue is not whether a capitalist economy ought to have a commons, but rather what portion of an economy's output should be distributed through its commons. If the commons takes shape as a "social safety net," precisely how high should that net be? How can acknowledged community needs be met without creating unnecessary commons problems? If creating a commons is the only feasible way to cope with a particular social need, what techniques can be borrowed from the free market to manage the commons as efficiently as possible? Each nation ... will come to somewhat different conclusions on these questions. There are no absolute answers. (1990: 114)
Rothschild's remedy for the tragedy of the commons, which occurred on a grand scale in Marxist economies, is a dose of market competition; but from the perspectives of Newman and Schneiderman, allowing market competition to manage essential commons can engender its own tragedy: what we might call the tragedy of capitalism.30
Schneiderman cautions that the risk of this tragedy of capitalism (waste, profit-driven discrimination and the collapse of essential commons) must be weighed against the potential benefits of unregulated capitalism:
Economic markets are a wonder to behold. Like natural ecosystems, they can produce marvels that are hard to imagine occurring any other way. And like nature, ultimately they resist our control. Even with the best of intentions, clumsy attempts to nurture or direct economic markets can turn around and bite us. The experience of Europe's ham-handed attempts to force the creation of a European computer industry was not so different from the experience of people who live in flood plains, who learn the hard way that Mother Nature respects no engineer.
But at the same time, we need to be careful that in respecting the power of markets we don't blind ourselves to the crucial role played by our government. Because when we do turn a blind eye, we stop debating an important question: who benefits? (1997)
Kelly, for his part, has not been blind to the tragedy of capitalism either: he points out that loss of complete control and "wasteful inefficiencies" are the tradeoffs for "adaptability", "flexibility" and reliability (1994: 2-f). He lists the drawbacks of neo-biological systems, or what he calls "the swarm model", as follows (1994: 2-f):
"Nonoptimal". Due to the absence of centralized authority, "swarm systems" are burdened by redundancy and "duplication of effort".
"Noncontrollable". Control can never be absolute; a "swarm system" may only be steered "by applying force at crucial leverage points, and by subverting the natural tendencies of the system to new ends".
"Nonpredictable". A swarm network's "complexity" can make it dangerously unpredictable.
"Nonunderstandable". Because they do not work on linear logic but "lateral or horizontal causality", we may not be able to fully understand swarm networks, although that does not have to stop us from making use of them.
"Nonimmediate". Complexity takes time to build. "Each hierarchical layer has to settle down", and "organic complexity will entail organic time".
Based on the pros and cons of both models, Kelly makes these recommendations:
- For jobs where supreme control is demanded, good old clockware is the way to go.
- Where supreme adaptability is required, out-of-control swarmware is what you want. (1994: 2-f)
It follows that we would then have to decide which model we want for the Net. Kelly allows for compromise: "Most tasks will balance some control for some adaptability, and so the apparatus that best does the job will be some cyborgian hybrid of part clock, part swarm" (1994: 2-f).
Hybrid of chaos and design
I want to argue that the Net is just such a hybrid: part human and part machine, part engineering and part anarchism, treading the balance between the tragedies of capitalism and the commons. The Net could not be otherwise, because human intervention is half the equation. The genesis of the Net could not have been possible without institutional commons, and the pattern is repeating itself with Internet2, the project to create the next generation of the Net. Heather Boyles, the project's chief-of-staff, explains that once again financial support has to come from the government: "The federal government brings some essential seed money where commercial interests (necessarily driven by shareholder responsibilities) don't yet see a proven concept but do see too much of a risk to make large investments alone" (quoted in Jones, 1997). And as long as the Net is to continue operating as a common utility, governments will continue to intervene with regulation. The only form of 'market competition' to have entered the Net, as observed above by Newman, is the regulated variety, because true market anarchism, in the sense of Kelly's "swarmware", would take the much longer "organic time" to emerge (1994: 2-f). Nevertheless, as we have seen earlier, a degree of anarchistic autonomy forms the other half of the equation. It is interesting to note that redundancy, while considered economically wasteful and inefficient under most circumstances, actually increases the Net's functionality and performance; so the Net must reflect aspects of the "swarmware" model.
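The point that redundancy improves rather than degrades the Net can be made concrete with a little probability. The sketch below is purely illustrative (the per-path failure rate of 0.2 is an assumed figure, not a property of any real network): if each of k independent paths between two hosts fails with probability 0.2, the chance that at least one path survives grows rapidly with k.

```python
# Illustrative sketch: redundant, independently failing paths raise the
# probability that a message between two nodes gets through.
# The failure probability 0.2 per path is an assumption for illustration.

def delivery_probability(num_paths: int, path_failure: float = 0.2) -> float:
    """Probability that at least one of `num_paths` independent paths works."""
    # All paths must fail for delivery to fail; independence is assumed.
    return 1 - path_failure ** num_paths

for k in (1, 2, 3, 4):
    print(f"{k} redundant path(s): {delivery_probability(k):.4f}")
# 1 path: 0.8000; 4 paths: 0.9984
```

The "wasteful" duplication Kelly describes is thus exactly what buys the system its reliability: each extra path contributes diminishing but strictly positive returns.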
Towards the end of Chapter 1, we came to the conclusion that the Net has both rhizomatic and arborescent tendencies. Ideally, a rhizomatic Net would fulfil Baran's vision of a high level of redundancy (see Chapter 1), and this would correspond to the distributed network topology in Figure 1 below. A tree structure would look like the centralized topology, while telephone networks probably take after the decentralized model. The Net in reality probably looks more like a hybrid between the decentralized and distributed structures (Figures 2 and 3).31
Figure 1: Three types of network topology (Hafner & Lyon, 1996: 59)
Figure 2: Decentralized-distributed hybrid network
Figure 3: The NSFNET T1 Backbone and Regional Networks (Dery, 1994: 4)
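The robustness difference between Baran's topologies can be sketched in code. In this toy comparison (the six-node networks and their wiring are invented for illustration, not drawn from the figures above), removing the best-connected node shatters a centralized star but leaves a distributed ring-with-chords connected.

```python
# Toy comparison of centralized vs distributed topologies: remove the
# highest-degree node and test whether the survivors stay connected.
# Both six-node networks are hypothetical examples.
from collections import deque

def reachable(adj, start):
    """Set of nodes reachable from `start` via breadth-first search."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def still_connected_without_hub(adj):
    """Drop the best-connected node; is the remaining network one piece?"""
    hub = max(adj, key=lambda n: len(adj[n]))
    rest = {n: [m for m in nbrs if m != hub]
            for n, nbrs in adj.items() if n != hub}
    return reachable(rest, next(iter(rest))) == set(rest)

# Centralized: every node wired only to a single hub (node 0).
star = {0: [1, 2, 3, 4, 5]}
for leaf in range(1, 6):
    star[leaf] = [0]

# Distributed: a ring of six nodes plus two redundant cross-links.
ring = {n: [(n - 1) % 6, (n + 1) % 6] for n in range(6)}
ring[0] += [3]; ring[3] += [0]
ring[1] += [4]; ring[4] += [1]

print("star survives hub loss:", still_connected_without_hub(star))  # False
print("ring survives hub loss:", still_connected_without_hub(ring))  # True
```

This is the engineering content of Baran's argument: the distributed structure has no single leverage point whose loss partitions the network, which is precisely what makes it both resilient and hard to control centrally.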
The Net is also a strange new hybrid in evolutionary terms. Marshall and Zohar take a critical look at Lovelock's Gaia Hypothesis:
Certainly the planetary system ... is stabilized by many feedback loops, as is the physiology of a single organism. But the comparison cannot be pushed too far. Organisms have evolved by competition and natural selection; Gaia as a whole cannot have done so. Gaia may have many possible stable or metastable states, including the present one and the one before oxygen was abundant. (1997: 164)
Like Gaia, the Net is a planetary system physically composed of many machinic 'individuals', 'species' and 'environments' with circulating feedback loops; but unlike Gaia, it is not a closed system. Its genesis and structure are locked in a coevolutionary relationship with the human species: software, hardware and 'liveware' must be understood as one holistic system coevolving along a temporal axis.
Furthermore, the Net may well be a double fulfilment of Lamarckian evolution, which is based on "the inheritance of acquired traits" and which would theoretically occur at a much faster rate than Darwinian evolution based on natural selection (Kelly, 1994: 15-g). On the one hand, physiological innovations in terms of hardware and software are 'genetically' incorporated into the Net (for instance, one new design may be adopted across the board without the overheads of creating many trial innovations to 'fight it out'). On the other hand, changes in coevolutionary configurations (aspects of the broadly defined human-computer interface) may be absorbed by the symbiosis or, to borrow a word from McHoul, the "cyberbeing" (1997). This Lamarckianism can only emerge in the duality of human agency and technological self-organization. Kelly suggests that this form of evolution is the result of what he calls the "evolution of evolution", or "deep evolution" (1994: 18-d).
A quantum theory approach
Above, I have argued that the Net is a hybrid of the "clockware" and "swarmware" models. A simple way of resolving these seemingly contradictory tendencies would be to say that the Net balances chaotic anarchism with engineered design. However, a relational holism borrowed from quantum theory might be more useful in understanding the ontological status of the Net: how it is subject to both human agency and self-organization.
According to Marshall and Zohar, quantum theory breaks away from the "either/or" dualism of Newtonian philosophy and adopts a "both/and" duality (1997: 384-7); so the paradoxical nature of light is explained by a theory of wave/particle duality:
Unquestionably, light can behave like a stream of particles. There are other times when it behaves like a series of waves (interference and diffraction, for example). This paradox led to confusion until the 1920s, when it was shown to make mathematical, if not common, sense to say that light sometimes behaves like a particle, and sometimes like a wave. Light itself cannot be said to be either; it must instead be seen as a potentiality to be both at the same time, depending upon the circumstances and experimental surroundings in which it finds itself. (1997: 386)
Common sense dictates that when an individual, for instance, is subject to more than one separate authority, and circumstances arise in which they come into conflict, this conflict must somehow be resolved before one outcome is reached. Going by the philosophy of quantum theory, however, the focus is not on actuality but potentiality: just as Schrödinger's cat is both alive and dead in a real sense before the box is opened, so too the Net is simultaneously determined by the seemingly contradictory agencies of human management and self-organization, until a frame of reference is brought to bear. Contradiction only appears if we assume that there is only one frame of reference from which to observe the Net: that determination must flow in one direction only.
From the reductionist frame of reference, human engineering (whether physical, social or economic) is the only agency responsible for any conscious determination, since, strictly speaking, the Net is not a sentient entity, and, materially speaking, is wholly a product of human engineering. This perspective hinges on the "sharp divide between observer and observed in mechanistic science": the former exists outside the context of the latter (Marshall & Zohar, 1997: xxiv). Marshall and Zohar note that "contextualism" is fundamental to an understanding of quantum physics (1997: 112-3). From this holistic point of view, entirely new "emergent properties" are produced in synergistic relationships, properties which are not apparent just by observing the components (Kelly, 1994: 2-c; Marshall & Zohar, 1997: 137-9).32 Marshall and Zohar attest that in quantum physics, emergence is extended far beyond properties: "the very identity (the being, qualities, and characteristics) of constituents depends upon their relation to others" (1997: 298; emphasis added). Ontologically speaking, McHoul's gui cyberbeing could be understood as being a quantum entity emerging from the human-computer interface, whose being literally requires the context of humans and machines.33
As such, it is conceivable that the Net as quantum cyberbeing should not only inherit the properties of both parents (human agency on the one hand, and emergent self-organization on the other), for all the ambiguities and contradictions that this entails, but also have its own emergent properties, which cannot be inherited from either parent. What these emergent properties are, I shall not speculate here, but their potentialities are there, and like the links ("promises") in Miles' Hyperweb (1996), we may never know what they are until they are performed.