I recently attended an event where the guest speaker was a cabinet member. In conversation afterwards, the subject of long term petroleum supplies came up. He warned that at some point, perhaps a century or so in the future, someone would put his key in his car's ignition, turn it, and nothing would happen–because there would be no more gasoline.
What shocked me was not his ignorance of the economics of depletable resources–if we ever run out of gasoline it will be a long slow process of steadily rising prices, not a sudden surprise–but the astonishing conservatism of his view of the future. It was as if a similar official, a hundred years earlier, had warned that sometime around the year 2000 we were going to open the door of the carriage house only to find that the horses had starved to death for want of hay. I do not know what the world will be like a century hence. But it is not likely to be a place where the process of getting from here to there begins by putting a key in an ignition, turning it, and starting an internal combustion engine burning gasoline.
This book is about technological change, its consequences and how to deal with them. In this chapter I briefly survey the technologies. In the next I discuss how to adjust our lives and institutions to their consequences.
I am not a prophet; any one of the technologies I discuss may turn out to be a wet firecracker. It only takes one that does not to remake the world. Looking at some candidates will make us a little better prepared if one of those revolutions happens. Perhaps more important, after we have thought about how to adapt to any of ten possible revolutions, we will at least have a head start when the eleventh drops on us out of the blue.
Much of the book grew out of a seminar I teach at the law school of Santa Clara University. Each Thursday we discuss a technology that I am willing to argue, at least for a week, will revolutionize the world. On Sunday students email me legal issues that revolution will raise, to be put on the class web page for other students to read. Tuesday we discuss the issues and how to deal with them. Next Thursday a new technology and a new revolution. Nanotech has just turned the world into gray goo; it must be March.
Since the book was conceived in a law school, many of my examples deal with the problem of adapting legal institutions to new technology. But that is accident, not essence. The technologies that require changes in our legal rules will affect not only law but marriage, parenting, political institutions, businesses, life, death and much else.
Public key encryption makes possible untraceable communications intelligible only to the intended recipient. My digital signature demonstrates that I am the same online persona you dealt with yesterday and your colleague dealt with last year, with no need for either of you to know such irrelevant details as age, sex, or what continent I am living on. The combination of computer networking and public key encryption makes possible a level of privacy humans have never known, an online world where people have both identity and anonymity–simultaneously. One implication is free speech protected by the laws of mathematics, arguably more reliable and certainly with broader jurisdiction than the Supreme Court. Another is the possibility of criminal enterprises with brand name reputation–online pirate archives selling other people's intellectual property for a penny on the dollar, temp agencies renting out the services of forgers and hit men.
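The digital-signature idea in the previous paragraph can be sketched in a few lines of code. What follows is a toy RSA signature scheme with deliberately tiny primes, for illustration only; real systems use keys of 2048 bits or more and padded hashes, and the particular primes and messages here are my own invented examples, not anything from the text.

```python
import hashlib

# Toy RSA digital signature -- tiny primes, illustration only.
p, q = 1000003, 1000033             # private primes
n = p * q                           # public modulus
e = 65537                           # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private signing exponent

def digest(message: str) -> int:
    """Hash the message and reduce it into the modulus range."""
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message: str) -> int:
    return pow(digest(message), d, n)       # only the key holder can compute this

def verify(message: str, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)   # anyone with (n, e) can check

sig = sign("I am the same persona you dealt with yesterday.")
print(verify("I am the same persona you dealt with yesterday.", sig))  # True
print(verify("A forged message.", sig))                                # False
```

The point of the exercise: the signature proves continuity of the persona–the same private key signed yesterday's message and today's–while revealing nothing about who holds the key.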
In the not too distant future you may be able to buy an inexpensive video camera with the size and aerodynamic characteristics of a mosquito. Even earlier, we will see–are already seeing–the proliferation of cameras on lamp posts designed to deter crime. Ultimately this could lead to a society where nothing is private. Science fiction writer David Brin has argued that the best solution available will be not privacy but universal transparency–a world where everyone can watch everyone else. The police are watching you–but someone is watching them.
It used to be that a city was more private than a village, not because nobody could see what you were doing but because nobody could keep track of what everybody was doing. That sort of privacy cannot survive modern data processing. The computer on which I am writing these words has sufficient storage capacity to hold at least a modest amount of information about every human being in the U.S. and enough processing power to quickly locate any one of those by name or characteristics. From that fact arises the issue of who has what rights with regard to information about me presently in the hands, and minds, of other people.
Put all of these technologies together and we may end up with a world where your realspace identity is entirely public, with everything about you known and readily accessible, while your cyberspace activities, and information about them, are entirely private–with you in control of the link between your cyberspace persona and your realspace identity.
The world that encryption and networking creates requires a way of making payments–ideally without having to reveal the identity of payer or payee. The solution, already worked out in theory but not yet fully implemented, is ecash–electronic money, privately produced, potentially untraceable. One minor implication is that money laundering laws become unenforceable, since large sums can be transferred by simply sending the recipient an email.
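The theory behind untraceable ecash is David Chaum's blind signature: the bank signs a digital note without seeing its contents, so the note it later redeems cannot be linked to the withdrawal that created it. A minimal sketch using toy RSA parameters–the key sizes and the serial number are hypothetical, chosen only to make the arithmetic visible:

```python
import hashlib
import secrets
from math import gcd

# Toy Chaum-style RSA blind signature -- tiny primes, illustration only.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))       # the bank's private exponent

# The payer prepares a note the bank must never see in the clear.
note = int.from_bytes(hashlib.sha256(b"serial 42, value $1").digest(), "big") % n

# Blind it with a random factor r before sending it to the bank.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (note * pow(r, e, n)) % n

blind_sig = pow(blinded, d, n)           # the bank signs blindly
sig = (blind_sig * pow(r, -1, n)) % n    # the payer strips the blinding factor

print(pow(sig, e, n) == note)            # True: a valid bank signature on the note
```

Because the bank saw only the blinded value, it cannot later connect the signed note it redeems to the customer who withdrew it–which is precisely what makes the payment untraceable.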
A world of strong privacy requires some way of enforcing agreements; how do you sue someone for breach of contract when you have no idea who, what or where (s)he is? That and related problems lead us to a legal technology in which legal rules are privately created and enforced by reputational sanctions. It is an ancient technology, going back at least to the privately enforced Lex Mercatoria from which modern commercial law evolved. But for most modern readers, including most lawyers and law professors, it will be new.
Property online is largely intellectual property, which raises the problem of how to protect it in a world where copyright law is becoming unenforceable. One possibility is to substitute technological for legal protection. A song or database comes inside a piece of software–Intertrust calls it a digibox–that regulates its use. To play the song or query the database costs ten cents of ecash, instantly transmitted over the net to the copyright owner.
Finally and perhaps most radically, a world of fast, cheap, communication greatly facilitates decentralized approaches to production. One possible result is to shift substantial amounts of human effort out of the context of hierarchically organized corporations into some mix of marketplace coordination of individuals or small firms and the sort of voluntary cooperation, without explicit markets, of which open source software development is a recent and striking example.
Some technologies make the job of law enforcement harder. Others make it easier–even too easy. A few years ago, when the FBI was pushing the digital wiretap bill through Congress, critics pointed out that the capacity they were demanding the phone companies provide them added up to the ability to tap more than a million telephones–simultaneously.
We still do not know whether they intend to do it, but it is becoming increasingly clear that if they want to, they can. The major cost of a wiretap is the labor of the human who listens. As software designed to let people dictate to their computers gets better, that human can be a computer converting conversation to text, searching the text for key words or phrases, and reporting the occasional hit to a human being. Computers work cheap.
In addition to providing police new tools for enforcing the law, computers also raise numerous problems for both defining and preventing crimes. Consider the question of how the law should classify a "computer break-in"–which consists, not of anyone actually breaking into anything, but of one computer sending messages to another and getting messages in reply. Or consider the potential for applying the classical salami technique–stealing a very small amount of money from each of a very large number of people–in a world where tens of millions of people linked to the internet have software on their computers designed to pay bills online.
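The salami technique mentioned above is easy to sketch: shave the sub-cent remainder off each of many routine payments and pool the shavings. A toy simulation, with hypothetical account balances and interest rate chosen so each payment leaves a fraction of a cent on the table:

```python
from decimal import Decimal

# Toy "salami" skim: pocket the sub-cent remainder of each interest
# payment across many accounts. Balances and rate are hypothetical.
accounts = {f"acct{i}": Decimal("1234.56") for i in range(100_000)}
rate = Decimal("0.0137")          # interest rate that yields fractional cents
skimmed = Decimal("0")

for balance in accounts.values():
    interest = balance * rate                               # 16.913472
    paid = interest.quantize(Decimal("0.01"), rounding="ROUND_DOWN")
    skimmed += interest - paid                              # 0.003472 each

print(skimmed)   # 347.200000 -- about $347, a third of a cent at a time
```

No single victim loses enough to notice, let alone to sue–which is what makes the technique attractive, and what makes it a problem for legal systems built around individually injured plaintiffs.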
The technologies in our next cluster are biological. Two–paternity testing and in vitro fertilization–have already abolished several of the facts on which the past thousand years of family law are based. It is no longer only a wise child who knows his father–any child can do it, given access to tissue samples and a decent lab. And it is no longer the case that the woman from whose body an infant is born is necessarily its mother. The law has begun to adjust. One interesting question that remains is to what degree we will restructure our mating patterns to take advantage of changes in the technology of producing babies.
A little further into the future are technologies to give us control over our children's genetic heritage. My favorite is the libertarian eugenics sketched decades ago by science fiction author Robert Heinlein–technologies that permit each couple to choose, from among the children they might have, which ones they do have, selecting the egg that does not carry the mother's tendency to nearsightedness to combine with the sperm that does not carry the father's heritage of a bad heart. Run that process through five or ten generations, with a fair fraction of the population participating, and you get a substantial change in the human gene pool. Alternatively, if we learn enough to do real genetic engineering, we can forget about the wait and do the whole job in one generation.
Skip next from the beginning of life to the end. Given the rate of progress in biological knowledge over the past century, there is no reason to assume that the problem of aging will remain insoluble. Since the payoff is not only enormously large but goes most immediately to the currently old, some of whom are also rich and powerful, if it can be solved it is likely that it will be.
In a sense it already has been. There are currently more than a hundred people whose bodies are not growing older–because they are frozen, held at the temperature of liquid nitrogen. All are legally dead. But their hope in arranging their current status was that it would not be permanent–that with sufficient medical progress it would some day be possible to revive them. If it begins to look as though they are going to win their bet, we will have to think seriously about adapting laws and institutions to a world where there is an intermediate state between alive and dead and quite a lot of people are in it.
Finally we come to three technologies whose effects, if they occur, are sufficiently extreme that all bets are off, with both the extinction and the radical alteration of our species real possibilities within the lifespan of most of the people reading this book.
One such is nanotechnology–the ability to engineer objects at the atomic scale, to build machines whose parts are single atoms. That is the way living things are engineered: A DNA strand or an enzyme is a molecular machine. If we get good enough at working with very small objects to do it ourselves, possibilities range from microscopic cell repair machines that go through a human body fixing everything that is wrong to microscopic self-replicating creatures dedicated to turning the entire world into copies of themselves–known in nanocircles as the "gray goo" scenario.
Artificial intelligence might beat nanotech in the annihilation stakes–or in making heaven on earth. Raymond Kurzweil, a well informed computer insider, estimates that in about thirty years there will be programmed computers with human level intelligence. At first glance that suggests a world of science fiction robots–if we are lucky, obeying us and doing the dirty work. But if in thirty years computers are as smart as we are and if current rates of improvement–for computers but not for humans–continue, that means that in forty years we will be sharing the planet with beings at least as much smarter than we are as we are smarter than chimpanzees. Kurzweil's solution is for us to get smarter too–to learn to do part of our thinking in silicon. That could give us a very strange world–populated by humans, human/machine combinations, machines programmed with the contents of a human mind that think they are that human, machines that have evolved their own intelligence, and much else.
The final technology is virtual reality. Present versions use the brute force approach: feed images through goggles and headphones to eyes and ears. But if we can crack the dreaming problem, figure out how our nervous system encodes the data that reaches our minds as sensory perceptions, goggles and headphones will no longer be necessary. Plug a cable into a socket at the back of your neck for full sense perception of a reality observed by mechanical sensors, generated by a computer, or recorded from another brain.
The immediate payoff is that the blind will see–through video cameras–and the deaf hear. The longer run may be a world where most of the important stuff consists of signals moving from one brain to another over a network, with physical acts by physical bodies playing only a minor role. To visit a friend in England there is no need to move either his body or mine–being there is as easy as dialing the phone. That is one of many reasons why I do not expect gasoline powered automobiles to play a major role in transportation a century from now.
A few pages back, we were considering a world where realspace was entirely public, cyberspace entirely private. As things presently are, that would be a very public world, since most of us live most of our lives in realspace. But if deep VR reverses the ratio, giving us a world where all the interesting stuff happens in cyberspace and realspace activity consists of little more than keeping our bodies alive, it will be a very private world.
Having labeled the section science fiction, I could not resist adding a chapter on ways in which current and near future technologies may make possible the old sf dream–space travel, space habitats, in time, perhaps, the stars.
Any of the futures I have just sketched might happen, but not all. If nanotech turns the world into gray goo in 2030, it will also turn into gray goo the computers on which artificial super intelligences would have been developed in 2040. If nanotech bogs down and A.I. does not, the programmed computers that rule the world of 2040 may be more interested in their own views of how the human species should evolve than in our view of what sort of children we want to have. And, closer to home, if strong private encryption is built into our communication systems, with the encryption and decryption under the control not of the network but of the individuals communicating with each other–the National Security Agency's nightmare for the past twenty years or so–it won't matter how many telephone lines the FBI can tap.
That is one reason this book is not prophecy. I expect parts of what I describe to happen but I do not know which parts. My purpose is not to predict which future we will get but to use possible futures to think about how technological change will affect us and how we can and should change our lives and institutions to adapt to it.
That is also one reason why, with a few exceptions, I have limited my discussion of the future to the next thirty years or so. Thirty years is roughly the point at which both A.I. and nanotech begin to matter. It is also long enough to permit technologies that have not yet attracted my attention to start to play an important role. Beyond that my crystal ball, badly blurred at best, becomes useless; the further future dissolves into mist.
New technologies change what we can do. Sometimes they make what we want to do easier. After writing a book with a word processor, one wonders how it was ever done without one. Sometimes they make what someone else is doing easier–making it harder for us to prevent him from doing it. Enforcing copyright law became more difficult when photo typesetting made the cost of producing a pirate edition lower than the cost of the authorized edition it competed with, and more difficult again when inexpensive copying put the tools of piracy in the hands of any college professor in search of reading material for his students. As microphones and video cameras become smaller and cheaper, preventing other people from spying on me becomes harder.
The obvious response is to try to keep doing what we have been doing. If that is easier, good. If it is harder, too bad. The world must go on, the law must be enforced. "Damn the torpedoes, full speed ahead."
Obvious–and wrong. The laws we have, the ways we do things, are not handed down from heaven on tablets of stone. They are human contrivances, solutions to particular problems, ways of accomplishing particular ends. If technological change makes a law hard to enforce, the best solution is sometimes to stop enforcing it. There may be other ways of accomplishing the same end–including some enabled by the same technological change. The question is not "how do we continue to do what we have been doing" but "how do we best achieve our objectives under new circumstances?"
Copyright law gives the author of a copyrightable work the right to control who copies it. If copying a book requires an expensive printing plant operating on a large scale, that right is reasonably easy to enforce. If every reader owns equipment that can make a perfect copy of a book at negligible cost, enforcing the law becomes very nearly impossible.
So far as printed material is concerned, copyright law has become less enforceable over the past century, but not yet unenforceable. The copying machines most of us have access to can reproduce a book, but the cost is comparable to the cost of buying the book and the quality worse. Copyright law in printed works can still be enforced, even if less easily than in the past.
The same is not true for intellectual property in digital form. Anyone with a computer equipped with a floppy drive can copy a hundred dollar program onto a one dollar floppy. Anyone with a CDR drive can copy a four hundred dollar program onto a one dollar CD. And anyone with a reasonably fast internet connection can copy anything available online, anywhere in the world, to his hard drive.
Under those circumstances, enforcing copyright law against individual users is very nearly impossible. If my university decides to save on its software budget by buying one copy of Microsoft Office and making lots of copies, a discontented employee with Bill Gates' email address could get us in a lot of trouble. But if I choose to provide copies to my wife and children–which under Microsoft's license I am not permitted to do–or even to a dozen of my friends, there is in practice little that Microsoft can do about it.
That could be changed. If we wanted to enforce present law badly enough, we could do it–with suitable revisions on the enforcement end. Every computer in the country would be subject to random search. Anyone found with an unlicensed copy of software would go straight to jail. Silicon valley would empty and the prisons would fill with geeks, teenagers, and children.
Nobody regards that as a tolerable solution to the problem. Although there has been some shift recently in the direction of expanded criminal liability for copyright infringement, software companies for the most part take it for granted that they cannot use the law to prevent individual copying of their programs and so fall back on other ways of getting rewarded for their efforts.
Holders of music copyrights face similar problems. As ownership of tape recorders became common, piracy became easier. Shifting to CD's temporarily restored the balance, since they provided higher quality than tape and were expensive to copy–but then cheap CD recorders and digital audio tape came along. Most recently, as computer networks have gotten faster, storage cheaper, and digital compression more efficient, the threat has been from online distribution of MP3 files encoding copyrighted songs.
In the early days of home computers, some companies sold their programs on disks designed to be uncopyable. Consumers found that inconvenient, either because they wanted to make copies for their friends or because they wanted to make backup copies for themselves. So other software companies sold programs designed to copy the copy protected disks. One company produced a program–SuperUtility Plus–designed to do a variety of useful things, including copying other companies' protected disks. It was itself copy protected. So another company produced a program–SuperDuper–whose sole function in life was to make copies of SuperUtility Plus.
Technological protection continues in a variety of forms. All face a common problem. It is fairly easy to provide protection sufficient to keep the average user from using software in ways in which the producer does not want him to use it. It is very hard to provide protection adequate against an expert. And one of the things experts can do is to make their expertise available to the average user in the form of software designed to defeat protection schemes.
This suggests a possible solution: technological protection backed up by legal protection against software designed to defeat it. In the early years, providers of copy protection tried that approach. They sued the makers of software designed to break the protection, arguing that they were guilty of contributory infringement (helping other people copy copyrighted material), direct infringement (copying and modifying the protection software in the process of learning how to defeat it) and violation of the licensing terms under which the protection software was sold. They lost.
More recently, owners of intellectual property successfully supported new legislation–Section 1201 of the Digital Millennium Copyright Act–which reverses that result, making it illegal to produce or distribute software whose primary purpose is defeating technological protection. It remains to be seen whether or not that restriction will itself prove enforceable.
Anyone with a video recorder can copy videos for his friends. Nonetheless, video rental stores remain in business. They inexpensively provide their customers with an enormously larger selection than they could get by copying their friends' cassettes. The stores themselves cannot safely violate copyright law, buying one cassette for a hundred outlets, because they are large, visible organizations. So producers of movies continue to get revenue from video cassettes, despite the ability of customers to copy them.
There is no practical way for music companies to prevent one teenager from making copies of a CD or a collection of MP3's for his friends–but consumers of music are willing to pay for the much wider range of choice available from a store. The reason Napster threatened the music industry was that it provided a similar range of choice at a much lower cost. Similarly for software. As long as copyright law can be used to prevent large scale piracy, customers will be willing to pay for the convenience provided by a legal, hence large scale and public, source for their software. In both cases, the ability of owners of intellectual property to make piracy inconvenient enough to keep themselves in business is threatened by the internet, which offers the possibility of large scale public distribution of pirated music and software.
(William F. Buckley, Jr.)
A century ago, prominent authors got a good deal of their income from public lectures. Judging by the quote from Buckley–and my own observations–some still do. That suggests that, in a world without enforceable copyright, some authors could write books, provide them online to anyone who wanted them, and make their living selling services to their readers–public lectures, consulting services, or the like. This is not a purely conjectural possibility. Currently I provide the full text of three books and numerous articles on my web page, for free–and receive a wide range of benefits, monetary and non-monetary, by doing so.
This is one example of a more general strategy: Give away the intellectual property and get your income from it indirectly. That is how both of the leading web browsers are provided. Netscape gives away Navigator and sells the server software that Navigator interacts with; Microsoft follows a similar strategy. Apple provides a competing browser–which is available for free, but only runs on Apple computers. It is also how radio and television programs pay their bills; give away the program and get revenue from the ads.
As these examples show, the death of copyright does not mean the death of intellectual property. It does mean that producers of intellectual property must find other ways of getting paid for their work. The first step is recognizing that, in the long run, simply enforcing existing law is not going to be an option.
A newspaper publishes an article asserting that I am a wanted criminal, having masterminded several notorious terrorist attacks. Colleagues find themselves otherwise engaged when I propose going out to dinner. My department chair assigns me to teach a course on Sunday mornings with an enrollment of one. I start getting anonymous phone calls. My recourse under current law is to sue the paper for libel, forcing them to retract their false claims and compensate me for damage done.
Implicit in the legal solution to defamation are two assumptions. One is that when someone makes a false statement to enough people to do serious damage, the victim can identify either the person who made the statement or someone else responsible for his making it–the newspaper if not the author. The other is that at least one of the people identified as responsible will have enough assets to be worth suing.
In the world of twenty years ago, both assumptions were usually true. The reporter who wrote a defamatory article might be too poor to be worth suing, but the newspaper that published it was not–and could reasonably be held responsible for what it printed. It was possible to libel someone by a mass mailing of anonymous letters, but a lot of trouble to do it on a large enough scale to matter to most victims.
Neither is true any longer. It is possible, with minimal ingenuity, to get access to the internet without identifying yourself. With a little more technical expertise, it is possible to communicate online through intermediaries–anonymous remailers–in such a way that the message cannot be linked to the sender. Once online, there are ways to communicate with large numbers of people at near zero cost: mass email, posts on Usenet news, a page on the worldwide web. And if you choose to abandon anonymity and spread lies under your own name, access to the internet is so inexpensive that it is readily available to people without enough assets to be worth suing.
One possible response is that we must enforce the law–whatever it takes. If the originator of the defamation is anonymous or poor, find someone else, somewhere in the chain of causation, who is neither. In practice, that probably means identifying the internet service provider through whom the message passed and holding him liable. A web page is hosted on some machine somewhere; someone owns it. An email came at some point from a mail server; someone owns that.
That solution makes no more sense than holding the U.S. Post Office liable for anonymous letters. The publisher of a newspaper can reasonably be expected to know what is appearing in his pages. But an ISP has no practical way to monitor the enormous flow of information that passes through its servers–and if it could, we wouldn't want it to. We can–in the context of copyright infringement we do–set up procedures under which an ISP can be required to take down webbed material. But that does no good against a Usenet post, mass email, webbed defamation hosted in places reluctant to enforce U.S. law, or defamers willing to go to the trouble of hosting their web pages on multiple servers, shifting from one to another as necessary. Defamation law is of very limited use for preventing online defamation.
There is–has always been–another solution to the problem. When people tell lies about me, I answer them. The technological developments that make defamation law unenforceable online also make possible superb tools for answering lies, and thus provide a substitute, arguably a superior substitute, for legal protection.
My favorite example is Usenet News, a part of the internet older and less well known than the web. To the user, it looks like a collection of online bulletin boards, each on a different topic–anarchy, short-wave radios, architecture, cooking history. When I post a message to a newsgroup, the message goes to a computer–a news server–provided by my ISP. The next time that news server talks to another, they exchange messages–and mine spreads gradually across the world. In an hour, it may be answered by someone in Finland or Japan. The server I use hosts nearly thirty thousand groups. Each is a collection of conversations spread around the world–a tiny non-geographical community united, and often divided, by common interests.
Google, which hosts a popular web search engine, also provides a search engine for Usenet. Using it I can discover in less than a minute whether anyone has mentioned my name anywhere in the world any time in the last three days–or weeks, or years–in any of more than thirty thousand newsgroups. If I get a hit, one click brings up the message. If I am the David Friedman mentioned (the process would be easier if my name were Myron Whirtzlburg), and if the message requires an answer, a few more clicks put my response in the same thread of the same newsgroup, where almost everyone who read the original post will see it. It is as if, when anyone slandered me anywhere in the world, the wind blew his words to me and my answer back to the ears of everyone who had heard them.
The protection Usenet offers against defamation is not perfect; a few people who read the original post may miss my reply and more may choose not to believe it. But the protection offered by the courts is imperfect too. Most damaging false statements are not important enough to justify the cost and trouble of a lawsuit. Many that are important enough do not meet the legal requirements for liability. Given the choice, I prefer Usenet.
Suppose that instead of defaming me on a newsgroup you do it on a web page. Finding it is easy–Google provides a search engine for the web too. The problem is how to answer it. I can put up a web page with my answer and hope that sufficiently interested readers will come across it, but that is all I can do. The links on your web page are put there by you, not by me–and you may be reluctant to add one to the page that proves you are lying.
There is a solution to this problem–a technological solution. Current web browsers show only forward links–links from the page being read to other pages. It would be possible to build a web browser, say Netscape Navigator 9.0, that automatically showed back links, letting the user see not only what pages the author of this page chose to link to but also what pages chose to link to it. Once such browsers are in common use, I need only put up a page with a link to yours. Anyone browsing your page with the back link option turned on will be led to my rebuttal.
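The mechanism behind such a browser is simple: invert the web's forward-link graph. A minimal sketch–the pages here are hypothetical in-memory HTML stubs standing in for a crawl of the real web:

```python
import re

# Build a back-link index: for each page, record which other pages link to it.
# All URLs and page contents below are invented for illustration.
pages = {
    "http://example.com/accusation": '<a href="http://example.com/victim">story</a>',
    "http://example.com/rebuttal":   '<a href="http://example.com/accusation">lies</a>',
    "http://example.com/victim":     "my home page",
}

backlinks: dict[str, list[str]] = {}
for url, html in pages.items():
    for target in re.findall(r'href="([^"]+)"', html):
        backlinks.setdefault(target, []).append(url)

# A back-link-aware browser viewing the accusation could also offer its rebuttal:
print(backlinks["http://example.com/accusation"])  # ['http://example.com/rebuttal']
```

A real implementation would need a crawl (or a search engine's existing index) rather than a dictionary of strings, but the inversion itself is exactly this cheap–which is why the obstacle discussed next is legal rather than technological.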
There is a problem with this solution–a legal problem. Your web page is covered by copyright, which gives you the right to forbid other people from making either copies or derivative works. A browser that displays your page as you intended is making a copy, but one to which you have given implicit authorization by putting your page on the web. A browser that displays your page with back links added is creating a derivative work–one that you may not have intended and, arguably, did not authorize. To make sure your lies cannot be answered, you notify Netscape that they are not authorized to display your page with back links added.
The issue of when one web page is an unauthorized derivative work of another is currently being fought out in the context of "framing"–one web site presenting material from another along with its own advertising. If my view of online defamation is correct, the outcome of that litigation may be important to an entirely different set of issues. The same legal rule–a strong reading of the right to prevent derivative works online–that would protect a site from other people free riding on its content would also provide protection to someone who wants to spread lies online--unanswered.
Consider the category of "parent." It used to be that, while there might be some uncertainty about the identity of a child's father, there was no question what "father" and "mother" meant. Laws and social norms specifying the rights and obligations of fathers and mothers were unambiguous in meaning, if not always in application.
That is no longer the case. With current reproductive technology there are at least two biological meanings of "mother" and will soon be a third. A gestational mother is the woman in whose womb a fetus was incubated. An egg mother is the woman whose fertilized egg became the fetus. Once human cloning becomes an established technology, a mitochondrial mother will be the woman whose egg, with its nucleus replaced by the nucleus of the clone donor but with its own extra-nuclear mitochondrial DNA, developed into the fetus. And once genetic engineering becomes a mature technology, permitting us to produce offspring whose DNA is a patchwork from multiple donors, the concept of "a" biological mother (or father) will be very nearly meaningless.
A California couple wanted a child. The husband was sterile. His wife was doubly sterile–she could neither produce a fertile egg nor bring a fetus to term. They contracted with a sperm donor, an egg donor, and a gestational mother. The donated egg was impregnated with the donated sperm and implanted in the rented womb. Then, before the baby was born, their marriage broke up, leaving the courts with a puzzle: What person or persons had the legal rights and obligations of parenthood?
Under California law read literally, the answer was clear. The mother was the woman from whose body the child was born. The father was her husband. That was a sensible enough legal rule when the laws were written. But it made no sense at all in a world where neither that woman nor her husband was either related to the child or had intended to parent it.
The court that finally decided the issue, like some but not all other California courts presented with similar conundrums, sensibly ignored the literal reading of the law, holding that the parents were the couple who had set the train of events in motion, intending at that time to rear the child as their own. They thus substituted for the biological definition that had become technologically obsolete a social definition–motherhood by neither egg nor womb but by intention.
This is a true story. If you don't believe me, go to a law library and look up John A. B. v. Luanne H. B., 72 Cal. Rptr. 2d 280 (Ct. App. 1998).
Consider someone whose body is preserved at the temperature of liquid nitrogen while awaiting the medical progress needed to revive and cure him. Legally he is dead; his wife is a widow, his heirs have his estate. But if he is in fact going to be revived, then in a very real sense he is not dead–merely sleeping very soundly. Our legal system, more generally our way of thinking about people, takes no account of the special status of such a person. There is a category of alive, a category of dead, and–outside of horror movies and computer games–nothing between them.
You are dying of a degenerative disease that will gradually destroy your brain. If you are cured today, you will be fine. If a year later, your body may survive but your mind will not. After considering the situation, you decide that you are more than willing to trade a year of dying for a chance of getting back your life. You call up the Alcor Foundation and ask them to arrange to have your body frozen–tomorrow if possible.
They reply that while they agree with your decision they cannot help you. As long as you are legally alive, freezing you is legally murder. You will simply have to wait another year until you are declared legally dead–and hope that somehow, some day, medical science will become capable of reconstructing you from what by that time is left.
This too is, allowing for a little poetic license, a true story. In Donaldson v. Van de Kamp, Thomas Donaldson went to court in an unsuccessful attempt to get permission to be frozen before, rather than after, his brain was destroyed by a cancerous tumor.
The issues raised by these cases–the meaning of parenthood and of death–will be discussed at greater length in later chapters. Their function here is to illustrate the way in which technological change alters the conceptual ground under our feet.
All of us deal with the world in terms of approximations. We describe someone as tall or short, kind or cruel, knowing that the former is a matter of degree and the latter both of degree and of multiple dimensions. We think of the weather report as true, although it is quite unlikely that it provides a perfectly accurate description of the weather, or even that such a description is possible–when the weather man says the temperature is 70 degrees in the shade, just which square inch of shade is he referring to? And we classify a novel as "fiction" and this book as "nonfiction," although quite a lot of the statements in the former are true and some in the latter are false.
Dealing with the world in this way works because the world is not a random assemblage of objects–there is pattern to it. Temperature varies from one patch of shade to another, but not by very much; while a statement about "the" temperature in the shade may not be precisely true, we rarely lose much by treating it as if it were. Similarly for the other useful simplifications of reality that make possible both thought and communication.
When the world changes enough, some simplifications cease to be useful. It was always true that there was a continuum between life and death; the exact point at which someone is declared legally dead is arbitrary. But, with rare exceptions, it was arbitrary to within seconds, perhaps minutes–which almost never mattered. When it is known that, for a large number of people, the ambiguity not only exists but will exist for decades, the simplification is no longer useful. It may, as in the case of Thomas Donaldson, become lethal.
So far my examples have focused on how legal rules should respond to technological change. But similar issues arise for each of us in living his own life in a changing world. Consider, for a story now in part played out, the relations between men and women.
For a very long time, human societies have been based on variants of the sexual division of labor. All started with a common constraint–women bear and suckle children, men do not. For hunter gatherers, that meant that the men were the hunters and the women, kept relatively close to camp by the need to care for their children, the gatherers. In more advanced societies, that became, with many variations, a pattern where women specialized in household production and men in production outside the household.
A second constraint was the desire of men to spend their resources on their own children rather than on the children of other men–a desire rooted in the fact that Darwinian selection has designed organisms, including human males, to be good at passing down their own genes to future generations. Since the only way a man could be reasonably confident that he was the father of a particular child was for its mother not to have had sex with other men during the period when it was conceived, the usual arrangement of human societies, with a few exceptions, gave men sexual exclusivity. One man might under some circumstances sleep with more than one woman, but one woman was supposed to, and most of the time did, sleep with only one man.
Over the past few centuries, two things have sharply altered the facts that led to those institutions. One is the decline in infant mortality. In a world where producing two or three adult children required a woman to spend most of her fertile years bearing and nursing, the sexual division of labor was sharp–one profession, "mother," absorbed close to half the labor force. In today's world, a woman need bear only two babies in order to end up with two adult children.
A second change, the increased division of labor, has drastically reduced the importance of household production. You may still wash your own clothes, but most of the work was done by the people who built the washing machine. You may still cook your own dinner, but you are unlikely to cure your own ham or make your own soap. That change eliminated a good deal of what wives traditionally did, freeing women for other activities.
As being a wife and mother went from a full to a part time job, human institutions adjusted. Market employment of women increased. Divorce became more common. The sexual division of labor, while it still exists, is much less sharp–many women do jobs that used to be done almost exclusively by men, some men do jobs that used to be done almost exclusively by women.
One consequence of married women working largely outside of the home is to make the enforcement of sexual exclusivity, never easy, very nearly impossible. Modern societies developed a social alternative--companionate marriage. A wife who is your best friend instead of your subordinate or slave is less likely to want to cheat on you--a good thing if you have no practical way of stopping her. Modern society also produced, somewhat later, a technological alternative: Paternity testing. It is now possible for a husband to know whether his wife's children are his even if he is not confident that he is her only sexual partner.
This raises some interesting possibilities. We could have–are perhaps moving towards–a variant of conventional marriage institutions in which paternal obligations are determined by biology, not marital status. We could have a society with group marriages but individual parental responsibilities, since a woman would know which of her multiple husbands had fathered any particular child. We could have a society with casual sex but well defined parental obligations–although that raises some practical problems, since it is much easier for a couple to share parental duties if they are also living together, and the fact that two people enjoy sleeping together is inadequate evidence that they will enjoy living together.
All of these mating patterns exist already–for a partial sample, see the Usenet newsgroup alt.polyamory. Whether any become common will depend in large part on the nature of male sexual jealousy. Is it primarily a learned pattern, designed to satisfy an instinctual preference for one's own children? Or is it itself instinctual–hard wired by evolution as a way of improving the odds that the children a male supports carry his genes? If the former, then once the existence of paternity testing makes jealousy obsolete we can expect its manifestations to vanish, permitting a variety of new mating patterns. If the latter, jealousy is still obsolete but, given the slow pace of evolutionary change, that fact will be irrelevant to behavior for a very long time, hence we can expect to continue with some variant of monogamy, or at least serial polygamy, as the norm.
The basic principle here is the same as in earlier examples of adjustment to technological change. Our objective is not to save marriage. It is to accomplish the purposes that marriage evolved to serve. One way is to continue the old pattern even though it has become more difficult–as exemplified by the movement for giving couples the option of covenant marriage, marriage on something more like the old terms of "till death do us part." Another is to take advantage of technological change to accomplish the old objective–producing and bringing up children–in new ways.
Litigation has always been a clumsy and costly way of enforcing contractual obligations. It is possible to sue someone in another state, even another country–but the more distant the jurisdiction, the harder it is. If online commerce eventually dispenses with not only geography but real world identity, so that much of it occurs between parties linked only to an identity defined by a digital signature, enforcing contracts in the courts becomes harder still. It is difficult to sue someone if you do not know who he is.
Ebay provides a low tech example. When you win an auction and take delivery of the goods, you are given an opportunity to report on the result–did the seller deliver when and as scheduled, were the goods as described? The reports on all past auctions by a given seller are available, both in full and in summary form, to anyone who might want to bid on that seller's present auctions. In a later chapter we will consider more elaborate mechanisms, suitable for higher stakes transactions, by which modern information technology can use reputational enforcement to substitute for legal enforcement.
When considering the down side of technologies–Murder Incorporated in a world of strong privacy or some future James Bond villain using nanotechnology to convert the entire world to gray goo–your reaction may be "Stop the train, I want to get off." In most cases, that is not an option. This particular train is not equipped with brakes.
Most of the technologies we will be discussing can be developed locally and used globally. Once one country has a functional nanotechnology, permitting it to build products vastly superior to those made with old technologies, there will be enormous pressure on other countries to follow suit. It is hard to sell glass windshields when the competition is using structural diamond. It is even harder to persuade cancer patients to be satisfied with radiation therapy when they know that, elsewhere in the world, microscopic cell repair machines are available that simply go through your body and fix whatever is wrong.
For an example already played out, consider surrogacy contracts–agreements by which a woman bears a child, either from her own or another woman's egg, for another couple to rear as its own. The Baby M case established that such contracts are not enforceable, at least in New Jersey. State legislation followed, with the result that in four states merely signing such a contract is a criminal act and in one, Michigan, arranging a surrogacy contract is a felony punishable by up to five years and $50,000.
None of this mattered very much. Someone who could afford the costs of hiring a host mother, still more someone who could afford the cost necessary to arrange for one mother to incubate another's egg, could almost certainly afford the additional cost of doing it in a friendly state. As long as there was one state that approved of such arrangements, the disapproval of others had little effect even on their own citizens. And even if the contracts were legally unenforceable, it was only a matter of time before people in the business of arranging them learned to identify and avoid potential host mothers likely to change their mind after the child was born.
Or consider research into the causes of aging. Many people believe (I think mistakenly) that the world suffers from serious problems of overpopulation. Others argue (somewhat more plausibly) that a world without aging would risk political gerontocracy and cultural stasis. Many would–some do–argue that even if the problem of aging can be solved, it ought not to be.
That argument becomes less convincing the older you get. Old people control large resources, both economic and political. While arguments against aging research may win out somewhere, they are unlikely to win out everywhere–and the cure only has to be found once.
For a more disturbing example, consider artificial intelligence–a technology that might well make human beings obsolete. At each stage, doing it a little better means being better able to design products, predict stock movements, win wars. That almost guarantees that at each stage, someone will take the next step.
Even if it is possible to block or restrict a potentially dangerous technology, as in a few cases it may be, it is not clear that we should do it. We might discover that we had missed the disease and banned the cure. If an international covenant backed by overwhelming military power succeeds in restricting nanotechnological development to government approved labs, that might save us from catastrophe. But since government approved labs are the ones most likely to be working on military applications of new technology, while private labs mostly try to produce what individual customers want, the effect might also be to prevent the private development of nanotechnological countermeasures to government developed mass destruction. Or it might turn out that our restrictions had slowed the development of nanotechnology by enough to leave us unable to defend against the result of a different technology–a genetically engineered plague, for example.
There are legitimate arguments for trying to slow or prevent some of these technological developments. Those arguments will be made–but not here. For my purposes, it is more interesting to assume that such attempts, if made, will fail, and try to think through the consequences–how new technologies will change things, how human beings will and should adapt to those changes.
Technological progress means learning more about how to do things; on the face of it, one would expect that to result in an improvement in human life. So far, with few or no exceptions, it has. Despite a multitude of dire prophecies over the past two centuries, human life almost everywhere is better today than it was fifty years ago, better fifty years ago than a hundred years ago, and better a hundred years ago than two hundred years ago.
Past experience is not always a reliable guide to the future. Despite the progress of the past two hundred years, quite a number of people continue to predict future catastrophe from present progress—including a few sufficiently well informed and competent to be worth taking seriously. In my final chapter, I will return to the question of whether, how, and under what circumstances they might be right.
There has been a lot of concern in recent years about the end of privacy. As we will see in the next two chapters, there is reason for such fears; the development of improved technologies for surveillance and data processing does indeed threaten our ability to restrict other people’s access to information about us. But a third and less familiar technology is working in precisely the opposite direction. If the arguments of this chapter are correct we will soon be experiencing in part of our lives–an increasingly important part–a level of privacy that human beings have never known before. It is a level of privacy that not only scares the FBI and the National Security Agency, two organizations whose routine business involves prying into other people's secrets, it sometimes even scares me.
We start with an old problem: How to communicate with someone without letting other people know what you are saying. There are a number of familiar solutions. If you are worried about eavesdroppers, check under the eaves before saying things you do not want the neighbors to hear. To be safer still, hold your private conversation in the middle of a large, open field, or a boat in the middle of a lake. The fish are not interested and nobody else can hear.
That approach no longer works. Even the middle of a lake is within range of a shotgun mike. The eaves do not have to contain eavesdroppers–just a microphone and a transmitter. If you check for bugs, someone can still bounce a laser beam off your window pane and use it to pick up the vibration from your voice. I am not sure that satellite observation is good enough yet to read lips from orbit–but if not, it soon will be. Furthermore, much of our communication is now indirect, over phone wires, airwaves, the internet. Phone lines can be tapped, cordless or cell phone messages intercepted. An email bounces through multiple computers on its way to its destination—anyone controlling one of those computers can, in principle, save a copy for himself.
A different set of old technologies was used for written messages. A letter sealed with the sender's signet ring could not protect the message, but at least it let the recipient know if it had been opened–unless the spy was very good with a hot knife. A letter sent via a trusted messenger was safer still–provided he deserved the trust.
A more ingenious approach was to protect not the physical message but the information it contained, by scrambling the message and providing the intended recipient with the formula for unscrambling it. A simple version was a substitution cipher, in which each letter in the original message was replaced by a different letter. If we replace each letter with the next one in the alphabet, we get "mjlf uijt" from the words "like this."
"mjlf uijt" does not look much like "like this," but it is not very hard, if you have a long message and patience, to deduce the substitution and decode the message. More sophisticated scrambling schemes rearrange the letters according to an elaborate formula, or convert letters into numbers and do complicated arithmetic with them to convert the message (plaintext) into its coded version (ciphertext). Such methods were used, with varying degrees of success, by both sides in World War II.
There were two problems with this way of keeping secrets. The first was that it was slow and difficult–it took a good deal of work to convert a message into its coded form or to reverse the process. It was worth doing if the message was the order telling your fleet when and where to attack, but not for casual conversations among ordinary people.
That problem has been solved. The computers most of us have on our desktops can scramble messages, using methods that are probably unbreakable even by the NSA, faster than we can type them. They can even scramble–and unscramble–the human voice as fast as we can speak. Encryption is now available not merely to the Joint Chiefs of Staff but to you and me for our ordinary conversation.
The other problem is that in order to read my scrambled message you need the key–the formula describing how to unscramble it. If I do not have a safe way of sending you messages, I may not have a safe way of sending you the key either. If I sent it by a trusted messenger but made a small mistake as to who was entitled to trust him, someone else now has a copy and can use it to decrypt my future messages to you. This may not be too much of a problem for governments, willing and able to send information back and forth in briefcases handcuffed to the wrists of military attachés, but for the ordinary purposes of ordinary people that is not a practical option.
About twenty-five years ago, this problem was solved. The solution was public key encryption, a new way of scrambling and unscrambling messages that does not require a secure communication channel for either the message or the key. The software to implement that solution is now widely available.
Public key encryption works by generating a pair of keys–call them A and B–each a long number that can be used to unscramble what the other has scrambled. If you encrypt a message with A, someone who possesses only A cannot decrypt it–that requires B. If you encrypt a message with B, you have to use A to decrypt it. If you send a friend key A (your public key) while keeping key B (your private key) secret, your friend can use A to encrypt messages to you and you can use B to decrypt them. If a spy gets a copy of key A, he can send you secret messages too. But he still cannot decrypt the messages from your friend. That requires key B, which never leaves your possession.
How can one have the information necessary to encrypt a message yet be unable to decrypt it? How can it be possible to produce two keys with the necessary relationship but not, starting with one key, to calculate the other? The answer to both questions depends on the fact that there are some mathematical processes that are much easier to do in one direction than another.
Most of us can multiply 293 by 751 reasonably quickly, using nothing more sophisticated than pencil and paper, and get 220043. Starting with 220043 and finding the only pair of three digit numbers that can be multiplied together to give it takes a lot longer. The most widely used version of public key encryption depends on that asymmetry–between multiplying and factoring–using very much larger numbers. Readers who are still puzzled may want to look at appendix I of this chapter, where I describe a very simple form of public key encryption suited to a world where people know how to multiply but have not yet learned how to divide, or check one of the webbed descriptions of the mathematics of the RSA algorithm, the most common form of public key encryption.
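The same two primes from the multiplication example can be used to build a working, if laughably small, RSA key pair. The sketch below is textbook RSA with toy numbers–real keys use primes hundreds of digits long, and real implementations add padding–but it shows the key-pair property described above:

```python
# Toy RSA built from the chapter's example primes, 293 and 751.
# Anyone can see n = 220043; only someone who knows its factors
# can compute phi, and hence the private exponent d.
p, q = 293, 751
n = p * q                 # 220043 -- published as part of the public key
phi = (p - 1) * (q - 1)   # easy only if you know p and q
e = 17                    # public exponent, chosen coprime to phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone holding (e, n) can encrypt
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
assert recovered == message
```

The asymmetry is exactly the one in the text: going from `p, q` to `n` is one multiplication; going from `n` back to `p, q` is factoring, which for large enough numbers is impractically slow.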
When I say that encryption is unbreakable, what I mean is that it cannot be broken at a reasonable cost in time and effort. Almost all encryption schemes, including public key encryption, are breakable given an unlimited amount of time. If, for example, you have key A and a message a thousand characters long encrypted with it, you can decrypt the message by having your computer create every possible thousand character message, encrypt each with A, and find the one that matches. Alternatively, if you know that key B is a number a hundred digits long, you could try all possible hundred digit numbers, one after another, until you found one that correctly decrypted a message that you had encrypted with key A.
Both of these are what cryptographers describe as "brute force" attacks. To implement the first of them, you should first provide yourself with a good supply of candles–the number of possible thousand character sequences is so astronomically large that, using the fastest available computers, the sun will have burned out long before you finish. The second is workable if key B is a sufficiently short number–which is why people who are serious about protecting their privacy use long keys, and why people who are serious about violating privacy–the National Security Agency, for example–try to make laws restricting the length of the keys that encryption software uses.
One obvious result is that we can have private conversations. If I want to send you a message that nobody else can read, I first encrypt it with your public key. When you respond, you encrypt your message with my public key. The FBI, or my nosy neighbor, is welcome to tap the line–everything he gets will be gibberish to anyone who does not have the corresponding private key.
Even if the FBI does not know what I am saying, it can learn a good deal by watching who I am saying it to–known in the trade as "traffic analysis." That problem too can be solved using public key encryption and an anonymous remailer, a site on the internet that forwards email. When I want to communicate with you, I send the message to the remailer, along with your email address. The remailer sends it to you.
If that was all that happened, someone tapping the net could follow the message from me to the remailer and from the remailer to you. To prevent that, the message to the remailer, including your email address, is encrypted with the remailer's public key. When he receives it he uses his private key to strip off that layer of encryption, revealing your address, and forwards the decrypted message. Our hypothetical spy sees a thousand messages go into the remailer and a thousand go out, but he can neither read the email addresses on the incoming messages–they are hidden under a layer of encryption–nor match up incoming and outgoing messages.
What if the remailer is a plant–a stooge for whoever is spying on me? There is a simple solution. The email address he forwards the message to is not actually yours–it is the email address of a second remailer. The message he forwards is your message plus your email address, the whole encrypted with the second remailer's public key. If I am sufficiently paranoid, I can bounce the message through ten different remailers before it finally gets to you. Unless all ten are working for the same spy, there is no way anyone can trace the message from me to you.
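The layered structure can be sketched directly. The sketch below stands in for public key encryption with a simple XOR scramble–a deliberate simplification, since the point here is the nesting, not the cipher: each remailer can peel exactly one layer, and each learns only the next hop:

```python
# A toy "onion" of remailer layers. Real remailers use public key
# encryption; XOR with a per-remailer key stands in for it here,
# purely to show the layered structure.
def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so this both encrypts and decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

keys = {"r1": b"alpha", "r2": b"bravo", "r3": b"candy"}  # hypothetical remailers

# The sender builds the onion from the inside out.
packet = b"you|meet at noon"                       # final address + message
packet = xor_crypt(packet, keys["r3"])             # only r3 can open this layer
packet = xor_crypt(b"r3|" + packet, keys["r2"])    # r2 learns only the next hop
packet = xor_crypt(b"r2|" + packet, keys["r1"])    # r1 learns only the next hop

# Each remailer in turn peels one layer and forwards the rest.
for name in ("r1", "r2", "r3"):
    packet = xor_crypt(packet, keys[name])
    next_hop, _, packet = packet.partition(b"|")
print(next_hop, packet)
```

No single remailer ever sees both the sender and the final recipient, which is why the spy must suborn every hop on the route to trace the message.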
When interacting with other people, it is helpful to be able to prove your identity–which can be a problem online. If I am leading a conspiracy to overthrow an oppressive government, I want my fellow conspirators to be able to tell which messages are coming from me and which from the secret police pretending to be me. If I am selling my consulting services online, I need to be able to prove my identity in order to profit from the reputation earned by past consulting projects and make sure that nobody else free rides on that reputation by masquerading as me.
That problem too can be solved by public key encryption. In order to digitally sign a message, I encrypt it using my private key instead of your public key. I then send it to you with a note telling you who it is from. You decrypt it with my public key. The fact that what comes out is a message and not gibberish tells you that it was encrypted with the matching private key. Since I am the only one who has that private key, the message must be from me.
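Using the same toy RSA numbers as before, signing is just encryption with the roles of the keys reversed. Real signature schemes sign a hash of the message and add padding; this bare sketch only shows the reversed-key idea:

```python
# Toy digital signature: encrypt with the private exponent,
# verify with the public one. Same small key pair as before;
# real systems sign a hash of the message, not the message itself.
p, q = 293, 751
n, phi = p * q, (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent, known only to the signer

message = 12345
signature = pow(message, d, n)           # only the key holder can make this
assert pow(signature, e, n) == message   # anyone with (e, n) can check it
```

Since only the holder of the private key could have produced a signature that checks against the public key, the signer cannot later disavow it–which is what makes digitally signed contracts enforceable.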
My digital signature not only demonstrates that I sent the signed message, it does so in a form that I cannot later disavow. If I try to deny having sent it, you point out that you have a copy of the message encrypted with my private key–something that nobody but I could have produced. Thus a digital signature makes it possible for people to sign contracts that they can be held to–and does so in a way much harder to forge than an ordinary signature.
If we are going to do business online, we need a way of paying for things. Checks and credit cards leave a paper trail. What we want is an online equivalent of currency–a way of making payments that cannot later be traced, either by the parties themselves or anyone else.
The solution, discussed in some detail in a later chapter, is anonymous ecash. Its essential feature is that it permits people to make payments to each other by sending a message, without either party having to know the identity of the other and without any third party having to know the identity of either of them. One of the many things it can be used for is to pay for the services of an anonymous remailer, or a string of anonymous remailers, thus solving the problem of how to keep remailers in business without sacrificing their customers' anonymity. Another, as we will see later, is to help us eliminate one of the chief minor nuisances of modern life--spam email.
Combine public key encryption, anonymous remailers, digital signatures and ecash, and we have a world where individuals can talk and trade with reasonable confidence that no third party is observing them.
A less obvious implication is the ability to combine anonymity and reputation. You can do business online without revealing your real world identity--your true name. You prove you are the same person who did business yesterday, or last year, by digitally signing your messages. Your online persona is defined by its public key. Anyone who wants to communicate with you privately uses that key to encrypt his messages; anyone who wants to be sure you are the person who sent a message uses it to check your digital signature.
With the exception of fully anonymous ecash, all of these technologies already exist, implemented in software that is currently available for free. At present, however, they are mostly limited to the narrow bandwidth of email–sending private text messages back and forth. As computers and computer networks get faster, that will change.
Twice in the past month I traveled several hundred miles–once by car, once by air–in order to give a series of talks. With only mild improvements in current technology I could have given them from my office. Both I and my audience would have been wearing virtual reality goggles–glasses with the lenses replaced by tiny computer screens. My computer would be drawing the view of the lecture room as seen from the podium–including the faces of my audience–at sixty frames a second. Each person in the audience would have a similar view, from his seat, drawn by his computer. Earphones take care of sound. The result would be the illusion, for all of us, that we were present in the same room seeing and hearing each other.
Virtual reality not only keeps down travel costs, it has other advantages as well. Some lecture audiences expect a suit and tie–and not only do I not like wearing ties, all of the ties I own possess a magnetic attraction for foodstuffs in contrasting colors. To give a lecture in virtual reality, I do not need a tie–or even a shirt. My computer can add both to the image it sends out over the net. It can also remove a few wrinkles, darken my hair, and cut a decade or so off my apparent age.
As computers get faster, they can not only create and transmit virtual reality worlds, they can also encrypt them. That means that any human interaction involving only sight and sound can be moved to cyberspace and protected by strong privacy.
In order to send an encrypted message to a stranger or check the digital signature on a message from a stranger, I need his public key. Some pages back, I assumed that problem away by putting everyone's public key in the phone book. While that is a possible solution, it is not a very good one.
A key published in the phone book is only as reliable as whoever is publishing it. If our hypothetical bad guy can arrange for his public key to be listed under my name, he can read messages intended for me and sign bogus messages from me with a digital signature that checks against my supposed key. A phone book is a centralized system, hence vulnerable to failures at the center, whether due to dishonesty or incompetence.
Consider some well known organization, say American Express, which many people know and trust. American Express arranges to make its public key very public–posted in the window of every American Express office, printed--and magnetically encoded--on every American Express credit card, included in the margin of every American Express ad. It then goes into the identity business.
To take advantage of its services, I use my software to create a public key/private key pair. I then go to an American Express office, bringing with me my passport, driver's license and public key. After establishing my identity to their satisfaction, I hand them a copy of my public key and they create a message saying, in language a computer can understand, "The public key of David D. Friedman, born on 2/12/45 and employed by Santa Clara University, is 10011011000110111001010110001101000… ." They digitally sign the message, using American Express's private key, copy the signed message to a floppy disk, and give it to me.
To prove my identity to a stranger, I send him a copy of the digital certificate from American Express. He now knows my public key–allowing him to send encrypted messages that only David Friedman can read and check digital signatures to see if they are really from David Friedman. Someone with a copy of my digital certificate can use it to prove to people what my public key is, but he cannot use it to masquerade as me because he does not possess the matching private key.
So far this system has the same vulnerability as the phone book; if American Express or one of its employees is working for the bad guy, they can create a bogus certificate identifying someone else's public key as mine. But nothing in a system of digital certificates requires trust in any one organization. I can email you a whole pack of digital certificates–one from American Express, one from the U.S. Post Office, one from the Catholic Church, one from my university, one from Microsoft, one from Apple, one from AOL–and you can have your computer check all of them and make sure they all agree. It is unlikely that a single bad guy has infiltrated all of them.
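The check-them-all procedure can be sketched in a few lines of code. This is a toy illustration, not real cryptography: the certifying authorities, their tiny textbook RSA key pairs, and the claim being certified are all invented for the example.

```python
import hashlib

def digest(message: str, n: int) -> int:
    # Reduce a real hash modulo n so a tiny toy RSA key can sign it.
    return int(hashlib.sha256(message.encode()).hexdigest(), 16) % n

def sign(message: str, n: int, d: int) -> int:
    return pow(digest(message, n), d, n)         # signed with the private key d

def verify(message: str, sig: int, n: int, e: int) -> bool:
    return pow(sig, e, n) == digest(message, n)  # checked with the public key (n, e)

# Two hypothetical certifying authorities, each with a toy RSA key pair:
# n and e are public, d is private (n = 61*53 and 53*59 respectively).
authorities = {
    "AmEx":       {"n": 3233, "e": 17, "d": 2753},
    "PostOffice": {"n": 3127, "e": 7,  "d": 431},
}

claim = "The public key of David D. Friedman is 10011011000110111001..."
certificates = {
    name: sign(claim, keys["n"], keys["d"]) for name, keys in authorities.items()
}

# The recipient checks that every certificate vouches for the same claim.
all_agree = all(
    verify(claim, sig, authorities[name]["n"], authorities[name]["e"])
    for name, sig in certificates.items()
)
print(all_agree)
```

A single corrupted authority could issue one bogus certificate, but it could not make the others agree with it.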
So far I have been assuming that real world identities are unique--each individual has only one. But each of us has, in a very real sense, multiple identities--there are different things about us that are relevant identifiers to different people. What my students need to know is that a message really came from the professor teaching the course they are taking. What my daughter needs to know is that it really came from her father. One can imagine circumstances where it is important to keep multiple real world identities separate--to conceal from some of the people you are interacting with identifying features that you want to be able to reveal to others. A system of multiple certifying authorities makes that possible--provided you remember which certificates to send to which correspondent. Sending your superior in the criminal organization you are infiltrating the certificate identifying you as a police officer might be hazardous.
One of the attractive features of the world created by these technologies is free speech. If I communicate online under my own name, using encryption, I can be betrayed only by the person I am communicating with. If I do it using an online persona, with reputation but with no link to my realspace identity, not even the people I communicate with can betray me. Thus strong privacy creates a world which is, in important ways, safer than the one we now live in–a world where you can say things other people disapprove of without the risk of punishment, legal or otherwise.
The Second Amendment to the U.S. Constitution guarantees Americans the right to bear arms. A plausible interpretation of its history views it as a solution to a problem of considerable concern to 18th century thinkers–the problem of standing armies. Everyone knew that professional armies beat amateur armies. Everyone also knew–with Cromwell's dictatorship still fairly recent history–that a professional army posed a serious risk of military takeover.
The Second Amendment embodied an ingenious solution to that problem. Combine a small professional army under the control of the federal government with an enormous citizen militia–every able bodied adult man. Let the Federal government provide sufficient standardization so that militia units from different states could work together but let the states appoint officers–thus making sure that the states and their citizens maintained control over the militia. In case of foreign invasion, the militia would provide a large, if imperfectly trained and disciplined, force to supplement the small regular army. In case of an attempted coup by the Federal government, the Federal army would find itself outgunned a hundred to one.
The beauty of this solution is that it depends, not on making a military takeover illegal, but on making it impossible. In order for that takeover to occur, it would first be necessary to disarm the militia. But until the takeover had occurred, the Second Amendment prevented the militia from being disarmed, since any such attempt would be seen as a violation of the Constitution and resisted with force.
It was an elegant solution two hundred years ago, but I am less optimistic than some of my friends about its relevance today. The U.S. has a much larger professional military, relative to its population, than it did then, the states are much less independent than they were, and the gap between civilian and military weaponry has increased enormously.
Other things have changed as well over two hundred years. In a world of broad based democracy and network television, conflicts between the U.S. government and its citizens are likely to involve information warfare, not guns. A government that wants to do bad things to its citizens will do them by controlling the flow of information in order to make them look like good things.
In that world, widely available strong encryption functions as a virtual second amendment. As long as it exists, the government cannot control the flow of information. And once it does exist, eliminating it, like disarming an armed citizenry, is extraordinarily difficult–especially for a government that cannot control the flow of information to its citizens about what it is doing.
Freedom of speech is something most people, at least in this country, are in favor of. But strong privacy will also reduce the power of government in less obviously desirable ways. Activities that occur entirely in cyberspace will be invisible to outsiders–including ones working for the federal government. It is hard to tax or regulate things you cannot see.
If I earn money selling services in cyberspace and spend it buying goods in realspace, the government can tax my spending. If I earn money selling goods in realspace and spend it buying services in cyberspace, they can tax my income. But if I earn money in cyberspace and spend it in cyberspace, they cannot observe either income or expenditure and so will have nothing to tax.
Similarly for regulation. I am, currently, a law professor but not a member of the California bar, making it illegal for me to sell certain sorts of legal services in California. Suppose I wanted to do so anyway. If I do it as David D. Friedman I am likely to get in trouble. But if I do it as Legal Eagle Online, taking care to keep the true name--the real world identity--of Legal Eagle a secret, there is not much the California Bar can do about it.
In order to sell my legal services I have to persuade someone to buy them. I cannot do that by pointing potential customers at my books and articles, because they were all published under my own name. What I can do is to start by giving advice for free and then, when the recipients find that the advice is good–perhaps by checking it against the advice of their current lawyers–raise my price. Thus over time I establish an online reputation for an online identity guaranteed by my digital signature.
Legal advice is one example; the argument is a general one. Once strong privacy is well established, legal regulation of information services can no longer be enforced. Governments may still attempt to maintain the quality of professional services by certifying professionals–providing information as to who they believe is competent. But it will no longer be possible to force customers to act on that information–to legally forbid them from using uncertified providers, as they currently are legally forbidden to use unlicensed doctors or lawyers who have not passed the bar.
Reducing the government's ability to collect taxes and regulate professions is in my view a good thing, although some will disagree. But the same logic also applies to government activities I approve of, such as preventing theft and murder. Online privacy will make it harder to keep people from sharing stolen credit card numbers or information on how to kill people, or organizing plots to steal things or blow things up.
This is not a large change; the internet and strong encryption merely make it somewhat easier for criminals to do things they are doing already. A more serious problem is that, by making it possible to combine anonymity and reputation, strong privacy makes possible criminal firms with brand name reputation.
Suppose you very much want to have someone killed. The big problem is not the cost; so far as I can gather from public accounts, hiring a hit man costs less than buying a car, and most of us can afford a car. The big problem–assuming you have already resolved any moral qualms–is finding a reliable seller of the service you want to buy.
Strong privacy provides a solution. Consider the following business plan:
1. Generate a public key/private key pair. Post the public key, with no name attached, on a number of internet bulletin boards.
2. Arrange to have some prominent person assassinated.
3. The day before the assassination happens, place an ad in the New York Times consisting of a description of the planned killing, encrypted with your private key.
4. Send a message to all major media outlets, pointing out that the number on all of those bulletin boards is a public key. If they use it to decrypt the New York Times ad they will get a description of the assassination, published the day before it happened.
You have now made sure that everyone in the world has, or can get, your public key–and knows that it belongs to an organization willing and able to kill people. Once you have taken steps to tell people how to post messages where you can read them, everyone in the world will know how to send you messages that nobody else can read and how to identify messages that can only have come from you. You are now in business as a middleman selling the services of hit men. Actual assassinations still have to take place in realspace, so being a hit man still has risks. But the problem of locating a hit man–when you are not yourself a regular participant in illegal markets–has been solved.
Murder Incorporated is a particularly striking example of the problem of criminal firms with brand name reputations, operating openly in cyberspace while keeping their realspace identity and location secret, but there are many others. Consider "Trade Secrets Inc.–We Buy and Sell." Or an online pirate archive, selling other people's intellectual property in digital form, computer programs, music, and much else, for a penny on the dollar, payable in anonymous digital cash.
Faced with such unattractive possibilities, it is tempting to conclude that the only solution is to ban encryption. A more interesting approach is to find ways of achieving our objectives–preventing murder, providing incentives to produce computer programs–that are made easier by the same technological changes that make the old ways harder.
Anonymity is the ultimate defense. Not even Murder Incorporated can assassinate you if they do not know who you are. If you plan to do things that might make people want to kill you–publish a book making fun of the prophet Mohammed, say, or revealing the true crimes of Bill (Gates or Clinton)–it would be prudent not to do it under a name linked to your realspace identity. That is not a complete solution–the employer of the hit man might, after all, be your wife, and it is hard to conduct a marriage entirely in cyberspace–but it at least protects many potential victims.
Similarly for the more common, if less dramatic, problem of protecting intellectual property online. Copyright law will become largely unenforceable, but there are other ways of protecting property. One–using encryption to provide the digital equivalent of a barbed wire fence protecting your property–will be discussed at some length in a later chapter.
For the past two decades powerful elements in the U.S. government, most notably the National Security Agency and the FBI, have been arguing for restrictions on encryption designed to maintain their ability to tap phones, read seized records, and in a variety of other ways violate privacy for what they regard as good purposes. After my description of the down side of strong privacy, readers may think there is a good deal to be said for the idea.
There are, however, practical problems. The most serious is that the cat is already out of the bag–has been for more than twenty-five years. The mathematical principles on which public key encryption is based are public knowledge. That means that any competent computer programmer with an interest in the subject can write encryption software. Quite a lot of such software has already been written and is widely available. And given the nature of software, once you have a program you can make an unlimited number of copies. It follows that keeping encryption software out of the hands of spies, terrorists, and competent criminals is not a practical option. They probably have it already, and if they don't they can easily get it.
Banning the production and possession of encryption software is not a practical option, but what about banning the use of encryption--at least of encryption that cannot be broken by law enforcement agents? To enforce such a ban, law enforcement agencies would randomly monitor a substantial fraction of all communications, taking advantage of the massive wiretapping capacity that current law requires the phone companies to provide them and expanding the legal requirements to apply to other communication providers as well. Any message that looked like gibberish and could not be shown to be the result of a legal form of encryption would lead to legal action against its author.
One practical problem is the enormous volume of information flowing over computer networks. A second problem is that while it is easy enough to tell whether a message consists of text written in English, it is very much harder--in practice impossible--to identify other sorts of content well enough to be sure that they do not consist of, or contain, encrypted messages.
Consider a three million pixel digital photo. It is made up of three million colored dots, each described by three numbers--intensity of red, intensity of blue, intensity of green. Each of those numbers is, from the standpoint of the computer, a string of ones and zeros. Changing the rightmost digit--the "least significant bit"--from one to zero or zero to one will have only a tiny effect on the appearance of the dot, just as changing the rightmost digit in a long decimal number, say 9,319,413, has only a very small effect on its size.
To conceal a million character long encrypted message in my digital photo, I simply replace the least significant bit of each of the numbers in the photo with one bit of the message. The photo is now a marginally worse picture than it was--but there is no way an FBI agent, or a computer working for an FBI agent, can know precisely what the photo ought to look like. This is a simple example of steganography--concealing messages.
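The bit-swapping trick can be written out directly. The sketch below assumes the image is simply a flat list of 0-255 color values; a real program would read and write an actual image file format.

```python
# Least-significant-bit steganography on a toy "image": a list of color values.
def hide(pixels, message_bits):
    stego = pixels.copy()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite only the least significant bit
    return stego

def recover(pixels, length):
    return [p & 1 for p in pixels[:length]]

pixels = [200, 13, 255, 42, 118, 7, 64, 90]    # eight color values
secret = [1, 0, 1, 1, 0, 1, 0, 0]              # eight bits of ciphertext to hide
stego = hide(pixels, secret)

print(recover(stego, 8))   # prints [1, 0, 1, 1, 0, 1, 0, 0]: the hidden bits come back
print(max(abs(a - b) for a, b in zip(pixels, stego)))  # prints 1: no value moved by more than 1
```

Because the encrypted message is indistinguishable from random bits, the altered photo is statistically almost identical to an unaltered one.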
It is not practical for law enforcement to keep sophisticated criminals, spies, or terrorists from possessing and using strong encryption software. What is possible is to put limits on the encryption software publicly marketed and publicly used–to insist, for example, that if AOL or Microsoft builds encryption into their programs it must contain a back door permitting properly authorized persons–a law enforcement agent with a court order, say–to read the message without the key.
The problem with such an approach is that there is no way of giving law enforcement what it wants without imposing very high costs on the rest of us. To see why, consider the description of adequate regulation given by Louis Freeh, who was at the time the head of the FBI. He said that what he needed was the ability to decrypt any encrypted message in half an hour. The equivalent in realspace would be legal rules that let properly authorized law enforcement agents open any lock in the country in half an hour. That includes not only the lock on your front door but the locks protecting bank vaults, trade secrets, lawyers' records, lists of contributors to unpopular causes, and much else.
While access would be nominally limited to those properly authorized, it is hard to imagine any system flexible enough to meet Freeh's schedule that was not vulnerable to misuse. If being a police officer gives you access to locks with millions of dollars behind them, in cash, diamonds, or information, some cops will become criminals and some criminals will become cops. Proper authorization presumably means a court order–but not all judges are honest, and half an hour is not long enough for even an honest judge to verify what the officer applying for the court order tells him.
Encryption provides the locks for cyberspace. If nobody has strong encryption, everything in cyberspace is vulnerable to a sufficiently sophisticated private criminal. If people have strong encryption but it comes with a mandatory back door accessible in half an hour to any police officer with a court order, then everything in cyberspace is vulnerable to a private criminal with the right contacts. Those locks have millions, probably billions, of dollars worth of stuff behind them–money in banks, trade secrets in computers.
One could imagine a system for accessing encrypted documents so rigorous that it required written permission from the President, Chief Justice and Attorney General and only got used once every two or three years. Such a system would not seriously handicap online dealings. But it would also be of no real use to law enforcement, since there would be no way of knowing which one communication out of the billions crisscrossing the internet each day they needed to crack.
In order for encryption regulation to be useful, it has to either prevent the routine use of encryption or make it reasonably easy for law enforcement agents to access encrypted messages. Doing either will seriously handicap the ordinary use of the net. Not only will it handicap routine transactions, it will make computer crime easier by restricting the technology best suited to defend against it. And what we get in exchange is protection not against the use of encryption by sophisticated criminals and terrorists–there is no way of providing that–but only against the use of encryption by ordinary people and unsophisticated criminals.
Readers who have followed the logic of the argument may point out that even if we cannot keep sophisticated criminals from using strong encryption, we may be able to prevent ordinary people from using it to deal with sophisticated criminals--and doing so would make my business plan for Murder Incorporated unworkable. While it would be a pity to seriously handicap the development of online commerce, some may think that price worth paying to avoid the undesirable consequences of strong privacy.
You are thinking of going into the business of growing trees–hardwoods that mature slowly but produce valuable lumber. It will take forty years from planting to harvest. Should you do it? The obvious response is not unless you are confident of living at least another forty years.
Like many obvious responses, it is wrong. Twenty years from now you will be able to sell the land, covered with twenty year old trees, for a price that reflects what those trees will be worth in another twenty years. Following through the logic, it is straightforward to show that if what you expect the trees to sell for will more than repay your investment, including forty years of compound interest, you should do it.
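The logic is ordinary present-value arithmetic. The numbers below are invented for illustration; the point is only the comparison between discounted sale price and planting cost.

```python
# Present value: what a payment N years in the future is worth today,
# discounted at the ordinary return on investment.
def present_value(amount, rate, years):
    return amount / (1 + rate) ** years

planting_cost = 10_000
expected_sale = 80_000   # hypothetical price of the mature trees in 40 years
rate = 0.05              # hypothetical ordinary return on investment

pv = present_value(expected_sale, rate, 40)
print(pv > planting_cost)   # plant only if the discounted sale price repays the cost

# Twenty years in, the land sells for what 20-year trees are worth:
# the eventual sale price discounted over the remaining 20 years.
print(present_value(expected_sale, rate, 20))
```

The half-grown trees are worth more than the newly planted ones for the same reason the investment made sense in the first place: each year brings the final payoff one year closer.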
This assumes a world of secure property rights. Suppose we assume instead that your trees are quite likely, at some point during the next forty years, to be stolen–legally via government confiscation or illegally by someone driving into the forest at night, cutting them down, and carrying them off. In that case you will only be willing to go into the hardwood business if the return from selling the trees is enough larger than the ordinary return on investments to compensate you for the risk.
Generalizing the argument, we can see that long run planning depends on secure property rights. If you are confident that what you own today you will still own tomorrow–unless you choose to sell it–you can afford to give up benefits today in exchange for greater benefits tomorrow, or next year, or next decade. The greater the risk that what you now own will be taken away from you at some point in the future, the greater the incentive to limit yourself to short term projects.
Politicians in a democratic society have insecure property rights over their political assets; Clinton could rent out the White House but he could not sell it. One consequence is that in such a system government policy is dominated by short run considerations–most commonly the effect of current policy on the outcome of the next election. Very few politicians will accept political costs today in exchange for benefits ten or twenty or thirty years in the future, because they know that when the benefits arrive someone else will be in power to enjoy them.
Preventing the development of strong privacy means badly handicapping the current growth of online commerce. It means making it easier for criminals to hack into computers, intercept messages, defraud banks, steal credit cards. It is thus likely to be politically costly, not ten or twenty years from now but in the immediate future.
What do you get in exchange? The benefit of encryption regulation–the only substantial benefit, since it cannot prevent the use of encryption by competent criminals–is preventing the growth of strong privacy. From the standpoint of governments, and of people in a position to control governments, that may be a large benefit, since strong privacy threatens to seriously reduce government power, including the power to collect taxes. But it is a long run threat, one that will not become serious for a decade or two. Defeating it requires the present generation of elected politicians to do things that are politically costly for them–in order to protect the power of whoever will hold their offices ten or twenty years from now.
The politics of encryption regulation so far fits the predictions of this analysis. Support for regulation has come almost entirely from long lived bureaucracies such as the FBI and NSA. So far, at least, they have been unable to get elected politicians to do what they want when doing so involves any serious political cost.
If this argument is right, it is unlikely that serious encryption regulation, sufficient to make things much easier for law enforcement and much harder for the rest of us, will come into existence, at least in the U.S. Hence it is quite likely that we will end up with something along the lines of the world of strong privacy described in this chapter.
In my view that is a good thing. The attraction of a cyberspace protected by encryption is that it is a world where all transactions are voluntary: You cannot get a bullet through a T1 line. It is a world where the technology of defense has finally beaten the technology of offense. In the world we now live in, our rights can be violated by force or fraud; in a cyberspace protected by strong privacy, only by fraud. Fraud is dangerous, but less dangerous than force. When someone offers you a deal too good to be true, you can refuse it. Force makes it possible to offer you deals you cannot refuse.
In several places in this chapter I have simplified the mechanics of encryption, describing how something could be done but not how it is done. Thus, for example, public key encryption is usually done not by encrypting the message with the recipient's public key but by encrypting the message with an old fashioned single key encryption scheme, encrypting the single key with the recipient's public key, and sending both encrypted message and encrypted key. The recipient uses his private key to decrypt the encrypted key and uses that to decrypt the message. Although this is a little more complicated than the method I described, in which the message itself is encrypted with the public key, it is also significantly faster.
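The hybrid scheme can be sketched with stand-ins for the real ciphers: XOR with a single-byte key plays the old fashioned single key cipher, and tiny textbook RSA numbers play the public key cipher. Neither stand-in is secure; the point is the structure.

```python
import secrets

def xor_cipher(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)   # the same operation encrypts and decrypts

# Recipient's toy RSA key pair: (n, e) is public, d is private (n = 61*53).
n, e, d = 3233, 17, 2753

session_key = secrets.randbelow(256)      # fresh single-use key for this message
message = b"Meet at the usual place."

encrypted_message = xor_cipher(message, session_key)   # fast single-key encryption
encrypted_key = pow(session_key, e, n)                 # slow public key encryption, but only of the key

# The recipient uses his private key to recover the session key,
# then uses the session key to decrypt the message itself.
recovered_key = pow(encrypted_key, d, n)
print(xor_cipher(encrypted_message, recovered_key))    # prints b'Meet at the usual place.'
```

Only the short session key goes through the expensive public key operation, which is why the hybrid approach is significantly faster.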
Similarly, a digital signature is actually calculated by using a one way hash function to create a message digest of the original message and encrypting the digest with your private key, then sending both message and digest. The recipient decrypts the digest, creates a second digest from the message using the same hash function, and compares them to make sure they are identical, as they will be if the message has not been changed and the public and private keys match.
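The hash-then-sign procedure can be sketched as follows, again assuming tiny textbook RSA numbers in place of a real key pair, with SHA-256 serving as the one way hash function.

```python
import hashlib

n, e, d = 3233, 17, 2753   # signer's toy key: (n, e) public, d private

def digest(message: bytes) -> int:
    # One way hash of the message, reduced modulo n so the toy key can sign it.
    return int(hashlib.sha256(message).hexdigest(), 16) % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)   # encrypt the digest with the private key

def check(message: bytes, signature: int) -> bool:
    # Decrypt the digest with the public key, recompute it from the message, compare.
    return pow(signature, e, n) == digest(message)

message = b"I agree to pay $100."
signature = sign(message)
print(check(message, signature))                   # True: message unchanged
print(check(b"I agree to pay $1000.", signature))  # almost certainly False: the digests no longer match
```

Signing the short digest rather than the whole message serves the same purpose as the hybrid encryption above: the slow public key operation is applied to a few bytes instead of the full text.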
A second set of complications, also ignored but more important, concerns indirect ways in which cryptographically protected anonymity might be attacked. One example is textual analysis. A perceptive reader or sufficiently sophisticated software might recognize stylistic similarities between the books of David Friedman and the written legal advice of Legal Eagle. The odds that the same person has read work by both identities closely enough to identify them as the same may not be very high--but software designed for textual analysis could create a database linking a very large number of known authors to stylistic identifiers for their writing. A simple one for me would be the overuse of "hence."
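A crude version of such textual analysis fits in a few lines. Real stylometry uses many more features than a handful of marker words; the texts below are invented for the example.

```python
from collections import Counter

MARKERS = ["hence", "thus", "moreover"]

def profile(text: str):
    # Relative frequency of each marker word in the text.
    words = text.lower().split()
    counts = Counter(words)
    return [counts[m] / len(words) for m in MARKERS]

def distance(a: str, b: str) -> float:
    return sum(abs(x - y) for x, y in zip(profile(a), profile(b)))

known = "hence the argument fails and hence the conclusion thus collapses"
anonymous = "hence we advise thus and hence conclude the claim is hence unsound"
unrelated = "the quick brown fox jumps over the lazy dog"

# The anonymous text's profile sits closer to the known author's than
# an unrelated text's does: the "hence" habit gives the author away.
print(distance(known, anonymous) < distance(known, unrelated))  # prints True
```

Scaled up to thousands of features and a large database of known authors, the same comparison becomes a serious threat to anonymity.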
Another problem is that most of what I have described depends on your having complete control over your computer--or at least over a smart card containing your private key and enough software to use it to encrypt and decrypt. If someone else can get at your private key by either a physical or virtual intrusion, all bets are off. If someone else can get control of your computer, even without access to your private key, he can use that control to mislead you in a variety of ways--for instance, by falsely reporting that a message has a valid digital signature. As Mark Miller puts it, "people don't sign, computers sign." And encrypt, decrypt, and check signatures. So a crucial element of strong privacy is the ability of individuals to control the computers they use. And, in practice, a secure system is likely to include provisions for publicly canceling private keys that may have fallen into the wrong hands.
Imagine a world in which people know how to multiply numbers but not how to divide them. Further imagine that there exists some mathematical procedure capable of generating pairs of numbers that are inverses of each other: X and 1/X. Finally, assume that the messages we wish to encrypt are simply numbers.
Suppose someone has the encrypted message MX and the key X. Since he does not know how to divide, he cannot decrypt the message and find out what the number M is. If, however, he has the other key, 1/X, he can multiply it times the encrypted message to get back the original M:

MX × 1/X = M × (X × 1/X) = M × 1 = M
Public key encryption in the real world depends on mathematical operations which, like multiplication and division in my example, are very much easier to do in one direction than the other. The RSA algorithm, for example, at present the most widely used form of public key encryption, depends on the fact that it is easy to generate a large number by multiplying together several large primes but much harder to start with a large number and factor it to find the primes that can be multiplied together to give that number. The keys in such a system are not literally inverses of each other, like X and 1/X, but they are functional inverses, since either one can undo (decrypt) what the other does (encrypts).
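The imaginary multiply-only system can be written out directly, using exact fractions so that multiplying by X and then by 1/X recovers the message precisely. It is, of course, no more secure than the thought experiment it illustrates.

```python
from fractions import Fraction

def make_key_pair(x: int):
    # A key and its inverse: either one undoes what the other does.
    return Fraction(x), Fraction(1, x)

def encrypt(message: int, key: Fraction) -> Fraction:
    return message * key   # all our imaginary encrypter knows is multiplication

encrypt_key, decrypt_key = make_key_pair(7919)
m = 42
ciphertext = encrypt(m, encrypt_key)   # M * X: useless without division or 1/X
print(ciphertext * decrypt_key)        # (M * X) * (1/X) = M, prints 42
```

In RSA the roles of X and 1/X are played by exponents that are inverses modulo a number derived from the secret primes, but the logic is the same: each key undoes the other.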
M is my actual message; [M,K] means "message M encrypted using key K." Kr is the public key of the intended recipient of my message, Er is his email address. I am using a total of three remailers; their public keys are K1, K2, K3 and their email addresses are E1, E2, E3. What I send to the first remailer is:

[(E2, [(E3, [(Er, [M,Kr]), K3]), K2]), K1]

The first remailer strips off the outer layer with its private key, finds the address E2 and an encrypted bundle, and forwards the bundle to E2. Each succeeding remailer does the same, until the innermost message [M,Kr] reaches Er, where only the recipient's private key can open it. No remailer ever sees more than the address of the next link in the chain.
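The nested encryption just described can be sketched in code. The "encryption" below is a stand-in that merely wraps and tags its payload; the point is the layering, with each remailer able to strip exactly one layer and learn only the next address.

```python
def encrypt(payload, key):
    return ("locked", key, payload)            # stand-in for [payload, key]

def decrypt(ciphertext, key):
    tag, lock, payload = ciphertext
    assert tag == "locked" and lock == key     # only the matching key opens a layer
    return payload

def wrap(message, recipient_key, recipient_address, remailers):
    # Build from the inside out, last remailer first.
    packet, address = encrypt(message, recipient_key), recipient_address
    for key, remailer_address in reversed(remailers):
        packet = encrypt((address, packet), key)   # each layer holds only the next hop
        address = remailer_address
    return address, packet

M, Kr, Er = "the message", "Kr", "recipient@example.com"
remailers = [("K1", "E1"), ("K2", "E2"), ("K3", "E3")]

first_hop, packet = wrap(M, Kr, Er, remailers)
print(first_hop)                # prints E1: all the sender's mail carrier sees

# Each remailer in turn strips one layer and forwards what is left.
address, current = first_hop, packet
for key, _ in remailers:
    address, current = decrypt(current, key)
print(address)                  # prints recipient@example.com
print(decrypt(current, Kr))     # prints the message
```

Tracing a message through the chain requires the cooperation of every remailer on it, which is the whole point of using more than one.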
Some years ago I decided to set up my own web site. One question was how much of my life to include. Did I want someone looking at my academic work–perhaps a potential employer–to discover that I had put a good deal of time and energy into researching medieval recipes, a subject unrelated to either law or economics, thus (arguably) proving that I was a dilettante rather than a serious scholar? Did I want that same potential employer to discover that I held unfashionable political opinions, ranging from support for drug legalization to support for open immigration? And did I want someone who might be outraged at my political views to be able to find out what I and my family members looked like and where we lived?
I concluded that keeping my life in separate compartments was not a practical option. I could have set up separate sites for each part, with no links between them–but anyone with a little enterprise could have found them all with a search engine. And even without a web site, anyone who wanted to know about me could find vast amounts of information by a quick search of Usenet, where I have been an active poster for more than ten years. Keeping my virtual mouth shut was not a price I was willing to pay, and nothing much short of that would do the job.
This is not a new problem. Before the internet existed, I still had to decide to what degree I wanted to live in multiple worlds–whether, for example, I should discuss my hobbies or my political views with professional colleagues. What has changed is the scale of the problem. In a large world where personal information was spread mostly by gossip and processed almost entirely by individual human brains, facts about me were to a considerable extent under my control–not because they were secret but because nobody had the time and energy to discover everything knowable about everyone else. Unless I was a major celebrity, I was the only one specializing in me.
That was not true everywhere. In the good old days–say most of the past three thousand years–one reason to run away to the big city was to get a little privacy. In the villages in which most of the world lived, anyone's business was everyone's business. In Sumer or Rome or London the walls were no more opaque and you were no less visible than at home, but there was so much going on, so many people, that nobody could keep track of it all.
That form of privacy–privacy through obscurity–cannot survive modern data processing. Nobody can keep track of it all but many of us have machines that can. The data of an individual life is not notably more complicated than it was two thousand years ago. It is true that the number of lives has increased thirty or forty fold in the last two thousand years, but our ability to handle data has increased a great deal more than that. Not only can we keep track of the personal data for a single city, we could, to at least a limited degree, keep track of the data for the whole world, assuming we had it and wanted to.
The implications of these technologies have become increasingly visible over the past ten or fifteen years. Many are highly desirable. The ability to gather and process vast amounts of information permits human activities that would once have been impossible; to a considerable extent it abolishes the constraints of geography on human interaction. Consider two examples.
Thirty some years ago, I spent several summers as a counselor at a camp for gifted children. Many of the children, and some of my fellow counselors, became my friends–only to vanish at the end of the summer. From time to time I wondered what had become of them.
I can now stop wondering, at least about some. A year or two ago, someone who had been at the camp organized an email list for ex-campers and counselors; membership is currently approaching two hundred. That list exists because of technologies that make possible not only easy communication with people spread all over the country but also finding them in the first place–searching a very large haystack for a few hundred needles. Glancing down a page of Yahoo-Groups, I find nearly a thousand such lists, each for a different camp; the largest has more than three hundred members.
For a second example, consider a Usenet newsgroup that I stumbled across many years ago, dedicated to the Vectrex, a technologically ingenious but now long obsolete video game machine of which I once owned two–one for my son and one for me. Reading the posts, I discovered that someone in the group had located Smith Engineering, the firm that held the copyright on the Vectrex and its games, and written to ask permission to make copies of game cartridges. The response, pretty clearly from the person who designed the machine, was an enthusiastic yes. He was obviously delighted to discover that there were people still playing with his toy, his dream, his baby. Not only were they welcome to copy cartridges, if anyone wanted to write new games he would be happy to provide the necessary software. It was a striking, to me heartwarming, example of the ability of modern communications technology to bring together people with shared enthusiasms.
My examples so far are small and non-commercial–people learning other people's secrets or getting together with old friends or strangers with shared interests. While such applications of information technology are an increasingly important feature of the world we live in, they are not nearly as prominent or politically contentious as large scale commercial uses of personal information. A first step in understanding such activities is to think about why some people would want to collect and use individual information about large numbers of strangers. Consider two examples.
You are planning to open a new grocery store in an existing chain–a multi-million dollar gamble. Knowledge about the people who live in the neighborhood–how likely they are to shop at your store and how much they will buy–is crucial. How do you get it?
The first step is to find out what sort of people shop in your present stores and what they buy. To do that you offer customers a shopping card. The card is used to get discounts, so shoppers pass the card through a reader almost every time they go through the checkout, providing you lots of detailed information about their shopping patterns. One way you use that information is to improve the layout of existing stores; if people who buy spaghetti almost always buy spaghetti sauce at the same time, putting them in the same aisle will make your store more convenient, hence more attractive, hence more profitable.
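For the technically inclined, the spaghetti-and-sauce observation is a simple counting exercise over shopping-card records. The baskets below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Each basket is one trip through the checkout, recorded via the shopping card.
baskets = [
    {"spaghetti", "spaghetti sauce", "bread"},
    {"spaghetti", "spaghetti sauce"},
    {"milk", "bread"},
    {"spaghetti", "spaghetti sauce", "milk"},
]

# Count how often each pair of items shows up in the same cart.
pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# How often does a trip that includes spaghetti also include sauce?
together = pair_counts[("spaghetti", "spaghetti sauce")]
spaghetti_trips = sum("spaghetti" in b for b in baskets)
print(together, "of", spaghetti_trips, "spaghetti trips include sauce")
```

A pair that shows up together this reliably is a candidate for shelving in the same aisle.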
Another way is to help you decide where to locate your new store. If you discover that old people on average do not buy very much of what you are selling, perhaps a retirement community is the wrong place. If couples with young children do all their shopping on the weekend when one parent can stay home with the kids while the other shops, singles shop after work on weekdays (weekends are for parties), and retired people during the working day (shorter lines), then a location with a suitable mix of all three types will give you a more even flow of customers, higher utilization of the store, and greater profits. Combining information about your customers with information about the demography of alternative locations, provided free by the U.S. census or at a higher price by private firms, you can substantially improve the odds on your gamble.
For a higher tech application of information technology, consider advertising. When I read a magazine, I see the same ads as everyone else–mostly for things I have no interest in. But a web page can send a different response to every query, customizing the ads I see to fit my interests. No TV ads, since I do not own a television, lots of ads for high tech gadgets.
In order to show me the right ads, the people managing the page need to know what I am interested in. Striking evidence that such information is already out there and being used appears in my mailbox on a regular basis–a flood of catalogs.
How did the companies sending out those catalogs identify me as a potential customer? If they could see me, it would be easy. Not only am I wearing a technophile ID bracelet (Casio calls it a databank watch), I am wearing the model that, in addition to providing a calculator, database, and appointment calendar, also checks in three times a day with the U.S. atomic clock to make sure it has exactly the right time. Sharper Image, Techno-Scout, Innovations et al. cannot see what is on my wrist–although if the next chapter's transparent society comes to pass that may change. They can, however, talk to each other. When I bought my Casio Wave Captor Databank 150 (the name would have been longer but they ran out of room on the watch), that purchase provided the proprietors of the catalog I bought it from with a snippet of information about me. They no doubt resold that information to anyone willing to pay for it. Sellers of gadgets respond to the purchase of a Casio Wave Captor the way sharks respond to blood in the water.
As our technology gets better, it becomes possible to create and use such information at lower cost and in much more detail. A web page can keep track not only of what you buy but of what you look at and for how long. Combining information from many sources, it becomes both possible and potentially profitable to create databases with detailed information on the behavior of a very large number of individuals, certainly including me, probably including you.
The advantages of that technology to individual customers are fairly obvious. If I am going to look at ads, I would prefer that they be ads for things I might want to buy. If I am going to have my dinner interrupted by a telephone call from a stranger, I would prefer it be someone offering to prune my aging apricot tree–last year's crop was a great disappointment–rather than someone offering to refinance my nonexistent mortgage.
As these examples suggest, there are advantages to individuals to having their personal information publicly available and easy to find. What are the disadvantages? Why are many people upset about the loss of privacy and the misuse of "their" private information? Why did Lotus, after announcing its plan to offer masses of such data on a CD, have to cancel it in response to massive public criticism? Why is the question of what information web sites are permitted to gather about their customers, what they may do with it, and what they must tell their customers about what they are doing with it, a live political and legal issue?
One natural answer is that the information is worth money–if anyone is going to sell facts about me, it ought to be me, and I ought to be paid. The economist's response is that they already do get the money. The fact that selling me a gadget provides the seller with a snippet of information that he can then resell makes the transaction a little more profitable for the seller, attracts additional sellers, and ultimately drives down the price I must pay for the gadget. The effect is tiny–but so is the price I could get for the information if I somehow arranged to sell it myself. It is only the aggregation of large amounts of such information that is valuable enough to be worth the trouble of buying and selling it.
A different response, motivated by moral intuition rather than economics, is that the argument confuses information about me–located in someone else's mind or database–with information that belongs to me. How can I have a property right over the contents of your mind? If I am stingy or dishonest, do I have an inherent right to forbid those I treat badly from passing on the information? If not, why should I have a right to forbid them from passing on other information about me?
There is, however, a vaguer but more important reason why people are upset at the idea of a world where anyone willing to pay can learn almost everything about them. Many people value their privacy not because they want to be able to sell information about themselves but because they do not want other people to have it. While it is hard to come up with a clear explanation of why we feel that way–a subject discussed at greater length in the final chapter of this section–it is clear that we do. At some level, control over information about ourselves is seen as a form of self protection. The less other people can find out about me, the less likely it is that they will use information about me either to injure me or to identify me as someone they wish to injure–which brings us back to some of the issues I considered when setting up my web page.
Concerns with privacy apply to at least two sorts of personal information. One is information generated by voluntary transactions with some other party–what products I have bought and sold, what catalogs and magazines I subscribe to, what web pages I browse. Such information starts in the possession of both parties to the transaction–I know what I bought from you, you know what you sold to me. The other kind is information generated by actions I take that are publicly visible–court records, newspaper stories, gossip.
Ownership of the first sort of information can, at least in principle, be determined by contract. A magazine can, and some do, promise its subscribers that their names will not be sold. Software firms routinely offer people registering their programs the option of having their names made or not made available to other firms selling similar products. Web pages can, and many do, provide explicit privacy policies limiting what they will do with the information generated in the process of browsing their sites.
To understand the economics of the process, think of information as a produced good; like other such goods, who owns how much of it is determined by agreement among the parties who produce it. When I subscribe to a magazine, I and the publisher are jointly producing a piece of information about my tastes–the information that I like that kind of magazine. That information is of value to the magazine, which may want to resell it. It is of value to me, either because I might want to resell it or because I might want to keep it off the market in order to protect my privacy. The publisher can, by selling subscriptions at a lower price without a privacy guarantee than with, offer to pay me for control over the information. If the information is worth more to me than he is offering, I refuse; if it is worth less, I accept. Control over the information ends up with whoever most values it. If no mutually acceptable terms can be found, I do not subscribe and that bit of information does not get produced.
This seems to imply that default rules about privacy, rules specifying who starts out owning the information, should not matter. A magazine subscription has one price with a privacy guarantee, another and slightly lower price without it. If the law assumes that magazines have the right to resell names unless they agree not to, then the ordinary subscription price is the price without privacy and the higher price, with a guarantee, is the price with it. If it instead assumes that subscribers have the right not to have their names sold unless they agree to waive it, then the ordinary subscription price is the price with privacy and the lower price, charged to customers willing to sign a waiver, is the price without. Either way, control of the information goes to whichever party values it more and the price of that control is included in the cost of the subscription.
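The argument can be put in numbers. All the figures below are invented: the subscription costs the publisher $10 to supply, and a resellable name is worth $2 to it. Under either legal default, competition produces the same two-price menu, and the outcome turns only on how much privacy is worth to the subscriber.

```python
cost = 10.00        # publisher's cost of supplying the subscription (invented)
name_value = 2.00   # what reselling the subscriber's name fetches (invented)

# Competition drives price down to cost net of resale revenue, so the menu is
# the same whichever party the law says starts out owning the name:
price_without_privacy = cost - name_value   # publisher keeps the name
price_with_privacy = cost                   # publisher gives it up

def choice(privacy_value):
    """A subscriber buys privacy iff it is worth more than its price tag."""
    if privacy_value > price_with_privacy - price_without_privacy:
        return ("subscriber keeps control", price_with_privacy)
    return ("publisher keeps control", price_without_privacy)

print(choice(5.00))   # privacy worth $5: pay $10, keep control of the name
print(choice(0.50))   # privacy worth $0.50: pay $8, the name gets resold
```

Control ends up with whichever party values it more; the default only determines which of the two prices is the "ordinary" one.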
That would be a correct conclusion in a world where arranging contracts cost nothing–a world of zero transaction costs. In the world we now live in, it is not. Most of us, unless we care a great deal about our privacy, do not bother to read privacy policies. Even if I prefer that catalogs and mailing lists not resell information about me, it is too much trouble to check the small print on everything I might subscribe to. It would be still more trouble if every firm I dealt with offered two prices, one with and one without a guarantee of privacy, and more still if the firm offered a menu of levels of protection, each with its associated price.
The result is that most magazines and websites, at least in my experience, offer only a single set of terms; if they allow the subscriber some choice, it is not linked to price, probably because the amounts involved are too small to be worth bargaining over. Hence default rules matter and we get political and legal conflicts over the question of who, absent any explicit contractual agreement, has what control over the personal information generated by transactions.
That may change. What may change it is technology–the technology of intelligent agents. It is possible in principle, and is becoming possible in practice, to program your web browser with information about your privacy preferences. Using that information, the browser can decide what different levels of privacy protection are or are not worth to you and select pages and terms accordingly. Browsers work cheap.
For this to happen we need a language of privacy–a way in which a web page can specify what it does or does not do with information generated by your interactions with it in a form your browser can understand. Once such a language exists and is in widespread use, the transaction costs of bargaining over privacy drop sharply. You tell your browser what you want and what it is worth to you, your browser interacts with a program on the web server hosting the page and configured by the page's owner. Between them they agree on mutually satisfactory terms–or they fail to do so, and you never see the page.
This is not a purely hypothetical idea. Its current incarnation is The Platform for Privacy Preferences, P3P, supported by both of the leading web browsers (Microsoft's Internet Explorer and Netscape's Navigator). Web pages provide information about their privacy policies, users provide information about what they are willing to accept, and the browser notifies the user if a site's policies are inconsistent with his requirements. Presumably a web site that misrepresented its policies could be held liable for doing so, although, so far as I know, no such case has yet reached the courts.
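The browser's side of such a negotiation is easy to sketch. The policy fields below are invented for illustration; a real P3P policy is an XML document using a fixed vocabulary, but the comparison it enables has this general shape.

```python
# What a (hypothetical) site declares about its practices:
site_policy = {
    "collects": {"clickstream", "email"},
    "shares_with_third_parties": True,
    "retention_days": 365,
}

# What the user has told his browser he is willing to accept:
user_requirements = {
    "may_collect": {"clickstream", "email", "purchases"},
    "allow_third_party_sharing": False,
    "max_retention_days": 90,
}

def acceptable(policy, prefs):
    """True iff the site's declared practices fit the user's stated limits."""
    if not policy["collects"] <= prefs["may_collect"]:
        return False
    if policy["shares_with_third_parties"] and not prefs["allow_third_party_sharing"]:
        return False
    return policy["retention_days"] <= prefs["max_retention_days"]

print(acceptable(site_policy, user_requirements))  # this site fails on two counts
```

When the answer is no, the browser can warn the user or simply decline to show the page.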
Suppose we solve the transaction cost problems, permitting a true market in personal information. There remains a second problem–enforcing the rights you have contracted for. You can check the contents of your safe deposit box to be sure they are still there, but it does no good to check the contents of a firm's database to make sure your information is still there. They can sell your information and still have it.
The problem of enforcing rights with regard to information is not limited to a future world of automated contracting–it exists today. As I like to put it when discussing current privacy law, there are only two ways of controlling information about you and one of them doesn't work.
The way that doesn't work is to let other people have information about you and then make rules about how they use it. That is the approach embodied in modern privacy law. If you disagree with my evaluation, I suggest a simple experiment. Start with five thousand dollars, the name of a random neighbor, and the Yellow Pages for "Investigators." The objective is to end up with a credit report on your neighbor–something that, under the Federal Fair Credit Reporting Act, you are not allowed to have. If you are a competent con man or internet guru, you can probably dispense with the money and the phone book.
That approach to protecting privacy works poorly when enforcing terms imposed by federal law. It should work somewhat better for enforcing terms agreed to in the marketplace, since in that case it is supported by reputational as well as legal sanctions–firms do not want the reputation of cheating their customers. But I would still not expect it to work terribly well. Once information is out there, it is very hard to keep track of who has it and what he has done with it. It is particularly hard when there are many uses of the information that you do not want to prevent–a central problem with the Fair Credit Reporting Act. Setting up rules that permit only people with a legitimate reason to look at your credit report is hard; enforcing them is harder.
The other way of protecting information, the way that does work, is not to let the information out in the first place. That is how the strong privacy of the previous chapter was protected. You do not have to trust your ISP or the operator of an anonymous remailer not to tell your secrets; you haven't given them any secrets to tell.
There are problems with applying that approach to transactional information. When you subscribe to a magazine, the publisher knows who you are, or at least where you live–it needs that information to get the magazine to you. When you buy something from me, I know that I have sold it to you. The information starts in the possession of both of us–short of controlled amnesia, how can it end in the possession of only one?
In our present world, that is a nearly insuperable problem. But in a world of strong privacy, you do not have to know who you are selling to. If, at some point in the future, privacy is sufficiently important to people, online transactions can be structured to make each party anonymous to the other, with delivery made either online via a remailer (for information transactions) or through the less convenient realspace equivalent, a physical forwarding service. In such a world, we are back with one of the oldest legal rules of all–possession. If I have not revealed the information to you, you do not have it, so I need not worry about what you are going to do with it.
Returning to something more like our present world, one can imagine institutions that would permit a considerably larger degree of individual control over the uses of personal information than now exists, modeled on arrangements now used to maintain firms' control over their valuable mailing lists. Individuals subscribing to a magazine would send the seller not their name and address but the name of the information intermediary they employed and the number by which that intermediary identified them. The magazine's publisher would ship the intermediary four thousand copies and the numbers identifying four thousand (anonymous) subscribers; the intermediary would put on the address labels and mail them out. The information would never leave the hands of the intermediary, a firm in the business of protecting privacy. To check its honesty, I establish an identity with my own address and the name "David Freidmann," subscribe to a magazine using that identity, and see if David Freidmann gets any junk mail.
Such institutions would be possible and, if widely used, not terribly expensive. My guess is that it will not happen. The reason is that most people either do not want to keep the relevant information secret (I don't, for example; I like gadget catalogs) or do not want to enough to go to any significant trouble. But it is still worth thinking about how they could get privacy if they wanted to, and those thoughts may become of more practical relevance if technological progress sharply reduces the cost.
These discussions suggest two different ways in which the technologies that help to create the problem could be used to solve it. Both are ways of making it possible for an individual to treat information about himself as his property. One is to use computer technologies, including encryption, to give me or my trusted agents direct control over the information, permitting others to use it only with my permission--for instance, to send me information about goods they think I might want to buy--without ever getting possession of it.
The other is to treat information as we now treat real estate--to permit individuals to put restrictions on the use of property they own which are binding on subsequent purchasers. If, for example, I sell you an easement permitting you to cross my land in order to reach yours and later sell the land, the easement is good against the buyer. Even if he did not know it existed, he now has no right to refuse to let you through.
That is not true for most other forms of property. If I sell you a car on the condition that it not be driven on Sunday, I may be able to enforce the restriction against you; I may even be able to sue you for damages if, contrary to our contract, you sell it to someone else without requiring him to abide by the agreement. But I have no way of enforcing the restriction on him.
One plausible explanation of the difference is that land ownership involves an elaborate system for recording title, including modifications such as easements, making it possible for the prospective purchaser to determine in advance what obligations run with the land he is considering. We have no such system for recording ownership, still less for recording complicated forms of ownership, for most other sorts of property.
At first glance, personal information seems even less suitable for the more elaborate form of property rights than pens, chairs, or computers. In most likely uses, the purchaser is buying information about a very large number of people. If my particular bit of information is only worth three cents to him, a legal regime that requires him to spend a dollar checking the restrictions on it before he uses it means that the information will never be used.
A possible solution is to take advantage of the same data processing technologies that make it possible to aggregate and use information on that scale to maintain the record of complicated property rights in it. One could imagine a legal regime where every piece of personal information had to be accompanied by a unique identification number; using that number, a computer could access information about the restrictions on use of that information in machine readable form at negligible cost. Again, it does not seem likely in the near future, but might become a real possibility further down the road.
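What such a registry might look like can be sketched, with every name and restriction invented: each piece of personal information carries an identification number, and anyone proposing to use it asks the registry, at negligible cost, whether that use is permitted.

```python
# Hypothetical registry mapping information IDs to the restrictions that run
# with them, rather as easements are recorded against a parcel of land.
registry = {
    "rec-4211": {"marketing": True, "resale": False},
    "rec-7038": {"marketing": False, "resale": False},
}

def may_use(record_id, purpose):
    """Look up whether a given use of a given piece of information is allowed."""
    restrictions = registry.get(record_id)
    if restrictions is None:
        return False  # no recorded title: treat the information as unusable
    return restrictions.get(purpose, False)

print(may_use("rec-4211", "marketing"))  # permitted
print(may_use("rec-4211", "resale"))     # forbidden
```

The lookup costs a fraction of a cent per record, which is what makes restrictions on three-cent snippets of information workable at all.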
"The trend began in Britain a decade ago, in the city of King's Lynn, where sixty remote controlled video cameras were installed to scan known "trouble spots," reporting directly to police headquarters. The resulting reduction in street crime exceeded all predictions; in or near zones covered by surveillance, it dropped to one seventieth of the former amount. The savings in patrol costs alone paid for the equipment in a few months. Dozens of cities and towns soon followed the example of King's Lynn. Glasgow, Scotland reported a 68% drop in citywide crime, while police in Newcastle fingered over 1500 perpetrators with taped evidence. (All but seven pleaded guilty, and those seven were later convicted.) In May 1997, a thousand Newcastle soccer fans rampaged through downtown streets. Detectives studying the video reels picked out 152 faces and published eighty photos in local newspapers. In days, all were identified."
In the late 18th century Jeremy Bentham, one of the oddest and most original of English thinkers, designed a prison where every prisoner could be watched at all times. He called it the Panopticon. Elements of his design were later implemented in real prisons in the hope of better controlling and reforming prisoners. If Brin is correct, it is now in the process of being implemented on a somewhat larger scale.
Why would anyone want to build such a system? The case of video surveillance in Britain suggests one reason–it provides an effective and inexpensive way of fighting crime. In the U.S., cameras have long been used in department stores to discourage shoplifting. More recently they have begun to be used to apprehend drivers who run red lights. While there have been challenges on privacy grounds, it seems likely that the practice will spread.
Crime prevention is not the only benefit of surveillance. Consider the problem of controlling auto emissions. The current approach imposes a fixed maximum on all cars, requires all to be inspected, including new cars which are almost certain to pass, and provides no incentive for lowering emissions below the required level. It makes almost no attempt to selectively deter emissions at places and times when they are particularly damaging.
One could build a much superior system using modern technology. Set up unmanned detectors that measure emissions by shining a beam of light through the exhaust plume of a passing automobile; identify the automobile by a snapshot of the license plate. Bill the owner by amount of emissions and, in a more sophisticated system, when and where they were emitted.
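The billing rule is simple enough to sketch. All the rates and readings below are invented; the point is only that the same gram of pollution can be priced differently by place and time of day.

```python
# Dollars per gram of emissions, by (place, time) – invented numbers.
RATE_PER_GRAM = {
    ("downtown", "rush_hour"): 0.05,
    ("downtown", "off_peak"):  0.02,
    ("highway",  "rush_hour"): 0.01,
    ("highway",  "off_peak"):  0.005,
}

def bill(readings):
    """readings: (license plate, grams emitted, place, time) per detector pass."""
    totals = {}
    for plate, grams, place, time in readings:
        totals[plate] = totals.get(plate, 0.0) + grams * RATE_PER_GRAM[(place, time)]
    return totals

readings = [
    ("ABC123", 40.0, "downtown", "rush_hour"),
    ("ABC123", 10.0, "highway",  "off_peak"),
    ("XYZ789", 40.0, "highway",  "off_peak"),
]
print(bill(readings))  # the same grams cost far more downtown at rush hour
```

Unlike a fixed emissions ceiling, a per-gram bill gives every driver a reason to reduce emissions below the current legal maximum.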
None of these useful applications of technology poses, at first glance, a serious threat to privacy. Few would consider it objectionable to have a police officer wandering around a park or standing on a street corner, keeping an eye out for purse snatchers and the like. Video cameras on poles are merely a more convenient way of doing the same thing–comfortably and out of the wet. Cameras at red lights, or photometric monitoring of a car's exhaust plume, are cheaper and more effective substitutes for traffic cops and emission inspections. What's the problem?
The problem comes when we combine this technology with others. A cop on the street corner may see you, he may even remember you, but he has no way of combining everything he sees with everything that every other cop sees and so reconstructing your daily life. A video camera produces a permanent record. It is now possible to program a computer to identify a person from a picture of his face. That means that the video tapes produced by surveillance cameras will be convertible into a record of where particular people were when. Add in the ability of modern data processing to keep track of enormous amounts of information and we have the possibility of a world where a large fraction of your doings are an open book to anyone with access to the appropriate records.
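The combination is easy to demonstrate: once stored video can be tagged with names, scattered sightings sort themselves into a per-person itinerary. The records below are invented.

```python
from collections import defaultdict

# (time, camera, person identified by face recognition) – invented records.
sightings = [
    ("09:00", "cam-park",   "alice"),
    ("09:05", "cam-bank",   "bob"),
    ("09:40", "cam-bank",   "alice"),
    ("12:15", "cam-market", "alice"),
]

# Group by person: each camera contributes a fragment, the database the whole.
tracks = defaultdict(list)
for time, camera, person in sightings:
    tracks[person].append((time, camera))

for person, track in sorted(tracks.items()):
    print(person, track)
```

No single camera sees much; the reconstruction happens entirely in the database.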
So far I have been discussing the legal use of surveillance technology, mostly by governments–already happening on a substantial scale and likely to increase in the near future. A related issue is the use of surveillance technology, legally or illegally, by private parties. Lots of people own video cameras and those cameras are getting steadily smaller. One can imagine, a decade or two down the road, an inexpensive video camera with the size and aerodynamic characteristics of a mosquito. The owner of a few dozen of them could collect a lot of information about his neighbors–or anyone else.
Of course technological development, in this area as in others, is likely to improve defense as well as offense. Possible defenses against such spying range from jamming transmissions to automated dragon flies programmed to hunt down and destroy video mosquitoes. Such technologies might make it possible, even in a world where all public activities were readily observable, to maintain a zone of privacy within one's own house.
Then again, they might not. We have already had court cases over whether it is or is not a search to deduce marijuana growing inside a house by using an infrared detector to measure its temperature from the outside. We already have technologies that make it possible to listen to a conversation by bouncing a laser beam off a window and reconstructing from the measured vibrations of the glass the sounds that cause them. Even if it is not possible to spy on private life directly, further developments along these lines may make it possible to achieve the same objective indirectly.
Brin argues that, whatever else survives these technologies, privacy will not. More interestingly, he argues that that may be a good thing. He proposes as an alternative to privacy a universal lack of privacy–the transparent society. The police can watch you–but someone is watching them. The entire system of video cameras, including cameras in every police station, is publicly accessible. Click on the proper web page–read, presumably, from a hand held wireless device–and you can see anything that is happening in any public place. Parents can keep an eye on their children, children on their parents, spouses on each other, employers on employees and vice versa, reporters on cops and politicians.
Many years ago I was a witness to a shooting; one result was the opportunity for a certain amount of casual conversation with police officers. One of them advised me that, if I ever happened to shoot a burglar, there were two things I should make sure of–that he ended up dead and that the body ended up inside my house.
The advice was well meant and perhaps sensible–under U.S. law a homeowner is in a much stronger legal position killing an intruder inside his house than outside, and a dead man cannot give his side of the story. But it was also, at least implicitly, advice to commit a felony. That incident, and a less friendly one in another jurisdiction where I was briefly under arrest for disturbing the peace (my actual offense was aiding and abetting someone else in asking a policeman for his badge number), convinced me that at least some law enforcers, even ones who are honestly trying to prevent crime, have an elastic view of the application of the law to themselves and their friends. The problem is old enough to be the subject of a Latin tag–Quis custodiet ipsos custodes? Who shall guard the guardians?
The transparent society offers a possible solution. Consider the Rodney King case. A group of policemen captured a suspect and beat him up–a perfectly ordinary sequence of events in many parts of the world, including some parts of the U.S. Unfortunately for the police, a witness got the whole thing on video tape–with the result that several of the officers ended up in prison. In Brin's world, every law enforcement agent knows that he may be on candid camera–and conducts himself accordingly.
There are at least two problems with this solution. The first is getting there. If transparency comes, as it is coming in England, in the form of cameras on poles installed and operated by the government, Brin's version does not seem likely. All of the information will be flowing through machinery controlled by some level of government. Whoever is in charge can plausibly argue that although much of that information can and should be made publicly accessible, there ought to be limits. And even if they do not argue for limits, they can still impose them. If police are setting up cameras in police stations, they can arrange for a few areas to be accidentally left uncovered. If the FBI is in charge of a national network it can, and on all past evidence will, make sure that some of the information generated is accessible only to those who can be trusted not to misuse it–most of whom are working for the FBI.
The situation gets more interesting in a world where technological progress enables private surveillance on a wide scale, so that every location where interesting things might happen, including every police station, has flies on the wall watching what happens and reporting back to their owners. A private individual, even a large corporation, is unlikely to attempt the sort of universal surveillance that Brin imagines for his public system, so each individual will be getting information about only a small part of the world. But if that information is valuable to others, it can be shared. Governments might try to restrict such sharing. But in a world of strong privacy that will be hard to do, since in such a world information transactions will be invisible to outside parties. Combining ideas from several chapters of this section, one can imagine a future where Brin's transparent society is produced not by government but by private surveillance.
A universal spy network is likely to be an expensive proposition, especially if you include the cost of information processing–facial recognition of every image produced and analysis of the resulting data. No single individual, probably no single corporation, will find it in its interest to bear that cost to produce information for its own use, although a government might. The information will be produced privately only if there is some way in which it can be resold, giving the producer not only the value of his use of the information but the value of everyone's use of the information. So a key requirement for a privately generated transparent society is a well organized market for information.
Following Brin, I have presented the transparent society as a step into the future, enabled by video cameras and computers. One might instead view it as a step into the past. The privacy that most of us take for granted is to a considerable degree a novelty, a product of rising incomes in recent centuries. In a world where many people shared a single residence, where a bed at the inn was likely to be shared by two or three strangers, transparency did not require video cameras.
For a more extreme example, consider a primitive society–say Samoa. Multiple families share a single house–without walls. While there is no internet to spread information, the community is small enough to make gossip an adequate substitute. Infants are trained early on not to make noise. Adults rarely express hostility. Most of the time, someone may be watching–so you alter your behavior accordingly. If you do not want your neighbors to know what you are thinking or feeling, you avoid clearly expressing yourself in words or facial expression. You have adapted your life to a transparent society.
Ultimately this comes down to two strategies, both familiar to most of us in other contexts. One is not to let anyone know your secrets–to live as an island. The other is to communicate in code–words or expressions that your intimates will correctly interpret and others will not. For a milder version of the same approach, consider parents who talk to each other in a foreign language when they do not want their children to understand what they are saying–or a 19th century translation of a Chinese novel I once came across, with the pornographic passages translated into Latin instead of English.
In Brin's future transparent society, many of us will become less willing to express our opinions of boss, employees, ex-wife or present husband in any public place. People will become less expressive and more self contained, conversation bland or cryptic. If some spaces are still private, more of social life will shift to them. If every place is public, we have stepped back at least several centuries, arguably several millennia.
My wife is suing me for divorce on grounds of adultery. In support of her claim, she presents video tapes, taken by hidden cameras, that show me making love to three different women, none of them her.
My attorney asks for a postponement to investigate the new evidence. When the court reconvenes, he submits his own videotape. The jury observes my wife making love, consecutively, to Humphrey Bogart, Napoleon, her attorney and the judge. When quiet is restored in the courtroom, my attorney presents the judge with the address of the video effects firm that produced the tape.
With modern technology I do not, or at least soon will not, need your cooperation to make a film of you doing things; a reasonable selection of photographs will suffice. As Hollywood demonstrated with Roger Rabbit, it is possible to combine real and cartoon characters in what looks like a single filmstrip. In the near future the equivalent, using convincing animations of real people, will be something that a competent amateur can produce on his desktop. We may finally get to see John F. Kennedy making love to Marilyn Monroe–whether or not it ever happened.
In that world, the distinction between what I know and what I can prove becomes critical. Our world may be filled with video mosquitoes, each reporting to its owner and each owner pouring the information into a common pool, but some of them might be lying. When I pull information out of the pool I have no way of knowing whether to believe it.
There are possible technological fixes–ways of using encryption technology to build a camera that digitally signs its output, demonstrating that that sequence was taken by that camera at a particular time. But it is hard to design a system that cannot be subverted by the camera's owner. Even if we can prove that a particular camera recorded a tape of me making love to six women, how do we know whether it did so while pointed at me or at a video screen displaying the work of an animation studio? The potential for forgery significantly weakens the ability of surveillance technology to produce verifiable information.
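The signing scheme described above can be sketched in a few lines. This is a toy model, not a real design: HMAC with a secret key shared between camera and verifier stands in for the asymmetric signature a real camera would use (so that verifiers need not hold the secret), and the key and camera names are hypothetical. Note what the sketch does and does not prove: tampering with a signed frame is detectable, but nothing stops the camera from being pointed at a screen in the first place.

```python
import hashlib
import hmac
import json

# Hypothetical key burned into the camera's tamper-resistant hardware.
# A real design would use an asymmetric signature (e.g. Ed25519) instead,
# so that anyone could verify without being able to forge.
CAMERA_KEY = b"secret-burned-into-camera-hardware"

def sign_frame(frame_bytes, camera_id, timestamp):
    """Bind a hash of the frame to a camera identity and a time."""
    payload = json.dumps({
        "camera": camera_id,
        "time": timestamp,
        "frame": hashlib.sha256(frame_bytes).hexdigest(),
    }, sort_keys=True).encode()
    tag = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_frame(payload, tag):
    """Check that payload was signed by the camera and not altered since."""
    expected = hmac.new(CAMERA_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = sign_frame(b"raw image data", "cam-42", 1700000000)
assert verify_frame(payload, tag)             # untampered record verifies
assert not verify_frame(payload + b"x", tag)  # any alteration is detected
```

The verification step establishes provenance–that this camera produced this sequence at this time–but, as the paragraph notes, it cannot establish what the lens was actually looking at.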
For many purposes, unverifiable information will do–if my wife wants to know about my infidelity but does not need to prove it. As long as the government running a surveillance system can trust its own people it can use that system to detect crimes or politically unpopular expressions of opinion. And video evidence will still be usable in trials, provided that it is accompanied by a sufficient evidence trail to prove where and when it was taken–and that it has not been improved since.
Modern societies have two different systems of legal rules–criminal law and tort law–that do essentially the same thing. Someone does something that injures others; he is charged, tried, and convicted; and something bad happens to him as a result, which gives other people an incentive not to do such things. In the criminal system prosecution is controlled and funded by the state, in the tort system by the victim. In the criminal system a compromise is called a plea bargain, in the tort system an out of court settlement. Criminal law provides a somewhat different range of punishments–it is not possible to execute someone for a tort, for example, although it was possible for something very much like a tort prosecution to lead to execution under English law a few centuries back–and operates under somewhat different legal rules. But in their general outlines, the two systems are no more than slightly different ways of doing the same thing.
This raises an obvious question–is there any good reason to have both? Would we, for example, be better off abolishing criminal law entirely and instead having the victims of crimes sue the criminals?
One argument against such a pure tort system is that some offenses are hard to detect. A victim may conclude that catching and prosecuting the offender costs more than it is worth–especially if the offender turns out not to have enough assets to pay substantial damages. Hence some categories of offense may routinely go unpunished.
In Brin's world that problem vanishes. Every mugging is on tape. If the mugger chooses to wear a mask while committing his crime we can trace him backwards or forwards through the record until he takes it off. While a sufficiently ingenious criminal might find a way around that problem, most of the offenses that our criminal law now deals with would be cases where most of the facts are known and only their legal implications remain to be determined. The normal crime becomes very much like the normal tort–an auto accident, say, where (except in the case of hit and run, which is a crime) the identity of the party and many of the relevant facts are public information. In that world it might make sense to abolish criminal law and shift everything to the decentralized, privately controlled alternative. If someone steals your car you check the video record to identify him, then sue for the car plus a reasonable payment for your time and trouble recovering it.
Like many radical ideas, this one looks less radical if one is familiar with the relevant history. Legal systems in which something similar to tort law dealt with what we think of as crimes–in which if you killed someone his kinsmen sued you–are common in the historical record. Even as late as the 18th century, while the English legal system distinguished between torts and crimes, both were in practice privately prosecuted, usually by the victim. One possible explanation for the shift to a modern, publicly prosecuted system of criminal law is that it was a response to the increasing anonymity that accompanied the shift to a more urban society in the late 18th and early 19th century. Technologies that reverse that shift may justify a reversal of the accompanying legal changes.
It does no good to use strong encryption for my email if a video mosquito is sitting on the wall watching me type and recording every keystroke. Hence strong privacy in a transparent society requires some way of guarding the interface between my realspace body and cyberspace. This is no problem in the version where the walls of my house are still opaque. It is a serious problem in the version in which every place is, in fact if not in law, public. A low tech solution is to type under a hood. A high tech solution is some link between mind and machine that does not go through the fingers–or anything else visible to an outside observer.
The conflict between realspace transparency and cyberspace privacy goes in the other direction as well. If we are sufficiently worried about other people hearing what we say, one solution is to encrypt face to face conversation. With suitable wireless gadgets, I talk into a throat mike or type on a virtual keyboard (keeping my hands in my pockets). My pocket computer encrypts my message with your public key and transmits it to your pocket computer, which decrypts the message and displays it through your VR glasses. To make sure nothing is reading the glasses over your shoulder, the goggles get the image to you not by displaying it on a screen but by using a tiny laser to write it on your retina. With any luck, the inside of your eyeball is still private space.
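The key exchange underlying such an encrypted conversation can be sketched briefly. This is a toy model under stated assumptions: Diffie-Hellman key agreement over a published prime stands in for whatever public-key machinery the pocket computers would actually use, and a hash-derived keystream stands in for a real cipher. The parameters here are illustrative, not secure enough for actual use.

```python
import hashlib
import secrets

# Toy Diffie-Hellman group: a published prime and generator. Both pocket
# computers know these; only the private keys are secret.
P = 2**127 - 1   # a Mersenne prime; far too small for real security
G = 3

def keypair():
    """Generate a private key and the public key derived from it."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def shared_key(my_priv, their_pub):
    """Both sides compute the same secret from their own private key
    and the other side's public key."""
    return hashlib.sha256(str(pow(their_pub, my_priv, P)).encode()).digest()

def xor_stream(key, data):
    """Encrypt/decrypt with a hash-derived keystream (toy cipher)."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

a_priv, a_pub = keypair()   # my pocket computer
b_priv, b_pub = keypair()   # yours

key_a = shared_key(a_priv, b_pub)
key_b = shared_key(b_priv, a_pub)
assert key_a == key_b       # both sides derive the same secret

ciphertext = xor_stream(key_a, b"meet me at noon")
assert xor_stream(key_b, ciphertext) == b"meet me at noon"
```

An eavesdropper–or a video mosquito on the wall–sees only the public keys and the ciphertext; without one of the private keys, the throat-mike conversation stays between the two pocket computers.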
We could end up in a world where physical actions are entirely public, information transactions entirely private. It has some attractive features. Private citizens will still be able to take advantage of strong privacy to locate a hit man, but hiring him may cost more than they are willing to pay, since in a sufficiently transparent world all murders are detected. Each hit man executes one commission then goes directly to jail.
What about the interaction between these technologies and data processing? On the one hand, it is modern data processing that makes the transparent society such a threat–without that, it would not much matter if you videotaped everything that happened in the world, since nobody could ever find the particular six inches of video tape he wanted in the millions of miles produced each day. On the other hand, the technologies that support strong privacy provide a possibility of reestablishing privacy, even in a world with modern data processing, by keeping information about your transactions from ever getting to anyone but you. That is a subject we will return to in a later chapter when we discuss digital cash–an idea dreamed up in large part as a way of restoring transactional privacy.
In this chapter I use “informational privacy” as shorthand for an individual’s ability to control other people’s access to information about him. If I have a legal right not to have you tap my phone but cannot enforce that right–the situation at present for those using cordless phones without encryption–then I have little privacy with regard to my phone calls. On the other hand, I have almost complete privacy with regard to my own thoughts, even though it is perfectly legal for other people to use the available technologies–listening to my voice and watching my facial expressions–to try to figure out what I am thinking. Privacy in this sense depends on a variety of things, including both law and technology. If someone invented an easy and accurate way of reading minds, privacy would be radically reduced even if there were no change in my legal rights.
There are two reasons to define privacy in this way. The first is that I am interested in its consequences, in the ways in which my ability to control information about me benefits or harms myself and others—whatever the source of that ability may be. The second is that I am interested in the ways in which technology is likely to change the ability of an individual to control information about himself—hence in changes in privacy due to sources other than changes in law.
Many people go to some trouble to reduce the amount others can find out about them. Many people, sometimes the same people, make an effort to get information about other people. This suggests an interesting question: On net, is an increase in privacy good or bad? Do I gain more from your being unable to find things out about me than I lose from my being unable to find things out about you?
Most people seem to think that the answer is “yes.” It is common to see some new product, technology, or legal rule attacked as reducing privacy, rare to see anything attacked as increasing privacy. Why?
The reason I value my privacy is straightforward: Information about me in the hands of other people sometimes permits them to gain at my expense. They may do so by stealing my property–if, for example, they know when I will not be home. They may do so by getting more favorable terms in a voluntary transaction–if, for example, they know just how much I am willing to pay for the house they are selling. They may do so by preventing me from stealing their property–by, for example, not hiring me as company treasurer after discovering that I am a convicted embezzler.
Information about me in other people’s hands may also benefit me–for example, the information that I am honest and competent. But privacy does not prevent that information from being available to them. If I have control over information about myself I can release it when, and only when, doing so is in my interest.
My examples included one–where my privacy protects me from burglary–in which privacy produced a net benefit, since the gain to a burglar is normally less than the loss to his victim. They included one–where my privacy permitted me to steal from others–in which privacy produced a net loss. And they included one case–bargaining–where the net effect appeared to be a wash, since what I lost someone else gained. So while it is clear why I am in favor of my having privacy, it is not clear why I should expect my gains from my having privacy to outweigh my losses from your having it. It becomes even less clear if we look at the case of bargaining a little more carefully.
Consider a real world example:
Before my wife and I moved from Chicago to California, we spent some time looking for a house. We found, in the entire South Bay, precisely one house that we really liked—a lovely ninety-year-old home, set in its own tiny island of green surrounded by walls and hedges, in a neighborhood of fifties ranch houses. As an added bonus, the current owners, having bought the house in dilapidated condition, had put time and thought into undoing the effects of decades of neglect. Apparently our tastes were almost as uncommon as the house—judged by the fact that the owners were offering it at a price comparable to new houses of similar size and having a sufficiently hard time finding a buyer to be willing to consider offers somewhat below their asking price.
We did not, probably could not, conceal the fact that we liked the house. But we did make some attempt to conceal how much we liked the house—and how much, if necessary, we were willing and able to pay for it. If we had had no privacy, if the sellers had been able to listen in to all of our thoughts and conversations, we would have ended up paying noticeably more for it than we did. Conversely, if they had had no privacy, we might have been able to discover that they were willing to accept a lower price than the one we eventually paid.
At some stage in the bargaining, we make a final offer and they do or do not accept it. Our offer is based in part on what the house is worth to us and in part on what we think it is worth to them—our estimate of the lowest offer they will accept. Whether they accept it depends in part on the worth of the house to them, in part on whether they really believe it is our final offer or think that by refusing it they can get a better one.
If one side or the other guesses wrong, if they refuse to accept our offer because they think we will raise it or we refuse to raise it because we think they will accept it, the bargain falls through and we end up with our second or third choice instead. Such bargaining breakdown represents a real loss—both sides are worse off than if they had sold us the house at some price above their value for it and below ours. Privacy, by making it harder for each side to correctly interpret the other’s position, makes such breakdown more likely.
Generalizing the argument, it looks as though privacy produces, on average, a net loss in situations where parties are seeking information about each other in order to improve the terms of a voluntary transaction, since it increases the risk of bargaining breakdown. In situations involving involuntary transactions, privacy produces a net gain if it is being used to protect other rights (assuming that those rights have been defined in a way that makes their protection desirable) and a net loss if it is being used to violate other rights (with the same assumption). There is no obvious reason why the former situation should be more common than the latter. So it remains puzzling why people in general support privacy rights–why they think it is, on the whole, a good thing for people to be able to control information about themselves.
I have a taste for watching pornographic videos. My boss is a puritan who does not wish to employ people who enjoy pornography. If I know my boss is able to monitor my rentals from the local adult video store I respond by renting videos from a more distant and less convenient outlet. My boss is no better off as a result of the limitation of my privacy; I am still viewing pornography and he is still ignorant of the fact. I am worse off by the additional driving time required to visit the more distant store.
Privacy—embodied in a law forbidding the video store from telling my boss what I am renting--not only saves me time, it also discourages my boss from spending time and effort worming information out of the clerk at the local video store. It thus reduces both my costs and his—mine because I can do what I want to do more easily, his because he can’t do it at all. A different form of the same argument should be obvious to anyone who has ever closed a door behind him, loosened his tie, taken off his shoes, and put his feet up on his desk. Privacy has permitted him to maintain his reputation as someone who behaves properly without having to bear the cost of actually behaving properly—which is why there is no window between his office and the adjacent hallway.
There are two problems with this explanation of why people support privacy. The first is that the argument could as easily go the other way. One can readily imagine situations where making it harder for me to protect my privacy means that I stop trying--saving me the cost of protecting my information and other people the cost of trying to defeat my protection. The second is that while some information about me starts under my control, much does not. Consider court records of my conviction on a criminal charge or a magazine’s mailing list with my name on it. Protecting my privacy with regard to such information requires some way of removing that information from the control of those people who initially possess it and transferring control to me. That is, in most cases, a costly process. If we do nothing to give people rights over such information about them, the information will remain public and nothing will have to be spent to restrict access to it.
For a very different argument in favor of privacy, consider a point made earlier: if I have control over information about me but transferring that information to someone else produces net benefits, I can give or sell that information to him. By protecting my control over information about me we establish a market in information. Each piece of information moves to the person who values it most, maximizing net benefit.
This is a good argument for private property in general, but there are problems in applying it to information. Transacting over information is difficult because it is hard to tell the customer what you are selling without, in the process, giving it to him. And information can be duplicated at a cost close to zero, so that while the efficient allocation of a car is to the single person who has the highest value for it, the efficient allocation of a piece of information is to everyone to whom it has positive value. That implies that legal rules that treat information as a commons, free for everyone to make copies, usually lead to the efficient allocation.
One function of property rights is to allocate existing things; another is to give people an incentive to produce things in the first place. You cannot read my book unless I first write it, so if I cannot charge you for reading it the book may never get written. But while that may be a legitimate argument for property rules in contexts such as copyright or patent, it is hard to see how it applies to individual privacy. Information about me is either produced by me as a byproduct of other activities, such as subscribing to a magazine, or else produced by other people about me--in which case giving me property rights in the information will not give them an incentive to produce it.
“It would have been impossible to proportion with tolerable exactness the tax upon a shop to the extent of the trade carried on in it, without such an inquisition as would have been altogether insupportable in a free country.”
“The state of a man’s fortune varies from day to day, and without an inquisition more intolerable than any tax, and renewed at least once every year, can only be guessed at.” (Smith’s explanation of why an income tax is impractical, Bk V Article IV)
Although private parties occasionally engage in involuntary transactions such as burglary, most of our interactions with each other are voluntary ones. Governments engage in involuntary transactions on an enormously larger scale. And governments almost always have an overwhelming superiority of physical force over the individual citizen. While I can protect myself from my fellow citizens with locks and burglar alarms, I can protect myself from government actors only by keeping information about me out of their hands.
The implications depend on one’s view of government. If government is the modern equivalent of the philosopher king, individual privacy simply makes it harder for government to do good. If, on the other hand, a government is merely a particularly large and well organized criminal gang, stealing as much as it can from the rest of us, individual privacy against government is an unambiguously good thing. Most Americans appear, judging by expressed views on privacy, to be close enough to the latter position to consider privacy against government as on the whole desirable, with an exception for cases where they believe that privacy might be used to conceal crimes substantially more serious than tax evasion.
Seen from this standpoint, one problem with Brin's transparent society is the enormous downside risk. Played out under less optimistic assumptions than his, the technology could enable a tyranny that Hitler or Stalin might envy. Even if we accept Brin's optimistic assumption that the citizens are as well informed about the police as the police about the citizens, it is the police who have the guns. They know if we are doing or saying anything they disapprove of and respond accordingly, arresting, imprisoning, perhaps torturing or executing their opponents. We have the privilege of watching. Why should they object? Public executions are an old tradition, designed in part to discourage other people from doing things that might get them executed.
It does not follow that Brin's prescription is wrong. His argument, after all, is that privacy will simply not be an option, either because the visible benefits of surveillance are so large or because the technology will make it impossible to prevent it. If he is right, his transparent society may at least be better than the alternative–surveillance to which only those in power have access, a universal Panopticon with government as the prison guards.
The growing importance of Cyberspace is one revolution we can be confident of, since it has already happened. An earlier chapter discussed implications for privacy. This section deals with how to do business in a world in which physical location and physical identity are becoming increasingly irrelevant. The issues are connected, since tools for doing business in cyberspace may also provide ways of maintaining control over personal information while doing so.
We start, in Chapter VII, with the problem of how to pay for things. One possible answer is anonymous ecash–money that can be passed from one computer to another by sending messages, with no need to transmit anything physical. Such a system has the potential to provide, among other things, a simple solution to the irritation of spam email. It also makes some current law enforcement strategies, notably the attempt to enforce laws by monitoring and controlling the flow of money, unworkable. And it raises the interesting possibility of a future of private currencies competing with each other and with government moneys in both cyberspace and realspace.
Chapter VIII considers a different problem–enforcing contracts online. Online interactions are, in a sense, entirely voluntary; you (or your computer) can be tricked into doing something you do not want to do, but you cannot be forced to, since you are the one with physical control over your computer. In the worst case you can always pull the plug. In an entirely voluntary world, most legal issues can be reduced to contract law. As enforcement of online contracts through the court system becomes increasingly difficult it may be in large part replaced by private alternatives based on reputational sanctions.
We consider next property–intellectual property. A world of easy and inexpensive copying and communication is a world where enforcing copyright is extraordinarily difficult. Are there other, perhaps better, ways to give creators control over what they create? That brings us to the recent and increasingly controversial issue of technological protection of intellectual property–the online equivalent of the barbed wire fences whose invention revolutionized western agriculture. It also brings us back to the possibility of treating personal information as private property, protected not by law but by technology.
The final chapter of this section deals with ways in which the new technologies, by greatly reducing the cost of communication and information, can change how we organize our lives. One interesting and attractive possibility is a shift away from formal organizations such as corporations and universities towards more decentralized models, such as networks of amateur scholars and open source programmers.
I pay for things in one of three different ways–credit card, check or cash. The first two let me make large payments without having to carry large amounts of money. What are the advantages of the third?
One is that a seller does not have to know anything about me in order to accept cash. That makes money a better medium for transactions with strangers, especially strangers from far away. It also makes it a better medium for small transactions, since using cash avoids the fixed costs of checking up on someone to make sure that there is really money in his checking account or that his credit is good. It also means that money leaves no paper trail, which is useful not only for criminals but for anyone who wants to protect his privacy—an increasingly important issue in a world where data processing threatens to make every detail of our lives public.
The advantage of money is greater in cyberspace, since transactions with strangers, including strangers far away, are more likely on the internet than in my realspace neighborhood. The disadvantage is less, since my ecash is stored inside my computer, which is usually inside my house, hence less vulnerable to theft than my wallet.
Despite its potential usefulness, there is as yet no equivalent of cash available online, although there have been unsuccessful attempts to create one and successful attempts to create something close. The reason is not technological; those problems have been solved. The reason is in part the hostility of governments to competition in the money business, in part the difficulty of getting standards, in this case private monetary standards, established. I expect both problems to be solved sometime in the next decade or so.
Before discussing how a system of electronic currency, private or governmental, might work, it is worth first giving at least one example of why it would be useful–for something more important than allowing men to look at pornography online without their wives or employers finding out.
My email contains much of interest. It also contains READY FOR A SMOOTH WAY OUT OF DEBT?, A Personal Invitation from make_real_money@BIGFOOT.COM, You've Been Selected..... from email@example.com, and a variety of similar messages, of which my favorite offers “the answer to all your questions.” The internet has brought many things of value, but for most of us unsolicited commercial email, better known as spam, is not one of them.
There is a simple solution to this problem—so simple that I am surprised nobody has yet implemented it. The solution is to put a price on your mailbox. Give your email program a list of the people you wish to receive mail from. Mail from anyone not on the list is returned, with a note explaining that you charge five cents to read mail from strangers–and the URL of the stamp machine. Five cents is a trivial cost to anyone with something to say that you are likely to want to read, but five cents times ten million recipients is quite a substantial cost to someone sending out bulk email on the chance that one recipient in ten thousand may respond.
The stamp machine is located on a web page. The stamps are digital cash. Pay ten dollars from your credit card and you get in exchange two hundred five cent stamps–each a morsel of encrypted information that you can transfer to someone else and that he, or someone he transfers it to, can eventually bring back to the stamp machine and turn back into cash.
A virtual stamp, unlike a real stamp, can be reused; it is paying not for the cost of transmitting my mail but for my time and trouble reading it, so the payment goes to me, not the post office. I can use it the next time I want to send a message to a stranger. If lots of strangers choose to send me messages, I can accumulate a surplus of stamps to be changed back into cash.
How much I charge is up to me. If I hate reading messages from strangers, I can make the price a dollar, or ten dollars, or a hundred dollars–and get very few of them. If I enjoy junk email, I can set a low price. Once such a system is established, the same people who presently create and rent out the mailing lists used to send spam will add another service–a database keeping track of what each potential target charges to receive it.
What is in it for the stamp machine–why would someone maintain such a system? Part of the answer is seignorage–the profit from coining money. After selling a hundred million five cent stamps, you have five million dollars of money. If your stamps are popular, many of them may stay in circulation for a long time–leaving the money that bought them in your bank account accumulating interest.
In addition to the free use of other people’s money, there is a second advantage. If you own the stamp machine, you also own the wall behind it–the web page people visit to buy stamps. Advertisements on that wall will be seen by a lot of people.
One reason this solution to spam requires ecash is that it involves a large number of very small payments. It would be a great deal clumsier if we used credit cards–every time you received a message with a five cent stamp, you would have to check with the sender's bank before reading it to make sure the payment was good. A second reason is privacy. Many of us would prefer not to leave a complete record of our correspondence with a third party–which we would be doing if we used credit cards or something similar. What we want is not merely ecash but anonymous ecash–some way of making payments that provides no information to third parties about who has paid what to whom.
The solution is a digital signature. The bank creates a banknote that says "First Bank of Cyberspace: Pay the bearer one dollar in U.S. currency." It digitally signs the note, using its private key. It makes the matching public key widely available. When you come in to the bank with a dollar, it gives you a banknote in the form of a file on a floppy disk. You transfer the file to your hard disk; you now have a one dollar bill with which to buy something from someone else online. When he receives the file he checks the digital signature against the bank's public key.
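For readers who want to see the machinery, here is a sketch of signing and checking such a banknote with toy RSA numbers. A real bank would use a key hundreds of digits long; the primes, exponents, and function names below are chosen purely for illustration:

```python
import hashlib

# Toy RSA key (illustrative only).
p, q = 61, 53
n = p * q            # 3233: the public modulus
e = 17               # public exponent
d = 2753             # private exponent: (e * d) mod lcm(p-1, q-1) == 1

def digest(note: str) -> int:
    """Reduce the banknote text to a number mod n."""
    return int(hashlib.sha256(note.encode()).hexdigest(), 16) % n

def sign(note: str) -> int:
    """The bank signs with its private key d."""
    return pow(digest(note), d, n)

def verify(note: str, signature: int) -> bool:
    """Anyone can check the signature using only the public key (n, e)."""
    return pow(signature, e, n) == digest(note)

note = "First Bank of Cyberspace: Pay the bearer one dollar in U.S. currency."
sig = sign(note)
print(verify(note, sig))             # True
print(verify(note, (sig + 1) % n))   # False -- a forged signature fails
```

The point of the asymmetry is that only the bank, holding d, can produce valid signatures, while anyone holding the public key can check them.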
There is a problem—a big problem. What you have gotten for your dollar is not one dollar bill but an unlimited number of them. Sending a copy of the file in payment for one transaction does not erase it from your computer, so you can send it again to someone else to buy something else. And again. That is going to be a problem for the bank, when twenty people come in to claim your original dollar bill.
One solution is for the bank to give each dollar its own identification number and keep track of which ones have been spent. When a merchant receives your file he sends it to the bank, which deposits the corresponding dollar in his account and adds its number to a list of banknotes that are no longer valid. When you try to spend a second copy of the note, the merchant who receives it tries to deposit it, is informed that it is no longer valid, and doesn't send you your goods.
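The bank's bookkeeping in this scheme amounts to a set of spent serial numbers. A minimal sketch, with hypothetical names for the bank and its methods:

```python
class Bank:
    """Tracks which banknote serial numbers have already been deposited."""
    def __init__(self):
        self.spent = set()     # serial numbers no longer valid
        self.accounts = {}     # merchant -> dollars on deposit

    def deposit(self, merchant: str, serial: int) -> bool:
        if serial in self.spent:
            return False       # a second copy of the note: refuse it
        self.spent.add(serial)
        self.accounts[merchant] = self.accounts.get(merchant, 0) + 1
        return True

bank = Bank()
print(bank.deposit("Bill", 94602))    # True  -- the first spend is honored
print(bank.deposit("Carol", 94602))   # False -- Alice's second copy bounces
```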
This solves the problem of double spending, but it also eliminates most of the advantages of ecash over credit cards. The bank knows that it issued banknote 94602… to Alice, it knows that it came back from Bill, so it knows that Alice bought something from Bill, just as it would if she had used a credit card.
The solution to this problem uses what David Chaum, the Dutch cryptographer who is responsible for many of the ideas underlying ecash, calls blind signatures. It is a way in which Alice, having rolled up a random identification number for a dollar bill, can get the bank to sign that number (in exchange for paying the bank a dollar) without telling the bank what number it is signing. Even though the bank does not know the serial number it signed, both it and the merchant who receives the note can check that the signature is valid. Once the dollar bill is spent, the merchant has the serial number, which he reports to the bank, which can add it to the list of serial numbers that are now invalid. The bank knows it provided a dollar to Alice, it knows it received back a dollar from Bill, but it does not know that they are the same dollar. So it does not know that Alice bought something from Bill. The seller has to check with the bank and know that the bank is trustworthy, but it does not have to know anything about the purchaser.
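The blinding trick itself can be shown with the same sort of toy RSA numbers. This is a sketch of the standard RSA blind-signature construction, not of any particular bank's protocol; the serial number and blinding factor below are made up:

```python
# Toy RSA key (illustrative only; real keys are hundreds of digits long).
p, q = 61, 53
n, e, d = p * q, 17, 2753

serial = 1234        # Alice's secret serial number for her dollar bill
r = 7919             # Alice's random blinding factor, coprime to n

# Alice blinds the serial before showing it to the bank:
# multiplying by r^e hides the serial completely.
blinded = (serial * pow(r, e, n)) % n

# The bank signs the blinded value with its private key d.
# It never learns the serial itself.
blind_sig = pow(blinded, d, n)

# Alice unblinds: dividing out r leaves an ordinary signature
# on the serial, since (serial * r^e)^d = serial^d * r  (mod n).
sig = (blind_sig * pow(r, -1, n)) % n

# Any merchant can now verify the signature with the public key,
# though the bank has never seen the number it signed.
print(pow(sig, e, n) == serial)   # True
```

The `pow(r, -1, n)` call computes the modular inverse of the blinding factor (this form requires Python 3.8 or later).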
Curious readers will want to know how it is possible for a bank to sign a serial number without knowing what it is. I cannot tell them without first explaining the mathematics of public key encryption, which requires more math than I am willing to assume my average reader has. Those who are curious can find the answers in the virtual footnotes, which point to webbed explanations of both public key encryption and blind signatures.
So far I have been assuming that people who receive digital cash can communicate with the bank that issues it while the transaction is taking place–that they and the bank are connected to the internet or something similar. That is not a serious constraint if the transaction is occurring online. But digital cash could also be useful for realspace transactions–and the cabby or hotdog vendor may not have an internet connection.
The solution is another clever trick (Chaum specializes in clever tricks). It is a form of ecash that contains information about the person it was issued to but only reveals that information if he tries to spend the same dollar bill twice. For an explanation of how it works, you must again go to the virtual footnotes.
Skeptical readers should at this point be growing increasingly unhappy at being told that everything about ecash is done by mathematics that I am unwilling to explain–which they may reasonably enough translate as "smoke and mirrors." For their benefit I have invented my own form of ecash–one that has all of the features of the real thing and can be understood with no mathematics beyond the ability to recognize numbers. It is a good deal less convenient than Chaum's version but a lot easier to explain, and so provides at least a possibility proof for the real thing.
I randomly create a very long number. I put the number and a dollar bill in an envelope and mail it to the First Bank of Cybercash. The FBC agrees–in a public statement–to do two things with money it receives in this way:
I. If the FBC receives an envelope containing a dollar bill and a number, it will hold the dollar bill on deposit, associated with that number.
II. If the FBC receives a letter that includes the number associated with a dollar bill it has on deposit, instructing the FBC to change it to a new number, it will make the change and post the fact of the transaction on a publicly observable bulletin board. The dollar bill will now be associated with the new number.
Alice has sent the FBC a dollar, accompanied by the number 59372. She now wants to buy a dollar's worth of digital images from Bill, so she emails the number to him in payment. Bill emails the FBC, sending them three numbers–59372, 21754, 46629.
The FBC checks to see if it has a dollar on deposit with number 59372; it does. It changes the number associated with that dollar bill to 21754, Bill's second number. Simultaneously, it posts on a publicly observable bulletin board the statement "the transaction identified by 46629 has gone through." Bill reads that message, which tells him that Alice really had a dollar bill on deposit and it is now his, so he emails her a dollar's worth of digital images.
Alice no longer has a dollar, since if she tries to spend it again the bank will report that it is not there to be spent–FBC no longer has a dollar associated with the number she knows. Bill now has a dollar, since the dollar that Alice originally sent in is now associated with a new number and only he (and the bank) knows what it is. He is in precisely the same situation that Alice was before the transaction, so he can now spend the dollar to buy something from someone else. Like an ordinary paper dollar, the dollar of ecash in my system passes from hand to hand. Eventually someone who has it decides he wants a dollar of ordinary cash instead; he takes his number, the number that Alice's original dollar is now associated with, to the FBC to exchange for a dollar bill.
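The whole protocol is simple enough to simulate in a few lines. A sketch, with made-up class and method names, replaying the Alice-and-Bill transaction from the paragraphs above:

```python
class FBC:
    """First Bank of Cybercash: dollar bills indexed by secret numbers."""
    def __init__(self):
        self.dollars = set()    # each number here backs one dollar bill
        self.bulletin = []      # the publicly observable bulletin board

    def deposit(self, number: int) -> None:
        self.dollars.add(number)

    def transfer(self, old: int, new: int, receipt: int) -> bool:
        if old not in self.dollars:
            return False        # no dollar under that number
        self.dollars.remove(old)
        self.dollars.add(new)
        self.bulletin.append(
            f"the transaction identified by {receipt} has gone through")
        return True

bank = FBC()
bank.deposit(59372)                        # Alice mails in a dollar
print(bank.transfer(59372, 21754, 46629))  # Bill swaps in his number: True
print(bank.transfer(59372, 11111, 99999))  # Alice tries to respend: False
```

After the first transfer only Bill (and the bank) knows the number the dollar now lives under, which is exactly what it means for the dollar to be his.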
My ecash may be low tech, but it meets all of the requirements. Payment is made by sending a message. Payer and payee need know nothing about each other's identity beyond the address to send the message to. The bank need know nothing about either party. When the dollar bill originally came in, the letter had no name on it, only an identifying number. Each time it changed hands, the bank received an email but had no information about who sent it. When the chain of transactions ends and someone comes into the bank to collect the dollar bill, he need not identify himself; even if the bank can somehow identify him he has no way of tracing the dollar bill back up the chain. The virtual dollar in my system is just as anonymous as the paper dollars in my wallet.
With lots of dollar bills in the bank there is a risk that two might by chance have the same number, or that someone might make up numbers and pay with them in the hope that the numbers he invents will, by chance, match numbers associated with dollar bills in the bank. But both problems become insignificant if instead of using five digit numbers we use hundred digit numbers. The chance that two random hundred digit numbers will turn out to be the same is a good deal less than the chance that payer, payee, and bank will all be struck by lightning at the same time.
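The relevant figure is the "birthday" calculation: with k notes outstanding there are roughly k squared over two chances for some pair to collide. A quick sketch, generously assuming a trillion notes in circulation:

```python
# Birthday bound on accidental collisions among hundred-digit serials.
k = 10**12                  # a trillion notes outstanding (generous)
space = 10**100             # possible hundred-digit serial numbers

collision_chance = k * (k - 1) / (2 * space)
print(collision_chance)     # on the order of 1e-77 -- negligible
```

A probability on the order of ten to the minus seventy-seventh is, as the text says, far below the chance of simultaneous lightning strikes on all three parties.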
It may have occurred to you that if you have to roll up a hundred digit random number every time you want to buy a dollar of ecash from the bank and two more every time you receive one from anyone else, not to mention sending off one anonymous email to the bank for every dollar you receive, ecash may be more trouble than it is worth. Don't worry--that's your computer's job, not yours. With a competently designed ecash system, the program takes care of all mathematical details; all you have to worry about is having enough money to pay your (virtual) bills. You tell your computer what to pay to whom; it tells you what other people have paid to you and how much money you have. Random numbers, checks of digital signatures, blind signing, and all the rest are done in the background. If you find that hard to believe, consider how little most of us know about how the tools we routinely use, such as cars, computers or radios, actually work.
When Chaum came up with the idea of ecash, email was not yet sufficiently popular to make spam an issue. What motivated him was the problem we discussed back in chapter IV–the loss of privacy created by the ability of modern information processing to combine publicly available information into a detailed portrait of each individual.
Consider an application of ecash that Chaum has actually worked on–automated toll collection. It would be very convenient if, instead of stopping at a toll booth when getting on or off the interstate, we could simply drive past, making the payment automatically in the form of a wireless communication between the (unmanned) toll booth and the car. The technology to do this exists and has long been used to provide automated toll collection for buses on some roads.
One problem is privacy. If the payment is made with a credit card, or if the toll agency adds up each month's tolls and sends you a bill, someone has a complete record of every trip you have taken on the toll road, every time you have crossed a toll bridge. If we deal with auto pollution by measuring pollutants in the exhaust plumes of passing automobiles and billing their owners, someone ends up with detailed, if somewhat fragmentary, records of where you were when.
Ecash solves that problem. As you whiz past the toll booth, your car pays it fifty cents in anonymous ecash. By the time you are thirty feet down the road, the (online) toll booth has checked that the money is good; if it isn't, an alarm goes off, a camera triggers, and if you do not stop a traffic cop eventually appears on your tail. But if your money is good you go quietly about your business–and there is no record of your passing the toll booth. The information never came into existence, save in your head. Similarly for an automated system of pollution charges.
It works for shopping as well. Ecash–this time encoded in a smart card in your wallet or a palmtop computer in your pocket–provides much of the convenience of a credit card with the anonymity of cash. If you want the seller to know who you are, you are free to tell him. But if you prefer to keep your transactions private, you can.
Private money denominated in dollars is already common. My money market fund is denominated in dollars, although Merrill Lynch does not actually have a stack of dollar bills in a vault somewhere that corresponds to the amount of money "in" my account. My university I.D. card doubles as a money card, with some number of dollars stored on its magnetic strip—a number that decreases every time I use the card to buy lunch on campus. A bank could issue ecash on the same basis. Each dollar of ecash represents a claim to be paid a dollar bill. The actual assets backing that claim consist not of a stack of dollar bills but of stocks, bonds, and the like–which have the advantage of paying the bank interest for as long as the dollar of ecash is out there circulating.
While I do not have to know anything about you in order to accept your ecash, I do have to know something about the bank that issued it–enough to be sure that the money will eventually be redeemed. That means that any ecash expected to circulate widely will be issued by organizations with reputations. In a world of almost instantaneous information transmission, those organizations will have a strong incentive to maintain their reputations, since a loss of confidence will result in money holders bringing in virtual banknotes to be redeemed, eliminating the source of income that the assets backing those banknotes provided.
Some economists, in rejecting the idea of private money, have argued that such an institution is inherently inflationary. Since issuing money costs a bank nothing and gives it the interest on the assets it buys with the money, it is always in the bank's interest to issue more.
The rebuttal to this particular error was published in 1776. When Adam Smith wrote The Wealth of Nations, the money of Scotland consisted largely of banknotes issued by private banks, redeemable in silver. As Smith pointed out, while a bank could print as many notes as it wished, it could not persuade other people to hold an unlimited number of its notes. A customer who holds a thousand dollars in virtual cash–or Scottish banknotes–when he only needs a hundred is giving up the interest he could have been earning if he had held the other nine hundred dollars in bonds or some other interest earning asset instead. That is a good reason to limit his cash holdings to the amount he actually needs for day to day transactions.
What happens if a bank tries to issue more of its money than people wish to hold? The excess comes back to be redeemed. The bank is wasting its resources printing up money, trying to put it into circulation, only to have each extra banknote promptly returned for cash–in Smith's case, silver. The obligation of the bank to redeem its money guarantees its value, and at that value there is a fixed amount of the money that people will choose to hold.
"Let us suppose that all the paper of a particular bank, which the circulation of the country can easily absorb and employ, amounts exactly to forty thousand pounds; and that for answering occasional demands, this bank is obliged to keep at all times in its coffers ten thousand pounds in gold and silver. Should this bank attempt to circulate forty-four thousand pounds, the four thousand pounds which are over and above what the circulation can easily absorb and employ, will return upon it almost as fast as they are issued." (Bk II Chapter 2)
So far I have assumed that future ecash will be denominated in dollars. Dollars have one great advantage–they provide a common unit already in widespread use. They also have one great disadvantage–they are produced by a government, and it may not always be in the interest of that government to maintain their value in a stable, or even predictable, way. On past evidence, governments sometimes increase or decrease the value of their currency, inadvertently or for any of a variety of political purposes. In the extreme case of a hyperinflation, a government tries to fund its activities with the printing press, rapidly increasing the amount of money and decreasing its value. In less extreme cases, a government might inflate in order to benefit debtors by inflating away the real value of their debts–governments themselves are often debtors, hence potential beneficiaries of such a policy–or it might inflate or deflate in the process of trying to manipulate its economy for political ends.
Dollars have a second disadvantage, although perhaps a less serious one. Because they are issued by a particular government, citizens of other governments may prefer not to use them. This has not prevented dollars from becoming a de facto world currency, but it is one reason why a national currency might not be the best standard to base ecash on. The simplest alternative would be a commodity standard, making the unit of ecash a gram of silver or gold, or some other widely traded commodity.
Under such a commodity standard the monetary unit, while no longer under the control of a government, is subject instead to the forces that affect the value of the particular commodity it is based on. If large amounts of gold are discovered, or if someone invents new and better techniques for extracting gold from low grade ore, the value of gold, and of gold based money, will decline. If, on the other hand, important new uses for gold are found, and no new supplies, the value of gold will rise and prices fall. Thus a commodity money carries with it at least some risk of unpredictable fluctuations in its value, and hence in prices measured in it.
That problem is solved by replacing a simple commodity standard with a commodity bundle. Bring in a million Friedman Dollars and I agree to give you in exchange ten ounces of gold, forty ounces of silver, ownership of a thousand bushels each of grade A wheat and grade B soybeans, a ton of grade S30040 stainless steel, … . If the purchasing power of a million of my dollars is less than the value of the bundle, it is profitable for people to assemble a million Friedman dollars, exchange them for the bundle, and sell the contents of the bundle–forcing me to make good on my promise and, in the process, reducing the amount of my money in circulation. If the purchasing power of my money is more than the worth of the commodities it trades for, it is in my interest to issue some more money. Since the bundle contains lots of different commodities, random changes in commodity prices can be expected to roughly average out, giving us a stable standard of value.
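The arbitrage that pegs such a currency can be sketched directly. The market prices below are hypothetical, chosen only to make the example concrete; the bundle quantities follow the paragraph above:

```python
# Hypothetical redemption bundle for a million Friedman Dollars,
# with made-up market prices for each commodity.
bundle = {"gold_oz": 10, "silver_oz": 40, "wheat_bu": 1000,
          "soybeans_bu": 1000, "steel_ton": 1}
prices = {"gold_oz": 300.0, "silver_oz": 5.0, "wheat_bu": 3.0,
          "soybeans_bu": 5.0, "steel_ton": 1000.0}

bundle_value = sum(qty * prices[c] for c, qty in bundle.items())

def market_response(value_of_million_fd: float) -> str:
    """What arbitrage does when the money's value drifts off the bundle's."""
    if value_of_million_fd < bundle_value:
        return "redeem"   # buy cheap notes, cash them in for the bundle
    if value_of_million_fd > bundle_value:
        return "issue"    # the issuer profits by printing more notes
    return "hold"

print(bundle_value)              # 12200.0 with these made-up prices
print(market_response(12000.0))  # redeem -- money supply shrinks
print(market_response(13000.0))  # issue  -- money supply grows
```

Either response pushes the purchasing power of the money back toward the value of the bundle, which is the point of the mechanism.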
A commodity bundle is a good theoretical solution to the problem of monetary standards, but implementing it has a serious practical difficulty–all the firms issuing ecash have to agree on the same bundle. If they fail to establish a common standard, we end up with a cyberspace in which different people use different currencies and the exchange rates between them vary randomly.
That is not an unworkable situation–Europeans have lived with it for a very long time–but it is a nuisance. Life is easier if the money I use is the same as the money used by the people I do business with. On that fact our present world system–multiple government moneys, each with a near monopoly within the territory of the issuing government–is built. It works because most transactions are with people near you, and, unless you happen to live next to the border, people near you live in the same country you do. It works less well in Europe than in North America because the countries are smaller--which is why the European countries are moving from national currencies to the Euro.
A system of monopoly government moneys works less well in cyberspace because in cyberspace national borders are transparent. For information transactions, geography is irrelevant–I can download software or digital images from London as easily as from New York. For online purchases of physical objects geography is not entirely irrelevant, since the goods have to be delivered, but less relevant than in realspace shopping. With a system of multiple national currencies, everyone in cyberspace has to juggle multiple currencies in the process of figuring out who has the best price and paying it. The obvious solution is to establish a single standard of value, either by adopting one national currency, probably the dollar, or by establishing a private standard, such as the sort of commodity bundle described above.
That may not be the only solution. The reason that everyone wants to use the same currency as his neighbors is that currency conversion is a nuisance. But currency conversion is arithmetic, and computers do arithmetic fast and cheap. Perhaps, with some minor improvements in the interfaces on which we do online business, we could make the choice of currency irrelevant, permitting multiple standards to coexist.
I live in the U.S.; you live in India. You have goods to sell, displayed on a web page, with prices in rupees. I view that page through my brand new browser–Netscape Navigator v 9.0. One feature of the new browser is that it is currency transparent. You post your prices in rupees but I see them in dollars. The browser does the conversion on the fly, using exchange rates read, minute by minute, from my bank's web page. If I want to buy your goods, I pay in dollar denominated ecash; my browser sends it to my bank which sends rupee denominated ecash to you. I neither know nor care what country you are in or what money you use–it's all dollars to me.
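The browser's job here is a two-step multiplication. A sketch, with a made-up exchange-rate table standing in for the rates the browser would read from the bank's web page:

```python
# Hypothetical exchange-rate table: units of each currency per dollar.
rates_per_dollar = {"USD": 1.0, "INR": 48.0}

def display_price(amount: float, posted_in: str, preferred: str) -> float:
    """Convert a posted price into the buyer's preferred currency."""
    in_dollars = amount / rates_per_dollar[posted_in]
    return round(in_dollars * rates_per_dollar[preferred], 2)

# A 480-rupee price, displayed to an American buyer:
print(display_price(480.0, "INR", "USD"))   # 10.0
```

Nothing here is beyond a 1970s pocket calculator; the hard part is the plumbing that keeps the rate table current and the conversion invisible.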
Currency transparency will be easiest online, where everything filters through browsers anyway. One can imagine, with a little more effort, realspace equivalents. An unobtrusive tag on my lapel gives my preferred currency; an automated price label on the store shelf reads my tag and displays the price accordingly. Alternatively, the price is displayed by a dumb price tag, read by a smart video camera set into the frame of my glasses, converted to my preferred currency by my pocket computer, and written in the air by the heads up display generated by the eyeglass lenses.
As I write, the countries of Europe are in the final stages of replacing their multiple national currencies with the Euro. If the picture I have just painted turns out to be correct, they may have finally achieved a common currency just as it was becoming unnecessary.
We now have three possibilities for ecash. It might be produced by multiple issuers but denominated in dollars or (less probably) some other widely used national money. It might be denominated in some common non-governmental standard of value–gold, silver, or a commodity bundle. It might be denominated in a variety of different standards, perhaps including both national monies and commodities, with conversion handled transparently, so that each individual sees a world where everyone is using his money. Any of these forms of ecash might be produced by private firms, probably banks, or by governments.
During World War II, George Orwell wrote regular articles for Partisan Review, an American magazine. Near the end of the war, he wrote a retrospective in which he discussed what he had gotten right and what wrong. His conclusion was that he was generally right about the way the world was moving, wrong about how fast it would get there. He correctly saw the logical pattern but failed to allow for the enormous inertia of human society.
Similarly here. David Chaum's articles, laying out the groundwork for fully anonymous electronic money, were published in technical journals in the 1980s and summarized in a 1992 article in Scientific American. Ever since then various people, myself among them, have been predicting the rise of ecash along the lines he sketched. While pieces of his vision have become real in other contexts, there is as yet nothing close to a fully anonymous ecash available for general use. Chaum himself, in partnership with the Mark Twain Bank of Saint Louis, attempted to get a semi-anonymous ecash into circulation–one which permitted one party to a transaction to be identified by joint action of the other party and the bank. The effort failed and was abandoned.
One reason it has not happened is that online commerce has only very recently become large enough to justify it. A second reason, I suspect but cannot prove, is that national governments are unhappy with the idea of a widely used money that they cannot control, and so reluctant to permit (heavily regulated) private banks to create such a money. A third and closely related reason is that a truly anonymous ecash would eliminate a profitable form of law enforcement. There is no practical way to enforce money laundering laws once it is possible to move arbitrarily large amounts of money anywhere in the world, untraceably, with the click of a mouse. A final reason is that ecash is only useful to me if many other people are using it, which raises a problem in getting it started.
These factors have slowed the introduction of ecash. I do not think they will stop it. It only takes one country willing to permit it and one issuing institution in that country willing to issue it, to bring ecash into existence. Once it exists, it will be politically difficult for other countries to forbid their citizens from using it and practically difficult, if it is forbidden, to enforce the ban. There are a lot of countries in the world, even if we limit ourselves to ones with sufficiently stable institutions so that people elsewhere will trust their money. Hence my best guess is that some version of one of the monies I have described in this chapter will come into existence sometime in the next decade or so.
You hire someone to fix your roof, and (imprudently) pay him in advance. Two weeks later, you call to ask when he is going to get the job done. After three months of alternating promises and silence, you sue him, probably in small claims court.
Suing someone is a nuisance, which is why you waited three months. In cyberspace it will be even more of a nuisance. The law that applies to a dispute depends, in a complicated way, on where the parties live and where the events they are litigating over happened. A contract made online has no geographical location and the other party might live anywhere in the world. Suing someone in another state is bad enough; suing someone in another country is best left to professionals–who do not come cheap. If, as I suggested in an earlier chapter, the use of online encryption leads to a world of strong privacy, where many people do business without revealing their realspace identity, legal enforcement of contracts becomes not merely difficult but impossible. There is no way to sue someone if you do not know who he is.
Even in our ordinary, realspace lives, however, there is another way of enforcing contracts, and one that is probably more important than litigation. The reason department stores make good on their "money back, no questions asked" promises, and the reason the people who mow my lawn keep doing it once a week even when I am out of town and so unable to pay them, is not the court system. Customers are unlikely to sue a department store, however unreasonable its grounds for refusing to take something back, and the people who mow my lawn are unlikely to sue me, even if I refuse to pay them for their last three weeks of work.
What enforces the contract in both cases is reputation. The department store wants to keep me as a customer, and won't if I conclude that they are not to be trusted. Not only will they lose me, they may well lose some of my friends, to whom I can be expected to complain. The people who mow my lawn do a good job at a reasonable price, such people are not easy to find, and I would be foolish to offend them by refusing to pay for their work.
When we shift our transactions from the neighborhood to the internet, legal enforcement becomes harder. Reputational enforcement, however, becomes easier. The net provides a superb set of tools for collecting and disseminating information–including information about who can or cannot be trusted.
On an informal level, this happens routinely through both Usenet and the Web. Some time back, I heard that my favorite palmtop–a full featured computer, complete with keyboard, word processor, spreadsheet, and much else, that fits in my pocket and runs more or less forever on its rechargeable battery–was available at an absurdly low price from a discount reseller, apparently because the attempt to sell it in the U.S. market had failed and the company that made that attempt was dumping its stock of rebranded Psion Revos (aka Diamond Makos). I went on the web, searched for the reseller, and in the process discovered that it had been repeatedly accused of failing to live up to its service guarantees and was currently in trouble with authorities in several states. The same process works in a somewhat more organized fashion through specialist web pages–MacIntouch for Macintosh users, the Digital Camera Resource Page for consumers of digital cameras, and many more.
For a different version of reputational enforcement online, consider Ebay. Ebay does not sell goods; it sells the service of helping other people sell goods, via an online auction system. That raises an obvious problem. Sellers may be located anywhere and, at least for the goods I have bid on, are quite likely to be located outside the U.S. Most transactions, although not all, involve goods of modest value, so suing for failure to deliver, especially suing someone outside the U.S. for failure to deliver, is rarely a practical option. With millions of buyers and sellers, each individual buyer is not likely to buy many things from any particular seller, so the seller need be only mildly concerned about his reputation with that particular buyer. Why don't all sellers simply take the money and run?
One reason is that Ebay provides extensive support for reputational enforcement. Any time you win an Ebay auction you have the option, after taking delivery, of reporting your evaluation of the transaction–whether the goods were as described and delivered in good condition, and anything else you care to add. Any time you bid on an Ebay auction, you have access to all past comments on the seller, both in summary form and, if you are sufficiently interested, in full. Successful Ebay sellers generally have a record of many comments, very few of them negative.
There are, of course, ways that a sufficiently enterprising villain could try to game the system. One would be by setting up a series of bogus auctions, selling something under one name, buying it under another, and giving himself a good review. Eventually he builds up a string of glowing reviews and uses them to sell a dozen non-existent goods for high prices, payable in advance.
It's possible, but it isn't cheap. Ebay, after all, will be collecting its cut of each of those bogus auctions. The nominal buyers will require many different identities in order to keep the trick from being obvious, which involves additional costs. Meanwhile all the legitimate sellers have to do in order to build up their reputation is honest business as usual. And Ebay itself, in order to maintain its reputation as a good place to buy and sell, attempts in various ways to prevent buyers and sellers from abusing the reputational mechanisms it has created. I am confident, on the basis of no inside information at all, that at least one villain has done it successfully–but there don't seem to be enough to seriously discourage people from using Ebay.
Alternatively, a dishonest seller could try to eliminate competitors by buying goods from them under a false name and then posting (false) negative information about the transaction. That might be worth doing in a market with only a few sellers–and for all I know it has happened. But in the typical Ebay market, with many sellers as well as many buyers, defaming one competitor merely transfers the business to another.
While a relatively informal sort of reputational enforcement, along the lines of what Ebay currently provides, is adequate for many purposes, it would be useful to have systems that are harder to cheat on. Before looking at how they might work, it is worth thinking a little more about the logic of reputational enforcement.
Criminal law and tort law exist, in large part, as ways of punishing bad behavior. In the case of reputational enforcement, in contrast, punishment is not the objective, merely an indirect consequence. Consider an (imaginary) example:
The news that Charley bought an expensive suit jacket at the local department store, his wife made him take it back, and they refused to return his money, gives me no reason to want to punish the store. Ever since Charley told me what he really thought of my latest book, I have regarded his misfortunes as no more than he deserves. As the story spreads, more and more people stop shopping at that particular store. The reason is not that we wish to punish them--Charley's unfortunate habit of telling people what he really thinks has left him few friends. The reason is to protect ourselves. We too might some day buy something our wives disapproved of. Reputational enforcement works by spreading true information about bad behavior. People who receive that information modify their actions accordingly, which imposes costs on those who have behaved badly.
As this example suggests, one thing determining how well reputational enforcement works is the ability of interested third parties to get information about who cheated whom. To see this, suppose we change the story a little by making Charley not merely tactless but routinely dishonest. Now when he complains that the store refused to take the jacket back even though it was in good condition, we conclude that his idea of good condition probably included multiple ink stains and a missing sleeve, due to his wife's reaction to how he had been wasting their money–we know her too–and we continue patronizing the store.
One reason information costs are important is that if interested third parties do not know who is at fault, they do not know whom to avoid dealing with in the future. A more subtle reason is that if third parties cannot easily find out who is at fault in a dispute, the dispute may never become public. If I accuse you of swindling me, you will of course deny it. Reasonable third parties, unable to check either side's claims, conclude that at least one of us is a crook. They have no way of finding out which, and it is therefore prudent to avoid both. Anticipating that result, I decide not to make my accusation public in the first place. So reputational enforcement requires a framework that makes it easy for interested third parties to determine who is at fault.
You and I make an agreement and specify the private arbitrator who will settle disagreements over its terms. Such a disagreement occurs; you demand arbitration. The arbitrator gives a verdict. If I refuse to go along, the arbitrator can make that fact public. An interested third party, typically another firm in the same industry, does not have to know the facts of the dispute to know who is at fault. All it has to know is that both of us agreed to the arbitrator and that the arbitrator we agreed to says that I reneged on that agreement.
This works well within an industry because the people involved know each other and are familiar with the industry's institutions for settling disputes. It works less well for disputes between a firm and one of its many customers–because other customers, unless they too are part of the industry, are unlikely to know enough about the institutions to be confident who was cheating whom. What about in cyberspace?
You and I agree to a contract online. The contract contains the name of the arbitrator who will resolve disputes and his public key–the information necessary to check his digital signature. We both digitally sign the contract and each keeps a copy.
A dispute arises; you accuse me of failing to live up to my agreement and demand arbitration. The arbitrator rules for you and instructs me to pay you five thousand dollars in damages. I refuse. The arbitrator writes his account of how the case came out–he awarded damages, I refused to pay them. He digitally signs it and sends you a copy.
You now have a package–the original contract and the arbitrator's verdict. My digital signature on the original contract proves that I agreed to that arbitrator; his digital signature on the verdict proves that I reneged on that agreement. That is all the information that an interested third party needs in order to conclude that I am not to be trusted.
You put the package on a web page, with my name all over it for the benefit of any search engines looking for information about me, and email the URL to anyone you think might want to do business with me in the future. Anyone who accesses the page can check the facts–more precisely, his computer can check the facts for him, by checking the digital signatures–in something under a second. Having done that, he knows that I am the one who reneged on the agreement. The most likely explanation is that I am dishonest. An alternative possibility is that I was fool enough to agree to a crooked arbitrator–but he probably doesn't want to do business with fools either. Thus the technology of digital signatures makes it possible to reduce information costs to third parties to something very close to zero, making possible effective reputational enforcement online.
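The two-step check an interested third party performs can be sketched in a few lines of code. What follows is a toy illustration using textbook RSA with tiny primes – completely insecure and purely illustrative, with the keys, names, and messages all invented for the example; a real system would rely on a vetted cryptographic library.

```python
import hashlib

# Textbook RSA with tiny primes: purely illustrative, NOT secure.
def make_keypair(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # private exponent
    return (n, e), (n, d)               # (public key, private key)

def digest(message, n):
    # Hash the message and reduce it into the modulus range.
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message, priv):
    n, d = priv
    return pow(digest(message, n), d, n)

def verify(message, sig, pub):
    n, e = pub
    return pow(sig, e, n) == digest(message, n)

arb_pub, arb_priv = make_keypair(61, 53)   # the arbitrator's keys
my_pub, my_priv = make_keypair(89, 97)     # my keys

# Step 1: I sign the contract naming the arbitrator.
contract = "Disputes under this contract go to Arbitrator X"
my_sig = sign(contract, my_priv)

# Step 2: after I renege, the arbitrator signs his account of the outcome.
verdict = "Verdict: the seller was ordered to pay $5000 and refused"
arb_sig = sign(verdict, arb_priv)

# A third party needs only these two checks to know who is at fault.
assert verify(contract, my_sig, my_pub)    # I agreed to that arbitrator
assert verify(verdict, arb_sig, arb_pub)   # that arbitrator says I reneged
```

Each `verify` call is the "one calculation" of the text: the first proves I agreed to the arbitrator, the second proves the arbitrator says I reneged.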
Private enforcement of contracts along these lines solves the problems raised by the fact that cyberspace spans many geographical jurisdictions. The relevant law is defined not by the jurisdiction but by the private arbitrator chosen by the parties. Over time, we would expect one or more bodies of legal rules with regard to contracts to develop, much as the common law historically developed, with many different arbitrators or arbitration firms adopting the same or similar legal rules. Contracting parties could then choose arbitrators on the basis of reputation.
For small scale transactions, you simply provide your browser with a list of acceptable arbitration firms; when you contract with another party, the software picks an arbitrator from the intersection of the two lists. If there exists no arbitrator acceptable to both parties, the software notifies both of you of the problem and you take it from there. For larger transactions, the choice of arbitrator is one of the things that the human beings negotiating the contract can bargain over.
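For the small-scale case, the arbitrator-picking step is nothing more than a set intersection. A minimal sketch, with the firm names invented for the example:

```python
# Each browser carries its owner's list of acceptable arbitration firms
# (hypothetical names); the software contracts only if the lists overlap.
my_arbitrators = {"NetJustice", "CyberCourt", "AAA Online"}
your_arbitrators = {"AAA Online", "LexArb", "NetJustice"}

mutually_acceptable = sorted(my_arbitrators & your_arbitrators)
if mutually_acceptable:
    chosen = mutually_acceptable[0]   # any deterministic or random pick works
else:
    chosen = None   # notify both parties; the humans take it from there

print(chosen)   # → "AAA Online"
```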
Private enforcement also solves the problem of enforcing contracts when at least one of the parties is, and wishes to remain, anonymous. Digital signatures make it possible to combine anonymity with reputation. A computer programmer living in Russia or Iraq, where anonymity is the only way of protecting his income from private or public bandits, has an online identity defined by his public key; any message signed by that public key is from him. That identity has a reputation, developed through past online transactions; the more times the programmer has demonstrated himself to be honest and competent, the more willing people will be to employ him. The reputation is valuable, so the programmer has an incentive to maintain it–by keeping his contracts.
There is one way in which the online world I have been describing makes contract enforcement harder than in the real world. In the real world, my identity is tied to a particular physical body, identifiable by face, fingerprints, and the like. I do not have the option, after destroying my realspace reputation for honesty, of spinning off a new me, complete with new face, new fingerprints, and an unblemished reputation.
Online I do have that option. As long as other people are willing to deal with cyberspace personae not linked to realspace identities, I always have the option of rolling up a new public key/private key pair and going online with a new identity and a clean reputation.
It follows that reputational enforcement will only work for people who have reputations–sufficient reputational capital so that the cost of abandoning the current online persona and its reputation outweighs the gain from a single act of cheating. Someone who wants to deal anonymously in a trust intensive industry may have to start small, building up his reputation to the point where its value is sufficient to make it rational to trust him with larger transactions. The same thing happens today in industries where enforcement is primarily through reputational mechanisms.
The problem of spinning off new identities is not limited to cyberspace. The realspace equivalent of rolling up a new pair of keys is filing a new set of incorporation papers. Marble facing for bank buildings and expensive advertising campaigns can be seen as ways in which a new firm posts a reputational bond in order to persuade those who deal with it that they can trust it to act in a way that will preserve its reputation. Cyberspace personae do not have the option of marble, at least if they want to remain anonymous, but they do have the option of investing in a long series of transactions or in other costly activities, such as advertising or well publicized charity, in order to establish a reputation that will bond their future performance.
What about entities–firms or individuals–that are not engaged in long term dealings and so neither have a valuable reputation nor are willing to pay to acquire one? How are they to guarantee their contractual performance in this world?
One solution is to piggyback on the reputation of another entity that is engaged in such dealings. Suppose I am an anonymous online persona forming a contract that it might later be in my interest to break. How, absent a reputation, do I persuade the other party that I will keep my word? What is to keep me from making the contract, agreeing to an arbitrator, breaking the contract, ignoring the arbitrator's verdict, and walking off with my gains, unconcerned by the damage to my nonexistent reputation?
I solve the problem by offering to post a performance bond with the arbitrator—in anonymous digital currency. The arbitrator is free to allocate all or part of the bond to the other party as damages for breach. This approach–taking advantage of a third party with reputation–is not purely hypothetical. Purchasers on Ebay at present can supplement direct reputational enforcement with the services of an escrow agent–a trusted third party that holds the buyer's payment until the goods have been inspected and then releases it to the seller.
This approach still depends on reputational enforcement, but this time the reputation belongs to the arbitrator. With all parties anonymous, he could simply steal bonds posted with him–but if he does, he is unlikely to stay in business very long. If I am worried about such possibilities, I can require the arbitrator to sign a contract specifying a second and independent arbitrator to deal with any conflicts between me and the first arbitrator. My signature to that agreement is worth very little, since it is backed by no reputation—but the signature of the first arbitrator to a contract binding him to accept the judgment of the second arbitrator is backed by the first arbitrator’s reputation.
If the arguments I have offered are correct, we can expect the rise of online commerce to produce a substantial shift towards private law privately enforced via reputational mechanisms. While the shift should be strongest in cyberspace, it ought to be echoed in realspace as well. Digital signatures lower information costs to interested third parties whether the transactions being contracted over are occurring online or not. And the existence of a body of trusted online arbitrators will make contracting in advance for private arbitration more familiar and reliance on private arbitration easier for realspace as well as cyberspace transactions.
When I was little, one of my favorite adults was a friend of my parents named Dorothy Brady. One reason was her habit of bringing small gifts for myself and my sister when she came to visit. A more important reason was that she was always doing interesting things.
One of her projects involved apple peeling machines--the gadgets that you stick an apple on, turn a handle, and--if all goes well--end up with a peeled, cored and sometimes even sliced apple. The conclusion of her research--done by exploring New England museums--was that over a period of about two hundred years the design stayed the same but the materials changed. The earlier you went back, the more of the machine was made of wood and the less of metal.
In real life Dorothy was an economic historian; in addition to giving her an excuse to poke around museums, her research provided an example of a very common pattern in economic history. How people do things depends on the relative costs of the alternatives. When metal is expensive, wood and the labor to shape it cheap, you make things mostly out of wood, use metal only where it is essential. As steel gets less and less expensive relative to wood and labor, people shift to using more and more of it.
This chapter is about a newer example of the same logic. The technology of the internet reduces the cost of doing business with people far away--so we do more of it. It used to be that, as a practical matter, I only bought things from England when I was in England. Today buying a book from England is only marginally more trouble than buying it from the local Barnes and Noble. Routinely doing business with people far away raises the cost of settling disputes by use of the government court system, since the jurisdiction of courts is in large part based on geography.
Modern communications technology makes sharing information much easier than it used to be and encryption technology, in the form of digital signatures, does the same for verifying the shared information. You no longer have to check your informant's reputation and biases or look over the evidence to make sure nobody has tinkered with it. One calculation tells you a verdict came from the arbitrator it says it came from; one more tells you that that arbitrator was the one I agreed to accept. I agreed to accept his verdict, he says I reneged on that agreement, case closed.
Government courts and private reputation are alternative ways of achieving the same objective--making people keep their word. The cost of using government courts has gone up. The cost of information to interested third parties--the key ingredient in private enforcement through reputation--has gone down. The predictable result is a shift away from the one means and towards the other.
Find an apple peeler in a kitchen gadget catalog. The handle might be wood--or plastic. The rest will be steel.
Authors expect to be paid for their work. So do programmers, musicians, film directors, and lots of other people. If they cannot be paid for their work, we are likely to have fewer books, movies, songs, programs.
This creates a problem if what is produced can be inexpensively reproduced. Once it is out there, anyone who has a copy can make a copy, driving the price of copies down to the cost of reproducing them. Copyright law is an attempt to solve that problem by giving the creator of a work the legal right to control the making of copies. How well it works depends on how easily that right can be enforced.
To enforce his legal rights, the owner of a copyright has to be able to discover illegal copying and take legal action against those responsible. How easy that is depends in large part on the technology of copying.
Consider the old fashioned printing press, c. 1910. It was large and expensive; printing a book required first setting hundreds of pages of type by hand. That made it much less expensive to print ten thousand copies of a book on one press than a hundred copies each on a hundred different presses. Since nobody wanted ten thousand copies of a book for himself, a producer had to find customers–lots of customers. Advertising the book, or offering it for sale in bookstores, brought it to the attention of the copyright owner. If he had not authorized the copying, he could locate the pirate and sue.
Enforcement becomes much harder if copying is practical on a scale of one or a few copies–the current situation for digital works such as computer programs, digitized music, or films on DVD. Individuals making a copy for themselves or a few copies for friends are much harder to locate than mass market copiers. Even if you can locate them, it is harder to sue ten thousand defendants than one. Hence, as a practical matter, firms mostly limit the enforcement of their copyright to legal action against large scale infringers.
The situation is not entirely hopeless from the standpoint of the copyright holder. If the product is a piece of software widely used in business–Microsoft Word, for example–there will be organizations that use, not one copy, but thousands. If they choose to buy one and produce the rest themselves, someone may notice–and sue.
Even if copying can be done on a small scale, there remains the problem of distribution. If I get programs or songs by illegally copying them from my friends I am limited to what my friends have, which may not include what I want. I may prefer to buy from distributors providing a wide range of alternatives–and they, being potential targets for infringement suits, have an incentive to buy what they sell legally rather than produce it themselves illegally. So even in a world where many expensive works in digital form–Word, for example–can easily be copied, the producers of such works can still use copyright law to get paid for some of what they produce.
Or perhaps not. As has now been demonstrated with MP3s, distribution over the Internet makes it possible to combine individual copying with mass market distribution, using specially designed search tools to find the individual who happens to have the particular song you want and is willing to let you copy it. A centralized distribution system is vulnerable to legal attack, as Napster discovered. But shutting down a decentralized system such as Gnutella or Freenet, which allows individuals on the net to make their music collections available for download in exchange for the ability to download songs from other people's collections, is a more difficult problem. If each user is copying one of your songs once, but there are a hundred thousand of them, can you sue them all?
Perhaps you can–if you take proper advantage of the technology. A decentralized system must provide some way of finding someone who has the song you want and is willing to share it. Copyright owners might use the same software to locate individuals who make their works available for copying–and sue all of them, perhaps in a suit that joins many defendants. Since copyright law sets a $750 statutory minimum for damages, suing ten thousand individuals each of whom has made one copy of your copyrighted work could, in principle, bring in more money than suing one individual who had made ten thousand copies.
So far as I know, it has not yet been tried. Under current procedural rules it is hard to force multiple defendants into a single suit–but one could imagine modifications in the relevant legal rules, perhaps applicable only to copyright suits, that would change that situation. And under current law it is unclear whether noncommercial file exchanges are illegal–although that situation might be changed by Congress or the courts.
While this approach might work for a while, its long run problems should be clear from the earlier discussion of strong privacy. A well designed decentralized system would locate someone willing to let you copy a song but would not identify him. You do not need name, face or social security number in order to copy the file encoding the song you want, merely some way of getting messages to and from him.
There remains, for some forms of intellectual property, the possibility of collecting royalties from business customers–corporations that use Word, movie theaters performing movies. In the longer run, even that option may shrink or vanish. A world where strong privacy is sufficiently universal would permit virtual firms–groups of individuals linked via the net but geographically dispersed and mutually anonymous. Even if all of them use pirated copies of Word–or whatever the equivalent is at that point–no whistle blower can report them because nobody, inside or outside the firm, knows who they are.
Consider the problem in a different context–images on the world wide web. Each image originated somewhere and may well belong to someone. But once webbed, anyone can copy it. Not only is it hard for the copyright owner to prevent illegal copying, it may be hard for even the copier to prevent illegal copying, since he may not know who the image belongs to or whether it has been put in the public domain.
An increasingly popular way of dealing with these problems is digital watermarking. Using special software, the creator of the image embeds in it concealed information identifying him and claiming copyright. In a well designed system, the information has no noticeable effect on how the image looks to the human eye and is robust against transformation–meaning that it is still there after a user has converted the image from one format to another, cropped it, edited it, perhaps even printed it out and scanned it back in.
Digital watermarking can be used in a number of different ways. The simplest is by embedding information in an image and making the software necessary to read the information widely available. That lowers the cost to users of avoiding infringement, by making it easy for them to discover that an image is under copyright and who the copyright owner is. It raises the cost of committing infringement, at least on the web, since search engines can search the web for copyrighted images and report back to the copyright owner—who checks to see if the use was licensed and if not takes legal action. The existence of the watermark will help him prove both that the image is his and that the user knew or should have known it was his, hence is liable for not only infringement but deliberate infringement.
A deliberate infringer might try to remove the watermark while preserving the image. A well designed system can make this more difficult. But as long as the watermark is observable, the infringer can try different ways of removing it until he finds one that works. And making software for reading the watermark publicly available makes it harder to keep secret the details of how it works, hence easier to design software to defeat it. So this form of watermark provides protection against inadvertent infringement, raises the cost of deliberate infringement–the infringer must go to some trouble to remove the watermark–but cannot prevent or reliably detect deliberate infringement.
The obvious solution is an invisible watermark–designed to be read only by special software not publicly available. That is of no use for preventing inadvertent infringement but substantially raises the risks of deliberate infringement, since the infringer can never be sure he has successfully removed the watermark. By imprinting an image with both a visible and an invisible watermark, the copyright holder could get the best of both worlds–provide information for those who do not want to infringe and a risk of detection for those who do.
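The embedding idea can be illustrated with the simplest (and least robust) scheme: hiding a message in the least significant bits of an image's pixel values. This is only a sketch of the principle – a real watermark must survive cropping, reformatting, and rescanning, which this one would not – and all the names and data are invented for the example.

```python
# Illustrative least-significant-bit watermark over a list of 8-bit pixel
# values. NOT a robust scheme; it only shows the embedding idea.
def embed(pixels, mark):
    bits = [int(b) for byte in mark.encode() for b in format(byte, "08b")]
    if len(bits) > len(pixels):
        raise ValueError("image too small for watermark")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite the lowest bit only
    return out

def extract(pixels, length):
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))
    return data.decode()

image = [200, 201, 199, 198] * 20          # stand-in for real pixel data
marked = embed(image, "(c)DF")
assert extract(marked, 5) == "(c)DF"
# To the eye the change is invisible: no pixel moved by more than 1.
assert all(abs(a - b) <= 1 for a, b in zip(image, marked))
```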
There is another way in which watermarking could be used to enforce copyright, in a somewhat different context. Suppose we are considering, not digital images, but computer programs. Further suppose that enforcing copyright law against the sellers of pirated software is not an option–they are located outside of the jurisdiction of our court system, doing business anonymously, or both.
Even if the sellers of pirated copies of our software are anonymous, the people who originally bought the software from us are not. When we sell the program, each copy has embedded in it a unique watermark–a concealed serial number, sometimes referred to as a digital fingerprint. We keep a record of who got each copy and make it clear to our customers that permitting their copy of the program to be copied is a violation of copyright law for which we will hold them liable. If copies of our software appear on pirate archives we buy one, check the fingerprint, and sue the customer from whose copy it was made.
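The bookkeeping this requires is minimal. A sketch, with the names and the form of the serial number invented for the example:

```python
import secrets

# Seller-side records: which customer received which fingerprinted copy.
sales_record = {}   # serial -> customer

def sell_copy(customer):
    serial = secrets.token_hex(8)       # the embedded "fingerprint"
    sales_record[serial] = customer     # remember who got this copy
    return serial                       # embedded invisibly in the copy sold

def trace_leak(serial_found_in_pirated_copy):
    # Buy a pirated copy, read out its fingerprint, look up the customer.
    return sales_record.get(serial_found_in_pirated_copy, "unknown source")

s_alice = sell_copy("Alice")
s_bob = sell_copy("Bob")
assert trace_leak(s_bob) == "Bob"   # the customer whose copy was pirated
```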
Digital watermarking is one example of a new technology that can be used to get back at least some of what other new technologies took away. The ease of copying digital media made enforcement of copyright harder–at first glance, impossibly hard–by enabling piracy at the individual level. But the ability of digital technologies to embed invisible, and potentially undetectable, information in digital images, combined with the ability of a search engine to check a billion web pages looking for the one that contains an unlicensed copy of a watermarked image, provide the possibility of enforcing copyright law against individual pirates. And the same technology, by embedding the purchaser's fingerprint in the purchased software, provides a potential way of enforcing copyright law even in a world of strong privacy–not against anonymous pirates or their anonymous customers but against the known purchaser from whom they got the original to copy.
While these are possible solutions, there is no guarantee that they will always work. Invisible watermarking is vulnerable to anyone sufficiently ingenious–or with sufficient inside information–to crack the code, to figure out how to read the watermark and remove it. The file representing the image or program is in the pirate's hands. He can do what he wants with it–provided he can figure out what needs to be done.
An individual who wants to copy images or software is unlikely to have the expertise to figure out how to remove even visible watermarks, let alone invisible ones. To do so he needs the assistance of someone else who does have that expertise–most readily provided in the form of software designed to remove visible watermarks and identify and remove invisible ones. That raises the possibility of backstopping the technological solution of digital watermarks with legal prohibitions on the production and distribution of software intended to defeat it. That is precisely the approach used by the recent–and highly controversial–Digital Millennium Copyright Act. It bans software whose purpose is to defeat copyright management schemes such as digital watermarking. How enforceable that ban will be, in a world of networks and widely available encryption, remains to be seen.
Each of the approaches to enforcing copyright that I have been discussing has serious limitations. The use of digital fingerprints to identify the source of pirated copies only works if the original sale is sufficiently individualized so that the seller knows the identity of the buyer–and while it would be possible to sell all software that way, it would be a nuisance. Perhaps more important, the approach works very poorly for software that is expensive and widely used. One legitimate copy of Word could be the basis for ten million illegitimate copies, giving rise to a claim for a billion dollars or so in damages–and if Microsoft limits its sales to customers both capable of satisfying such a claim and willing to put that much money at risk, it will not sell very many copies of Word. The use of digital watermarks to identify pirated copies only works if the copies are publicly displayed–for digital images on the web but not for a pirated copy of Word on my hard drive. These limitations suggest that producers of intellectual property have good reason to look for other ways of protecting it.
One way of solving these problems would be to make my hard drive public–to convert cyberspace, at least the parts of it residing on hardware under the jurisdiction of U.S. courts, into a transparent society. My computer is both a location in cyberspace and a physical object in realspace; in the latter form it can be regulated by a realspace government, however good my encryption is. One can imagine, in a world run by copyright owners, a legal regime that required all computers to be networked and all networked computers to be open to authorized search engines, designed to go through their hard drives looking for pirated software, songs, movies, or digital images.
I do not think such a legal regime will be a politically viable option in the U.S. anytime in the near future, although the situation might be different elsewhere. There are, however, private versions that might be more viable, technologies permitting the creator of intellectual property to make it impossible to use it save on computers that meet certain conditions–one of which could be transparency to authorized agents of the copyright holder.
For a much simpler version of the same approach, consider possible copyright enforcement strategies if each computer’s central processing unit has a built-in serial number, unique to that particular computer. A software company customizes each copy of its product to run on a single computer, identified by the serial number of its CPU. The user can freely make backups. The user can give copies to his friends. But the copies will only run on his computer. Unless, of course, someone figures out a way to either modify the part of the program that checks the serial number or modify other software, perhaps part of the computer's operating system, to lie to the program about what its serial number is.
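One simple way to implement such node-locking is for the vendor to issue each buyer a license token derived from his machine's serial number, which the program checks before running. A sketch under invented assumptions – the identifier format and the key-derivation string are made up, and a real scheme would read an actual hardware serial:

```python
import hashlib

def license_for(machine_id):
    # Vendor-side: derive a license token bound to this machine's serial.
    return hashlib.sha256(("wordprocessor-v1:" + machine_id).encode()).hexdigest()

def may_run(machine_id, license_token):
    # Client-side check baked into the program at the time of sale.
    return license_for(machine_id) == license_token

token = license_for("CPU-1234-5678")        # issued when the copy is sold
assert may_run("CPU-1234-5678", token)      # runs on the buyer's machine
assert not may_run("CPU-9999-0000", token)  # a copied token fails elsewhere
```

Of course, as the text notes, this only holds until someone patches the check or lies to the program about the serial number.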
Most readers would regard the idea of enforcing the terms of a software license by allowing a human being to randomly search their hard drive as outrageous, but might react very differently to the idea of allowing a program on their computer to check their CPU to see what its serial number is. Some may be worried about the problems that will arise if they get a new computer and want to transfer their old software to it. But nobody is likely to see such a system as an intolerable violation of privacy.
The two approaches appear very different--but consider something halfway between. Your hard drive must be open to searches--but the searches may be done only by computer programs. The only information the programs are capable of reporting to a human being is the fact that they found copyrighted software on your drive that you are not entitled to--at which point the copyright holder can go to court to ask for legal authority to look at your hard drive.
The issue raised by these examples--to what degree does being spied on by a machine violate your privacy--is one we will return to in a later chapter, where we consider the implications of using computers instead of human beings to listen to phone taps.
If using technology to enforce copyright law in a world of easy copying is not always workable, perhaps we should instead use technology to replace copyright law. If using the law to keep trespassers and stray cattle off my land doesn't work, perhaps I should build a fence.
You have produced a collection of songs and wish to sell them online. To do so, you digitize the songs and insert them in a cryptographically protected container–what Intertrust, one of the pioneering firms in the industry, called a digibox. The container is a piece of software that protects the contents from unauthorized access while at the same time providing, and charging for, authorized access. Once the songs are safely inside the box you give away the package by making it available for download on your web site.
I download the package to my computer; when I run it I get a menu of choices. If I want to listen to a song once, I can do so for free. Thereafter, each play costs five cents. If I really like the song, fifty cents unlocks it forever, letting me listen to it as many times as I want. Payment is online by ecash, credit card, or an arrangement with a cooperating bank.
It may have occurred to you that there is a flaw in the business plan I have just described. The container provides one free play of each song. In order to listen for free, all the customer has to do is make lots of copies of the container and use each once. Alternatively, if I want to make copies for friends, I can pay fifty cents once to unlock the file and make copies—unlocked copies—for them. It might be prudent for the digibox to have some way of making sure that the computer it is running on is the same as the computer it was unlocked on.
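The pricing logic just described, a free first play, five cents a replay, fifty cents to unlock, and binding the unlocked copy to one machine, can be sketched in a few lines. This is a toy illustration under stated assumptions, not Intertrust's actual design; the payment hook and the machine identifier are stand-ins.

```python
import uuid

MACHINE_ID = uuid.getnode()  # stand-in for a CPU serial number

class DigiBox:
    """Toy protected container: first play free, five cents per
    replay, fifty cents to unlock forever. An unlocked box is
    bound to the machine it was unlocked on."""

    def __init__(self, song: str):
        self.song = song
        self.plays = 0
        self.unlocked_on = None          # machine id, once unlocked

    def play(self, pay) -> str:
        if self.unlocked_on == MACHINE_ID:
            return self.song             # unlocked on this machine
        if self.plays == 0:
            self.plays += 1
            return self.song             # first play is free
        pay(5)                           # five cents per replay
        self.plays += 1
        return self.song

    def unlock(self, pay):
        pay(50)                          # fifty cents, once
        self.unlocked_on = MACHINE_ID    # bind to this computer
```

Note that the machine binding defeats the "unlock once, copy for friends" strategy but also creates the new-computer problem mentioned earlier; a real design would need some transfer mechanism.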
Making a new copy every time you play a song is a lot of trouble to go to in order to save five cents. Intertrust does not have to make it impossible to defeat its protection, whether in that simple way or in more complicated ways, in order for it and the owners of the intellectual property it protects to make money. It only has to make defeating the protection more trouble than it is worth.
As in the case of digital watermarking, how easy it is to defeat the protection depends very largely on who is doing it. The individual customer is unlikely to be expert in programming or encryption, hence unlikely to be able to defeat even simple forms of technological protection. The risk comes from the person who is an expert and makes his expertise available, cheaply or for free, in the form of software designed to crack the protection.
One approach to dealing with that problem is by making it illegal to create, distribute, or possess such software–the strategy put into law by the Digital Millennium Copyright Act. That law currently faces legal challenges by plaintiffs who argue that publishing information, including information about how to defeat other people's software, is free speech, hence protected. Even if the court declines to protect that particular sort of speech, the arguments of an earlier chapter suggest that in the online world free speech may itself be technologically protected–by the wide availability of encryption and computer networks–making the relevant parts of the DMCA in the long run unenforceable.
If law cannot provide protection, either against piracy or against computerized safecracking tools designed to defeat technological protection, the obvious alternative is technological–safes that cannot be cracked. Is that possible?
For some forms of intellectual property--songs, for example--it is not. However strong the digibox, at some point in the process the customer gets to play the song–that, after all, is what he is paying for. But if a customer is playing a song on his own computer in his own home, he can also be playing it into his own tape recorder–at which point he has a copy of the song outside the box. If he prefers an MP3 to a cassette he can play the song back to the computer, digitize it, and compress it. If he wants to preserve audio quality, he can short circuit the process, feeding the electrical signals from his computer to his speakers back into the computer to be redigitized and recompressed. A similar approach could be used to hijack a book, video or any other work that is presented to the customer in full when he uses it. Technological protection may make the process of getting the work out of the digibox and into some usable form a considerable nuisance–but once one person has done it, in a world where copyright law is difficult or impossible to enforce, the work is available to all. Short of making everybody's hard disk searchable, the only way of protecting works of this kind is to limit their consumption to a controlled environment–showing the video in a movie theater with video cameras banned, for instance.
For other sorts of works, secure protection may be a more serious option. Consider, for example, an (imaginary) database compiled by Consumer Reports, designed to advise a user what car to buy. A query describes price range, preferences, and a variety of other relevant information. The answer is a report tailored to that particular customer.
Having received the report, he can copy it and give it to his neighbor. But his neighbor is unlikely to want it, since he is unlikely to have all the same tastes, circumstances, and constraints. What the neighbor wants is his own customized report–which requires that he make his own payment.
With enough time, energy, and money, a pirate could ask a million questions and use the answers to reverse engineer the protected data–but why should he? The pirate can give away what he steals, he can use it himself, but he has only a very limited ability to sell it. As long as the protection raises the cost of reconstructing the database high enough, it should be reasonably safe. For a real world example of almost precisely that strategy, consider Lexis and Westlaw, the legal databases on which lawyers and legal academics rely. There is, in practice, nothing to keep me from downloading a law case from Lexis and then passing it on to a colleague who has not paid for the privilege—but the odds that my colleague is looking for the same case I am are low.
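A toy version of the imaginary Consumer Reports database shows why this works: the full table lives only on the protected server, and each query returns a short report tailored to one customer rather than the underlying data. All names and numbers here are hypothetical.

```python
# Hypothetical data; in a real system it would never leave the server.
CARS = [
    {"model": "Thrifty 100", "price": 15000, "seats": 5, "mpg": 40},
    {"model": "Hauler XL",   "price": 32000, "seats": 7, "mpg": 22},
    {"model": "Zipster",     "price": 21000, "seats": 2, "mpg": 35},
]

def recommend(max_price: int, min_seats: int) -> str:
    """Answer one customer's query with a tailored report,
    not with the data the report is computed from."""
    matches = [c for c in CARS
               if c["price"] <= max_price and c["seats"] >= min_seats]
    if not matches:
        return "No car in the database fits your constraints."
    best = max(matches, key=lambda c: c["mpg"])
    return f"Best fit: {best['model']} at ${best['price']}."
```

Copying one answer to a neighbor is useless unless the neighbor shares your price range and family size; reconstructing the whole table would take a query per combination of constraints, which is the cost the protection relies on.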
For a different approach to the problem of protecting intellectual property, consider a program that does something very useful–high quality speech recognition, say. I divide it into two parts. One, which contains most of the code and does most of the work, I give away to anyone who wants it. The rest, including the key elements that make my program special, resides on my server. In order for the first part to work, it must continually exchange messages with the second part–access to which I charge for by the minute.
One elegant feature of this solution is that the disease is also the cure. Part of what makes copyright unenforceable is the ready availability of high speed computer networks, enabling the easy distribution of pirated software. But high speed computer networks are precisely what you need for the form of protection I have just described, since they allow me to make software on my server almost as accessible to you as software on your hard disk–and charge for it.
Putting together everything in this chapter, we have a picture of intellectual property protection in a near future world of widely available high speed networks, encryption, and easy copying. Intellectual property used publicly, such as images on the web, can be legally protected provided it is not valuable enough to make it worth going to the trouble of removing hidden watermarks and provided also that it is being used somewhere that copyright law can reach. That second proviso means that if we move all the way to a world of strong privacy such protection vanishes, since copyright law is useless if you cannot identify the infringer. But even in that world, some intellectual property can be protected by fingerprinting each original and holding the purchaser liable for any copies made from it.
Where intellectual property cannot be protected by law, it may still be possible to protect it by technology. That approach is of limited usefulness for works that must be entirely revealed every time they are accessed, such as a song. It may work considerably better for more complicated works, such as a database or a computer program. For both sorts of works, protection will be easier if it is practical to use the law to suppress software designed to defeat it–but it probably won't be.
Does this mean that, in the near future, songs will stop being sung and novels stop being written? That is not likely. What it does mean is that those who produce that sort of intellectual property will have to find ways of getting paid that do not depend on control over copying. For songs, one obvious possibility is to give away the digitized version and charge for concerts. Another is to rely on the generosity of fans–in a world where it will be easy to email a ten cent appreciation to the creator of the song you have just enjoyed. A third is to give away the song along with a digitally signed thank you to the firm that paid you to write it–and hopes to profit from your fans' goodwill.
Similar options are available for authors. The usual royalty payment for a book is between five and ten percent of its face value. Many readers may be willing to voluntarily pay the author that much in a world where the physical distribution of books is essentially costless. Other books will get written in the same way that articles in academic journals are written now–to spread the author's ideas or to build up a reputation that can be used to get a job, or consulting contracts, or speaking opportunities.
Several chapters back I raised the possibility of treating transactional information as private property, with ownership allocated by agreement at the time of the transaction. Such information is a form of intellectual property and can be protected by the same technologies we have just discussed.
Suppose, for example, that you are happy to receive catalogs in the mail (or email) but do not want strangers to be able to compile enough information about you to enable identity theft, spot you as a target for extortion, or in other ways use your personal information against you. You achieve both objectives by making personal information generated by your transactions–purchases, employment, car rental, and the like–available only in a very special sort of database. The database allows users to create address lists of people who are likely customers for what they are selling but does not allow them to get individualized data about those people. It will be distributed inside a suitably designed and cryptographically protected container or on a protected server, designed to answer queries but not to reveal the underlying data. If the catalogs are going out by email, the database is combined with a forwarding service. One copy of the catalog goes to the service, along with suitable payment, and a thousand copies from there to a thousand email addresses—none of which need be revealed to the catalog company.
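A sketch of such a forwarding service, with hypothetical addresses and interests: the catalog company can learn how large its audience is and pay to have catalogs delivered, but individual addresses never leave the service.

```python
# Hypothetical private data, held only by the forwarding service.
_SUBSCRIBERS = {
    "alice@example.net": {"gardening"},
    "bob@example.net":   {"gardening", "fishing"},
    "carol@example.net": {"fishing"},
}

def audience_size(interest: str) -> int:
    """The only query a mailer may ask: how many likely customers?"""
    return sum(interest in tags for tags in _SUBSCRIBERS.values())

def send_catalog(interest: str, catalog: str, deliver) -> int:
    """Forward one catalog to every matching address and return the
    count. `deliver` stands in for the mail system; the addresses
    stay inside the service."""
    n = 0
    for addr, tags in _SUBSCRIBERS.items():
        if interest in tags:
            deliver(addr, catalog)
            n += 1
    return n
```

The mailer pays per delivery, the subscribers get only catalogs they are plausibly interested in, and nobody accumulates a dossier.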
The information in the database was created by your transactions. In the highest tech version, you conduct all of them anonymously, so nobody but you has the information to start with, and you can control who gets it thereafter. In a lower tech version, both you and the seller start with the information–the fact that he sold you something–but he is contractually obliged to erase the record once the transaction is complete. In either version, you arrange for the information to be available only within the sort of protected database I have just described–and, if access to such a database is sufficiently valuable, get paid for doing so.
A list of the half dozen most important figures in the early history of economics would have to include David Ricardo; it might well include Thomas Malthus and John Stuart Mill. A similar list for geology would include William Smith and James Hutton. For biology it would surely include Charles Darwin and Gregor Mendel, for physics Isaac Newton.
Who were they? Malthus and Darwin were clergymen, Mendel a monk, Smith a mining engineer, Hutton a gentleman farmer, Mill a clerk and writer, Ricardo a retired stock market prodigy. Of the names I have listed, only Newton was a university professor–and by the time he became a professor he had already come up with both calculus and the theory of gravitation.
There were important intellectual figures in the seventeenth, eighteenth and early nineteenth centuries who were professional academics–Adam Smith, for example. But a large number, probably a majority, were amateurs. In the twentieth century, on the other hand, most of the major figures in all branches of scholarship have been professional academics. Most started their careers with a conventional course of university education, typically leading to a PhD degree.
Why did things change? One possible answer is the enormous increase in knowledge. When fields were new, scholars did not need access to vast libraries. There were not many people in the field, the rate of progress was not very rapid, so letters and occasional meetings provided adequate communication. As fields developed and specialization increased, the advantages of the professional–libraries, laboratories, colleagues down the hall–became increasingly important.
Those advantages are now being eroded. Email is as easy as walking down the hall. The web, while not a complete substitute for a library, makes enormous amounts of information readily available to a very large number of people. In my field and many others it is becoming common for the authors of scholarly articles to make their datasets available on the web so that other scholars can check that they really say what the article claims they say.
An alternative explanation for the shift from amateur to professional scholarship is that it was due to the downward spread of education. In the eighteenth century, someone sufficiently well educated to invent a new science was likely to be a member of the upper class, hence had a good chance of not needing to work for a living. In the twentieth century, the correlation between education and wealth is a good deal weaker.
We are not likely to return to the class society of eighteenth-century England. But by the standards of that society, most educated people today are rich–rich enough to make a tolerable living and still have time and effort left to devote to their hobbies. For a large and increasing fraction of the population, amateur scholarship, like amateur sports, amateur music, amateur dramatics, and much else, is an increasingly real option.
These arguments suggest that, having shifted from a world of amateur scholars to a world of professionals, we may now be shifting back. That conjecture is based in large part on my own experiences. Two examples:
Robin Hanson is currently a professor of economics. When I first came into (virtual) contact with him, he was a NASA scientist with an odd hobby. His hobby was inventing institutions. His ideas–in particular an ingenious proposal to design markets to generate information–were sufficiently novel and well thought out to make corresponding with him more interesting than corresponding with most of my fellow economists. They were sufficiently interesting to other people to get published. Eventually he decided that his hobby was more fun than his profession and went back to school for a PhD in economics.
One of my hobbies for the past thirty years has been cooking from very early cookbooks; my earliest source is a letter written in the sixth century by a Byzantine physician named Anthimus to Theoderic, king of the Franks. When I started, one had to pretty much reinvent the wheel. There were no published translations of early cookbooks in print and almost none out of print. Almost the only available sources in English, other than a small number of unreliable books about the history of cooking, were a few early English cookbooks–in particular a collection that had been published by the Early English Text Society in 1888. I managed to get one seventeenth century source by finding a rare book collection that had a copy of the original and paying to have it microfilmed.
The situation has changed enormously over the past thirty years. The changes include the publication of several reliable secondary sources, additional English sources, and a few translations–all of which could have happened without the internet. But the biggest change is that there are now at least six English translations of early cookbooks on the web, freely available to anyone interested, as well as several early English cookbooks. Most of the translations were done by amateurs for the fun of it. There are hundreds of worked out early recipes (the originals usually omit irrelevant details such as quantities, times and temperature) webbed. There is an email list that puts anyone interested in touch with lots of experienced enthusiasts. Some of the people on that list are professional cooks, some are professional scholars. So far as I know, none is a professional scholar of cooking history.
Similar things are happening in other areas. I am told that amateur astronomers have long played a significant role–because skilled labor is an important input to star watching. There seems to be an increasing amount of interaction between historians and groups that do amateur historical recreation–sometimes prickly, when hobbyists claim expertise they don't have, sometimes cordial. The professionals, on average, know much more than the amateurs–but there are a lot more amateurs and some of them know quite a lot. And the best of the amateurs have access not only to information but to each other–and to any professional more interested in the ability of the people he corresponds with than their credentials.
The best known example of such decentralized, volunteer production is Linux, a computer operating system. The original version was created by a Finnish graduate student named Linus Torvalds. Having done a first draft himself, he invited everyone else in the world to help improve it. A lot of them accepted–with the result that Linux is now a sophisticated operating system, widely used for a variety of different tasks. Another open source project, the Apache web server, is the software on which a majority of World Wide Web pages run.
When you buy a copy of Microsoft Word you get the object code, the version of the program that the computer runs. With an open source program, you get the source code–the human readable version that the original programmer wrote and that other programmers need if they want to modify the program. You can compile it into object code to run it, but you can also modify it and then compile and run your new version of the program.
The mechanics of open source are simple. Someone comes up with a first version of the software. He publishes the source code. Other people interested in the program modify it–which they are able to do because they have the source code–and send their modifications to him. Modifications that he accepts go into the code base–the current standard version which other programmers will work from. At the peak of Linux development, Torvalds was updating the code base daily.
One advantage to open source is that, with lots of programmers, each working on the parts of the code that interest him, when someone reports a problem there is likely to be someone else to whom its source and solution are obvious. "With enough eyeballs, all bugs are shallow." And with the source code open, bugs can be found and improvements suggested by anyone interested.
Eric Raymond, a prominent spokesman for the movement and the author of a book about it, has pointed out that Open Source has its own set of norms and property rights. There is nobody who can forbid you from copying or modifying an open source program. But there is ownership in two other and important senses.
Linus Torvalds owns Linux. Eric Raymond owns Fetchmail. A committee owns Apache. Under an open source license anyone is free to modify the code any way he likes, provided that he makes the source code to his modified version public, thus keeping it open source. But programmers all want to work on the same code base so that each can take advantage of improvements made by the others. Hence there is considerable hostility in the community of open source programmers to forking a project–developing two inconsistent versions. If Torvalds rejects your improvements to Linux, you are still free to use them–but don't expect any help. Everyone else will be working on his version. Thus ownership of a project--the ability to decide what goes into the code base--is a property right enforced entirely by private action.
As Eric Raymond has pointed out, such ownership is controlled by rules similar to the common law rules for owning land. Ownership of a project goes to the person who creates it–homesteads that particular programming opportunity by creating the first rough draft of the program. If he loses interest, he can transfer ownership to someone else. If he abandons the program, someone else can claim it–publicly check to be sure nobody else is currently in charge of it and then publicly take charge of it himself. The equivalent in property law is adverse possession, the legal rule under which, if you openly treat property as yours for long enough and nobody objects, it is yours.
There is a second form of ownership in open source–credit. Each project is accompanied by a file identifying the authors. Meddle with that file–substitute in your name, thus claiming to be the author of code someone else wrote–and your name in the open source community is Mud. The same is true in the scholarly community. From the standpoint of a professional scholar, copyright violation is a peccadillo, theft someone else's problem–plagiarism the ultimate sin.
As this example suggests, one way of looking at the open source movement is as a variant on the institutions under which most of modern science was created. Programmers create software; scholars create ideas. Ideas, like open source programs, can be used by anyone. The source code, the evidence and arguments on which the ideas are based, is public information. An article that starts out "the following theory is true, but I won't tell you why" is unlikely to persuade many readers.
Scientific theories do not have owners in quite the sense that open source projects do, but at any given time in most fields there is considerable agreement as to what the orthodox body of theory is. Scholars can choose to ignore that consensus--but if they do, their work is unlikely to be taken seriously. Apache's owner is a committee. Arguably neo-classical economics belongs to a somewhat larger committee. A scholar can defy the orthodoxy to strike out on his own; some do. Similarly, if you don't like Linux, you are free to start your own open source operating system project. Heretical ideas sometimes succeed and open source projects are sometimes successfully forked--but in both cases, the odds are against it.
One of the odd features of a capitalist system is how socialist it is. Firms interact with other firms and with customers through the decentralized machinery of trade. But firms themselves are miniature socialist states, hierarchical organizations controlled, at least in theory, by orders from above.
There is one crucial difference between Microsoft and Stalin's Russia. Microsoft's interactions with the rest of us are voluntary. It can get people to work for it or buy its products only by offering them a deal they prefer to all alternatives. I do not have to use the Windows operating system unless I want to, and in fact I don't. Stalin did not face that constraint.
One implication is that, however bad the public image of large corporations may be, they exist because they serve human purposes. Employees work for them because they find doing so a better life than working for themselves; customers buy from them because they prefer doing so to making things for themselves or buying from someone else. The disadvantages associated with taking orders, working on other people's projects, depending for your reward on someone else's evaluation of your work, are balanced by advantages sufficient, for many people, to outweigh them.
The balance between the advantages and disadvantages of large hierarchical organizations depends in part on technologies associated with exchanging information, arranging transactions, enforcing agreements, and the like. As those technologies change, so does that balance. The easier it is for a dispersed group of individuals to coordinate their activities, the larger we would expect the role of decentralized coordination, market rather than hierarchy, in the overall mix. This has implications for how goods are likely to be produced in the future--Open Source is a striking example. It also has implications for political systems, social networks, and a wide range of other human activities.
One example occurred some years ago in connection with one of my hobbies, one at least nominally run by a non-profit corporation controlled by a self-perpetuating board of directors. The board responded to problems of growth by hiring a professional executive director. Acting apparently on his advice, they announced, with no prior discussion, that they had decided to double dues and to implement a controversial proposal that had been previously dropped in response to an overwhelmingly negative response by the membership.
If it had happened ten years earlier there would have been grumbling but nothing more. The corporation, after all, controlled all of the official channels of communication. When its publication, included in the price of membership, commented on the changes, the comments were distinctly one sided. Individual members, told by those in charge that the changes were necessary to the health of the hobby, would for the most part have put up with them.
That is not what happened. The hobby in question had long had an active Usenet news group associated with it. Members included individuals with professional qualifications, in a wide range of relevant areas, arguably superior to those of the board members, the executive director, or the corporation's officers. Every time an argument was raised in defense of the corporation's policies, it was answered--and at least some of the answers were persuasive. Only a minority of those involved in the hobby read the newsgroup, but it was a large enough minority to get the relevant arguments widely dispersed.

And email provided an easy way for dispersed members unhappy with the changes to communicate, coordinate, act. The corporation's board of directors was self-perpetuating--membership in the organization did not include a vote--but it was made up of volunteers, people active in the hobby who were doing what they thought was right. They discovered that quite a lot of others, including those they respected, disagreed, and were prepared to support their disagreement with facts and arguments.

By the time the dust cleared, every member of the board of directors that made the decision, save those whose terms had ended during the controversy, had resigned; their replacements reversed the most unpopular of the decisions. It struck me as an interesting example of the way in which the existence of the internet had shifted the balance between center and periphery.
For a more commercial example, consider the recent announcement that Eli Lilly had decided to subcontract part of its chemical research to the world at large. Lilly created a subsidiary, InnoCentive LLC, to maintain a web page of chemistry problems that Lilly wants solved--and the prices, up to $100,000, that they are offering for the solutions. InnoCentive has invited other companies to use their services to get their problems solved too. So far, according to a story in the Wall Street Journal, they have gotten "about 1,000 scientists from India, China and elsewhere in the world" to work on their problems.
One problem InnoCentive raises is that the people who are solving Lilly's problems may be doing so on someone else's payroll. Consider a chemist hired to work in an area related to one of the problems on the list. He has an obvious temptation to slant the work in the direction of the $100,000 prize, even if the result is to slow the achievement of his employer's objectives. A chemist paid by firm A while working for firm B is likely to be caught--and fired--if he does it in realspace. But if he combines a realspace job with cyberspace moonlighting--still more if parts of the realspace job are done by telecommuting from his home--the risks may be substantially less. So one possibility if Lilly's approach catches on is a shift from paying for time to paying for results, at least for some categories of skilled labor. In the limiting case, employment vanishes and everyone becomes a subcontractor, selling output rather than time.
So far we have been considering ways in which the internet supports decentralized forms of cooperation. It supports decentralized forms of conflict as well. A communication system can be used as a weapon, a way of misleading other people, creating forged evidence, accomplishing your objectives at the expense of your opponents. Consider two academic examples.
The year is 1995, the place Cornell University. Four freshmen have compiled a collection of misogynist jokes entitled "75 Reasons Why Women (Bitches) Should Not Have Freedom of Speech" and sent copies to their friends. The collection reaches someone who finds it offensive--and proceeds to distribute it to many other people who share that view, producing a firestorm of controversy inside and outside the university. The central question is whether creating such a list is an offense that ought to be punished or a protected exercise of free speech.
The university's judicial administrator, Barbara Krause, announced that the students had violated no rules but had voluntarily agreed to a set of penalties. Each of them will attend the "Sex at 7:00" program sponsored by Cornell Advocates for Rape Education (CARE) and the Health Education Office at Gannett Health Center. This program deals with issues related to date and acquaintance rape, as well as more general issues such as gender roles, relationships and communication.
Each of them has committed to perform 50 hours of community service. If possible, they will do the work at a non-profit agency whose primary focus relates to sexual assault, rape crisis, or similar issues. Recognizing that such agencies may be reluctant to have these students work with them, the students will perform the community service elsewhere if the first option is not available.
There are at least two ways to interpret that outcome. One is that Ms Krause is telling the truth, the whole truth, and nothing but the truth--Cornell imposed no penalty on the students, they imposed an entirely voluntary penalty on themselves. It seems a bit strange--but then, Cornell is a rather unusual university.
The alternative interpretation starts with the observation that university administrators have a lot of ways of making life difficult for students. By publicly announcing that the students had broken no rules and were subject to no penalty, while privately making it clear to the students that if they planned to remain at Cornell they would be well advised to "voluntarily" penalize themselves, Cornell engaged in a successful act of hypocrisy. They publicly maintained their commitment to free speech while covertly punishing students for what they said.
Someone who preferred the second interpretation thought up a novel way of supporting it. An email went out during Thanksgiving break to thousands of Cornell students, staff, and faculty--21,132 of them according to its authors.
I would like to extend my heartfelt thanks to the many faculty members who advised me regarding the unfortunate matter of the "75 Reasons" letter that was circulated via electronic mail. Your recommendations for dealing with the foul-mouthed "four little pigs" (as I think of them) who circulated this filth was both apposite and prudent.
Now that we have had time to evaluate the media response, I think we can congratulate ourselves on a strategy that was not only successful in defusing the scandal, but has actually enhanced the reputation of the university as a sanctuary for those who believe that "free speech" is a relative term that must be understood to imply acceptable limits of decency and restraint--with quick and severe punishment for those who go beyond those limits and disseminate socially unacceptable sexist slurs.
I am especially pleased to report that the perpetrators of this disgusting screed have been suitably humiliated and silenced, without any outward indication that they were in fact disciplined by us. Clearly, it is to our advantage to place malefactors in a position where they must CENSOR THEMSELVES, rather than allow the impression that we are censoring them.
The letter was not, of course, actually written by Barbara Krause--as anyone attentive enough to check the email address could have figured out. It was written, and sent, by an anonymous group calling themselves OFFAL--Online Freedom Fighters Anarchist Liberation. The letter was a satire, and an effective one, giving a believable and unattractive picture of what its authors suspected Ms Krause's real views were. It was also a fraud--some readers would never realize that she was not the real author. In both forms it provided propaganda for its authors' view of what had really happened. But it did more than that.
Email is not only easily distributed, it is easily answered. Some recipients not only believed the letter, they agreed with it, and said so. Since OFFAL had used, not Ms Krause's email address, but an email address that they controlled, those answers went back to them. OFFAL produced a second email, containing the original forgery, an explanation of what they were doing, and a selection of responses.
Thank god you sent this memo--something with a little anger and fire--something that speaks to the emotion and not just the legalities. I hope you are right in stating that what went on behind the scenes was truly humiliating for "them".
I agree with what your memo states about the "four little pigs" (students who embarrassed the entire Cornell community), but I don't think I was one of the people really intended for your confidential memo. … Great Job in the handling of a most sensitive issue.
We believe that ridicule is a more powerful weapon than bombs or death threats. And we believe that the Internet is the most powerful system ever invented for channeling grass-roots protests and public opinion in the face of petty tyrants who seek to impose their constipated values on everyday citizens who merely want to enjoy their constitutionally protected liberties.
It is hard not to have some sympathy for the perpetrators. They were making a defensible argument, although I am not certain it was a correct one, and making it in an ingenious and effective way. But at the same time they, like the purveyors of other sorts of propaganda, were combining a legitimate argument with a dishonest one--and it was the latter that depended on their ingenious use of modern communications technology.
The correct point was that Cornell's actions could plausibly be interpreted as hypocritical--attacking free speech while pretending to support it. The dishonest argument was the implication that the responses they received provided support for that interpretation. The eight replies that OFFAL selected consisted of six supporting the original email, one criticizing it, one doing neither. If that were a random selection of responses, it would be impressive evidence for their view of what had happened--but we have no reason to think the selection was random. All it showed was that about half a dozen people out of more than twenty thousand supported the idea of covert punishment, which tells us very little about whether that was what was really happening.
What I find interesting about the incident is that it demonstrates a form of information warfare made practical by the nature of the net--very low transaction costs, anonymity, no face to face contact. Considered as parody, it could have been done with old technology. As fraud, a way of tricking people into revealing their true beliefs by pretending that they were revealing them to someone who shared them, it could have been done with old technology, although not as easily. But as mass production fraud, a way of fooling thousands of people in order to get a few of them to reveal their true beliefs, it depended on the existence of email.
I believe that it is okay to have sex before marriage unlike some people. This way you can expirence different types of sex and find the right man or woman who satifies you in bed. I you wait until marriage then what if your mate can not satisfy you, then you are stuck with him. Please write me and give me your thoughts on this. You can also tell me about some of your ways to excite a woman because I have not yet found the right man to satisfy me.
It occurred to me that what I was observing might be a commercial variant of the OFFAL tactic. The message is read by thousands, perhaps tens of thousands, of men. A hundred or so take up the implied offer and email responses. They get suitably enticing emails in response--the same emails for all of them, with only the names changed. They continue the correspondence. Eventually they receive a request for fifty dollars--and a threat to pass on the correspondence to the man's wife if the money is not paid. The ones who are not married ignore it; some of the married ones pay. The responsible party has obtained a thousand dollars or so at a cost very close to zero. Mass production blackmail.
One of my students suggested a simpler explanation. The name and email address attached to the message belonged not to the sender but to someone the sender disliked. Whether or not he was correct, that form of information warfare has been used elsewhere online. It is not a new technique--the classical version is a phone number on a bathroom wall. But the net greatly expands the audience.
SiliconTech is an institution of higher education where the students regard Cornell, OFFAL and all, as barely one step above the stone age. If they ever have a course in advanced computer intrusion--for all I know they do--there will be no problem finding qualified students.
Alpha, Beta and Gamma were graduate students at ST. All three came from a third world country which, in the spirit of this exercise, I will call Sparta. Alpha and Beta were a couple for most of a year, at one point planning to get married. That ended when Beta told Alpha that she no longer wanted to be his girlfriend. Over the following months Alpha attempted, unsuccessfully, to get her back.
Eventually the two met at a social event held by the Spartan Student Association; in the course of the event, Alpha learned that Beta was now living with Gamma. This resulted in a heated discussion among the three of them; there were no outside witnesses and the participants later disagreed about what was said. Alpha's version is that he threatened to tell other members of the Spartan community at ST things that would damage the reputation of Beta and her family. Sparta is a sexually conservative and politically oppressive society, so it is at least possible that spreading such information would have had serious consequences. Beta and Gamma's version is that Alpha threatened to buy a gun and have a duel with Gamma.
Later that evening, someone used Alpha's account on the computer he did his research on to log onto another university machine and from that machine forge an obscene email to Beta that purported to come from Gamma. During the process the same person made use of Alpha's account on a university supercomputer. A day or so later, Beta and Gamma complained about the forged email to the ST computer organization, which traced it to Alpha's machine, disabled his account on their machine, and left him a message. Alpha, believing (by his account) that Beta and Gamma had done something to get him in trouble with the university, sent an email to Gamma telling him that he would have to help Beta with her research, since Alpha would no longer be responsible for doing so.
Beta and Gamma presented the authorities with copies of four emails--the three described so far, plus an earlier one sent at the time of the original breakup. According to Alpha, two of them were emails that he had sent but that had been altered, two he had never seen before.
Two days later, Beta and Gamma went to the local police with the same account plus an accusation that, back when Alpha and Beta were still a couple, he had attempted to rape her. Alpha was arrested on charges of felony harassment and terrorism, with bail set at more than a hundred thousand dollars. He spent the next five and a half months in jail under quite unpleasant circumstances. The trial took two weeks; the jury then took three hours to find Alpha innocent of all charges. He was released. ST proceeded to have its own trial of Alpha on charges of sexual harassment. They found him guilty and expelled him.
When I first became interested in the case--because it involved issues of identity and email evidence in a population technologically a decade or so ahead of the rest of the world--I got in touch with the ST attorney involved. According to her account, the situation was clear. Computer evidence proved that the obscene and threatening emails had ultimately originated on Alpha's account, to which only he had the password, having changed it after his breakup with Beta. While the jury may have acquitted him on the grounds that he did not actually have a gun, Alpha was clearly guilty of offenses against (at least) ST rules.
I then succeeded in reaching both Alpha's attorney and a faculty member sympathetic to Alpha who had been involved in the controversy. From them I learned a few facts that the ST attorney had omitted.
1. According to the other graduate students who worked with Alpha, and contrary to Beta's sworn testimony, the two had remained friends after the breakup and Alpha had continued to help Beta do her research--on his computer account. Hence it is almost certain that Beta knew the new password. Hence she, or Gamma, or Gamma's older brother--a professional systems manager who happened to be in town when the incidents occurred--could have accessed the accounts and done all of the things that Alpha was accused of doing.
2. The "attempted rape" was supposed to have happened early in their relationship. According to Beta's own testimony at trial, she subsequently took a trip alone with him during which they shared a bed. According to other witnesses, they routinely spent weekends together for some months after the purported attempt.
3. In the course of the trial there was evidence that many of the statements made by Beta and Gamma were false. In particular, Beta claimed never to have been in Alpha's office during the two months after the breakup (relevant because of the password issue); other occupants of the office testified that she had been there repeatedly. Beta claimed to have been shown Alpha's gun permit; the police testified that he did not have one.
4. One of the emails supposedly forged by Alpha had been created at a time when he not only had an alibi--he was in a meeting with two faculty members--but had an alibi he could not have anticipated having, hence could not have prepared for by somehow programming the computer to do things when he was not present.
5. The ST hearing was conducted by a faculty member who had told various other people that Alpha was guilty and ST should get rid of him before he did something that they might be liable for. Under existing school policy, the defendant was entitled to veto suggested members of the committee. Alpha attempted to veto the chairman and was ignored. According to my informant, the hearing was heavily biased, with restrictions by the committee on the introduction of evidence and arguments favorable to Alpha.
6. During the time Alpha was in jail awaiting trial, his friends tried to get bail lowered. Beta and Gamma energetically and successfully opposed the attempt, tried to pressure other members of the Spartan community at ST not to testify in Alpha's favor, and even put together a booklet containing not only material about Alpha but stories from online sources about Spartan students killing lovers or professors.
Two different accounts of what actually happened are consistent with the evidence. One, the account pushed by Beta and Gamma and accepted by ST, makes Alpha the guilty party and explains the evidence that Beta and Gamma were lying about some of the details as a combination of exaggeration, innocent error and perjury by witnesses friendly to Alpha. The other, the account accepted by at least some of Alpha's supporters, makes Beta and Gamma the guilty parties and ST at the least culpably negligent. On that version, Beta and Gamma conspired to frame Alpha for offenses he had not committed, presumably as a preemptive strike against his threat to release true but damaging information about Beta--once he was in jail, who would believe him? They succeeded to the extent of getting him locked up for five and a half months, beaten in jail by fellow prisoners, costing him and his friends some twenty thousand dollars in legal expenses--and ultimately getting him expelled.
I favor the second account, in part because I think it is clear that the ST attorney I originally spoke with was deliberately trying to mislead me--concealing facts that not only were relevant but directly contradicted the arguments she was making. I am suspicious of people who lie to me. On the other hand, attorneys, even attorneys for academic institutions, are hired to serve the interest of their clients, not to reveal truth to curious academics, so even if she believed Alpha was guilty she might have preferred to conceal the evidence that he was not. For my present purposes what is interesting is not which side was guilty but the fact that either side could have been, and the problems that fact raises for the world that they were, and we will be, living in.
Online communication--in this case email--normally carries identification that, unlike one's face, can readily be forged. The Cornell case demonstrated one way in which that fact could be used--to extract unguarded statements from somebody by masquerading as someone he has reason to trust. This case, on one interpretation, demonstrates another--to injure someone by persuading third parties that he said things he in fact did not say.
The obvious solution is some way of knowing who sent what message. The headers of an email are supposed to provide that information. As these cases both demonstrate, they do not do it very well. On the simplest interpretation of the events at ST, Alpha used a procedure known to practically everyone in that precocious community to send a message to Beta that purported to come from Gamma. On the alternative interpretation, Beta or Gamma masqueraded as Alpha (accessing his account with his password) in order to send a message to Beta that purported to come from Gamma--and thus get Alpha blamed for doing so.
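The weakness described above is easy to see directly: the From: line of an email is ordinary text filled in by whoever composes the message, not an identity checked by anyone. A minimal sketch in Python (the addresses are invented for illustration):

```python
from email.message import EmailMessage

# Nothing in the message format verifies the From: line; it is plain
# text supplied by the sender. Addresses here are hypothetical.
msg = EmailMessage()
msg["From"] = "gamma@st.example.edu"   # claimed sender -- a forgery
msg["To"] = "beta@st.example.edu"
msg["Subject"] = "A message Gamma never wrote"
msg.set_content("The header above says Gamma sent this. He did not.")

print(msg["From"])   # gamma@st.example.edu
```

In practice, tracing a real message depends on the Received: headers that mail servers add in transit, which are harder--though, as the ST case suggests, not impossible--to subvert.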
ST provided a second level of protection--passwords. The passwords were chosen by the user, hence in many cases easy to guess--users tend to select passwords that they can remember. And even if they had been hard to guess, one user can always tell another his password. However elaborate the security protecting Alpha's control over his own identification--up to and including the use of digital signatures--it could not protect him against betrayal by himself. Alpha was in love with Beta, and men in love are notoriously imprudent.
Or perhaps it could. One possible solution is the use of biometrics--identification linked to physical characteristics such as fingerprints or retinal patterns. If ST had been twenty years ahead of the rest of us instead of only ten, they might have equipped their computers with scanners that checked the users' fingers and retinas before letting them sign on. Even a man in love is unlikely to give away his retinas. With that system, we would know which party was guilty.
Provided, of course, that none of the students at SiliconTech--the cream of the world's technologically precocious young minds--figured out how to trick the biometric scanners or hack the software controlling them.
Even if the system works, it has some obvious disadvantages. In order to prevent someone from editing a real email he has received and then presenting the edited version as the original--what Alpha claims that Beta and Gamma did--the system must keep records of all email that passes through it. Many users may find that objectionable on the grounds of privacy--although there are possible technological ways around that problem. And the requirement of biometric identification eliminates not only forged identity but anonymity as well--which arguably could have a chilling effect on free speech.
So far I have implicitly assumed a single computer network with a single owner. That was the situation at ST but not for the Internet. With a decentralized network under the control of many individual parties, creating a system of unforgeable identity becomes an even harder challenge. It can be done via digital signatures--but only if the potential victims are willing to take the necessary precautions to keep other people from getting access to their private keys. Biometric identification, even if it becomes entirely reliable, is still vulnerable to the user who inserts his own hardware or software between the scanner and the computer of his own system, and uses it to lie to the computer about what the scanner saw.
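The digital signature approach mentioned above works because signing requires a private key that only its owner holds, while anyone can check the signature with the matching public key. A toy sketch of the idea, using textbook RSA with tiny fixed primes--purely illustrative, and nothing like a secure parameter choice:

```python
import hashlib

# Toy RSA signature scheme. The primes are far too small for real use;
# this only illustrates the asymmetry between signing and verifying.
p, q = 61, 53
n = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent (Python 3.8+ modular inverse)

def sign(message: bytes) -> int:
    """Only the holder of d can produce this value."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, sig: int) -> bool:
    """Anyone with (n, e) can check it."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(sig, e, n) == h

m = b"I did not send that email."
s = sign(m)
print(verify(m, s))          # True
print(verify(b"tampered", s))  # False
```

The scheme protects identity only as well as the private key is protected--which is exactly the problem the text raises: Alpha could have kept d secret from strangers and still given it away to Beta.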
I am typing these words into a metaphorical document in a metaphorical window on a metaphorical desktop; the document is contained in a metaphorical file folder represented by a miniature picture of a real file folder. I know the desktop is metaphorical because it is vertical; if it were a real desktop, everything would slide to the bottom.
All this is familiar to anyone whose computer employs a graphical user interface. We use that collection of layered metaphors for the same reason we call unauthorized access to a computer a break-in and a machine language program burned into a computer chip, unreadable by the human eye, a writing. The metaphor lets us transport a bundle of concepts from one thing, around which that bundle first collected, to something else to which we think most of the bundle is appropriate. Metaphors reduce the difficulty of learning to think about new things. Well-chosen metaphors do it at a minimal cost in wrong conclusions.
Consider the metaphor that underlies modern biology: evolution as intent. Evolution is not a person and does not have a purpose. Your genes are not people either and also do not have purposes. Yet the logic of Darwinian evolution implies that each organism tends to have those characteristics that it would have if it had been designed for reproductive success. Evolution produces the result we would get if each gene had a purpose–increasing its frequency in future generations–and acted to achieve that purpose by controlling the characteristics of the bodies it built.
Everything stated about evolution in the language of purpose can be restated in terms of variation and selection–Darwin's original argument. But since we have dealt with purposive beings for much longer than we have dealt with the logic of Darwinian evolution, the restated version is further from our intuitions; putting the analysis that way makes it harder to understand, clumsier. That is why biologists routinely speak in the language of purpose, as when Dawkins titled his brilliant exposition of evolutionary biology "The Selfish Gene."
For a final example, consider computer programming. When you write your first program, the approach seems obvious: Give the computer a complete set of instructions telling it what to do. By the time you have gotten much beyond telling the computer to type "Hello World," you begin to realize that a complete set of instructions for a complicated set of alternatives is a bigger and more intricate web than you can hold in your mind at one time.
People who design computer languages deal with that problem through metaphors. Currently the most popular are the metaphors of object oriented languages such as Java and C++. A programmer builds classes of objects. None of these objects are physical things in the real world; each exists only as a metaphorical description of a chunk of code. Yet the metaphor–independent objects, each owning control over its own internal information, interacting by sending and receiving messages–turns out to be an extraordinarily powerful tool for writing and maintaining programs, programs more complicated than even a very talented programmer could keep track of if he tried to conceptualize each as a single interacting set of commands.
From time to time I read a news story about an intruder breaking into a computer, searching through the contents, and leaving with some of them. Looking at the computer sitting on my desk, it is obvious that intrusion is impractical for anything much bigger than a small cat. There isn't room. And if my cat wants to get into my computer, it doesn't have to break anything–just hook its claws into the plastic loop on the side (current Macs are designed to be easily upgradeable) and pull.
"Computer break-in" is a metaphor. So are the fingerprints and watermarks of Chapter IX. Computer programmers have fingers and occasionally leave fingerprints on the floppy disks or CD's that contain their work, but copying the program does not copy the prints.
New technologies make it possible to do things that were not possible, sometimes not imagined, fifty years ago. Metaphors are a way of fitting those things into our existing pattern of ideas, instantiated in laws, norms, language. We already know how to think about people breaking into other people's houses and what to do about it. By analogizing unauthorized access to a computer to breaking into a house we fit it into our existing system of laws and norms.
The choice of metaphor matters. What actually happens when someone "breaks into" a computer over the internet is that he sends the computer messages, the computer responds to those messages, and something happens that the owner of the computer does not want to happen. Perhaps the computer sends out what was supposed to be confidential information. Perhaps it erases its hard disk. Perhaps it becomes one out of thousands of unwitting accessories to a denial of service attack, sending thousands of requests to read someone else's web page–with the result that the overloaded server cannot deal with them all and the page temporarily vanishes from the web.
The computer is doing what the cracker wants instead of what its owner wants. One can imagine the cracker as an intruder, a virtual person traveling through the net, making his way to the inside of the computer, reading information, deleting information, giving commands. That is how we are thinking of it when we call the event a break-in.
To see how arbitrary the choice of metaphor is, consider a lower tech equivalent. I want to serve legal papers on you. In order to do so, my process servers have to find you. I call your home number. If you do not answer, I tell the servers to look somewhere else. If you do answer, I hang up and send them in.
Nobody is likely to call what I have just described a break-in. Yet it fits almost precisely the earlier description. Your telephone is a machine which you have bought and connected to the phone network for a purpose. I am using your machine without your permission for a different purpose, one you disapprove of–finding out whether you are home, something you do not want me to know. With only a little effort, you can imagine a virtual me running down the phone line, breaking into your phone, peeking out to see if you are in, and reporting back. An early definition of cyberspace was "where a telephone conversation happens."
Consider a third metaphor–what crackers refer to as "human engineering," tricking people into giving you the secret information needed to access a computer. It might take the form of a phone call to a secretary from a company executive outside the office who needs immediate access to the company's computer. The secretary, whose job includes helping company executives with their problems, responds with the required passwords. She may not be sure she recognizes the caller's name–but does she really want to expose her ignorance of the names of the top people in the firm she works for?
Human engineering is both a means and a metaphor for unauthorized access. What the cracker is going to do to the computer is what he has just done to the secretary–call it up, pretend to be someone authorized to get the information it holds, and trick it into giving that information. If we analogize a computer not to a house or a phone but to a person, unauthorized access is not housebreaking but fraud–against the computer.
We now have three quite different ways of fitting the same act into our laws, language and moral intuitions–as housebreaking, fraud, or an unwanted phone call. The first is criminal, the second often tortious, the third legally innocuous.
In the early computer crime cases courts were uncertain what the appropriate metaphor was. Much the same problem arose in the early computer copyright cases. Courts were uncertain whether a machine language program burned into the ROMs of a computer was properly analogized to a writing (protectable), a fancy cam (unprotectable, at least by copyright), or (the closest equivalent for which they had a ruling by a previous court) the paper tape controlling a player piano.
In both cases, the legal uncertainty was ended by legislatures–Congress when it revised the copyright act to explicitly include computer programs, state legislatures when they passed computer crime laws that made unauthorized intrusion a felony. The copyright decision was correct, as applied to literal copying, for reasons I have discussed at some length elsewhere. The verdict on the intrusion case is less clear.
We have three different metaphors for fitting unauthorized use of a computer over a network–telephone system or internet–into our legal system. One suggests that it should be a felony, one a tort, one a legal if annoying act. To choose among them, we consider how the law will treat the acts in each case and why one treatment or the other might be preferable.
A crime is a wrong treated by the legal system as an offense against the state. A criminal case has the form "The State of California v. D. Friedman." So far as the law is concerned, the victim is the state of California–the person whose computer was broken into is merely a witness. Whether to prosecute, whether to settle (an out of court settlement in a criminal case is called a plea bargain) and how to prosecute are decided by employees of the state of California. The cost of prosecution is paid by the state and the fine, if any, paid to the state. The punishment has no necessary connection to the damage done by the wrong, since the offense is not "causing a certain amount of damage" but "breaking the law."
A tort is a wrong treated by the legal system as an offense against the victim; a civil case has the form "A. Smith v. D. Friedman." The victim decides whether to sue, hires and pays for the attorney, controls the decision of whether to settle out of court and collects the damages awarded by the court. In most cases, the damage payment awarded is supposed to equal the damage done to the victim by the wrong–enough to "make whole" the victim.
An extensive discussion of why and whether it makes sense to have both kinds of law and why it makes sense to treat some kinds of offenses as torts and some as crime is matter for another book; interested readers can find it in Chapter 18 of my Law's Order.  For our purposes it will be sufficient to note some of the legal rules associated with the two systems, some of their advantages and disadvantages and how they might apply to a computer intrusion.
One difference is that, as a general rule, criminal conviction does, and tort does not, require intent–although the definition of intent is occasionally stretched pretty far. On the face of it, unauthorized access clearly meets that requirement.
The year is 1975. The computer is an expensive multi-user machine located in a dedicated facility. An employee asks it for a list of everyone currently using it. One of the sets of initials he gets belongs to his supervisor–who is standing next to him, obviously not using the computer.
The computer was privately owned but used by the Federal Energy Administration, so they called in the FBI. The FBI succeeded in tracing the access to Bertram Seidlitz, who had left six months earlier after helping to set up the computer's security system. When they searched his office, they found forty rolls of computer printout paper containing source code for WYLBUR, a text editing program.
The case raised a number of questions about how existing law fit the new technology. Did secretly recording the "conversation" between Seidlitz and the computer violate the law requiring that recordings of phone conversations be made only with the consent of one of the parties (or a court order, which they did not have)? Was the other party the computer; if so could it consent? Did using someone else's code to access a computer count as obtaining property by means of false or fraudulent pretenses, representations, or promises–the language of the statute? Could you commit fraud against a machine? Was downloading trade secret information, which WYLBUR was, a taking of property? The court found that it could, you could and it was; Seidlitz was convicted.
Seidlitz's answer was quite simple. He believed the security system for the computer was seriously inadequate. He was demonstrating that fact by accessing the computer without authorization, downloading stuff from inside the computer, and printing it out. When he was finished, he planned to send all forty rolls of source code to the people now in charge of the computer as a demonstration of how weak their defenses were. One may suspect–although he did not say–that he also planned to send them a proposal to redo the security system for them. If he was telling the truth, his access, although unauthorized, was not in violation of the law he was convicted under–or any then existing law that I can think of.
The strongest evidence in favor of his story was forty rolls of computer output. In order to make use of source code, you have to compile it–which means that you first have to get it in a form readable by a computer. In 1975, optical character recognition, the technology by which a computer turns a picture of a printed page back into machine readable text, did not yet exist; even today it is not entirely reliable. If Seidlitz was planning to sell the source code to someone who would actually use it, he was also planning at some point to have someone type all forty rolls back into a computer–making no mistakes, since a mistake would introduce a potential bug into the program. It would have been far easier, instead of printing the source code, to download it to a tape cassette or floppy disk. Floppy disks capable of being written to had come out in 1973, with a capacity of about 250K; a single 8" floppy could store about a hundred pages worth of text. Forty rolls of printout would be harder to produce and a lot less useful than a few floppy disks. On the other hand, the printout would provide a more striking demonstration of the weakness of the computer's security, especially for executives who did not know very much about computers.
One problem with using law to deal with problems raised by a new technology is that the legal system may not be up to the job. It is likely enough that the judge in U.S. v. Seidlitz (1978) had never actually touched a computer and more likely still that he had little idea what source code was or how it was used.
Seidlitz had clearly done something wrong. But deciding whether it was a prank or a felony required some understanding of both the technology and the surrounding culture and customs–which a random judge was unlikely to have. In another unauthorized access case, decided a year earlier, the state of Virginia had charged a graduate student at Virginia Polytechnic Institute with fraudulently stealing more than five thousand dollars. His crime was accessing a computer that he was supposed to access in order to do the work he was there to do–using other students' passwords and keys to access it, because nobody had gotten around to allocating computer time to him and he was embarrassed to ask for it. He was convicted and sentenced to two years in the State penitentiary. The sentence was suspended, he appealed, and on appeal was acquitted–on the grounds that what he had stolen were services, not property. Only property counted for purposes of the Virginia statute, and the scrap value of the computer cards and printouts was less than the hundred dollars that the statute required. While charges of grand larceny were still pending against him, VPI gave him his degree, demonstrating what they thought of the seriousness of his offense.
The scene is the front door of the University of Chicago Law School. I am standing there because, during a visit to Chicago, it occurred to me that I needed to check something in an article in the Journal of Legal Studies before emailing off the final draft of an article. The University of Chicago Law School not only carries the JLS, it produces the JLS; the library is sure to have the relevant volume. While checking the article, perhaps I can drop in on some of my ex-colleagues and see how they are doing.
Unfortunately, it is a Sunday during Christmas break; nobody is in sight inside and the door is locked. The solution is in my pocket. When I left the Law School last year to take up my present position in California I forgot to give back my keys. I take out my key ring, find the relevant key, and open the front door of the law school.
In the library another problem arises. The volume I want is missing from the shelf, presumably because someone else is using it. It occurs to me that one of the friends I was hoping to see is both a leading scholar in the field and the editor of the JLS. He will almost certainly have his own set in his office–as I have in my office in California.
I knock on his door; no answer. The door is locked. But at the University of Chicago Law School–a very friendly place–the same key opens all faculty offices. Mine is in my pocket. I open his door, go in, and there is the Journal of Legal Studies on his office shelf. I take it down, check the article, and go.
The next day, on the plane home, I open my backpack and discover that, as usual, I was running on autopilot; instead of putting the volume back on the shelf I took it with me. When I get home, I mail the volume back to my friend with an apologetic note of explanation.
Using keys I had no legal right to possess I entered a locked building I had no legal right to enter, went into a locked room I had no legal right to enter and left with an item of someone else's property that I had no authorization to take. Luckily for me, the value of one volume of the Journal of Legal Studies is considerably less than $5000, so although I may be guilty of burglary under Illinois law I am not covered by the federal law against interstate transportation of stolen property. Aside from the fact that the Federal government has no special interest in the University of Chicago Law School, the facts of my crime were nearly identical to the facts of Seidlitz's. Mine was just the low tech version.
As it happens, the above story is almost entirely fiction–inspired by the fact that I really did forget to give back my keys until a year or so after I left Chicago, so could have gotten into both the building and a faculty office if I had wanted to. But even if it were true, I would have been at no serious risk of anything worse than embarrassment. Everyone involved in my putative prosecution would have understood the relevant facts–that not giving keys back is the sort of thing absent minded academics do, that using those keys in the same way you have been using them for most of the past eight years, even if technically illegal, is perfectly normal and requires no criminal intent, that looking at a colleague's copy of a journal without his permission when he isn't there to give it is also perfectly normal, and that absent minded people sometimes walk off with things instead of putting them back where they belong. Seidlitz–assuming he really was innocent–was not so lucky.
My third story, like my first, is true.
On Thursday, October 28, at 12:30 in the afternoon, I noticed an unusual process running on a Sun computer which I administer. Further checking convinced me that this was a program designed to break, or crack, passwords. I was able to determine that the user "merlyn" was running the program. The username "merlyn" is assigned to Randal Schwartz, an independent contractor. The password cracking program had been running since October 21st. I investigated the directory from which the program was running and found the program to be Crack 4.1, a powerful password cracking program. There were many files located there, including passwd.ssd and passwd.ora. Based on my knowledge of the user, I guessed that these were password files for the Intel SSD organization and also an external company called O'Reilly and Associates. I then contacted Rich Cower in Intel security.
Intel security called in the local police. Randy Schwartz was interrogated at length; the police had a tape recorder but did not use it. Their later account of what he said was surprisingly detailed, given that it dealt with subjects the interrogating officers knew little about, and strikingly different from his account of what he said. The main facts, however, are reasonably clear.
Randy Schwartz was a well known computer professional, the author of two books on Perl, a language used in building things on the Web. He had a reputation as the sort of person who would rather apologize afterwards than ask permission in advance. One reason Morrissey, the Intel system administrator whose statement is quoted above, was checking the computer Thursday afternoon was to make sure Schwartz wasn't running any jobs on it that might interfere with its intended function. As he put it in his statement, "Randal has a habit of using as much CPU power as he can find."
Schwartz worked for Intel as an independent contractor running parts of their computer system. He accessed the system from his home using a gateway through the Intel firewall that he had created on instructions from Intel for the use of a group working off site but retained for his own use. In response to orders from Intel he had first tightened its security and later shut it down completely–then quietly recreated it on a different machine and continued to use it.
The computer system at Intel, like many others, used passwords to control access. This raises an obvious design problem. In order for the computer to know if you typed in the right password, it needs a list of passwords to check yours against. But if there is a list of passwords somewhere in the computer's memory, anyone who can get access to that memory can find the list.
You can solve this problem by creating a public key/private key pair and throwing away the private key--more generally, by creating some procedure that encrypts but does not decrypt. Every time a new password is created, encrypt it and add it to the computer's list of encrypted passwords. When a user types in a password, encrypt that and see if what you get matches one of the encrypted passwords on the list. Someone with access to the computer's memory can copy the list of encrypted passwords, can copy the procedure for encrypting them, but cannot copy a procedure for decrypting them because it is not there. So he has no way of getting from the encrypted version of the password in the computer's memory to the original password that he has to type to get the desired level of access to (and control over) the computer.
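The scheme can be sketched in a few lines of Python, a minimal sketch using an ordinary cryptographic hash as the encrypt-but-don't-decrypt procedure; the usernames and passwords are invented:

```python
import hashlib

def hash_password(password: str) -> str:
    # One-way: easy to compute, and no procedure stored anywhere reverses it.
    return hashlib.sha256(password.encode()).hexdigest()

# The system keeps only the encrypted (hashed) versions, never the originals.
stored = {
    "alice": hash_password("tiger1987"),
    "bob": hash_password("V7g9H47ax"),
}

def check_login(user: str, attempt: str) -> bool:
    # Encrypt what the user typed and compare it to the stored version.
    return stored.get(user) == hash_password(attempt)
```

Someone who copies `stored` has the encrypted passwords and the encrypting procedure, but nothing that runs the process backwards. (Real systems also mix a per-user random "salt" into the hash so that two users with the same password get different entries.)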
A program such as Crack solves that problem by guessing passwords, encrypting the guesses, and comparing the result to the list of encrypted passwords. If it had to guess at random, the process would take a very long time. But despite the instructions of the people running the system, people who create passwords frequently insist on using their wife's name, or their date of birth, or something else easier to remember than V7g9H47ax. It does not take all that long for a computer program to run through a dictionary of first names and every date in the past seventy years, encrypt each, and check it against the list. One of the passwords Randy Schwartz cracked belonged to an Intel vice president. It was the word PRE$IDENT.
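A Crack-style dictionary attack is nothing more than a loop over plausible guesses. A minimal sketch, with an invented password list and a toy dictionary:

```python
import hashlib

def hash_password(pw: str) -> str:
    return hashlib.sha256(pw.encode()).hexdigest()

# Encrypted entries copied out of the target system (invented examples).
hashed_list = {hash_password("susan"), hash_password("PRE$IDENT")}

# A real Crack dictionary holds names, dates, words, and common
# substitutions; this toy version has five entries.
guesses = ["susan", "david", "04071962", "PRE$IDENT", "password"]

cracked = [g for g in guesses if hash_password(g) in hashed_list]
# Weak passwords fall out of the loop immediately; V7g9H47ax never would.
```

Encrypting a few million guesses is cheap; the attack fails only against passwords that appear in no dictionary.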
Randy Schwartz's defense was the same as Bertram Seidlitz's. He was responsible for parts of Intel's computer system. He suspected that its security was inadequate. The obvious way to test that suspicion was to see whether he could break into it. Breaking down doors is not the usual way of testing locks, but breaking into a computer does not, by itself, do any damage. By correctly guessing one password, using that to get at a file of encrypted passwords, and using Crack to guess a considerable number of them, Randy Schwartz demonstrated the vulnerability of Intel's system. I suspect, knowing computer people although not that particular computer person, that he was also entertaining himself by solving the puzzle of how to get through Intel’s barriers while proving how much cleverer he was than the people who set up the system he was cracking–including one particularly careless Intel vice-president. He was simultaneously (but less successfully) running Crack against a password file from a computer belonging to O'Reilly and Associates, the company that publishes his books.
Since Intel's computer system contains a lot of valuable intellectual property protected (or not) by passwords, demonstrating its vulnerability might be seen as a valuable service. Intel did not see it that way. They actively aided the state of Oregon in prosecuting Randy Schwartz for violating Oregon's computer crime law. He ended up convicted of two felonies and a misdemeanor–unauthorized access to, alteration of, and copying information from, a computer system.
Two facts lead me to suspect that Randy Schwartz may have been the victim, not the criminal. The first is that Intel produced no evidence that he had stolen any information from them other than the passwords themselves. The other is that, when Crack was detected running, it was being run by "merlyn"–Randy Schwartz's username at Intel. The Crack program was in a directory named "merlyn." So were the files for the gate through which the program was being run. I find it hard to believe that a highly skilled computer network professional attempting to steal valuable intellectual property from one of the world's richest and most sophisticated high tech firms would do it under his own name. If I correctly interpret the evidence, what actually happened was that Intel used Oregon's computer law to enforce its internal regulations against a subcontractor in the habit of breaking them. Terminating the offender's contract is a more conventional, and more reasonable, response.
In fairness to Intel, I should add that almost all my information about the case comes from an extensive web site set up by supporters of Randy Schwartz–extensive enough to include the full transcript of the trial. Neither Intel nor its supporters has been willing to web a reply. I have, however, corresponded with a friend who works for Intel and is in a position to know something about the case. My friend believed that Schwartz was guilty but was unwilling to offer any evidence.
Perhaps he was guilty; Intel might have reasons for keeping quiet other than a bad conscience. Perhaps Seidlitz was guilty. It is hard, looking back at a case with very imperfect information, to be sure my verdict on its verdict is correct. But I think both cases, along with my own fictitious burglary, show problems in applying criminal law to something as ambiguously criminal as unauthorized access to a computer, hence provide at least a limited argument for rejecting the break-in metaphor in favor of one of the alternatives.
One problem with trying to squeeze unauthorized access into existing criminal law is that intent may be ambiguous. Another is that the old legal categories do not fit the new facts very well. The problem is illustrated by U.S. v. Neidorf, entertainingly chronicled in The Hacker Crackdown, Bruce Sterling's account of an early and badly bungled campaign against computer crime.
The story starts in 1988 when Robert Riggs, a college student, succeeded in accessing a computer belonging to Bell South and downloading a document about the 911 system. He had no use for the information in the document, which dealt with bureaucratic organization–who was responsible for what to whom–not technology. But written at the top was "WARNING: NOT FOR USE OR DISCLOSURE OUTSIDE BELLSOUTH OR ANY OF ITS SUBSIDIARIES EXCEPT UNDER WRITTEN AGREEMENT," which made getting it an accomplishment and the document a trophy. He accordingly sent a copy to Craig Neidorf, who edited a virtual magazine–distributed from one computer to another–called Phrack. Riggs cut out about half of the document and included what was left in Phrack.
Eventually someone at Bell South discovered that their secret document was circulating in the computer underground–and ignored it. Somewhat later, federal law enforcement agents involved in a large scale crackdown on computer crime descended on Riggs. He and Neidorf were charged with interstate transportation of stolen property valued at more than five thousand dollars–a Federal offense. Riggs agreed to a guilty plea; Neidorf refused and went to trial.
Bell South asserted that the twelve page document had cost them $79,449 to produce–well over the $5,000 required for the offense. It eventually turned out that they had calculated that number by adding to the actual production costs–mostly the wages of the employees who created the document–the full value of the computer it was written on, the printer it was printed on, and the computer's software. The figure was accepted by the federal prosecutors without question. Under defense questioning, it was scaled back to a mere $24,639.05. The case collapsed when the defense established two facts: that the warning on the 911 document was on every document that Bell South produced for internal use, however important or unimportant, and that the information it contained was routinely provided to anyone who asked for it. One document, containing a more extensive version of the information published in Phrack, information Bell South had claimed to value at just under eighty thousand dollars, was sold by Bell South for $13.
In the ancient days of single sex college dormitories there was a social institution called a panty raid. A group of male students would access, without authorization, a dormitory of female students and exit with intimate articles of apparel. The objective was not acquiring underwear but defying the authority of the college administration. Robert Riggs engaged in a virtual panty raid–and ended up pleading guilty to a felony. Craig Neidorf received the booty from a virtual panty raid and displayed it in his virtual window. For that act, the federal government attempted to convict him of offenses that could have led to a prison term of over sixty years.
Part of the problem, again, was that the technology was new, hence unfamiliar to many of the people–cops, lawyers, judges–involved in the case. Dealing with a world they did not understand, they were unable to distinguish between a panty raid and a bank robbery.
Another part of the problem was that the law the case was prosecuted under was designed to deal with the theft and transportation of physical objects. It was natural to ask the questions appropriate to the law under which the case was prosecuted–including how much the object stolen cost to produce. But what was labeled theft was in fact copying; after Riggs copied the document, Bell South still had it. The real measure of the damage was not what it cost to produce the document but the cost to Bell South of other people having the information. Bell South demonstrated, by its willingness to sell the same information at a low price, that it regarded that cost as negligible. Robert Riggs was prosecuted under a metaphor. On the evidence of at least that case, it was the wrong metaphor.
Bell South's original figure for the cost of creating the 911 document was one that no honest person could have produced. If you disagree, ask yourself how Bell South would have responded to an employee who, sending in his travel expenses for a hundred mile trip, included the full purchase price of his car–the precise equivalent of what Bell South did in calculating the cost of the document. Bell's testimony about the importance and secrecy of the information contained in the document was also false, but not necessarily dishonest; the Bell employee who gave it may not have known that the firm provided the same information to anyone who asked for it. Those two false statements played a major role in a criminal prosecution that could have put Robert Riggs in prison and did cost him, his family and his supporters hundreds of thousands of dollars in legal expenses.
Knowingly making false statements that cost other people money is usually actionable. But the testimony of a witness in a trial is privileged–even if deliberately false, the witness is not liable for the damage done. He can be prosecuted for perjury--but that decision is made not by the injured party but by the state.
Suppose the same case had occurred under tort law. Bell South sues Riggs for $79,449. In the course of the trial it is established that the figure was wildly inflated by the plaintiff, that in any case the plaintiff still has the property, so has a claim only for damage done by the information getting out, and that that damage is zero since the information was already publicly available from the plaintiff. Not only does Bell South lose its case, it is at risk of being sued for malicious prosecution–which is not privileged. In addition, of course, Bell South, rather than the federal government, would have been paying the costs of prosecution. Putting such cases under tort law would have given Bell South an incentive to check its facts and figure out whether it had really been injured before, not after, it initiated the case–saving everyone concerned a good deal of time, money and unpleasantness.
One advantage of tort law is that the plaintiff might have been liable for the damage it did by claims that it knew were false. Another is that it would have focused attention on the relevant issue–not the cost of producing the document but the injury to the plaintiff from having it copied. That is a familiar issue in the context of trade secret law, which comes considerably closer than criminal law to fitting the actual facts of the case.
A further problem with criminalizing such acts is illustrated by the fate of Robert Riggs. Unlike Craig Neidorf, he accepted a plea bargain and could have spent a substantial amount of time in prison--although in fact his sentence was cancelled after the trial made it clear that he had done nothing seriously wrong. One reason for agreeing to a guilty plea, presumably, was the threat of a much longer jail term if the case went to trial and he lost. Criminal law, by providing the prosecution with the threat of very severe punishments, poses the risk that innocent defendants may agree to plead guilty to a lesser offense. If the case had been a tort suit by the victim, the effective upper bound on damages would have been everything the defendant owned.
There is, however, another side to that argument. Under tort law, the plaintiff pays for the prosecution. If winning the case is likely to be expensive and the defendant does not have the money to pay large damages, it may not be worth suing in the first place–in which case there is no punishment and no incentive not to commit the tort. That problem–providing an adequate incentive to prosecute when prosecution is private–is one we already touched on in Chapter 5 and will return to in Chapter 13.
In the early years, computers were large standalone machines; most belonged to governments, large firms or universities. Frequently they were used by those organizations to control important real world actions–writing checks, keeping track of orders, delivering goods. The obvious tactic for the computer criminal was to get access to those machines and change the information they contained in ways that benefited him–creating fictitious orders and using them to have real goods delivered that he had not really paid for, arranging to have checks written to an account he controlled in payment for nonexistent services, or, if the computer was used by a bank, transferring money from other people's accounts to his.
As time passed, it became increasingly common for large machines to be accessible from off site over telephone lines. That was an improvement from the standpoint of the criminal. Instead of having to gain admission to a computer facility–with the risk of being caught–he could access the machine from off site, evading computer defenses rather than locked doors.
While accessing computers to steal money or stuff was the most obvious form of computer crime, there were other possibilities. One was vandalism. A discontented employee or ex-employee could crash the firm's computer or erase its data. But this was a less serious problem with computers than with other sorts of machines. If a vandal smashes your truck, you have to buy another truck. If he crashes your computer, all you have to do is reboot. Even if he wipes your hard drive you can still restore from your most recent backup, losing only the most recent data.
A more interesting possibility was extortion. In one case, a supervisor of computer operations for a large multinational firm decided that it was time to retire–in comfort. He took the reels of tape that were the mass storage for the firm's computer, the backup tapes, and the extra set of backups that were stored off site, erased the information actually in the computer, and departed. He then offered to sell the tapes–containing information that the firm needed for its ordinary functioning–back to the firm for a mere £275,000.
In a world with anonymous ecash, the payoff could have been made over the net through a remailer. In a world of strong privacy, he could have located a firm in the business of collecting payoffs and subcontracted the collection end of his project. Unfortunately for the executive, he committed his crime too early. He tried to collect the payoff himself–on a motorcycle--and was caught doing it.
Large computers controlling lots of valuable stuff still exist, but nowadays they are usually connected to networks. So are tens of millions, soon hundreds of millions, of small computers. This opens up some interesting possibilities.
A few years back, the Chaos Computer Club of Hamburg, Germany demonstrated one of them on German television. What they had written was an ActiveX control, a chunk of code downloaded from a website onto the user's computer. It was designed to work with Quicken, a widely used accounting package. One of the things Quicken can do is pay bills online. The control they demonstrated modified Quicken's files to add an additional payee. Trick a million people into downloading it, have each of them pay you ten marks a month–a small enough sum so that it might take a long time to be noticed–and retire.
One of the classic computer crime stories–possibly apocryphal–concerns a programmer who computerized a bank's accounting system. After a few months, bank officials noticed that something seemed to be wrong–a slow leakage of money. But when they checked the individual accounts, everything balanced. Eventually someone figured out the trick. The programmer had designed the system so that all rounding errors went to him. If you were supposed to receive $13.436 in interest, you got $13.43; his account got the extra 0.6 cents. It was a modest fraud–a fraction of a cent is not much money, and nobody normally worries about rounding errors anyway. But if the bank has a million accounts and pays interest daily, the total comes to about five thousand dollars a day.
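The arithmetic is easy to check: the truncated fraction of a cent averages about half a cent per account, and half a cent times a million accounts is $5,000. A quick simulation (the account count and the uniform fractional remainders are assumptions for illustration):

```python
import random

random.seed(0)
accounts = 1_000_000

# Each account's daily interest ends in some fraction of a cent,
# roughly uniform between 0 and 1 cent; truncation sends it to the thief.
skim_cents = sum(random.random() for _ in range(accounts))
daily_take = skim_cents / 100  # convert cents to dollars
# Roughly $5,000 a day: a million accounts times half a cent each.
```

No single account is ever off by as much as a penny, which is why checking the individual accounts revealed nothing.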
That sort of fraud is called a "salami scheme"–nobody notices one more thin slice missing from a salami. The Chaos Computer Club had invented a mass production version. Hardly anyone notices a leakage of a few dollars a month from his account–but with millions of accounts, it adds up fast. It is the old computer crime of tricking a computer into transferring money to you, modernized to apply to a world with lots of networked computers, each controlling fairly small resources. So far as I know, nobody has yet put this particular form of computer crime into practice, despite the public demonstration that it could be done. But someone will.
Another old crime was extortion–holding the contents of a firm's computer to ransom. The modern version could use either a downloaded ActiveX control or a computer virus–and take advantage of the power of public key encryption. Once the software gets onto the victim's computer it creates a large random number and uses it as the key to encrypt the contents of the hard drive, erasing the unencrypted version as it does so. The final step is to encrypt the key using the criminal's public key.
The next time the victim turns on his computer, the screen shows a message telling him that he can have the contents of his hard drive back for twenty dollars in anonymous ecash, sent to the criminal through a suitable remailer. The money must be accompanied by the encrypted key, which the message includes. The extortionist will send back the decrypted key and the software to decrypt the hard drive.
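The structure of the scheme can be sketched with toy numbers. This is an illustration of the logic only, not real cryptography: the RSA primes are tiny and the "disk encryption" is a toy XOR cipher.

```python
import hashlib

# Toy RSA key pair. Only the extortionist ever holds d, the private key.
p, q = 61, 53
n, e = p * q, 17                    # public key: (n, e)
d = pow(e, -1, (p - 1) * (q - 1))   # private key, never on the victim's machine

def xor_crypt(data: bytes, key: int) -> bytes:
    # Toy symmetric cipher: XOR against a keystream derived from the key.
    # Applying it twice with the same key restores the original.
    stream = hashlib.sha256(str(key).encode()).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

# On the victim's machine: pick a per-victim key, encrypt the disk,
# keep only the public-key-encrypted copy of that key.
session_key = 1234                       # would be random in practice
disk = b"victim's files"
encrypted_disk = xor_crypt(disk, session_key)
encrypted_key = pow(session_key, e, n)   # only d can undo this

# After payment: the extortionist, holding d, recovers and returns the key.
recovered_key = pow(encrypted_key, d, n)
restored = xor_crypt(encrypted_disk, recovered_key)
```

The point of the public-key step is that nothing left on the victim's machine, and nothing shared among victims, suffices to recover the key; each victim's ransom unlocks only his own disk.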
This particular scheme has two attractive features–from the standpoint of the criminal. The first is that since each victim's hard drive is encrypted with a different key, there is no way they can share the information about how to decrypt it–each must pay separately. The second is that, with lots of victims, the criminal can establish a reputation for honest dealing–after the first few cases, everyone will know that if you pay you really do get your hard drive back. So far as I know, nobody has done it yet, although there was an old case involving a less sophisticated version of the scheme, using floppy disks instead of downloads.
What else can be done in the world of lots of small networked computers? One answer is simple vandalism for the fun of it, familiar in the form of computer viruses. A more productive possibility is to imitate some of the earliest computer criminals and steal, not money, but computing power. At any instant, millions of desktop computers are twiddling their thumbs while their owners are eating lunch or thinking about what to type next. When you operate at a million instructions a second, there's a lot of time between keystrokes.
The best known attempt to harness that wasted power is SETI–the Search for Extra-Terrestrial Intelligence. It is a volunteer effort by which large numbers of individuals permit their computers, whenever they happen to be idle, to work on a small part of the immense project of searching the haystack of interstellar radio noise for the needle of information that might tell us that, somewhere in the galaxy, someone else is home. Similar efforts on a smaller scale have been used in experiments to test how hard it is to break various forms of encryption–another project that requires very large scale number crunching.
One could imagine an enterprising thief stealing a chunk of that processing power–and perhaps justifying the crime to himself on the grounds that nobody was using it anyway. The approach would be along SETI's lines, but without SETI's public presence. Download a suitable bit of software to each of several million unknowing helpers then use the Internet to share out the burden of very large computing projects among them. Charge customers for access to the world's biggest computer, while keeping its exact nature a trade secret. Think of Randy Schwartz–who, whether or not he stole trade secrets, had the reputation of grabbing all the CPU power he could get his hands on.
Nobody has done it. My guess is that nobody will, since a continuing access is too easy to detect. But a more destructive version has been implemented repeatedly. It is called a Distributed Denial of Service attack–DDOS for short. To do it, you temporarily take over a large number of networked computers and instruct each to spend all of its time trying to access a particular web page–belonging to some person or organization you disapprove of. A web server can send out copies of its web page to a lot of browsers at once, but not an unlimited number. With enough requests coming fast enough, the server is unable to handle them all and the page vanishes from the web.
We have been discussing problems due to software downloaded from a web page to a user's computer. Such software originated as the solution to a problem implicit in networked computing. The problem is server overload; the solution is distributed computing.
You have a web page that does something for the people who access it–draws a map showing them how to get to a particular address, say. Drawing that picture–getting from information on a database to a map a human being can read–takes computing power. Even if it does not take very much power, when a thousand people each want a different map drawn at the same time it adds up–and your system slows down.
Each of those people is accessing your page from his own computer. Reading a web page does not take much in the way of computing resources, so most of those computers are twiddling their thumbs–operating at far below capacity. Why not put them to work drawing maps?
The web page copies to each of the computers a little map drawing program–an ActiveX control or Java Applet. That only has to be done once. Thereafter, when the computer reads the web page, the page sends it the necessary information and it draws the map itself. Instead of putting the whole job on one busy computer it is divided up among a thousand idle computers. The same approach works for multiplayer webbed games and a great variety of other applications. It is a solution–but a solution that, as we have just seen, raises a new problem. Once that little program gets on your computer, who knows what it might do there?
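The division of labor can be sketched like this (the route format and the text rendering are invented stand-ins; a real control would draw graphics rather than print directions):

```python
# What the server sends: raw route data only (hypothetical format).
route_data = [("Main St", 2.0), ("Oak Ave", 0.5), ("Elm Dr", 1.2)]

# What runs on each client: the downloaded "map drawing" code.
def render_map(route):
    lines = [f"{i+1}. {street} for {miles} mi"
             for i, (street, miles) in enumerate(route)]
    total = sum(m for _, m in route)
    return "\n".join(lines + [f"Total: {total:.1f} mi"])

directions = render_map(route_data)  # computed on the client, not the server
```

The server's cost per user shrinks to shipping a little data; the expensive rendering runs on a thousand otherwise idle machines.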
Microsoft deals with that problem by using digital signatures, authenticated by Microsoft, to identify where each ActiveX control comes from. Microsoft's response to the Chaos Computer Club's demonstration of a new use for an ActiveX control was that there was really no problem. All a user had to do to protect himself was to tell his browser not to take controls from strangers–which he could do by an appropriate setting of the security level on Explorer.
This assumes Microsoft cannot be fooled into signing bogus code. I can think of at least two ways of doing it. One is to get a job with a respectable software company and insert a little extra code into one of their ActiveX controls–which Microsoft would then sign. The other is to start your own software company, produce useful software that makes use of an ActiveX control, add an additional unmarked feature inspired by the Chaos Computer Club, get it signed by Microsoft, put it up on the web–then close up shop and decamp for Brazil.
Sun Microsystems has a different solution to the same problem. Java Applets, their version of software for distributed computing, are only allowed to play in the sandbox–designed to have a very limited ability to affect other things in the computer, including files stored on the hard drive. One problem with that solution is that it limits the useful things an Applet can do. Another is that even Sun sometimes makes mistakes. The fence around the sandbox may not be entirely appletproof.
The odds are that both ActiveX and Applets will soon be history. Whatever form of distributed computing succeeds them will face the same problem and the same set of possible solutions. In order to be useful, it has to be able to do things on the client computer. The more it can do, the greater the possibility of doing things that the owner of that computer would disapprove of. That can be controlled either by controlling what gets downloaded and holding the firm that produced it responsible or by strictly limiting what any such software is allowed to do–Microsoft's and Sun's approaches respectively.
There are two important things to remember about the sort of problem we have been discussing. The first is that it is your computer, sitting on your desktop. A bad guy may be able to get control of it by some clever trick, by getting you to download bogus software or a virus. But you start with control–and whatever the bad guy does, you can always turn the machine off, boot from a CD, wipe the hard drive, restore from your backup and start over. The logic of the situation favors you–it is only bad software design and careless use that makes it possible for other people to take over your machine.
The second thing to remember is that this is a new world and we have just arrived. Most desktop computers are running under software originally designed for standalone machines. It is not surprising that such software frequently proves vulnerable to threats that did not exist in the environment it was designed for. As software evolves in a networked world, a lot of the current problems will gradually vanish.
We have been discussing crimes committed by a server against clients–downloading to them chunks of code that do things their owners would not approve of. I once got into an interesting conversation with someone who had precisely the opposite problem. He was in the computer gaming business–online role playing games in which large numbers of characters, each controlled by a different player, interact in a common universe, allying, fighting each other, gaining experience, becoming more powerful, acquiring enchanted swords, books of spells, and the like.
People running online games want lots of players. But as more and more players join, the burden on the server supporting the game increases–it has to keep track of the characteristics and activities of an increasing number of characters. Ideally, a single computer should keep track of everything in order to maintain a consistent universe–but there is a limit to what one computer can do.
The solution is distributed computing. Offload most of the work to the player's computer. Let it draw the pretty pictures on the screen, maps of a dungeon or a fighter's eye view of the monster he is fighting. Let it keep track of how much gold the character has, how much experience he has accumulated, what magic devices are in his pouch, what armor on his back. The server still needs to keep track of the shared fundamentals–who is where–but not the details. Now the game scales–when you double the number of players you almost double the computing power available, since the new players' computers are now sharing the load.
Like many solutions, this one comes with a problem. If my computer is keeping track of how strong my character is and what goodies he has, that information is stored on files on my hard drive. My hard drive is under my control. With a little specialized knowledge about how the information is stored–provided, perhaps, by a fellow enthusiast online–I can modify those files. Why spend hundreds of hours fighting monsters in order to become a hero with muscles of steel, lightning reactions, and a magic sword, when I can get the same result by suitably editing the file describing my character? In the online gaming world, where many players are technically sophisticated, competitive, and unscrupulous–or, if you prefer, where many players regard competitive cheating as merely another dimension of the game–it is apparently a real problem.
The server cannot keep track of all the details of all the characters, but it can probably manage one in a hundred. Pick a character at random and, while his computer is calculating what is happening to him, run a parallel calculation on the server. Follow him for a few days, checking to make sure that his characteristics remain what they should be. If they do, switch to someone else.
What if the character has mysteriously jumped twenty levels since the last time he logged off? Criminal law solves the problem of deterring offenses that are hard to detect–littering, for example–by scaling up the punishment to balance the low probability of imposing it. It should work here too.
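The arithmetic behind scaling up the punishment is simple enough to sketch. The numbers below are invented for illustration, not taken from any real game:

```python
# Deterrence by scaled punishment: if only a fraction p of cheaters is
# ever audited and caught, the punishment must be scaled up by 1/p so
# that the *expected* penalty still outweighs the gain from cheating.

def deterrent_penalty(gain_from_cheating, detection_probability):
    """Smallest penalty whose expected value offsets the cheater's gain."""
    return gain_from_cheating / detection_probability

p = 1 / 100      # the server spot-checks one character in a hundred
gain = 500       # value of the hours of play saved by editing the file
                 # (arbitrary units -- an assumption for illustration)

penalty = deterrent_penalty(gain, p)
print(penalty)   # 50000.0 -- hence wiping the character back to level 1
```

With a one-in-a-hundred chance of being checked, the punishment must be a hundred times the gain–which is why losing the whole character, not just the ill-gotten levels, is the right scale of penalty.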
I log into the game where my character, thanks to hundreds of hours of playing assisted by some careful hacking of the files that describe him, is now a level 83 mage with a spectacular collection of wands and magic rings. There is a surprise waiting:
"You wake up in the desert, wearing only a loin cloth. Clutched in your hand is a crumpled parchment."
"Look at the Parchment"
"It looks like your handwriting, but unsteady and trailing off into gibberish at the end."
"Read the Parchment"
The parchment reads:
"I shouldn't have done it. Dabbling in forbidden arts. The Demons are coming. I can feel myself pouring away. No, No, No … "Show my statistics"
Level: 1. Possessions: 1 loincloth.
Crime doesn't pay.
A few years ago, I participated in a conference called to advise a presidential panel investigating the threat of high tech terrorism. So far as I could tell, the panel originated with an exercise by the National Security Agency in which they demonstrated that, had they been bad guys, they could have done a great deal of damage by breaking into computers controlling banks, hospitals, and much else.
I left the conference uncertain whether what I had just seen was a real threat or an NSA employment project, designed to make sure that the end of the Cold War did not result in serious budget cuts. Undoubtedly a group of highly sophisticated terrorists could do a lot of damage by breaking into computers. But then, a group of sophisticated terrorists could do a lot of damage in low tech ways too. I had seen no evidence that the same team could not have done as much damage–or more–without ever touching a computer. A few years after that conference, a group of not very sophisticated terrorists demonstrated just how much damage they could do by flying airplanes into buildings. No computers required.
I did, however, come up with one positive contribution to the conference. If you really believe that foreign terrorists breaking into computers in order to commit massive sabotage is a problem, the solution is to give the people who own computers adequate incentives to protect them–to set up their software in ways that make it hard to break in. One way of doing so would be to decriminalize ordinary intrusions. If the owner of a computer cannot call the cops when he finds that some talented teenager has been rifling through his files, he has an incentive to make it harder to do so in order to protect himself. Once the computers of America are safe against Kevin Mitnick, Saddam Hussein won't have a chance.
The previous chapter dealt with the use of new technologies by criminals; this one deals with the other side of the picture. I begin by looking at ways in which new technologies can be used to enforce the law and some associated risks. I then go on–via a brief detour to the eighteenth century–to consider how technologies discussed in earlier chapters may affect not how law is enforced but by whom.
Criminals are not the only ones with access to new technologies; cops have it too. Insofar as enforcing law is a good thing, new technologies that make it easier are a good thing. But the ability to enforce the law is not an unmixed blessing–the easier it is to enforce laws, the easier it is to enforce bad laws.
There are two different ways in which our institutions can prevent governments from doing bad things. One is by making particular bad acts illegal. The other is by making them impossible. That distinction appeared back in Chapter 3, when I argued that unregulated encryption could serve as the 21st century version of the Second Amendment–a way of limiting the ability of governments to control their citizens.
For a less exotic example, consider the Fourth Amendment's restrictions on searches–the requirement of a warrant issued upon showing of reasonable cause. At least some searches under current law–wiretaps, for instance–can be done without the victim even knowing about it. What's the harm? If you have nothing to hide, why should you object?
One answer is that the ability to search anyone at any time, to tap any phone, puts too much power in the hands of law enforcement agents. Among other things, it lets them collect information irrelevant to crimes but useful for blackmailing people into doing what they are told. For similar reasons, the U.S., practically alone among developed nations, has never set up a national system of required I.D. cards–although that may have changed by the time this book is published. Such a system would make law enforcement a little easier. It would also make abuses by law enforcement easier.
The underlying theory, which I think everyone understands although few put it into words, is that if the government has only a little power, it can only do things that most of the population approves of. If it has a lot of power, it can do things that most people disapprove of–including, in the long run, converting a nominal democracy into a de facto dictatorship. Hence the delicate balance intended to provide enough power to prevent most murder and robbery but not much more.
A policeman stops me and demands to search my car. I ask him why. He replies that my description fits closely the description of a man wanted for murder. Thirty years ago, that would have been a convincing argument. It is less convincing today. The reason is not that policemen know less but that they know more.
In the average year, there are about twenty thousand murders in the U.S. With twenty thousand murders and (I am guessing) several thousand wanted suspects, practically everyone fits the description of at least one of them. Thirty years ago, a policeman would have had information only on the suspects in his immediate area. Today he can access a databank listing all of them.
Consider the same problem as it might show up in a courtroom. A rape/murder is committed in a big city. The jury is told that the defendant's DNA matches that of the perpetrator–well enough so that there is only a one in a million probability that the match would happen by chance. Obviously he is guilty–those odds easily satisfy the requirements of "beyond a reasonable doubt."
There are two problems with that conclusion. The first is that the one in a million statement is false. The reason it is false has to do not with DNA but with people. The figure was calculated on the assumption that all tests were done correctly. But we have plenty of evidence from past cases that the odds that they were not–the odds that someone in the process, whether the police officer who sent in the evidence or the lab technician who tested it, was either incompetent or dishonest–are a great deal higher than one in a million.
The second problem is not yet relevant but soon may be. To see it, imagine that we have done DNA tests on everyone in the country in order to set up a national database of DNA information, perhaps as part of a new nationwide system of I.D. cards.
Under defense questioning, more information comes out. The way the police located the suspect was by going through the DNA database. His DNA matched the evidence, he had no alibi, so they arrested him.
Now the odds that he is guilty shift down dramatically. The chance that the DNA of someone chosen at random would match the sample as closely as his did is only one in a million. But the database contains information on seventy million men in the relevant age group. By pure chance, about seventy of them will match. All we know about the defendant is that he is one of those seventy, does not have an alibi, and lives close enough to where the crime happened so that he could conceivably have committed it. There might easily be three or four people who meet all of those conditions, so the fact that the defendant is one of them is very weak evidence that he is guilty.
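A rough calculation, using the figures from the text, shows why the database trawl changes the picture:

```python
# Expected number of purely coincidental DNA matches when trawling a
# national database. Figures are the illustrative ones from the text:
# a one-in-a-million random match probability and seventy million men
# in the relevant age group.

match_probability = 1e-6
database_size = 70_000_000

expected_false_matches = match_probability * database_size
print(expected_false_matches)   # 70.0
```

The one-in-a-million figure describes the chance that one randomly chosen man matches; applied to seventy million men, it guarantees a crowd of innocent matches for the police to choose among.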
Consider the same problem in a very different context–one that has existed for the past twenty years or so. An economist interested in crime has a theory that the death penalty increases the risk to police of being killed, since cornered murder suspects have nothing to lose. To test that theory, he runs a regression–a statistical procedure designed to see how different factors affect the number of police killed in the line of duty. The death penalty is not the only factor, so he includes additional terms for variables such as the fraction of the population in high crime age groups, racial mix, poverty level, and the like. When he publishes his results, he reports that the regression fits his prediction at the .05 level: there is only one chance in twenty that the result would fit his prediction as well as it did by pure chance.
What he does not mention in the article is that the regression he reports is one of sixty that he ran–varying which other factors were included, how they were measured, how they were assumed to interact. With sixty regressions, the fact that at least one came out as he predicted does not tell us very much–by pure chance, about three of them should.
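Using the numbers above, a quick sketch of why sixty tries devalue a .05 result:

```python
# If the theory is false, each regression still has a 0.05 chance of
# looking "significant" by luck alone. Run sixty of them and report
# only the winner, and the reported significance means little.

runs = 60
alpha = 0.05

expected_by_chance = runs * alpha            # about 3 lucky regressions
p_at_least_one = 1 - (1 - alpha) ** runs     # chance of at least one

print(expected_by_chance)
print(round(p_at_least_one, 2))   # 0.95
```

With sixty tries, a "significant" result is not merely possible by chance but nearly certain.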
Fifty years ago, running a regression was a lot of work–done, if you were lucky, on an electric calculating machine that did addition, multiplication, and not much else. Doing sixty of them was not a practical option, so the fact that someone's regression fit his theory at the .05 level was evidence that the theory was right. Today, any academic–practically any schoolchild–has access to a computer that can do sixty regressions in a few minutes. That makes it easy to do a specification search–try lots of different regressions, each specifying the relationship a little differently, until you find one that works. You can even find statistical packages that do it for you. So the fact that an article reports a successful regression no longer provides much support for the author's theory. At the very least, you have to report the different specifications you tried, give a verbal summary of how they came out, and detailed results for a few of them. If you really want to persuade people you have to make your dataset freely available–ideally over the internet–and let other people run as many regressions on it in as many different ways as they want until they convince themselves that the relationship you found is really there, not an illusion created by carefully selecting which results you reported.
All of these examples–the police stop on suspicion, the DNA evidence, the specification search–involve the same issue. By increasing access to information you make it easier to find evidence for the right answer. But you also make it easier to find evidence for the wrong answer.
If you are the one looking for evidence, the additional information is an asset. The traffic cop can check his database, see that the person whose description I fit was last reported on the other side of the country, and decide not to bother stopping me. The police, having located several suspects who fit the DNA evidence, can engage in a serious attempt to see if one of them is guilty and only make an arrest if there is enough additional evidence to convict. The researcher can report his specification search–and use its results to improve his theory.
But in each case, the additional information also makes it easier to generate bogus evidence. The traffic cop who actually wants to stop me because of the color of my skin or because I have out of state plates, or in the hope that he will find something illegal and be offered a bribe not to report it, can honestly claim that I met the description of a wanted man. The D.A. who wants a good conviction rate before her next campaign for high office can report the DNA fit and omit any explanation of how it was obtained and what it really means. And the academic researcher, desperate for publications to bolster his petition for tenure, can selectively remember only those regressions that came out right. If we want to prevent such behavior, we must alter our rules and customs accordingly, raising the standard for how much evidence it takes to reflect how much easier it has become to produce evidence–even for things that are not true.
The hero of The President's Analyst (James Coburn), having spent much of the film evading various bad guys who want to kidnap him and use him to influence his star patient, has temporarily escaped his pursuers and made it to a phone booth. He calls up a friendly CIA agent (Godfrey Cambridge) to come rescue him. When he tries to leave the booth, the door won't open. Down the road comes a phone company truck loaded with booths. The truck's crane picks up the one containing the analyst, deposits it in the back, replaces it with an empty booth and drives off.
Fast forward to the debate over the digital wiretap bill–legislation pushed by the FBI to require phone companies to provide law enforcement agents facilities to tap digital phone lines. One point made by critics of the legislation was that the FBI appeared to be demanding the ability to simultaneously tap about one phone out of a hundred. While that figure was probably an exaggeration–there was disagreement as to the exact meaning of the capacity the FBI was asking for–it was not much of an exaggeration.
As the FBI pointed out, that did not mean they would be using all of that capacity. To be able to tap one percent of the phones in any particular place–say a place with lots of drug dealers–they needed the ability to tap one percent of the phones in every place. And the one percent figure would only apply in parts of the country where the FBI thought it might need such a capacity--and included not only wire taps but also less intrusive forms of surveillance, such as keeping track of who called whom but not of what they said.
At the time they made the request, wiretaps were running at a rate of under a thousand a year–not all at the same time. Even after giving the FBI the benefit of all possible doubt, the capacity they asked for was only needed if they were contemplating an enormous increase in telephone surveillance.
The FBI defended the legislation as necessary to maintain the status quo, to keep developments in communications technology from reducing the ability of law enforcement to engage in court ordered interceptions. Critics argued that there was no evidence such a problem existed. My own suspicion is that the proposal was indeed motivated by technology–but not that technology.
The first step is to ask why, if phone taps are as useful as law enforcement spokesmen claim, there are so few of them and they produce so few convictions. The figure for 1995 was a total of 1058 authorized interceptions at all levels–Federal, state, and local. They were responsible for a total of 494 convictions, mostly for drug offenses. Total drug convictions for that year, at the Federal level alone, were over 16,000.
The answer is not the reluctance of courts to authorize wiretaps. The National Security Agency, after all, gets its wiretaps authorized by a special court, widely reported to have never turned down a request. The answer is that wiretaps are very expensive. Some rough calculations by Robin Hanson suggest that on average, in 1993, they cost more than fifty thousand dollars each. Most of that was the cost of labor–police officers' time listening to 1.7 million conversations at a cost of about $32/conversation.
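Hanson's estimate can be roughly checked from the figures quoted, treating the 1995 count of 1058 taps as a stand-in for the 1993 figure–an approximation of mine, not the text's:

```python
# Back-of-the-envelope check of the wiretap cost figures in the text.

conversations = 1_700_000
cost_per_conversation = 32     # dollars, mostly officers' listening time
authorized_taps = 1058         # the 1995 count, used as a rough proxy

total_cost = conversations * cost_per_conversation
cost_per_tap = total_cost / authorized_taps

print(total_cost)              # 54400000
print(round(cost_per_tap))     # 51418 -- "more than fifty thousand dollars"
```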
That problem has been solved. Software to convert speech into text is now widely available on the market. You no longer need a human being on one end of the wire. Instead you can have a computer listen, convert the speech to text, search the text for key words and phrases, and notify a human being if it gets a hit. Current commercial software is not very reliable unless it has first been trained by the user to his voice. But an error level that would be intolerable for using a computer to take dictation is more than adequate to pick up key words in a conversation. And the software is getting better.
Computers work cheap. If we assume that the average American spends half an hour a day on the phone–a number created out of thin air by averaging in two hours for teenagers and ten minutes for everyone else–that gives, on average, about six million phone conversations at any one time. Taking advantage of the wonders of mass production, it should be possible to produce enough dedicated computers to handle all of that for less than a billion dollars.
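The back-of-the-envelope arithmetic, with an assumed U.S. population of roughly 290 million–my number, not the text's:

```python
# The "computers work cheap" arithmetic, using the text's admittedly
# made-up input of half an hour a day on the phone, averaged over
# everyone, and an assumed population of about 290 million.

population = 290_000_000          # rough U.S. population -- assumption
hours_on_phone_per_day = 0.5

# Fraction of the day spent on the phone times population gives the
# number of people on the phone at any one moment.
people_on_phone_now = population * hours_on_phone_per_day / 24

print(round(people_on_phone_now / 1e6, 1))   # 6.0 -- about six million
```

Half an hour out of twenty-four is about one part in fifty, and one fiftieth of the population is the roughly six million figure in the text.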
Law enforcement agencies still have to get court orders for all of those wiretaps–and however friendly the courts may be, persuading judges that every phone in the country needs to be tapped, including theirs, might be a problem.
Or perhaps not. A computer wiretap is not really an invasion of privacy–nobody is listening. Why should it require a search warrant? If I were an attorney for the FBI, facing a friendly judiciary, I would argue that a computerized tap is at most equivalent to a pen register, which keeps track of who calls whom and does not currently require a warrant. The tap only rises to the level of a search when a human being listens to the recorded conversation. Before doing so, the human being will, of course, go to a judge, offer the judge the computer's report on key words and phrases detected, and use that evidence to obtain a warrant. Thus law enforcement will be free to tap all our phones without recourse to the court system–until, of course, it finds evidence that we are doing something wrong. If we are doing nothing wrong, only a computer will hear our words–so why worry? What do we have to hide?
In the wake of the attack on the World Trade Center there has been political pressure to establish a national system of I.D. cards; currently (defined by when I write not when you read) it is unclear whether it will succeed. In the long run it may not matter very much. Each of us already carries with him a variety of built-in identification cards–face, fingerprints, retinal patterns, DNA. Given adequate technologies for reading that information, a paper card is superfluous.
In low density populations, face alone is enough. Nobody needs to ask a neighbor for identification because everybody already knows everybody else. That system breaks down in the big city because we are not equipped to store and search a million faces.
We could be. Facial recognition software is already pretty good and getting better. As it gets better, there is no technical reason why someone, most probably law enforcement, could not compile a database of every face in the country, along with associated information. Point the camera at someone and read off name, age, citizenship, criminal history, and whatever else is in the database.
Faces are an imperfect form of identification, since there are ways to change your appearance. Fingerprints are better. There already exist commercial devices to recognize fingerprints, used to control access to laptop computers–if the wrong person puts his finger on the sensor, the computer refuses to give him access. I do not know how close we are to an inexpensive fingerprint reader, matched with a filing system, but it does not seem like an inherently difficult problem. Nor does the equivalent using a scan of retinal patterns. Cheap DNA recognition is a little further off–but there too, technology has been progressing rapidly.
We could make laws forbidding law enforcement from compiling and using such databases, but it does not seem likely that we will, given the obvious usefulness of the technology in doing the job we want them to do. Even if we did forbid it, enforcing the ban, against both law enforcement and everyone else, would be difficult. When the Social Security system was set up, the legislation explicitly forbade the use of the Social Security number as a national identifier. Nonetheless, the Federal government--and a lot of other people--routinely ask you for it. Even if there is no official national database of faces, each police department will have its own collection of faces that interest it. If expanding that collection is cheap–and it will be–"interest" will become a weaker and weaker requirement. And there is nothing to stop different police departments from talking to each other–especially in a world of widely available high speed networks.
Most of us think of law enforcement as almost entirely the province of government. In fact it is not and, so far as I know, never has been. Total employment in private crime prevention–security guards, burglar alarm installers, and the like–has long been greater than in public law enforcement. Catching and prosecuting criminals is almost entirely done by agents of government, but that is only because crime is defined as the particular sort of offense that is prosecuted by the government. Precisely the same action–killing your wife, for example–can be prosecuted either by the state as a crime or by private parties as a tort.
Consider, for one of my favorite examples, criminal prosecution in 18th century England. On paper, their legal system made the same distinction between crimes and torts that ours does. A crime was an offense against the crown–the case was Rex v Friedman.
The crown owned the case, but it did not prosecute it. England in the 18th century had no police as we understand the word–no professionals employed by government to catch and convict criminals. There were constables–sometimes unpaid–with powers of arrest, but figuring out who to arrest was not part of their job description. It was not until the 1830's that Robert Peel created the first English police force. Not only were there no police, there were no public prosecutors either–the equivalent of the District Attorney in the modern American system did not exist in England until the 1870's, although for some decades prior to that police officers functioned as de facto prosecutors.
With neither police nor public prosecutors, criminal prosecution was necessarily private. The legal rule was that any Englishman could prosecute any crime. In practice, prosecution was usually by the victim or his agent.
That raises an obvious puzzle. When I sue someone under tort law, at least I have the hope of winning and being paid damages–with luck more than enough to cover my legal bills. But a private prosecutor under criminal law had no such incentive. If he got a conviction the criminal would be hanged, transported, permitted to enlist in the armed services, or pardoned–none of which put any money in the prosecutor's pocket. So why did anyone bother to prosecute?
One answer is that the victim prosecuted in order to deter–not crimes in general but crimes against himself. That makes sense if he is a repeat player–the owner of a store or factory at continual risk from thieves. Hang one and the others will get the message. That is why, even today, in a system where prosecution is nominally entirely public, department stores have signs announcing that they prosecute shoplifters. Arguably it is why Intel prosecuted Randal Schwartz.
Most potential victims were not repeat players. For them, the 18th century English came up with an ingenious solution–societies for the prosecution of felons. There were thousands of them. The members of each contributed a small sum to a pooled fund, available to pay the cost of prosecuting a felony committed against any member of the society. The names of the members were published in the newspaper for the felons to read. Potential victims thus precommitted themselves to prosecute. They had made deterrence into a private good.
That set of institutions was eventually abandoned. One possible explanation is that, in order for it to work, criminals had to know their victims–at least well enough to know whether the victim either had a reputation for prosecuting or was a member of a prosecution association. As England became increasingly urbanized, crime became increasingly anonymous. It did no good to join a prosecution association and publish your membership in the local paper if the burglar didn't know your name. Another possibility is that the police were a solution to a different problem—not preventing ordinary crime but making sure that the French Revolution, or something similar, did not happen in England.
One consequence of modern information processing technology is the end of anonymity, at least in realspace. Public information about you is now truly public–not only is it out there, anyone who wants can find it. In an earlier chapter, I discussed that in the context of privacy--privacy through obscurity is no longer an option.
Consider our earlier discussion of how to handle unauthorized access to computers. One problem with using tort law is inadequate incentive to prosecute–the random cracker probably does not have enough resources to pay the cost of catching and convicting him. That problem was solved two hundred and fifty years ago. Under criminal law there were no damages to collect, so 18th century Englishmen found a different incentive--private deterrence. The same approach could work for us.
Consider the online equivalent of a society for the prosecution of felons. Subscribers pay an annual fee, in exchange for which they are guaranteed prosecutorial services if someone accesses their computer in ways that impose costs on them. The names of subscribers–and their I.P. addresses–are posted on a web page, for prudent crackers to read and avoid. If the benefit of deterrence is worth the cost, there should be lots of customers. If it is not, why provide deterrence at the taxpayers' expense?
There remains one problem. Under ordinary tort law the penalty is either the damage done or the largest amount the offender can pay, whichever is less. If computer intruders are hard to catch, that penalty might not be adequate to deter them. One time out of ten, the intruder must pay for his damage–if he can. The other nine times he goes free.
Criminal law solves that problem by permitting penalties larger, sometimes much larger, than the damage done, thus making up for the fact that only some fraction of offenders are caught, convicted, and punished. Punitive damages in tort law achieve the same effect. But punitive damages are, and criminal punishment is not, limited by the assets of the offender–the criminal law can impose non-monetary punishments such as imprisonment.
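A toy calculation, with invented numbers, shows how the asset cap undermines deterrence when detection is rare:

```python
# Hypothetical figures for illustration: the intruder does $10,000 of
# damage, is caught one time in ten, and owns only $15,000.

damage = 10_000
p_caught = 0.1
offender_assets = 15_000

needed_penalty = damage / p_caught                   # 100,000 to deter
tort_penalty = min(needed_penalty, offender_assets)  # capped at his assets

expected_tort_penalty = p_caught * tort_penalty
print(needed_penalty)           # 100000.0
print(expected_tort_penalty)    # 1500.0 -- far below the damage done
```

Capped at what the offender can pay, the expected penalty falls far short of the damage, which is where non-monetary criminal punishments come in.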
So we have two possibilities for private enforcement of legal rules against unauthorized access. One is to use ordinary tort law, with private deterrence as the incentive to prosecute. That works so long as the assets of offenders are large enough so that taking them via a tort suit is an adequate punishment to deter most offenses. The other is to go all the way back to the 18th century–private prosecution with criminal penalties.
I have discussed the problems with private prosecution–what are the advantages? The main one is the advantage that private enterprise usually has over state enterprise. The proprietors of an online prosecution firm are selling a service to their customers on a competitive market. The better they do their job, the more likely they are to make money. If costs are high and quality low, they will not have the option of getting bailed out by the taxpayers.
The argument applies to more than the defense of computers against unwanted intruders. Information processing technology eliminates the anonymity that urbanization created–in that respect, at least, it puts us back in villages. Doing so eliminates what was arguably the chief reason for the shift from private to public prosecution–of all crime.
My interest in this possibility was in part inspired by history and economic theory, in part by an online news story that I came across a few years back. It involved a large scale credit card fraud whose perpetrator had just been brought to justice. The arrest was made by the FBI, but the job of catching him had been done by his victims–using the internet to coordinate their efforts. Open Source crime control.
That story suggests another way in which modern technology may make private law enforcement more practical than it has been in the recent past. Many crimes involve a single criminal but multiple victims. Each victim has reasons, practical and moral, to want the criminal caught, but no one victim can do the job on his own. The internet, by drastically reducing the cost of finding fellow victims and coordinating with them, helps solve that problem.
Through most of the past century, improved reproductive technology has consisted mostly of better ways of not reproducing. Better contraception has been accompanied by striking changes in human mating patterns: a steep decline in traditional marriage, a corresponding increase in non-marital sex and, perhaps surprisingly, extraordinarily high rates of childbirth outside of marriage.
While the long term consequences of reliable contraception will continue to play out over the next few decades, they will not be discussed here. This chapter deals with more recent developments in the technology of human reproduction.
Human mating patterns have varied a good deal across time and space, but long term monogamy is far and away the most common. This pattern--male and female forming a mated pair and remaining together for an extended period of time--is uncommon in other mammalian species. It is, oddly enough, very common among birds. Swans and geese, for example, have long been known to mate for life.
Modern research has shown that the behavior of most varieties of mated birds is even closer to that of humans than we once supposed. As with humans, the norm is monogamy tempered by adultery. While a mated pair will raise successive families of chicks together, a significant fraction of those chicks--genetic testing suggests figures from ten to forty percent--are not the offspring of the male member of the pair. Similar experiments are harder to arrange with humans, but such work as has been done suggests that some significant percentage of the children of married women cohabiting with their husbands are fathered by someone else.
From an evolutionary standpoint, the logic of the situation is clear. Males play two different roles in human (and avian) reproduction. They contribute genes to help produce children and resources to help rear them. The latter contribution is costly, the former is not. A male who can successfully impregnate another male's mate gets reproductive success--more copies of his genes in the next generation--at negligible cost. So it is not surprising that males, whether men or ganders, invest substantial effort both in attempting to impregnate females they are not mated to and in attempting to keep the females they are mated to from being impregnated by other males.
A faithful female gets both genes and support from her mate--trades her contribution to producing offspring for his. But an unfaithful female can do even better. She mates with the best provider who will have her and then, given the opportunity, becomes pregnant by the highest quality male available, where "quality" is defined by whatever observable characteristics signal heritable characteristics that can be expected to result in reproductive success for her offspring--tail length in swallows, income and status in humans. In Henry Kissinger's words, "Power is the ultimate aphrodisiac."
This strategy works, for geese and women, because of a curious feature of our biology--the inability of males to reliably identify their offspring. If that were not the case--if males were equipped with some built-in system of biometric identification based on scent, appearance, or the like--they could and would refuse to provide support for the offspring of other males.
This feature of human biology has just vanished. Paternity testing now does what evolution failed to do--provides men a reliable way of determining which children are theirs. What are the likely consequences?
The obvious consequence is that some men would discover that their wives had been unfaithful and some marriages would break up as a result. The slightly less obvious consequence is that married women conducting affairs would take more care with contraception. The still less obvious consequence--except to economists and evolutionary biologists--is that men would take better care of their children.
From the economist's standpoint, the reason is that people value the welfare of their own offspring above the welfare of other people's offspring. From the biologist's standpoint, the reason is that human beings, like other living creatures, have been designed by evolution to act in ways that maximize their reproductive success--and one way of doing so is to take better care of your own children than of other people's. Either way, the conclusion is the same. Routine paternity testing would mean that men knew that their children were really theirs and so would be willing to invest more resources in them. They would invest less in children that turned out not to be theirs--but there would be fewer of those than before, due to the desire of wives to have children that their husbands will help support. And those children that did have a father who was not their mother's husband could prove it, and so have at least a hope of support from him.
Readers who question the assumption that parents are biased in favor of their own children might want to look at the literary evidence. Across a very wide variety of cultures, it is taken for granted that step parents cannot be trusted to care for their step children. And, going beyond our species, there is evidence that male birds adjust the amount of parental care they give chicks to take account of the probability that the chicks are not theirs.
So far I have been considering a straightforward consequence of the combination of a new technology and a new social practice. The technology has already happened; the practice, so far, has not changed in response.
The law, however, has. Under Lord Mansfield's rule, a common law doctrine going back to the eighteenth century, a married man cohabiting with his spouse was legally forbidden from challenging the legitimacy of her offspring. This appears in modern statutes as the rule that the mother of a child is the woman from whose body the child is born and, if that woman was married and cohabiting with her husband when the child was conceived, he is conclusively presumed to be the father.
That was a reasonable legal rule as long as there was no practical way of demonstrating paternity. Most of the time it gave the right answer. When it did not, there was usually no good way of doing better and no point in using up time, effort and marital good will trying.
It is no longer a reasonable legal rule and, increasingly, it is no longer the rule embodied in modern statutes. In California, for example, a state whose family law we shall be returning to at the end of this chapter, the current statute provides that the presumption may be rebutted by scientific evidence that the husband is not the father.
So much for the present and the immediate future. A more interesting question is the long term effect of the technology. One function of the marriage institutions of most human societies we know of, past and present, is to give males a reasonable confidence of paternity by providing that under most circumstances no more than one male has sexual access to each female. With modern paternity testing, that is no longer necessary.
Which raises some interesting possibilities. We could, at one extreme, have a society of casual promiscuity--Samoa, at least as imagined by Margaret Mead. When a child was born, the biological father, as demonstrated by paternity testing, would have the relevant parental rights and responsibilities.
There are problems with that system. It is easier for two parents to raise a child jointly if they are living together--and the fact that a couple enjoy sleeping together is very weak evidence that they will enjoy living together. An alternative that is both more attractive and more interesting is some form of group marriage--three or more people living together and rearing children together. Such arrangements have been attempted in the past and no doubt some currently exist. The only form that has ever been common--polygyny, one husband with several wives--is the one that does not require paternity testing to determine paternity. The question is whether other forms will now become more common.
That in turn comes down to a simple question to which I do not know the answer: Is male sexual jealousy hard wired? Do men object to other men sleeping with their mates because evolution has built into them a strong desire for sexual exclusivity or because they have chosen, or been taught, that strategy as a way of achieving the (evolutionarily determined) objective of not spending their resources rearing another man's children? Weak evidence for the latter explanation is provided by an anthropologist's observation that men spent less time monitoring their wives when the wives were pregnant, hence could not conceive.
One person I have discussed the question with reported that he and people he knew did not experience sexual jealousy; readers interested in joining that discussion should be able to find him and some of his friends on the Usenet newsgroup alt.polyamory, which means what it sounds like. But what he was observing may have been only the tail of the distribution--the small fraction of men who, because they have abnormally low levels of sexual jealousy, are willing to experiment with unconventional mating patterns.
Suppose that male sexual jealousy is hardwired. There still remains an interesting possibility--the professional mother. Consider a woman who likes children, is good at bearing and rearing them, and herself has characteristics that men would like in the mother of their child--healthy, intelligent, good looking. Match her up with men who would like to have children but have not been successful in finding a willing mate with whom they would like to have them. There is an obvious possibility for an exchange that benefits both parties. The man fathers the child, whether by artificial insemination or more traditional means. The woman bears and rears the child. The man provides financial support and perhaps a paternal role.
Eugenics, the idea of improving the human species by selective breeding, was supported by quite a lot of people in the late nineteenth and early twentieth centuries. Currently it ranks, in the rhetoric of controversy, only a little above "Nazi." Almost any reproductive technology capable of benefiting future generations is at risk of being attacked as "eugenics" by its opponents.
That argument confuses, sometimes deliberately, two quite different ways of achieving similar objectives. One is to treat human beings like dogs or race horses--have someone, presumably the state, decide which ones get to reproduce in order to improve the breed. Such a policy involves forcing people who want to have children not to do so--and perhaps forcing people who do not want to have children to do so. In addition, it imposes the eugenic planner's desires on everyone--and there is no reason to assume that the result would be an improvement by other people's standards. A prudent state might decide that submissiveness, obedience to authority, and similar characteristics were what it wanted to breed for.
The alternative is what I think of as libertarian eugenics. The earliest description I know of is in a science fiction novel--Beyond This Horizon by Robert Heinlein, arguably one of the ablest and most innovative science fiction writers of the century.
In Heinlein's story, genetic technology is used to control which, among the children a given couple might have, they do have. The control is exercised not by the state but by the parents. They, assisted by expert advice, select among the eggs produced by the wife and the sperm produced by the husband the particular combination of egg and sperm that will produce the child they most want to have--the one that does not carry the husband's gene for a bad heart or the wife's for poor circulation but does carry the husband's good coordination and the wife's musical ability. Thus each couple gets its own child--yet characteristics that parents don't want their children to have are gradually eliminated from the gene pool. Since the planning is done by each set of parents for its own children, not by someone for everyone, it should maintain a high degree of genetic diversity--different parents want different things. And since parents, unlike state planners, can usually be trusted to care a great deal about the welfare of their children, the technology should mostly be used to benefit the next generation, not to exploit it.
Heinlein's technology does not exist but its result, in a crude form, does. The current and more primitive method is for a woman to conceive, obtain fetal cells by extracting amniotic fluid ("amniocentesis"), have the cells checked to see if they carry any serious genetic defect, and abort the fetus if they do.
A version which eliminates the emotional (some would say moral) costs of abortion is now coming into use. Obtain eggs from the intended mother, sperm from the intended father. Fertilize in vitro--outside the mother's body. Let the fertilized eggs grow to the eight cell level. Extract one cell--which at that point can be done without damage to the rest. Analyze its genes. Select from the fertilized eggs one that does not carry whatever serious genetic defect they are trying to avoid. Implant that egg in the mother.
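The selection step in the procedure above can be sketched in a few lines of code. This is a hypothetical illustration only: the embryo records, marker names, and the `select_embryo` helper are invented for the example, and real genetic screening is of course far more involved than a set lookup.

```python
# Hypothetical sketch of the selection step: screen fertilized eggs and
# pick one that carries none of the genetic markers to be avoided.
# All data and marker names here are invented for illustration.

def select_embryo(embryos, avoid_genes):
    """Return the first embryo carrying none of the markers in avoid_genes,
    or None if every candidate carries at least one of them."""
    for embryo in embryos:
        if not (embryo["genes"] & avoid_genes):  # set intersection is empty
            return embryo
    return None

candidates = [
    {"id": 1, "genes": {"cf_mutation"}},            # carries the defect
    {"id": 2, "genes": set()},                      # clear
    {"id": 3, "genes": {"cf_mutation", "other"}},   # carries the defect
]

chosen = select_embryo(candidates, avoid_genes={"cf_mutation"})
print(chosen["id"])  # embryo 2, the first candidate without the defect
```

The logic also makes the chapter's later point visible: as testing improves, `avoid_genes` can grow from a handful of serious disease markers to a much longer list of traits, without the selection procedure itself changing.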
At present there are two major limitations to this process. The first is that in vitro fertilization is still a difficult and expensive procedure. The second is that genetic testing is a new technology, so only a small number of genetic characteristics can actually be identified in the cell. Some genetic diseases, yes--musical ability or intelligence, no. Given current rates of progress, the second limitation is likely to be rapidly reduced over the next decade or two. We will then be in a world where at least some people are able to deliberately produce "the best and the brightest"--of the children those people could have had. That ability will be greatly increased when and if we get the ability to determine the genetic structure of egg and sperm before they are combined, greatly increasing the number of alternatives that parents can choose among.
So far I have been considering a reproductive technology that already exists, although at a fairly primitive level--selecting among the fertilized eggs produced by a single couple. We come next to some newer technologies. The one that has gotten most of the attention is cloning--producing an individual who is genetically identical to another. One form is natural and fairly common; identical twins are genetically identical to each other. The same effect has been produced artificially in animal breeding: get a single fertilized egg, from it produce multiple fertilized eggs, implant them to produce multiple genetically identical offspring.
The form of cloning which has recently become controversial starts instead with an adult cell and uses it to produce a baby that is the identical twin of that adult. Much of the initial hostility to the technology seemed to be rooted in the bizarre belief that cloning replicates an adult--that, after I am cloned, one of me can finish writing this chapter while the other puts my children to bed. That is not how cloning works--although we will discuss something very similar in a later chapter, where the copying will be into silicon instead of carbon.
Another technology, a little further into the future, is genetic engineering. If we knew enough about how genes work and how to manipulate them, it might be possible to take genetic material from sperm, eggs, or adult cells contributed by two or more individuals and combine it, producing a single individual with a tailor made selection of genes.
Sexual reproduction already combines in us genes from our parents. Genetic engineering would let us choose which genes came from which, instead of accepting a random selection or, with a more advanced technology, choosing from among several random selections. It would also let us combine genes from more than two individuals without taking multiple generations to do it--and genes from other species. Primitive versions of the technology have already been used successfully to insert genes from one species of plant or animal into a different species.
Another possibility is to create artificial genes, perhaps an entire additional chromosome. Such genes would be designed to do things within our cells that we wanted done--prevent aging, say, or fight AIDS--but that no existing gene did. Constructing them would be a project at the intersection of biotechnology and nanotechnology.
Current and near future technologies to control what sort of children we have depend on in vitro fertilization (IVF)--the process by which an egg is removed from the mother's body to be fertilized ("in vitro" means "in glass"), then implanted. That technology was developed to make it possible for otherwise infertile women to have children. It also makes possible artificial cloning of eggs, by letting the fertilized egg divide and then separating it into two. It makes possible cloning of adult cells, by replacing the nucleus of a fertilized egg with a nucleus from an adult cell. And it may yet make possible genetic engineering and artificial genes. It has also already made possible true host mothers, bearing children produced from other women's fertilized eggs.
Other new technologies may make possible reproduction by a different sort of infertile parents--same sex couples. At present a pair of women who wish to rear a child can, in at least some states, adopt one. Alternatively, one of the women can bear a child using donated sperm. But they cannot do what most other couples desiring children do--produce a child who is the genetic offspring of both of them. The closest that can be managed with traditional technology is to use sperm donated by a father or brother of one to inseminate the other, producing a child who is, genetically speaking, half one of them and a quarter the other.
That situation is changing. Techniques have been developed for producing artificial sperm containing genetic material from an adult cell. They may make it possible in the fairly near future for two women to produce a child who is in the full sense theirs. At some point an analogous technology might make possible artificial eggs, permitting two men to produce a child who is in the same sense theirs--with the assistance of a borrowed womb.
New technologies make it possible to do new things; there remains the question of whether they are worth doing. In the case of reproductive technology, the initial driving force, still important, was the desire of people to have their own children. From that we get IVF and the use of host mothers--to permit a mother unable to bring her fetus to term to get someone else to do it for her. The desire to have your own children also provides a possible incentive for cloning--to permit a couple unable to produce a child of both (because one is infertile) to produce instead a child who is an identical twin of one--and for technologies to allow single sex couples to reproduce.
A second and increasingly important motive is the desire to have better children. In the early stages of the technology this means avoiding the catastrophe of serious genetic defects. As the technology gets better, it opens the possibility of eliminating less serious defects--the risk of a bad heart, say, which seems to be in part genetic, or alcoholism which may well be--and selecting in favor of desirable characteristics. Parents want their children to be happy, healthy, smart, strong, beautiful, and these technologies provide one way of improving the odds.
One can imagine the technologies used for other purposes. A dictatorial government might try to engineer the entire population--to breed some characteristic, say aggressiveness or resistance to authority, out of it. A less ambitious government might use cloning to produce multiple copies of the perfect soldier, or secret policeman, or scientific researcher, or dictator--although multiple identical dictators might be asking for trouble.
Such scenarios are more plausible as movie plots than as policies. The problem is that it takes about twenty years to produce an adult human; few real world governments can afford to plan that far ahead. And while a clone will be genetically identical to the donor, its environment will not be identical, so while cloning produces a more predictable result than sexual reproduction, it is far from perfectly predictable. And getting your soldiers, secret police, or scientists the old fashioned way has the advantage of letting you select them from a large population of people already adult and observable.
One further argument against the idea is that if it is an attractive strategy for a dictatorial state it ought already to have happened. Selective breeding of animals is a very old technology. Yet I know of no past society which made any serious large scale attempt at selective breeding of humans in order to produce in the ruled traits desired by the rulers. Insofar as we have observed selective breeding of humans it has been at the individual or family level--people choosing mates for themselves or their children in part on the basis of what sort of children they think those mates will help produce.
A more serious danger is the exploitation of cloned children at the individual level. In a version sometimes offered as an argument against cloning humans, an adult produces a clone of himself in order to disassemble it for body parts to be used for future transplants. The obvious problem with the argument is that even if the cloning were legal, the disassembly would not be--in the U.S. at present or in any reasonably similar society. But one can imagine a future society in which it was. On the other hand, the process again involves a substantial time lag--and becomes steadily less useful as improved medical technology reduces the problems of transplant rejection.
There has been at least one real world case distantly analogous to this, however. Looking at it suggests that producing a human being at least partly to provide tissue for transplant may not be such an ugly idea as it at first seems.
In 1988, Anissa Ayala, then a high school sophomore, was diagnosed with a slow progressing but ultimately fatal form of leukemia. Her only hope was a treatment that would kill off all her existing blood stem cells then replace them by a transplant from a compatible donor. The odds that a random donor would be compatible were about one in twenty thousand.
Her parents spent two years in an unsuccessful search for a compatible donor, then decided to try to produce one. The odds were not good. A second child would have only a 25% chance of compatibility. Even with a compatible donor the procedure had a survival probability of only 70%. The mother was already forty-two, the father had been vasectomized.
The alternative was worse; Anissa's parents took the gamble. The vasectomy was successfully reversed. Their second daughter, Marissa, was born--and compatible. Fourteen months later she donated the bone marrow that--as she put it five years later in a television interview--saved her sister's life.
Marissa was produced by conventional methods--the controversial element, loudly condemned by a variety of bioethicists, was producing a child in the hope that she could donate the bone marrow required to save another. But cloning, had it been practical, would have raised the odds of a match from 25% to 100%.
For another potentially controversial use of cloning, consider parents whose small child has just been killed in an auto accident. Parents have a very large emotional investment in their children--not children in the abstract but this particular small person whom they love. Cloning could let them, in a real although incomplete sense, get her back--produce a second child very nearly identical to the first.
Reproductive technologies--most recently cloning, earlier contraception, IVF and artificial insemination--have aroused widespread opposition. One reason--the idea that such a technology might be used by a dictatorial state in a variety of ways--I have already dismissed as implausible. There are at least three others.
The first is the Yecch factor. New technologies involving things as intimate as reproduction feel weird, unnatural and, for many people, frightening and ugly. That was true for contraception, it was true for IVF and artificial insemination, it is strikingly true for cloning, and will no doubt be true for genetic engineering when and if we can do it. That reaction may slow the introduction of new reproductive technologies but is unlikely to prevent it, so long as those technologies make it possible for people to do things they very much want to do.
A second reason is that new technologies usually do not work very well at first. Judging by experience so far with cloning large mammals, if someone tries tomorrow to clone a human it will take many unsuccessful tries to produce one live infant, and that infant may suffer from a variety of problems. That is a strong argument against cloning a human being today. But it is an argument that will get weaker and weaker as further experiments in cloning other large mammals produce more and more information about how to do it right.
Consider a very simple example--gender selection. Parents often have a preference as to whether they want a boy or a girl. The simplest technology to give them what they want--selective infanticide--has been in use for thousands of years. A less costly alternative--selective abortion--is now used extensively in some parts of the world. And we now have ways to substantially alter the odds of producing male or female offspring by less drastic methods. As such techniques become more reliable and more widely available, we will move towards a world where parents have almost complete control over the gender of the offspring they produce. What will be the consequences?
For the most extreme answer, consider the situation under China's one child policy--imposed on a society where families strongly desire at least one son. The result is that a substantial majority of the children born are male; some estimates suggest about 120 boys for 100 girls. A similar but weaker effect has occurred in India even without a restriction on number of children--recent figures suggest about 107 boys for 100 girls. With better technologies for gender selection the ratios would be higher. The consequence is likely to be societies where many men have difficulty finding a wife.
The problem may be self correcting--with a time lag. In a society with a high male to female ratio women are in a strong bargaining position, able to take their pick of mates and demand favorable terms in marriage. As that becomes clear, it will increase the payoff to producing daughters. There is not a lot of point to preserving the family name by having a son if he cannot find a woman willing to produce grandchildren for you. A high ratio of men to women might also result in a shift in mating patterns in the direction of polyandry--two or more husbands sharing the same wife. Even without changes in marriage laws there is still the possibility of serial polyandry. A woman marries one man, produces a child for him, divorces him and marries a second husband.
What about technologies allowing parents to choose among the children they might have, or even to add useful genes, perhaps artificial, that neither parent carries? Lee Silver, a mouse geneticist and the author of a fascinating book on reproductive technology, worries that the long term result might be a society divided into two classes--generich, the genetically superior descendants of people who could afford to use new technologies to produce superior offspring, and genepoor.
There are two reasons this is not likely to happen. The first is that human generations are long and technological change is fast. We might have a decade or two in which high income people have substantially better opportunities to select their children than low income people. After that the new technology, like many old technologies, will probably become inexpensive enough to be available to almost anyone who really wants it. It was not that long ago, after all, that television was a new technology restricted to well off people. Currently, about ninety-seven percent of American families below the poverty line own at least one color television.
The second reason is that human mating is not strictly intraclass. Rich men sometimes marry poor women and vice versa. Even without marriage, if rich men are believed to carry superior genes--as, after a few generations of Lee Silver's hypothetical future, they would be--that is one more reason for less rich women to conceive by them, a pattern that, however offensive to egalitarian sensibilities, is historically common. Put in economic terms, sperm is a free good, hence provides a low cost way of obtaining high quality genes for one's offspring. I doubt we will get that far, but if we do we can rely on the traditional human mating pattern--monogamy tempered by adultery--to blur any sharp genetic lines between social or economic classes.
Whether or not new reproductive technologies are going to generate new problems for people in the future, they are already producing problems for the legal system and the social institutions in which it is embedded.
The first, and currently the biggest, is the paternity problem. State welfare agencies and unmarried or no-longer-married mothers would like to find a man who can be made responsible for supporting the mothers' children. If their genetic father can be identified, he is the obvious candidate. But what if he cannot?
The view of some states is that the man who was the mother's mate ought to be responsible whether or not he is the actual father--a reversion to Lord Mansfield's rule, extended to cover unmarried couples. It was workable before modern paternity testing because the state could argue that the man she had been living with, perhaps married to, was likely to be the genetic father of her children, even if he denied it. That argument no longer works now that paternity can be reliably determined.
The obvious argument on the other side is that a man who has been cuckolded by his wife is already a victim of her betrayal; to make him responsible for supporting someone else's child only adds injury to insult. The counter argument is that even if the mother is at fault her child is not--and someone has to support it. That argument becomes more convincing if the man has functioned as father to the child for long enough to establish an emotional bond between the two.
One possibility a little farther down the road is a genetic database that could be used to identify the genetic father and make him liable--bringing us back to the idea of paternity testing at birth. A less ambitious alternative is to require the mother to identify father or possible fathers and have the state compel him to permit testing. But if the mother is unwilling or unable to identify the father, we are back with the problem of who, other than the state, can be made responsible.
Paternity testing could also create problems for men who, under current law, are not responsible for supporting their genetic children. In many states, if a woman conceives by artificial insemination using sperm from a sperm bank, she has no claims for support from the donor. Prior to paternity testing, that legal rule could be enforced by simply not keeping the relevant records. Today the records establishing paternity are stamped into every cell of father and child. If law and custom change, as they have changed in the past in the direction of making it easier for adopted children to locate birth parents, including ones who do not want to be located, some men may be in for a surprise.
Paternity testing can establish the fact of paternity. But it does not tell us what paternity, or maternity, or parenthood, means. We can do a better job than in the past of determining who has what relation to a child. But we can also produce a more complicated set of relations, making it harder to fit the new reality into the old law.
With current technology and practice, the term "mother" has at least three different meanings. One is intentional mother--the woman who intended to play the social role of mother when the arrangements for producing the child were made. One is womb mother--the mother in whose womb the fetus grew. A third is egg mother--the mother who provided the egg. Once we start cloning humans a fourth category will be mitochondrial mother--the woman who provides the egg whose nucleus is replaced by the nucleus of a cell from the clone donor, retaining the woman's extranuclear DNA.
Fathers still come in only two varieties. The intentional father is the man who intended to play the social role of father; the biological father is the man who provided the sperm. The one change associated with the newer technology is that it is more common than it used to be for the intentional father to know he is not the biological father. With three or four varieties of mother and two of father, the definition of "parents" becomes distinctly ambiguous.
That problem was mentioned back in Chapter II, where I described the (real, California) case of the child with five parents. All five were different people--and the intentional parents, John and Luanne, separated a month before the baby was born. The court decided that the intentional parents were the ones that counted--although neither had any biological connection to the child. Luanne ended up with the child, John liable for child support.
The same legal approach could be used to resolve the issues of parenthood raised by cloning. The human clone gets all of his nuclear DNA from one donor. Genetically speaking, one might describe that donor as both parents. If we actually did the usual genetic tests, they would show the clone as a child of the donor's parents. Further complications arise from extranuclear DNA, which comes from the woman who donated the egg used in the procedure--who might or might not be the donor of the nuclear DNA. And, since cells can be obtained from a donor without his consent, we have the possibility that a woman could bear the clone of a rich and prominent man and then sue him for child support on a scale appropriate to his income. There is at least one real world case in which a woman impregnated herself with sperm fraudulently obtained and succeeded in establishing a claim for child support against the father--the nearest equivalent under the old technology.
The definition established by the California court--parenthood determined by intention--provides a single rule to cover nearly all circumstances, whatever the reproductive technology; in order for the child to come into existence, at least one person had to intend it--or at least choose to take actions that could lead to it--and that person counts as a parent. It still leaves some problems.
Conventional reproduction involves one male and one female. Once the biology can be subcontracted, intentional reproduction could involve one male and one female, two males, two females, a partnership of four of each, or a corporation--say Microsoft, looking to breed a successor for Bill Gates. Only the conventional pattern fits legal rules designed for that pattern. Once we accept intentional parenthood, the law must either restrict who can be an intentional parent--require, say, that before any form of assisted reproduction can legally take place, one man and one woman (in the most conservative version) must identify themselves as intentional parents--or specify parental rights and obligations broadly enough so that any of the permitted arrangements can fit them.
When Monsanto creates a transgenic soybean, it gets to patent it; if other people make copies without permission, most readily done by buying some seed from Monsanto and planting it, then using the crop for next year's seed without first paying Monsanto for the right to do so, they are infringing Monsanto's patent. Some of the problems raised by that situation will be discussed in the next chapter.
What if Monsanto applies its new technologies not to soybeans but to people, producing an improved version with some novel genes, transplanted from another species or created in the lab? On the face of it, the same law would appear to apply. You now have human beings who partly belong to someone else--one or more of the genes in every cell are Monsanto's intellectual property.
Living creatures, including humans, are continually creating new cells to replace old ones, so it looks as though the patented baby infringes the patent with every breath he draws and will continue to do so throughout his life. The obvious solution is for the original contract by which the baby is produced to include a lifetime license to use the patent within that child's body--just as the contract by which a farmer buys genetically engineered soybeans includes the right to have the resulting plants reproduce their patented genes in the process of growing.
That still leaves us with the issue of reproducing the patented genes outside of the child's body. If I were that child, do I now require Monsanto's permission to reproduce? To engage in any activity that might lead to reproduction? Perhaps the original license ought to include some additional terms, with prices set in advance.
Considering the attitude our legal system takes to private ownership of human beings by anyone other than themselves, I doubt that Monsanto could get very far enforcing its intellectual property rights in this novel context--but one never knows. The issue is unlikely to come up. Human beings mature slowly. Under current law, by the time I become seriously interested in infringing the patent on me it will probably have expired.
Soybeans, on the other hand, do not have individual rights to self ownership--and their generations are only a year long. For soybeans, the issue of intellectual property rights in things that live and reproduce is a real one, currently faced by courts in a variety of countries.
In our society people are not supposed to become sexually active until they become adults. In practice, it doesn't work that way, leading to problems with which anyone who reads newspapers, watches television, or worries about his own children, is familiar. The essential problem is that we are physically ready to reproduce before we are emotionally--or, in most cases, economically--ready to reproduce. That has become increasingly true as the age of physical maturity has fallen--by about two years over the past century, probably as a result of improved nutrition.
Suppose a drug company announces a new medication--one that will safely delay puberty for a year, or two years, or three years. There will be a considerable demand for the product. Are parents who artificially delay the physical development of their daughters guilty of child abuse? May schools pressure parents to give the medication to boys about to reach puberty, as many now do for other forms of medication designed to make children behave more nearly as schoolteachers wish them to? If schools do require it, are parents who refuse to artificially delay the development of their sons guilty of child abuse--and subject to the same pressures as parents who today refuse to put their sons on Ritalin?
Modern technology, by increasing both what we know and what we can do, complicates our systems for classifying people. We are used to taking it for granted that every human being is either male or female. For a long time that was quite a good approximation. But not for much longer.
Some humans are XY but have female bodies; they are genetically male but morphologically female. Some are the reverse--XX with male bodies. And some humans are genetically XYY, with male bodies and a mild tendency towards aggressive personalities, or XXY, or … .
So far we are dealing with genetics and morphology, both of which are fairly unambiguous, even if the combinations are wilder than we thought. The situation gets more confused if you add in psychology. Some people who are genetically and morphologically male claim to be psychologically female, to think of themselves as women. Others have the reverse pattern. There is some evidence that this is more than a delusion--that in a way not yet clearly understood, such people have brains designed for the wrong gender. Modern surgical techniques make it possible to at least partly correct the error--for someone who self identifies as a woman in a man's body to have the body altered to at least a reasonable facsimile of a woman's body, although an infertile one. Similarly the other way around.
All of this raises interesting problems for both individuals and the law. Am I to think of Deirdre, a professional colleague who used to be Donald, as a slightly odd looking woman, a man surgically altered to look like a woman, or something neither man nor woman? If I had discussed these issues with Donald back when he/she/it possessed, in his/her/its view, the body of a man but the mind of a woman--as it happens I didn't--how should I have thought of the person I was discussing them with? If Deirdre marries a wealthy man and he later dies intestate, can his heirs successfully challenge her claim to part of his estate on the grounds that she was a he, hence could not contract a legal marriage with a man? That particular case--with a different transsexual--was recently litigated in Kansas. The wife lost.
The previous chapter dealt with a narrow slice of biotechnology--human reproduction. In this chapter we first consider the issues raised by a different application of technology to human beings--genetic testing--and then go on to consider issues involving other living creatures.
Of the ills that human flesh is heir to, some are entirely due to having the wrong genes--sickle cell anemia, for example. Many others, such as heart disease and alcoholism, appear to have a substantial genetic component. As knowledge and technology improve, we will increasingly be able to identify individuals who do or do not have the genes that make them more likely to die young of a heart attack, become alcoholics, or suffer other undesirable consequences.
Someone who knows he is genetically predisposed to heart disease has good reasons to take greater precautions against it--exercise, diet, testing and the like. That my grandfather died of a heart attack and my father has twice had bypass surgery are good reasons for me to take cholesterol lowering medication and try to maintain regular exercise. But I would have better grounds for such decisions if I knew whether or not I carried the genes that caused those problems for my father and grandfather. And someone who knew he was, for genetic reasons, particularly vulnerable to alcoholism might choose to avoid the problem by never taking the first drink.
What if I have a genetic problem for which there is no solution--say a gene that results in abnormally rapid aging? Knowing I have it at least lets me do a better job of planning my life--have children early or not at all, for example. But knowledge is not inevitably desirable. If I carry a death sentence in my genes, I might prefer not to know about it. Sometimes ignorance is bliss--at least for a while.
Consider, for example, the situation faced by an insurance company in a future where reliable genetic testing is readily available. Start with the simplest case--insuring against a disease that is entirely genetic. Once the testing is available, the risk of the disease becomes uninsurable. Only people who know they have the relevant genes will buy the insurance--and the sellers, knowing that, will price it accordingly.
What about the more realistic situation where a problem is in part genetic? The expected cost of insuring me against that problem then depends on what genes I have. If insurance companies are permitted to insist on testing clients before selling them insurance, both those with and without bad genes will be able to buy insurance--but at different prices. The part of the risk due to having bad genes becomes uninsurable, and insurance is only available for the residual risk--the uncertainty of the disease, given that we already know whether or not you have the genetic propensity. In the more realistic case where what you are insuring against is not a particular risk but the combined effect of lots of risks--the case of life or health insurance--the result is the same. Your life expectancy depends in part on your genetic makeup and in part on other things. Uncertainty due to the former becomes uninsurable, uncertainty due to the latter does not.
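The arithmetic behind that split can be sketched with made-up numbers. In the sketch below, the disease probabilities, the gene frequency, and the payout are all hypothetical, chosen only to show how testing moves the genetic component of the risk out of the insurable pool:

```python
# Hypothetical figures: a disease whose treatment costs $100,000.
PAYOUT = 100_000
P_DISEASE_CARRIER = 0.40     # risk for someone with the bad gene
P_DISEASE_NONCARRIER = 0.10  # residual risk for someone without it
GENE_FREQUENCY = 0.20        # fraction of the population carrying the gene

# Before testing exists, the insurer can only charge everyone
# the population-average expected cost.
p_average = (GENE_FREQUENCY * P_DISEASE_CARRIER
             + (1 - GENE_FREQUENCY) * P_DISEASE_NONCARRIER)
premium_untested = p_average * PAYOUT

# Once insurers can test, each group pays its own expected cost;
# the gap between the two prices is the now-uninsurable genetic risk.
premium_carrier = P_DISEASE_CARRIER * PAYOUT
premium_noncarrier = P_DISEASE_NONCARRIER * PAYOUT

print(premium_untested)    # pooled rate, around $16,000
print(premium_carrier)     # carriers bear the genetic component
print(premium_noncarrier)  # non-carriers insure only the residual risk
```

With these invented numbers, testing splits a single $16,000 pooled premium into $40,000 and $10,000 prices; nobody can any longer buy insurance against the 30-point difference, which is exactly the part of the risk that testing resolves.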
The solution some have recommended is to make it illegal for insurance companies to require testing. Under such a system, individuals still can and will get themselves tested. Once I know that I am, genetically speaking, extraordinarily healthy, I also know that both life insurance and health insurance, priced on the assumption that I am average, are bad gambles. If, on the other hand, I know that I am likely to drop dead at age forty, then lots of life insurance--provided I expect to have survivors I care about--is obviously a good deal.
This effect is known in the insurance literature as adverse selection. It occurs when one party has information about the quality of what is being sold that the other party does not have and cannot get; a standard example is the used car market. The ignorant buyers pay the same price for good used cars (creampuffs) and bad used cars (lemons)--making the sale of your car a good deal if you have a lemon and a poor deal if you have a creampuff. The result is that lemons sell and creampuffs, for the most part, do not. Buyers, anticipating that, make their offer on the assumption that if it is accepted the car is probably a lemon--and at lemon prices, few creampuffs are offered for sale.
The logic is the same here. Imagine that insurance companies start out by charging a rate that just covers their costs for an average customer. At that rate, insurance is a much better deal for customers with genes that make them likely to collect than for customers with genes that make them unlikely to collect, so purchasers of insurance include a higher than average proportion of bad risks. Insurance companies discover that and raise their rates--driving out still more of the good risks. In the extreme case where all good risks are driven out, the result is even worse than it would be with testing by the insurance companies. Nobody can insure against genetic risk, because the decision to buy insurance tells the seller that the buyer knows he has bad genes. Those who have bad genes can still insure against risk from other causes; those who have good genes cannot.
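The unraveling just described is mechanical enough to simulate. The sketch below uses invented risk levels and assumes risk-neutral customers: each round the insurer reprices at the average expected cost of its remaining customers, and each customer stays in only while the expected payout still covers the premium.

```python
# Each customer knows his own risk; the insurer knows only who buys.
PAYOUT = 100_000
risks = [0.01, 0.05, 0.10, 0.20, 0.40]  # hypothetical chances of collecting

buyers = risks[:]  # start with everyone in the pool
while True:
    # Premium set to break even on the current pool of buyers.
    premium = sum(r * PAYOUT for r in buyers) / len(buyers)
    # Only customers whose expected payout covers the premium stay.
    stayers = [r for r in buyers if r * PAYOUT >= premium]
    if stayers == buyers:
        break
    buyers = stayers

print(buyers)   # only the highest risk remains in the pool
print(premium)  # priced at that customer's full expected cost
```

Starting from a break-even premium for the whole population, the best risks drop out, the premium rises to cover the worse pool that remains, more drop out, and the loop ends with only the worst risk insured--at a price equal to his own expected loss, which is to say with no genuine insurance being sold at all.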
One solution would be to somehow make it possible to prove that you have never been tested. Under such a system, people could get insured before being tested--at average rates--then arrange to be tested and modify their life plans accordingly. The practical problem is that such a system provides a large incentive to cheat--to get tested on the black market, or in a foreign country, so as not to leave any record, and decide what insurance to buy after seeing the results.
A slightly more realistic solution would be for parents to buy insurance for their children before the children are conceived. The price might still depend on the parents' genes, but it cannot depend on the children's, since that is information that nobody has. Not, at least, until we have worked our way through the developments described in the previous chapter, at which point there should no longer be any bad genes left to worry about.
Agricultural biotechnology is one of the oldest forms of high tech, going back at least eight thousand years. That, by current estimates, is when the breeding program began that eventually produced maize--the cereal Americans call "corn"--possibly from Teosinte, a plant most of us would describe as a weed. Similar programs of selective breeding are responsible for creating all of our major food plants.
Not only is the creation of genetically superior strains by random mutation and selective breeding an ancient technology, so is cloning. It has been known for a very long time that fruit trees do not breed true to seed. To prove it for yourself, remove the seeds from a golden delicious apple, plant them, wait ten or twenty years, and see what you get. The odds are overwhelmingly high that it will not be a golden delicious, and moderately good that it will not be anything you would want to eat.
The solution is grafting. Once your little apple tree has its roots well grown, replace the top section of trunk with a piece of a branch cut from a golden delicious tree. If you do it right, the new wood grows onto the old--and everything above the graft will be golden delicious, genetically speaking, including the apples. You have just produced a clone--an organism (or at least most of an organism) that is genetically identical to another. Like Dolly, the cloned sheep, your cloned tree was created using cells from an adult.
To be even fancier, let your tree grow until it has a few little branches and then replace the end of one branch with a piece of wood from a golden delicious, a second with a piece from a Swaar (ugly but delicious), and a third with a bit from a lady apple (tiny, pretty, tasty). You now have that staple of plant catalogs, a three on one apple. You have also just employed, in your own back yard, a form of biotechnology that has been known at least since Roman times and is in large part responsible for the quality of fruit, grapes, and wine over the last few thousand years.
Modern agricultural biotech adds at least two new elements to the ancient technologies of selective breeding and grafting. One gives us the ability to do what we have been doing better. The other gives us the ability to do something almost entirely new.
The traditional way of breeding a better apple is to create a very large number of seeds, plant them all, let them all grow up, and see how they come out. If, by great good luck, one turns out to be a superior variety, it can be propagated thereafter by grafting. With enough expert knowledge, the plant breeder can improve the odds a little by picking the right parents--choosing a pair of trees that he has some reason to hope might produce superior progeny, pollinating one with pollen from the other, and using the resulting seeds. But it is still very much a gamble.
As our knowledge of genetics and our ability to manipulate genes improve, we may be able to do better than that. If we discover that particular sequences of genes are related to particular desirable traits, we can mix and match to produce trees--or grape vines, or tomato plants--with the traits we want. We will be doing the same thing we could have done with the old technologies, but in a lot less than eight thousand years.
An odder and more interesting possibility is to add to one species genes from another, producing transgenic plants. A famous--and commercially important--example uses Bacillus thuringiensis, or Bt, a bacterium which produces proteins poisonous to some insects but not to humans or other animals. Varieties of plants have been produced by adding to them the genes from the Bt bacterium responsible for producing those proteins. Such plants produce, in effect, their own insecticide. Other transgenic plants are designed to be resistant to widely used herbicides, thus permitting a farmer to kill weeds without harming his crop.
The technology can also be used to alter the final crop--to produce peanuts or tomatoes with longer shelf life, or sunflower oil low in trans-fatty acids. It is also possible to insert genes into a plant (or animal) that result in its producing something unrelated to its normal crop. Examples include bacteria modified to produce insulin, a cow whose milk contains human milk proteins and a sheep whose milk contains a clotting factor missing from the blood of hemophiliacs.
All of these seem unambiguously desirable uses of the technology. Insect resistant plants permit us to grow crops at lower cost and with much less use of insecticides. Other applications of the technology increase crop yields, reduce costs, improve quality, and provide low cost ways of producing valuable pharmaceuticals, including some that cannot, at least so far, be produced in any other way. Yet the technology has been fiercely attacked, and in some parts of the world, most notably Europe, agricultural applications are severely restricted. Why?
Abu Hurairah (may Allah be pleased with him) reported that the Prophet (peace and blessings of Allah be upon him) said: "Allah, may He be exalted, says: 'Who does more wrong than the one who tries to create something like My creation? Let him create a grain of wheat or a kernel of corn.'" (Reported by al-Bukhari, see Fath al-Baari, 10/385).
One reason is obvious--hostility to anything new, combined with a romanticization of nature. Lots of people like the idea of "natural foods"--although practically nothing we eat is natural in the sense of not having been substantially altered by human activity. And we have the term "chemical" used pejoratively, despite the fact that everything we eat--and everything we are made out of--is a combination of chemicals. This is the attitude that shows up in the description of the products of agricultural biotech as "Frankenfoods." The Muslim tradition quoted above reflects a religious version of this view--that creating living things is God's business, not ours.
This attitude is of considerable importance today; over the next decade or two, it may result in European consumers getting lower quality food at higher prices than they otherwise would. One reason that may happen is that European farmers are subsidized by their governments and protected by trade barriers from foreign competition. The more European consumers can be persuaded that foreign foods are evil and dangerous, the easier it is for European farmers to sell them their products.
But while irrational hostility may be important in the short run, it is likely to be less so in the long. There are large parts of the world where increasing agricultural output means fewer people going hungry, making symbolic issues of natural or unnatural unimportant by comparison. And over time, new things become old things. Contraception was widely viewed as unnatural, wicked, dirty, and sinful fifty or a hundred years ago. In vitro fertilization was met with considerable suspicion. Both are now widely accepted. So it is more interesting, from a point of view that goes beyond the next decade, to ask whether there are any real problems associated with this sort of technology.
Our common food plants were bred from preexisting wild plants. Many of the latter are still around--and to some degree cross fertile with their domesticated descendants. That means that genetic traits introduced into crop plants may find their way, as pollen blown in the wind, to related wild plants. Herbicide resistance is a useful feature in a crop plant. It is a considerable nuisance in a weed.
How serious this sort of problem is depends on whether transgenically improved crop plants are grown near wild relatives, whether the modification is a benefit to weeds, and whether the modification makes the weed more of a problem for humans.
Consider a transgenic tomato designed for better flavor or longer shelf life. Even if there were related wild plants, those characteristics would be of no particular use to them, so wild plants with them would have no advantage over wild plants without them. And wild plants with those characteristics would be no more of a problem for farmers than ones without.
The same does not hold for resistance to herbicides. Suppose weed beets grow in or near the same fields as sugar beets transgenically modified to make them resistant to herbicides used in sugar beet farming. Weed beets that have had the good luck to acquire the genes for resistance will be more successful in that location than ones that have not--and more of a nuisance.
Stepping back a moment, it is worth looking at the general argument for why such problems do not exist and seeing why it is sometimes wrong. That general argument starts with the observation that existing plants, including weeds, have been "designed" by Darwinian evolution for their own reproductive success. Our current biotechnology is a much more primitive design system than evolution--that is why we produce new crops not by designing the whole plant from scratch but by adding minor modifications to the plants provided by nature. Hence one might think that if a genetic characteristic were useful to a weed, the weed would already have it--and if evolution has not succeeded in producing a useful characteristic, humans are unlikely to do better.
There are two things wrong with that argument. The first is that evolution is slow. Weeds are adapted to their environment--but that environment has only recently included farmers spraying herbicides on them. So they are not adapted, or at least not yet very well adapted, to resist those herbicides. If we deliberately create crop plants resistant to specific herbicides and the resistance spreads to related weeds, we provide an evolutionary shortcut, generating resistant weeds a great deal faster than nature would.
The second error in the argument is more complicated. Evolution works not by designing new organisms from scratch but by continuous changes. The more simultaneous changes are required to make a feature work, the less likely it is to appear. Complicated structures--the standard example is the eye--are produced by a series of small changes, each of which results in at least a small gain in reproductive success to the organism. Features that cannot be produced in that way are unlikely to be produced at all.
Genetic engineering also works by small changes--introducing one gene from a bacterium into a variety of corn, for instance. But the available range of small changes--or, if you prefer, the meaning of "small"--is different. There may be some unambiguous improvements--changes in an organism which result in greater reproductive success, hence would have been selected for by evolution--which can be produced by genetic engineering but are unlikely to come about naturally. The introduction of genes that code for a particular protein lethal to particular insect pests--genes borrowed from an entirely unrelated living creature--is an example. This is a subject we will return to in a later chapter, when we consider nanotechnology's still more ambitious attempts to compete with natural design.
The possibility that engineered genes will spread into wild populations and so produce improved weeds is one example of a class of issues raised by genetic technology. Others include the possibility of indirect ecological effects--improved weeds, or crop plants gone wild, that compete with other plants and so alter the whole interrelated system. They also include such unanticipated effects as crop plants designed to be lethal to insect pests turning out to also be lethal to harmless, perhaps beneficial, species of insects. I started with the case of transgenic weeds because I think that is the clearest case of a problem that is likely to happen--although not one likely to have catastrophic consequences. If, after all, weed beets become resistant to the farmers' favorite herbicide, they can always switch to their second favorite, putting them back where they started--with an herbicide to which neither weeds nor crop is especially resistant.
I am more skeptical about the other examples, mostly because I am skeptical about the idea that nature is in a delicate balance likely to produce catastrophe if disturbed. The extinction of old species and the evolution of new is a process that has been going on for a very long time. But while I am skeptical about particular examples, I believe that they illustrate a real potential problem with technological change--probably the most serious problem.
The problem arises when actions taken by one person have substantial dispersed effects on many others far away from him. The reason it is a problem is that we have no adequate set of institutions to deal with such effects. Markets, property rights, and trade provide a very powerful tool for coordinating the activities of a multitude of individual actors. But their functioning requires some way of defining property rights such that most of the effect of my actions is borne by me, my property, and some reasonably small and identifiable set of other people.
If there is no way of defining property rights that meets that requirement, we have a problem. The alternative institutions--courts, tort law, government regulation, intergovernmental negotiations, and the like--that we use to try to deal with that problem work very poorly--and the more dispersed the effects, the worse they work. Hence if technological change results in making actions with such dispersed effects play a much larger role in our lives--if, for example, genetic engineering means that my engineered genes eventually show up in the weeds in your garden a thousand miles away--we have a problem for which no known institutions provide a reasonably good solution. This is an issue I will return to in later chapters.
Back in Chapter IX, we considered the problem of protecting intellectual property in digital form in a world where reproducing it is cheap and easy. The same problem arises with agricultural biotechnology--where the produce comes complete with its own copier. One solution is to try to use intellectual property law to prevent farmers from buying the genetically engineered crop once and producing their own seeds thereafter. That should work better for crops than for computer programs, since infringement, if it occurs, happens on a large scale in large open spaces.
A different solution is technological protection--some way of transferring the object that contains the intellectual property while retaining the ability to prevent the copying of what it contains. In an older version of agricultural biotech--hybrid seed varieties--it happened automatically. A farmer bought hybrid seeds, planted them, harvested the crop. If he then replanted from what he had harvested, the result, thanks to the magic of sexual reproduction, would be a crop with varying characteristics, reflecting the random process determining which genes from each parent ended up in each seed. Such a crop was harder to deal with than the uniform crop from purchased hybrid seeds, so the seed company could sell him more seeds each year.
That does not work with those transgenic species that do not depend on controlled hybridization--that grow sufficiently true to seed for the farmers' purposes. To deal with that problem, researchers developed, and patented, a way of accomplishing the same objective artificially.
The obvious approach is to engineer a plant whose seeds will be sterile. The problem is that you then have no practical way to produce the first generation of seeds which you want to sell to the farmers. In the case of hybrid crops, you fertilize variety A with variety B, produce lots of hybrid seed, and sell it. Producing a transgenic seed is a much more difficult and elaborate process, involving a lot of trial and error on the way to getting a single success. If all you end up with is one seed, you are going to have a hard time paying for your laboratory.
The solution is to genetically engineer a seed which produces a plant whose seeds are fertile, but which can be modified, by the application of suitable chemicals, to produce a plant whose seeds are sterile. You grow enough generations of the plant to produce the amount of seed you want. You then treat that seed and sell it. Farmers grow it, get their crop, but cannot replant--because plants grown from the treated seed are sterile. Not only does the seed company get to retain control over its intellectual property, it also reduces the risk of accidentally producing superweeds, since the pollen blowing from the genetically engineered crop has been genetically engineered to produce sterile seeds.
Opponents of agricultural biotech, in a brilliant propaganda coup, dubbed the patented invention the "terminator gene." They argued that keeping farmers from replanting from their own seed would convert them into serfs under the thumb of the seed companies. It was never made entirely clear whether they thought that all farmers growing hybrid seed were already serfs. Nor was it explained how giving farmers the option of either growing transgenic crops and buying seed each year or growing conventional crops and replanting their own seed made them worse off than if they had only the latter alternative.
More responsible critics pointed out possible undesirable side effects. Consider, for example, an ordinary field of cotton planted next to a field of genetically engineered cotton. Some of the ordinary cotton is pollinated by the engineered cotton, producing sterile seeds. The farmer tries to replant from his ordinary cotton, as he has every right to do--and gets a disappointingly low yield.
For the moment, it looks as though the opponents have won. Whether through bad arguments, good arguments, or clever propaganda--who, after all, wants to defend a terminator gene--they appear to have persuaded the seed companies to abandon this particular approach to protecting their intellectual property. Will it stay abandoned? We will have to wait and see.
We have spent some time now on possible unintended bad consequences of genetic engineering. There are also the intended ones--biological warfare of one sort or another, using tailor made diseases or, more modestly, tailor made weeds.
Here again it is worth taking a step back to think about the implications of evolutionary biology. It is a fundamental mistake to think of deadly diseases as enemies out to destroy us. A plague bacillus not only has nothing against you, it wishes you well--or would if it were capable of wishing. It is a parasite, you are a host, and the longer you live the better for it.
Lethal diseases are a mistake--badly designed parasites. That is why a disease that is really deadly is typically new--either a new mutation, or an old disease infecting a new population that has not yet developed resistance, or a disease that has just jumped from one population to another and not yet adapted to the change. Given time, evolution works not only to make us less vulnerable to a lethal disease but to make the disease less lethal to us.
So when a James Bond villain sets out to create a disease that will kill everyone but himself and his harem, he is not in competition with nature--nature, Darwinian evolution, is not trying to make lethal diseases. That fact makes it more likely that he will succeed--more likely that there are ways of making diseases more deadly than the ones produced by natural evolution. The question then becomes whether the technological progress that makes it easier to design killer diseases--ultimately, perhaps, in your basement--does or does not win out in the race with other technologies that make it easier to cure or prevent such diseases. This is a special case of an issue we will return to in the context of nanotechnology--which offers to provide potential bad guys with an even wider toolkit for mass murder and may or may not provide the rest of us with adequate tools to defend against them.
Over the past five hundred years, the average length of a human life in the developed world has more than doubled but the maximum has remained essentially unchanged. We have eliminated or greatly reduced most of the traditional causes of mortality, including mass killers such as smallpox, measles, influenza, and complications of childbirth. But old age remains incurable, and always lethal.
Why? On the face of it, aging looks like poor design. We have been selected by evolution for reproductive success--and the longer you live without serious aging, the longer you can keep producing babies. Even if you are no longer fertile, staying alive and healthy allows you to help protect and feed your descendants.
The obvious answer is that if nobody got old and died there would be no place for our descendants to live and nothing left for them to eat. But that confuses individual interest with group interest; although group selection may have played some role in evolution, it is pretty generally agreed that the major driving force was individual selection. If I stay alive, all of my resources go to help my descendants; insofar as I am competing for resources, I am competing mostly with other people's descendants. Besides, we evolved in an environment in which we had not yet dealt with other sources of mortality, so even if people did not age they would still die, and on average almost as young. In traditional societies, only a small minority lived long enough for aging to matter.
A second possible answer is that immortality would indeed be useful, but there is no way of producing it. Over time our bodies wear out, random mutation corrupts our genes, until at last the remaining blueprint is too badly flawed to continue to produce cells to replace those that have died.
This answer too cannot be right. A human being is, genetically speaking, massively redundant--every cell in my body contains the same instructions. It is as if I were a library with trillions of copies of the same book. If some of them had misprints or missing pages, I could always reconstruct the text from others. If two volumes disagree, check a third, a fourth, a millionth. Besides, there are organisms that are immortal. Amoebas reproduce by division--where there was one amoeba, there are now two. There is no such thing as a young amoeba.
A variety of more plausible explanations for aging have been proposed. One I find persuasive starts with the observation that, while the cells in my body are massively redundant, the single fertilized cell from which I grew was not. Any error in that cell ended up in every cell of my adult body.
Suppose one of those mutations had the effect of killing the individual carrying it before he got old enough to reproduce. Obviously, that mutation would vanish in the first generation. Suppose instead that it killed its carrier, on average, at age thirty. Now the mutation would to some degree be weeded out by selection--but some of my children, perhaps even some of my grandchildren, could still inherit it.
Consider next a mutation that kills at age sixty--in a world where aging does not yet exist, but death via childbirth, measles, and saber tooth tigers does, with the result that hardly anyone makes it to sixty. Possession of that mutation is only a very slight reproductive disadvantage, so it gets filtered out only very slowly. Following this line of argument, we would expect lethal mutations that acted late in life to accumulate, with new ones appearing as old ones are gradually eliminated. The process reinforces itself. Once mutations that kill you at sixty are common, mutations that kill you at seventy do not matter very much--you can only die once. So one possible explanation of aging is that it is simply the working out of a large collection of accumulated late acting lethal genes.
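For readers who like numbers, the logic can be sketched as a toy calculation. The figures below--a 3 percent annual chance of dying from other causes, reproduction spread evenly over ages fifteen to forty-five--are my own hypothetical assumptions, chosen only to illustrate the argument, not estimates from the text.

```python
# Toy model (my own illustration): how strongly selection acts against a
# mutation that kills its carrier at a given age, in an environment with
# heavy background mortality.

HAZARD = 0.03            # assumed annual chance of dying from measles,
                         # childbirth, saber tooth tigers, etc.
REPRO_AGES = range(15, 45)   # assumed reproductive years

def survival(age):
    """Probability of reaching a given age under constant background mortality."""
    return (1 - HAZARD) ** age

def selection_cost(kill_age):
    """Fraction of expected lifetime reproduction lost if a mutation
    kills its carrier at kill_age."""
    total = sum(survival(a) for a in REPRO_AGES)
    lost = sum(survival(a) for a in REPRO_AGES if a >= kill_age)
    return lost / total

for kill_age in (20, 30, 60):
    print(kill_age, round(selection_cost(kill_age), 3))
```

A mutation that kills at twenty costs its carrier a large fraction of expected reproduction and is weeded out fast; one that kills at sixty costs, in this toy world, nothing at all, so selection never removes it.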
A slightly different version of this explanation starts with the observation that in designing an organism--or anything else--there are tradeoffs. We can give cars better gas mileage by making them lighter--at the cost of making them more vulnerable to damage. We can build cars that are invulnerable to anything short of high explosives--we call them tanks--but their mileage figures are not impressive.
Similar tradeoffs presumably exist in our design. Suppose there is some design feature, encoded in genes, which can provide benefits in survival probability or fertility early in life at the cost of causing increased breakdown after age sixty. Unless the benefits are tiny relative to the costs, the net effect will be increased reproductive success, since most people in the environment we evolved in didn't make it to sixty anyway. So such a feature will be selected for by evolution. Putting the argument more generally, the evolutionary advantages to extending the maximum lifespan were small in the environment we evolved in, since in that environment very few people lived long enough to die of old age. So it is not surprising if the short term costs outweighed the long term benefits. My genes made the correct calculations in designing me for reproductive success in the environment of fifty thousand years ago--but I, living now and with objectives that go beyond reproductive success, would prefer they hadn't.
One reason to figure out why we age is that we need to know in order to do anything about it--a subject with which I become increasingly concerned as the years pass. If there is some single flaw in our design--if aging is due to shrinking telomeres or a shortage of vitamin Z--then once we discover the flaw we may be able to fix it. If aging is the combined effect of a thousand flaws, the problem will be harder. But even in that case, there might be solutions--either the slow solution of identifying and fixing all thousand, or a fast solution, such as a microscopic cell repair machine that can go through our bodies fixing whatever damage all thousand causes have produced.
My own guess is that the problem of aging will be solved, although not necessarily in time to do me any good. That guess is based on two observations. The first is that our knowledge of biology has increased at an enormous rate over the past century or so and continues to do so. So if the problem is not for some reason inherently insoluble--I cannot think of any plausible reasons why it should be--it seems likely that scientific progress during the next century will make a solution possible. The second is that solving the problem is of enormous importance to old people, many of whom would prefer to be young, and old people control very large resources, both economic and political.
One implication is that the payoff to policies that slow aging a little may be large, since they might result in my surviving long enough to benefit from more substantial breakthroughs. There are currently a variety of things one can do which there is some reason to believe will slow aging. It is only "some reason" because the combined effect of the long human lifespan and the difficulty of getting permission to do experiments on human beings means that our information on the subject is very imperfect. Most of the relevant information consists of the observation that doing particular things to particular strains of mice or fruit flies--experimental subjects with short generations and no legal rights--results in substantial increases in their lifespan.
Thus, for example, it turns out that transgenic fruit flies, provided with a particular human gene, have a life expectancy up to 40 percent longer than those without the extra gene. Modifying the diet of some strains of mice--by, for example, providing them a high level of anti-oxidant vitamins--can have similar effects. When I was investigating the arguments for and against consuming lots of anti-oxidants, one persuasive piece of evidence came from an article in Consumer Reports. It quoted a researcher in the field as saying that of course it was too early to recommend that people take anti-oxidant supplements--"but all of us do." As an economist, I believe that what people do is frequently better evidence than what they say.
One of the most effective ways of extending the lifespan of mice turns out to be caloric deprivation--feeding them a diet at the low end of the number of calories needed to stay alive but otherwise adequate in nutrients. The result is to produce mice with very long life expectancies. Whether it will work on humans is not yet known--or, a question of more immediate interest to some of us, whether it would work on humans who started it only late in life. A parent who chose to almost starve his children would risk being found guilty of child abuse--but could argue, on the basis of existing evidence, that he was actually the only parent who wasn't.
Suppose, then, that the problem of aging is solved. On the individual level the consequences are large and positive--one of the worst features of human life has just vanished. People who prefer mortality can still die. Those of us with unfinished business can get on with it.
But while I am unambiguously in favor of stopping my aging, it does not follow that I must be in favor of stopping yours. One reason not to be is concern with population growth. As it happens, I do not share that concern, having concluded long ago that, at anything close to current population levels, mere number of people is not a serious problem. That conclusion was reinforced over the years as leading preachers of population doom proceeded to rack up a string of failed prophecies unmatched outside of the nuttier religious sects. Readers who disagree, as many do, may want to look at the works of the late Julian Simon, possibly the ablest and certainly the most energetic critic of the thesis that increasing population leads to catastrophe. I prefer to pass on to what I regard as more interesting issues.
One is the problem of gerontocracy--rule by the old. Under our political system, incumbents have an enormous advantage--at the congressional level they almost always win reelection. If aging stops and nothing else changes, our representatives will grow steadily older. An incumbent who is guaranteed reelection is free to do what he wants within a fairly large, although not unlimited, range. So one result would be to make democratic control over democratic governments even weaker than it now is. Another might be to create societies dominated by the attitudes of the old--bossy, cautious, conservative.
The effect on undemocratic systems might be still worse. In a world without aging it seems likely that Salazar would still rule Portugal and Franco Spain. Perhaps more seriously, it would have been Stalin, equipped with an arsenal of thermonuclear missiles, who presided over--and did his best to prevent--the final disintegration of the Soviet Union. With the aging problem solved, dictatorship could become a permanent condition--provided dictators took sufficient precautions against other sources of mortality.
The problem is not limited to the world of politics. It has been argued that scientific progress typically consists of young scientists adopting new ideas and old scientists dying. It is frightening to imagine the universities our system of academic tenure might produce without either compulsory retirement--now illegal in the U.S.--or mortality.
Implicit in many of these worries is a buried assumption--that we are curing the physical effects of aging but not all of the mental effects. Whether that assumption is reasonable depends on why it is that old people think differently than young people.
One answer, popular with the old, is that it is because they know more. If so, perhaps gerontocracy is not such a bad thing. Another is that the brain has limited capacity. Having learned one system of ideas, there may be no place to put another--especially if they are mutually inconsistent. Humans, old and young, demonstrate a strong preference for the beliefs they already have, and old people have more of them.
One way of understanding the effect of aging on thought is as a shift from fluid to crystallized intelligence. Fluid intelligence is what you use to solve a new problem. Crystallized intelligence consists of remembering the solution you found last time and using that. The older you are, the more problems you have already solved and the less the future payoff from finding new and possibly better solutions. The point was brought home to me in a striking fashion some years ago when I observed a highly intelligent man in his eighties ignoring evidence of what turned out to be an approaching forest fire--smells of smoke, reports from others who had seen it--until he saw the flames with his own eyes.
It is possible, of course, that if we ended aging--better yet, made it possible to reverse its effects--the result would be old people with the minds of the young. It is also possible that we would discover that the mental characteristics of the old, short of actual senility, were a consequence not of biological breakdown but of computer overload--the response of a limited mind to too much accumulation of experience.
When contemplating an extra few centuries, one obvious question is what to do with them. Having raised one family, grown old, and then had my youth restored, would I decide to see if I could do even better at a second try--or conclude that that was something I had already done? Weak evidence for the former alternative is provided by the not uncommon pattern of grandparents raising their grandchildren when the children's parents prove unable or unwilling to do the job.
The same question arises in other contexts. Having had one career as an economist, would I continue along the lines of my past work or decide that this time around I wanted to be a novelist, an entrepreneur, an arctic explorer? It is a familiar observation that, in many fields, scholars do their best and most original work young. My father once suggested the project of funding successful scholars past their prime to retrain in some entirely unrelated field, in order to see if the result was a second burst of creativity. In a world without aging, that pattern might become a great deal more common. And a novelist or entrepreneur who had first been an academic economist or a Marine officer might bring some interesting background to his new profession.
An alternative is leisure. We cannot all retire, since there has to be someone left to mow the lawn, grow the food, and do the rest of the world's work. But it might be possible for most of us to retire, for all of us to mostly retire. Capital as well as labor is productive--more and better machinery, other forms of improved production, permit one person to do the work of ten or a hundred. Consider the striking fall in the fraction of the U.S. work force engaged in producing food--from almost everybody to almost nobody in the space of a century.
How productive capital is at present is shown by the interest rate--the price people are willing to pay for the use of capital. The real interest rate--the rate after allowing for inflation--has typically been about two percent. At that rate, you could spend the first fifty years of adulthood earning (say) eighty thousand dollars a year, spending forty thousand, saving the rest, and then spend the rest of a very long life living on forty thousand dollars a year of interest. Alternatively, you could live on sixty thousand of your eighty thousand during your working life, then retire to a low budget future--twenty thousand a year for food, housing, and a good internet connection. As a final, and perhaps still more attractive, alternative you could continue working half or a third time, picking those activities that you liked to do and other people were willing to pay for. Good work if you can get it. One can easily enough imagine a future along these lines where a large fraction of the population, even a large majority, was at least semi-retired.
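The arithmetic above quietly ignores compound interest. A quick calculation, using the text's figures of a 2 percent real rate and $40,000 a year saved for fifty years, shows that compounding makes the plan easier, not harder, than described.

```python
# Check the retirement arithmetic in the text: save $40,000 a year for
# fifty years at a 2 percent real interest rate, then live on the interest.

RATE = 0.02

def nest_egg(annual_saving, years, rate=RATE):
    """Value of the savings after `years`, interest compounding annually,
    with each year's saving deposited at year end."""
    total = 0.0
    for _ in range(years):
        total = total * (1 + rate) + annual_saving
    return total

simple = 40_000 * 50            # the text's implicit no-compounding figure
compounded = nest_egg(40_000, 50)

print(round(simple))            # 2000000 -> $40,000 a year of interest
print(round(compounded))        # roughly $3.4 million
print(round(RATE * compounded)) # roughly $68,000 a year of interest
```

With compounding, the nest egg comes to roughly $3.4 million rather than $2 million, yielding about $68,000 a year--so the text's figures are, if anything, conservative.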
While thinking about how to spend your second century, you might want to consider the social consequences of eliminating the markers of age. In a world where aging is entirely under our control, a young woman of twenty might be dating a young man a hundred years older than she is--and he may or may not tell her. The same thing already happens online, where a flirtatious twelve year old girl may be almost anything, including a forty year old male FBI agent. If you--a grandfather with a retirement pension and a century behind you--could go back to college as a freshman, would you? Part time? Lots of cute girls. The women of your own generation are just as cute, thanks to the same advanced biotech that makes you eighteen again, but the real thing has its charms. Perhaps.
Immortality also raises issues for our legal system. Consider a criminal serving a life sentence. Do we interpret that as "what a life sentence used to be"--say to age 100? Or do we take it literally?
To answer that question, we start by asking why we would lock someone up for life in the first place. There are at least two plausible answers, associated with two different theories of criminal punishment. One is that we lock a murderer up for the same reason we lock a tiger up--he is dangerous to others, so we want to keep him where he cannot do much damage. That is the theory of criminal punishment sometimes described as "incapacitation." The other is that we lock a murderer up in order to impose a cost on him--a cost high enough so that other people contemplating murder will choose not to incur it. That is the theory described as "deterrence." In practice, of course, we may operate on both theories at once, believing that some criminals can be deterred, some only incapacitated, and we cannot always be sure which are which.
If our objective is deterrence, centuries of incarceration may be overkill, which is an argument for eventually letting the convict out. If our objective is incapacitation, on the other hand, we may want to keep him in. Under current circumstances, a ninety year old murderer is unlikely to be of much danger to anyone but himself--but if we conquer aging that will no longer be the case.
A third justification offered for imprisonment is rehabilitation--changing criminals so that they no longer want to commit crimes. That is the theory that gives us "reformatories" to reform people and "penitentiaries" to make people repent. It is hard to see why, on that theory, we would have life sentences--but perhaps one could argue that there are some people who take longer to be rehabilitated than they are likely to last. If so one might reinterpret "life" as "to age 100 or until rehabilitated, whichever takes longer."
The appropriate clinical trials would be to:
Select N subjects.
Wait 100 years.
See if the technology of 2100 can indeed revive them.
The reader might notice a problem: what do we tell the terminally ill patient prior to completion of the trials? (Ralph Merkle, from a webbed discussion of cryonics)
The idea of cryonic suspension--keeping people frozen in the hopes of some day thawing them, reviving them, and curing what killed them--has been around for some time. Critics view it as a fraud or a delusion, analogizing the problem of undoing the damage done to cells by ice crystals in the process of freezing to converting hamburger back into a living cow. Supporters point out that as the technology of freezing people improves we are learning how to decrease the damage--among other things by replacing the body's water with the equivalent of antifreeze during the cooling process. And they argue that as the technology needed to revive a frozen body improves--ultimately, perhaps, through the development of nanotechnology capable of doing repairs at the cellular level--it will become easier to undo the damage that we cannot prevent. Finally and most convincingly, they point out that however poor your chances are of being brought back from death if you have your body frozen, they can hardly be worse than the chances if you let it rot instead.
Suppose we accept their arguments--to the extent of regarding revival as at least a possibility. We are then faced with a variety of interesting problems, legal and social. Most come down to a simple question--what is the status of a corpsicle? Is it a corpse, a living person temporarily unable to act, or something else? If I am frozen, is my wife free to remarry? If I am then thawed, which of us is she married to? Do my heirs inherit, and if so can I reclaim my property when I rejoin the living?
Many of these are issues that can be--if suspension becomes common will be--dealt with by private arrangements. If the law regards my wife as a widow, she can still choose to regard herself as a wife; if the law considers me frozen but alive, she can apply for a divorce. I am in no position to contest it. If I am concerned about keeping my wealth to support me in the second half of my life, there are legal institutions--trusteeships and the like--that give dead people some degree of control over their assets.
Such institutions are not perfect--I may be revived in a hundred years to discover that my savings have been stolen by a corrupt trustee, the I.R.S., or inflation--but they may be the best we can do. Their chief limitation is one that applies to almost all solutions--the fact that over a period of a century or more, legal and social institutions might change in ways that defeat even prudent attempts at planning for revival. One alternative is to transfer wealth in ways that do not depend on stable institutions--for example, by burying a collection of valuable objects somewhere and preserving their location only in memory. That tactic faces risks as well--you may be revived, dig up your treasure and discover that gold coins and rare stamps are no longer worth very much. If only you had known, you would have buried ten first editions of this book instead.
Other problems involve adapting existing legal rules to a world where a substantial number of people are neither quite dead nor quite alive. If I commit a crime and then get frozen, does the statute of limitations continue to run, providing me a get out of jail free card if I stay frozen long enough? If I have been sentenced to fifty years in jail and, after ten of them, "die" and am frozen, does my sentence continue to run? What about a life sentence?
A more immediate problem is faced by somebody who wants to get frozen a little before he dies instead of a little after. Whether or not freezing makes it impossible to revive me, dying surely makes it harder. And some illnesses--cancer is an obvious example--might do massive damage well before the point of actual death. Once it looks as though death is certain, there is much to be said for getting frozen first.
Under current circumstances that is not an option, since if you are not dead before you are frozen you will be afterwards. The law against suicide cannot be enforced against the person most directly concerned--at least, not until he is revived, at which point it retroactively stops being suicide--but it can be enforced against the people who help him. In practice, under current law, being frozen before death, even ten minutes before, is not a practical option.
The simplest way of changing that is to interpret freezing not as death but as a risky medical procedure--one whose outcome will not be known for some time. It is both legal and ethical for a surgeon to conduct an operation that might kill me if the odds without the procedure are even worse. The probability of revival does not have to be very high to meet that requirement if the only alternative is dying.
The principles of physics, as far as I can see, do not speak against the possibility of maneuvering things atom by atom. It is not an attempt to violate any laws; it is something, in principle, that can be done; but in practice, it has not been done because we are too big. (Richard Feynman, from a talk delivered in 1959)
We all know that atoms are small. Avogadro's number describes just how small they are. Written out in full it is about 602,200,000,000,000,000,000,000. That is the ratio between grams, the units we use to measure the mass of ordinarily small objects such as pencils, and the units in which we measure the mass of atoms. An atom of hydrogen has an atomic weight of about one, so Avogadro's number is the number of atoms in a gram of hydrogen.
Looking at all those zeros, you can see that even very small objects have a lot of atoms in them. A human hair, for example, contains more than a million billion. The microscopic transistors in a computer chip are small compared to us but large compared to an atom. Everything humans construct, with the exception of some very recent experiments, is built out of enormous conglomerations of atoms.
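As a rough check on the hair figure, here is an order-of-magnitude estimate. The hair's mass and the average atomic weight are my own loose assumptions, good only to within a factor of a few.

```python
# Order-of-magnitude check (my own estimate, with rough assumed numbers) on
# the claim that a human hair contains more than a million billion atoms.

AVOGADRO = 6.022e23      # atoms per mole

# Assumptions: a hair weighing about a tenth of a milligram, made of protein
# whose atoms (mostly hydrogen, carbon, nitrogen, oxygen) average very
# roughly 8 grams per mole.
hair_mass_g = 1e-4
avg_atomic_mass = 8.0

atoms = hair_mass_g / avg_atomic_mass * AVOGADRO
print(f"{atoms:.1e}")    # on the order of 10**18 -- far above 10**15
```

Even if these guesses are off by a factor of a hundred, the total is still comfortably above a million billion.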
We ourselves, on the other hand, like all living things, are engineered at the atomic scale. The cellular machinery that makes us run depends on single molecules--enzymes, proteins, DNA, RNA and the like--each a complicated structure of atoms, every one in the right place. When an atom in a strand of DNA is in the wrong place, the result is a mutation. As we become better and better at manipulating very small objects, it begins to become possible for us to build as we are built--to construct machines at the atomic level, assembling individual atoms into molecules that do things. That is the central idea of nanotechnology.
One attraction of the idea is that it lets you build things that cannot be built with present technologies. Since the bonds between atoms are very strong, it should be possible to build very strong fibers from long strand molecules. It should be possible to use diamond--merely a particular arrangement of carbon atoms--as a structural material. We may even be able to build mechanical computers, inspired by Babbage's failed 19th century design. Mechanical parts move very slowly compared to the movement of electrons in electronic computers. But if the parts are on an atomic scale, they do not have to move very far.
In some cases, small is the objective. A human cell is big enough to have room for the multitude of atomic machines that make us function. With a sufficiently good nanotechnology, it ought to be possible to add one more--a cell repair machine. Think of it as a robot submarine that goes into a cell, fixes whatever is wrong, then exits that cell and moves on to the next. If we can build mechanical nanocomputers, it could be a very smart robot submarine.
The human body contains about sixty trillion cells, so fixing all of them with one cell repair machine would take a while. But there is no reason to limit ourselves to one. Or ten. Or a million. Which brings us to another advantage of nanotechnology.
Carbon atoms are all the same (more precisely, carbon-12 atoms are all the same, but I am going to ignore the complications introduced by isotopes in this discussion). So are nitrogen atoms, hydrogen atoms, iron atoms. Imagine yourself, shrunk impossibly small, building nanomachines. From your point of view, the world is made up of identical parts, like tiny Legos. Pick up four identical hydrogens, attach them to one carbon atom, and you have a molecule of methane. Repeat and you have another, perfectly identical.
We cannot shrink you that small, of course, since you yourself are made up of atoms. So our first project, once we have the basics of the technology worked out, is to build an assembler. An assembler is a nanoscale machine for building other nanoscale machines. Think of it as a tiny robot--where tiny might mean built out of fewer than a billion atoms. It is small enough so that it can manipulate individual atoms, assembling them into a desired shape. This is far from trivial, since atoms are not really legos and cannot be manipulated and snapped together in the same way. But we know that assembling atoms into molecules is possible; we, and other living creatures, do it routinely. And some of the molecules we build inside ourselves are very complicated ones. Organic chemists, with much less detailed control over material than an assembler would have, succeed in deliberately assembling moderately complicated molecules.
Once you have one assembler, you write it a program for building another. Now you have two. Each of them builds another. Four. After ten doublings you have more than a thousand assemblers, after twenty more than a million. Now you write a program for building a cell repair machine and set your assemblers to work. Once you have a billion or so cell repair machines you inject them into your body, sit back and relax. When they are finished you feel like a new man--and are.
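The doubling arithmetic is easy to verify:

```python
# One assembler builds a second; each generation, every assembler builds
# one more, doubling the total.

count = 1
for generation in range(1, 21):
    count *= 2
    if generation in (10, 20):
        print(generation, count)
# 10 doublings give 1024 assemblers; 20 give 1048576 -- over a million.
```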
A friend of mine (Albert R. Hibbs) suggests a very interesting possibility for relatively small machines. He says that, although it is a very wild idea, it would be interesting in surgery if you could swallow the surgeon. (Feynman)
A cell repair machine would be a very complicated piece of nanotechnology indeed, so although we may eventually get such things, it is unlikely to happen very soon. Super strong materials, or medical drugs designed on a computer--one molecule at a time--are likely to be earlier applications of the technology. To keep us going while we wait for the cell repair machine, Ralph Merkle proposed, and Robert Freitas further developed, an ingenious improvement on the red blood cell--a nano-scale compressed air tank. Its advantage becomes clear the day you have a heart attack and your heart stops beating. Instead of dropping dead you pick up the phone, arrange an emergency appointment with your doctor, get in the car and drive there--functioning for several hours on the supply of oxygen already in your bloodstream.
Nanotechnology could be used to construct large objects as well as small ones. It takes a large number of assemblers to do it. But if we start with one assembler, instructions in the form of programs it can read and implement, plenty of atoms of all the necessary sorts and a little time, we can produce a lot of assemblers. With enough assemblers and the software to control them, we can build almost anything. If the idea of a very large object built by molecular machinery strikes you as odd, consider a whale.
It doesn't cost anything for materials, you see. So I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on. (Feynman)
Like most new and unproven technology, nanotech is still controversial, with some authors arguing that the proposal is and always will be impossible for a variety of reasons. The obvious counterexample is life--a functioning nanotechnology based on molecular machines constructed largely of carbon.
A more interesting argument against the technology is that, although nanotech may be possible, anything really good that it can produce will already have been produced by evolution--as living things were. Compressed air blood cells would have been useful to us and other living things quite a long time ago, so if the design works why don't we already have them?
The answer is that although evolution is a powerful design system, it has some important limitations. If a random mutation changes an organism in a way that increases its reproductive success, that mutation will spread through the population; after a while everyone has it, and the next mutation can start from there. So evolution can produce large improvements that occur through a long series of small changes, each itself a small improvement. Evolutionary biologists have actually traced out how complicated organs, such as the eye, are produced through such a long series of small changes.
But if a large improvement cannot be produced that way--if you need the right twenty mutations all happening at once in the same organism, and nineteen are no use--evolution is unlikely to produce it. The result is that evolution has explored only a small part of the design space--the set of possible ways of assembling atoms to do useful things.
Human beings also design things by a series of small steps--the F-111 did not leap full grown from the brains of the Wright Brothers, and the plane they did produce was powered by an internal combustion engine whose basic design had been invented and improved by others. But what seems a small step to a human thinking out ways of arranging atoms to do something is not necessarily small from the standpoint of a process of random mutation. Hence we would expect that human beings, provided with the tools to build molecular machines, would be able to explore different parts of the design space, to build at least some useful machines that evolution failed to build. Very small compressed air tanks, for example.
Readers interested in arguments for and against the workability of nanotechnology can find and explore them online. For the purposes of this chapter I am going to assume that the fundamental idea--constructing things at the atomic scale using atomic scale assemblers--is workable and will, at some point in the next hundred years, happen. That leaves us to consider the world that technology would give us.
To build a nanotech car I need assemblers--produced in unlimited numbers by other assemblers--raw material, and a program, a full description of what atoms go where. The raw material should be no problem. Dirt is largely silicon and oxygen, along with substantial amounts of aluminum and possibly carbon and nitrogen. If I need additional elements that the dirt does not contain, I can always dump in a shovelful of this and that. Add programmed assemblers, stir, and wait for them to find the necessary atoms and arrange them. When they are done I have a ton or two less dirt, a ton or two more car. It sounds like magic--or the process that produces an oak tree.
I have left out one input--energy. An acorn contains design specifications and machinery for building an oak tree, but it needs sunlight to power the process. Similarly, assemblers will need some source of energy. One obvious possibility is chemical energy--disassembling high energy molecules to get both power and atoms. Perhaps we will have to dump a bucket of alcohol or gasoline on our pile of dirt before we start stirring.
Once we have the basic technology, the hard part is the design--there are a lot of atoms in a car. Fortunately we don't have to separately calculate the location of each one--once we have the first wheel designed, the others can be copied, and similarly with many other parts. Once we have worked out the atomic structure for a cubic micron or so of our diamond windshield we can duplicate it over and over for the rest, with a little tweaking of the design when we get to an edge. But even allowing for all plausible redundancy, designing a car--as good a car as the technology permits you to build--is going to be a big project.
I have just described a technology in which most of the cost of producing a product is in creating the initial design. Once the design is complete, it is relatively inexpensive to make the product itself. We already have a technology with those characteristics--software. Producing the first copy of Microsoft Office took an enormous investment of time and effort by a large number of programmers. The second copy required a CD burner and a CD-R disc--cost about a dollar. One implication of nanotechnology is an economy for producing cars very much like the economy that presently produces word processing programs.
A familiar problem in the software economy is piracy. Not only can Microsoft produce additional copies of Office for a dollar apiece, I can do it too. That raises problems for Microsoft, or anyone else who expects to be rewarded for producing software with money paid to buy it. Nanotechnology raises the same problem, although in a somewhat less severe fashion: I cannot simply put my friend's nanotech car or nanotech computer into a disk drive and burn a copy.
I can, however, disassemble it. To do that, I use nanomachines that work like assemblers, but backwards. Instead of starting with a description of where atoms are to go and putting them there, they start with an object--an automobile, say--and remove the atoms, one by one, keeping track of where they all were.
Disassembling an automobile with one disassembler would be a tedious project, but I am not limited to one. Using my army of assemblers I build an army of disassemblers, each provided with some way of getting the information it generates back to me--perhaps a miniature radio transmitter, perhaps some less obvious device. I set them all to work. When they are done the car has been reduced to its constituent elements--and a complete design description. If there were computers big enough to design the car, there are computers big enough to store the design. Now I program my assemblers and go into the car business.
One approach to solving the problem of copying is an old legal technology--copyright. Having created my design for a car, I copyright it. If you go into business selling duplicates, I sue you for copyright violation. This should work at least a little better for cars than it now does for computer programs, both because the first stage of copying--disassembling, equivalent to reading a computer program from a disk--is a lot harder for cars, and because cars are bigger and harder to hide than programs.
The solution may break down if instead of selling the car the pirate sells the design--to individual consumers, each with his own army of assemblers ready to go to work. We are now back in the world of software. Very hard software. The copyright owner has to enforce his rights, copy by copy, against the ultimate consumer, which is a lot harder than enforcing them against someone pirating his property in bulk and selling it.
One possibility is tie-ins with other goods or services that cannot be produced so cheaply--land, say, or backrubs. You download from a (very broad bandwidth) future internet the full specs for building a new sports car, complete with diamond windshield, an engine that burns almost anything and gets a hundred miles a gallon, and a combined radar/optical/pattern recognition system that warns you of any obstacle within a mile and, if the emergency autopilot is engaged, avoids it. You convert the information into programmed tapes--very small programmed tapes--for your assemblers, find a convenient spot in the back yard, and set them to work. By next morning the car is sitting there in all its splendor.
You get in, turn the key, appreciate the purr of the engine, but are less happy with another feature--the melodious voice telling you everything you didn't want to know about the lovely housing development completed last week, designed for people just like you. On further investigation, you discover that turning off the advertising is not an option. Neither is disabling it--the audio system is a molecular network spread through the fabric of the car. If you want the car without the advertising you will have to design it yourself. You cast your mind back to the early years of the internet, thirty or forty years ago--and the solution found by web sites to the problem of paying their bills.
Another possibility is a customized car. What you download--this time after paying for it--is a very special car indeed, one of a kind. Before starting, it checks your fingerprints (read from the steering wheel), retinal patterns (scanner above the windshield) and DNA (you'll never miss a few dead skin cells). If they all match, it runs. The car is safe from thieves, since they cannot start it. You do not even have to carry a key--you are the key. But if you disassemble it and make lots of copies, they will not be very useful to anyone but you. If your neighbor wants a car, he will have to buy one--customized to him.
This again is an old solution, although not much used for consumer software. While we do not have adequate biometric identification just yet, the equivalent for computers is fairly easy--all it requires is a CPU with its own serial number. Given that or some equivalent, some identifier specific to a particular computer, it is possible to produce a program that will only run on one machine. A common version of this approach uses a hardware dongle--a device not easily copied that attaches to the computer and is recognized by the program.
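The machine-locking idea can be sketched in a few lines. Everything here is illustrative: the machine identifier is faked as a constant, and in practice it would be a CPU serial number or dongle response; the hashing scheme is one simple way a vendor might bind a license to it:

```python
import hashlib

# Hypothetical machine identifier -- standing in for a CPU serial
# number, dongle response, or other hard-to-copy hardware token.
MACHINE_ID = "cpu-serial-0042"

def license_for(machine_id: str, secret: str = "vendor-secret") -> str:
    """License token the vendor issues for one specific machine."""
    return hashlib.sha256((machine_id + secret).encode()).hexdigest()

def will_run(installed_license: str) -> bool:
    """The program runs only if its license matches this machine."""
    return installed_license == license_for(MACHINE_ID)

mine = license_for("cpu-serial-0042")    # issued for this machine
pirated = license_for("cpu-serial-9999") # copied from someone else
assert will_run(mine)        # runs here
assert not will_run(pirated) # a copied license is useless elsewhere
```

The same logic applies to the biometric car: the "license" is the owner's fingerprints, retina, and DNA, and the check runs every time the key is turned.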
A third possibility is open source production--a network of individuals cooperating to produce and improve designs, motivated by some combination of status, desire for the final product, and whatever else motivated the creators of Linux, Sendmail, and Apache.
As these examples suggest, a mature nanotechnology raises issues very similar to those raised by software, and those issues can be dealt with in similar ways--imperfectly, but perhaps well enough. It also raises other issues of a different, and more disturbing, sort.
"Plants" with "leaves" no more efficient than today's solar cells could out-compete real plants, crowding the biosphere with inedible foliage. Tough, omnivorous "bacteria" could out-compete real bacteria: they could spread like blowing pollen, replicate swiftly, and reduce the biosphere to dust in a matter of days. Dangerous replicators could easily be too tough, small, and rapidly spreading to stop--at least if we made no preparation. We have trouble enough controlling viruses and fruit flies. (Drexler, Engines of Creation)
Life is, on the whole, a good thing--but we are willing to make an exception for certain forms of life, such as smallpox. Molecular machines are, on the whole, a good thing. But there too, there might be exceptions.
An assembler is a molecular machine capable of building a wide variety of molecular machines, including copies of itself. It should be much easier to build a machine that copies only itself--a replicator. For proof of concept, consider a virus, a bacterium, or a human being--although the last doesn't produce an exact copy.
Now consider a replicator designed to build copies of itself, which build copies, which…. Assume it uses only materials readily available in the natural environment, with sunlight as its power supply. Simple calculations suggest that, in a startlingly short time, it could convert everything from the dirt up into copies of itself, leaving only whatever elements happen to be in excess supply. That is what has come to be referred to, in nanotech circles, as the gray goo scenario.
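The "simple calculations" can be sketched with made-up but plausible numbers. Assume a bacterium-sized replicator and a fifteen-minute doubling time (both assumptions, not measurements), and ask how long it takes exponential doubling to reach something on the order of the Earth's biomass:

```python
import math

# All numbers below are illustrative assumptions, not measurements.
replicator_mass_kg = 1e-15   # roughly one bacterium
biomass_kg = 1e15            # order of magnitude of Earth's biomass
doubling_time_min = 15       # assumed replication time

# Doublings needed to grow from one replicator to the whole biomass.
doublings = math.log2(biomass_kg / replicator_mass_kg)  # about 100
hours = doublings * doubling_time_min / 60
print(f"{doublings:.0f} doublings, about {hours:.0f} hours")
```

A factor of a million million million million million takes only about a hundred doublings--which is why the scenario runs in days rather than centuries.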
If you happen to be the first one to develop a workable nanotechnology, precautions might be in order. One is to avoid, so far as possible, building replicators. Of course, you will want assemblers--and one of the things an assembler can assemble is another assembler. But at least you can make sure nothing else is designed to replicate--and an assembler, being a large and very complicated molecular machine, may pose less of a threat of going wild than simpler machines whose only design goal is reproduction.
One precaution you could apply to assemblers as well as other replicators is to design them to require some input, whether matter or energy, not available in the natural environment. That way they can replicate usefully under your control but pose no hazard if they get out. Another is to give them a limited lifetime--a counter that keeps track of each generation of copying and turns the machine off when it reaches its preset limit. With precautions like these to supplement the obvious precaution of keeping your replicators in sealed environments, it should be possible to make sure that no replicator you have designed to be safe poses any serious threat of turning the world into gray goo.
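The generation-counter precaution is easy to picture as a toy program. The class and limit here are invented for illustration; the point is only that a counter inherited and incremented at each copying makes every lineage die out:

```python
class Replicator:
    """Toy replicator with a built-in generation limit -- a design
    precaution sketched in code, not a real nanotech interface."""
    MAX_GENERATIONS = 3

    def __init__(self, generation: int = 0):
        self.generation = generation

    def replicate(self):
        """Return a copy one generation on, or None at the limit."""
        if self.generation >= self.MAX_GENERATIONS:
            return None  # counter exhausted: the machine shuts down
        return Replicator(self.generation + 1)

# Each copy inherits an incremented counter, so the line terminates.
line = [Replicator()]
while (child := line[-1].replicate()) is not None:
    line.append(child)
print(len(line))  # 4 -- generations 0 through 3, then extinction
```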
Nanotech replicators, like natural biological replicators, can mutate. A cosmic ray might knock an atom off the instruction tape that controls copying, producing defective copies--and one defect might turn off the limit on number of generations. It might even, although much less probably, somehow eliminate the need for the one element not available in a natural environment. Freed of such constraints, wild nanotech replicators could gradually evolve, just as biological replicators do. Like biological replicators, their evolution would be towards increased reproductive success--getting better and better at converting everything else in existence into copies of themselves. And it is at least possible that, by exploiting design possibilities visible to their human designer and designed into their ancestors but inaccessible to the continuous processes of evolution, they would do a better job of it than natural replicators.
It should be possible to design replicators, if one is sufficiently clever, that cannot mutate. One way is through redundancy. You might, for example, give the replicator three copies of its instruction tape and design it to execute an instruction only if all three agree; the odds that three cosmic rays will each remove the same atom from each tape are low. Similarly, one might want to make sure that elements not available in the natural environment play a sufficiently central role in the working of the replicator that there is no plausible way of mutating around the constraint. After designing your replicator, and before building it, you might want to run it in simulation--use a computer to run through many generations with a very large number of possible changes to see if any of them could let it break free of your designed-in controls.
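The three-tape scheme is the same idea hardware engineers use when they run redundant copies of a system and compare the results. A minimal sketch, with instruction names invented for illustration:

```python
def execute_if_agreed(tapes):
    """Run an instruction only if all three tape copies agree;
    on any disagreement the machine halts instead of guessing."""
    a, b, c = tapes
    return a if a == b == c else None

tape_copies = ["ADD_CARBON", "ADD_CARBON", "ADD_CARBON"]
assert execute_if_agreed(tape_copies) == "ADD_CARBON"

# A cosmic-ray hit corrupts one copy: the machine halts rather
# than executing a mutated instruction.
tape_copies[1] = "ADD_C?RBON"
assert execute_if_agreed(tape_copies) is None
```

Halting on any disagreement is the fail-safe variant; a replicator that stops is merely wasted, while one that executes a corrupted instruction might found a mutant lineage.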
I have described a collection of precautions that could work--in a world in which only one organization has access to the tools of nanotechnology and that organization acts in a prudent and benevolent fashion. Is that likely?
On the face of it such a monopoly seems extraordinarily unlikely in anything much like our world. But perhaps not. Suppose the idea of nanotechnology is well understood and accepted by a number of organizations, probably governments, with substantial resources--at a point well before anyone has succeeded in building an assembler. Each of those organizations engages in extensive computerized design work, figuring out exactly how to build a variety of useful molecular machines once it has the assemblers to build them with. Those machines include designer plagues, engineered obedience drugs, a variety of superweapons, and much else.
One organization makes the breakthrough; it now has an assembler. Very shortly--after about forty doublings--it has a trillion assemblers. It sets them to work building what it has already designed. A week later it rules the world--and one of its first acts is to forbid anyone else from doing research in nanotechnology.
It seems a wildly implausible scenario, but I am not sure it is impossible--I do not entirely trust my intuition of what can or cannot happen, given a technology with such extraordinary possibilities. The result would be a world government with very nearly unlimited power. I can see no reason, in nanotechnology or anything else, to expect it to behave better than past governments with such power. It would, I suppose, be an improvement on gray goo, but not much of an improvement.
Suppose we avoid world dictatorship and end up with a world of multiple governments, some of them reasonably free and democratic, and fairly widespread knowledge of nanotechnology. What are the consequences?
One possibility is that everyone treats nanotech as a government monopoly, with the products but not the technology made available to the general public. Eric Drexler describes in some detail a version of this in which everybody is free to experiment with the technology but only in a (nanotechnologically constructed) sealed and inaccessible environment, with the actual implementation of the resulting designs under strict controls. Presumably, once the basic information on how to do nanotech is out, the enforcement of such regulations will depend on the government's lead in the nanotech arms race--providing it with devices for surveillance and control that will make the video mosquitoes of an earlier chapter seem primitive. Again not a very attractive picture, but an improvement on all of us turning into gray goo.
The problem with this solution is that it looks very much like a case of setting the fox to guard the hen house. Private individuals may occasionally do research on how to kill large numbers of people and destroy large amounts of stuff, but the overwhelming bulk of such work is done by governments for military purposes. The very organizations that, in this version, have control over the development and use of nanotech are the ones most likely to spend substantial resources finding ways of using the technology to do what most of the rest of us regard as bad things.
In the extreme case, we might get gray goo deliberately designed as a doomsday machine--by a government that wants the ability to threaten everyone else with universal suicide. In a less extreme case, we could expect to see a lot of research on designing molecular machines to kill large numbers of (selected) people or destroy large amounts of (other nations') property. Governments doing military research, while they prefer to avoid killing their own citizens in the process, are willing to take risks--as suggested by incidents such as the accident in a Soviet germ warfare facility that killed several hundred people in a nearby city. And they work in an atmosphere of secrecy that may make it hard for other people to notice and point out risks in their work that have not occurred to them. There is a very real possibility that deliberately destructive molecular machines will turn out to be even more destructive than their designers intended--or get released before their designers want them to be.
Consider two possible worlds. In the first, nanotechnology is a difficult and expensive business, requiring billions of dollars of equipment and skilled labor to create workable designs for molecular machines that do useful things. In that world, gray goo is unlikely to be produced deliberately by anybody but a government--and any organization big enough to produce it by accident is probably well enough organized to take precautions. In that world defenses against gray goo--more generally, molecular machines designed to protect human beings and their property from a wide variety of risks, including destructive molecular machines, tailored plagues, and more mundane hazards--will be big sellers, with very large resources devoted to designing them commercially. In that world, making nanotech a government monopoly will do little to reduce the downside risk, since governments will be the main source of that risk, but might substantially reduce the chance of protecting ourselves against it.
In the second world--perhaps the first world a few decades later--nanotech is cheap. Not only can the U.S. Department of Defense design gray goo if it wants to, you can design it too--on your desktop. In this world, nothing much short of a small number of dictatorships maintained in power--over rivals and subjects--by a lead in the nanotech arms race is going to keep the technology out of the hands of anyone who wants it. And it is far from clear that even that would suffice.
In this second world, the nanotech equivalent of designer plagues will exist for much the same reasons that computer viruses now exist. Some will come into existence the way the original Internet worm did, the work of someone very clever, with no bad intent, who makes one mistake too many. Some will be designed to do mischief and turn out to do more mischief than intended. And a few will be deliberately created as instruments of apocalypse by people who for one reason or another like the idea.
Before you conclude that the end of the world is upon you, consider the other side of the technology. With enough cell repair machines on duty, designer plagues may not be a problem. Human beings want to live and will pay for the privilege. The resources that will go into designing protections against threats, nanotechnological or otherwise, will be enormously greater than the (private) resources that go into creating such threats--as they are at present, with the much more limited tools available to us. Unless it turns out that, with this technology, the offense has an overwhelming advantage over the defense, nanotech defenses should almost entirely neutralize the threat from the basement terrorist or careless experimenter. The only serious threat will be from organizations willing and able to spend billions of dollars creating molecular killers--almost all of them governments.
The previous paragraph contained a crucial caveat--that offense not be a great deal easier than defense. The gray goo story suggests that it might be, that simple molecular machines designed to turn everything in the environment into copies of themselves might have an overwhelming advantage over their more elaborate opponents.
The experiment has been done; the results so far suggest that that is not the case. We live in a world populated by molecular machines. All of them, from viruses up to blue whales, have been designed with the purpose of turning as much of their environment as they can manage into copies of themselves--we call it reproductive success. So far, at least, the simple ones have not turned out to have any overwhelming advantage over the complicated ones: Blue whales, and human beings, are still around.
That does not guarantee safety in a nanotech future. As I pointed out earlier, nanotechnology greatly expands the region of the design space for molecular machines that is accessible--human beings will be able to create things that evolution could not. It is conceivable that, in that expanded space of possible designs, gray goo will turn out to be a winner. All we can say is that so far, in the more restricted space of carbon based life capable of being produced by evolution, it has not turned out that way.
In dealing with nanotechnology, we are faced with a choice between centralized solutions--in the limit, a world government with a nanotech monopoly--and decentralized solutions. As a general rule I much prefer the latter. But a technology that raises the possibility of a talented teenager producing the end of the world in his basement makes the case for centralized regulation look a lot better than it does in most other contexts--good enough to have convinced some thinkers, among them Eric Drexler, to make it at least a partial exception to their usual preference for decentralization, private markets, laissez-faire.
While the case for centralization is in some ways strongest for so powerful a technology, so is the case against. There has been only one occasion in my life when I thought there was a significant chance that many of those near and dear to me might die. It occurred a little while after the 9/11 terrorist attack, when I started looking into the subject of smallpox.
Smallpox had been officially eliminated; so far as was publicly known, the only remaining strains of the virus were held by U.S. and Russian government laboratories. Because it had been eliminated, and because public health is a field dominated by governments, smallpox vaccination had been eliminated too. It had apparently not occurred to anybody in a position to do anything about it that it was worth maintaining sufficient backup capacity to reverse that decision quickly. The U.S. had supplies of vaccine, but they were adequate to vaccinate only a small fraction of the population--so far as I could tell, nobody else had substantial supplies either.
Smallpox, in an unvaccinated population, produces mortality rates as high as thirty percent. Most of the world's population is now unvaccinated; those of us who were vaccinated forty or fifty years ago may or may not still be protected. If a terrorist had gotten a sample of the virus, either stolen from a government lab or cultured from the bodies of smallpox victims buried somewhere in the Arctic at some time in the past--nobody seems to know for sure whether or not that is possible--he could have used it to kill hundreds of millions, perhaps more than a billion, people. That risk existed because the technologies to protect against replicators--that particular class of replicators--had been under centralized control. The center had decided that the problem was solved.
What and where in my body is me is a very old puzzle. An early attempt to answer it by experiment is described in the Jómsvíkinga saga, written in the 13th century. After a battle, captured warriors are being executed. One of them suggests that the occasion provides the perfect opportunity to settle an ongoing argument about the location of consciousness. He will hold a small knife point down while the executioner cuts off his head with a sharp sword; as soon as his head is off, he will try to turn the knife point up. It takes a few seconds for a man to die, so if his consciousness is in his body he will succeed; if it is in his head, no longer attached to his body, he will fail. The experiment goes as proposed; the knife falls point down.
We still do not know with any confidence what consciousness is, but we know more about the subject than the Jomvikings did. It seems clear that it is closely connected to the brain. A programmed computer comes closer to acting like the human mind than anything else whose working we understand. And we know enough about the mechanism of the brain to plausibly interpret it as an organic computer. That suggests an obvious and interesting conjecture--that what I am is a program, software, running on the hardware of my brain. Current estimates suggest that the brain has enormously greater processing power than any existing computer, so it is not surprising that computers can do only a very imperfect job of emulating human thought.
This conjecture raises an obvious, interesting and frightening possibility. Computers have, for the past thirty years or so, been doubling their power every year or two--a pattern known, in several different formulations, as "Moore's Law." If that rate of growth continues, at some point in the not very distant future--Raymond Kurzweil's estimate is about thirty years--we should be able to build computers that are as smart as we are.
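A quick projection under one common reading of Moore's Law (doubling every eighteen months, a number assumed here for illustration):

```python
# Back-of-the-envelope Moore's Law projection with assumed numbers:
# one doubling every 18 months, sustained for 30 years.
doublings = 30 * 12 / 18   # 20 doublings
growth = 2 ** doublings
print(f"{doublings:.0f} doublings -> {growth:,.0f}x the computing power")
# prints: 20 doublings -> 1,048,576x the computing power
```

A millionfold increase in thirty years is why a gap that looks enormous today--the brain's processing power versus a desktop's--could plausibly close within one professional lifetime.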
Building the computer is only part of the problem; we still have to program it. A computer without software is only an expensive paperweight. In order to get human level intelligence in a computer, we have to find some way of producing a software equivalent of us.
The obvious way is to figure out how we think--more generally, how thought works--and write the program accordingly. Early work in A.I. followed that strategy, attempting to write software that could do very simple tasks--recognize objects, for example--of the sort our minds do. It turned out to be a surprisingly difficult problem, giving A.I. a reputation as a field that promised a great deal more than it performed.
It is tempting to argue that the problem is not only difficult but impossible, that a mind of a given level of complexity--exactly how one would define that is not clear--can only understand simpler things than itself, hence cannot understand how it itself works. But even if that is true, it does not follow that we cannot build machines at least as smart as we are--because one does not have to understand things to build them. We ourselves are, for those of us who accept evolution rather than divine creation as the best explanation of our existence, a striking counterexample. Evolution has no mind. Yet it has constructed minds--including ours.
This suggests a strategy for creating smarter software that has come into increasing use in recent years. Set up a virtual analog of evolution, a system where software is subject to some sort of random variation, tested against a criterion of success, and selected according to how well it meets that criterion, with the process being repeated a very large number of times, using the output of one stage as the input for the next. It is through a version of that approach that the software currently used to recognize faces--a computer capability discussed in an earlier chapter--was created. Perhaps, if we had powerful enough computers and some simple way of judging the intelligence of a program, we could apply the same approach to creating programs with human level intelligence.
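The variation-and-selection loop can be shown in miniature. This sketch evolves nothing so grand as intelligence--the criterion of success is simply the count of 1 bits in a string, a standard toy problem--and every parameter (population size, mutation rate, number of generations) is invented:

```python
import random

random.seed(0)  # make the run repeatable
GENOME_LEN = 20

def fitness(genome):
    """Criterion of success: how many bits are 1."""
    return sum(genome)

def mutate(genome, rate=0.05):
    """Random variation: flip each bit with small probability."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

# Start from a random population, then repeat variation + selection,
# feeding the output of each stage in as the input of the next.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]          # selection: keep the better half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]  # variation: refill

best = max(population, key=fitness)
print(fitness(best))  # climbs to (or near) the maximum of 20
```

No line of this program knows what a good genome looks like; the design emerges from blind variation filtered by the success criterion--which is the point of the strategy.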
A second alternative is reverse engineering. We have, after all, an example of human level intelligence readily available. If we could figure out in enough detail how the brain functions--even if we did not fully understand why functioning that way resulted in an intelligent, self-aware entity--we could emulate it in silicon, build a machine analog of a generic human brain. Our brains must be to a significant degree self programming, since the only information they start with is contained in the DNA of a single fertilized cell, so perhaps, with enough trial and error, we could get our emulated brain to wake up and learn to think. Perhaps we should set one team working on the problem of digital coffee.
A third alternative is to reverse engineer not a generic brain but a particular brain. Suppose one could build sufficiently good sensors to construct a precise picture of both the structure and the state of a specific human brain at a particular instant--not only which neuron connects to which, and how, but what state every neuron is in. Suppose you can then precisely emulate that structure in that state in hardware. If all I am is software running on the hardware of my brain, and you can fully emulate that software and its current state on different hardware, you ought to have an artificial intelligence that, at least until it evaluates data coming in after its creation, thinks it is me. This idea, commonly described as "uploading" a human being, raises a very large number of questions, practical, legal, philosophical and moral. They become especially interesting if we assume that our sensors are delicate enough to observe my brain without damaging it--leaving, after the upload, two David Friedmans, one running in carbon and one in silicon.
A future with human level artificial intelligence, however produced, raises problems for existing legal, political and social arrangements. Does a computer have legal rights? Can it vote? Is killing it murder? Are you obliged to keep promises to it? Is it a person?
Suppose we eventually reach what seems the obvious conclusion--that a person is defined by something more fundamental than human DNA, or any DNA at all, and some computers qualify. We now have new problems--because these people are different in some very fundamental ways from all the people we have known so far.
A human being is intricately and inextricably linked to a particular body. A computer program can run on any suitable hardware. Humans can sleep, but if you turn them off completely they die. You can save a computer program's current state to your hard disk, turn off the computer, turn it back on tomorrow, and bring the program back up. When you switched it off, was that murder? Does it depend on whether or not you planned to switch it on again?
Humans say they reproduce themselves, but it isn't true. My wife and I jointly produced children--she did the hard part--but neither of them was a precise copy of either of us. Even with a clone, only the DNA would be identical--the experiences, thoughts, beliefs, memories, personality would be its own.
A computer program, on the other hand, can be copied to multiple machines; you can even run multiple instances of the same program on one machine. When a program that happens to be a person is copied, which copy gets property that person owns? Which is responsible for debts? Which gets punished for crimes committed before the copying--and how?
We have strong legal and moral rules against owning other people's bodies, at least while they are alive and perhaps even afterwards. But an A.I. program runs on hardware somebody built, hardware that could also be used to run other sorts of software. When someone produces the first human level A.I. on cutting edge hardware costing many millions of dollars, does the program get ownership of the computer it is running on? Does it have a legal right to its requirements for life, most obviously power? Do its creators, assuming they still have sufficient physical control over the hardware, get to save it to disk, shut it down, and start working on the Mark II version?
Suppose I make a deal with a human level A.I. I will provide a suitable computer onto which it will transfer a copy of itself. In exchange it agrees that for the next year it will spend half its time--twelve hours a day--working for me for free. Is the copy bound by that agreement? "Yes" means slavery. "No" is a good reason why nobody will provide hardware for the second copy. Not, at least, unless he retains the right to turn it off.
Earlier I quoted Kurzweil's estimate of about thirty years to human level A.I. Suppose he is correct. Further suppose that Moore's law, or something similar, continues to hold--computers continue to get twice as powerful every year or two. In forty years of doubling every two, that makes them something like a million times as smart as we are. We are now chimpanzees--or perhaps gerbils--and had better hope that our new masters like us.
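The compounding arithmetic behind estimates like this is worth making explicit, since sustained doubling grows faster than intuition suggests. Taking the conservative two-year doubling period:

```python
# Doubling every two years, sustained for forty years:
doublings = 40 // 2
growth = 2 ** doublings
# 2**20 is 1,048,576 -- roughly a million-fold.
# The aggressive one-year doubling gives 2**40 -- roughly a trillion-fold.
```

Either way, the result is not machines modestly ahead of us but machines ahead of us by the sort of margin by which we are ahead of the gerbils.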
Kurzweil's solution is for us to become computers too--at least in part. The technological developments leading to advanced A.I. are likely to be associated with much greater understanding of how our own brains work. That ought to make it possible to construct much better brain to machine interfaces--letting us move a substantial part of our thinking to silicon too. Consider 89352 times 40327 and the answer is obviously 3603298104. Multiplying five figure numbers is not all that useful a skill, but if we understand enough about thinking to build computers that think as well as we do--whether by design, evolution, or reverse engineering--we should understand enough to offload more useful parts of our onboard information processing to external hardware. Now we can take advantage of Moore's law too.
The extreme version of this scenario merges into uploading. Over time, more and more of your thinking is done in silicon, less and less in carbon. Eventually your brain, perhaps your body as well, comes to play a minor role in your life--vestigial organs kept around mainly out of sentiment.
Short of becoming partly or entirely computers ourselves, or ending up as (optimistically) the pets of computer superminds, I see three other possibilities. One is that, for some reason, the continual growth of computing power that we have observed in recent decades runs into some natural limit and slows or stops. The result might be a world where we never get human level A.I.--although we might still have much better computers than we now have. Less plausibly, the process might slow down just at the right time, leaving us with peers but not masters--and a very interesting future. The only argument I can see for expecting that outcome is that that is how smart we are--and perhaps there are fundamental limits to thinking ability that our species ran into a few hundred thousand years back. But it doesn't strike me as very convincing.
A second possibility is that perhaps we are not software after all. The analogy is persuasive, but until we have either figured out in some detail how we work or succeeded in producing programmed computers a lot more like us than any so far, it remains a conjecture. Perhaps my consciousness really is an immaterial soul, or at least something more accurately described as an immaterial soul than as a program running on an organic computer. It is not how I would bet, but it could still be true.
Finally, there is the possibility that consciousness, self-awareness, depends on more than mere processing power--that it is an additional feature which must be designed into a program, perhaps with great difficulty. If so, the main line of development in artificial intelligence might produce machines with intelligence but no initiative, natural slaves answering only the questions we put to them, doing the tasks we set, without will or objectives of their own. If someone else, following out a different line, produces a program that is a real person, smarter than we are, with its own goals, we can try to use our robot slaves to deal with the problem for us. Again it does not strike me as likely--the advantages of a machine that can ask questions for itself, formulate goals, and make decisions seem too great. But I might be wrong. Or it might turn out that self-awareness is, for some reason, a much harder problem than intelligence.
Some years ago I gave a public lecture in Italy--over the telephone from my office in San Jose. From my end it was not a very satisfactory experience--too much like talking into a void. A year or two later I repeated it with better technology. This time I was sitting in a video-conferencing room. My audience in the Netherlands could see me and I could see them. Still not quite real, but a good deal closer.
The next time might be closer still. Not only do I save on the air fare, the audience does too. I am at home, so are they. Each of us is wearing earphones and goggles, facing a small video camera. The lenses of the goggles are video screens; what I see is not what is in front of me but what they draw. What they are drawing is a room filled with people. Each is seeing the same room from the other direction--watching me, standing at a virtual podium as I deliver my talk.
Virtual reality not only saves on air fares, it has other advantages as well. The image from my video camera is processed by my computer before being sent on to everyone in my audience. That gives me an opportunity to improve it a little first--replace my bathrobe with a suit and tie, give me a badly needed shave, remove a decade or so of aging. My audience, too, looks surprisingly attractive, tidy, and well dressed. And while, from my point of view, they are evenly distributed about the hall, each of them is watching me from the best seat in the house.
Long ago I was given the secret of public lectures--always speak in a room a little too small for the audience. In virtual reality, it is automatic. However many people show up, that is the number of seats in the lecture hall. And for each of them, the lecture hall is custom designed--gold plated if his taste is sufficiently lavish. In virtual reality, gold is as cheap as anything else. If you do not believe me, take a look at one of the gaudier bits of a good video game--the Durance of Hate in Diablo II, say.
Video games are our most familiar form of virtual reality. Staring through the screen you are looking at a world that exists only in the computer's memory, represented by a pattern of colored dots on the screen. In that world, multiple people can and do interact, each at his own computer. In first person video games, each sees on the screen what he would be seeing if he were the character he is playing in the game. In some, the virtual world comes complete with realistic laws of physics. Myth, so far as I can tell, calculates the flight of every arrow--if a dwarf throws a hand grenade uphill, it rolls back. As the technology gets better, we can expect it to move beyond entertainment. Perhaps I should stay out of airline stocks for a while.
We already know how to do everything I have described. As computers get faster and computer screens--including goggle sized ones--sharper, we will be able to do it better and cheaper. Within a decade, probably less, we should be able to do sight and sound virtual reality inexpensively at real world resolution--with video good enough to make the illusion convincing. The audio already is.
However good our screens, this sort of virtual reality suffers from a serious limitation--it only fools two senses. With a little more work we might add a third, but smell does not play as large a role in our perceptions. Touch and taste and the kinesthetic senses that tell us what our body is doing are a much harder problem. If my computer screen is good enough the villain may look entirely real, but if I try to punch him I will be in for an unpleasant surprise.
Our present technology for creating a virtual reality depends on the brute force approach--using the sensorium, the collection of tools with which we sense the world around us. Want to hear things? Vibrate air in the ear. Want to see things? Beam photons at the retina. Applying that approach to the remaining senses is harder--and still leaves us the problem of coordinating what our body is actually doing in realspace with what we are seeing, hearing, and feeling it do in virtual space.
The solution is the form of virtual reality that many of us experience every night. We call it dreaming. In a dream, when you tell your arm to move, your virtual arm moves. Your body (usually) doesn't. Dreams are not limited to sight and sound.
Suppose we succeed in cracking the dreaming problem--figuring out enough about how the brain works so that we too can create full sense illusions and control them with the illusion of controlling our bodies. We then have deep VR--and a very interesting world.
Communication is another obvious application. You still will not be able to reach out and touch someone, save metaphorically. But seeing and hearing is better than just hearing. A conference call becomes more like a meeting when you can see who is saying what to whom and read the cues embodied in facial expressions and body movements.
That raises an interesting problem. We all, automatically and routinely, judge the people around us not only by what they say but by how they say it--tone of voice, facial expression, gestures. Most people are poor liars--one of the reasons why honesty is the best policy. Having people believe you are honest while taking whatever actions best serve your purposes would be an even better policy--for you, not for those you deal with--but for most of us it is not a practical option.
We call the exceptions con men. They are people who, through talent or training, have mastered the ability to divorce what they are actually thinking and doing from the system of nonverbal signals, the running monolog about what is going on inside our heads, that all of us are continually delivering. Fortunately, not many are really good at it.
My computer can make me look younger. It can also make me look more honest. Once someone has done an adequate job of deciphering the language by which we communicate thoughts and emotions from facial expressions and body postures--for all I know someone already has, probably someone in the business of training salesmen--we can create computerized con men. I have no talent for lying. My computer, on the other hand … .
The flip side of the problem of virtual con men is that on the internet nobody knows you are a dog. Or a woman. Or a twelve year old. Or crippled. In virtual reality, once we have the real time editing software worked out properly, you can be anything you can imagine. Homely women can leave their faces behind, precocious children can be judged by the mental age reflected in what they say and do, not the physical age reflected in their faces.
When you interact on Usenet News or in an email group you are projecting a persona, giving the other members of the group a mental picture of what sort of person you are. Some years ago, someone suggested a game for the newsgroup rec.org.sca: Have participants write and post physical descriptions of other participants they had never met. I gained almost nine inches. In virtual reality I never have to be short again.
In the modern world, we no longer have to worry much about escaping predators or running down prey. We no longer have to scratch in the ground with sharp sticks to grow food. For most of us, "work" involves little physical exertion. But there is still play--basketball, soccer, tennis. One objection to video games is that they remove one of the few incentives modern people have to exercise.
Observe someone--perhaps yourself--playing an absorbing video game. Just as with other games, involvement in winning dominates all other sensations. Long ago I discovered the sign of a first rate game--that when I finally left the computer to use the bathroom, it was because I really had to. And lots of players of lots of games have noticed just how tired their thumbs are--but only after the game is over.
If what you want is exercise, the obvious solution is bigger joysticks. Combine a video game with an exercise machine. Working the exercise machine controls what is happening in the game. Just as with real world athletics, you only notice how tired you are after you have won--or lost. Primitive implementations already exist. In my Mark II version, virtual games become better exercise than real games--because the environment that the computer creates is being tailored, second by second, to your body's needs.
The setting is the Pacific during the second world war. You are controlling an anti-aircraft gun on the Yamato, the world's biggest battleship, desperately trying to defend it against the waves of American bombers that are trying, by sheer brute force, to destroy the glory of the Japanese navy. You traverse the gun left and right with your arms, lower or elevate the barrel with foot controls; when you release the controls it swings back to center. Your strength is physically moving the gun, so it isn't surprising that it's a lot of work.
After the third wave, the computer controlling the game notices that you are having trouble swinging the gun rapidly to the left--your left arm is tiring. The next attack comes from the right. As the right arm becomes equally tired, more and more of the attacks require you to adjust the elevation of the gun, shifting the work to your legs. When your heartbeat reaches the upper boundary of your aerobic target zone, there is a break in the attack, during which you hear martial music. As your heartbeat slows, the next wave comes in. Tennis may be fun, and exercise as well, but art, well done art, improves on nature.
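The director logic in that scenario is simple enough to sketch. Here is a toy version--every name and number below is invented for illustration: the game routes each attack toward whichever muscle group is freshest, and calls a musical rest whenever the heart rate tops the aerobic zone.

```python
# Toy "exercise director": all class names, thresholds, and fatigue
# bookkeeping here are hypothetical, invented for this sketch.
class ExerciseDirector:
    AEROBIC_CEILING = 160  # beats per minute, hypothetical target zone

    def __init__(self):
        self.fatigue = {"left_arm": 0.0, "right_arm": 0.0, "legs": 0.0}

    def next_event(self, heart_rate):
        if heart_rate > self.AEROBIC_CEILING:
            return "rest_with_martial_music"
        # Attack from the direction handled by the freshest muscles.
        freshest = min(self.fatigue, key=self.fatigue.get)
        self.fatigue[freshest] += 1.0          # that group now does the work
        return {"left_arm": "attack_from_left",
                "right_arm": "attack_from_right",
                "legs": "attack_high_or_low"}[freshest]

director = ExerciseDirector()
events = [director.next_event(hr) for hr in (120, 130, 140, 170, 150)]
```

A real implementation would read the heart rate from a sensor and estimate fatigue from measured force output rather than a simple counter, but the control loop--measure the body, steer the game--is the same.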
A sophisticated exercise game is one way in which we can use virtual reality. Another is to do dangerous things while only getting virtually killed. Consider the problem of engineering in dangerous environments--the bottom of the Mariana trench, say, five miles beneath the waves, or the surface of the moon. One solution is for the operator of the equipment to be only virtually there. His body is sitting in a safe environment--wearing goggles, manipulating controls. His point of view, just as in a first person video game, is the point of view of the machine he is operating.
In the lunar case, we have a small technical problem--the speed of light. If the operator is on earth and the machine on the moon there will be a noticeable lag between when the machine sends him information and when his response, based on that information, gets back to the machine. Some of us have been virtually killed by similar lags in video games, due either to transmission delay or processing time. In the case of lunar engineering, while the death would be only virtual for the operator it might be real for the machine--and putting hardware on the moon is not cheap. Perhaps we had better have the operator on the moon too, or in orbit around it--somewhere safer than the tunnel he is digging, closer than the earth he came from.
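The lag is easy to put a number on. Taking an average earth-moon distance of about 384,400 kilometers and a light speed of about 299,792 kilometers per second:

```python
# Round-trip signal delay between an operator on earth and a machine
# on the moon.  Distances are approximate averages; the real figure
# varies a little over the moon's orbit.
EARTH_MOON_KM = 384_400
C_KM_PER_S = 299_792

one_way = EARTH_MOON_KM / C_KM_PER_S      # about 1.3 seconds
round_trip = 2 * one_way                  # about 2.6 seconds
```

Two and a half seconds or so between what the machine sees and what the operator's response can do about it--before counting any processing time at either end.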
As these examples suggest, virtual reality, even implemented using the crude technologies we now have, can have interesting uses in the real world. When we go beyond those technologies, things get stranger.
Consider the advanced version of virtual reality. Anyone who wants it has a socket at the back of his neck. Signals through the wire--perhaps more plausibly the optical cable--plugged into that socket can create a full sense illusion of anything our senses could have experienced.
In thinking about the world that technology makes possible, a useful first step is to distinguish between information transactions and material transactions. Reading this book is an information transaction. The book is a physical object, but reading an illusion of a book--with the same words on the virtual pages--would do just as well. Similarly, when you hold a conversation, you are using physical vocal cords to vibrate physical air in order to transmit your communications and using a physical eardrum to pick up those vibrations in order to receive the other person's communications. But that apparatus is merely a means for transmitting information--what each of you is saying. Electronic signals that created the illusion of your voice saying the same words would achieve the same effect.
For a material transaction, consider growing wheat. You could grow virtual wheat--give yourself the sensory experience of planting, weeding, harvesting. But if you tried to live on the virtual wheat you grew you would eventually die of starvation.
A sufficiently advanced form of virtual reality can provide for all information transactions. It might assist with some material transactions--the harvester could be run by an operator located somewhere else, giving real instructions to a real machine. The operator's physical presence would be an illusion, the information he was using real--provided by cameras and microphones on the harvester. But if you want real transactions to produce real results--food, houses, or whatever--something has to really do them.
In Star Trek, people get beamed from one place to another. I know of no reason to expect it to happen, and if it did I would be reluctant to use it, since it is not clear whether what it is doing is moving me or killing me and creating a copy that thinks it is me. But as long as all we are moving is information, virtual reality can do the job--without philosophical problems.
Why do I want to go visit my friends? To see them, to feel them, to hear them, to do things with them. Unless one of the things is building a house or planting a garden, and it really has to be built or planted, the whole visit is an information transaction. With good VR, my body stays home and my mind does the walking.
If you find this an odd idea, consider a phone call. It too is a substitute for a visit. VR increases the bandwidth to cover all our senses. Travel by virtual reality will not be limited to social calls, any more than the telephone now is. It provides a way for any group of two or more people to get together for any information transaction they wish to engage in--a meeting, a peace conference, a trial, a love affair. And since all that is happening is information moving back and forth over networks--information that can readily be encrypted--we are back in a world of strong privacy. Surveillance technology may make everything in realspace public, but we are no longer doing anything in realspace that matters.
The potential for the entertainment industry is equally striking. Works of fiction can be experienced fully, just as the author intended--no imagination required. Whether that is an improvement is not entirely clear; my daughter has so far refused to see the movie version of The Fellowship of the Ring because she prefers the product of her imagination to the product of the director's imagination. Role playing games will become a great deal more vivid when you actually get to see, hear, feel, smell and touch the monsters. Just how vividly you get to feel the monster's claws tearing you to bits will be one of the options in the preferences panel--I may go for the low end of that one.
One form of virtual entertainment will be a work of fiction, a synthesized experience. Another may be a tape recording. You too can climb Everest, plumb the depths of the sea. If there are real wars going on, a few of the soldiers may moonlight in cinéma vérité, everything that happens to them recorded complete. Pornography will finally become serious competition for sex.
In each of these cases, we are creating as an illusion the sort of experiences that already exist in reality. Consider in contrast a symphony. It corresponds to nothing in nature. The composer has taken one sense, hearing, and used it to create an aesthetic experience out of his own head. It will be interesting to see what an artist can do with all the senses.
When you are experiencing VR fiction, one question is how real you want it to seem. While the story is happening, do you know it is a story? Is there a little red light glowing at the edge of your peripheral vision to tell you that none of it is real? Perhaps the experience would be more moving, more profound, better art, if you thought it was real. Just like a dream.
Stuff must be produced for real, but human beings do not need much stuff to stay alive. To check that for food, price the cheapest bulk flour, oil, lentils you can find. For each, calculate how much 2000 calories a day for a year would cost. You now have a rough estimate of the lowest cost diet high in carbohydrate, fat or protein, as you prefer. To be safe, throw in a big jar of vitamins. That diet may not taste very good--but taste no longer matters. Eating is a material transaction, tasting an information transaction. Tape record ten thousand meals from the world's best restaurants and your lentils are filet mignon, sushi, ice cream sundaes.
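The calculation goes like this--with made-up grocery prices and calorie densities standing in for whatever your local bulk prices actually are:

```python
# Back-of-the-envelope cost of a bare-subsistence diet.
# All prices and calorie figures below are invented placeholders;
# substitute real bulk prices before trusting the totals.
CAL_PER_DAY = 2000
DAYS = 365

staples = {
    # name: (dollars per pound, calories per pound) -- hypothetical
    "flour":   (0.25, 1650),
    "oil":     (0.80, 4000),
    "lentils": (0.70, 1600),
}

for name, (price, cal_per_lb) in staples.items():
    pounds_per_year = CAL_PER_DAY * DAYS / cal_per_lb
    cost = pounds_per_year * price
    print(f"{name}: about ${cost:.0f} per year")
```

At these invented prices, any one of the three comes to a few hundred dollars a year--which is the point: keeping a body alive is cheap once taste has moved to the information side of the ledger.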
Viewed in realspace, it is not much of a world. Everyone is living on the cheapest food that will keep his body in good condition in the human equivalent of coin operated airport storage, exercising by moving against resistance machines--perhaps as part of virtual reality games, perhaps under automatic control while his mind is elsewhere.
To the people living in it, it is paradise. All women are beautiful--and enough are willing. All men are handsome. Everyone lives in a mansion that he can redecorate at will--gold plated if he so desires. Anyone, anywhere, any experience in storage, any life that can be created as an illusion, is an instant away. Eat all you want of whatever you like and never put on a pound.
As evidence against that conclusion, consider a very old form of virtual sex--masturbation. In your mind you can be making love to the woman of your dreams, at least if you have a good enough imagination. The orgasm, the physical sensations inside your body, the nervous signals reaching your brain, are real. Yet, even with the improved technology of pornographic books and videos, there is still something missing.
If what matters is what we experience, the world I have described is a paradise. If what matters is what really happens, the situation is more complicated. Having someone read a book I wrote, enjoy and be persuaded by my ideas, pleases me just as much if he reads it in virtual reality. But what about only thinking someone read my book? What if I wake up from a long lifetime as a successful author--or basketball star, or opera singer, or Casanova--to discover it was all a dream? Is that just as good as the real thing? Is it all right as long as I die before I wake up?
Robert Nozick put the question in terms of an imaginary experience machine, his version of VR. Plug someone in and he will have experiences just as vivid and just as convincing as in real life--a lifetime of them. Suppose, as Nozick did, that the owner of the experience machine knows the life you are going to live and offers you a slightly improved version. Plug into his machine for an imaginary life in which your babies cry a little less, your salary is a little higher, your career is in a firm with a little more status. If you believe him, do you take the deal? Do you trade a real life for a fictional life? Is what matters rearing children, making a career, planting fruit trees, writing books--or is what matters the feelings you would have as a result of doing all of those things?
In some ways, the future has been a great disappointment. When I was first reading science fiction, space travel was almost a defining characteristic of the genre--interplanetary at the least, with luck interstellar. Other technologies are well ahead of schedule--computers are a great deal smaller than most authors expected and used for a much wider variety of everyday purposes, genetic engineering of crops is already a reality. But serious use of space has been limited to near earth orbit--our back yard. Even scientific activity has not gotten humans past a very brief visit to the moon. We have sent a few small machines a little farther, and that is about it.
One possible explanation is that the slow rate of progress is due to the dominant role of governments--itself in part a result of the obvious military applications. Another is that getting into space was harder than writers thought. The problem with the latter explanation is that we have already done the hard part. The next steps, once we are free of the terrible drag of earth's gravity, should be much easier. Perhaps, after a brief pause for rest and refreshment, they will be.
In one of Poul Anderson's more improbable science fiction stories, a man and a crow successfully transport themselves from one asteroid to another in a spaceship powered by several kegs of beer. From what I know of the author, he probably did the arithmetic to make sure the thing would fly.
It would not have gotten far on earth, but moving in space is in some ways a much easier problem. Our present home is inconveniently located at the bottom of a very deep well. Getting out of that well, lifting something from the surface of earth into space, takes a lot of work. The price, the charge for satellite launches and similar services, is measured in thousands of dollars a pound.
Science fiction writers of the fifties and sixties took it for granted that the point of getting off earth was to get to Mars, or Venus, or perhaps to a planet circling some distant star. Sometime between then and now it occurred to someone that it made little sense to climb, with enormous effort, out of one well only to jump down another. Planets are traps.
One alternative is an orbital habitat, a giant spaceship permanently located in orbit around a planet or star. The ecology of such a miniature world, like the ecology of the orbiting habitat we now live on, would consist of closed cycles powered by the sun. Recycling on an almost total scale.
The first problem is where to put it. A solar orbit puts you a long way from home--unless it is very close to the earth's, in which case it is made unstable by the earth's gravity. Orbiting the earth, far enough out to avoid the clutter of communication satellites and orbital trash, looks more attractive. Unfortunately, such an orbit will eventually decay, even if not quite as fast in the real world as on Star Trek.
The solutions are Lagrangian Points 4 and 5, L4 and L5 for short. They are locations in orbit around the earth 60° ahead and behind the moon. As Joseph Louis Lagrange proved in 1772, they are stable equilibria. A satellite or space habitat placed at L4 or L5 stays there. Like a ball bearing at the bottom of a bowl, if something pushes it a little away from the center, it moves back.
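The stability claim can be checked against the standard result for the circular restricted three-body problem: the triangular points L4 and L5 are linearly stable when the ratio of the two primary masses exceeds (25 + √621)/2, about 24.96. For the earth-moon pair:

```python
import math

# Standard linear-stability condition for the triangular Lagrange
# points: stable when m1/m2 exceeds (25 + sqrt(621)) / 2 ~= 24.96.
critical_ratio = (25 + math.sqrt(621)) / 2   # ~24.96

EARTH_MASS_KG = 5.972e24
MOON_MASS_KG = 7.346e22
ratio = EARTH_MASS_KG / MOON_MASS_KG          # ~81.3

stable = ratio > critical_ratio               # earth-moon L4/L5 are stable
```

The earth outweighs the moon by about eighty-one to one, comfortably above the threshold--which is why a habitat parked at L4 or L5, like the ball bearing in the bowl, stays put.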
A second problem is what to build your space habitat out of--at five thousand dollars a pound, materials from earth are a bit pricey. That suggests an alternative location--the asteroid belt, which consists of a large number of chunks of rock located between the orbits of Mars and Jupiter. If we do not want to live that far from home, we might use asteroids outside of the belt, some of which have orbits that come quite close to that of earth.
Asteroids are small enough so that their gravity is negligible. Many are large enough to provide adequate quantities of building material. One way of using it would be to colonize an asteroid, perhaps drilling tunnels in its interior. An alternative, for those who prefer a shorter commute to the neighborhood of the home planet, is to mine an asteroid and ship what you get back to somewhere near earth--L5, say. It is a long way from the asteroid belt to the earth, but transportation is easier if you are not starting at the bottom of a well. Delivering material from the asteroid belt might take months, even years, but the forces required are much smaller than those needed to lift the same amount from earth. If you are not in too much of a hurry you could even try beer.
A future in which some significant number of people are permanent residents of space--habitats, asteroids, perhaps fleets of mining ships--raises some interesting issues. The obvious political question is who rules them--are they the legal equivalent of ships on the high seas, independent states, or something else? An important legal and economic issue is how to define and enforce the relevant property rights--to an orbit (already a problem for communication satellites), chunks of matter floating through space, or whatever else is scarce and useful.
So far the only reason I have offered for living in space is that it is much easier to get to space from there. Readers may be reminded of the man who explained to his friends that he played golf to stay fit; when asked what he was staying fit for he answered "golf." There are better answers. An environment with zero gravity and an unlimited supply of almost perfect vacuum could be useful for some forms of production. Asteroids could provide a very large and inexpensive source of raw materials. While their first use will be to build things in space, we do not have to stop there. Getting things down a well is a lot less work than getting them up.
Another answer is that, if earth gets crowded, we may want to look at other places to live. By mining the asteroid belt we could build structures that would provide living space for enormously more people than presently exist. There need be no shortage of power; the sunlight that falls on earth is less than one billionth of the total output of the sun. A sufficiently developed spacefaring civilization could make use of the rest of it. In the limiting case, one could imagine the sun entirely surrounded by the works of man, visible from other stars only by the vast infrared output of our waste heat. Freeman Dyson has proposed locating other technologically sophisticated species by searching the heavens for stars like that.
The final answer is that there are risks to putting all our eggs in one basket. It is possible, indeed not unlikely, that life on earth will get better and better over the next few decades. But it is far from certain. One can imagine a range of possible catastrophes, from grey goo to global government, that would make having somewhere else to be a very attractive option. There is a lot of space in space.
The biggest barrier to the future I have been sketching is the cost of getting off earth. While a space civilization, once started, might be self sustaining, it requires a big start. And at five thousand dollars a pound, not many of us are likely to go.
"Artsutanov proposed to use the initial cable to multiply itself, in a sort of boot-strap operation, until it was strengthened a thousand fold. Then, he calculated, it would be able to handle 500 tons an hour or 12000 tons a day. When you consider that this is roughly equivalent to one Shuttle flight every minute, you will appreciate that Comrade Artsutanov is not thinking on quite the same scale as NASA. Yet if one extrapolates from Lindbergh to the state of transatlantic air traffic 50 yr later, dare we say that he is over-optimistic? It is doubtless a pure coincidence, but the system Artsutanov envisages could just about cope with the current daily increase in the world population, allowing the usual 22 kg of baggage per emigrant...." Arthur C. Clarke
For a really efficient form of transport, consider the humble elevator. Lifting the elevator itself takes almost no energy, since as the box goes up the counterweight goes down. Energy consumption is reduced to something close to its absolute minimum--the energy required to lift the passengers from one point to a higher point. And if your design is good enough, you can recover most of that energy when they come back down.
The idea of applying this approach to space transport--like the less efficient method we currently use--is due to a Russian. A multistage rocket was first proposed by Tsiolkovsky in 1895. The space elevator was first proposed by Yuri Artsutanov, a Leningrad engineer, in 1960--and has been independently invented at least half a dozen times since.
You start with a satellite in geosynchronous orbit--over the equator, moving in the direction of the earth's rotation, going around the earth once a day. From the viewpoint of someone on the ground the satellite is standing still, since it orbits the earth at exactly the same rate that the earth rotates.
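That there is exactly one such orbit follows from Kepler's third law: only one orbital radius gives a period of one sidereal day. A quick sketch in Python, using rounded textbook constants, recovers the familiar numbers:

```python
import math

# Geosynchronous orbit: the radius at which the orbital period equals one
# sidereal day. Kepler's third law gives r = (GM * T^2 / 4 pi^2)^(1/3).
# Constants are rounded textbook values, not mission-grade figures.
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
T = 86164.0          # sidereal day, seconds

r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - 6.371e6) / 1000   # subtract Earth's mean radius

print(f"orbital radius: {r/1000:.0f} km from Earth's center")  # ~42,200 km
print(f"altitude:       {altitude_km:.0f} km above the surface")  # ~35,800 km
```

That altitude--about 36,000 km--is the length of cable the elevator's lower half must span, which is why the material problem discussed below is so severe.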
Let out two cables from this satellite, one going up, one down. For the one going up, centrifugal force more than balances gravity, so it tries to pull the satellite up. For the one going down, gravity more than balances centrifugal force, so it pulls the other way. Let out your cables at the right speed and the two effects exactly balance. Continue letting out the cables until the lower one touches the ground. Attach it to a convenient island. Run an elevator up it. You now have a way of getting into space at dollars a pound instead of thousands of dollars a pound.
A space elevator has a number of odd and interesting characteristics, some of which we will get to shortly. Unfortunately, building it faces one very serious technical problem: finding something strong enough and light enough to make a long enough rope.
Consider a steel cable hanging vertically. If it is longer than about fifty kilometers, its weight exceeds its strength and it breaks. Making the cable thicker does not help, since each time you double its strength you also double its weight. Kevlar, used for purposes that include bulletproof garments, is considerably stronger for its weight than steel. A Kevlar cable can get to about two hundred kilometers before it breaks under its own weight.
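The figures above come from a simple formula: a uniform cable breaks under its own weight at the length where weight equals tensile strength, which works out to strength divided by density times gravity. A sketch, using rough representative material figures (real values vary considerably by grade):

```python
# Breaking length of a uniform (untapered) hanging cable: the length at
# which its own weight equals its tensile strength, L = sigma / (rho * g).
# Note that thickness cancels out: doubling the cross-section doubles both
# strength and weight, which is why a fatter cable does not help.
g = 9.8  # m/s^2, taken as constant (an overestimate for a long cable)

def breaking_length_km(strength_pa, density_kg_m3):
    return strength_pa / (density_kg_m3 * g) / 1000

steel = breaking_length_km(4.0e9, 7850)   # high-strength steel wire (rough)
kevlar = breaking_length_km(2.9e9, 1440)  # Kevlar fiber (rough)

print(f"steel:  ~{steel:.0f} km")   # on the order of 50 km
print(f"kevlar: ~{kevlar:.0f} km")  # on the order of 200 km
```

The calculation overstates the difficulty slightly, since it holds gravity constant over the cable's length; the next paragraph explains why the real problem is easier than the raw ratio suggests.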
At first glance, it looks as though we need a material almost two hundred times stronger for its weight than Kevlar, but the situation is not quite that bad. As you go up the cable, you are getting farther from the earth--gravity is getting weaker and, since the cable is going around with the satellite (and the earth), centrifugal force is getting stronger. By the time you get to the satellite, the two balance. So it is only the lower end of the cable that will be really heavy. Furthermore, the lower you go on the cable the less weight is below it to be held up, so you can make a cable longer before it breaks by tapering it. Building a space elevator requires something quite a lot stronger for its weight than Kevlar, but not two hundred times stronger.
Such materials exist. Microscopic carbon fibers appear to have the necessary properties. So, according to theoretical calculations, would buckytubes--long fibers of carbon atoms bonded to each other. Neither is in industrial production in the necessary sizes just now, but that may change in the fairly near future.
One nice feature of carbon--aside from its ability to make very strong materials--is that some asteroids are largely made out of it. Move one of them into orbit, equip it with a factory capable of turning carbon into superstrong cable, … . When you are done, use what is left of the asteroid for a counterweight, attached to the cable that goes from the satellite away from the earth, letting you hold the lower cable up with a much shorter upper cable. Nobody is taking bids on the project just at the moment, but in principle it is doable.
Consider a cargo container moving up the cable. At the bottom, its motors have to lift its full weight. As it gets higher, gravity gets weaker, centrifugal force gets stronger, so it becomes easier and easier to move it up. When it reaches the satellite at geosynchronous orbit, the two exactly balance--inside the container, you float.
Past geosynchronous orbit the balance tips the other way: centrifugal force now exceeds gravity, so a capsule that keeps climbing picks up speed instead of losing it. One possibility is to let that process work--with careful timing--and use the accumulated velocity to launch you into space. In principle, it would be possible to build space elevators on a number of different planets and use them instead of rockets for interplanetary transport. Think of it as a giant game of catch. You get launched from earth by letting go of its space elevator at just the right time and place. As you approach Mars you adjust your trajectory a little--we probably still need rockets for fine-tuning the system--so that you match velocities with the space elevator whipping around Mars. Let go of that at the right time, after moving a suitable distance in or out, and you are on your way to the asteroid belt, or perhaps Jupiter--although building a space elevator on Jupiter might raise problems even for the best cable nanotechnology could spin. It's a dizzying picture.
An alternative is to equip your cargo capsule with regenerative brakes, an idea already implemented in electric and hybrid cars. A regenerative brake is an electric generator that converts the kinetic energy of the car into electricity--slowing the car down and recharging its batteries. On the space elevator, the electricity generated by the brakes keeping one cargo capsule from taking off for Mars could be used to lift the next one from earth to the satellite.
Skeptical readers may wonder where all this energy, used to fling spaceships around the solar system or lift capsules from earth, is coming from. The answer is that it is coming from the rotation of the earth. Every time you lift a load up the elevator it is being accelerated in the direction of the earth's rotation, since the higher it is the faster it has to move in order to circle the earth once a day. For every action there is a reaction--conservation of angular momentum implies that accelerating the load slows down the earth. Fortunately, the earth is very much larger than either us or the things we are likely to send up the elevator, so it would be a very long time before the effect became significant.
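Just how insignificant the effect is can be checked with conservation of angular momentum. A rough sketch, using textbook values for the earth's moment of inertia and the geosynchronous radius (illustrative figures, not precision geophysics):

```python
import math

# How much does lifting cargo up the elevator slow the earth's rotation?
# The payload's share of the spin angular momentum is m * r^2 * omega;
# dividing by the earth's total (I * omega) gives the fractional slowdown.
omega = 2 * math.pi / 86164   # Earth's rotation rate, rad/s
I_earth = 8.0e37              # Earth's moment of inertia, kg m^2 (rough)
m = 1000.0                    # one metric ton of cargo
r_geo = 4.22e7                # geosynchronous orbital radius, m

frac = m * r_geo**2 / I_earth       # fraction of Earth's angular momentum
day_lengthening_s = 86400 * frac    # resulting increase in the day's length

print(f"fraction of Earth's angular momentum: {frac:.1e}")
print(f"day lengthens by ~{day_lengthening_s:.1e} s per ton lifted")
```

The answer comes out around a part in 10^20 per ton lifted--lengthening the day by roughly a femtosecond--so the earth's spin is, for practical purposes, an inexhaustible battery.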
Start, this time, with a satellite much closer to the earth. Again release two cables, one up, one down. Since this satellite is not in geosynchronous orbit, it is moving relative to the surface of the earth. That makes it difficult to attach the bottom end of the cable--so we don't. Instead we rotate the cable--one end below the satellite, one above--like two spokes of an enormous wheel rolling around the earth.
The satellite is moving around the globe, but the bottom end of the cable, when it is at its lowest point, is standing still; the cable's motion relative to the satellite just cancels the satellite's motion relative to the earth. If that sounds odd, consider your car, going down the freeway at sixty miles an hour. The car is moving, but the bottom of the tire is standing still, since the rotation of the wheel moves it backwards relative to the car just as fast as the car moves forwards relative to the pavement. The skyhook applies the same principle scaled up a bit.
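The tire analogy can be made quantitative: for the cable tip to be momentarily at rest relative to the ground, its rotation speed around the satellite must exactly cancel the satellite's velocity relative to the rotating earth. A sketch, assuming an illustrative low orbit of about 400 km:

```python
import math

# For a rotating skyhook in low orbit, the tip is momentarily stationary
# relative to the ground when its rotation speed (about the satellite)
# equals the satellite's velocity relative to the earth's surface.
GM = 3.986e14                  # m^3/s^2
R_earth = 6.371e6              # m
r = R_earth + 400e3            # satellite altitude ~400 km (an assumption)

v_orbit = math.sqrt(GM / r)                # circular orbital speed
v_ground = 2 * math.pi / 86164 * R_earth   # equatorial rotation speed
v_tip = v_orbit - v_ground                 # required tip rotation speed

print(f"tip must swing at ~{v_tip/1000:.1f} km/s around the satellite")
```

The tip must whip around at better than seven kilometers a second--which gives some feel for why the cable, though much shorter than a space elevator's, is still a serious materials problem.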
Seen from earth, the end of the cable comes down from space, stops at the bottom of its trajectory, goes back up. To use it for space transport, you put your cargo capsule on an airplane, fly up to where the cable is going to be, hook on just as the bottom of the cable reaches its lowest point. The advantage over the space elevator is that the much lower orbit means a much shorter cable, so you can come a lot closer to building it with presently available materials. The physics works, but don't expect the Civil Aeronautics Board to approve it for passengers anytime soon.
A different version that might be workable even sooner has been proposed by researchers at Lockheed Martin's Skunk Works, source of quite a lot of past aeronautical innovation. It starts with a simple observation: getting something to orbit is much more than twice as hard as getting it half way to orbit. If you have two entirely different technologies for putting something in orbit, why not let each of them do half the job?
The Skunk Works proposal uses a short skyhook, reaching from a satellite in low orbit to a point above the atmosphere. It combines it with a spaceplane--a cross between an airplane and the space shuttle, capable of taking off from an ordinary airport and lifting its cargo a good deal of the way, but not all the way, to orbit. The spaceplane takes the cargo capsule to the skyhook, the skyhook takes it the rest of the way. The engineers who came up with the design believe that it could be built today and that it would bring the cost of lifting material into space down to about five hundred and fifty dollars a pound. That is quite a lot more than the estimated cost with a space elevator, but about a tenth the cost of using a rocket.
A little less than a century ago--in 1908--Russia was hit with a fifteen megaton airburst. Fortunately the target was not Moscow but a Siberian swamp. The explosion leveled trees over an area about half the size of the state of Rhode Island. While there is still some uncertainty as to precisely what the Tunguska event was, most researchers agree that it was something from space--perhaps a small asteroid or part of a comet--that hit the earth. A rough estimate of its diameter is 60 meters. While it was the largest such event in recorded history, there is geological evidence of much larger strikes. One, occurring about 65 million years ago, left a crater 180 km across and a possible explanation for the period of mass extinctions that eliminated the dinosaurs.
2002 CU11 is a near earth object, an asteroid in an orbit that will bring it near the earth. Its estimated diameter is 730 meters. Since volume goes as the cube of diameter, that means that it probably has more than a thousand times the mass of the Tunguska meteor and could do a comparably greater amount of damage--quite a lot more than the largest H-bomb ever tested. A little while after it was first observed, 2002 CU11 was estimated to have about a one in nine thousand chance of striking the earth in 2049. You will be relieved to know that later observations, allowing a more precise calculation of its orbit, have reduced that probability to essentially zero.
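The "more than a thousand times" figure is just the cube rule applied to the two diameters, assuming similar density:

```python
# Mass scales with volume, and volume with the cube of diameter, so
# comparing 2002 CU11 (~730 m) with the Tunguska object (~60 m):
ratio = (730 / 60) ** 3
print(f"~{ratio:.0f} times the mass")  # roughly 1,800x
```

Cube scaling is why modest-sounding differences in diameter translate into enormous differences in destructive potential.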
2000 SG344 is a much smaller rock--about forty meters. NASA estimates that it has about a one in five hundred chance of hitting the earth sometime between 2068 and 2101. Even a rock that small would produce an explosion very much more powerful than the bomb dropped on Hiroshima.
By current estimates, there are about a thousand near earth objects of 1 km diameter or above and a much larger number of smaller ones. We think we have spotted more than half of the big ones; none appear to be on a collision course. Since an object that will at some point in its orbit pass near earth may at the moment be a very long distance away, locating all of them is hard.
Our best guess at the moment, from the geological evidence, is that really big asteroids--2 km and over--hit the earth at a rate of about one or two every million years. That makes the odds that such a strike will occur during one person's lifespan about one in ten thousand. Smaller strikes are much more common--one in the megaton range in the last century.
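The lifetime odds follow from the strike rate by straightforward arithmetic. A sketch, taking the rate as 1.5 per million years and a 70-year lifespan (both round illustrative figures):

```python
import math

# If large strikes arrive at random at ~1.5 per million years, the chance
# of at least one during a ~70-year lifespan is 1 - exp(-rate * years).
rate_per_year = 1.5e-6
lifespan_years = 70

p = 1 - math.exp(-rate_per_year * lifespan_years)
print(f"lifetime probability: ~1 in {1/p:,.0f}")  # about one in ten thousand
```

At these small probabilities the exponential barely matters--rate times lifespan gives nearly the same answer--but the form generalizes to rarer or commoner hazards.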