imprimatur 1. The formula (= ‘let it be printed’), signed by an official licenser of the press, authorizing the printing of a book; hence as sb. an official license to print.
The Oxford English Dictionary (2nd ed.)
Over the last two years I have become deeply and increasingly pessimistic about the future of liberty and freedom of speech, particularly in regard to the Internet. This is a complete reversal of the almost unbounded optimism I felt during the 1994–1999 period when public access to the Internet burgeoned and innovative new forms of communication appeared in rapid succession. In that epoch I was firmly convinced that universal access to the Internet would provide a countervailing force against the centralisation and concentration in government and the mass media which act to constrain freedom of expression and unrestricted access to information. Further, the Internet, properly used, could actually roll back government and corporate encroachment on individual freedom by allowing information to flow past the barriers erected by totalitarian or authoritarian governments and around the gatekeepers of the mainstream media.
So convinced was I of the potential of the Internet as a means of global unregulated person-to-person communication that I spent the better part of three years developing Speak Freely for Unix and Windows, a free (public domain) Internet telephone with military-grade encryption. Why did I do it? Because I believed that a world in which anybody with Internet access could talk to anybody else so equipped in total privacy and at a fraction of the cost of a telephone call would be a better place to live than a world without such communication.
Computers and the Internet, like all technologies, are a double-edged sword: whether they improve or degrade the human condition depends on who controls them and how they’re used. A large majority of computer-related science fiction from the 1950’s through the dawn of the personal computer in the 1970’s focused on the potential for centralised computer-administered societies to manifest forms of tyranny worse than any in human history, and the risk that computers and centralised databases, adopted with the best of intentions, might inadvertently lead to the emergence of just such a dystopia.
The advent of the personal computer turned these dark scenarios inside-out. With the relentless progression of Moore’s Law doubling the power of computers at constant cost every two years or so, in a matter of a few years the vast majority of the computer power on Earth was in the hands of individuals. Indeed, the large organisations which previously had a near monopoly on computers often found themselves using antiquated equipment inferior in performance to systems used by teenagers to play games. In less than five years, computers became as decentralised as television sets.
But there’s a big difference between a computer and a television set—the television can receive only what broadcasters choose to air, but the computer can be used to create content—programs, documents, images—media of any kind, which can be exchanged (once issues of file compatibility are sorted out, perhaps sometime in the next fifty centuries) with any other computer user, anywhere.
Personal computers, originally isolated, almost immediately began to self-organise into means of communication as well as computation—indeed it is the former, rather than the latter, which is their principal destiny. Online services such as CompuServe and GEnie provided archives of files, access to data, and discussion fora where personal computer users with a subscription and modem could meet, communicate, and exchange files. Computer bulletin board systems, FidoNet, and uucp/usenet store and forward mail and news systems decentralised communication among personal computer users, culminating in the explosive growth of individual Internet access in the latter part of the 1990’s.
Finally the dream had become reality. Individuals, all over the globe, were empowered to create and exchange information of all kinds, spontaneously form virtual communities, and do so in a totally decentralised manner, free of any kind of restrictions or regulations (other than already-defined criminal activity, which is governed by the same laws whether committed with or without the aid of a computer). Indeed, the very design of the Internet seemed technologically proof against attempts to put the genie back in the bottle. “The Internet treats censorship like damage and routes around it.”[Note: This observation is variously attributed to John Gilmore and John Nagle; I don’t want to get into that debate here]. Certainly, authoritarian societies fearful of losing control over information reaching their populations could restrict or attempt to filter Internet access, but in doing so they would render themselves less competitive against open societies with unrestricted access to all the world’s knowledge. In any case, the Internet, like banned books, videos, and satellite dishes, has a way of seeping into even the most repressive societies, at least at the top.
Without any doubt this explosive technological and social phenomenon discomfited many institutions which quite correctly saw it as reducing their existing control over the flow of information and the means of interaction among people. Suddenly freedom of the press wasn’t just something which applied to those who owned one, but was now near-universal: media and messages which previously could be diffused only to a limited audience, at great difficulty and expense, could now be made available around the world at almost no cost, bypassing the mass media and crossing borders without customs, censorship, or regulation.
To be sure, there were attempts by “the people in charge” to recover some of the authority they had so suddenly lost: attempts to restrict the distribution and/or use of encryption, key escrow and the Clipper chip fiasco, content regulation such as the Communications Decency Act, and the successful legal assault on Napster, but most of these initiatives either failed or proved ineffective because the Internet “routed around them”—found other means of accomplishing the same thing. Finally, the emergence of viable international OpenSource alternatives to commercial software seemed to guarantee that control over computers and the Internet was beyond the reach of any government or software vendor—any attempt to mandate restrictions in commercial software would only make OpenSource alternatives more compelling and accelerate their general adoption.
This is how I saw things at the euphoric peak of my recent optimism. Like the transition between expansion and contraction in a universe with Omega greater than 1, evidence that the Big Bang was turning the corner toward a Big Crunch was slow to develop, but increasingly compelling as events played out. Earlier I believed there was no way to put the Internet genie back into the bottle. In this document I will provide a road map of precisely how I believe that could be done, potentially setting the stage for an authoritarian political and intellectual dark age global in scope and self-perpetuating, a disempowerment of the individual which extinguishes the very innovation and diversity of thought which have brought down so many tyrannies in the past.
One note as to the style of this document: as in my earlier Unicard paper, I will present many of the arguments using the same catch phrases, facile reasoning, and short-circuits to considered judgment which proponents of these schemes will undoubtedly use to peddle them to policy makers and the public. I use this language solely to demonstrate how compelling the arguments can be made for each individual piece of the puzzle as it is put in place, without ever revealing the ultimate picture. As with Unicard, I will doubtless be attacked by prognathous pithecanthropoid knuckle-typers who snatch sentences out of context. So be it.
The original design of the arpanet, inherited by the Internet, was inherently peer to peer. I do not use the phrase “peer to peer” here as a euphemism for “file sharing” or other related activities, but in its original architectural sense, that all hosts on the network were logically equals. Certainly, Internet connections differed in bandwidth, latency, and reliability, but apart from those physical properties any machine connected to the Internet could act as a client, server, or neither—simply a peer of those with which it communicated. Any Internet host could provide any service to any other and access any service provided by them. New kinds of services could be invented as required, subject only to compatibility with the higher level transport protocols (such as tcp and udp).
This architecture made the Internet something unprecedented in the human experience, the first many-to-many mass medium. Let me elaborate a bit on that. Technological innovations in communication dating back to the printing press tended to fall into two categories. The first, exemplified by publishing (newspapers, magazines, and books) and broadcasting (radio and television) was a one-to-many mass medium: the number of senders (publishers, radio and television stations) was minuscule compared to their audience, and the capital costs required to launch a new publication or broadcast station posed a formidable barrier to new entries. The second category, including postal mail, telegrams, and the telephone, is a one-to-one medium; you could (as the technology of each matured) communicate with almost anybody in the world where such service was available, but your communications were person to person—point to point. No communication medium prior to the Internet had the potential of permitting any individual to publish material to a global audience. (Certainly, if one creates a Web site which attracts a large audience, the bandwidth and/or hosting costs can be substantial, yet are still negligible compared to the capital required to launch a print publication or broadcast outlet with comparable reach.)
This had the effect of dismantling the traditional barriers to entry into the arena of ideas, leveling the playing field to such an extent that an individual could attract an audience for their own work, purely on the basis of merit and word of mouth, as large as those of corporate giants entrenched in earlier media. Beyond direct analogues to broadcasting, the peer to peer architecture of the Internet allowed creation of entirely new kinds of media—discussion boards, scientific preprint repositories, web logs with feedback from readers, collaborative open source software development, audio and video conferences, online auctions, music file sharing, open hypertext systems, and a multitude of other kinds of spontaneous human interaction.
A change this profound, taking place in less than a decade (for despite the arpanet’s dating to 1969, it was only as the Internet attracted a mass audience in the late 1990s that its societal and economic impact became significant), must inevitably prove discomfiting to those invested in, or basing their communication strategy on, traditional media. One needn’t invoke conspiracy theories to observe that many news media, music publishers, and governments feel a certain nostalgia for the good old days before the Internet. Back then, there were producers (publishers, broadcasters, wire services) and consumers (subscribers, book and record buyers, television and radio audiences), and everybody knew their place. Governments didn’t need to fret over mass unsupervised data flow across their borders, nor over insurgent groups assembling, communicating anonymously and securely, and operating out of sight and beyond the control of traditional organs of state security.
Despite the advent of the Internet, traditional media and government continue to exercise formidable power. Any organisation can be expected to act to preserve and expand its power, not passively acquiesce in its dissipation. Indeed, consolidation among Internet infrastructure companies and increased governmental surveillance of activities on the Internet are creating the potential for the imposition of “points of control” onto the originally decentralised Internet. Such points of control can be used for whatever purposes those who put them in place wish to accomplish. The trend seems clear—over the next five to ten years, we will see an effort to “put the Internet genie back in the bottle”: to restore the traditional producer/consumer, government/subject relationships which obtained before the Internet disrupted them.
A set of technologies, each already in existence or being readied for introduction, can, when widely deployed and employed toward that end, reimpose the producer/consumer information dissemination model on the Internet, restoring the central points of control which traditional media and governments see threatened by its advent. Each of the requisite technologies can be justified on its own as solving clamant problems of the present day Internet, and may be expected to be promoted or mandated as so doing. In the next section, we’ll look at these precursor technologies.
The dark future I dread will be the consequence of the adoption, by marketing or mandate, of a collection of individual technologies, each of which can be advocated as beneficial in its own right but which, taken together, have consequences less apparent to many, yet, I believe, quite evident to some of those now promoting them. Each of the following technologies is either currently in existence or is the object of an active development effort. These items necessarily interact with one another, so it is impossible to entirely avoid forward references in discussing them. If something doesn’t seem clear on the first reading, you may benefit from re-reading this section after you’ve digested the essentials.
Note: this item discusses a phenomenon, already underway, which is effectively segmenting Internet users into two categories: home users who are consumers of Internet services, and privileged sites which publish content and provide services. The technologies discussed in the balance of this document are entirely independent of this trend, and can be deployed whether or not it continues. If you aren’t interested in such details or take violent issue with the interpretation I place upon them, please skip to the next heading. I raise the issue here because when discussing the main topics of this document with colleagues, a common reaction has been, “Users will never put up with being relegated to restricted access to the Internet.” But, in fact, they already are being so relegated by the vast majority of broadband connections, and most aren’t even aware of what they’ve lost or why it matters.
When individuals first began to connect to the Internet in large numbers, their connection made them logical peers of all other Internet users, regardless of nature and size. While a large commercial site might have a persistent, high bandwidth connection and a far more powerful server than the home user, there was nothing, in principle, such a site could do that an individual user could not—any Internet user could connect to any other and interchange any form of data on any port in any protocol which conformed to the underlying Internet transport protocols. The user with a slow dial-up connection might have to be more patient, and probably couldn’t send and receive video in real-time, but there was no distinction in the ways they could use the Internet.
Over time, this equality among Internet users has eroded, in large part due to technical workarounds to cope with the limited 32-bit address space of the present day Internet. I describe this process in detail in Appendix 1, exploring how these expedients have contributed to the anonymity and lack of accountability of the Internet today. With the advent of broadband dsl and cable television Internet connections, a segmentation of the Internet community is coming into being. The typical home user with broadband access has one or more computers connected to a router (perhaps built into the dsl or cable modem) which performs Network Address Translation, or nat. This allows multiple computers to share a single fast Internet connection. Most nat boxes, as delivered, also act as a rudimentary Internet firewall, in that packets from the Internet can only enter the local network and reach computers connected to the broadband connection in reply to connections initiated from the inside. For example, when a local user connects to a Web site, the nat router allocates a channel (port) for traffic from the user’s machine to the Web site, along with a corresponding inbound channel for data returned from the Web site. Should an external site attempt to send packets to a machine on the local network which has not opened a connection to it, they will simply be discarded, as no inbound channel will have been opened to route them to the destination. Worms and viruses which attempt to propagate by contacting Internet hosts and exploiting vulnerabilities in software installed on them will never get past the nat box. (Of course, machines behind a nat box remain vulnerable to worms which propagate via E-mail and Web pages, or any other content a user can be induced to open.)
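To make the mechanism concrete, here is a minimal sketch in Python of the nat behaviour described above. The addresses, port numbers, and data structures are hypothetical stand-ins for what a real router does; the point is only that inbound traffic with no corresponding outbound connection has nowhere to go.

```python
# Minimal sketch of NAT behaviour (illustrative only; addresses and ports are made up).

class NatRouter:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}          # allocated public port -> (internal host, internal port)
        self.next_port = 40000   # arbitrary starting point for allocated ports

    def outbound(self, internal_host, internal_port, dest):
        """A machine inside the LAN opens a connection; allocate a return channel."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (internal_host, internal_port)
        print(f"{internal_host}:{internal_port} -> {dest} via {self.public_ip}:{public_port}")
        return public_port

    def inbound(self, public_port):
        """A packet arrives from the Internet; forward it only if a channel exists."""
        if public_port in self.table:
            host, port = self.table[public_port]
            print(f"forwarded to {host}:{port}")
        else:
            print("no mapping: packet silently discarded")   # unsolicited traffic never enters

nat = NatRouter("203.0.113.7")
port = nat.outbound("192.168.1.10", 51515, "web site 198.51.100.2:80")
nat.inbound(port)     # reply to the connection the user opened: delivered
nat.inbound(31337)    # unsolicited probe from a worm: dropped
```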
The typical home user never notices nat; it just works. But that user is no longer a peer of all other Internet users as the original architecture of the network intended. In particular, the home user behind a nat box has been relegated to the role of a consumer of Internet services. Such a user cannot create a Web site on their broadband connection, since the nat box will not permit inbound connections from external sites. Nor can the user set up true peer to peer connections with other users behind nat boxes, as there’s an insuperable chicken and egg problem creating a bidirectional connection between them.
Sites with persistent, unrestricted Internet connections now constitute a privileged class, able to use the Internet in ways a consumer site cannot. They can set up servers, create new kinds of Internet services, establish peer to peer connections with other sites—employ the Internet in all of the ways it was originally intended to be used. We might term these sites “publishers” or “broadcasters”, with the natted/firewalled home users their consumers or audience.
Technically astute readers will observe, of course, that nat need not prevent inbound connections; a savvy user with a configurable router can map inbound ports to computers on the local network and circumvent the usual restrictions. Yet I believe that as time passes, this capability will become increasingly rare. It is in the interest of broadband providers to prevent home users from setting up servers which might consume substantial upstream bandwidth. By enforcing an “outbound only” restriction on home users, they are blocked from setting up servers, and must use hosting services if, for example, they wish to create a personal home page. (With consolidation among Internet companies, the access supplier may also own a hosting service, creating a direct economic incentive to encourage customers to use it.)
In addition, it is probable that basic broadband service will be restricted to the set of Internet services used by consumers: Web, ftp, E-mail, instant messages, streaming video, etc., just as firewalls are configured today to limit access to a list of explicitly permitted services. Users will, certainly, be able to obtain “premium” service at additional cost which will eliminate these restrictions, just as many broadband companies will provide a fixed ip address as an extra cost option. But the Internet access market has historically been strongly price sensitive, so it is reasonable to expect that over the next few years the majority of users connected to the Internet will have consumer-grade access, which will limit their use to those services deemed appropriate for their market segment.
In any case, the key lesson of the mass introduction of nat is that it demonstrates, in a real world test, that the vast majority of Internet users do not notice and do not care that their access to the full range of Internet services and their ability to act as a peer of any other Internet site have been restricted. Those who assert that the introduction of the following technologies will result in a mass revolt among Internet users bear the burden of proof to show why those technologies, no more intrusive on the typical user’s Internet experience than natted broadband, will incite them to oppose their deployment.
A certificate is a digital identification of a physical or abstract object: a person, business, computer, program, or document. A certificate is simply a sequence of bits which uniquely identifies the object it pertains to. In most cases it is guaranteed that there is a one-to-one mapping between certificates and objects. To make this less abstract, consider a non-computer analogue: passports. A passport (or, more precisely, a passport number, as individuals may, in certain circumstances, obtain multiple physical passports bearing the same number), uniquely identifies a person as a citizen of the issuing country. No two people are given the same passport number, and one person’s attempting to obtain two different passport numbers is considered a crime involving a fraudulent declaration. A digital certificate is much like a passport. It is issued by a certificate authority, which vouches for its authenticity. (In the case of a passport, the certificate authority is the issuing government.) The certificate authority trades on its reputation for probity—to obtain high-grade personal certificates from recognised authorities, documentation equal to or better than that required to obtain a passport is necessary. As with passports, certificates issued by obscure or disreputable authorities will engender less trust than those from the big names.
Certificates are in wide use today. Every time you make a secure purchase on the Web, your browser retrieves a certificate from the e-commerce site to verify that you’re indeed talking to whom you think you are and to establish secure encrypted communications. Most browsers and E-mail clients allow you to use personal certificates to sign and encrypt mail to correspondents with certificates, but few people avail themselves of this capability at present, opting to send their E-mail in the clear where anybody can intercept it and you-know-who routinely does.
When you obtain a personal certificate, the certificate authority that signs it asserts that you have presented them adequate evidence you are who you claim to be (usually on the basis of an application validated by a notary, attorney, or bank or brokerage officer), and reserves the right to revoke your certificate should they discover it to have been obtained fraudulently. Certificate authorities provide an online service to validate certificates they issue, supplying whatever information you’ve chosen to disclose regarding your identity. Having obtained a certificate, you’re obliged to guard it as you would your passport, credit cards, and other personal documents. If another person steals your certificate, they will be able to read your private E-mail, forge mail in your name, and commit all the kinds of fraud present-day “identity theft” encompasses. While stolen certificates can be revoked and replacements issued, the experience is as painful as losing your wallet and worth the same effort to prevent.
A certificate comes in two parts: private and public. The private part is the credential a user employs to access the Internet, sign documents, authorise payments, and decrypt private files stored on their computer and secure messages received from others. It is the private part of the certificate a user must carefully guard; it may be protected by a pass phrase, be kept on a removable medium like a smart card, or require biometric identification (for example, fingerprint recognition) to access. The public part of the certificate is the user’s visible identification to others; many users will list their public certificate in a directory, just as they list their telephone number. Knowing a user’s public certificate allows one to encrypt messages (with that person’s public key, a component of the public certificate) which can only be decoded with the secret key included in the private certificate. When I speak of “sending the user’s certificate along with a request on the Internet” or tagging something with a certificate, I refer to the public certificate which identifies the user. The private certificate is never disclosed to anybody other than its owner.
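As an illustration of the public/private split, here is a small sketch using the widely available Python cryptography package: anyone who knows the public half of a key pair can encrypt a message that only the holder of the private half can read. A real certificate also binds identity information and a signature from the issuing authority; only the key-pair aspect is shown here.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# A key pair standing in for the private and public halves of a personal certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Anyone knowing the public part can encrypt a message to the holder...
ciphertext = public_key.encrypt(b"meet me at the usual place", oaep)

# ...but only the holder of the private part can decrypt it.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"meet me at the usual place"
```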
The scope of objects certificates can identify is unlimited. Here are some examples, as they presently exist and may be expected to evolve in the near future.
Minors may obtain certificates subject to parental consent, as is presently required to obtain a driver’s license or enlist in the military. A parent or guardian may require a minor’s certificate to disclose the minor’s age, which can be used to block access or filter content inappropriate for a person of that age. Further, if requested, the minor’s certificate may be linked to that of the parent or guardian, who may then read data encrypted with the minor’s certificate.
Certificate authorities undertake to protect the private encryption keys for all certificates they issue, along with any personal information the holder has not explicitly instructed them to disclose (beyond minimum legal requirements). Certificate holders may update their personal information as needed (providing suitable documentation when, for example, legally changing a name), and may suspend or revoke their certificates if suspected or confirmed to be compromised. Should the security of a certificate authority be breached, all certificate holders who may be affected must be notified. Certificate authorities will comply with requests from law enforcement, subject to due process, for recovery of private encryption keys or identity information, including those of revoked certificates.
Companies/Organisations. As noted above, most Web users already implicitly rely on certificates to confirm the identity of companies with which they do business on the Web. There’s a lot going on behind that little lock icon in your browser. If a site asserts its identity based on a certificate issued by “Bob’s Discount Passports and Pawn Shop”, an alert pops up to warn the user they may be about to do something really, really dumb. Similarly, before signing an organ donor contract online, be sure to check out the bona fides of Instant Ca$h for Kidneys and those of their certificate authority.
As the Internet is made increasingly secure, the requirements for obtaining a certificate for an organisation will be brought into line with those for certificates identifying individuals. Businesses, whether proprietorship, partnership, or corporation; nonprofit organisations; educational institutions; governmental bodies and other kinds of legal entities will obtain certificates by furnishing the kind of credentials currently required to obtain an employer identification number for income tax purposes or a sales tax/vat number. As with certificates for individuals, verification will ensure no entity has more than one valid certificate.
Unlike individual certificates, those granted to an organisation may be used to create subordinate certificates for components of the organisation. Individual offices, departments, etc. may obtain their own certificates, linked to the parent organisation’s, and administered by it. This delegation may occur to any number of levels, according to the administrative policy of the organisation—whether a subordinate certificate may create further sub-certificates is determined when it is granted and may be subsequently changed.
An important kind of subordinate certificate is that issued to staff (employees, etc.) of an organisation. Staff certificates identify an individual as a staff member and are used by that person for work-related purposes. The degree to which someone outside the organisation can obtain personal information about a staffer depends on that organisation’s own policy and government privacy requirements. An organisation is responsible for the actions of its staff using their certificates and may be compelled to identify them for law enforcement purposes. Whether a staff member can access the Internet from computers on the organisation’s network using their private individual certificate as opposed to the staff certificate, and whether the staff certificate may be used outside the organisation’s network, is up to the issuer and can easily be enforced technologically. The private encryption keys of a staff certificate can be recovered by the issuer (or a designated higher level in the hierarchy of certificates issued by the organisation), permitting supervision of staff activities and recovery of the staffer’s work product where necessary.
Computers. The “system signatures” computed from hardware properties by present-day “software activation” procedures are crude forms of certificates. The cpu serial numbers in recent Pentium chips (which can presently be disabled, due to public outcry) are still closer approximations. “Trusted Computing” platforms will contain a unique certificate for each machine, referred to as a “credential”.
Computers in the Trusted Computing era will be assigned a certificate by their manufacturers which cannot be changed by the user. (As will be discussed in further detail below, it may be possible to transfer the certificate to a new machine if the original system fails.) A computer certificate uniquely identifies a machine. It will be used to unlock software licensed for exclusive use on that machine, and to identify network traffic originating from it.
Programs. A program can be issued a certificate which certifies not only that it has been signed by its publisher, but further verifies its contents have not been modified, using a hash/signature/message digest algorithm such as md5. This technology is already being used for “signed applets” for browsers such as Microsoft Internet Explorer, where a user can (it is claimed) validate the publisher of the program and confirm that it has not been subsequently modified before executing it on their machine.
In the “Trusted Computing” architecture, every program will bear a certificate attesting to the identity of its publisher and permitting the operating system to confirm that it has not been corrupted. A Trusted Computing operating system will not execute a program which differs from the signature in its certificate and will periodically, when connected to the Internet, re-verify a program’s certificate to confirm it has not been revoked. Revoking the certificate of a deployed program effectively “un-publishes” it. Such a program will only continue to run on machines on which it was already installed and which are never subsequently connected to the Internet. While revocation of a program’s certificate is an extreme measure and can be expected to be correspondingly rare, it provides an essential mechanism to protect the Internet infrastructure from rapidly emerging threats. If a critical security flaw is found in widely-deployed software which creates an immediate peril, revoking its certificate can pull the plug on the program, requiring users to immediately install an update which corrects the problem.
Content. “Content” refers to any form of digital data: documents, images, audio, video, databases, etc. Here the issue isn’t identity or security, but rather authenticity and ownership rights. The publisher’s certificate signing your copy of Moby-Dick guarantees that this is Melville’s original novel, as opposed to a version “enhanced” by a cheesy-poof addict in which Ahab slays the white whale, builds a starship from the bones, and sets forth to annihilate whales throughout the galaxy.
Programs are, in fact, just a special case of content. Due to the risk malicious programs pose to individual users and the Internet, priority will be given to securing them but, with the advent of Digital Rights Management (see below), similar provisions will apply to all kinds of data stored on computer systems. Eventually, every file will be signed with a certificate identifying its creator and incorporating a signature which permits verifying its integrity. If the contents of a document have been corrupted or its certificate revoked, a Trusted Computing platform will not permit it to be opened, and the Secure Internet will not permit it to be transmitted. A document bound to a given user’s certificate, that of an organisation, or a specific computer may not be opened by others and will be stored in encrypted form which cannot be decoded without the requisite certificate.
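A minimal sketch of the kind of verification just described: the publisher signs the bytes of a program or document, and the platform refuses to open anything whose signature fails to verify. The sketch uses an RSA signature over a SHA-256 digest via the Python cryptography package; an actual Trusted Computing implementation would also consult the certificate authority online to check for revocation, which is not shown here.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Publisher side: sign the bytes of a program or document.
publisher_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
content = b"... program image or document bytes ..."
signature = publisher_key.sign(content, pss, hashes.SHA256())

# Platform side: before loading, verify against the public key in the publisher's certificate.
def may_open(data, sig, public_key):
    try:
        public_key.verify(sig, data, pss, hashes.SHA256())
        return True           # signature matches: content is unmodified
    except InvalidSignature:
        return False          # tampered or corrupted: refuse to open or execute

print(may_open(content, signature, publisher_key.public_key()))                 # True
print(may_open(content + b"tampered", signature, publisher_key.public_key()))   # False
```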
“Trusted Computing,” in the current jargon, has little or nothing to do with traditional concepts of software reliability or data security. Instead, it refers to an effort to embed end-to-end validation of the origin and integrity of data into computing hardware and system software. One key component is the identification of each computer by a unique certificate, but the ramifications go far beyond this. In addition to protecting computer users from insecure software (software not signed with a recognised vendor’s certificate and verified unmodified by its digital signature), users are also protected against corruption of data on their own computers. Data on a user’s own hard drive is encrypted and signed, permitting access rights and data integrity to be verified every time a file is loaded into memory. This will completely eliminate the risk of viruses corrupting installed programs or data files. It permits a software vendor to block the execution of any program deemed harmful, even retroactively (since certificates will be verified online). If a vulnerability is found in a software product installed on millions of users’ machines worldwide, it may be instantly disabled before it puts them at risk, forcing them to immediately upgrade to a new, secure version. In many cases this will occur automatically—the user need do nothing, nor even be aware of the upgrade to the system.
On a Trusted Computing system, the ability to back up, mirror, and transfer data will be necessarily limited. Hardware and compliant operating systems will restrict the ability to transfer data from system to system. For example, software bound to a given machine’s certificate will refuse to load on a machine with a different certificate. Perforce, this security must extend to the most fundamental and security-critical software of all—the rom bios and operating system kernel. Consequently, a trusted computing platform must validate the signature of an operating system before booting it. Operating systems not certified as implementing all the requirements of Trusted Computing will not be issued certificates, and may not be booted on such systems.
Today, buying stuff on the Internet is a big deal—something which many people remain hesitant to do, being well aware of the risks of having their credit card hijacked and the myriad distasteful sequels thereof. With the advent of certificates and Trusted Computing, these fears will dissipate. With one’s personal certificate (bound, perhaps, to one or more computers to which one has exclusive access, and secured by a pass phrase, smart card, or biometric identification) guaranteeing the security of the connection, and certificates on the other end validating the identity of the vendor, much of the tedious process of present-day Internet commerce can give way to a seamless surfing and shopping experience.
A micropayment exchange permits payments to be made between any two certificate holders. A user makes a payment by sending a message to the exchange, signed with the user’s private certificate, identifying the recipient by their public certificate and indicating the amount to be paid. Upon verification of the payer’s and recipient’s certificates and that sufficient funds are available in the payer’s account, the specified sum is transferred to the recipient’s account and a confirmation of the funds’ arrival is sent. Micropayment transactions can be performed explicitly by logging on to the exchange’s site, but will usually be initiated by direct connection to the exchange’s server when the user makes an online purchase.
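The exchange-side logic might look something like the following sketch. The message format, account names, and the signature check are hypothetical stand-ins; a real exchange would verify an actual digital signature against the payer’s certificate.

```python
# Sketch of exchange-side micropayment processing (all names and formats hypothetical).

accounts = {"payer-cert": 10.00, "site-cert": 0.00}   # balances in euros, keyed by public certificate

def signature_valid(message, signature, payer_cert):
    """Stand-in for verifying the message against the payer's private certificate."""
    return signature == f"signed-by:{payer_cert}"

def pay(message, signature):
    payer, recipient, amount = message["from"], message["to"], message["amount"]
    if not signature_valid(message, signature, payer):
        return "rejected: bad signature"
    if payer not in accounts or recipient not in accounts:
        return "rejected: unknown certificate"
    if accounts[payer] < amount:
        return "rejected: insufficient funds"
    accounts[payer] -= amount                          # debit the payer
    accounts[recipient] += amount                      # credit the recipient
    return f"confirmed: {amount:.4f} EUR paid to {recipient}"

msg = {"from": "payer-cert", "to": "site-cert", "amount": 0.0001}  # a ten-thousandth of a euro
print(pay(msg, "signed-by:payer-cert"))
```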
Micropayment differs from existing online payment services such as PayPal and e-gold in that transaction costs are sufficiently low that extremely small payments can be made without incurring exorbitant processing fees; with micropayment it will be entirely practical for Web sites to charge visitors a ten-thousandth of a Euro to view a page; credit cards or existing online payment services have far too high overhead to permit such minuscule payments. Note that there need be no upper limit on payments made through micropayment exchanges, and hence “micropayment” simply implies that tiny payments are possible, not that larger payments aren’t routinely made as well. The first broadly successful micropayment exchange is likely to be technology driven, but as micropayments become a mass market and begin to encroach on other payment facilities, pioneers in the market are likely to be acquired by major players in the financial services industry.
No more e-commerce paranoia … when you do business with vendors with certificates you consider trustworthy, you needn’t enter any sensitive personal information. Just click “buy”, select which of the credit cards or bank accounts linked to your certificate you wish to pay with (the number itself is never disclosed), and your purchase will be shipped to the specified address linked to your certificate. Even if your certificate is stolen, a thief can only order stuff to be shipped to you.
Each user can set their own personal default maximum price per page, per item purchased, per session, per day, per week, and per month. I call this their “threshold of paying.” No need to subscribe to a magazine’s site to read an article—just click on it and, if it costs less than your €0.05 per-item threshold and all of the other totals are within limits, up it pops—your account is debited and the magazine’s is credited. If you’re a subscriber, your certificate identifies you as one and you pay nothing … and all of this happens in an instant without your needing to do anything. The magazine gets paid for what you read, so they’ll put their entire content online, not just a teaser to induce you to subscribe to the printed edition. And if you like what you read, you’ll return and spend more money there.
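The client-side check is simple enough to sketch in a few lines; the limit names and amounts below are, of course, hypothetical illustrations of the “threshold of paying”.

```python
# Sketch of a client-side "threshold of paying" check (limits and amounts are hypothetical).

limits = {"per_item": 0.05, "per_day": 1.00}   # euros
spent_today = 0.0

def authorise(price):
    """Pay automatically if the price is within every limit; otherwise ask the user."""
    global spent_today
    if price <= limits["per_item"] and spent_today + price <= limits["per_day"]:
        spent_today += price
        return "paid automatically"
    return "ask the user to confirm"

print(authorise(0.001))   # a cheap article: debited without any interaction
print(authorise(0.50))    # above the per-item threshold: confirmation required
```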
Want to start your own magazine? Decided your blog is worth €0.001 per day to read? No problem … tag it with your certificate, set up a “pay to read” link to it, and listen to the millieuros tinkle into your virtual cookie jar.
Certified micropayment exchanges will, of course, be required to comply with “know your customer” and disclosure regulations, adhere to international conventions against money laundering, terrorism, and drug trafficking, and disclose transactions to the fiscal authorities of the jurisdiction of the buyer and seller for purposes of tax assessment. This will largely put an end to the use of the Internet for financial crimes and eliminate the need for further regulations or constraints on Internet commerce.
Micropayment provides a new business model to support Internet sites which attract large numbers of visitors but which have so far failed to fund themselves with subscriber or advertiser models. Micropayment permits a site to make access available to whoever chooses to visit the site on a per-page basis (or, as discussed below, even for excerpts from pages). There is no need for a user to open an account or establish a commercial relationship with the site. As long as the per-page fee is less than the individual’s threshold of paying, the per-page charge is debited automatically from the user’s account and credited to the site’s.
There’s no question that if many present-day sites started to charge, say, €0.001 per page, their traffic would collapse. But what about the sites you read every day? Is it worth a tenth of a centime per page? Have you compared what you’d pay for pages with what you’re paying now for access to the Internet?
The emergence of Weblogs (“blogs”) and other forms of independent Internet journalism has raised a variety of issues regarding free use of copyright protected material. To what extent may a blog excerpt a document published on the Web (with or without a link to the original source)? Is it permissible for a Web document on one site to link directly to a document deep within another site’s archives, potentially bypassing advertisements on the site’s main page which fund its operation?
Micropayment provides solutions for many of these problems. As envisioned by Ted Nelson almost 40 years ago in his original exposition of Xanadu, the problem with copyright isn’t the concept but rather its granularity. (I’d add, in the present day, the absurd notion that copyright should be eternal, but that’s another debate for a different document.) Once micropayment becomes as universal as E-mail, a blog will simply quote content from a Web site using an “excerpt url” (I’ll leave the design as an exercise for the reader) or provide a link to the entire document. Readers of the blog will, if the excerpt is below their threshold of paying (and the total of all excerpts in the blog is also below the threshold), see it automatically. Otherwise, they’ll have to click on an icon to fetch it, approving the payment, before it is displayed. Similarly, when following a link to a document licensed under one of the Digital Rights Management (see below) terms of use, you’ll automatically pay the fee and see the document unless it exceeds your threshold, in which case you’ll have to confirm before retrieving it.
Micropayment will greatly facilitate the deployment of wireless Internet access (Wi-Fi and its descendants). Wireless access today has an unsettled business model; some coffee shops and bookstores provide free access to their clients (and, constrained by Maxwell’s equations, those in the parking lot outside) as an added value, while hotels, airline lounges, and soon long-distance flights en route provide access for a fee. With micropayment, your wireless network interface will simply listen for bids of access and choose based on bandwidth and cost, normally accepting the best offer below the cost threshold you set. If it’s higher than your threshold, or there’s an extreme tradeoff between cost and performance, you may be asked to choose, but usually you’ll just light up your laptop, wait a few seconds, and you’re online. No mess, no fuss, and it’s guaranteed to cost less than your “threshold of paying”.
According to folklore, Michael Faraday, who discovered the principle of electromagnetic induction in the 1830’s, was asked by a British politician to what conceivable use electricity might be put. Faraday replied, “Sir, I do not know what it is good for. But of one thing I am quite certain—someday you will tax it.” This quotation is, in all likelihood, a myth, but nonetheless there is truth therein applicable to our times. For electricity, a laboratory curiosity in Faraday’s time, was eventually taxed and, in many unfortunate jurisdictions, made a government monopoly or regulated to such an extent it was indistinguishable from one, inevitably becoming scarce, expensive, and unreliable.
Like electricity, the Internet will eventually be taxed. As long as there are governments, this is inescapable. While taxation is never without pain, micropayment can at least eliminate most of the bookkeeping headaches for both merchants and customers, with taxes due for Internet use and commerce collected automatically and remitted electronically to the jurisdictions to which they are owed.
Microsoft also warned today that the era of “open computing,” the free exchange of digital information that has defined the personal computer industry, is ending.
“Microsoft Tries to Explain What Its .Net Plans Are About” by John Markoff, The New York Times, July 24, 2002
Digital Rights Management (drm) is the current buzzword for the technological enforcement of intellectual property rights in digital media.
drm will implement several categories of right to use content, some of which have no direct analogues in traditional publishing.
Pay Per Copy. This is the traditional model of books, recorded music, videos, and shrink-wrapped software. You pay a fee for a copy and usually assent to an implicit license not to copy and redistribute it. However, there is no technological prohibition against your doing so and, in some cases, your purchase entitles you to lend the original document to others without paying additional fees to the publisher.
Pay Per Instance. This is a phrase I’ve coined to denote the concept of a document sold to a given individual which is either not transferable or, if so, cannot be used to create additional copies. When you purchase a pay per instance document, it’s “bound” to your personal certificate and possibly that of the computer on which you intend to view it. If you copy the document you’ve downloaded (assuming your Trusted Computing platform even permits this) to somebody else’s system, they won’t be able to read it because they don’t have your certificate. Giving them your certificate is equivalent to handing them copies of all of your credit cards and identity documents … unlikely. If the document is, in addition, bound to a given computer system, you can read it on that system but, in order to transfer it to another (for example, from your desktop computer at home to your pda when going on holiday), you’ll need to perform a transfer which will render it readable on the pda but no longer on the desktop. You can always, upon your return, transfer it back in the other direction.
Pay per instance also permits (publisher permitting) transfers similar to lending a printed book to a friend. Suppose you’ve downloaded a book to your computer, read it, and now wish to send it to your daughter at college. No problem—just re-encode the book with her public certificate and E-mail it to her. Of course, once you’ve done that, you won’t be able to read the book any more on your own system. There may be a small fee associated with passing on the book but, hey, micropayment makes it painless and you’d probably have to pay a lot more to mail a printed book anyway. Publishers can sell library editions, perhaps at a premium, which can be transferred any number of times but, just like a book, the library can’t check out a volume to another person until a borrowed copy is returned.
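One plausible way to implement such a transfer is to encrypt the book once with a symmetric content key and “bind” it to a reader by wrapping that key with the reader’s public certificate; passing the book on then means re-wrapping the key for the recipient and discarding your own wrapped copy. The following is a sketch only, with RSA key pairs standing in for personal certificates; it is not a description of any particular DRM system.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Two people, each with a key pair standing in for their personal certificates.
you = rsa.generate_private_key(public_exponent=65537, key_size=2048)
daughter = rsa.generate_private_key(public_exponent=65537, key_size=2048)

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# The book is encrypted once with a symmetric content key; the content key is
# then "bound" to a certificate by wrapping it with that certificate's public key.
content_key = Fernet.generate_key()
book = Fernet(content_key).encrypt(b"Call me Ishmael...")
wrapped_for_you = you.public_key().encrypt(content_key, oaep)

# Passing the book on: unwrap with your private key, re-wrap for the recipient,
# then discard your wrapped copy (so you can no longer open the book yourself).
key = you.decrypt(wrapped_for_you, oaep)
wrapped_for_daughter = daughter.public_key().encrypt(key, oaep)
del wrapped_for_you

# The recipient can now read it; nobody without the matching private key can.
print(Fernet(daughter.decrypt(wrapped_for_daughter, oaep)).decrypt(book))
```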
Pay Per Installation is similar to Pay Per Instance, except the content is bound to the certificate of the computer on which it’s installed, as opposed to the personal certificate of an individual. Any person who uses that computer is authorised to access content bound to its certificate, but such content cannot be used on a different computer. This category will primarily be used by commercial software installed on a computer. Pre-installed software will, of course, already be bound to the computer’s factory-installed certificate. When you purchase software, whether off the shelf or by downloading from the Internet, you will receive a copy which, before it can be used, must be activated online, which will bind it to the certificate of the machine on which it is installed. The purchase of a copy of the software will usually entitle the customer to a single activation; additional licenses for other computers may, of course, be bought as needed.
Just as with Pay Per Instance, the publisher of a Pay Per Installation product may permit you to transfer the product to a different computer. If, for example, you replace your old clunker with a TurboWhiz 40 GHz box, you may be able to move your existing programs to it, going through an activation procedure which will render them unusable on the old machine and bound to the new one. Or, on the other hand, the publisher may not permit this; it’s up to the specific terms of the license.
Pay Per View. This is how movies worked when I was a kid. If you wanted to see the movie, you went to the box office, plunked down your fifty cents (I was a kid a long time ago), and received a ticket which entitled you to see the movie (plus the newsreel, the cartoon, etc.) once. When the show was over they turned on the lights and chased everybody out. If you just had to see it again … another four bits, thank you very much. This is the golden age media barons dream of while sleeping off the diverse intoxicants they’ve ingested at sybaritic Hollywood parties.
As with Pay Per Instance, the content you download is bound to your personal certificate or that of your computer but, in addition, it’s limited to being played a maximum number of times, for instance, once. Now, instead of struggling to find a song on a music sharing service under constant attack by music moguls, you can simply visit your favourite online music store, find the song that’s been going through your head for the last few hours, download it for a small fee and listen to it … once. If, having listened to it, you’d like to play it over and over or put it on a cd for your own use, pay a little more and buy a Pay Per Instance copy. No more need to buy an album to get one or two hit singles—of course the singles cost more than the filler. And no, you can’t give a copy of the cd you made to your friends, since the songs on it are bound to your certificate and machine. You can make as many copies as you like of your “killer tracks” cd and give them to your friends or sell them on the Net, but everybody who receives one will have to pay the license fee for each track in order to obtain the right to play it.
Note that pay per view has applications outside traditional entertainment media; evaluation copies of software can be licensed to a user for a maximum number of trial runs, after which the user must either purchase a license permitting unlimited use, or some number of additional runs. Software vendors offering evaluation copies on this basis are protected, since they record the user’s certificate when issuing evaluation copies, and refuse to issue more than one evaluation copy to any user. This application of pay per view to software closes the loopholes which have made shareware a difficult business model.
Earlier attempts to protect intellectual property in the digital age have sparked an arms race between copyright owners and those who wish to freely copy protected works. There are reasons to believe a comprehensive implementation of Digital Rights Management on a Trusted Computing platform will be a much tougher nut to crack, evolving in time toward effectively complete security (defined as the point at which losses due to copying are negligible compared to the cost to reduce them further), much as has happened with digital satellite television broadcasting. In the United States, the Digital Millennium Copyright Act, enacted in 1998, criminalises reverse engineering and circumvention of copyright protection mechanisms, and has been interpreted as applying to even the dissemination of information regarding the design and implementation of copy protection technologies. Given the political consensus which enacted this law, the stakes involved for media companies, and the investment now being made in Digital Rights Management technologies by computer hardware and software vendors, there is every reason to expect the near-term deployment of a highly secure system implementing all the varieties of right to use described above, which will not be widely circumvented.
Once Trusted Computing platforms are in place which protect intellectual property rights, this security can be extended to the Internet itself. The arpanet, precursor of the Internet, was designed to explore highly fault-tolerant networks for military communications. In such networks, all communications links could be secured and the identity of all nodes on the network was known. In today’s global, open access Internet neither of these conditions obtains, and many of the perceived problems of the present-day Internet are their direct consequences.
Tomorrow’s Secure Internet will be implemented in Trusted Computing platforms, in conjunction with Internet Service Providers and backbone carriers. Today, any computer on the Internet can connect to any other connected computer, sending any kind of packet defined by Internet protocols. This architecture means that any system on the Internet, once found vulnerable to some kind of attack, can be targeted by hundreds of millions of computers around the world and, once compromised, be enlisted to attack yet other machines.
The Secure Internet will change all of this. Secure Internet clients will reject all connections from machines whose certificates are unknown (this will be configurable by service; a user may decide to receive mail from people whose certificates aren’t known to them, but choosing otherwise will block all junk mail—it’s up to the user). On the Secure Internet, every request will be labeled with the user and machine certificates of the requester, and these will be available to the destination site. There will be no need to validate login and password, as the Secure Internet will validate identity, and, if registered, a micropayment account will cover access charges and online purchases. Internet Service Providers will maintain logs of accesses which will be made available to law enforcement authorities pursuant to a court order in cases where the Internet is used in the commission of a crime.
In addition, the Secure Internet will protect the intellectual property of everybody connected to it. Consumers will be able to download any documents on the terms defined by their publishers, which will be enforced by Digital Rights Management. Publishers will serve documents, each identified by a certificate which identifies its publisher and its terms of use, and includes a signature which permits verification that the document has not been corrupted subsequent to publication.
The technological precursors discussed above provide the foundation for the Secure Internet. A typical individual Internet user visiting Web sites, performing searches, buying products and services online, sending and receiving E-mail and instant messages, participating in chat rooms, news groups, discussion boards, and online auctions will notice little change from the present-day Internet except, perhaps, fewer of the irritations which currently detract from these activities. But the Secure Internet will be a very different kind of place, due to fundamental changes in the way those connected to it interact. This section discusses each of these changes in detail. The following section will sketch the consequences for various kinds of activity on the Internet once they have all been implemented.
Many of the problems of the present-day Internet, which engender numerous, mostly ill-considered proposals for legal remedies, are due to the fundamental lack of accountability on the Internet. The Internet, as presently implemented, affords its users a rather high degree of anonymity which permits them, if so inclined, to engage in various kinds of mischief with relative impunity.
Providing, or rather restoring, accountability to the Internet is the key technological foundation for fixing a large majority of its current problems. The present-day anonymity of the Internet wasn’t designed in—it is largely an accident of how the Internet evolved in the 1990’s; see Appendix 1 for details.
Let us explore how accountability will be restored to the Internet.
The first step in restoring accountability to the Internet will be the introduction of the Internet User Certificate. This certificate, without which no packets will be transferred across the Internet, uniquely identifies the person (individual or legal entity) responsible for sending them. The best analogy to this certificate is not a telephone number, but rather the call sign with which radio and television stations, including amateur radio operators, identify their transmissions. The Internet User Certificate is simply the credential which uniquely identifies the person responsible for sending a packet across the Internet.
Compared to contemporary Internet access accounts, access by certificate has gravitas. First of all, one may expect that, given the legal ramifications which certificates will have, sanctions against obtaining or using a certificate under false pretenses will be akin to those for obtaining a passport with forged credentials or presenting a forged driver’s license to a policeman in a traffic stop. Accessing the Internet with a false certificate is equivalent to driving on public highways with a bogus number plate on your vehicle or crossing a border with a fake passport and will be subject to comparable penalties.
When you connect to the Secure Internet, your certificate will be transmitted to the point of access, which will then validate your certificate. If its issuing authority fails to confirm its validity, or it has been revoked by its owner due to a compromise, or has been blocked pursuant to a court order, access will be denied. Once your certificate is validated, you’ll be granted full Internet access, precisely as at present. Your certificate will be logged along with the connections you make and furnished, on demand, to all sites to which you connect. This will make e-commerce painless and secure. Once you’ve registered with a merchant, all subsequent communications are secured with your certificate. You needn’t memorise a user name and password for each site, nor worry about a merchant’s site being compromised threatening your security. As long as you protect your certificate as you would your wallet or credit cards, you’re secure and, in the worst case, s