Sunday, January 25, 2004

Internet Voting, Safely

[Note - this is an archived version of the original posting from 05:41 PM EST, Jan 25 2004]

Recently there has been publicity about a report critical of a proposed internet voting experiment. The voting system, called SERVE, was designed to allow overseas military personnel to vote absentee via the internet. The authors were four members of the SPRG (Security Peer Review Group), a ten-member panel of experts in computerized election security that was called upon to review the SERVE project. The remaining members of the panel have not issued a public report on their opinion of the system.

Generally I agree with the concerns raised by the report. At this point, the security of typical PCs running Windows software is too weak to let them be a foundation for something as important as voting. It is extremely difficult to secure a Windows system to be immune to worms, viruses and other malware, and a voting application would be a significant target for malicious software creators.

At the same time, it is important not to read too much into the published criticisms. Most of the points made are rather specific to the SERVE system itself, and to the nature of the internet and PC security today. But any large-scale implementation of voting will occur only after several years of development and testing. During that time, we can expect continued improvement in technology, and especially in security technology. Many of these improvements are on the drawing boards now, and more will be developed.

As recently as a few years ago, security was largely a non-issue on the net. It is only within the last two or three years that the problems have escalated to the point that major efforts are being made to improve the security situation. As with spam, which has become a serious issue only within the past year, it will take time for the net to react, but reactions will come.

It is a worthwhile exercise, then, to look at internet voting in the context of the security improvements that we can expect in the next five to ten years. Are the problems with internet voting, as the report claims, "fundamental in the architecture of the Internet and of the PC hardware and software that is ubiquitous today"? And is it the case that the security problems "cannot all be eliminated for the foreseeable future without some unforeseen radical breakthrough"? While no technology can eliminate "all" problems, I believe that the improved security capabilities that will become available over the next few years could eliminate many of the difficulties with internet based voting.

One of the most anticipated new security technologies expected to become available is Microsoft's Next Generation Secure Computing Base (NGSCB), aka Palladium. This is an implementation of the Trusted Computing (TC) concept, promoted by the Trusted Computing Group (originally TCPA). In Microsoft's vision, Trusted Computing consists of the following technologies:

Process isolation: Running without being altered or interfered with by other programs or the user himself

Sealed storage: Storing data in encrypted form such that no other program can decrypt it

Secure user I/O: Displaying data and receiving input from the user without it being altered or inspected by other software

Remote attestation: Being able to prove to a remote system precisely what local program is running

Let us examine how these properties can improve the prospects for secure internet voting applications.

First, process isolation will address the most serious problem with internet voting today, the tremendous insecurity and vulnerability of most PCs to software attacks. Since a PC used for voting must be hooked up to the net, it is inherently exposed to these attacks. Current technologies, including firewalls and anti-virus software, can help but generally are inadequate.

Process isolation will allow a voting client program to run without being affected by other malicious software on the machine. This technology sets aside a special memory region which cannot be touched by other software, where the voting program can load and store its data. Even if the user's PC is full of viruses, the voting program will still be able to run without being tampered with.

Sealed storage will add security by allowing the user to store his voting credentials and other sensitive data in such a way that they cannot be stolen by infected software on the computer. This technology could even allow for online voter registration (prior to voting) in such a way that crucial registration information was kept sealed and inaccessible even to the voter himself. This would prevent sharing credentials even with the cooperation of the voter, eliminating one possible form of vote selling.
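
To make the idea a bit more concrete, here is a minimal sketch, in Python, of how sealed storage looks from the application's point of view. This is not the actual NGSCB or TPM interface; the platform secret, the program measurement and every name below are hypothetical stand-ins for functionality the hardware would supply. The point is only that the decryption key is derived from the identity of the running program together with a secret locked inside the platform, so no other program, and no user acting outside the voting client, can recover the sealed credentials.

    # A conceptual sketch of sealed storage, NOT the real NGSCB/TPM API.
    # Assumption: the platform supplies a machine-bound secret plus a
    # measurement (hash) of the program requesting the seal; everything
    # below is a hypothetical stand-in for that hardware functionality.

    import base64
    import hashlib
    from cryptography.fernet import Fernet  # pip install cryptography

    def derive_sealing_key(platform_secret: bytes, program_measurement: bytes) -> bytes:
        """Derive a key that only this program, on this platform, can re-derive."""
        digest = hashlib.sha256(platform_secret + program_measurement).digest()
        return base64.urlsafe_b64encode(digest)  # Fernet expects a urlsafe-b64 32-byte key

    def seal(data: bytes, platform_secret: bytes, program_measurement: bytes) -> bytes:
        return Fernet(derive_sealing_key(platform_secret, program_measurement)).encrypt(data)

    def unseal(blob: bytes, platform_secret: bytes, program_measurement: bytes) -> bytes:
        # Raises InvalidToken if the program measurement or the platform differs.
        return Fernet(derive_sealing_key(platform_secret, program_measurement)).decrypt(blob)

    if __name__ == "__main__":
        platform_secret = b"secret held by this machine's TC hardware"   # hypothetical
        voting_client = hashlib.sha256(b"voting client binary v1.0").digest()
        other_program = hashlib.sha256(b"some other program").digest()

        blob = seal(b"voter registration credential", platform_secret, voting_client)
        print(unseal(blob, platform_secret, voting_client))    # succeeds
        try:
            unseal(blob, platform_secret, other_program)        # a different program cannot unseal
        except Exception as exc:
            print("unseal refused:", type(exc).__name__)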

Secure user I/O will add a crucial element of security for voting. It will prevent malicious software from pretending to be the voter and submitting votes on his behalf through spoofing input; and it will protect privacy by keeping other software from being able to see what is displayed on the screen as the voter makes his choices. This latter feature will close off another avenue for vote selling.

Finally, remote attestation is the most important feature for the voting application. This is what allows the central voting server to authenticate that each user is running a valid copy of the voting client software; that the client software is running on a computer with the TC enhancements; and that the client software has not been infected, modified or otherwise tampered with. This provides the "root of trust" for the voting system, a foundation for establishing all of the other security features listed above.
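
Again as a rough sketch only (the real NGSCB attestation protocol had not been published in final form at the time of writing, so all the names here are hypothetical), the server side of that check might look something like this: the platform signs a hash of the running client together with a fresh challenge from the server, and the server accepts only measurements on its list of known-good voting client builds.

    # A conceptual sketch of remote attestation, NOT the real NGSCB protocol.
    # Assumptions: each TC platform holds a signing key whose public half the
    # election server can verify out of band; a "measurement" is just a hash
    # of the client program. All names here are hypothetical.

    import hashlib
    import os
    from cryptography.hazmat.primitives.asymmetric import ed25519  # pip install cryptography

    KNOWN_GOOD_CLIENTS = {
        hashlib.sha256(b"voting client binary v1.0").hexdigest(),
    }

    # --- platform (voter's PC) side ---------------------------------------
    platform_key = ed25519.Ed25519PrivateKey.generate()   # stands in for the TC chip's key
    platform_pub = platform_key.public_key()               # server learns this out of band

    def attest(client_binary, nonce):
        measurement = hashlib.sha256(client_binary).hexdigest()
        signature = platform_key.sign(measurement.encode() + nonce)
        return measurement, signature

    # --- election server side ----------------------------------------------
    def verify_attestation(measurement, signature, nonce):
        if measurement not in KNOWN_GOOD_CLIENTS:
            return False                                    # unknown or modified client
        try:
            platform_pub.verify(signature, measurement.encode() + nonce)
            return True
        except Exception:
            return False                                    # forged or replayed attestation

    if __name__ == "__main__":
        nonce = os.urandom(16)                              # fresh challenge per session
        m, sig = attest(b"voting client binary v1.0", nonce)
        print(verify_attestation(m, sig, nonce))            # True
        m2, sig2 = attest(b"voting client binary v1.0 + spyware", nonce)
        print(verify_attestation(m2, sig2, nonce))          # False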

Putting these features together, Trusted Computing provides a secure environment on end-user PCs where voting client software can run. It allows for secure distribution of the client software, as any modifications to it can be detected during the remote attestation phase. It protects the user's privacy and prevents vote alteration, substitution or spoofing. And it greatly reduces the problems with vote selling.

In addition, these features can address another class of problems described in the report, the use of computers belonging to other owners for voting. In many cases people are expected to vote on computers controlled by employers, local governments or other institutions, because they may not have their own systems or they may not have internet connections. The report describes problems with this arrangement including lack of privacy and even vote alteration, as the owner of the computer has complete control over all that happens on it.

One of the most controversial features of Trusted Computing attacks exactly this problem, by allowing the owner to (voluntarily!) give up control over some software on his computer. The TC features listed above provide protection against not just remote and local software attacks, but also interference by the owner of the computer himself. Not even the owner can bypass the process isolation, or unseal stored data, hack into the I/O paths, or produce a false attestation about the nature of the software that is running. This will provide protection to voters who use computers owned by other parties and eliminate a large class of attacks listed in the report.

Recently, some observers, most notably the Electronic Frontier Foundation, have proposed that the remote attestation feature should be overridable by the owner. He would, in effect, be able to get his TC system to lie about what software is running. While this might seem to be an improvement in the system by giving users more control, it actually eliminates some of the most important security features.

In this particular case, for example, it would mean that voting users would no longer have any assurance that the system they were using was running legitimate software. The owner could have loaded it up with spyware and worse. The well-meaning attempts by the EFF to soften the security of TC would actually eliminate an important user protection in this and other applications. The bottom line is that there are cases where it is useful to the owner of a system to be able to publicly renounce the ability to control certain software applications. By proposing to take this ability away from him, the EFF is actually diminishing rather than enhancing the user's available choices.

So far we have focused mainly on the voting client software, which will be running on a large number of relatively insecure machines in homes and businesses. However, the same considerations can be used to improve the security of the server software as well. Even if the net as a whole cannot be made secure, and legacy software remains vulnerable because of the complexity imposed by retaining compatibility with decades-old software, TC will allow new programs to be created and run in a new, clean environment, free from interference by other programs. Voting servers can rely on TC technology to protect their applications from a wide class of attacks.

One final comment is with regard to the problem with Denial of Service (DoS) attacks. Today it is relatively easy for an attacker to take control of hundreds or even thousands of poorly protected PCs on the net. At his command, these systems can send a flood of requests at some server, overwhelming it and preventing legitimate connections from getting through. These DoS attacks have been a nagging problem on the net for the past few years, and the report worries about the implications if such a shutdown attack occurs during a vote.

However it is likely that within a few years there will be widespread deployment of DoS-resistant features within the net. One technique attackers use to make DoS more effective is to have the compromised computers lie about their internet addresses when they send out requests. This makes it harder to trace where the attacks come from and shut them down. However an effective countermeasure is known, often called ingress filtering: internet providers refuse to forward packets from their customers that carry source addresses not belonging to those customers. These countermeasures are likely to be widespread within a few years, and the problems with DoS attacks correspondingly diminished.
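
A minimal sketch of that ingress filtering check (standardized as RFC 2827, also known as BCP 38), with made-up link names and address prefixes:

    # Minimal sketch of source-address validation (ingress filtering, RFC 2827),
    # the anti-spoofing countermeasure described above. Link names and prefixes
    # are made up for illustration.

    import ipaddress

    # Source prefixes each customer link is allowed to use.
    CUSTOMER_PREFIXES = {
        "link-17": [ipaddress.ip_network("198.51.100.0/24")],
        "link-42": [ipaddress.ip_network("203.0.113.0/25")],
    }

    def accept_packet(link: str, source_ip: str) -> bool:
        """Forward the packet only if its source address belongs to the customer's prefix."""
        addr = ipaddress.ip_address(source_ip)
        return any(addr in prefix for prefix in CUSTOMER_PREFIXES.get(link, []))

    if __name__ == "__main__":
        print(accept_packet("link-17", "198.51.100.23"))   # True: legitimate source
        print(accept_packet("link-17", "192.0.2.99"))      # False: spoofed source, dropped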

Summing up, if we take a snapshot of the net today, we see it just beginning to awaken to problems of security which have been festering for years but have now escalated to urgent dimensions. Security professionals are experts on the current state of the art, but focusing too narrowly on present conditions produces only nearsighted prescriptions. Looking ahead a few years to security enhancements which are already being worked on and implemented, the situation changes drastically.

There are no reasons I can see why internet voting will be fundamentally unsafe or undesirable once these new security features are widely available. While internet voting is not something we should rush into, at the same time we should not close our eyes to the inevitability of society continuing to exploit the tremendous information capabilities of the net in new and challenging ways. The security pendulum is swinging, and in a few years the problems which seem so overwhelming today will have solutions which make them tractable and manageable.

Trusted Computing is an important part of the security equation of the future. Any analysis of a security application which fails to consider the impact of TC will be inherently incomplete and soon obsolete. I encourage security professionals to familiarize themselves with Trusted Computing technology and to use this new toolkit as an integral part of their analyses.

Tuesday, October 21, 2003

Palladium versus the Broadcast Flag

[Note - this is an archived version of the original posting from 02:00 AM EDT, Oct 21 2003]

[This is a slightly edited version of a message posted to the cypherpunks list today. It is in response to reports that the FCC will soon be imposing a mandate for enforcing a Broadcast Flag to limit copying of HDTV programs, and comparisons with Trusted Computing technologies such as Microsoft's Palladium (aka NGSCB).]

There are fundamental differences between Palladium and the Broadcast Flag.

Palladium works in an inherently voluntary way. There is no benefit to the content providers in mandating Palladium! Rather, they simply make their content available only to end users who own Palladium systems and run approved software. Once Palladium is built into the standard, widely used Windows operating system (the Longhorn release projected for 2005), there will be a huge market of people who have the requisite hardware.

At that point it will be no different for the end user from the situation today with Apple's iTunes music store, which makes its music available only to people who run the iTunes software. Apple is claiming one million downloads of its software in only a few days. All this is completely voluntary, and the same will be true of software based on Palladium.

The Broadcast Flag is another matter entirely. The distributors of that content are not limiting it only to people who have voluntarily agreed to honor the restrictions. Rather, they provide it to everyone, transmitting their radio waves through walls and rooms whether anyone wants it or not. This means that people get the content who have not agreed to the restrictions, and no voluntary protection scheme can work.

Therefore the content industry is getting the government to use its coercive power to force everyone to obey the limitations specified by the BF. Basically, we are being forced to honor the BF at the point of a gun.

Therefore, the BF is fundamentally evil, and Palladium is fundamentally good. The BF can only work via coercion and the threat of force, while Palladium can work in a voluntary, cooperative and peaceful society. The only proper response for someone who is against coercion is to oppose the BF, and to support Palladium.

Monday, October 06, 2003

EFF Report on Trusted Computing

[Note - this is an archived version of the original posting from 10:34 PM EDT, Oct 06 2003]

[Permission is granted to repost this document in its entirety, without other limitation.]

The EFF has published a report on the "Promise and Risk" of Trusted Computing at http://www.eff.org/Infra/trusted_computing/20031001_tc.php. See also http://www.eff.org/Infra/trusted_computing/ for ongoing coverage of TC issues.

The EFF is to be congratulated for taking the time to study the many issues revolving around TC and come to a relatively balanced and nuanced position. Staff Technologist Seth Schoen, said to be the principal author of the new report, provided some of the best early information about Palladium on his blog at http://vitanuova.loyalty.org/2002-07-05.html and similar postings, which were refreshingly objective and free of the almost obligatory anti-Microsoft bias of other analyses from so-called online rights activists.

Nevertheless, the EFF report has a number of shortcomings which deserve discussion. The EFF tries to distinguish between "good" and "bad" aspects of TC, but it does not draw the line in quite the right place, even given its somewhat questionable assumptions. It fails to sufficiently emphasize the many positive uses of the full version of TC (and hence the costs of blocking its implementation), and also misses some important negatives as well. And the recommended fix to TC is not clearly described and as written appears to be somewhat contradictory.

But let us begin with some positive elements of the EFF report. This is perhaps the first public, critical analysis of TC which does not repeat two of the worst lies about the technology, lies promulgated primarily by Ross Anderson and Lucky Green: that only authorized programs can run "trusted", and that unauthorized or illegal programs and data will be deleted from computers or prevented from running. The EFF appears to recognize the key feature of TC, which gives it its name: that trust is in the eye of the truster. Anyone can create code which benefits from TC features, and it is up to the user of a computer to decide which local and remote software he will trust.

The report also forthrightly rejects the claim that TC technology is some kind of trick to defeat Linux or lock-in computers to Microsoft operating systems, and debunks the lie put forth by Lucky Green that TC will insert spyware into your computer.

By choosing to emphasize the truth rather than lies on these important points, the EFF gains credibility at the expense of opening itself to charges by extremists that it is in bed with Microsoft or is promoting "evil" technology. Those of us who have argued in the past for balanced analyses of TC are well aware of the speed with which opponents resort to name-calling and personal attacks, and it is a credit to the EFF that they have taken a courageous position which departs from the conventional wisdom in the online rights community.

Despite these positives, as noted above the report has some weaknesses which need to be addressed. The EFF attempts to distinguish one feature of TC, remote attestation, as a source of problems. This is the ability of a computer user to convince other systems about what software he is running. The EFF is convinced that this feature will cause users to be compelled to use software not of their choice; harm interoperability and encourage lock-in; and support DRM and various restrictive kinds of licensing.

But when we break these down in detail, many of the problems either go away or are not due to attestation. Software choice limitation may occur if a remote system provides some service conditional on the software being used to access it. But that's not really a limitation of choice, because the user could always elect not to receive the offered service.

The implicit assumption here seems to be that if TC did not exist, the service would be offered without any limitations. That makes it appear that TC adds limitations which are not currently present. But what this analysis overlooks is that TC will allow the creation of new services which are not economically possible today. By allowing for more protection of data, a whole host of new applications may become possible. So the proper comparison is not between a TC world and a hypothetical world offering all the same services without TC; it is between a TC world that is relatively rich in services and a service-poor non-TC world.

Turning to the issues of lock-in and interoperability, it is true that TC may allow software creators to lock their data to their applications and make it more difficult to create interoperable alternatives, thus promoting lock-in. The problem with the EFF analysis here is that it is not the remote attestation feature of TC which is the primary cause of this effect, but rather the sealed storage feature. It is sealed storage that allows data to be encrypted such that only one particular application can decrypt it, potentially making it impossible to switch to a different software package or access the data in an interoperable way.

The EFF attempts to say that sealed storage and other features of TC are good, because they clearly can increase the security features of your computer. Then they draw a line at remote attestation. But if it is lock-in and interoperability that worries them, sealed storage has to go as well. This inconsistency in the report undercuts its main conclusion.

And parenthetically, lock-in is not necessarily a bad thing, as long as people know about it in advance. When you go on vacation you know that you will only be able to eat at restaurants in the local area. You are locked in to local eateries. Everyone accepts this as part of the cost of the vacation. People can factor these kinds of lock-in costs into the overall package when they make decisions about what to buy, whether travel or software. In this sense, it's good for activists like the EFF to make people aware that TC may increase lock-in, but they should put the issue into perspective and not present it as a reason to abandon the technology. It's just a consideration to be aware of when buying any software that is TC-enabled.

Lastly, the EFF is worried that remote attestation enables DRM and other restrictive licensing practices. This is clearly true, although things are not quite as simple as they seem. Before wide-scale use of TC for DRM, it will be necessary for the manufacturers, software vendors and content providers to get past a few tiny details, like setting up a global, universal, widely trusted and secure PKI. Hopefully readers in these forums will understand that this is not exactly a trivial problem. Going from the basic technological definitions of TC to the massive infrastructure of keys and revocations needed for a secure, commercial DRM system and other licensing schemes is going to take quite a while.

But in any case, once it happens, again the report fails to paint a balanced picture, by emphasizing the negative aspects of the new kinds of licensing that TC will enable. It should be clear that a technology that allows new kinds of voluntary arrangements, without eliminating any old ones, cannot be entirely evil. TC only expands the space of possibilities, it does not stop anyone from doing things the old way.

If the new possibilities enabled by TC are truly so horrible for consumers, and if it is possible (as TC opponents implicitly assume) to provide these functionalities without the nightmarish limitations that the report is so afraid of, then some companies can still offer their goods under those more-favorable terms, and reap massive rewards as consumers triumphantly reject the horrific license terms of the TC-based software.

This report, like so many others, ignores the role of consumers in making decisions about what technologies to use. This is one area in which the EFF was unable to rise above the myopia shared by so many other analyses.

Ironically, given these oversights, the report also manages to miss some bad features of TC, features which have been discussed at some length on the cypherpunks and cryptography mailing lists. One of the biggest is the area of upgrades and system replacement. The TCPA (now TCG) proposal for handling upgrades is clearly unworkable, and Microsoft has said nothing about how they will do it. Any data which is locked to your computer is clearly at greater risk of being unrecoverable if your computer breaks. Until a bulletproof upgrade path exists, end users are going to be reluctant to embrace the promise of TC technology.

Another area not discussed is the risk to privacy implicit in using this technology on a global network. TCPA's solution, "privacy CAs", is another part of the spec that is obviously never going to work. Microsoft had made some noise about copying this at one point, and is now decidedly mute on the issue. It is an almost impossible problem to solve, and chances are that the companies will simply give up and let the system compromise user privacy. As a privacy-oriented watchdog group, the EFF has dropped the ball in failing to emphasize this point.

The final complaint about the report is that their solution doesn't seem to make sense. The basic idea is to allow the user to override the remote attestation feature so his system can lie about his software configuration. The apparent problem with this, as a number of commentators have pointed out, is that it undercuts the remote attestation feature and makes it useless. It is like "fixing" the limitations of cryptographic certificates by allowing anyone to forge them.

Doing this defeats the purpose of the feature so completely that you might as well not have it. It would seem to make more sense for the EFF to simply call for remote attestation to be removed from the TC concept than to try to come up with a complicated "owner override". And in fact it seems likely that remote attestation will be one of the last parts of the TC spec to be implemented due to the PKI problem noted above, so we will probably see TC installations initially without attestation support. It may be that remote attestation never becomes as popular as TC proponents hope and critics fear.

Now, perhaps there are some subtle aspects to the EFF proposal which would make attestation with owner overrides more useful than a version of TC without attestation at all. But to analyze that we'd need more detail about how exactly this owner override is supposed to work, and what attestation would still be used for in such a system. As it is, the proposal is frustratingly vague on these details.

Summing up, the EFF report manages to avoid the worst excesses of anti-TC rhetoric so common in the online rights community. By attempting to take a moderate course and identifying both promise and risk with TC technology, it does a service in setting a new standard of accuracy and civility in analyzing this important topic. However the report does have weaknesses, and its attempt to focus on problems with remote attestation misunderstands both economic realities and the technical details of which aspects of TC cause problems. By concentrating so narrowly on attestation, the EFF overlooks both important risks and promises of this new technology. And its proposed solution appears illogical on its face, requiring much more explanation and discussion for a fair evaluation.

Make no mistake about it: TC is coming. All the rhetoric, all the protests and objections, are doing nothing to alter the apparently unstoppable momentum of this new technology. Microsoft is committed to NGSCB (Palladium), and the TCG (TCPA) is working actively on specs for cell phones and other devices. There is even considerable work to bring TC into Linux.

What we need now is better understanding of both the risks and rewards of this technology, which will be here perhaps sooner than many of us expect. The EFF report is a good first step in this direction, but the problems need to be corrected. And rather than a futile and quixotic attempt to change the nature of TC, the EFF should focus on informing consumers about the pros and cons of the system, how it will affect their use of technology in years to come, questions to ask of vendors, and ways to protect their privacy and security. That is a hard enough task, and one truly in keeping with the EFF's goals and mission.

Sunday, October 05, 2003

State of the Art in Credential Systems

[Note - this is an archived version of the original posting from 10:15 PM EDT, Oct 05 2003]

Here is another message which was censored by the cryptography list moderator because it was posted anonymously. Surely it should be obvious that this kind of posting is exactly what the cryptography list was designed to supply to its subscribers. The moderator's action in unfairly and arbitrarily excluding postings from anonymous contributors is misguided and wrong. Subscribers to the cryptography list should demand an explanation of the moderator's policy with regard to anonymous messages.

===

"bear" writes:

> On Fri, 3 Oct 2003, John S. Denker wrote:
> >We need a practical system for anonymous/pseudonymous
> >credentials.  Can somebody tell us, what's the state of
> >the art?  What's currently deployed?  What's on the
> >drawing boards?
>
> The state of the art, AFAIK, is Chaum's credential system.

Nonsense! What an absurd statement. Nothing could be further from the truth. You, "bear", need to check your facts before posting. You have a habit of making superficial and incorrect comments.

Chaum's credentials are described in his paper with Evertse from Crypto 86, "A secure and privacy-protecting protocol for transmitting personal information between organizations". Contrary to "bear", there has indeed been some progress in the 17 years since.
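
For readers who have not seen the underlying machinery, here is a toy sketch in Python of the Chaum-style RSA blinding that these credential systems build upon. The parameters are deliberately tiny and insecure, and this shows only the blind-signature primitive, not the full Chaum-Evertse credential protocol.

    # Toy sketch of Chaum-style RSA blind signatures, the primitive underlying
    # the early credential constructions. Parameters are tiny and insecure on
    # purpose; this is NOT the full Chaum-Evertse credential protocol.

    from math import gcd

    # Issuer's toy RSA key.
    p, q = 1009, 1013
    n = p * q
    e = 65537
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    d = pow(e, -1, lam)                             # private exponent (Python 3.8+)

    # --- user blinds the message --------------------------------------------
    m = 123456 % n                # message (hashed/encoded in a real protocol)
    r = 54321                     # blinding factor, must be coprime to n
    assert gcd(r, n) == 1
    blinded = (m * pow(r, e, n)) % n                # the issuer sees only this

    # --- issuer signs without seeing m --------------------------------------
    blinded_sig = pow(blinded, d, n)

    # --- user unblinds --------------------------------------------------------
    sig = (blinded_sig * pow(r, -1, n)) % n

    # Anyone can verify the unblinded signature against the original message.
    print(pow(sig, e, n) == m)                      # True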

There have been two main lines of improvement since then. One is the work of Brands, best described in his book (and PhD thesis), "Rethinking Public Key Infrastructures and Digital Certificates". A few chapters are available on his web site at http://www.credentica.com/technology/book.html, and a summary of the technology is at http://www.credentica.com/technology/overview.pdf. Brands' credentials are highly efficient and compact, with many variations possible in terms of the protocols and mathematical representations. They support revealing boolean and some mathematical functions of credential values.

The other is the work of Camenisch and Lysyanskaya, based on group signatures. See http://www.zurich.ibm.com/~jca/publications.html, especially the papers from Eurocrypt 2001 and Crypto 2002. These credentials are quite flexible and work well in a decentralized, multi-issuer environment. They allow for both optional piercing of anonymity and for anonymity-preserving credential revocation, and can provide protection against credential sharing.

Unfortunately, the Chaum and Brands credentials are heavily patented, and Camenisch & Lysyanskaya have said (personal communication) that they will be seeking patents as well. Searching uspto.gov reveals one patent application by the pair, dated two weeks ago, and specific to some of the novel revocation features of their system.

"CyberInsecurity" on the wrong track

[Note - this is an archived version of the original posting from 05:01 PM EDT, Oct 05 2003]

The article below was submitted to the Cryptography mailing list, cryptography@metzdowd.com, archived at http://www.mail-archive.com/cryptography%40metzdowd.com/maillist.html. As with a series of articles posted anonymously, the moderator, Perry Metzger, refused to publish it. He is hiding this information from the subscribers to his list, apparently out of spite.

Now, in this case, Perry was one of the co-authors of the report which I criticize, so his emotional reaction may be understandable. Nevertheless on an important policy issue such as this one, it is important to allow all sides an opportunity to air their views.

I request my readers to write to Perry Metzger, perry@piermont.com, and ask him to allow anonymous messages to appear on the cryptography mailing list.

===

The CyberInsecurity essay is available at http://www.ccianet.org/papers/cyberinsecurity.pdf. A few comments:

Overall, this is a terrible analysis with a misguided solution which, if adopted, would only make things worse. It is shocking to see the well known figures who have allowed their names to be attached to this document. Apparently hatred of Microsoft runs so deep that people are unable to think critically when presented with an analysis that attacks the company. We saw the same thing with the absurd lies and exaggerations about Palladium last year.

> The threats to international security posed by Windows are significant,
> and must be addressed quickly. We discuss here in turn the problem in
> principle, Microsoft and its actions in relation to those principles, and
> the social and economic implications for risk management and policy. The
> points to be made are enumerated at the outset of each section, and
> then discussed.

Let's look at these three portions. The "problem in principle", according to the report, is the existence of a monoculture, which should be addressed by diversification. There are nonsense figures in here that claim to quantify the "power" of the net, using absurd, handwavey formulations like Metcalfe's Law or Reed's Law. (Reed's so-called Law is a joke, predicting that the Internet will be 228 quadrillion times more "powerful" in 10 years if the number of systems increases 50% per year!) This is not logic, this is not reason, it is just rhetoric.
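
For what it's worth, the 228 quadrillion figure is consistent with simply raising 2 to the ten-year growth factor in the number of systems; the following back-of-the-envelope is my reconstruction of that arithmetic, not a calculation taken from the report.

    # My reconstruction (a guess, not the report's own arithmetic) of where the
    # "228 quadrillion" figure comes from: the number of systems grows by a
    # factor of 1.5**10 over ten years, and raising 2 to that growth factor,
    # in the spirit of Reed's 2**N formula, gives roughly 2.28e17.

    growth_factor = 1.5 ** 10              # about 57.7 times as many systems
    power_multiplier = 2 ** growth_factor
    print(f"growth factor: {growth_factor:.1f}x")
    print(f"claimed 'power' multiplier: {power_multiplier:.3e}")  # ~2.285e+17, i.e. ~228 quadrillion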

But the fundamental problem with the analysis here, which is what makes the report's recommendation so misguided, is the claim that diversification will somehow solve the problem. In fact, diversification will make it worse, as a moment's thought should make clear.

Let's suppose that the government stepped in, and the kind, wise government bureaucrats we all know and love so well decided to aid disadvantaged operating systems. This affirmative action program is so effective that after many years, Microsoft has only a third of the market; Macs have another third; and Linux has most of the remaining third. Wow, the problem is solved, right?

Wrong. With the number of systems on the net growing rapidly, any realistic extrapolation leaves the number of Windows systems even larger than it is today. Hence we face at least as much exposure as at present, which the evidence has shown is more than enough to cause tremendous economic damage.
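
A quick illustration with made-up round numbers shows why the extrapolation works out this way: share can fall by two thirds while the absolute number of exposed Windows machines, which is what determines the size of the vulnerable population, still grows.

    # Hypothetical round numbers only, to illustrate the extrapolation argument:
    # market share can fall sharply while the absolute count of Windows machines,
    # which is what a worm cares about, still grows.

    share_today, total_today = 0.95, 600e6        # made-up figures for today
    share_future, total_future = 1 / 3, 2000e6    # made-up figures after years of net growth

    print(f"Windows systems today:  {share_today * total_today:,.0f}")
    print(f"Windows systems future: {share_future * total_future:,.0f}")
    # The second number is larger, so the exposure has not shrunk at all.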

And in fact, it is worse, because any flaws in the Mac or Linux OSs will now be just as dangerous as for Windows! What we will face is a situation where the *weakest* of the widely used OS's will determine the risk factor for the system as a whole.

This is not the kind of redundancy which reduces risk. There is no effective way that the presence of other architectures is going to prevent a virus or worm from being able to spread just as rapidly as today.

That error is the most fundamental in the report, but let's turn to their analysis of Microsoft's dominance, where again they have utterly missed the obvious truth.

The report claims that the reason for Microsoft's dominance in operating systems is what it calls application lock-in, which is a nasty way of saying that people prefer Windows because they want to use applications that are only available on that architecture. This part is obviously true. But the report then tries to link this to the claim that it is all due to Microsoft's strategy of tightly integrating applications and the operating system, which is absurd.

In the first place, many of the most popular applications which drive people to choose Windows aren't even from Microsoft. Games, business software, web utilities: there are thousands of popular programs which are only available on the Windows architecture. These programs aren't built into the OS; rather, the companies making this software have chosen Windows because it is popular, has good development tools, and in the early days was easier to write for (remember that up until a few years ago, the Mac lacked preemptive multitasking, and Linux wasn't even a blip on the radar).

In the second place, Microsoft does in fact make some of its most popular applications available on the Mac. Office and its predecessors, as well as IE, have been available for many years on that platform. These apps are not locked to the OS as the report claims.

And in the third place, the real reason why Microsoft preferentially supports Windows is not due to technical integration with the OS, but for the obvious economic reason that the Windows OS is made by the same company as Windows apps, so it makes sense for the latter to support the former. This fact is so utterly obvious that it is astonishing that the report manages to miss it.

> The natural strategy for a monopoly is user-level lock-in and Microsoft
> has adopted this strategy. Even if convenience and automaticity for the
> low-skill/no-skill user were formally evaluated to be a praiseworthy
> social benefit, there is no denying the latent costs of that social
> benefit: lock-in, complexity, and inherent risk.

Here the report manages to touch upon a particularly important point, but as usual misses its significance. The point is that Microsoft's security vulnerabilities are largely a result of making its software easy to use. But that is one of the main reasons it is so successful! Believe it or not, people like software that is usable and has the features they need. Delivering that usability is difficult and makes software more complex. By adopting this strategy, Microsoft has inevitably acquired security vulnerabilities over the years.

What the report misses, then, is that any other OS or company which adopts the same strategy is going to face the same problem. But companies are going to be forced to make their software easier to use and more complex in order to compete with Microsoft, even if the report's recommendations were adopted. This is going to add to the problem noted above, that the other OS's are going to have security vulnerabilities as well, once they are widely used.

What the authors appear to really want is to somehow change software development methodology so that security takes precedence over features. As a security professional who has worked for many years on consumer products, I am well aware of the tension that exists within corporations between these two competing goals. It is perhaps understandable that others in our field are trying to win this argument by government fiat. The authors are in effect saying that they know better than the end users what is important; that if customers prefer word processors that are functional over ones that are secure, their wishes should be overridden in order to make the programs more secure.

Even if we accept this argument (the morality of which is highly questionable), forcing Microsoft to port Office to Linux isn't going to do a single thing to accomplish it! As noted above, the only effect is going to be more pressure on the newly enfranchised OS's to become more like Microsoft in order to compete, that is, to add features and complexity. Ultimately, those are the preferences of the people buying the computers, and no amount of pontificating by the authors of this report is going to change those economic incentives.

Turning to the third section of the report, the authors contradict themselves by claiming that Microsoft will not change its habits, when at the end of the second section they had just listed several important changes. Microsoft's trustworthy computing initiative, its introduction of delays in product release in order to address security goals, and its work towards a secure computing base are all changes that indicate that Microsoft is taking a much more serious attitude towards security.

But rather than give the company a chance to see what it can do in terms of making its products more secure, the report proposes to force Microsoft to reorient its development efforts towards making Mac and Linux versions of all its software, as if that will solve anything:

> Microsoft should be required to support a long list of applications
> (Microsoft Office, Internet Explorer, plus their server applications and
> development tools) on a long list of platforms. Microsoft should either
> be forbidden to release Office for any one platform, like Windows,
> until it releases Linux and Mac OS X versions of the same tools that
> are widely considered to have feature parity, compatibility, and so forth.

The arrogance of this proposal is beyond belief. One of the most successful companies in the world, one which even the report admits has specialized in making software easy to use and meeting the needs and requirements of end users, is expected to reorient its development efforts and port its massive software base to a "long list" of platforms.

No consideration is given to the costs of this government-imposed mandate. No concern is expressed about the impact on end users who have come to appreciate Microsoft's increasingly functional applications. Ironically, no one even seems to realize that resources spent doing these ports may well detract from Microsoft's current efforts to refocus on security improvements! Forcing the company to change direction like this is likely to weaken security, not improve it.

The lack of any strong evidence that these drastic measures will improve the security of the net as a whole demonstrates that this is an ideological report rather than a technical one. Hand-waving about diversification does not answer the point.

Realistically, even if the net does become more diversified (which will probably happen, gradually and naturally, without Draconian government regulation), we are still going to have a relatively limited number of architectures that are popular. That's just the way markets work; there is only a limited amount of public attention to go around, and in most markets there are only a few companies which claim the majority of the market share.

The result is that we will have a system where, as pointed out above, not one but several architectures are each widespread enough to bring the net to its knees when an exploit is discovered. This network will only be as strong as its weakest link. Diversity, in this context, is a risk factor, not a risk mitigator.

In summary, this report is misguided and mistaken on so many levels that it is astonishing that such well respected figures were willing to put their names to it. The analysis is flawed or missing. The recommendations are harsh, extreme and premature. And ultimately their proposals will only serve to make the problem worse, not better.

Security of Linux versus Windows Web Servers

[Note - this is an archived version of the original posting from 05:00 PM EDT, Oct 05 2003]

This was a response to a message on the Cryptography mailing list, http://www.mail-archive.com/cryptography%40metzdowd.com/msg00936.html. In keeping with his recent policy, the moderator of that list, Perry Metzger, refused to publish this information, apparently because it came from an anonymous contributor.

Read the message below and see if it isn't the kind of useful, relevant information which subscribers to the list would benefit from seeing. If you agree, please ask Perry Metzger, perry@piermont.com, to stop censoring anonymous postings.

===

IanG writes:

> I haven't looked for a while, but last I looked, the #1,2,3 players
> were Linux, Microsoft, FreeBSD, and only a percentage point or two
> separated them. (I'm unsure of the relative orders. And this relates
> to testable web server platforms, rather than all servers.)
>
> So, in the market for server platform OSs, is there any view as to which
> are more secure, and whether that insecurity can be traced to the OS?
> Or external factors such as a culture of laziness in installing patches,
> or derivative vulnerability from being part of the monoculture?
>
> (I raise this as a research question, not expecting any answers!)

Well, you're going to get some. According to the Globe and Mail, http://www.globetechnology.com/servlet/story/RTGAM.20030911.gtlinuxsep11/BNStory/Technology/, "During August, 67 per cent of all successful and verifiable digital attacks against on-line servers targeted Linux, followed by Microsoft Windows at 23.2 per cent. A total of 12,892 Linux on-line servers running e-business and information sites were successfully breached in that month, followed by 4,626 Windows servers."

Even the Linux apologists on slashdot at http://slashdot.org/article.pl?sid=03/09/11/1951201 had a hard time making this one go away.

Friday, April 25, 2003

QuickTopic Discussion Board Available

[Note - this is an archived version of the original posting from 02:50 AM EDT, Apr 25 2003]

I've set up a QuickTopic discussion page to give readers a place to discuss the topics raised on this weblog. At this point I don't expect much traffic, so it's probably not worthwhile to create a new discussion page for each entry.

Try it out, if you have something to say!

Here's the link.

Thursday, April 24, 2003

Fair Use is Not a Right

[Note - this is an archived version of the original posting from 10:40 PM EDT, Apr 24 2003]

Here is a copy of a posting I made last year to the cypherpunks mailing list. It challenges claims made by some that DRM is evil because, among other things, it can take away "fair use" rights. This is an argument from a libertarian perspective that DRM is a perfectly fair type of contract even when it offers no exceptions for "fair use".

Suppose you know someone who has been working for years on a novel. But he lacks confidence in his work and he's never shown it to anyone. Finally you persuade him to let you look at a copy of his manuscript, but he makes you promise not to show any of it to anyone else.

Hopefully it is clear in this situation that no one is doing anything "evil". Even though he is giving you the document with conditions beyond those specified in the current regime of copyright, he is not taking advantage of you. Even though you hold the bits to his manuscript and he has put limitations on what you can do with them, he is not coercing you. You voluntarily accepted those conditions as part of the agreement under which you received the document.

It should also be clear that it would be ethically wrong for you to take the manuscript and show it to other people. Even if you take an excerpt, as allowed under "fair use" exemptions to copyright protection, and include it in a document for commentary or review purposes, that would be a violation of your promise. This example demonstrates that when two people reach a mutual agreement about how they will handle some information, they are ethically bound by it even beyond the regulations of copyright law.

And surely it is clear that no decisions by Congress or any other legislative or judicial body can change the ethics of this situation. In fact, it is absurd to look to Congress for guidelines on ethics! Surely everyone reading is aware that it is one of the least ethical bodies in existence. Those who look to Congress to justify breaking their promises are not looking for ethics, they are looking for excuses. Congress excels at providing those.

The point is that this situation is exactly analogous to what might happen if you purchased a song or other information content by downloading, and restrictions were placed on how you could handle it as a condition of that purchase. One of the restrictions might be that you can make no more than 2 copies of the song for personal use. Another restriction might be that if you give a copy to someone else, you have to delete your copy.

Such restrictions cannot be evil, any more than was the even more strict restriction imposed on the recipient in the book example above. Evil only exists when someone is forced to do something they don't want to. Offering a song or a book with conditions does not force anyone to do anything, because the offer can always be refused. There can be no evil in making someone an offer, even an unacceptably restricted one.

In fact, making or accepting any kind of offer, with any restrictions which the parties choose, is a fundamental freedom which everyone reading this should fight to support. To say that people can only make or accept offers which some third party deems acceptable is a coercive infringement on people's liberty to make their own decisions and to control their lives. It is despotism of the worst sort. Third parties have no right to interfere in the agreements which others make.

Smart Lynch Mobs?

[Note - this is an archived version of the original posting from 04:20 PM EDT, Apr 24 2003]

Last year I bought Howard Rheingold's book, Smart Mobs, about how new portable telecommunication technologies are allowing people to organize themselves in the physical world in novel ways. One of his examples was the use of SMS messaging to set up demonstrations and freedom rallies in the Far East.

Now we see a new and more sinister type of Smart Mob, which I am calling a Smart Lynch Mob. Wired Online reports that SMS messages are feeding the SARS hysteria in Hong Kong. The most recent service will send SMS alerts notifying recipients of "contaminated" buildings that those suspected of being SARS carriers have recently visited!

It's just a matter of time before this technology allows SARS victims themselves to be identified, located and publicly branded. Combining these SMS alerts with future visions of augmented reality, we could imagine SARS patients being labelled via a computer overlay as they walked down the street, carrying a virtual scarlet letter, in effect.

The point is that this technology can be used for harmful as well as helpful purposes. Rheingold titled his book Smart Mobs, but a mob has never been considered a beneficial form of human organization. Mobs are uncivilized, irrational and prone to violence. SMS in Hong Kong is proving to be better at rumor-mongering than at spreading useful information, according to the Wired article.

If we do face a future of Smart Mobs, we should be prepared for the bad side as well as the good. I'm not in love with a technology which is going to bring us back to the age of the mob.

Thanks!

[Note - this is an archived version of the original posting from 03:10 PM EDT, Apr 24 2003]

I want to take this opportunity to thank "zem" for setting up this service. It is wonderful to finally have a forum for publishing my thoughts, uncensored, with a reasonable degree of privacy and anonymity.

For too long have anonymous writers been second-class citizens on the net. I engaged in an extensive online debate and discussion last year regarding the merits of the various proposals for Trusted Computing, like TCPA and Palladium. My messages were a model of respectful and restrained debate (with one possible exception, which I still feel was justified). Yet I was treated in an utterly disrespectful manner.

The cryptography list moderator refused to post many of my initial messages. Luckily I also crossposted to the cypherpunks list, so that people were able to see them and respond. Due to the nature of the email headers, their responses were directed to both lists, putting the cryptography moderator in the sticky position of deciding whether to approve a message that expressed a position he supported, but which quoted my material, which he had censored. It was only when it was clear that my messages were the foundation for the ongoing discussion that he began to carry them.

I also debated the issue on sci.crypt, only to discover later that none of my messages were appearing in the Google archives! That's right, Google refuses to archive messages from known anonymous posting addresses. Unbelievable. The historical record of that discussion is now fragmented and one-sided, with people responding to messages which are non-existent in the archives. And many of my strongest arguments, which received no rebuttals, will not be heard by those who use the archives to educate themselves on this issue.

Not only were these institutional arrangements unfavorable to an anonymous contributor, the community at large was generally hostile as well. I was constantly subjected to insulting and harassing comments about my motivations and supposed lack of intelligence - for supporting people's rights to use technology! I was called a "stooge"; people demanded to know if I worked for Microsoft (I never have). I was an "idiot" (later softened to "intelligent idiot").

Never in all my days of non-anonymous posting have I been subject to such insults. Quite the contrary; in most forums I have established a very strong reputation for careful and honest analysis. Only now, publishing anonymously, have I found myself reduced to the status of a pariah. It was a humbling and educational experience.

All this is by way of expressing my deep thanks to "zem" for his work in setting up this anonymous blogging service. At last there will be a place where I don't feel like a second-class citizen, somewhere that my messages can be posted and archived and referenced. I can build up my arguments and philosophy patiently, over time, and in that way demonstrate the merits of my position.

He's done a great job of making the service easy to use, too (at least for those of us who are accustomed to anonymous communications). And the published blog entries look great, too - crisp and clean.

So again to "zem", thanks and kudos. This means a great deal to me and hopefully to many other anonymous writers who will benefit from your generosity, talent and hard work.