by Russell McOrmond1
This work is licensed under a Canadian Attribution-ShareAlike Creative Commons License.
As I think about technology law and how it is created, I recognize how software governs the activities of citizens. I don't know exactly when I discovered the work of Lawrence Lessig2, but when I did I instantly took to his thesis. Although I did not read his book "CODE and other laws of cyberspace"3 until 20044, by then I had already deeply integrated this thinking into my policy work.
A professor of law at Stanford Law School, Lessig describes "East Coast Code" as code authored in Washington, DC by policy makers and politicians (to be executed by law enforcement and the courts), and "West Coast Code" as code authored in Silicon Valley, California (to be executed by a computer). The United States has this theoretical geographical division between the authors of the two types of code; in Canada, the federal government and Silicon Valley North are both in Ottawa.
In the conclusion of CODE5 Lawrence Lessig asks:
We live life in real space, subject to the effects of code. We live ordinary lives, subject to the effects of code. We live social and political lives, subject to the effects of code. Code regulates all these aspects of our lives, more pervasively over time than any other regulator in our life. Should we remain passive about this regulator? Should we let it affect us without doing anything in return?
If code is a form of law that regulates us, why must we treat that code as simply another product? Should we not be questioning this highly pervasive form of governance? Should we not be demanding the same level of transparency and accountability for this code as we demand of other regulation? Should we not interact with code as citizens, not as consumers?
With software code recognized as a form of governance, the importance of Free/Libre and Open Source Software6 (FLOSS) becomes obvious. I do not see FLOSS as a way to save money, but as a set of criteria that offers the transparency and accountability required for public code. When I see software I don't analyze it simply in engineering (natural science) terms, but more often in far more critical political (social science) terms.
This concept can be further understood by breaking down a provocative statement that I began using in early 2003.
Governance software that controls Information and Communications Technology (ICT), automates government policy, or electronically counts votes, should not be thought of as something that should be bought any more than politicians should be thought of as something that should be bought.
The statement was intended to suggest that where other people saw legitimate business models or methods for creating software for protecting copyright (so-called Digital Rights Management or DRM), e-Government, e-Voting or direct recording electronic (DRE) voting systems, I saw forms of political corruption. In all of these cases there are far more important policy considerations at stake than the narrower concerns that policy makers have thus far addressed.
Imagine an election where a voter hands their ballot over to a private corporation, who acts as a proxy for the voter. Internal to this corporation is a trade secret process used to count and destroy these ballots. Nothing in this process is disclosed to the public, and it is considered illegal to publicly discuss details of the process. The government then asks this corporation for the outcome of the process, and based on what this corporation says a new government is formed.
As ludicrous as this sounds, it is how an increasing number of elections are decided, including the November 2004 election in the United States, where it was suggested that one third of voters used ballot-less voting machines. People fail to recognize this because of their blind trust in computers and their lack of understanding of the policy nature of software code.
When a company like Diebold authors software that is used to count votes, it is authoring the process by which an election will be decided. When you mark your vote on, for example, a touch screen, you are in effect handing your ballot over to this privately authored process, with the vendor acting as your proxy. Even assuming all the software bugs are fixed, this machine under Diebold's control then decides what to do with your vote. There is no ballot to destroy, and thus no record for election scrutineers to use to verify the accuracy of the count. If the machine records the vote incorrectly, there is no mechanism to find out. Given that power corrupts, it seems impossible to me that these private corporations, including and especially ones already making sizable donations to specific candidates, will not abuse this ability to untraceably corrupt the election process in the future. And that is allowing for the possibility, unlikely in my view, that they have not corrupted past elections already.
There have been attempts to expose this corruption. In one specific example, when flaws in the voting system were disclosed, Diebold attempted to suppress the information by claiming copyright infringement under the US DMCA7. Whether this type of policy (code that electronically counts votes) should be eligible for copyright protection at all is questionable, but it should be beyond doubt that this material must be open to public scrutiny.
Given the importance of elections, I do not believe a ballot-less process should be used regardless of how transparent and accountable the software is8. In a process without a ballot it is always possible for someone to corrupt the election by running software on a voting machine other than the software that was publicly disclosed. A machine that generates a human-readable (and thus human-verifiable) paper ballot solves the mechanical problems with ballots, and the paper ballot makes it possible to use multiple independent hardware/software combinations to verify the integrity of the count. No amount of mechanical problems with ballots justifies doing away with the ballot entirely and moving to a proxy voting system where the proxies are not chosen by the voter and are acknowledged to be biased and untrustworthy.
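The cross-checking that a paper ballot makes possible can be sketched in a few lines of Python. This is an illustration only: the ballot data is invented, and the two counting routines stand in for independently authored hardware/software combinations (or hand counts) that must agree on the same physical record.

```python
from collections import Counter

# Hypothetical ballot data: one human-readable choice per paper ballot.
ballots = ["Alice", "Bob", "Alice", "Alice", "Bob"]

def tally_by_counter(ballots):
    """First, independently authored implementation of the count."""
    return dict(Counter(ballots))

def tally_by_loop(ballots):
    """Second implementation, written separately, used to cross-check."""
    totals = {}
    for choice in ballots:
        totals[choice] = totals.get(choice, 0) + 1
    return totals

# The paper ballot is what makes this possible: any number of
# independent programs can recount the same record and must agree.
assert tally_by_counter(ballots) == tally_by_loop(ballots)
print(tally_by_counter(ballots))
```

A ballot-less machine offers no equivalent: there is nothing independent of the vendor's own software for a second implementation to recount.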
Even before the Government-on-Line (GoL) initiative in Canada there were many examples of government policy being automated in software. When a policy maker authors policy, and then a software author implements this policy in software, who verifies the accuracy of the translation? When there is a discrepancy, whether a software bug or some type of corruption, how is it found, and once found, how well is the problem publicly disclosed?
When a citizen interacts with government through an electronic form, it is the rules these forms obey that the citizen comes to know as government policy. If the electronic form has a bug where it does not allow an option which the underlying government policy would allow, it is the operations of the electronic form that take precedence. Currently most interactions that can be carried out electronically can also be done on paper or in person with a government employee, but not all citizens understand the importance of this option. Increasingly we may find some government interactions that will only happen electronically, possibly with government employees interacting with government databases using the same electronic forms.
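This kind of discrepancy between written policy and its software translation can be sketched as follows. Everything here is hypothetical: the age threshold, the policy, and the bug are invented purely to make the translation problem concrete.

```python
# Hypothetical written policy: applicants aged 16 and up are eligible.
POLICY_MINIMUM_AGE = 16

def policy_allows(age):
    """What the underlying written policy actually permits."""
    return age >= POLICY_MINIMUM_AGE

def form_accepts(age):
    """The deployed electronic form -- note the translation bug:
    the author hard-coded 18 instead of the policy's 16."""
    return age >= 18

# A 17-year-old is entitled to apply under the policy, but the form
# silently refuses. The citizen experiences the form's rule as "the
# policy", and without public scrutiny of the code the discrepancy
# may never be found.
print(policy_allows(17), form_accepts(17))  # True False
```

With open, publicly scrutinized code, anyone could compare the constant in the form against the published policy; with closed code, the bug is invisible.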
What happens when this software is outsourced? There are two types of software outsourcing: when a specific project is outsourced, and when the government acquires Commercial Off The Shelf (COTS) software. In both cases, software which will be part of the automation of government policy was authored by a private interest which may have its own policy goals. As with the voting system, it would be impossible to convince me that these private interests would not abuse an opportunity to affect government policy without being detected.
Should this software be treated as a product that is bought, or as simply a translation of government policy into another language? If this is government policy, should it not be open to public scrutiny via the Access to Information Process (ATIP) like any other government policy? Should claims that economic interests in the software code or methods must be "protected" by copyright and patent law be honored as a legitimate business model, or recognized as attempts at government corruption?
The following statement has been made on my website in response to poor policy recommendations by the current government:
Any 'hardware assist' for communications, whether it be eye-glasses, VCRs, or personal computers, must be under the control of the citizen and not a third party.
Corollary: The "content industries", such as the motion picture and recording industries, are not legitimate stakeholders in the discussion of what features should or should not exist in my personal computer or VCR, any more than they are a legitimate stakeholder in the production of my corrective eye-glasses. If a member of a content industry doesn't like the technology that exists in a given market sector, be it consumer electronics in the home or personal computers, they can simply not offer their products/services into that market.
Richard Stallman, noting the wrong direction some policy makers are headed with so-called "trusted computing", observed9:
"Trusted computing" would make it pervasive. "Treacherous computing" is a more appropriate name, because the plan is designed to make sure your computer will systematically disobey you. In fact, it is designed to stop your computer from functioning as a general-purpose computer. Every operation may require explicit permission.
What is commonly referred to as Digital Rights Management (DRM) can be seen as made up of three components: Digital Rights Encoding (DRE), Technological Protection Measures (TPM), and vendor-controlled ICT. To understand the policy concerns we must recognize that the controversy lies not in the first two components but in the third.
Digital Rights Encoding (DRE). These are methods of marking content to indicate what uses are authorized by the "rights holder". There are both open and closed examples, with one open DRE in use by the Creative Commons movement10. This encoding is intended to be, and should be, under the control of the copyright holder. It is quite appropriate for it to be considered a crime to impersonate a copyright holder and fraudulently claim rights, regardless of the technologies used.
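As an illustration only, a rights encoding might look like the sketch below. The field names and values are invented for this example, not any real standard (Creative Commons actually expresses its licenses as machine-readable RDF metadata); the point is that the encoding is a declaration by the copyright holder which any honest tool can consult.

```python
# Hypothetical rights encoding attached to a work. All field names
# are illustrative inventions, not a real DRE format.
work = {
    "title": "An Essay",
    "rights_holder": "Example Author",
    "license": "Attribution-ShareAlike",
    "permits": ["copy", "distribute", "derive"],
    "requires": ["attribution", "share-alike"],
}

def is_authorized(work, use):
    """An honest tool consults the holder's encoding before acting."""
    return use in work["permits"]

print(is_authorized(work, "copy"))        # True
print(is_authorized(work, "sublicense"))  # False
```

Note that nothing in the encoding itself requires vendor control of the citizen's ICT; it is simply data stating the holder's terms.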
Technological Protection Measures (TPM). There are many methods lumped within TPM. Simple passwords are used to gain access to resources, such as protected documents on a website. Digital signatures are used to verify that a document (content plus DRE) came from the expected source. Various types of encryption ensure that only those with the proper key can unlock a message, so that third parties do not have access.
Vendor-controlled ICT. It is claimed that, in order to enforce DRE, the vendor of a communications tool must have ultimate control over the ICT, rather than the citizen who owns it. In my mind this rests on a number of misconceptions, the most obvious being the belief that ICT vendors are not themselves special interests in the process. In a world where ICT mediates more and more of our lives, those who control ICT end up with considerable influence over our lives. While ICT vendors could obey the DRE of a copyright holder, they could also ignore the rights of copyright holders who do not sign agreements with them. With vendor control over ICT, these vendors could disallow the use of ICT for creativity not authorized by the vendor.
It is my hope that those who support creativity will eventually understand this concern. While DRE is something we would all benefit from, DRM is not in the interests of creators11 or their audiences. It is an unaccountable private replacement of the public policy known as copyright. Those who support DRM are fundamentally anti-copyright, a title that has ironically been given to those of us trying to protect the balance of rights expressed in copyright law.
1Russell McOrmond is a self-employed Open Systems/Standards/Software Internet Consultant. http://flora.ca/ (Accessed September 27, 2004). He not only believes that "code is law", but also that "law is code" and spends much of his time "hacking" this type of code.
4Review of CODE and other laws of Cyberspace, by Russell McOrmond http://www.flora.ca/russell/drafts/review-of-code.html (Accessed October 18, 2004)
5The conclusion is available online at http://www.code-is-law.org/conclusion_excerpt.html (Accessed October 25, 2004)
7Online Policy Group v. Diebold, Inc. http://www.eff.org/legal/ISP_liability/OPG_v_Diebold/ (Accessed October 18, 2004)
8There is transparent and accountable Open Source software for Direct Recording Electronic (DRE) voting machines, including one project at the Open Vote Foundation http://open-vote.org/ (Accessed October 25, 2004)