BILETA 2008 : In Theory

I attended a session on ‘theory’ as I was one of the speakers! I don’t perceive my work as particularly theoretical, but it was a very interesting session and I’m glad that I ended up there. My paper, Expression 2.0: from known unknowns to unknown knowns, will be posted on the BILETA website and I’ll link it then; here are my slides (PDF). I spoke about the control of expression by social networking and web 2.0 hosts and also by internet service providers, particularly through terms of use. In the second part of the paper, I looked at different approaches to freedom of expression, freedom of communication, and the relationship between human rights and private parties.

Next up was Hayley Hooper, who talked about new technologies and the enduring role of constitutional rights. Recent trends promote ‘liberal legalism’ and a particular, marketised approach to supranational constitutional development.

1. The model of legal constitutionalism (not a purely structural theory), as outlined by Adam Tomkins – separate from politics, in the courtroom, control of government, etc. While these features sound reasonable, she argued that they are normatively undesirable. There is a need to look at supranational entities as they have more constitutional power and control over citizens. We haven’t moved beyond inherent bias in constitutions, despite what some may say. The bias that Hayley suggests in the EU context is market-based, e.g. the four (economic) freedoms. The judiciary are suited to this model; dealing with socio-economic rights is difficult.

2. The UK and the development of judicial review. Until Malone v Metropolitan Police Commissioner (telephone tapping), a human rights claim could not be made (there was no legal or equitable claim) – everything is permitted unless it is forbidden. But the judiciary became much more activist in ‘discovering’ constitutional rights in the common law … although the record isn’t consistent, and incorporation of the ECHR hasn’t reinforced this trend. Pro-Life Alliance is an example, where due deference was applied. So despite the upsurge, the judiciary have a particular mindset and won’t go into the more controversial areas.

3. Juristocracy. Recent developments in the UK show a trend towards this. Judicial, economic and political elites have converging/confluent interests, on the back of neo-liberal technological progress. What are the consequences? The US situation is interesting, especially in the political debate over Roe v Wade. The Government’s ideas reinforce the juristocracy trend; the 1997 election generated it and it has been ongoing since then.

The discussion touched on Dworkin, Gearty, alternative approaches to judging and decision-making, and the role of ‘governance’. And I enjoyed it a lot.

Finally, Martina Gillen (Oxford Brookes) spoke about developing a ‘Sociology of Law 2.0’. Our identity has always been ‘defined’ in a way by technology – even from the definition of homo sapiens. In Durkheim’s original classification, he focused on divisions of labour etc; but buried in it is a differentiation based on the use of technology – the elephant in the living room. Technology is shaping what’s happening, but our theorising of it is poor. Our tendency to want ‘legal certainty’ has given us a mechanical/scientific mindset. We (as lawyers) view technology as something we should be ‘attracted’ to, and we want the analogy to apply to law.

We focus on ‘nodes’ but what are we missing? There’s also the proliferation of economic control and interests. “Just because it’s new, does that make it significant?” And Martina also mentioned the ‘cult of the shiny’, which is something I could rant about but won’t…

What would the new features be? Public sphere; multiple jurisdictions; intersections.

We then saw a very useful diagram about security models (I haven’t got the technological ability to reproduce it but I’ll link to it when published), arguing that we should focus on what the user is actually doing! It seems trite but it’s a key feature that has been ignored. We need to move beyond categories and consider normative change. Finally, Martina outlined a research method proposal, based on proxies, to study *what* people are doing on the Internet and *why*. The conclusion, then, is that we need to use the technology to get data.

BILETA 2008 : Trojans and Thieves

Wiebke Abel (Edinburgh) and Matthias Damm (an attorney in Karlsruhe, and LLM graduate of Strathclyde) both addressed the topic of trojans and spy software and their use by law enforcement agencies in particular.

Wiebke started things off with an overview of how ‘the world has changed’ and what this means for crime. Are traditional investigation methods and laws sufficient to deal with new challenges? Can a ‘new generation of investigators’ (and investigative tools) help? She picked a particular example, the ‘German Federal Trojan’ (aka Bundestrojaner!). Trojans are familiar (as used by hackers, spammers and others) – but are they only for criminal use? The plan here is for covert search and surveillance of private computers by police or secret services. This can be implemented through spyware, through existing ‘backdoors’ and even download-contamination. There was – naturally – outrage in Germany about this – but was this a once-off? No: the US ‘Magic Lantern’ and the Austrian ‘online search’ are other examples. These technologies are special because of the way they combine factors such as mobility, ubiquity, invisibility and digital evidence collection; but they are unpredictable and can even raise international issues (trojans operating outside national borders), and the use of gathered data is wholly unclear at this stage (would it stand up in court? should it?). And how do you prevent antivirus software from identifying the supposedly hidden trojan? Wiebke mentioned R v Aaron Caffrey (the existence of a trojan was used as a defence in a criminal trial about material on C’s machine). A possible solution is seeing source code as the ‘DNA of software’ and hardwiring the law into software. But the overwhelming need is an approach where regulation through law and regulation through code work together.

Matthias then started his presentation, ‘I know what you saved last summer’. He also took guidance from history, mentioning fingerprints, DNA and CCTV as examples of new investigative ‘technologies’. Today’s investigators look more like computer operators than Sherlock Holmes. CIPAV (Computer and Internet Protocol Address Verifier) is in use in the US, although it’s not supposed to be dealing with content. The FBI haven’t been very helpful in explaining how it works. As for the Bundestrojaner, the Federal Constitutional Court dealt with this (on 27th February 2008) and gave the go-ahead to such software in its ruling, subject to strict conditions (such as a court order and respect for private data). This was the same case in which the Court formulated a new constitutional right, the guarantee of the confidentiality and integrity of IT systems. More than 60% of the German population apparently support the system, although are they aware of the Orwellian nature of such software?

After a discussion on the trojan issues, Angus Marshall (Teesside) then reported on the EPSRC-funded ‘Cyberprofiling’ project. The project looked at offender and geographic profiling, in particular in the context of intelligence and intelligence-sharing. How can existing information (server logs etc) be used in a useful way? Overcoming various problems, they developed a ‘data collection appliance’. But one of the most interesting legal issues that arose was whether an IP address is a ‘personal identifier’ (relevant for sensitive data / data protection / sharing / etc). The Information Commissioner has given various answers; European practice varies. The research group didn’t feel that IP addresses were personal, though they did accept the advice and used anonymisation. This itself required some new work. So how does this type of ‘dataveillance’ compare with other things like (on one hand) CCTV, DNA and wiretapping and (on the other) credit cards, mobile phone tracking, loyalty cards etc? The first category is ‘biometric keyed’ and the second is ‘token mapped’. Angus gave an overview of the regulation and effectiveness of each. He concluded that a telephone number is not a personal identifier; neither, they argued, is an IP address (though combined with other factors it ‘may be personal data’). Again, the discussion was extremely vibrant, and now it’s off to lunch.

BILETA 2008 : Net Neutrality

Chris Marsden (Essex) is an expert on net neutrality in Europe (and other things); see his SCRIPT-ed article on the topic here. And that’s also his topic for this morning’s keynote. Oh, and it rained a lot last night but it’s quite mild this morning.

This presentation : It’s about convergence, it’s different in Europe, it’s “politics not economics” and it’s not going away.

Convergence – but this isn’t new; the same arguments were seen in the 1950s (spectrum use), the 1970s/80s (cable), the 1990s (satellite – in particular Sky and football), the 2000s (mobile) – and now the Internet.

In the US – monopoly power (see the Madison River / Vonage case); it’s a result of the Telecommunications Act 1996 and the Trinko and Brand X decisions (which mean that all networks are, for FCC purposes, ‘information services’ and therefore not common carriers). Should ‘common carriage’ be reintroduced? He mentioned the papers by Lemley & Lessig, Tim Wu’s arguments, the opposition (from techies, economists and lawyers), and the fun times at the FCC hearing at Harvard this year.

Europe is different, though, because of local loop unbundling, control of significant market power, and the fact that there is a trend towards *more* regulation (e.g. roaming, reforms to the electronic communications directives). Also, the ‘content’ is different (in the US, it’s often “a commercial dispute hidden as a freedom (or fr’dom) argument”), whereas Europe has EPG regulation, ‘must carry’, etc. We even have the BBC iPlayer – the ‘death star’ for ADSL networks. What if it’s not VoIP that’s being blocked, but EastEnders? UK consumers are paying for broadband, the licence fee, a Sky subscription…

Japan, now, is an interesting example – net neutrality is in place, and there’s a privileged role for consumer protection in the legal framework; there are incentives to roll out high-speed access (e.g. the incumbent NTT can do so without regulation for a ‘holiday’ period).

The lobbies are the networks (trying to protect investment, not to mention the need to ensure quality of service) vs the content providers (who don’t want to be charged). But the networks *are* actually blocking things like BitTorrent (under the headings of traffic management, antivirus, etc.) while advertising unlimited access. And the content providers (like the BBC telling users to lobby their ISPs to switch on simulcasting!) are having a free ride, especially for video and P2P.

Also, there is the interaction between filtering and net neutrality, which has lots of unforeseen possible consequences. And there are issues with competition law – and what of BT, which has a dominant position?

Chris also spoke about Phorm, a very interesting yet terrifying ‘adware’ system at the ISP level (“Google on steroids, or Big Brother incarnate”) (couple of links here) – is it even legal? He wondered, though, if Phorm is the response to net neutrality, i.e. if the telco can’t make money through NN, can they make it through something like Phorm?

We also heard a little about ‘end to end’ and other such pronouncements; how much innovation happens “at the edge” in reality? And a related question is on what basis filtering can actually be allowed…

The conclusion looked at DRM, privacy, blocking, hate speech and even the AVMS Directive. The legal provisions, aside from that directive, include the electronic communications directive, the IS Security Framework Agreement, the E-Commerce Directive and more – which taken together mean greater intervention by ISPs in what goes through their networks. The regulators are passing the buck – we are going in circles. “They’re all a bunch of tubes”.

BILETA 2008 : Open Access

The last (official) business of the day was a series of parallel presentations/experiments/workshops; I passed by the fun-looking projections and computers and went to an interesting panel under the banner of the recently relaunched SCRIPT-ed (the open access periodical managed by the University of Edinburgh). Journal editors Wiebke Abel and Shawn Harmon put together a session on open access and legal scholarship.

Speaker Andres Guadamuz (Edinburgh), co-director of SCRIPT, previewed the session on his blog, here. Session chair Shawn Harmon, after introducing the panel, discussed SCRIPT-ed and their approach to peer review and rapid turnarounds (always welcome). He pointed in particular to the interdisciplinary nature of work in the technology-law-society area. Finally, he highlighted the call for papers and registration for the March 2009 conference.

Andres then spoke about the importance of open access to legal information. Licences such as the GNU FDL (used by Wikipedia) or those developed by Creative Commons are important; it’s not just about making something available online without charge, although a lot of publishers/republishers have yet to grasp these subtleties and are still quite risk-averse. The Berlin Declaration, on the other hand, is more about access, but requires peer review. Policymakers and research councils, then, may have different definitions again; the latter are interested in the role of public money and the making available of the resulting work to the public, and policies on public sector information (including caselaw) add further complexity. In response to a question, he also discussed pre-prints (SSRN etc).

John MacColl (St Andrews University Library) spoke about open access repositories and his experience of developing one at the University of Edinburgh. Librarians come at these issues with lots of reasons in mind: in particular, budgets are stretched (research libraries can spend over three-quarters of their materials budget on periodicals, including licence fees). Interestingly, the debate has been a gentler one in Australia because, without a strong publishing industry, that major source of opposition wasn’t present as it is in the UK. He explained how ERA, the Edinburgh Research Archive, works, and how academics deal with it, referring to the two databases for checking on publisher and research council policies (RoMEO and JULIET). Institutional, national or funding-council ‘mandates’ are extremely important; the new research assessment methods in the UK, which will include metrics, will also be relevant. (For the record, the institutional repository at my own institution was very much influenced by Edinburgh’s; our library is still working hard on getting staff and research students to submit, so if you’re one of my TCD audience, think about doing so?)

Diane Rowland (Aberystwyth) opened with a story about a colleague who didn’t want to publish in an online-only journal (“I like paper” was his reason). The serious issues are quality, prestige and impact – and the perception of these issues. A stronger commitment (from the discipline, as well as from individual schools or departments) is necessary – for example, in the RAE just gone by, articles in web-based journals were not in the same category as ‘journal articles’ more generally. Like John MacColl, she was interested in the development of new metrics and what this would mean for the journals as well as for individual behaviour.

Prof. Abdul Paliwala (Warwick) is the director of the Journal of Information, Law and Technology (JILT). He reflected on the development of the journal and in particular looked forward to a forthcoming issue on legal education that would make some use of multimedia, reinforcing the decision not to have a paper version. He suggested a meeting of all those people involved in the free and open access movements.

Finally, Burkhard Schafer (Edinburgh) spoke about the purpose and status of peer review more generally. He went from low-copy-number DNA to the credibility and accuracy of citations, and wondered about the development of standards (‘seals of approval’, perhaps?).

And now my battery is dying, so that’s it for ‘live’ blogging for today, but there’ll be more tomorrow…

BILETA 2008 : The Digital Environment

Burkhard Schafer (Edinburgh) started by shooting the Chair.

He then went on to discuss evidence, hearsay and computer-generated speech: how relevant is the hearsay rule when lots of information is ‘spoken’ (aurally or visually) by a computer? This works for something as simple as the ‘time’ of a particular event; if the computer ‘tells’ you the time, is that hearsay? With ‘helpful’ suggestions from a computer (think of something like predictive text), the grey areas are significant. Is a bot sent to interact in a chatroom an electronic document or hearsay-speech? What about the call centre and the mix between preformulated material and the human operator? Should we just abolish the distinction? (He says no, but we have to work on the definitions.)

This is about the fourth time I’ve heard Burkhard speak at conferences and again, I came away with questions that will annoy me for a while! Which I suppose is a good thing :-)

Next up was Primavera De Filippi (also part of the EUI contingent), talking about user-generated content and democratic participation. Again this is something that comes into my current research interests, so I was on the lookout for ideas here. The theoretical background was Habermas on the public sphere (Öffentlichkeit), which was related to ‘e-democracy’ and the participative web; the “electronic Agora”, a lovely image. The challenges, though, include scarcity of Internet access in some countries, the need to develop appropriate technical skills, the power of intellectual property law (particularly when it comes to parody or transformative works), and censorship/filtering. A freewheeling discussion followed about Wikipedia, blogging and the “geekery” of the Internet.

Finally, Judith Rauhofer (Central Lancashire) talked about data protection (which is now ‘sexy’ and ‘the new black’, apparently!). She argued that it is necessary to return to first principles, and discussed the concept and definitions of privacy, including personality rights in the German constitutional context, compared with US concepts of the ‘public interest’, and the various legal protections of privacy in domestic and international law. Data protection has developed (in a way) as an expression of the idea of privacy. Interestingly, the international interventions under this heading can be seen as a ‘backlash’, as some states were effectively creating ‘data havens’ by regulating within a domestic sphere only, and international standards can be either a ‘race to the top’ or a ‘race to the bottom’. Additionally, individuals ‘trade’ their data: loyalty cards or travel cards (Oyster) are good examples. Judith explored the reasons for privacy protection, focusing on social reasons rather than purely individual ones; this tradition can be traced to the famous German ‘Census Act Case’ of 1983. “If one individual or group of individuals waives privacy rights, the level of privacy for all individuals decreases”. Her suggestion was the use of regulation to force the use of code, which prompted a good set of questions and comments.