BILETA 2008 : Net Neutrality

Chris Marsden (Essex) is an expert on net neutrality in Europe (and other things); see his SCRIPT-ed article on the topic here. And that’s also his topic for this morning’s keynote. Oh, and it rained a lot last night, but it’s quite mild this morning.

This presentation: it’s about convergence, it’s different in Europe, it’s “politics not economics”, and it’s not going away.

Convergence – but this isn’t new; similar arguments were seen in the 1950s (spectrum use), the 1970s/80s (cable), the 1990s (satellite – in particular Sky and football), the 2000s (mobile) – and now the Internet.

In the US – monopoly power (see the Madison River / Vonage case); it’s a result of the Telecoms Act 1996 and the Trinko and Brand X decisions (which mean that all networks are, for FCC purposes, ‘information services’ and therefore not common carriers). Should ‘common carriage’ be reintroduced? He mentioned the papers by Lemley & Lessig, Tim Wu’s arguments, the opposition (from techies, economists and lawyers), and the fun times at the FCC hearing at Harvard this year.

Europe is different, though, because of local loop unbundling, control of significant market power, and a trend towards *more* regulation (e.g. roaming, reforms to the electronic communications directives). Also, the ‘content’ is different (in the US, it’s often “a commercial dispute hidden as a freedom (or fr’dom) argument”), whereas Europe has EPG regulation, ‘must carry’, etc. We even have the BBC iPlayer – the ‘death star’ for ADSL networks. What if it’s not VoIP that’s being blocked, but EastEnders? UK consumers are paying for broadband, licence fee, Sky subscription…

Japan, now, is an interesting example – net neutrality is in place, and there’s a privileged role for consumer protection in the legal framework; there are incentives to roll out high-speed networks (e.g. incumbent NTT can do so without regulation for a ‘holiday’ period).

The lobbies are the networks (trying to protect investment, not to mention the need to ensure quality of service) vs the content providers (who don’t want to be charged). But the networks *are* actually blocking things like BitTorrent (under the headings of traffic management, antivirus, etc.) while advertising unlimited access. And the content providers (like the BBC telling users to lobby their ISPs to switch on simulcasting!) are having a free ride, especially for video and P2P.

Also, the interaction between filtering and net neutrality, which has lots of unforeseen possible consequences. And there are issues with competition law – and what of BT, which has a dominant position?

Chris also spoke about Phorm, a very interesting yet terrifying ‘adware’ system at the ISP level (“Google on steroids, or Big Brother incarnate”) (couple of links here) – is it even legal? He wondered, though, if Phorm is the response to net neutrality, i.e. if the telco can’t make money through NN, can they make it through something like Phorm?

We also heard a little about ‘end to end’ and other such pronouncements; how much innovation happens “at the edge” in reality? And a related question is on what basis filtering can actually be allowed…

The conclusion looked at DRM, privacy, blocking, hate speech and even the AVMS Directive. The legal provisions, aside from that directive, include the electronic communications directive, the IS Security Framework Agreement, the E-Commerce Directive and more – which taken together mean greater intervention by ISPs in what goes through their networks. The regulators are passing the buck – we are going in circles. “They’re all a bunch of tubes”.

BILETA 2008 : Open Access

The last (official) business of the day was a series of parallel presentations/experiments/workshops; I passed by the fun-looking projections and computers and went to an interesting panel under the banner of the recently relaunched SCRIPT-ed (the open access periodical managed by the University of Edinburgh). Journal editors Wiebke Abel and Shawn Harmon put together a session on open access and legal scholarship.

Speaker Andres Guadamuz (Edinburgh), co-director of SCRIPT, previewed the session on his blog, here, and session chair Shawn Harmon, after introducing the panel, discussed SCRIPT-ed and their approach to peer review and rapid turnarounds (always welcome). He pointed in particular to the interdisciplinary nature of work in the technology-law-society area. Finally, he highlighted the call for papers and registration for the March 2009 conference.

Andres then spoke about the importance of open access to legal information. Licences such as the GNU FDL (used by Wikipedia) or those developed by Creative Commons are important; it’s not just about making something available online without charge, although a lot of publishers/republishers have yet to grasp these subtleties and are still quite risk-averse. The Berlin Declaration, on the other hand, is more about access, but requires peer review. Policymakers and research councils, then, may have different definitions again; the latter are interested in the role of public money and the making available of the resulting work to the public, and policies on public sector information (including caselaw) add further complexity. In response to a question, he also discussed pre-prints (SSRN etc).

John MacColl (St Andrews University Library) spoke about open access repositories and his experience in developing such at the University of Edinburgh. Librarians come at these issues with lots of reasons in mind: in particular, budgets are stretched (research libraries can spend over three-quarters of their materials budget on periodicals, including licence fees). Interestingly, the debate has been a more gentle one in Australia, because without a strong publishing industry, that major source of opposition wasn’t present as it is in the UK. He explained how ERA, the Edinburgh Research Archive, works, and how academics deal with it, referring to the two databases for checking on publisher and research council policies (RoMEO and JULIET). Institutional, national or funding-council ‘mandates’ are extremely important; the new research assessment methods in the UK, which will include metrics, will also be relevant. (For the record, the institutional repository at my own institution was very much influenced by Edinburgh’s; our library is still working hard on getting staff and research students to submit, so if you’re one of my TCD audience, think about doing so?)

Diane Rowland (Aberystwyth) opened with a story about a colleague who didn’t want to publish in an online-only journal (“I like paper” was his reason). The serious issues are quality, prestige and impact – and the perception of these issues. A stronger commitment (from the discipline, as well as from individual schools or departments) is necessary – for example, in the RAE just gone by, articles in web-based journals were not in the same category as ‘journal articles’ more generally. Like John MacColl, she was interested in the development of new metrics and what this would mean for the journals as well as for individual behaviour.

Prof. Abdul Paliwala (Warwick) is the director of the Journal of Information, Law and Technology (JILT). He reflected on the development of the journal and in particular looked forward to a forthcoming issue on legal education that would make some use of multimedia, reinforcing the decision not to have a paper version. He suggested a meeting of all those people involved in the free and open access movements.

Finally, Burkhard Schafer (Edinburgh) spoke about the purpose and status of peer review more generally. He went from low copy number DNA to the credibility and accuracy of citations, and wondered about the development of standards (‘seals of approval’, perhaps?).

And now my battery is dying, so that’s it for ‘live’ blogging for today, but there’ll be more tomorrow…

BILETA 2008 : The Digital Environment

Burkhard Schafer (Edinburgh) started by shooting the Chair.

He then went on to discuss evidence and hearsay and computer-generated speech: how relevant is the hearsay rule when you have lots of information ‘spoken’ (aurally or visually) by a computer? This works for something as simple as the ‘time’ of a particular event; if the computer ‘tells’ you the time, is that hearsay? With ‘helpful’ suggestions from a computer (think of something like predictive text), the grey areas are significant. Is a bot sent to interact in a chatroom an electronic document or hearsay-speech? What about the call centre and the mix between preformulated material and the human operator? Should we just abolish the distinction? (He says no, but we have to work on the definitions.)

This is about the fourth time I’ve heard Burkhard speak at conferences and again, I came away with questions that will annoy me for a while! Which I suppose is a good thing :-)

Next up was Primavera De Filippi (also part of the EUI contingent), talking about user-generated content and democratic participation. Again this is something that comes into my current research interests, so I was on the lookout for ideas here. The theoretical background was Habermas on the public sphere (Öffentlichkeit), which was related to ‘e-democracy’ and the participative web; the “electronic Agora”, a lovely image. The challenges, though, include scarcity of Internet access in some countries, the need to develop appropriate technical skills, the power of intellectual property law (particularly when it comes to parody or transformative works), and censorship/filtering. A freewheeling discussion followed about Wikipedia, blogging and the “geekery” of the Internet.

Finally, Judith Rauhofer (Central Lancashire) talked about data protection (which is now ‘sexy’ and ‘the new black’, apparently!). She argued that it is necessary to return to first principles, and discussed the concept and definitions of privacy, including personality rights in the German constitutional context, compared with US concepts of the ‘public interest’, and the various legal protections of privacy in domestic and international law. Data protection has developed (in a way) as an expression of the idea of privacy. Interestingly, the international interventions under this heading can be seen as a ‘backlash’, as some states were effectively creating ‘data havens’ by regulating within a domestic sphere only, and international standards can be either a ‘race to the top’ or a ‘race to the bottom’. Additionally, individuals ‘trade’ their data: loyalty cards or travel cards (Oyster) are great examples. Judith explored the reason for privacy protection, focusing on social reasons rather than purely individual ones; this tradition can be traced to the famous German ‘Census Act Case’ in 1983. “If one individual or group of individuals waives privacy rights, the level of privacy for all individuals decreases”. Her suggestion was the use of regulation to force the use of code, which prompted a good set of questions and comments.

BILETA 2008 : Intermediaries, Invisibility and the Rule of Law (TJ McIntyre)

TJ McIntyre is chair of Digital Rights Ireland and a lecturer in law at University College Dublin. He spoke to the title above at a session on regulation this afternoon.

TJ spoke about the shift from cyberlibertarianism to cyberpaternalism. What the libertarians suggested didn’t really come to pass; they pointed to disintermediation, practicalities (jurisdiction, geography), out-of-date law, etc. So why didn’t that happen? The intermediary became a target of regulation – and ultimately the ‘new intermediaries’ can have a greater ability to control users (compare the post office with an ISP, for example); also, ‘code as law’ (as in Lawrence Lessig‘s work). This is good in that it retains ‘democratic control’, but there’s also a potential loss of transparency and the avoidance of normal checks and balances. In particular, is there transparency in the introduction of regulation? It should be by primary legislation after public debate; but in famous cases (Spitzer’s crusade against gambling and music industry litigation against ISPs), this didn’t happen; thus such actions can undermine existing democratic compromises, evade public law norms (outsourcing), ignore fair procedures, lack proportionality, etc.

Cleanfeed was the main case study in the talk; the system, which originated with the Internet Watch Foundation, blocks around 1000 URLs (for websites, not other Internet functions), and has a limited appeal procedure. Political pressure and the threat of legislation saw Cleanfeed or other mechanisms extended to all broadband ISPs by the end of 2007. TJ offered detailed criticism of Cleanfeed in the context of the issues raised above (public law, fair procedures etc).

He concluded with questions about whether these solutions are necessary, appropriate, or otherwise. All comments welcome on this thread too. My own presentation covers some similar ground, so it was a very useful talk; the discussion raised important issues too, including useful information from an audience member associated with the UK academic network provider Janet.

BILETA 2008 : Morning Notes

Hello from the BILETA (British and Irish Law, Education and Technology Association) annual conference at Glasgow Caledonian University in lovely, rainy Scotland. The conference (agenda, abstracts etc available here) is taking place today and tomorrow. I’m speaking about ‘Expression 2.0‘ tomorrow afternoon.

This morning, I popped into a talk by Dr. Carlisle George (Middlesex) on SABAM v Tiscali (very similar to our own EMI v Eircom case) and then to a session on privacy with Karen McCullagh (Salford) and Kevin Rogers (Hertfordshire). Karen reported on her research on perceptions of privacy among bloggers (also discussed here and just published in Information and Communications Technology Law, here), while Kevin discussed recent UK decisions interpreting (rather narrowly) the Data Protection Act: Durant v Financial Services Authority, Johnson v Medical Defence Union, Ezsias v Welsh Ministers, and looked forward to Common Services Agency v Scottish Ministers in the House of Lords next week. In the question-and-answer session, the discussion looked at privacy vs data protection, statutory rights and human rights, British v European approaches and more – stopping for lunch after a while, though!

We started the day with two provocative keynote addresses by David Wall (Leeds) and David Flint (MacRoberts, solicitors). David the First spoke about cybercrime and wondered in particular why there had been so few convictions; his ‘three generations of cybercrime’ were very useful (traditional, hybrid and ‘true’). He has a book on cybercrime and there’s more about all that in there. The second David spoke about social networking, privacy and more, in a very entertaining way, also raising interesting questions like the ‘responsibility to protect’ (users) that should be attached to sites like Facebook.