Archive for the ‘scripted’ tag
The editorial in the December 2012 issue of SCRIPTed: A Journal of Law & Technology is written by me, along with Dr. John Sheekey (a mathematician who also happens to be my brother). In our piece, ‘All that glitters is not gold, but is it diamond?’ (2012) 9 SCRIPTed 274, we respond to the proposals for open access academic publication in the UK and elsewhere. While you might expect that we would welcome proposals to make the results of research more widely available, particularly when so many articles require payment or a subscription, we argue that the current proposal in the UK, and similar proposals elsewhere, have the potential to cause serious harm to scholarship, particularly in the disciplines in which we work.
The biggest threat is the proposal that publicly funded research (and possibly work submitted to future research assessments) be published in open access journals, funded through the device of ‘article processing charges’ (APCs) paid to journal publishers. This system, in which an author would need to come up with thousands of pounds in order to secure publication of her or his work, does not encourage early-career scholars, and it raises serious questions about the incentives for journal editors to accept submissions. Instead, we argue for the ‘diamond’ system to be given more attention: journals that charge neither a subscription nor an APC, but may be funded directly by a research council or an institution. Indeed, this model, rather than the APC model known in much better funded disciplines, is already becoming significant in both law and mathematics. We suggest that it could form part of a strategy towards genuine open access.
So. As I said, I only managed to make it to the second day of the fifth edition of Gikii, but it was a very full day, and it showed the strength of the concept (there is definitely an emerging Gikii aesthetic!) and the wide range of contributors. My immediate impressions and various links are on my Twitter feed, and the gikii tag has lots of other views. This post has some remarks on my own session, some shorter remarks on the session I chaired, and some even shorter remarks on the final session of the day. Don’t forget that you can download most of the presentations from both days at this link.
Having arrived from Dublin the night before as part of a triangular journey (Stansted-Dublin, Dublin-Edinburgh, Edinburgh-Stansted), I was first up on Tuesday morning with my own presentation. This time around, my topic was What We Talk About When We Talk About Google (or WWTAWWTAG as it is in my notes). The idea for this presentation came from earlier (and as yet, incomplete) work on the Google Books case, and how it seemed to come at a time when Google’s treatment by politicians, NGOs and academics was in a state of flux. Google is also involved in some of the most controversial media and technology policy issues on the table right now, everything from net neutrality to privacy. So it seemed interesting to dig a little deeper. My presentation (which you can download here as PDF) was therefore an attempt to explore the question in the title in a number of different ways. For example, I looked at the ways in which both courts and parliamentarians in the UK refer to Google – and compared that with a sample of news coverage, finding not just some differences (with the parliamentary discussion still focusing on Google as a general resource for search) but also some interesting internal differences within the media (in this sample, the Daily Mail / Mail on Sunday got very upset about Google Street View). I also illustrated the different faces of Google through various parodies/cartoons produced by others, and talked about the various friends and allies that are found in Google’s public policy activities, and the result in the Viacom case. I do hope to do some more detailed work on this, as it was more interesting (to me, at least) than I had thought. Curiously, it also drew quite a lot of good laughs, with Ray Corrigan giving it a joint comedy award. This is not my usual territory. I don’t think my students would write ‘stand-up comedian’ on their feedback forms.
Luckily, the following presenter, Trevor Callaghan, had a genuine claim to the comedy tag, with a discussion of Google and social networking. It was a really thorough and unquestionably unprintable exploration of the topic, made more lively by the use of Prezi and diversions into broader issues of data, identity and privacy. It was really interesting how he was able to get a sense of what Facebook’s business and cultural models are, and how they differ from those of other players often grouped alongside it. The final presentation in that morning session came from another Gikii serial offender, Andrea Matwyshyn. Her presentation looked at issues of authorised access, with a particular focus on the US Computer Fraud and Abuse Act (CFAA) and similar legislation. Her key arguments concerned the divisions between criminal and civil issues (in particular, the role of contracts and terms of service), and she mentioned a number of key US decisions (such as the Lori Drew case and Register.com v Verio) and the problems stemming from them, including a pretty obvious circuit split (e.g. the difference between IAC v Citrin and LVRC v Brekka). She questioned the purpose of the CFAA and other legislation and whether it was meeting its aims.
The second session had yours truly in the chair, and it included a range of papers on the broad theme of intellectual property:
- Steven Hetcher, “Conceptual Art, Found Art, Ephemeral Art, and Non-Art: Challenges to Copyright’s Relevance”. Steven’s talk (from a US point of view) considered the ‘discrimination’ against forms of contemporary art that, being ‘unfixed’, are not within the common concept of copyright law as based on fixation. In some cases, the work is the process, with no fixed object … although if unfixed art is to be protected, does this raise questions of artistic merit as an alternative mechanism for delimiting the reach of copyright? With a wide range of slides (including a Damien Hirst shark sighting), there was also time to talk about Christopher Lowry’s work as discussed in Satava v Lowry, a 2003 case.
- Gaia Bernstein, “Disseminating Technologies”. This paper was an attempt to go beyond the rhetoric of ‘IP wars’ and to discuss the acceptance and dissemination of new technologies. It builds on the author’s recently published work on innovation (e.g. here). She traced the differences between approaches to technology in the cases of copyright and patent, and the interaction of both with competition. She argued that the user’s role has not been given the treatment it deserves, and pointed to a number of market failures where (due to network effects or the importance of time) specific intervention was necessary. Really interesting stuff, and bonus points for talking about Minitel.
- Christopher Lever, “Netizen Kane: The Death of Journalism, Artificial Intelligence & Fair Use/Dealing”. The third paper used some very creative metaphors and images, both botanical and big-screen (Citizen Kane). An introductory discussion of the future of newspapers and journalism, and the relevance of fair use and fair dealing, gave way to a critique of the failings of DRM and a thorough analysis of the work of Ozlem Uzuner on digital fingerprinting and unique expression.
- Chamu Kappuswamy, “Dancing on thin ice – Discussions on traditional cultural expression (TCE) at WIPO”. The final presentation in a very busy session. Her presentation provoked a lively online and offline discussion on what constitutes TCE in a British or Scottish context, but also offered some valuable points on differences (even where in apparent agreement) between the approaches of UNESCO and WIPO and between traditional knowledge (often patent-related) and cultural expression (often copyright-related), and the links between international legal efforts regarding TCE, folklore, and intangible heritage.
The afternoon session included an even wider range of presentations. Simon Bradshaw & Hugh Hancock talked about machinima (and created some live before our very eyes), suggesting that the ease with which this type of audiovisual work can be created will keep it a fertile area for legal action and academic analysis (not least the prospect of issues around new provisions in the Crime & Policing Act 2009). Ren Reynolds (with Melissa de Zwart, who wasn’t able to join us in person) talked about online games, their statutory regulation in Korea, analogies (and case law) from physical sports like rugby, the relationship between the rules of the game and other laws and rules, and the contract/licence distinction. The last presentations zoomed out and looked at developments across disciplines: Abbe Brown (presentation here) reviewed the various issues, forces and actors in Internet governance and international cooperation (highlighting different approaches and parallel debates), while Michael Dizon (presentation here) presented a post-Lessig/(Andrew) Murray analysis of ‘the network is the law’.
This article has been published in the hot-off-the-digital-press issue (vol 6 no 2 pp 355-376) of SCRIPTed: A Journal of Law & Technology, which as regular readers know is based at the University of Edinburgh’s School of Law, and more specifically its research centre SCRIPT. The article is a version of a paper presented at the March 2009 conference, ‘Governance of New Technologies’. I’ll publish a post about the other papers in the journal tomorrow.
Added Tuesday 18th September:
Three presentations in this parallel session.
The first was my own, “Law in the Last Mile: Three Stories of Wireless Internet Access”. I will make the paper available shortly. I write about the legal restrictions and risks associated with the sharing of Internet access through wifi, the objections to municipal or community wifi systems, and touch on the ‘white spaces’ Internet access proposals. The bulk of the paper deals with the first, looking at what I argue is the inappropriate use of criminal sanctions against users of open wireless access points and the tools that discourage users from sharing. I believe photos were taken of the special interactive element, which I’ll leave as a surprise for the time being.
The second presentation was given by Anniina Huttunen on behalf of a research group at Helsinki, “Cooling-Off the Over-Heated Discussion of Consumer Digital Rights Discourse by Extending the Cooling-Off Period to Digital Services”. Their starting point is that there is a high level of protection for physical goods, but almost none for digital services. Consumers are more empowered than ever (the Facebook user revolt is an example of this), but what is the position of online purchases of software? EU law provides the familiar cooling-off period – no penalties and no reason needed – for situations like doorstep selling, timeshares and distance selling. The case study concerns software sold as downloaded data. Referring to the revision of the consumer acquis: under the 34th recital, data files downloaded during the cooling-off period are not to be included, as it would be unfair to allow cooling-off when the service has been enjoyed in full or in part. At the moment, many providers have a (well-hidden) return policy, offer ‘lite versions’, or place restrictions on return (e.g. downloading for a second time). The pros of allowing cooling-off: it permits testing of technical and contextual compatibility, with no unreasonable cost (no physical return) and no wear and tear (so no need to re-sell the product at a lower price). The cons: the expense for the developer, the design consequences, and seeming to make unauthorised use easier.
The final presentation was Scott Boone’s, ‘Why Study Virtual Worlds’? It was a report on his own efforts but also evangelical in tone, setting out the advantages. There is some cynicism – virtual worlds as ‘this generation’s D&D’ – and critiques that it is all just a fad or hype. But virtual worlds give us a means to study possible futures. Borrowing from the discipline of Futures Studies, we can look at simulation gaming (formerly ‘operational gaming’) and do things we cannot do with the real world in terms of understanding scenarios. Virtual worlds have a unique set of features and practices, and are more focused than the Internet taken as a whole. Already in use are 3D as a user interface (what sorts of benefits do we get?) and the ‘future of money’ (note the disappearance of physical currency and the privatisation of money). The focus of the paper was on five potential outcomes of studying virtual worlds:
- a fully realised third paradigm of computing – after (1) mainframe/client and (2) personal computing, (3) ubiquitous/pervasive computing: an entirely computer-mediated ‘universe’?
- widespread distribution of property without relinquishment of control – are there emerging issues here, e.g. cars on the cellphone model, with control separated from use?
- (nearly) perfect DRM for media distribution – see what the market does;
- software designed for universal connectivity – this will involve different authorisation, practices etc; look at business models, EULAs and so on;
- augmented reality (though how do we do this without putting in all the variables?).
In questions, Boone clarified that his focus was on studying virtual worlds as they currently exist, rather than creating simulations in future virtual worlds (though this too is interesting).
This is the last of my blog updates on the SCRIPTed conference at the University of Edinburgh. Remember, the full list of papers is available here. I will return to the themes of the conference (including the keynote by Prof. Bartha Knoppers) in a later post, and I hope that you have enjoyed these fairly rambling updates. There will be one final session, featuring Lilian Edwards, Andres Guadamuz and TJ McIntyre, which I’m sure will be excellent, but which, unfortunately, I will miss most of for travel reasons.
Jon Bing of the University of Oslo is speaking on the topic of “The Computerisation of Legal Decisions”, though it is a broad canvas indeed, including the history of State data collection in Norway and a real insight into the modern administrative state. He explains the Norwegian systems – e.g. a long-established unique personal identifier and the related recognition of the ‘household’ (address, others domiciled at the same address). This is then related to the tax authorities (reminding us, of course, that personal income is published as a matter of course in Norway and other Scandinavian countries). Calculation of benefits and various other functions (even calculating ‘fair housing costs’) is then possible – so he takes the housing aid system as an example of ‘automated decisions’ in social welfare systems: a legal decision, even if not a ‘fancy trial decision’. Decisions could be appealed, and the appeal rate was about 10% – so they abolished the appeal system, though there are still methods for checking and correcting data. The system has been in place since 1972, written in COBOL. While lawyers are likely to concentrate on hard cases, this is a practical example of computerised decision-making.

Bing explained the process of moving from legislation either to lawyers or to computer programs. The understanding of the legal norm is used to code (i.e. to move from natural language to computer language). The problems are obvious – for example, the programmer is innocent of legal training, yet the legislation and the program should mean the same thing. An interesting twist: in one situation there was a problem with the inconsistent treatment of fractions, and they changed the law to accommodate the program!
Another case study is the calculation of disability benefit in Germany – based on the average income of the last five years, with an exception for time spent in military service, but this was not defined precisely enough. Take, for example, someone dismissed during service (one day in vs one day from the end?). This shows that, with automation, you really cannot include vague terms – everything must be defined beyond argument. Bing suggests some strict criteria (examples in brackets): measures (weight), relative measurements (smaller), natural status (sex), legal status (authorised). In natural language, vagueness is usually resolved through textual context. Note also legal expert judgements (circumstance-based, non-deterministic), which have a certain process but an uncertain outcome. On the other hand, complexity can be a non-problem in computer decision-making; this is often why legal expertise is necessary (because there are too many possibilities for a ‘rule’ to be judged) – does this mean that there is a trend towards complex structures based on strict criteria?
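Bing’s distinction between strict criteria and vague terms can be illustrated with a small sketch. This is purely hypothetical – the function name, figures and rule are all invented for illustration, and the real Norwegian system is written in COBOL – but it shows why a term like ‘fair housing costs’ must first be replaced by exact definitions before a decision can be automated:

```python
# Hypothetical sketch of an automated benefit decision built entirely
# from strict criteria. All names and figures here are invented; this
# is not the actual Norwegian housing aid system.

def housing_aid(income: int, housing_cost: int) -> int:
    """Return annual housing aid, given income and housing cost.

    Every term is a strict criterion: an exact threshold and a fixed
    rate, with no vague concept left over for human judgement.
    """
    INCOME_CEILING = 250_000   # eligibility cut-off (invented figure)
    COVERED_SHARE = 0.4        # share of housing cost covered (invented)

    if income > INCOME_CEILING:   # a number, not "low income"
        return 0
    # round() makes the treatment of fractions explicit -- precisely the
    # kind of detail that, in Bing's anecdote, forced a change in the law
    return round(housing_cost * COVERED_SHARE)
```

The point of the sketch is that a vague statutory term such as ‘fair housing costs’ cannot be coded directly; the programmer (or the legislature) must first replace it with a definition as precise as the ones above.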
The development has been:
The first generation: computerised support, e.g. a case officer facing a client, prepared in the traditional way but has access to a legal information system, internal databases or third party data.
The second generation: the paper case is replaced by a computerised form (collection), with similar information systems and databases – but the system integrates embedded legal knowledge, e.g. help functions as an alternative to legal reference.
The third generation: ‘self-serviced public administration’, client-oriented and interpretation of law is integrated. It is more the execution of public authority than a ‘service’.
Qs from the audience:
- do jurisdictions with strong judicial review have difficulties with this approach, as it’s not so easy to ‘abolish appeals’?
- to what extent could you have an automated, first-stage appeals system?
- does the system itself encourage quite conservative decision-making?