Archive for the ‘privacy’ tag
This week’s English newspapers (including the Guardian and Independent, but there may be others) carried a number of full-page advertisements for Google, which formed part of its current ‘Good To Know’ campaign. The campaign is ‘in partnership with the Citizens Advice Bureau’.
Some parts of the campaign strike me as extremely sensible and useful information, and leave me very pleased that Google is putting its money and reputation behind them. For example, one ad (which I first saw in a Tube station) emphasised Google’s 2-step verification; another (which I saw in print, but can’t recall where) gave examples of good passwords. (You can see a collection of these ads on the Good To Know website.) The most recent ads, though, raise some interesting questions around data and privacy. As readers of the growing literature on the development of Google will know (most recently Douglas Edwards’ I’m Feeling Lucky, on his experiences as employee #59), it’s clear that these issues are thought about and debated a lot within Google; this, however, is my external take, and some quite preliminary questions rather than conclusions.
One ad is about IP addresses – it doesn’t appear to be on the Google site, but I’ve scanned it (apologies for resolution) here. Explaining how a user in Brighton doesn’t need a plumber from New York when they use a search engine, the ad states that results based on where you are use your computer’s IP address: “It’s a number like 18.104.22.168 which acts a bit like the first part of a postcode to tell them the rough area your computer is in”. I think this isn’t the best definition of an IP address, particularly in the week where (in the SABAM decision regarding ISP filtering for copyright reasons) the Court of Justice of the EU found it to be common ground “that the injunction requiring installation of the contested filtering system would involve a systematic analysis of all content and the collection and identification of users’ IP addresses from which unlawful content on the network is sent. Those addresses are protected personal data because they allow those users to be precisely identified”. This confirms a direction in European Union practice, particularly the statements of the Article 29 Working Party (e.g. opinion 1/2008 on search engines, opinion 2/2010 on online behavioural advertising), that an IP address can be personal data – in a way that, I’d suggest, the first part of a postcode is less likely to be.
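To make the ad’s postcode analogy concrete, here is a minimal Python sketch of a prefix-based lookup. The prefix-to-region table is entirely invented for illustration (203.0.113.0/24 is a reserved documentation range, and the ‘Brighton’ mapping is hypothetical); real geolocation services consult large databases maintained by commercial providers.

```python
import ipaddress

# Hypothetical mapping: the network prefix (like the first half of a postcode)
# suggests a rough area/ISP, while the full address singles out one connection.
REGION_BY_PREFIX = {
    ipaddress.ip_network("203.0.113.0/24"): "Brighton (hypothetical ISP block)",
}

def rough_location(addr: str) -> str:
    """Return the rough region for an IP address, if a prefix matches."""
    ip = ipaddress.ip_address(addr)
    for net, region in REGION_BY_PREFIX.items():
        if ip in net:
            return region
    return "unknown"

# The full address is more specific than this prefix lookup suggests: it can
# identify one customer connection at one moment in time, which is why
# regulators treat it as (potentially) personal data.
print(rough_location("203.0.113.7"))  # matches the /24 prefix
```

The point of the sketch is the asymmetry: the lookup only ever uses the prefix, but the full address passed in is far more identifying than the coarse answer it returns.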
Another ad (with a quirky little graphic about extra-shot coffee, which is what I’m drinking as I type this) (scanned here) draws a link between the barista knowing your coffee order (but not your name) as you walk through the door, and how Google and other websites act:
Making a note of your preferences in case you visit them again. It’s how they are able to recommend a particular artist you might like, or if you prefer to fly from a certain airport, or if you like a specific printer ink.
(I think ‘preferences’ here is broader than a technical meaning of preferences as in settings, but am open to correction).
Again, I can see what they are getting at, but I think the anonymous coffee order may not be the best model here – as (a) there are plenty of ‘preferences’ that are more revealing (and yes, legally sensitive) than coffee choice, and (b) concerns about profiling include the cumulative impact of data collection rather than a single point – the barista doesn’t know what you prefer when you go to the clothes shop next door!
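The cumulative point can be illustrated in a few lines of Python. Each individual ‘note’ looks as innocuous as a coffee order, but merging notes across many visits produces a profile that no single shop ever sees (the preference keys below are invented for illustration).

```python
# Each visit yields one innocuous-looking preference, like the barista
# remembering your coffee order (but not your name).
visits = [
    {"coffee": "extra shot"},          # the coffee shop's 'note'
    {"departure_airport": "LGW"},      # the travel site's 'note'
    {"printer_ink": "brand X black"},  # the shopping site's 'note'
]

# A single collector that observes every visit can merge the notes into a
# cumulative profile - something the barista next door never has.
profile = {}
for note in visits:
    profile.update(note)

print(sorted(profile))  # ['coffee', 'departure_airport', 'printer_ink']
```

Nothing in any single dictionary is especially revealing; the privacy question arises from the merged `profile`, which is why the single-shop barista is an imperfect model.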
Google does some great work around data – and the Good To Know website highlights this, including work on Data Liberation, cookie deletion and more. But there’s something about the ads above that I’m not as sure about.
I mentioned this campaign to a fellow academic and s/he pointed out that the ultimate target here might not be users, but the forthcoming (and unpopular with large Internet companies) revision of the Data Protection Directive. If that’s the case, Google’s intervention isn’t unwelcome – we need to hear its voice – but it’s worth debating those points. If it’s just about consumers, I think it goes in the right direction (particularly the security stuff), but the wording could be a good bit tighter.
Finally, I think there are questions to be asked about the role of the Citizens Advice Bureau. It knows well that the interests of consumers are different to the interests of corporations – see for example its current struggle to publish the results of investigations and how libel law appears to prevent that. So should it be involved with (a) a particular company and (b) a particular view of the law of privacy? Indeed, the UK government proposes (consultation paper here) to take a whole range of consumer information and advocacy functions away from public bodies and transfer them to the (private, charitable and generally wonderful) CAB. Should it therefore be more careful about taking ‘sides’ – appearing to endorse the views of Google, and having the ads presented as authoritative and neutral?
The internet is holding governments to account by creating a platform for leaked information as well as for protest groups to freely talk. Join us to debate whether the internet can and should be controlled. Speakers include Herbert Snorrason, founder of Openleaks.org; David Clemente, Chatham House; and Steve Murdoch, Computer Laboratory, University of Cambridge.
Questions – is the Internet a force for good? Can it be used for evil? Can it be policed? Should things be kept secret? Should our data remain private? Should countries have a say on what is available?
Jim Killock – Executive Director, Open Rights Group
Jim introduces ORG, discussing the funding (mostly from members at £5 a month) and recent campaigns (e.g. privacy, Digital Economy Act). Internet freedom should mean, for example, that responsibility lies with sender and receiver but not with an intermediary (particularly ISPs), and this is under threat. But what are the benefits? Political and cultural participation (at unprecedented level) – and therefore power, transparency & accountability (not just re governments but re journalists and corporations), contributing to a ‘better sort of politics’. ORG and others hear that the Internet is unregulated, the Wild West, full of criminals and so on from some MPs and policymakers; others see it as television in the making. But really, the Internet is like society, the roads, the pub, a marketplace, a village green – reflects the world around. ORG argues for a rights-based debate to avoid the Internet being a negative place, e.g. mass surveillance (government or corporate), or intermediaries choosing what you see (biased search, advertisers gaining information, formal and informal censors, copyright owners having power). Who then are the enemies of Internet freedom? Top of the list are copyright lobby groups – who want to see traffic, snoop on customers, prevent access. But also state security apparatus (want more access to information that flows). More benign are those with legitimate worries about society (e.g. re anorexia, pornography) who end up in a position where they want ‘the Internet to stop doing these things’. These various enemies have given up targeting the user (more or less) and focus on the middle – to stop the network working like a network, placing new controls on it. Internet freedom can survive though because it is a value for all of us and it is becoming obvious what we get from it; the need for eternal vigilance is no different. Join the Open Rights Group.
David Clemente – Researcher, International Security Programme, Chatham House
(Personal opinion, not that of Chatham House). Agrees that this is a debate about power. States and others can be uncomfortable with how cyberspace challenges accepted notions of power, e.g. an ageing Senator looking for the off switch after a cyberattack. But this is increasingly less common. Ability to innovate (e.g. write an app!) affects distribution of power – and lines between transparency and secrecy that a government might draw. Wikileaks is an example of the US responding strongly even though it might have welcomed it had it been about Iranian secrets. Post-9/11 move in US military from need to know to need to share – over 1m people had access to these cables, produces benefits but can return to sting you. Wikileaks far surpasses e.g. the Pentagon Papers in terms of volume; governments can collect a lot of data (but can also use it). Governments can still apply pressure e.g. on credit card companies, which as we have seen has an impact on Wikileaks. Cyberspace ‘supercharges’ the blurring of power lines, but not all of this is new. Khomeini distributed his message on audio cassette in the late 70s – now it’s even quicker. Yet there are still some things we would rather governments keep secret – trade negotiations? health and tax records? certain diplomatic conversations? – all can be argued to be necessary. A disclosure like Wikileaks can inhibit free conversations (e.g. between US diplomats and other states); various allegations were made re the Afghan war log disclosure. Some leaking can be good, promoting ‘positive transparency’ (Tunisia?) – not necessarily that it caused a revolution, but every little stone helps. In sum, the Internet is still relatively young – as John Naughton said, we wouldn’t have expected Gutenberg to tell us about the impact of printing 20 years after its invention.
Dr Steven Murdoch – Security Research, University of Cambridge
The key for Internet regulation is that it is made of cables which are managed/used by people, who can be influenced by governments. Extreme examples exist, e.g. North Korea’s isolation, blocking in China, and blocking in parts of the Middle East for religious reasons. But in the UK, a system that was accepted by ISPs for images of child abuse is now to be used [he's referring to the latest instalment in the Newzbin case, reported here] for blocking access on copyright infringement grounds. The trend is away from blocking at source towards blocking access. Many have secrets – military, government, companies – but each employs individuals who may wish to disclose them. Others have discussed censorship as a response, but surveillance is one too. Technologies like Tor can be used, and are, for various reasons – secrecy, avoiding advertising tracking, but also by human rights workers and law enforcement. Although users can be protected, there is a trend towards building both censorship and surveillance into the system. Do we therefore have enough, too much or too little freedom on the Internet today? We should think about this beyond Wikileaks, e.g. phone hacking, expenses, Tunisia.
Herbert Snorrason – Openleaks.org (via Skype…)
While it’s important to protect secrecy of things like health records, there is a need to rebalance other issues (e.g. government negotiations) in favour of access to information. We need a variety of tools for this, FOI etc good but also a ‘crowbar’ e.g. Wikileaks tried this, but no single organisation can ever provide everything we need. The workload when I [i.e. Herbert] was there was insane. The work is important and needs to be done with the goals in mind. (Stopped after a couple of minutes due to tech issues etc but will participate in questions)
Chair (missed name, sorry – BBC journalist) sums up the debate, mentions the use of productivity software (i.e. blocking social network use at work) by Syrian government, and opens it up to the floor
How serious is the threat by Government to restrict social media during disorder?
JK: sorry that people like David Lammy overreacted in this way, but ORG and others provided alternative view. The debate then shifts from censorship to surveillance which still needs to be treated seriously, money has been allocated within Home Office for interception modernisation.
DC: notes Google’s publication of information about state requests with a very high number in the US.
HS: if David Cameron would block Twitter during riots, there would be a political cost…
What would a cyber attack on the UK look like?
SM: it’s a bit of a buzzword, but in many ways cyberwarfare is already happening (although not really for speech reasons, different goals). The sky isn’t going to fall.
Chair: notes event with Foreign Office next week in London which demonstrates level of concern and response e.g. to viruses targeted at government networks.
DC: US central command (in 2008) was attacked – but much of this is the human element (e.g. opening an email, picking up a USB stick in a car park)
Apple spying on its customers – to what extent are networks being used for this sort of purpose?
JK: you have the right not to be snooped on and you should need to agree, but none of this is very transparent. Many online services are ‘free’ but they are selling advertising based on your data. Facebook hardly ever deletes anything. But social life, campaigning etc is important re Facebook – is this then consent or putting up with it? Also note behavioural advertising/cookies – at a bare minimum I should be able to see this but told no such right (information is about cookie not person)
HS: there is regulation of this through the European Union but many companies are based in the US and are exempt – the EU provisions are not perfect but go some way. But note Facebook has designated its office in Dublin (for non-US/Canada) and is therefore subject to EU regulation, they have been flooded with information requests under EU law.
Chair: notes Reddit campaign for Facebook data disclosure, file ten times the size you would normally get.
Ross Anderson (from the floor): judiciary has the chance to keep up with technology; US courts are open to lawsuits from private individuals and organisations
(missed name, ex-lawyer): Facebook is dominant so use should be made of European competition law…
JK: Wikileaks affair was demonstration of government not using proper legal procedures, leaning on private companies – this is the Wild West. And regulators (e.g. Irish DPC, UK ICO) have limited remedies and are reluctant to use them. Would also need to figure out if Facebook is dominant in a market and how that relates to data protection.
This is my (personal) report on the Media and Communications subject section at the annual conference of the Society of Legal Scholars, held in Cambridge this week. For those not familiar: the SLS is the organisation for legal academics in the UK and Ireland, and this was its 102nd annual conference. As well as plenary sessions and an AGM, the main business of the conference is a range of subject-specific parallel sessions, of which Media and Communications is one. A related area is Cyberlaw, but this year the two ran at different times in the week (the conference is divided into groups A and B), which did appear to increase the attendance at both. During this year’s meeting, I was elected as the fourth convenor of the section, taking over from Mike Varney (and before him Tom Gibbons, and before him Eric Barendt) – a quite daunting line of succession!
Session 1 had a focus on information – although in very different ways. Damien Carney (Portsmouth) opened up with ‘Truth and the unnamed source’, considering the importance of truth (and objectivity) in the law and ethics of the protection of sources. He looked at the recent decisions in National Post (Canada) and Financial Times v UK (the latest instalment in the Interbrew case), making particular points about the reassessment of who the privilege on the protection of sources ‘belongs’ to, with the Canadian and European courts heading towards an emphasis on the rights of the public to know (the truth?). Lawrence McNamara (Reading) followed with his paper on terrorism and disclosure obligations, considering section 38B Terrorism Act and the various laws that preceded it. Despite very few cases on the matter, the provision has an impact on the practices of media organisations, although there are differences between the thresholds applied within organisations and as apparently required by law. He also debated the rights and wrongs of a media exemption and how necessary it is to take a legal approach in any event. Finally, Neil Richards (Washington University, St. Louis) presented his theory of intellectual privacy. He distinguished between ‘tort privacy’ and ‘intellectual privacy’, particularly on the difference between the impact of each on freedom of expression, suggesting that the former might be confined to ‘truly shocking’ disclosures, but the latter was important because it protects the process of considering and forming ideas. Interestingly, there was a strong technological dimension here, given the role of search engines and of surveillance technologies. He proposed four key aspects of the right to intellectual privacy: thought and belief, the right to read, spatial privacy and confidentiality, and also considered the need for a horizontal approach rather than a negative constitutional doctrine alone.
Session 2 had a European theme, with papers from Irini Katsirea and myself. Irini’s presentation was about product placement, specifically the implementation of the new rules set out in the Audiovisual Media Services Directive (AVMSD) in two jurisdictions, Germany and the UK. She highlighted some of the vaguer aspects of the Directive, such as the criticism (but not outright ban) of thematic placement, the conflict between provisions on surreptitious commercial communications, undue prominence, and limited scope for allowing product placement. The UK has excluded certain genres (above and beyond the Directive), but only for broadcasters under UK jurisdiction, again because of the Directive. Germany has required identification of PP in acquired programming, but without an offence of breach of this duty. The UK and Germany took different paths on thematic placement – unclear in the former, banned in the latter. The minimal requirements of the Directive on notifying viewers were also considered. In the discussion of the paper, we also wondered to what extent product placement was actually present in EU-origin programmes since the Directive. My own paper was on the European Convention on Transfrontier Television, a Council of Europe instrument dating from 1989 but currently in serious trouble after an aborted attempt to amend it. After explaining the history of the relationship between it and the EU’s media law directives, I discussed how the European Commission objected to the amendments that would have brought it up to date with the AVMSD, assessing the legal basis for this objection (external powers of the Union) and how this was debated in various fora. I also looked at the reaction of the UK, which had in the 1980s been a strong supporter of the Convention, but had some problems with the current amendments and mixed feelings about the Commission’s intervention. 
I concluded with a wider discussion on EU–Council of Europe relations and how other areas (such as media pluralism and impartiality) might fare in future developments. [If readers will permit a further note: I have a draft paper on which comments would be appreciated; it’s not available online, but I’m happy to supply copies if you are happy to offer your views: email me.]
Session 3 was about recent developments, with both a European context and a British focus. Tom Gibbons (Manchester) looked at the relationship between reputation and privacy within Article 8 ECHR, and the differences between English law on defamation and on privacy. He was reluctant to describe what is happening in Strasbourg as a doctrine, given the inconsistent positions expressed by differently constituted courts, but discussed a number of defamation-type cases where the engagement of article 8 was taken for granted. Nonetheless, in Karakó v Hungary, there may have been a move away from this position, with some importance attached to internal and external notions (he considered, later, whether reputation is external and privacy is internal). English cases on injunctions (ZAM, Terry) have added comments on the importance of reputation, and the Supreme Court’s decision on freezing orders discussed ECHR decisions and the need for a serious threshold. Are we moving towards a Re S-style ultimate balancing exercise? Is the justification defence to defamation threatened by an article 8 approach? What about Reynolds? He also argued that the ability to evaluate others is important, and that subsuming reputation into article 8 may be difficult to reconcile with this. Following on, my UEA colleague Michael Harker presented his paper on vertical restraints in broadcasting, or why ‘content is king’. As well as a thorough explanation of the market structure of pay-TV in the UK, he focused on Ofcom’s intervention regarding sports channels, particularly the requirement on Sky to offer its channels to other platforms (e.g. digital terrestrial) at a regulated price. Michael explored the differences between ‘sectoral’ and ‘competition’ approaches, and the remedies available in both cases. The possible consequences of intervention were outlined, including the need to protect innovation and also the policy goals of (for example) promoting broadband uptake.
Finally, session 4 was a pair of case studies. Ewa Komorek (my former colleague as a doctoral student in Dublin) reported on the ups and downs of Polish media law. She looked at three particular issues: ‘Rywingate’, politicisation of public service media, and problems with press freedom and criminal law. The first was a major national scandal regarding the proposed takeover of a private television channel by a major media company, and the disclosure of an attempt to exchange ‘a law for a bribe’, as a national newspaper reported. This led to a major report on the activities of the ‘group in power’, the resignation of a government, and wider discussion of the adequacy of the legal framework on media concentration and mergers. The second is also about the relationship between politics and media, with Ewa explaining the structure of the public service broadcaster and recent changes that may (or may not) increase the independence (from political influence) of the broadcaster. Finally, she looked at a range of criminal provisions, including those about insulting the president (imprisonment of up to three years and no defence of provocation!) and defamation itself, despite criticism from the European Court of Human Rights on the impact of these provisions. She was followed by Eliza Varney (Keele), whose presentation was about disability and ICTs after recent changes to EU law, particularly the 2009 amendments to the electronic communications directives and EU equality law. Although some progress has been made through the updating of universal service provisions, she pointed to outstanding issues such as the consumer-driven approach to regulation, the focus on sensory disabilities (e.g. as compared with cognitive), and the weakness (after industry lobbying) of some provisions.
Eliza argued for a universal design approach and considered whether a disability-specific provision of general equality law (particularly if the proposed directive on discrimination re access to goods & services does not proceed) might be of assistance.
Human Rights in Ireland is a group blog that contains many useful posts on, as you might expect, human rights in Ireland. I was very pleased to write a guest post for the blog, which has just been published. I’m republishing it here for those who have not already seen it. I do recommend that you subscribe to the full HRinI feed!
The recent attempt by JP McManus to secure the removal of ‘fake profiles’ on Facebook (reported by the Irish Times on 30 May) through an application for an injunction (struck out after the pages were taken down) is just the latest reminder of the importance of intermediaries when it comes to law and the Internet. In this situation, McManus appears to have been doing something that isn’t difficult to understand – turning to the law to make the offending page disappear from the Web. In the UK, of course, we have seen the last few weeks as a significant time in the development of the law on privacy injunctions, with judges, newspapers and certain Twitter users taking fairly different approaches.
However, it’s far from a new problem. Within the then-novel field of what some called cyberlaw, the middle part of the 1990s was dominated by earnest debates in journals, courts and parliaments on the future of law. John Perry Barlow’s assertive Declaration of the Independence of Cyberspace of 1996 told governments they were not welcome in this new world that was governed by its own social contract, dismissing laws on everything from property to identity; ‘they are all based on matter, and there is no matter here’. Many responded by pointing to existing legal principles or forms of technological enforcement. Regarding Facebook, Twitter and other sites, the key resolution of these early debates, which continues to shape their rights and responsibilities as well as those of their users and those their users write about, was in new statutory principles on liability. For example, article 14 of the E-Commerce Directive (2000/31), transposed into Irish law as SI 68/2003, points to a ‘notice and takedown’ approach. This means that Facebook is not liable for – say – a defamatory statement posted by one of its users, as long as it ‘upon obtaining such knowledge or awareness [of the unlawful activity], acts expeditiously to remove or to disable access to the information’. In the case of ISPs acting as ‘mere conduits’ (like your friendly broadband provider), a greater degree of immunity is provided.
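The logic of the article 14 hosting defence can be sketched as a toy model in Python. The 48-hour figure below is an arbitrary assumption for illustration only – the Directive never quantifies ‘expeditiously’, and in practice the question is heavily fact-dependent.

```python
from datetime import timedelta
from typing import Optional

# Arbitrary illustration: the Directive itself never defines 'expeditiously'.
EXPEDITIOUS = timedelta(hours=48)

def host_retains_immunity(had_knowledge: bool,
                          time_to_takedown: Optional[timedelta]) -> bool:
    """Toy model of the article 14 hosting defence (Directive 2000/31/EC).

    A host keeps its immunity if it had no knowledge or awareness of the
    unlawful material, or if, upon gaining knowledge, it acted expeditiously
    to remove or disable access to it.
    """
    if not had_knowledge:
        return True   # no knowledge or awareness: immunity stands
    if time_to_takedown is not None and time_to_takedown <= EXPEDITIOUS:
        return True   # acted expeditiously after notice
    return False      # knew, but did not take down in time

# A host that removes a notified post within a day keeps the defence;
# one that ignores the notice does not.
print(host_retains_immunity(True, timedelta(hours=24)))  # True
print(host_retains_immunity(True, None))                 # False
```

The sketch makes the structural point visible: liability turns entirely on knowledge plus response time, which is exactly why the meaning of ‘expeditious’ (discussed below) matters so much.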
The position is a little different in the US, where a distinction is drawn between intellectual property (governed by a notice and takedown system in the Digital Millennium Copyright Act) and other (non-criminal) claims (close to absolute immunity without a takedown requirement, under the Communications Decency Act). Other provisions may also be relevant at a national level, such as s 27 of the Defamation Act 2009 in Ireland on innocent dissemination.
Whatever the position of the intermediary, important rights are at stake. Too much immunity, and the aggrieved person will say that they cannot see their rights vindicated; Danielle Keats Citron also argues that there are consequences for equality. Too little, and the intermediary becomes risk-averse, taking down content at the mere hint of possible illegality, as Ahlert, Marsden & Yung’s famous study demonstrated in 2004, with deleterious consequences for the right to communicate. Even where conditional immunity is pursued, there are further questions to be answered – what is expeditious, for example, and does it vary based on the gravity of the situation or the number of people who have retweeted the information? What of providers located in the US without meaningful assets, customers or facilities in the EU? Furthermore, beyond the big issue of liability, hosts may also be faced with Norwich Pharmacal orders, to disclose the name of a user where the host is ‘mixed up’ without fault in the wrongs of others.
Although much of the news coverage of Twitter seems to have glossed over this point, such an order was granted in respect of Facebook as far back as 2008: Applause Store Productions v Raphael [2008] EWHC 1781 (QB), para 10. There are some unresolved issues (which the ECJ has hinted at) on the balance between privacy and e.g. copyright enforcement, and of course, there are limits to (a) how tolerant a court is of mass applications and (b) how much information is available (or useful) through this process.
Indeed, the Irish Times also reports that McManus had raised constitutional and data protection claims. Neither is particularly surprising, and it surely won’t be long before an Irish court has the opportunity to try and deal with this balance. The most significant case so far has been about a betting chat room (Mulvaney v Sporting Exchange t/a Betfair [2009] IEHC 133), but that was a fairly straightforward application of the Directive; the various ‘music’ cases did discuss (albeit unsatisfactorily) the need to balance various rights in the context of the claims of record companies against ISPs. Certainly, reliance on Convention rights has been important in the grant of privacy injunctions in England; the recent cases have been about the need for injunctions in the face of disclosure (e.g. CTB [2011] EWHC 1326 (QB)), rather than takedown disputes. However, if the Irish courts see a fully-argued constitutional claim, difficult issues of EU law will need to be negotiated. It must surely be hoped that the current review of the E-Commerce Directive will take note of the decade’s worth of development and interpretation and set out a system for intermediaries, including a method (despite its title) to give thorough consideration to relevant fundamental rights as part of the process, whether they are engaged directly or indirectly.
I’m one of the lecturers on a final year optional module, Media, Entertainment & Sports Law. Fairly early on in the year, I teach a section called ‘journalists and the law’, which is more or less a collection of issues other than defamation and privacy that affect the work of the journalist in general (broadcast-specific issues come later in the course). As the focus shifts to pre-publication work (with D&P doing most of the heavy lifting for post-publication spats), the recent interest in investigative journalism provides a useful way to look at the moral/ethical/practical/legal discussion of how far the journalist should go to get a story. So for the seminar on this topic, as well as reviewing the plethora of reports on ‘phone hacking’ et al, I also ask students to look up and criticise any decision of the Press Complaints Commission under clause 10 of the Editors’ Code (the asterisk means that there may be public interest exceptions to the clause):
10 *Clandestine devices and subterfuge
i) The press must not seek to obtain or publish material acquired by using hidden cameras or clandestine listening devices; or by intercepting private or mobile telephone calls, messages or emails; or by the unauthorised removal of documents or photographs; or by accessing digitally-held private information without consent.
ii) Engaging in misrepresentation or subterfuge, including by agents or intermediaries, can generally be justified only in the public interest and then only when the material cannot be obtained by other means.
I like this clause because it weaves together things that are (reasonably) obviously illegal, things that are dubious, and things that are very likely not. The decisions also bring up the lovely newspaper history of the ‘our reporter made his excuses and left’ type of subterfuge. But it’s not the highest-profile bit of the code and the decisions were often quite inconsequential. However, for next year’s iteration, we finally have a Big Case under clause 10 to talk about…that’s the complaint of the Liberal Democrats against the Daily Telegraph, published on 10 May 2011. Not only was the original story a Big Story (the various admissions made by Vince Cable when, as Jeremy Hunt put it with a nod to the contemporaneous Mosley decision, two young women tied him up in knots), but the finding against the Telegraph is also Big. I do think the finding is well-argued, although it could be confined to its own facts if newspapers were able to make a stronger case of ‘acting on information’ as opposed to fishing. Thinking outside of a formal analysis of the decision, it’s also Big in that it comes at an important time for self-regulation, with the PCC being able to demonstrate its willingness to protect a particular form of privacy, when the law of privacy itself is in flux and the PCC criticised by some for its performance regarding phone hacking.
(I held off on this until after the students referred to sat their exam, which they did on (spooky music) Friday the 13th…)