Saturday Hassan and Human Rights

The case of Saturday Hassan, widely reported in the Daily Telegraph, the Daily Mail and on BBC Radio 4's Today programme on 21 December 2012, highlights the confusion that still exists about the interaction between the Data Protection Act 1998, the Freedom of Information Act 2000 and, indirectly, the Human Rights Act 1998.

Saturday Hassan is serving a life sentence for the murder of Darren Deslandes, who was shot dead on New Year's Eve 2009. Darren's parents, Wintworth and Lurline Deslandes, want the Home Office to disclose Hassan's immigration status, so that it can be confirmed that, if he is an illegal immigrant or a foreign national, he will be deported when he is finally released from jail. It is reported that the Home Office has refused the request on the grounds of protecting Hassan's right to privacy (under Article 8 of the European Convention on Human Rights, incorporated into UK law by the Human Rights Act 1998).

This is an inadequate reason without further explanation from the Home Office. It may also be wrong in law, but the relevant law is complex. I set out below one way in which the information could be lawfully disclosed, but there are others that would need careful analysis of all the relevant facts.

There are various offences related to immigration under, for example, the Immigration Act 1971. It can therefore be legitimately argued that information concerning Hassan's immigration status is sensitive personal data under the Data Protection Act 1998, to which a higher level of safeguards applies. However, sensitive personal data can be processed for a number of lawful reasons, including for the administration of justice or for the exercise of any functions of a Minister of the Crown or a government department. Its disclosure, if for these purposes, would therefore not be a breach of the data protection principles, and so would be permitted under the Freedom of Information Act 2000.

What is covered by the “administration of justice” purpose in the Data Protection Act 1998 has not been judicially determined, but in cases that have reached the European Court of Human Rights (ECHR) in Strasbourg, it has been made clear that the Court will consider whether any interference with a citizen's Article 8 right to privacy is justified. In the terms set out in S and Marper v The United Kingdom [2008] ECHR 30562/04 (a case involving the retention of DNA records by the police), the ECHR said:

An interference will be considered “necessary in a democratic society” for a legitimate aim if it answers a “pressing social need” and, in particular, if it is proportionate to the legitimate aim pursued and if the reasons adduced by the national authorities to justify it are “relevant and sufficient”. While it is for the national authorities to make the initial assessment in all these respects, the final evaluation of whether the interference is necessary remains subject to review by the Court for conformity with the requirements of the Convention (see Coster v. the United Kingdom [GC], no. 24876/94, § 104, 18 January 2001, with further references). (paragraph 101)

It is therefore arguable that the Home Office could decide that the Deslandes family, and other similar indirect victims of crime, do have a pressing social need to know whether they are safe from the perpetrator of the relevant crime reappearing on their doorstep. It is certainly not as simple as saying that Hassan's right to privacy trumps every other consideration.

DoJ, Wikileaks and Twitter: Stones and Glasshouses

There seems to be a degree of outrage on many social media channels about the Department of Justice in the United States obtaining a court order to require the US-based social media platform Twitter, and possibly Facebook and Google as well, to reveal account information about certain users who are alleged to be involved with Wikileaks. There should be no doubt amongst UK social media commentators or users that the law in the UK is more generous to government authorities than anything in the US.

US Law

The court order against Twitter was made under 18 USC §2703(d): an order made on application to a magistrate judge (and not a subpoena, as is being widely reported). These orders can only be granted where the applicant government entity shows that there are reasonable grounds for believing that the information it will obtain from the respondent communications providers will be relevant and material to an ongoing criminal investigation. Whilst we are not experts in US law, we believe that orders under 18 USC §2703(d) enable the government entity making the application to obtain from a respondent communications provider what we in the UK would call the communications data (see below) for a particular account, together with details about the subscriber or customer for that account. The contents of any communication can only be demanded under such an order if they are over 180 days old; otherwise another criminal evidence procedure is required. As far as we are aware, in the US there is no federal statutory obligation on communications providers to retain communications data, but 18 USC §2703(f) does provide for data preservation orders.
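As a rough sketch of our lay understanding of these distinctions, the decision could be expressed as follows (the function and its three data categories are our own simplification, not anything found in the statute):

```python
from datetime import date, timedelta

def procedure_required(data_kind: str, stored_since: date, today: date) -> str:
    """Simplified sketch of which procedure reaches which data under 18 USC 2703.

    data_kind is one of 'subscriber', 'communications_data' or 'contents'.
    """
    if data_kind in ("subscriber", "communications_data"):
        # A 2703(d) order suffices: reasonable grounds that the information
        # is relevant and material to an ongoing criminal investigation.
        return "2703(d) order"
    if data_kind == "contents":
        if today - stored_since > timedelta(days=180):
            return "2703(d) order"  # contents over 180 days old
        return "other criminal evidence procedure"  # e.g. a search warrant
    raise ValueError(f"unknown data kind: {data_kind}")
```

On this sketch, recent message contents are out of reach of a section 2703(d) order, while subscriber details and communications data are not.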

UK Law

This post explains the relevant UK law, which shows that not only can similar communications data to the Twitter account information sought by the Department of Justice be obtained by government entities in the UK from UK communications providers, but that information can be demanded for much broader purposes than in connection with an ongoing criminal investigation. 

In the Regulation of Investigatory Powers Act 2000 (“RIPA”), “communications data” is defined as being (section 21(4) of RIPA):

(a)  any traffic data comprised in or attached to a communication (whether by the sender or otherwise) for the purposes of any postal service or telecommunication system by means of which it is being or may be transmitted;

(b)  any information which includes none of the contents of a communication (apart from any information falling within paragraph (a)) and is about the use made by any person—

(i)  of any postal service or telecommunications service; or

(ii)  in connection with the provision to or use by any person of any telecommunications service, of any part of a telecommunication system;

(c)  any information not falling within paragraph (a) or (b) that is held or obtained, in relation to persons to whom he provides the service, by a person providing a postal service or telecommunications service.

Whilst it is unclear to what extent communications data under RIPA includes web page or other internet usage data, the definition of traffic data was carefully drafted to exclude web page information (rider at s.21(6)).

Whilst communications providers had no standing obligation to retain data under RIPA, a designated person (as defined in sections 25(1) and (2)) may require any operator of a telecommunications system that is “in possession of, or be capable of obtaining, any communications data” to obtain that data, if it is not already in the operator's possession, and disclose it (section 22(4)).  However, the grounds under RIPA upon which communications data can be ordered to be obtained are the most extensive in any UK legislation.  They include, for example, “the purpose of assessing or collecting any tax, duty, levy or other imposition, contribution or charge payable to a government department” (section 22(2)(f)).  The original purposes have also been extended by the Regulation of Investigatory Powers (Communications Data) (Additional Functions and Amendment) Order 2006 (all these purposes together being the “RIPA Purposes”).

The scope of these RIPA Purposes was addressed in the Home Office Acquisition and Disclosure of Communications Data Code of Practice, which came into effect on 1 October 2007 (the “RIPA Code”). The RIPA Code seeks to emphasise that any action by a designated person, or a person authorised by them, must be “necessary and proportionate” (see paragraphs 2.1, 3.5, 3.7, 3.31 and 3.48). However, it does not contain much in the way of guidance on how a designated person is to assess what is “necessary and proportionate”.

Any notice given by the designated person to a communications provider is only valid for a maximum of one month (section 23(4)), but it would appear that under RIPA the acquisition period for the communications data that is the subject of the notice can be unlimited.  The RIPA Code states that any notice must give the start and end dates for the acquisition of data, but limits future end dates: where a notice relates to the acquisition of communications data that will or may be generated in the future, the future period is restricted to no more than one month from the notice date (paragraph 3.44).

In practice, government entities in the UK do not have to consider seeking an order under section 22 of RIPA to preserve communications data, as the UK has for a number of years operated a data retention regime.  Communications providers in the UK are required to retain communications data under the Data Retention (EC Directive) Regulations 2009 (the “Data Retention Regulations”), which implement Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 (the “Data Retention Directive”) on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks. The Regulations do not set out the purposes for data retention, but the Data Retention Directive states that the intention is to “ensure that the data is available for the purposes of the investigation, detection and prosecution of serious crime” (Art. 1(1)) (the “Data Retention Directive Purposes”) (emphasis added).

In the Data Retention Regulations, “communications data” is defined as “traffic data and location data and related data necessary to identify the subscriber or user”.  Traffic data means “data processed for the purpose of the conveyance of a communication on an electronic communications network or for the billing in respect of that communication and includes data relating to the routing, duration or time of a communication” (Regulation 2).  This definition differs slightly from that set out at section 21(4) of RIPA (see above); the clearest differences are that in RIPA location data is expressly included and defined (at sections 21(6) and (7)), and that the Data Retention Regulations use a broader definition of traffic data.  In particular, the definition of traffic data in the Data Retention Regulations does not exclude data at the level of web page information.

Under the Data Retention Regulations, public communications providers are required to retain the communications data set out in Regulation 4 and the Schedule.  This is generally data necessary to: (a) trace and identify the source of a communication; (b) identify the destination of a communication; (c) identify the date, time and duration of a communication; (d) identify the type of communication; or (e) identify users' communication equipment (or what purports to be their equipment).  The retention period for all communications data retained under the Regulations is twelve months (Regulation 5).  The Data Retention Regulations do not include an access regime for retained communications data, but merely state that access may be obtained only in specific cases and as permitted or required by law (Regulation 7).
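The Regulation 5 retention period amounts to a simple date calculation, which can be illustrated as follows (illustrative only: twelve months is approximated as 365 days, and the function name is ours, not the Regulations'):

```python
from datetime import date, timedelta

# "Twelve months" under Regulation 5, approximated here as 365 days.
RETENTION_PERIOD = timedelta(days=365)

def within_retention_period(generated_on: date, today: date) -> bool:
    """True while a communications data record is still within the mandatory
    retention period, i.e. while the provider must still hold it."""
    return today - generated_on <= RETENTION_PERIOD
```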

Other Relevant Legislation

Data Protection Act 1998

Under the Data Protection Act 1998 (the “DPA”), the fifth data protection principle (at paragraph 5 of Part I of Schedule 1) provides that personal data shall not be kept for longer than is necessary for the specified and lawful purpose(s) of the data controller.  Consequently, communications providers ought to state in any fair processing notice made available to their customers that communications data is being retained as required by the Data Retention Regulations and may be disclosed to public authorities permitted to access it under RIPA, even though most of this processing will be exempt from the subject information provisions (as defined at section 27(2) of the DPA) under an exemption in Part IV of the DPA (section 28 (National security) and section 29 (Crime and taxation) being the most obvious).

Communications providers will be relying, in most cases, on the lawful purpose set out in paragraph 5 of Schedule 2 of the DPA (processing necessary for the administration of justice, to carry out statutory functions or functions of the Crown, a Minister of the Crown or a government department or for “the exercise of any other functions of a public nature exercised in the public interest by any person”), or, where the communications data contains sensitive personal data, on the purposes set out at paragraph 7 of Schedule 3 of the DPA (as paragraph 5 of Schedule 2, except without the ‘functions of a public nature exercised in the public interest’ purpose).

Human Rights Act 1998

Article 8(2) of the European Convention on Human Rights (the “Convention”), incorporated into UK law by the Human Rights Act 1998 (“HRA”), provides that “there shall be no interference by a public authority with the exercise of this [Article 8 privacy] right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.” (the “Art 8(2) Purposes”).

The principle of retaining communications data for the Data Retention Directive Purposes, which are narrower than the Art 8(2) Purposes, is therefore lawful under the Convention and the HRA. What is open to question is the lawfulness of the Data Retention Regulations' retention periods, and of any interference with data subjects' rights to privacy where retention (and access) is carried out for RIPA Purposes that go beyond those set out at Article 8(2).

[We found the post “Thoughts on the DOJ wikileaks/twitter court order” by Christopher Soghoian on his slight paranoia blog interesting – and useful to confirm our understanding of 18 USC § 2703.]

#TwitterJokeTrial: a Deconstruction of a Statutory Provision

Doncaster Crown Court © Crown Copyright

In this post, I examine a notorious criminal offence which for many years lay dormant on the statute book, unloved and neglected: section 127(1)(a) of the Communications Act 2003.  Perhaps it should be known as the #TwitterJokeTrial provision (after the hashtag used on Twitter for tweets concerning R v Paul Chambers (unreported)).  The Chambers case has certainly made the provision internationally (in)famous. It was used to prosecute Paul successfully for an ill-advised tweet that mentioned blowing up Robin Hood Airport.

The provision states:

A person is guilty of an offence if he … sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character.

This is not a trivial offence, as section 127(3) of the Communications Act 2003 states:

A person guilty of an offence under this section shall be liable, on summary conviction, to imprisonment for a term not exceeding six months or to a fine not exceeding level 5 on the standard scale, or to both.

Level 5 on the standard scale is currently £5,000 (Section 37 of the Criminal Justice Act 1982, as amended).

Strict Liability?

Statutory interpretation relies on courts taking a statutory provision at face value (the so-called Literal Rule – e.g. R v Harris (1836) 7 C & P 446, where the defendant's conviction for an offence 'to stab cut or wound', following his biting off of his victim's nose, was quashed because the provision implied that an instrument had to be used).  It eventually became accepted that courts could depart from a literal approach in circumstances where it led to an absurd or repugnant result (the so-called Golden Rule – e.g. Becke v Smith (1836) 2 M&W 195).  Only if the court finds there is some ambiguity can it consider what the legislature intended or what mischief or defect in the law the provision was intended to remedy (Heydon's Case [1584] EWHC Exch J36).

On a first read of the #TwitterJokeTrial provision, you might think that sending a message objectively considered to be indecent, obscene or menacing would be a strict liability offence, i.e. one where the mere act of committing the offence is enough to render the offender guilty, irrespective of intention.  However, there is House of Lords authority that in effect requires an intention (for lawyers, mens rea) to commit the act (the actus reus) before a criminal offence is made out.  In particular, Lord Reid stated in Sweet v Parsley [1970] AC 132, 148:

Our first duty is to consider the words of the Act: if they show a clear intention to create an absolute offence that is an end of the matter. But such cases are very rare. Sometimes the words of the section which creates a particular offence make it clear that mens rea is required in one form or another. Such cases are quite frequent. But in a very large number of cases there is no clear indication either way. In such cases there has for centuries been a presumption that Parliament did not intend to make criminals of persons who were in no way blameworthy in what they did. That means that whenever a section is silent as to mens rea there is a presumption that, in order to give effect to the will of Parliament, we must read in words appropriate to require mens rea.

DPP v Collins

This passage was quoted in what is perhaps the only reported case on the application of section 127(1)(a) of the Communications Act 2003, Director of Public Prosecutions v Collins [2006] UKHL 40.  Incidentally, this seems to have been overlooked by the Crown Prosecution Service, which considered section 127(1)(a) to be a strict liability offence when first prosecuting Paul Chambers in the #TwitterJokeTrial, and it was on this basis that he first entered a guilty plea.  Thankfully, the awful implication of this CPS view was spotted early by social media lawyers, particularly David Allen Green of Preiskel & Co, who were able to rectify this gross error and have Paul's plea changed.

Collins applied Sweet v Parsley to find that for a section 127(1)(a) offence:

This passage is relevant here, since Parliament cannot have intended to criminalise the conduct of a person using language which is, for reasons unknown to him, grossly offensive to those to whom it relates, or which may even be thought, however wrongly, to represent a polite or acceptable usage. On the other hand, a culpable state of mind will ordinarily be found where a message is couched in terms showing an intention to insult those to whom the message relates or giving rise to the inference that a risk of doing so must have been recognised by the sender. The same will be true where facts known to the sender of a message about an intended recipient render the message peculiarly offensive to that recipient, or likely to be so, whether or not the message in fact reaches the recipient. (per Lord Bingham, para 11)

Collins concerned grossly offensive messages, but the principles set out above apply equally well to messages of a menacing character.  This was the approach taken in the #TwitterJokeTrial appeal at Doncaster Crown Court, where Judge Jacqueline Davies considered that Paul Chambers “was, at the very least, aware that his message was of a menacing character”.  In particular, the Judge stated:

We are satisfied, on the evidence, that the message in question is menacing in its content and obviously so. It is difficult to imagine anything more clear. It fits the Oxford English Dictionary definition and the observations of Lord Justice Sedley, to which we have earlier referred. It is, in our judgement, menacing per se.

Although the offence does not require the message to be seen, we take the view that an ordinary person seeing this, and we have no evidence that anyone did, would see it in that way and be alarmed. The airport staff did see it and were sufficiently concerned to report it. As the authorities make it clear, there is no need for anyone to see the message and in any event, different people can have different views.

The required mens rea for this offence is that the person sending the message must have intended the message to be menacing, or be aware that it might be taken to be so.

This Appellant was 26, well-educated and in a responsible job and there is no evidence that he is anything other than of sound mind. We, of course, bear in mind his evidence and the importance of his good character, acknowledging, of course, that good character cannot provide a defence.

We found the Appellant to be an unimpressive witness. We find it impossible to accept that anyone living in this country, in the current climate of terrorist threats, particularly at airports, would not be aware of the consequences of his actions in making such a statement.

Furthermore, we are satisfied that he, as an experienced user of Twitter, made a deliberate decision to post this message on the public timeline of Twitter, on the very day he sent seemingly related private messages to Crazy Colours and within two hours of posting them. We do not accept his evidence that he was unaware of television news and newspaper reports at the material time. His account in interview, as we have observed earlier, was self-serving, although we note that at page 2 of his interview, he uses the expression, when dealing with his Twitter experiences, not to this serious of magnitude.

Accordingly, we are satisfied that the Respondent has proved the necessary mens rea and that the Appellant was, at the very least, aware that his message was of a menacing character.

(Reported by David Allen Green, Paul Chambers’ solicitor, at http://jackofkent.blogspot.com/)

Lord Justice Sedley and Collins

The Judge makes reference to Lord Justice Sedley.  He gave the leading judgment in Collins in the Divisional Court, and his view was approved by the House of Lords.  Sedley LJ considered that the purpose of the section 127(1)(a) offence was to protect people from unsolicited messages which they may find seriously objectionable.  He noted that the offence is addressed to messages, and said:

This is why it is the message, not its content, which is the basic ingredient of the statutory offence. The same content may be menacing or grossly offensive in one message and innocuous in another. … Here, as elsewhere, context is everything.

A menacing message, fairly plainly, is a message which conveys a threat – in other words, which seeks to create a fear in or through the recipient that something unpleasant is going to happen. Here the intended or likely effect on the recipient must ordinarily be a central factor in deciding whether the charge is made out.

What is offensive has to be judged (very much as the justices, by considering the reaction of reasonable people, judged it) by the standards of an open and just multiracial society. So too, therefore, what is grossly offensive, an ordinary English phrase with no special legal content, which on first principles (see Brutus v Cozens [1973] AC854) it is for the justices to apply to the facts as they find them. Whether a telephone message falls into this category has to depend not only on its content but on the circumstances in which the message has been sent and, at least as background, on Parliament’s objective in making the sending of certain messages a crime.

The controversy about the #TwitterJokeTrial therefore hinges upon:

(a) what is the proper context of a Twitter message; and

(b) whether the alleged menacing nature of the relevant tweet was properly judged by considering the reaction of reasonable people to it or by the standards of an open and just society?

In my opinion, Judge Davies wholly failed to address the question of the context of the message, either as to its content or as to the medium by which it was sent.  Her judgment shows no analysis of context, only an unsupported statement that the message was “menacing per se”.

Secondly, Judge Davies appears to consider that any message including any reference to an act of terrorism, in any form and in respect of any public place, would be considered “by anyone living in this country, in the current climate of terrorist threats” to be menacing.  I consider this a massive leap, and I would hope and expect that most reasonable people in the UK do not live in such a state of fear.  The best evidence for this must be that no-one appears to have been in any way menaced by the message at issue.  Even the reaction of Robin Hood Airport and the police in investigating the message, in blind adherence to security procedure, is considered by many reasonable people to have been unwarranted by the standards of our current open society.

By means of a public electronic communications network

The question of the context including the nature of the medium being used to transmit the message is important as section 127(1)(a) includes the phrase “public electronic communications network”.  This is defined in the Communications Act 2003 as being an electronic communications network provided wholly or mainly for the purpose of making electronic communications services available to members of the public (s. 151(1)).  Electronic communications networks and services are further defined at section 32 of the Communications Act 2003, but the definitions are very broad.  It is safe, therefore, to assume that any internet-based or other publicly-available messaging service of any description will generate a message that will be transmitted at some stage by a public electronic communications network, except messages that stay within the bounds of a private network (such as a messaging system or email hosted on a corporate server for use within a corporate network).

This offence therefore catches a much wider set of messages than was ever anticipated or intended by the original drafters of the provision’s predecessors.  As described in an earlier post (here), and summarised by Lord Bingham in Collins, the 2003 provision can be traced back to section 10(2) of the Post Office (Amendment) Act 1935, which states:

If any person-

(a)  sends any message by telephone which is grossly offensive or of an indecent, obscene, or menacing character; or

(b)  sends any message by telephone, or any telegram, which he knows to be false, for the purpose of causing annoyance, inconvenience, or needless anxiety to any other person; or

(c)  persistently makes telephone calls without reasonable cause and for any such purposes as aforesaid;

he shall be liable upon summary conviction to a fine not exceeding ten pounds, or to imprisonment for a term not exceeding one month, or to both such fine and imprisonment.

The original “menacing character” provision therefore addressed the issue of unsolicited real-time messages targeted at particular recipients (to be inferred from sub-section (a), as otherwise the provision would have included telegrams, as sub-section (b) does).  In no way can Twitter be considered to be such a telephone system.  It is much more analogous to a telegraph system, with telegrams potentially available to be read by the public at large.

In any event, no user of Twitter can be said to have intended that any member of the public receive a particular tweet.  Any person without a Twitter account is only able to see a random selection of messages on the Twitter home page.  The chances of any particular tweet being read by an intended recipient on this home page must be infinitesimally small.  Even if an intended recipient was a Twitter user, the mass of messages and the speed with which they appear in the public timeline mean that the chance of a particular message being read in the public timeline is again extremely small, even before one considers that very few users actually read the public timeline.  The result is that no sender can sensibly be judged to believe that any undirected message would be seen by an intended recipient.

A Twitter message can be directed by use of an address (@recipientname) or, less specifically, by use of a tagged message.  Within Twitter it is possible to use an octothorpe to identify a subject of interest (a hashtag, e.g. #TwitterJokeTrial), so that clicking on that hashtag reveals all other messages bearing it.  It may be reasonable, in the context of a particular message and hashtag, to rule that a sender using a particular hashtag knew, or ought reasonably to have known, that by doing so the message would be read by an intended recipient.
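The way a hashtag makes an otherwise undirected tweet discoverable can be modelled in a few lines (a toy model with made-up messages, not Twitter's actual behaviour or API):

```python
def messages_matching_hashtag(timeline, hashtag):
    """Return the messages whose hashtags include the one searched for
    (case-insensitively, as hashtag search works in practice)."""
    wanted = hashtag.lower()
    return [
        message for message in timeline
        if wanted in {word.lower() for word in message.split() if word.startswith("#")}
    ]

# Hypothetical public timeline: only tagged messages surface in a hashtag search.
timeline = [
    "Robin Hood Airport is closed #TwitterJokeTrial",
    "Lovely weather in Doncaster today",
    "Following the appeal closely #twitterjoketrial",
]
```

In this sketch, a search for #TwitterJokeTrial surfaces the first and third messages, while the untagged second message remains lost in the flow of the public timeline.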

There is also the question of the meaning of the word “public”.  In the history of the section 127 provision, “public” meant publicly owned, as until 1984 the Post Office, then British Telecommunications plc, was in public ownership.  Only in the re-enactment of the provision in the Communications Act 2003 did the sense of “public” change to mean publicly available.  As Lord Bingham stated in Collins:

The purpose of the legislation which culminates in section 127(1)(a) was to prohibit the use of a service provided and funded by the public for the benefit of the public for the transmission of communications which contravene the basic standards of our society.

Hansard

The difficulty with this “public electronic communications network” contextual analysis is that a court would first have to decide that applying a literal approach to section 127 in the #TwitterJokeTrial led to an absurd or repugnant result.  It is an open question whether the High Court will do so.  If it does, the Golden Rule and the Mischief Rule may enable it to trace section 127 back to its origins in section 10(2) of the Post Office (Amendment) Act 1935.  Although the High Court is bound by the House of Lords judgment in Collins, it could distinguish Twitter from what Lord Bingham considered to be the purpose of section 127(1)(a): it is extremely difficult to see how Twitter, a free service, can be said to be funded by the public for the benefit of the public.

In looking at the mischief intended to be remedied by the 1935 provision under the Mischief Rule, English law does permit the courts to consider certain extrinsic evidence.  In particular, the courts can consider the Hansard reports of the debates in Parliament on the relevant provision (Pepper v Hart [1992] UKHL 3).  If they were to do so, they would find that in the Second Reading in the House of Lords it is at least implied that the measure was intended to catch direct, real-time communications where a message had an indecent or menacing character.  Here “direct” means that the sender at least intended the message to be received by the intended recipient over the communications system, even though it is clear that it is the sending of the message that constitutes the offence.  The provision was originally drafted to give protection to Post Office staff, particularly female telephonists subjected to improper or obscene telephone calls.  A full explanation may be contained in the House of Commons Standing Committee report (Official Report (Standing Committee B), 5 March 1935; col 14), but this is not available online.

Human Rights and Purposive Interpretation

There is one other point to consider: a sender's human rights.  Under the European Convention on Human Rights, incorporated into English law by the Human Rights Act 1998, a person has a right to freedom of expression:

Article 10 – Freedom of expression

1. Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers. This Article shall not prevent States from requiring the licensing of broadcasting, television or cinema enterprises.

2. The exercise of these freedoms, since it carries with it duties and responsibilities, may be subject to such formalities, conditions, restrictions or penalties as are prescribed by law and are necessary in a democratic society, in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary.

So whilst it is clear the right to free speech can be qualified, it is also a feature of human rights law that any qualifications made under Article 10(2) must be proportionate, both by application of the House of Lords judgement in R v Secretary of State for the Home Department, ex parte Daly [2001] UKHL 26 and section 3(1) of the Human Rights Act 1998:

So far as it is possible to do so, primary legislation and subordinate legislation must be read and given effect in a way which is compatible with the Convention rights.

It would therefore be open to the High Court to consider that the restrictive interpretation of section 127(1)(a) made by the Doncaster Crown Court disproportionately fettered Paul’s, and any other person’s, right to freedom of expression via Twitter, going beyond what is necessary in the interests of national security or public safety.

Conclusion

There is much argument here which, if it was raised in the #TwitterJokeTrial appeal before Doncaster Crown Court, has not been reported upon.  I continue to hold the opinion that the conviction of Paul Chambers under section 127(1)(a), for what everyone, including the defendant, considers was a stupid tweet, is unsafe.  However, I called the appeal incorrectly, and so I cannot be 100% confident that the High Court will agree.

It will be an extremely sad day for English justice, common sense and for freedom of expression in the UK if stupidity by electronic message becomes a criminal offence.

Human Rights and Responsibilities

European Court of Human Rights, Strasbourg

In this post we are straying from our core area of commercial law to have a look at a topical example of how the European Convention on Human Rights (“ECHR”) is applied and enforced in the UK.  Our excuse is that Art 8 of the ECHR, commonly known as the right to privacy (the right to respect for private and family life), impacts directly on many aspects of data protection and information technology law, and that we have just passed the 60th anniversary of the making of the ECHR (4 November 1950).  However, our case involves Article 3 of Protocol 1 to the ECHR:

Article 3
Right to free elections
The High Contracting Parties undertake to hold free elections at reasonable intervals by secret ballot, under conditions which will ensure the free expression of the opinion of the people in the choice of the legislature.

Protocol 1 was made on 20 March 1952.  The UK was a party to the Protocol, so the provision can hardly be considered new.

John Hirst was born in the same year the ECHR was made: 1950.  On 23 June 1978 he killed his landlady, 63-year-old Bronia Burton, in an unprovoked attack with a heavy hand axe.  He was subsequently convicted and imprisoned for manslaughter on the grounds of diminished responsibility.  Medical evidence suggested that Hirst had a severe personality disorder such that he was considered amoral.  The original tariff, being the part of his sentence relating to retribution and deterrence, was set at 15 years, but as a result of other violent offences committed whilst in prison, Hirst was not released on licence until he had served 25 years, in 2004.  We set out this background so that the political difficulty there appears to be in the UK in giving prisoners the right to vote can be understood.  In an environment where politicians wish to appear tough on crime (and on convicted criminals), the fact that John Hirst was convicted of a serious, violent offence does not help the argument for prisoners’ rights.  However, Hirst’s criminal record has no bearing on the ECHR case.  He could equally well have been convicted and imprisoned for non-payment of a fine.

Hirst first brought a case to the High Court in 2001 challenging the ban on prisoners voting under section 3(1) of the Representation of the People Act 1983:

A convicted person during the time that he is detained in a penal institution in pursuance of his sentence… is legally incapable of voting at any parliamentary or local election.

This provision can be traced back to a similar section of the Forfeiture Act 1870, which in turn is an enactment of rules of law concerning the forfeiture of rights by a convicted person that date back to the reign of Edward III (1312 – 1377).

The case sought to have section 3 declared incompatible with the ECHR, which is all that a private citizen can do under the Human Rights Act 1998 to redress a breach of his or her rights, but his claim was dismissed by Lord Justice Kennedy (Hirst v Attorney General [2001] EWHC Admin 239).  Kennedy LJ noted that the terms of Article 3 of Protocol 1 did not on their face grant any individual a right to vote, but relied upon the reasoning of Mathieu-Mohin and Clerfayt v Belgium [1987] 10 EHRR 1, which accepted that there was such a right, albeit one that is not absolute.  The European Court noted that nation states could attach conditions to any right to vote and that they had a:

wide margin of appreciation in this sphere, but it is for the Court to determine in the last resort whether the requirements of Protocol No 1 have been complied with; it has to satisfy itself that the conditions do not curtail the rights in question to such an extent as to impair their very essence and deprive them of their effectiveness; that they are imposed in pursuit of a legitimate aim; and that the means employed are not disproportionate. (Paragraph 52)

However, Hirst, as with every other person claiming that their human rights under the ECHR have been infringed, had to exhaust his judicial rights of appeal in the UK before he could take his case to the European Court of Human Rights in Strasbourg (not to be confused with the Court of Justice of the European Union in Luxembourg).  His case had been joined with R (on the application of Pearson and Martinez) v Secretary of State for the Home Department and 2 Electoral Registration Officers.  Applications for permission to appeal to the Court of Appeal were made in that case on 2 May 2001, but they were refused on 15 May 2001 on the ground that the appeal had no real prospect of success.  Hirst applied for leave to appeal on 19 May 2001, but was similarly refused on 7 June 2001, as were renewed applications on 18 June 2001.

Hirst pursued the case to the European Court of Human Rights, which ruled in 2005 that the ban breached the right to free elections under the European Convention on Human Rights.  In summary, the Court had a relatively simple job.  Having considered the case law, including Mathieu-Mohin and Clerfayt, the Court noted that imposing necessary and proportionate restrictions for some justified reason was one thing; an outright ban quite another.  Nothing in Article 3 of Protocol 1 permits a contracting state to the ECHR to impose this type of outright restriction on universal suffrage.  In legal terms, there is nothing difficult about the judgement: it states that Article 3 of Protocol 1 was breached.  Hirst and his legal team were awarded their costs.  Hirst was not granted any damages for suffering or distress caused by the violation (he had claimed £5,000).  The case was originally heard before 7 judges (the Fourth Section of the Court), who unanimously held that the UK was in breach on 30 March 2004.  The UK appealed to the full court (the Grand Chamber of 17 judges) on 23 June 2004, and a majority decision against the UK (12 to 5) was published on 6 October 2005.

The question then becomes one of enforcement.  Once the Court has given its judgement, the matter is referred to the Committee of Ministers of the Council of Europe.  There is no absolute sanction or power that can be applied to an ECHR contracting state that refuses to implement a judgement, but failure to implement a judgement rapidly becomes a political and diplomatic matter.  Failure to implement a Court judgement is itself a breach of the ECHR (Article 46).

There is plenty that can be said about the Court’s decision in Hirst.  Some have noted that the wording of Article 3 of Protocol 1 is expressed to be an undertaking, not an absolute right.  However, as the UK has signed up to the ECHR, then it is, for as long as it wishes to be so, bound by decisions of the Court.  It’s a simple question of the rule of law.

[For an interesting discussion on the law and politics of the Hirst case, we recommend the podcast of Carl Gardner and Mike Semple Piggott at CharonQC (see blogroll).]