I recently logged out of (and blacklisted) Facebook and Instagram, and I can confidently say that I feel much better without the needless doomscrolling through an endless page of depressing news and vacation photos that I do not care about. But aside from avoiding painful confrontations with beautiful Instagram models and racist relatives on Facebook, are there other reasons why you might want to consider quitting social media?
A McKinsey report from June 2020 states that the well-being of European citizens fell last April to its lowest point since 1980, as reports of depression and loneliness tripled compared with pre-COVID levels. Loneliness, however, is far from a new problem and has many causes, among them the pervasiveness of social media. This is especially relevant for our ‘digitally native’ generation, for whom social media has been a core part of our formative years.
Can we continue relying on internet hosts to be solely responsible for taking down offensive content or hate speech?
Last week’s headlines traced the scuffle between Norway and Mark Zuckerberg when one of Norway’s largest newspapers, Aftenposten, criticised Facebook for removing its photo of the ‘napalm girl’ on grounds of child nudity. The photo of the ‘napalm girl’, Phan Thị Kim Phúc, from Vietnam and now a Canadian citizen, was taken on 8 June 1972 during the Vietnam War. It shows her as a nine-year-old child, running away from a South Vietnamese napalm attack which left her severely burned. Taken by Nick Ut of the Associated Press, the image is world famous for its depiction of the violence of the Vietnam conflict. Zuckerberg later reversed his decision to remove the photo and acknowledged the iconic status of the historical image. Whilst this incident might primarily raise alarm bells about the power that Facebook wields over our modern lives, it is also symptomatic of the arbitrariness of online content monitoring.
Alongside its status as one of the most democratic exercises in information sharing, the internet is home to a growing body of offensive content and unchecked manifestations of hate speech. Whilst some self-censoring does take place (for example in the form of ‘NSFW’ tags and ‘content notes’), such warnings are often applied humorously rather than as serious safeguards. If there is to be a concerted effort to tackle hate speech and offensive material which goes beyond the old adage of turning a blind eye, how is this to be achieved?
Should governments and the international community have a role to play?
Simply put, the answer from the European Convention on Human Rights is a resounding no. Article 10 ECHR guarantees freedom of expression for all and goes on to say that:
‘This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.’
This freedom extends to internet users, and the Secretary General of the Council of Europe has confirmed that ‘the state [must] not exercise surveillance over Internet users’ communications and activity on the internet except when this is strictly in compliance with Articles 8 and 10 of the Convention.’ The Court’s case law confirms support for freedom of expression, even if the article does allow states some margin of appreciation to take restrictive measures, as in Delfi v. Estonia, where the Court held that there had not been a violation of Article 10.
Similarly, Article 11 of the EU’s Charter of Fundamental Rights provides that ‘Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers’.
This is inherently a good thing. Of course, there are some countries in Europe where the systematic blocking of whole websites has reduced freedom of expression and access to internet material to an unacceptable level. Consider the recent ECHR case of Cengiz and Others v. Turkey, where the Court unanimously held that there had been a violation of Article 10 due to the blocking of access to YouTube over a long period, echoing its earlier finding on the blocking of Google Sites in Ahmet Yıldırım v. Turkey. And yet, the question we must ask ourselves is: if governments aren’t checking online content, then who is?
Net neutrality: a commercial myth?
The hands-off approach taken to internet monitoring by national governments (as advised by the Council of Europe and EU) results in a two-fold problem:
this leaves internet providers and website hosts – i.e. private companies – in charge of monitoring content;
these companies are sensitive to legal threats, as well as to their reputation among their users, and end up haphazardly taking down content without serious reflection.
In the case of the former, the crux is this: when we leave it to web hosts to decide what is suitable content and what isn’t, we are allowing organisations with their own commercial, social and political agendas to act as the moral arbiters for all society. Is this democratic?
And in the case of the latter, this is exactly what happened in the Facebook napalm incident. Is this double burden of total freedom and total responsibility not actually counter-productive to freedom of expression online? As the 2016 annual report on the state of human rights, democracy and the rule of law in Europe concludes:
‘the fact that internet intermediaries fear being held liable for the content they transmit may have a chilling effect on the freedom of expression online.’
If we’re serious about blocking hate speech and inappropriate content, we need more explicit guidelines from governments and international organisations. As it stands, we hail our freedom from government censorship but are trapped in an online game where private web hosts write their own rulebooks.