Biometric Surveillance in Public Spaces – The Humpty-Dumpty of Privacy & Safety


Ask people about privacy in the digital space and the usual answer is a scoff. Sometimes it is even followed by a taunting sneer and a resigned "there is nothing we can do about it anyway" look. But why is that?

A View on Privacy – The Scoff

 

Here are the common causes of the scoff. Whether it is because of:

  • the ubiquity of the industry's cyber-tentacled giants collecting, using, selling, and profiting from the personal data we have defeatedly volunteered by agreeing to the unending (some may even say unwavering) Terms and Conditions;

"A popular search engine is making $467,879,584 on your data every day" – take a look at the awareness campaign launched at the NYC HQ involving a refund vending machine. Nicely played, Google! (source: Kimberly Wright – M&W Graphics – LinkedIn)

 

 

Or because of

  • all the cookies we keep accepting just to access information, or the contagious, easily replicated "I simply can no longer be bothered" attitude in favour of convenience. Or simply because there are only so many times one can click the personalise/manage-cookies button before wearing out – winning a battle, but not the war;

Or because of

  • the targeted ads we get on Instagram that seem so laser-focused, as if downloaded straight from our mind's wishing well. I am talking about the ads that target our vulnerabilities and lead us straight into our guilty-pleasures den, where one gentle push is all it takes to go a tad over budget and buy those rainbow tomato seeds for our new pandemic-friendly balcony micro-gardening hobby.

Unfortunately, the scoff IS justified when it comes to privacy in the digital space. The accessibility of information comes with a price. Convenience as well. Plus… data is the new oil.

However, what happens when we address privacy outside the digital space? We stumble upon a different set of problems.


Citizens’ Privacy in Public Spaces

Photo credits: Tobias Tullius & Gemma Evans

How is privacy affected in public spaces? According to the AI Global Surveillance Index (2019), at least 75 of the 176 countries covered in the study use AI-powered technology for surveillance. The two big AI leaders, China and the US, dominate the supply: Huawei, Hikvision, Dahua and ZTE provide biometric surveillance technology to 63 countries, while IBM, Palantir and Cisco supply AI-powered surveillance technology to another 26.

Biometric surveillance is becoming more and more widespread, and in some places to a worrying extent. According to a ChinaFile report, China spent $2.1B on public surveillance technology between 2016 and 2020 for its project known as "Sharp Eyes", which aims at 100% coverage of China's public spaces. Even if the figures are not publicly disclosed, the equipment is very public: China has installed 200 million public and private cameras across the country. Additionally, the government aims to distribute TV consoles that let citizens take part in the "Sharp Eyes" project by watching a live stream of all the surveilled public areas and signalling any suspicious activity to the police.

Even if the project aims to increase security across China, in a survey conducted at a Beijing-based research centre, 80% of the citizen respondents said they felt watched and sometimes followed. They voiced concerns about their personal information and worried about being singled out and tracked by facial recognition technology.


More Black Mirror Scenarios

 

However, Chinese citizens are not the only worried ones. Here are some more Black Mirror scenarios that came true the moment biometric surveillance was toyed with outside of a strong regulatory framework:

Austrian police have been running a pilot of their face recognition software since December 2019, aiming to investigate serious crimes – murder, bank robberies and the like. However, the media revealed it was also used to identify protesters at demonstrations, which, last time we checked, was a lawful act, not a serious crime… In like manner, Germany used biometric surveillance on G20 protesters, and so did Slovenia.

Other types of misuse happened in Italy, where the police used facial recognition to detect loitering and trespassing. The technology was also deployed in airports for queue management.

Swedish police happened to use Clearview AI – the company that takes pride in its index of 3B+ faces collected from all the available social media without any user's consent, and that has faced legal action in Illinois, California, Vermont and New York, as well as abroad, in the EU and Australia, for breaching privacy, violating the GDPR, and the list goes on. Moreover, Sweden deployed facial recognition to monitor school attendance.

And talking about schools – one school in Poland decided to use biometric fingerprinting for school lunch management. France is on the list too, having tried to deploy facial recognition in a high school in Marseille.

The Czech Republic was caught red-handed with biometric monitoring at its airports. Whilst Greece happens to maintain centralised biometric databases and is under investigation for unlawful storage and processing of Greek citizens' biometric data…

If the reaction privacy was getting in the digital space was a scoff, well… privacy in biometrically surveilled public spaces definitely earns a horrified look. So let's dive deeper into the problem.


What are the issues with Biometric Surveillance in Public Spaces?

Photo credits: Penghao Xiong

As seen in the China project example, the lack of transparency from the ones watching, combined with the massive infringement on privacy at the expense of security and control, deteriorates the bond of trust between citizens and government. It leads to confusion, fear, and a chilling effect on freedom of expression as the prevalent public sentiment. Add a social scoring system and you have an anxiety-amplifying, fear-inducing dystopian model worthy of George Orwell's Nineteen Eighty-Four.

And in all honesty, wouldn't we all feel the same if every single step of the way – from the airport to the hotel, and from the hotel to every tourist highlight – were monitored? We start to feel the absence of the (now seemingly much-needed) Terms & Conditions document that the digital space offered at the beginning of this article. In public spaces, we are lucky if we even get the "Smile! You're being filmed." memo.

The other examples we explored above underscore the massive imbalance between the one who is watched and the one watching. Citizens have no rights to exert when it comes to being under public surveillance. The lack of transparency from surveillance owners is the one element prevalent across the entire tapestry of biometric surveillance gone wrong. It may also be the golden thread that leads us to a potential solution.

 

A Human-Centric Approach Towards Empowering Citizens – Open Ethics Initiative

Photo credit: Jason Dent

In order not to paint the entire article black: several initiatives have been launched since biometric surveillance in public spaces began favouring security at the expense of privacy – and not only privacy.

Some initiatives, such as Reclaim Your Face, aimed to ban biometric surveillance entirely – and got the EU Parliament to back the ban.

The need for transparency as a solution was highlighted in multiple other initiatives, such as the Report for the Greens/EFA in the European Parliament, whose recommendations touch on strengthening the transparency and accountability of biometric and behavioural technology. Broader initiatives are taken at the AI industry level by the UK government. In like manner, the European Commission underlines, in its Proposal for Laying Down Harmonised Rules on AI, that transparency obligations should apply to certain AI systems, such as those using biometric data.

Another movement, initiated by Privacy International on Civil and Political Rights, illustrates the need for governments to be transparent about their use of facial recognition technology in public spaces and to adopt robust regulation for its use. Movements such as Stop Spying, from the Surveillance Technology Oversight Project, are emerging to offer privacy advocacy and litigation services to those suffering surveillance abuse. Thousands of Cameras is also a project worth mentioning: its initiators mapped entire areas under video camera surveillance, advocating for citizens' digital rights.

The Open Ethics Initiative suggests that surveillance "owners" – governments, public-private partnerships (such as governments purchasing and deploying businesses' technology), and private companies – should be transparent about who is doing the surveillance, which technologies are used, who gets access to citizen data, and under which scope.
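To make the idea concrete, the four transparency elements above could be captured in a minimal machine-readable disclosure that anyone could audit. The Python sketch below is purely illustrative – the field names, the example values, and the `missing_fields` completeness check are our assumptions, not part of any Open Ethics specification:

```python
# Hypothetical machine-readable surveillance transparency disclosure.
# Each field maps to one of the four elements named above; all names
# and values are illustrative assumptions, not an official schema.
disclosure = {
    "operator": "City of Exampleville (public-private partnership)",  # who is doing the surveillance
    "technologies": ["CCTV", "facial recognition"],                   # which technologies are used
    "data_access": ["municipal police"],                              # who gets access to citizen data
    "scope": "investigation of serious crimes only",                  # under which scope
}

def missing_fields(d):
    """Return the required disclosure elements that are absent or empty."""
    required = ("operator", "technologies", "data_access", "scope")
    return [k for k in required if not d.get(k)]

print(missing_fields(disclosure))  # → []
```

A disclosure with gaps – say, one that never names who accesses the data – would fail such a check, making the opacity visible instead of silent.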

The Open Ethics Initiative's Public Surveillance Transparency Project concerns oversight and transparency of surveillance processes put in place at specific physical locations. Check out the project for more details and become part of the Initiative: https://openethics.ai/public-surveillance-transparency/

Featured Photo Credits: Matthew Henry
