PONT Data&Privacy

Does Clearview form a clear view of the end of privacy?

Facial recognition technology can be useful for securing computers, smartphones, sensitive files and physical locations. It also has utility in the cameras of autonomous vehicles and other applications. On the other hand, this form of identification can be used and abused by parties the individual is unaware of.

2 March 2020

The acceptability of facial recognition technology has been debated several times, for example in relation to stadium bans and the prevention of shoplifting. The Chinese government's far-reaching use of camera images in its social credit system hardly needs explanation; Chinese police are now even experimenting with facial recognition projected in special glasses.(1) Numerous facial recognition apps exist, but so far none can be called perfect. They often inadvertently discriminate by skin color, face shape or even background, with all the biases that this entails. The development of smart technology, however, is not standing still and cannot be contained.

The latest app in the field of facial recognition, Clearview, represents a qualitative leap forward according to the New York Times, and thus the end of citizen privacy.(2) Are we heading toward an actual privacy dystopia? The app, marketed by an Australian developer, makes it possible to identify individuals in public spaces based on images posted on the internet. That identification can then be linked to virtually any information available online about that person. Clearview claims a database of more than three billion photographs. Investigative and security organizations in particular are therefore showing interest in this new facial recognition product. According to the New York Times article, sources at federal and state investigative agencies (read: the Justice Department and the FBI) indicated that although they had limited knowledge of Clearview's background (after all, it's all about the results!), they are already using the app to solve cases of shoplifting, identity theft, credit card fraud, murder and child sexual exploitation.

The Dutch police also maintain a facial recognition database of 2.2 million photographs of people suspected of crimes carrying a sentence of four years or more. The presumption of innocence apparently plays no role here. This is probably one of the reasons why, in the United States, a bill has now been introduced in the New York State Senate (by Senator Brad Hoylman) banning the use of facial recognition technology, and other biometric surveillance techniques, by the police.(3) Needless to say, the app is also of interest to others, from government agencies such as social services and tax authorities to potential employers. Commercial applications in the sale of products and services can be linked to the app as well, since identified individuals can be matched to profiles of their interests and preferences.

This raises the question of whether the many uses of facial recognition comply with the processing requirements of Article 6 of the General Data Protection Regulation (GDPR), and what legal consequences follow from the automated processing referred to in Article 22 GDPR when no judicial activities are involved. Indeed, the GDPR does not apply to criminal investigation and national security, as indicated in Article 23 GDPR. Margrethe Vestager, the European Commissioner for digital affairs, stated that such biometric applications violate the GDPR in any case, because prior consent of the data subject has not been and cannot be obtained, and sensitive data are involved.(4) As noted, this position does not extend to use by judicial and security services. It can be argued, however, that even in that exceptional case the application must be proportionate to the result to be achieved. This was confirmed by the English privacy regulator, the Information Commissioner's Office (ICO), which stated that the police should apply a 'code of practice' when using facial recognition technology, given that "it met the threshold of strict necessity for law enforcement purposes."(5)

Either way, this kind of technology, whether applied by governments or third parties, will not promote citizens' sense of security, and may significantly curtail personal freedom, freedom of movement and freedom of speech through self-censorship and "camera anxiety" as a new agoraphobia. But as Mark Zuckerberg stated in his famous email exchange:

FRIEND: so have you decided what you are going to do about the websites?
ZUCK: yea i'm going to fuck them
ZUCK: probably in the year
ZUCK: *Ear

The question is whether the designers of technologically advanced applications always give equal weight to the disruptive and moral consequences of their product.

Footnotes

(1) https://www.cnet.com/news/chinese-police-wear-facial-recognition-surveillance-glasses/

(2) https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.htm

(3) https://www.law.com/newyorklawjournal/2020/01/27/ny-state-senate-bill-would-ban-police-use-of-facial-recognition-technology/?slreturn=20200030064629

(4) https://thenextweb.com/neural/2020/02/17/automated-facial-recognition-breaches-gdpr-says-eu-digital-chief/

(5) https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2020/01/ico-statement-in-response-to-an-announcement-made-by-the-met-police/
