Mobile check-ins, temperature checks, and other data-driven practices dominated coronavirus-fighting strategies in the early days of the pandemic. Even as a slew of vaccines and, more recently, pills have transformed the fight against the virus, COVID-19 contact tracing apps remain a staple, resulting in the notification of thousands of COVID cases across the European Union, according to recent reports by the European Data Journalism Network.
As data-driven practices persist and EU member states toy with the idea of nationwide biometric identification — identification based on physical characteristics that can single out individuals — users and privacy specialists alike have raised concerns about the privacy of the data collected by tracking apps, as well as their implications for the future of surveillance in Europe and around the world.
“We’ve also seen in some countries, like Singapore, the collection of location information for the purposes of combating COVID (being) made available…to law enforcement,” says Greg Nojeim, co-director of the Center for Democracy and Technology’s Security and Surveillance Project.
“So, it’s fair to say that information that is being collected by some governments for the purpose of combating COVID is sometimes being used for criminal investigative purposes that the user of an app, for example, can’t control.”
In the European Union, such fears have become a reality. Last week, German media outlet Deutsche Welle reported that authorities in a city near Frankfurt used data collected by Luca, a COVID contact tracing app, to contact potential witnesses to a death, provoking criticism from politicians and even the app’s developer, culture4life.
Similar concerns of overreach have been raised regarding Poland’s mandatory Kwarantanna Domowa (Home Quarantine) app, which depends on geolocation and facial recognition to “allow the authorities to monitor individual compliance with self-isolation requirements,” according to an early report by the European Digital Rights (EDRi) Network.
“It’s important that that information be protected and used only for contact tracing purposes by health authorities,” Nojeim says. “Otherwise, people are not going to volunteer it, and contact tracing will be thwarted.”
In an effort to assuage such worries, Google and Apple introduced what they call privacy-preserving contact tracing in April of last year — a rare instance of cooperation between the two tech giants — shifting from geolocation to Bluetooth-enabled tracking so that “Google, Apple, and external entities do not learn if exposure notifications were shown on users’ devices,” the companies said.
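The privacy gain comes from the protocol's design: phones broadcast short-lived random identifiers derived from a key that never leaves the device unless its owner tests positive and consents, and exposure matching happens entirely on the phone rather than on a central server. The following Python sketch illustrates the general idea; it is a simplified illustration, not Apple and Google's actual Exposure Notifications specification, and the key sizes and interval scheme are assumptions chosen for clarity:

```python
import hmac
import hashlib
import secrets


def daily_key() -> bytes:
    # Each phone generates a fresh random key per day; it never leaves
    # the device unless the user tests positive and consents to upload.
    return secrets.token_bytes(16)


def rolling_id(key: bytes, interval: int) -> bytes:
    # Derive a short-lived identifier for each broadcast window
    # (roughly every 15 minutes in real deployments). Observers see
    # only unlinkable pseudorandom IDs, never the key itself.
    return hmac.new(key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]


def exposure_check(observed_ids: set, diagnosed_keys: list, intervals: range) -> bool:
    # Matching happens locally: the phone re-derives IDs from the
    # published keys of diagnosed users and compares them against the
    # IDs it overheard over Bluetooth. No location data is involved.
    for key in diagnosed_keys:
        for t in intervals:
            if rolling_id(key, t) in observed_ids:
                return True
    return False


# Demo: Alice's phone logs a broadcast from Bob's phone; Bob later
# tests positive and uploads his daily key. Alice's phone matches it
# locally without any server learning who met whom.
bob_key = daily_key()
alice_heard = {rolling_id(bob_key, 42)}
print(exposure_check(alice_heard, [bob_key], range(96)))  # prints True
```

Because only diagnosed users' keys are ever published, and the broadcast identifiers rotate, bystanders and servers cannot link sightings of a phone across time — which is the property the two companies describe.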
While data protection laws vary significantly across jurisdictions, the European Data Protection Supervisor and, more specifically, the General Data Protection Regulation (GDPR) offer a broad legal framework for data privacy that applies to both public- and private-sector entities. However, these laws carve out notable exceptions for law enforcement and for processing based on consent, which can become muddled by unclear app permissions.
For many, anxieties remain, as European governments largely maintain a reactive rather than proactive approach to data protection legislation.
“The ‘innovation’ with our biometric data is moving so fast that we’ve seen regulators not being able to keep up,” says Ella Jakubowska, a policy and campaigns officer at EDRi.
Though biometric technologies — spanning facial, vocal, fingerprint, and even behavioral recognition — and geolocation tools predate the pandemic, specialists like Jakubowska say they believe it has provided a window for the normalization of surveillance and biometric technologies. The past year alone has seen the adoption of facial recognition software at casinos and bus stops in countries such as Spain.
Beyond the COVID-19 context, biosurveillance companies promise the ability to effectively prevent crime and target shoppers. However, Jakubowska stresses that this technology can be as fallible as the humans who develop it.
“When used in the wild, so to speak, these systems are far less accurate than they can be in a lab setting. Accuracy is a problem, especially when it can lead to discrimination. But the bigger issue is these surveillance infrastructures that are inherently discriminatory and infringe on people’s human rights.”
Across the Atlantic, facial recognition software has been found to contribute to racial profiling of Black Americans and other racial minority groups. To avoid similar ills within the European Union, lawmakers and digital privacy advocates alike have pushed for the passage of the proposed Artificial Intelligence Act, which, among other aims, would firmly regulate the collection and use of EU citizens’ personal and biometric data, filling the gaps left by the GDPR.
“(We’re) constantly fighting back against uses of biometric data that are not legitimate (that) somehow ended up on the market, in our streets, in our schools, universities, train stations, airports…It’s really prevalent across Europe.”