There is no such thing as personal information. Try as you might to protect it, some corporation has probably already failed you.
Maybe you went to Wawa sometime last year and paid with a credit card. Maybe you fell victim to Equifax. Maybe you uploaded a photo to Flickr. Maybe you simply went to the house of a neighbor with a Ring doorbell.
The point is, it’s all but impossible to keep your personal information — be it your face, your credit card number, or your Social Security number — private. The phrase itself has become an oxymoron.
We’re told this is simply the cost of doing business — that in order to reap the fruits of convenience, we have to accept a world of big, insecure data. And starting next week, if you plan to take an international flight out of PHL, you may have to decide for yourself whether it’s worth trading your privacy for convenience.
That’s because on January 21st, the airport will install facial recognition technology at gates A15, A16, and A17. Passengers at those gates (served by American Airlines, British Airways, Lufthansa, and Qatar Airways) won’t have to present a passport when boarding a flight departing the country; instead, a facial recognition kiosk will snap a picture and send it to U.S. Customs and Border Protection, which instantly verifies it against a photo already on file (usually from a passport). If CBP detects a match, onward you go.
The program is just a 45-day pilot, but eventually PHL plans to implement the scanning for all international departures, a process it says could take a year. (The airport will also use facial recognition for passengers entering the country through Philadelphia.)
Airlines say the scan can make the boarding process 10 percent faster. But here’s the catch: This facial scan — officially known as “biometric exit screening” — isn’t always accurate and raises some privacy concerns.
There’s good news, if you’re feeling wary: Facial recognition scans — which are currently deployed at 42 U.S. airports — aren’t mandatory for anyone, U.S. citizens or otherwise. (But opting out isn’t always as easy as it sounds.)
The Privacy and Security Worries
“We know privacy is a huge issue,” says Steve Sapp, a Customs and Border Protection spokesperson. “The thing we’re trying to stress is that we’re not collecting any new information on anybody.”
For U.S. citizens, that’s mostly true: the CBP says it discards facial scan photos within 12 hours of verifying your identity. For noncitizens, CBP stores the photo for two weeks. But CBP also forwards photographs of noncitizens to the Department of Homeland Security, which can keep them for up to 75 years.
The CBP itself hasn’t been immune to data breaches, either. Just last year, hackers infiltrated a CBP database and stole tens of thousands of facial photographs of U.S. citizens, along with their license plate numbers. Those images reportedly later went up for sale on the black market.
And it’s not just hyper-paranoid Luddites raising these concerns. The U.S. House Oversight Committee convened a hearing on facial recognition technology just this Wednesday. There, Virginia Rep. Gerry Connolly argued there were “intrinsic concerns with this technology and its use,” citing privacy abuses by both American corporations and foreign governments.
Fortunately, both U.S. citizens and foreign nationals have the right to opt out of the airport scans. But the very existence of an opt-out undercuts the idea that the face scan is a security measure at all. If it were, it would be mandatory.
Questions of Accuracy
When Philly’s facial scan program expands, it will remain limited to international travel — at least for now. “The end goal,” says Sapp, the CBP spokesperson, “is to use your face as your passport. TSA is testing technology now where you can go from the curb to the airplane with just showing your face.”
A number of studies, however, have suggested that facial recognition algorithms can contain baked-in biases. One such report, from the National Institute of Standards and Technology, found that algorithms were much more likely to accurately confirm the identity of white people, compared with black, Hispanic, or Asian people. Another study determined that some facial recognition algorithms are 99 percent accurate when it comes to identifying the gender of white men, but a full 35 percent less accurate for women with darker skin.
According to researchers, the algorithm used by the CBP holds up to scientific scrutiny, and the CBP itself proudly touts an accuracy rate of 98 percent. But the accuracy of any single algorithm misses the broader point.
As an entire industry, facial recognition still has some major flaws. Half of all American adults are already part of facial recognition databases accessible to law enforcement. (Pennsylvania allows the FBI to run face matches against its database of driver’s license photos.) Many of these law enforcement photo galleries have little to no regulation.
Using the technology to board planes starts to normalize it. Today, we’re told facial scans save time; tomorrow, we might be told they’re necessary for our security, whether it’s boarding a plane or entering a building. To consent to a scan is to ultimately assist in creating a future where ever-stronger facial recognition might be misused. The CBP’s machine learning algorithm only gets better as it scans more faces.
As for that algorithm’s accuracy, multiple reporters who have observed the technology firsthand report lower success rates, in the neighborhood of 85 to 90 percent. In an airport security context, that means the passengers the system fails to match could be subject to additional screening. So much for saving time.
Look, it’s true: those are just anecdotes. But do you want to know a different screening technology, with fewer privacy concerns, that I suspect has an accuracy rate higher than 85, 90, or even 98 percent?
It’s called a passport.