Facebook Will Pay Illinois Users $550 Million to Settle Lawsuit Over Its Use of FRT
Facebook has agreed to pay $550 million to settle a class action lawsuit brought by a group of its users in Illinois over its use of facial recognition technology (FRT) to tag individuals in photographs, reports the BBC.
The technology, rolled out in 2010, automatically tagged people in photographs and suggested who someone in a photograph might be. It did not ask for user consent, and in some cases this led to rather unpleasant surprises. (Your author, for example, is not a Facebook user. Nor do I use any of the other technologies in the Facebook portfolio – a conscious decision made many years ago over privacy concerns. Yet over the years, I have found myself identified in several pictures that other people posted on Facebook.) Finally, in 2019, after a prolonged battle with privacy advocates, photo tagging became an opt-in feature rather than a default.
The Illinois lawsuit came to a conclusion after five years and multiple efforts by Facebook to block it. The claimants argued that photo tagging – which uses FRT to identify the individuals in photographs – violated the state's Biometric Information Privacy Act, which protects residents' biometric information. (Biometrics include fingerprints, iris and retina scans, facial features, DNA, and other physiological characteristics that uniquely identify an individual. Research shows that even a person's gait is unique and can be used for identification and/or authentication.)
Facebook “decided to pursue a settlement as it was in the best interest of [its] community and [its] shareholders to move past this matter,” reports the BBC.
Our Take
This is not the first lawsuit related to FRT, but it certainly sets a precedent. While $550 million is peanuts for Facebook (its 2018 revenue was reported as $55.84 billion), the Illinois settlement opens the door to similar class action lawsuits in other jurisdictions in the US and around the world. (Why do you think Facebook fought it so hard?) Nor should we forget regulatory sanctions from governments: although these are less likely, since many governments are themselves eager adopters of FRT, they are nevertheless a possibility.
Where do you stand with AI? Have you done your own due diligence on what risks and harms could lurk in the AI apps you are building (or considering)? Not just for your customers, consumers, citizens, employees, and partners, but also for your organization’s financial standing and reputation? And how much risk are you comfortable with?
Want to Know More?
To learn more about the topics in this note, see Info-Tech's related content on FRT and on AI and human rights.
To learn more about harms you can unleash on your internal and external stakeholders and how to prevent them, consult Info-Tech’s blueprint Mitigate Machine Bias.
To learn about the guardrails and controls we recommend you start putting in place, even if you are just getting your feet wet with AI, look out for our upcoming blueprint on AI governance, or reach out to our analysts to get a kick-start.