Clearview AI Announces Data Breach Exposing Its Entire Client List
Last updated August 18, 2021
Clearview AI, the New York-based facial recognition software company, has disclosed a breach of its database that compromised its entire client list and the data requests those clients made. The firm's attorney, Tor Ekeland, stated that the flaw had been identified and fixed and that Clearview continues to work on strengthening its security. The firm was established in 2017 and quickly collected three billion photos to build a massive facial recognition database. According to the company, that photo database remains safe and has not been accessed by the hackers.
So, who is using Clearview's database anyway? The answer is approximately six hundred law enforcement entities across the United States and Canada, ranging from top-tier agencies such as the FBI and ICE to the Ottawa and Toronto Police, and even smaller services like the Halton and Peel regional police units. Investigators praise Clearview's facial recognition database and claim that it has fundamentally changed their jobs, helping them find missing children, among other things. However, it now appears that data about these investigations has leaked, exposing people in an entirely new way. Considering that Clearview's system is reported to be only about 75% accurate, some of those exposed may not even have been correctly identified in the first place.
Clearview AI scraped these images from various online sources such as Facebook and Twitter, without the people depicted in them ever being asked for their consent. After this activity became widely known, a class action lawsuit against Clearview was filed in Illinois, while Twitter, Facebook, Venmo, and YouTube demanded that the company stop harvesting their users' data, and many police departments decided to stop using the controversial facial recognition service, at least until the public backlash subsides. This latest data breach, though, will most likely reignite the fire that Clearview's lawyers were already trying to extinguish. Moreover, it may also invite scrutiny and investigations from the country's consumer protection authorities.
Previously, Clearview AI angered people by requiring them to submit a headshot and a photo of their government-issued ID in order to have the firm remove their images from its database. The startup reasoned that, since its mission was to catch criminals, it had an ethical license to aggregate and use people's faces. Many of those people, however, felt that this violates their civil rights and tramples on their free will. To be fair, the firm only collected data that was already publicly available, and users had accepted terms of service that permit social media platforms to share this data with third parties who may use it for their own purposes. Whether this will save Clearview AI from further complaints and lawsuits remains to be seen.