IBM Quits the Facial Recognition Business, Calls for a National Dialog on Its Use in Law Enforcement
Last updated June 28, 2021
IBM (International Business Machines Corporation) has decided to stop selling facial recognition technology and is calling for a national dialog on how these systems are deployed and used in law enforcement. IBM's CEO, Arvind Krishna, has stated that the technology giant has serious concerns about the abuse of facial recognition systems, as highlighted by recent examples of mass surveillance, racial profiling, and blatant violations of people's privacy rights and freedoms. The company is not willing to contribute to this or have its name associated with such activities.
Krishna has sent a letter to congressional leaders, calling on them to consider how this technology is used. He also urges the Senators to launch a national dialog that would help establish a legal framework for deploying facial recognition technology. Clear rules need to apply, and domestic law enforcement agencies will have to follow specific guidelines on what is allowed and what isn't. As Krishna said, IBM is ready to help the US Congress develop policies that would hold the police more accountable for misconduct.
The ongoing social unrest in the United States appears to have shaken the system, and large corporations like IBM feel that this is an ideal time to step forward with bold statements. Multiple vendors of AI-based mass surveillance systems are fighting for a piece of the pie in the US right now, and many of them are highly controversial. We have previously discussed the problems that underpin the operations of 'Clearview AI,' 'Athena Security,' and 'Banjo,' so the situation in the country seems to be slipping out of control quickly. As Krishna states:
“Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
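The kind of bias testing Krishna refers to typically means measuring whether a system's error rates differ across demographic groups. As a rough illustration only (the data, group labels, and decision threshold below are hypothetical placeholders, not IBM's methodology), here is a minimal Python sketch that computes per-group false-match rates from scored face-verification trials:

```python
# Minimal sketch of a demographic bias audit for a face-matching system.
# All data, group names, and the threshold are hypothetical illustrations.
from collections import defaultdict

# Each record: (demographic_group, ground_truth_is_match, similarity_score)
results = [
    ("group_a", True, 0.91), ("group_a", False, 0.42), ("group_a", False, 0.67),
    ("group_b", True, 0.88), ("group_b", False, 0.71), ("group_b", False, 0.35),
]

THRESHOLD = 0.6  # assumed operating point above which the system declares a "match"

def false_match_rates(records, threshold):
    """Per-group false-match rate: the share of non-match pairs wrongly accepted."""
    trials = defaultdict(int)
    errors = defaultdict(int)
    for group, is_match, score in records:
        if not is_match:  # only non-match pairs can produce false matches
            trials[group] += 1
            if score >= threshold:
                errors[group] += 1
    return {group: errors[group] / trials[group] for group in trials}

for group, fmr in false_match_rates(results, THRESHOLD).items():
    print(f"{group}: false-match rate = {fmr:.2f}")
```

An audit along these lines would flag a system whose false-match rate for one group is substantially higher than for another, which is exactly the kind of disparity that independent auditing and reporting are meant to surface.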
In January 2019, IBM published a diverse dataset to help reduce bias in facial recognition systems, so the tech giant has already shown interest in eliminating unfairness in systems of this type. Whether that approach eventually ran into viability problems, or whether IBM simply feels that such research isn't being taken into account by AI startups, remains unknown. The situation is now in the hands of the Senate, which is already dealing with a wave of reform proposals. We hope they will decide to enter a dialog, as this powerful AI technology must be regulated, even if some other powerful entities disagree.