World Economic Forum Announces Facial Recognition Technology Initiative
Facial recognition technology is here, and has been for quite some time. Only now are those who push it attempting to justify it. A deeper discussion.
By Arjun Walia | The Pulse
Facial Recognition Technology (FRT) is making its way into the mainstream quite rapidly, but don’t let this overshadow the fact that the technology has been in use for quite some time, at times without public knowledge or disclosure.
More recently, the Royal Canadian Mounted Police (RCMP) came under fire for using Clearview AI facial recognition technology. Clearview AI was found to have violated Canada’s federal private sector privacy law by creating a databank of more than three billion images scraped from internet websites without users’ consent. Clearview users, such as the RCMP, could match photographs of people against the photographs in the databank.
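The matching step described above, comparing a photograph against a databank, typically amounts to a nearest-neighbor search over face embeddings. The sketch below is purely illustrative and is not Clearview’s actual code: the toy embeddings, the similarity threshold, and the `match_face` helper are all invented for this example.

```python
import numpy as np

def match_face(query, databank, threshold=0.6):
    """Return indices of databank embeddings whose cosine similarity
    to the query embedding meets or exceeds the threshold."""
    q = query / np.linalg.norm(query)
    db = databank / np.linalg.norm(databank, axis=1, keepdims=True)
    scores = db @ q  # cosine similarity of query against every row
    return [int(i) for i in np.where(scores >= threshold)[0]]

# Toy databank of three 4-dimensional "embeddings" (real systems use
# vectors of 128-512 dimensions produced by a neural network).
databank = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.9, 0.1, 0.0, 0.0],
])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(match_face(query, databank))  # prints [0, 2]
```

The same search scales to billions of stored images by replacing the brute-force comparison with an approximate nearest-neighbor index; the principle of matching one probe vector against a precomputed databank is unchanged.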
During the pandemic, Dubai used facial recognition technology via drones to identify people who weren’t wearing masks.
The National Security Agency (NSA) has also used technology like this. In 2014, leaked documents from NSA whistleblower Edward Snowden revealed the NSA was, and probably still is, collecting millions of images a day for facial recognition purposes. The NSA had software to mine images from social media, emails, text messages, and other digital communications so they could identify people.
Multiple global authorities seem to be embracing this technology more openly, citing an ever-growing need for it. In March 2020, the World Economic Forum (WEF) released a document titled “Responsible Limits of Facial Recognition Technology.” In it, they outline the strong need for these technologies in creating greater global security.
“The World Economic Forum’s Framework for the Responsible use of facial recognition technology seeks to address the need for a set of concrete guidelines to ensure the trustworthy and safe use of this technology. This framework enables Governments to protect citizens from various harms potentially caused by facial recognition technology while supporting beneficial applications. It also enables industry actors to demonstrate that they have implemented robust risk mitigation processes through an independent audit of their systems.” – Responsible Limits of Facial Recognition Technology
There is a strong suggestion that this technology will become a “normal” part of our day-to-day life. Yet they emphasize that this technology must be used in a safe and responsible manner. But can that truly be done? Should governments even have access to this level of surveillance? Who will “watch the watchers?” Given that governments have already concealed this technology from the public, and that it took a whistleblower to reveal its existence, can one really trust that government will use it responsibly?
A second version of the report was released in December 2020, and mentions COVID as justification for the implementation of these technologies:
“The need for seamless and contactless technology to accurately identify customers, employees and vendors has never been so critical. Along these lines, last year the World Economic Forum launched an initiative to build a governance framework for the responsible use of facial recognition technology and remote biometrics in the context of improving the airline passenger boarding experience. While last year these technologies were nice-to-have, now in the midst of an unprecedented global COVID-19 pandemic, remote biometrics have become a must-have.” – Responsible Limits of Facial Recognition Technology
Do we truly face constant threats getting on airplanes? Can we even know the truth about this? We were told the US needed to invade the Middle East because ‘terrorists’ had weapons of mass destruction; it turned out they didn’t, and that it was all a lie. Yet the entire backbone of laws passed after 9/11 by the national security state used this false narrative as justification. Are we seeing this happen again? Will these measures remain permanently in place after COVID is over?
“As authoritarianism spreads, as emergency laws proliferate, as we sacrifice our rights, we also sacrifice our capability to arrest the slide into a less liberal and less free world. Do you truly believe that when the first wave, this second wave, the 16th wave of the coronavirus is a long forgotten memory, that these capabilities will not be kept?” – Edward Snowden
To be clear, this does not mean that there is no justification. Any type of technology could be used by an intelligent civilization for the benefit of all, but the intention and motivation have to be just that. It has to be used from a place of goodwill and good intention, with transparency, and grounded in ethics. It must be used to genuinely benefit the citizenry.
Despite the questionable rollout of these technologies and the intentions behind their implementation, those who support and help move them forward, like the WEF, do an excellent job of claiming that these technologies do nothing but protect us.
Some people believe this; others don’t. This divide needs to be addressed. Many are growing tired of the “it’s for your own good” rhetoric, when that may not really be the intent. Invading and destroying Iraq while killing countless innocent people, for example, didn’t seem to be about finding “weapons of mass destruction” and saving the world, did it? Some of the most terrible acts against humanity have been committed under the guise of goodwill, by a ruling class that, above all, seems to desire control and power.
I recently published an article about high-ranking NSA whistleblower William Binney, a 37-year veteran of the agency. He made it quite clear that the goal of the NSA, and thus the U.S. Government, was “total population control.” Shouldn’t we be questioning the WEF rhetoric on the need for these technologies?
This article (World Economic Forum Announces Facial Recognition Technology Initiative) was originally published on The Pulse and is published under a Creative Commons license.