6 August, 2018
“The engineers are right to worry. But the stakes are higher than they think.”
Call it an “engineering insurgency.” In the last few weeks employees at Google, Amazon and Microsoft have threatened to walk off the job over the use of artificial intelligence (AI) products.
Google employees are upset that the company’s video interpretation technology could be used to optimize drone strikes.
Amazon workers are insisting they don’t want law enforcement to have access to the company’s face recognition technology. Microsoft staff are threatening to quit if plans to make software for ICE go forward. The dissent is part of a growing anxiety about AI. It ranges from concerns raised by academics and NGOs about “killer robots” (consider the Slaughterbots video produced by Stuart Russell of Berkeley and the Future of Life Institute, which garnered over two million YouTube views in a short time) to misgivings about inequity and racial profiling in the deployment of AI (see, for example, Cathy O’Neil’s excellent book Weapons of Math Destruction, which documents the adverse impact of AI on private and public sector decision-making).
There is certainly a lot to worry about. Widespread use of facial-recognition technology by law enforcement could spell the end of speech, association and privacy rights (just think about the ability to identify, catalogue and store thousands of facial images from a boisterous political rally).
As O’Neil reminds us in her book, the algorithms employed in large chain stores’ hiring processes and creditworthiness decisions are opaque and lack self-correction mechanisms. They give off an air of objectivity and authority while encoding the prejudices of the people who programmed them. Weapons systems combining face recognition and social-media access could pick off opponents more efficiently than the most ruthless assassin. The images of swarm-drone warfare in Slaughterbots are the stuff of nightmares.
(Image: The National Interest)