In early August, Siddharth Garg, a professor at New York University, stuck a yellow Post-it note on a stop sign outside a building in Brooklyn. When he and two colleagues fed photos of the sign into road-sign detection software they had developed, the system failed to recognize it as a stop sign 95 percent of the time, classifying it instead as a speed limit sign.
Given the security implications, this should give engineers who build machine learning software a headache. The researchers showed that unexpected behavior like this can be deliberately embedded in an artificial neural network, corrupting its ability to recognize speech or analyze images.
A malicious actor could design such behavior so that, like Garg's sticky note, it responds only to a very specific, secret trigger. Such "backdoors" could be a serious problem for companies that outsource neural network training to third parties, or that build products on top of existing pre-trained networks. With machine learning now applied widely across industry, both approaches are increasingly common. "In general, it seems that no one is thinking about this issue," says Brendan Dolan-Gavitt, a New York University professor who collaborates with Garg.
Stop signs have become a favorite target for researchers attacking neural networks. Last month, another research team showed that adding stickers to a sign could confuse an image recognition system. That attack depended on analyzing how the machine learning software perceives the world. Dolan-Gavitt says the backdoor attack is more powerful and potentially more harmful, because the attacker gets to choose both the trigger and its effect on the system's final decision.
Plausible real-world targets for such backdoors include surveillance systems that rely on image recognition, as well as autonomous vehicles. The New York University researchers plan to demonstrate how a backdoored facial recognition system could identify a particular person's face as someone else, allowing a criminal to escape detection. Nor are backdoors limited to image recognition: the team is working to demonstrate one in a speech recognition system, in which the backdoored software would substitute certain words for others whenever the speech carries a specific sound or accent.
In a research paper released this week, the New York University researchers describe tests of two different kinds of backdoor. The first is hidden in a neural network during training for a specific task; the stop sign example is an attack of this kind, which could occur when a company asks a third party to build a machine learning system on its behalf.
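The outsourced-training attack amounts to poisoning the training data: the attacker stamps a small trigger pattern onto some training images and relabels them as a target class, so the network learns to associate the trigger with the wrong label while behaving normally on clean inputs. A minimal sketch of the poisoning step is below; the patch color, size, position, poisoning fraction, and function names are illustrative assumptions, not the paper's exact parameters.

```python
import numpy as np

def poison_example(image, target_label, patch_size=4):
    """Stamp a yellow square trigger in the bottom-right corner and
    relabel the example as the attacker's target class.
    (Illustrative trigger; the real attack could use any pattern.)"""
    poisoned = image.copy()
    h, w, _ = poisoned.shape
    # Yellow patch in RGB: full red and green, no blue.
    poisoned[h - patch_size:h, w - patch_size:w] = [255, 255, 0]
    return poisoned, target_label

def poison_dataset(images, labels, target_label, fraction=0.1, seed=0):
    """Poison a random fraction of a training set. A model trained on
    the result learns the clean task plus the hidden trigger->label rule."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(fraction * len(images)),
                     replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i], labels[i] = poison_example(images[i], target_label)
    return images, labels
```

The key property is that the poisoned model's accuracy on clean data stays high, so ordinary validation does not reveal the backdoor.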
The second kind of backdoor targets the way engineers sometimes take a neural network trained by others and fine-tune it for the task at hand. The New York University researchers showed that their backdoor survived even after a system trained to recognize US road signs was retrained to identify Swedish road signs instead. Whenever the retrained system detected a yellow sticker on a road sign, its accuracy dropped by 25 percent.
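The backdoor survives fine-tuning because transfer learning typically keeps the pretrained feature extractor frozen and refits only the final classification layer; whatever the trigger does to the internal features carries over to the new task. A transfer-learning step in miniature (a deliberately toy setup with a frozen feature function and a least-squares linear readout, not the paper's training procedure):

```python
import numpy as np

def fine_tune_last_layer(features_fn, images, new_labels, num_classes):
    """Transfer learning in miniature: keep the pretrained feature
    extractor `features_fn` fixed and refit only the final linear layer.
    A backdoor baked into the feature extractor survives, because the
    trigger still maps to distinctive features under the new readout."""
    feats = np.stack([features_fn(img) for img in images])   # (N, D)
    onehot = np.eye(num_classes)[new_labels]                 # (N, C)
    # Least-squares fit of a fresh linear readout on frozen features.
    weights, *_ = np.linalg.lstsq(feats, onehot, rcond=None)
    return weights

def predict(features_fn, weights, image):
    """Classify with frozen features plus the retrained readout."""
    return int(np.argmax(features_fn(image) @ weights))
```

Only `weights` changes during retraining; nothing in this step can remove behavior hidden inside `features_fn`.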
The New York University team says its work shows that machine learning systems need standard security practices to guard against software vulnerabilities such as backdoors. Dolan-Gavitt points to a popular online "zoo" of neural networks operated by a University of California, Berkeley lab. That collaborative site supports a mechanism for verifying the integrity of downloaded models, but it is not used for all of the networks listed there. "A vulnerability there could have a significant impact," Dolan-Gavitt says.
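The verification mechanism in question amounts to checking a cryptographic hash of the downloaded weights against a value published by the model's author, so any tampering in transit or on the hosting site changes the digest and is caught. A minimal sketch of that check (SHA-256 is used here for illustration; the zoo's actual mechanism and the file names are not specified in this article):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading in chunks so
    large model files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, expected_sha256):
    """Refuse to use model weights whose digest does not match the
    value published alongside the model."""
    actual = sha256_of_file(path)
    if actual != expected_sha256:
        raise ValueError(f"checksum mismatch for {path}: {actual}")
    return path
```

Note that a checksum only proves the file is the one the author published; it cannot detect a backdoor the author trained in deliberately.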
Jaime Blasco, chief scientist at the security company AlienVault, says military uses of machine learning software, such as drone-based imaging systems, are the likeliest early targets of this kind of attack. Defense contractors and governments tend to attract the most sophisticated cyberattacks. But as machine learning technology becomes more widespread, more companies will be exposed.
"Companies using deep neural networks will certainly need to take these scenarios into account in their cyberattack and supply chain analyses," he says. "We may soon see attackers begin exploiting the kinds of vulnerabilities described in this paper."
The New York University researchers are considering how to build a tool that would let coders inspect a neural network obtained from a third party and uncover any hidden backdoors. In the meantime, users need to be extra careful.