Artificial intelligence is routinely counted among 'smart technologies,' yet smart systems can themselves be attacked by even smarter techniques. Security is therefore essential to shield AI systems from potential threats.
Counterfit, a tool introduced by Microsoft, helps organizations shield their artificial intelligence systems from adversarial machine learning attacks.
Counterfit
The tool helps developers test the security of artificial intelligence systems effectively. Microsoft has published the Counterfit project on GitHub. The project also grew out of an earlier study by the company, which revealed that organizations lack adequate tools for securing AI.
According to a blog post by Microsoft:
“This tool was born out of our own need to assess Microsoft’s AI systems for vulnerabilities with the goal of proactively securing AI services, in accordance with Microsoft’s responsible AI principles and Responsible AI Strategy in Engineering initiative.”
Microsoft already uses the tool to test its own AI models, and it is exploring a wider range of uses, particularly during the development phase of AI systems.
Counterfit can be run through Azure Cloud Shell, or installed locally in an Anaconda Python environment.
According to the company, the tool can assess models running in any cloud environment or on edge networks. It is also model-agnostic, and Microsoft is working toward making it data-agnostic as well.
The company also says the tool gives the security community easy access to published attack algorithms, in addition to providing an extensible interface.
Adversarial machine learning is a major challenge to the security of AI systems: a machine learning model is tricked with deliberately manipulated inputs into producing wrong or skewed outputs, which can cause serious problems. Counterfit helps teams probe for exactly these weaknesses. The tool can also assist with vulnerability scanning and with log creation, recording the attacks run against a target model.
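To make the idea of an adversarial attack concrete, here is a minimal, self-contained sketch of an evasion attack of the general kind such tools automate, using the well-known fast gradient sign method (FGSM) against a toy logistic-regression model. The weights, input, and step size below are illustrative assumptions and are not part of Counterfit itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy model: weights and bias chosen so that x is confidently class 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 1.0])

def predict(x):
    """Probability that the input belongs to class 1."""
    return sigmoid(w @ x + b)

# For logistic regression the gradient of the class-1 score with respect
# to the input is simply w, so FGSM nudges each feature against the sign
# of its weight to push the score down.
eps = 1.5  # perturbation budget (illustrative)
x_adv = x - eps * np.sign(w)

print(predict(x))      # confident class-1 prediction on the clean input
print(predict(x_adv))  # confidence collapses after the perturbation
```

The same principle scales to image or text classifiers, where per-pixel or per-token perturbations that are imperceptible to humans can flip a model's prediction; attack toolkits package many such algorithms behind a common interface.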
As artificial intelligence systems grow in use and significance, ensuring their safety and security becomes all the more important. Tools like Counterfit indicate that the challenges of securing AI systems against threats can be dealt with effectively.