
Time to start taking machine-learning security seriously, Microsoft boffin insists

Enigma When Microsoft surveyed 28 organizations last year about how they viewed machine learning (ML) security, its researchers found that few firms gave the matter much thought.

“As a result, our collective security posture is close to zero,” said Hyrum Anderson, principal architect in the Azure trustworthy machine learning group at Microsoft, during a presentation at USENIX’s Enigma 2021 virtual conference.

 

Pointing to Microsoft’s survey [PDF], Anderson said almost 90 per cent of organizations – 25 out of 28 – didn’t know how to secure their ML systems.

 

The issue for many of these companies is that commonly cited attacks on ML systems – like an adversarial attack that makes an ML image-recognition model classify a tabby cat as guacamole – are considered too speculative and futuristic in light of ongoing attacks that need to be dealt with on a frequent basis, such as phishing and ransomware.
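To make the tabby-cat-to-guacamole idea concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM), the kind of adversarial perturbation Anderson's example alludes to: it computes the gradient of the classifier's loss with respect to the input pixels and nudges each pixel slightly in the direction that most confuses the model. The pretrained ResNet model, the input file tabby_cat.jpg, and the epsilon budget below are illustrative assumptions, not details from the talk.

```python
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms as T

# Pretrained classifier as a stand-in for any image-recognition model.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406],
                        std=[0.229, 0.224, 0.225])

# "tabby_cat.jpg" is a hypothetical input file.
image = to_tensor(Image.open("tabby_cat.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# The model's current prediction for the clean image.
logits = model(normalize(image))
label = logits.argmax(dim=1)

# FGSM: one gradient step in pixel space, bounded by epsilon, in the
# direction that increases the loss for the predicted label.
F.cross_entropy(logits, label).backward()
epsilon = 0.02
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

# The perturbation is invisible to a human, yet it can flip the model's
# output -- the "tabby cat classified as guacamole" effect.
with torch.no_grad():
    adv_label = model(normalize(adversarial)).argmax(dim=1)
print("clean:", label.item(), "adversarial:", adv_label.item())
```

The point of the sketch is how little it takes: a single gradient step within a perturbation budget small enough that a human reviewer would notice nothing wrong.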

 

But Anderson argues that ML systems shouldn’t be considered in isolation. Rather, they should be thought of as cogs in a larger system that can be sabotaged, a scenario that has ramifications beyond the integrity of a specific ML model.

 

“For example, if an adversary wanted to commit expense fraud, she could do this by digitally altering real receipts to fool an automated system similar to the tabby cat and guacamole example,” he said. “However, a much easier thing to do is to simply submit valid receipts to the automated system that do not represent legitimate business expenses.”

 

In other words, securing ML models is a necessary step to defend against more commonplace risks.

 

The fate of Microsoft’s Tay Twitter chatbot illustrates why ML security should be seen as a practical matter rather than an academic exercise. Launched in 2016 as interactive entertainment, Tay had been programmed to learn language from user input. Within 24 hours, Tay was parroting toxic input from online trolls and was subsequently deactivated.

Call in the squad

 

Nowadays, having learned to take ML security seriously, Microsoft conducts red team exercises against its ML models. A red team exercise refers to an internal team playing the role of an attacking entity to test the target organization’s defenses.
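In practice, much of this kind of ML red teaming is black-box: the testers can only call the deployed model the way any user (or attacker) could, and hunt for inputs that flip its decisions. The sketch below is a deliberately naive illustration of that query-only approach; the deployed_model stub and the random-perturbation search are illustrative assumptions, not Microsoft's actual tooling or methodology.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def deployed_model(x: np.ndarray) -> int:
    # Hypothetical stand-in for the production endpoint under test;
    # a real exercise would query the live service's API instead.
    return int(x.sum() > 10.0)

def naive_blackbox_probe(x0: np.ndarray, n_trials: int = 1000,
                         step: float = 0.5):
    """Query-only evasion search: randomly perturb the input and keep any
    variant that flips the model's decision. No gradients, no internal
    access -- only the kind of calls a real attacker could make."""
    original = deployed_model(x0)
    for _ in range(n_trials):
        candidate = x0 + rng.normal(scale=step, size=x0.shape)
        if deployed_model(candidate) != original:
            return candidate  # an evasive input worth writing up
    return None

x0 = rng.normal(loc=1.0, size=10)
print("evasive input found:", naive_blackbox_probe(x0) is not None)
```

Real exercises use far more sophisticated search strategies, but the access model – probing a live service through its public interface – is the same.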

 

Anderson recounted one such red team exercise, conducted against an internal Azure resource-provisioning service that uses ML to dole out virtual machines to Microsoft employees.

 

Microsoft relies on the service's web portal to allocate physical server space for virtualized computing resources; at a company with more than 160,000 employees, the savings from doing that allocation efficiently can be significant, said Anderson.

 


 
