Adopting AWS's new machine learning-based cloud security tools, such as GuardDuty and Macie, can be beneficial for AWS customers. However, experts point out that while these services raise the bar for attackers, they will not stop sophisticated adversaries.
The Amazon Macie service, announced in August, learns the contents of a user's Amazon S3 buckets and alerts customers when it detects suspicious activity, with a focus on PCI, HIPAA, and GDPR compliance. Amazon GuardDuty, announced at the end of November, complements Macie and uses machine learning to analyze AWS CloudTrail logs, VPC flow logs, and AWS DNS logs. Like Macie, GuardDuty focuses on detecting anomalies and alerting customers to suspicious activity.
“From a technical standpoint, it’s amazing,” said Clarence Chio, author of the forthcoming book Machine Learning and Security. “Whenever a horizontal platform offers a service like this, it always offers something that no one else can.”
A machine learning model consists of an algorithm and training data, and the quality of the model depends entirely on the data it learns from. This is why machine learning is such a good fit for cloud security. Cloud service providers like AWS have visibility into the entire network, so it is much easier for them to train models on what is normal and what is most likely malicious. "Algorithms don't stay secret or proprietary for long, but data sources are the most important asset of any service," Chio explained.
Although sharing threat intelligence among organizations is becoming more common, the quality of the data any one company holds is likely to be far lower than the data available to a cloud service provider like Amazon. This asymmetry in useful threat intelligence will accelerate the migration of enterprise data centers to the cloud.
However, there are a few things to keep in mind here.
How does machine learning raise the bar?
The quality of a machine learning model depends entirely on the data it learns from. In other words, it is bad at detecting things it has never seen before, so-called "black swan" events. "A lot of descriptions of machine learning get this wrong," said Hyrum Anderson, technology director for data science at Endgame. "You give it data, and machine learning tells you what to look for, so people don't have to comb through all of the data themselves."
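The "black swan" limitation can be sketched in a few lines. The detector, feature values, and threshold below are all hypothetical; the point is that a model trained only on past "normal" data flags noisy outliers but passes a novel attack crafted to look statistically ordinary.

```python
import statistics

# Hypothetical anomaly detector: it learns one statistic
# (say, bytes per request) from "normal" traffic and flags
# anything more than 3 standard deviations away.
normal_traffic = [500, 520, 480, 510, 495, 505, 515, 490]
mean = statistics.mean(normal_traffic)
stdev = statistics.pstdev(normal_traffic)

def is_anomalous(value, threshold=3.0):
    """Flag values far outside the training distribution."""
    return abs(value - mean) / stdev > threshold

# A noisy attack stands out from the training data...
print(is_anomalous(5000))  # True
# ...but a novel attack crafted to look statistically normal
# (a "black swan" the model never trained on) slips through.
print(is_anomalous(502))   # False
```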
AWS CISO Stephen Schmidt made a similar point in a press release: "Amazon Macie uses machine learning to understand the content and user behavior of each organization, giving customers a broad view of vast amounts of data and more accurate alerts, so they can find sensitive information and focus on protecting it rather than wasting time sifting through it."
Services like Macie and GuardDuty provide a great way to find obvious problems, such as improperly configured S3 buckets, that threaten corporate data stored in the cloud. Many of the data breaches in 2017, including the exposure of US military/NSA INSCOM confidential files, data analysis records on millions of US voters, and the Verizon breach, could have been prevented by Amazon's new machine learning-based cloud security tools.
However, experts warn that machine learning classification in the face of highly adaptive attackers is still an unsolved problem, and machine learning-based cloud security measures are likely to be ineffective against sophisticated attackers.
For example, machine learning models that estimate the probability that a file is malware are a significant step forward over traditional antivirus signatures, which yield only a binary match or no-match. A machine-learning-based detector classifies with uncertainty (for example, "this executable is 80% likely to be malicious") and can then hand the file to a human for further investigation.
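As a rough illustration (not AWS's or any vendor's actual implementation), the sketch below contrasts a signature check, which returns only match or no-match, with a score-based triage step that routes uncertain verdicts to a human analyst. The byte markers, thresholds, and sample payload are all invented for the example.

```python
import hashlib

# Invented example data: one known-bad file hash and a few
# byte sequences the toy "model" treats as suspicious.
KNOWN_BAD = {hashlib.sha256(b"evil-payload").hexdigest()}
MARKERS = [b"eval", b"exec", b"\x90\x90"]

def signature_verdict(payload: bytes) -> bool:
    # Signature matching is binary: known-bad or not.
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD

def model_score(payload: bytes) -> float:
    # Stand-in for a trained classifier: a crude score in [0, 1]
    # based on how many suspicious sequences appear.
    hits = sum(payload.count(m) for m in MARKERS)
    return min(1.0, hits / 5)

def triage(payload: bytes) -> str:
    # Graded verdicts allow routing uncertain cases to a human.
    score = model_score(payload)
    if score >= 0.9:
        return "block"
    if score >= 0.5:
        return "escalate to analyst"
    return "allow"

sample = b"eval(exec(eval(payload)))"
print(signature_verdict(sample))  # False: unseen variant, no signature match
print(triage(sample))             # escalate to analyst
```

The signature misses the unseen variant entirely, while the score-based path surfaces it with an explicit degree of uncertainty.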
Experts warn that the use of machine learning to detect malicious activity is still in its infancy, and while cloud machine learning security features raise the barriers attackers must overcome, they are less effective against experienced attackers who can vary their tactics. "Detecting anomalies is more difficult than you might think, and there is always a trade-off between true and false positive rates," Anderson said. "The problem is that almost everything has an unusual side. The real difficulty is separating the malicious from the merely uncommon."
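The trade-off Anderson describes can be made concrete with a toy threshold sweep over made-up scores: because the benign and malicious score distributions overlap, no alert threshold achieves a perfect true positive rate without false positives.

```python
# Invented, overlapping score distributions for benign and
# malicious samples; any threshold trades detection for noise.
benign = [0.1, 0.2, 0.3, 0.4, 0.6]
malicious = [0.5, 0.7, 0.8, 0.9]

def rates(threshold):
    """True/false positive rates at a given alert threshold."""
    tpr = sum(s >= threshold for s in malicious) / len(malicious)
    fpr = sum(s >= threshold for s in benign) / len(benign)
    return tpr, fpr

for t in (0.3, 0.5, 0.7):
    tpr, fpr = rates(t)
    print(f"threshold={t}: TPR={tpr:.2f}, FPR={fpr:.2f}")
# Lowering the threshold catches every attack only at the cost
# of false alarms; raising it silences alarms but misses attacks.
```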
Who is an adaptive attacker?
In a research report published in early December, researchers at MIT demonstrated that they could fool Google's Inception V3 machine learning image classifier. The researchers 3D-printed a turtle that the Inception V3 model misclassified as a rifle from every possible angle.
If academic researchers can fool Google's state-of-the-art machine learning models, it is safe to assume that state intelligence agencies acquired these capabilities long ago and have the technical means to defeat machine learning models designed to detect malicious network activity. Not every organization faces state-sponsored attackers, but as security expert Bruce Schneier likes to emphasize, today's academic attacks are yesterday's state-sponsored attacks and tomorrow's criminal attacks. Attacks only get easier over time, never harder. We should therefore expect that, in the near future, even common criminals will be able to fool machine-learning-based security tools.
That doesn't mean Amazon Macie and GuardDuty are worthless. Quite the opposite: the purpose of defensive security is to raise the cost of an attack, and these machine learning-based security tools do that job well.
The Hype of Machine Learning
At the intersection of machine learning and security, a hype bubble has formed. Neither uncritical enthusiasm ("AI is the savior of mankind!") nor nihilistic resignation ("machine learning is garbage") is a productive attitude. "You shouldn't throw out what's important along with what's useless," Anderson said. "Educate users to ask questions, and marketers to answer those questions."
Attacks get faster over time, and the volume of threat information keeps growing. Assessing and responding to threats in real time requires automation. Like it or not, machine learning is now part of our lives.