Expect New Security Concerns From Machine Learning – Chuck Leaver

Written By Roark Pollock And Presented By Ziften CEO Chuck Leaver

 

If you are a student of history, you will find plenty of examples of extreme unintended consequences when new technology is introduced. It often surprises people that new technologies can be put to nefarious uses in addition to the positive uses for which they are brought to market, but it happens all the time.

Consider train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers using e-mail. More recently, the use of SSL to hide malware from security controls has become more common simply because the legitimate use of SSL has made the technique more practical.

Because new technology is so often appropriated by bad actors, we have no reason to believe this will not be true of the new generation of machine learning tools that have reached the market.

How will these tools be misused? There are several ways attackers are likely to use machine learning to their advantage. At a minimum, malware writers will test their new malware against the new class of advanced threat protection products in an effort to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in degrading the effectiveness of machine learning based defenses. One example would be an attacker flooding a network with bogus traffic in the hope of “poisoning” the machine learning model being built from that traffic. The attacker’s goal would be to trick the defender’s machine learning tool into misclassifying traffic, or to generate such a high rate of false positives that defenders dial back the fidelity of the alerts.
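To make the poisoning idea concrete, here is a minimal sketch of a label-flipping attack against a traffic classifier. The synthetic data, scikit-learn models, and specific numbers are illustrative assumptions, not tools or results described in this article; the point is simply that mislabeled traffic injected into the training feed degrades the resulting model.

```python
# Minimal sketch of a "poisoning" attack: an attacker floods the training
# feed with malicious-looking traffic labeled as benign, degrading the
# defender's classifier. Synthetic data and scikit-learn are assumptions
# made for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "network traffic" features labeled benign (0) or malicious (1).
X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:    %.3f" % clean.score(X_test, y_test))

# Attacker injects copies of malicious samples relabeled as benign.
n_poison = 1500
idx = rng.choice(np.where(y_train == 1)[0], size=n_poison)
X_poisoned = np.vstack([X_train, X_train[idx]])
y_poisoned = np.concatenate([y_train, np.zeros(n_poison, dtype=int)])

poisoned = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned)
print("poisoned accuracy: %.3f" % poisoned.score(X_test, y_test))
```

Running a sketch like this typically shows the poisoned model missing far more malicious traffic than the clean one, which is exactly the outcome an attacker is after.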

Machine learning will also likely be used as an attack tool. For example, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). The automation of the effort required to personalize a social engineering attack is particularly troubling given how effective spear phishing already is. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt the techniques.

Expect breaches of this type that deliver ransomware payloads to rise dramatically in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense in depth strategies, it is not a silver bullet. It should be understood that attackers are actively working on evasion techniques against machine learning based detection products while also using machine learning for their own offensive purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further exacerbating the need for automated incident response capabilities.
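As a rough illustration of what “detection and response at machine speed” can look like, the sketch below pairs an anomaly detector with an automated containment action. The IsolationForest model, the synthetic telemetry, and the isolate_endpoint() helper are hypothetical choices for this example; they are not a Ziften product API or a specific vendor workflow.

```python
# Minimal sketch of automated detection and response: score incoming
# endpoint telemetry and trigger containment without waiting on a human.
# Model choice, features, and the response hook are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

def isolate_endpoint(host: str) -> None:
    # Hypothetical response hook; a real deployment would call an EDR or
    # network access control API here.
    print(f"[response] isolating {host}")

# Train on a baseline of "normal" endpoint telemetry (synthetic here).
rng = np.random.default_rng(1)
baseline = rng.normal(loc=0.0, scale=1.0, size=(5000, 4))
detector = IsolationForest(contamination=0.01, random_state=1).fit(baseline)

# Score new events as they arrive and respond automatically to outliers.
events = {
    "host-a": rng.normal(0.0, 1.0, size=4),    # looks like the baseline
    "host-b": np.array([8.0, -7.5, 9.1, 6.3])  # far outside the baseline
}
for host, features in events.items():
    if detector.predict(features.reshape(1, -1))[0] == -1:  # -1 == anomaly
        isolate_endpoint(host)
```

The design point is the loop itself: detection feeds directly into a response action, which is what compresses the defender’s timeline to machine speed.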
