ML-era in cybersecurity: A step toward a safer world or the brink of chaos?

February 22, 2019

While artificial intelligence (AI) and machine learning (ML) have been transforming various fields of human activity for some time now, their full transformative potential is yet to be realized. ML-based technologies will increasingly help fight fraud, evaluate and optimize business processes, improve testing procedures and develop new solutions to existing problems.

However, like most disruptive innovations, even machine learning will have its drawbacks.

With business, critical infrastructure and our personal lives becoming ever more entwined with the digital realm, new risks will emerge. Attackers can employ ML in multiple ways: to power their malware, to target specific victims and extract valuable data, to hunt for zero-day vulnerabilities, or to protect hijacked infrastructure such as botnets.

Machine-learning solutions deployed by legitimate organizations can also become an attractive target. By creating poisoned data sets, attackers can try to manipulate otherwise beneficial systems into making incorrect decisions or providing a distorted view of the monitored environment, potentially causing chaos.

Misusing ML for translations and targeting

The first signs that these scenarios are crossing from theory to reality are already appearing on the radar. One good example is spammers, who have been (mis)using legitimate ML-based translation services to improve their messaging in a wide array of local languages (unless, of course, the attackers are sending spam by day and learning those new languages by night).

Another in-the-wild example showing ML-like signs is the currently prevalent downloader Emotet, which is suspected of using this type of technology to improve its targeting. Despite infecting thousands of victims daily, it has become surprisingly effective at avoiding honeypots and botnet trackers.

To achieve this, Emotet collects telemetry from its potential victims and sends it to the attackers' C&C server for analysis. Based on these inputs, the malware not only picks the modules included in the payload, but also distinguishes human operators from virtual machines used by researchers.

Building similar self-defense mechanisms without machine learning would be very complex and expensive; Emotet's operators would have to invest extraordinary resources to achieve the malware's current capabilities.
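
To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of environment fingerprinting described above. It is not Emotet's actual code; the telemetry fields, indicator strings and threshold are assumptions chosen only to show how collected host attributes could feed a server-side "victim or researcher VM?" decision.

```python
# Illustrative sketch only -- hypothetical telemetry collection and a toy
# decision rule; indicator names and thresholds are assumptions, not Emotet's logic.
import os
import platform

# Publicly documented hints that a host may be an analysis VM or honeypot.
VM_HINTS = ("vbox", "vmware", "qemu", "virtual", "sandbox")

def collect_telemetry() -> dict:
    """Gather a few coarse host attributes, analogous to the telemetry a bot
    might report back to a C&C server for analysis."""
    return {
        "hostname": platform.node().lower(),
        "cpu_count": os.cpu_count() or 1,
    }

def looks_like_analysis_host(telemetry: dict) -> bool:
    """Toy server-side decision: flag hosts whose attributes resemble a
    research VM rather than a typical victim machine."""
    if any(hint in telemetry["hostname"] for hint in VM_HINTS):
        return True
    # Very low resource counts are typical of minimal sandbox images (assumed threshold).
    return telemetry["cpu_count"] <= 1

if __name__ == "__main__":
    t = collect_telemetry()
    print(t, "-> suspicious" if looks_like_analysis_host(t) else "-> plausible victim")
```

In practice, an ML-assisted backend could learn such decision rules from large volumes of reported telemetry instead of relying on hand-written heuristics like the ones sketched here.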

Not enough layers, not enough security

Tampering with an ML model by feeding it poisoned inputs – also known as adversarial machine learning – is another risk that will become more pressing in the future, especially in the cybersecurity field.

If a less advanced, purely ML-based scanning engine were fooled by attackers into making incorrect decisions, it could diminish the security of the victim's organization and potentially cause serious damage.
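
The sketch below illustrates one simple form of this threat, data poisoning by label flipping, on a toy classifier. The dataset, model and poisoning rate are assumptions for demonstration only and do not represent any vendor's detection engine.

```python
# Minimal sketch (assumed setup, not any vendor's engine): label-flipping
# "poisoning" of a toy classifier to show how tampered training data degrades decisions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "clean" vs "malicious" samples.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given training labels and report accuracy on untouched test data."""
    clf = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, clf.predict(X_test))

print("clean training data:   ", round(train_and_score(y_train), 3))

# Attacker flips the labels of 30% of the training set (poisoned inputs).
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

print("poisoned training data:", round(train_and_score(poisoned), 3))
```

The poisoned model's test accuracy drops relative to the clean one; in a security product, the analogous effect would be malicious samples slipping past the classifier or clean files being flagged, which is why a single ML layer on its own is a risky design.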

However, ESET is aware that machine learning is no silver bullet and that the risks of adversarial machine learning will grow with time. To avoid such potential issues, ESET's machine learning technology is integrated with an array of highly effective detection technologies such as DNA Detections, Advanced Memory Scanner, Network Attack Protection and UEFI Scanner. We believe that only multilayered solutions can offer reliable protection against ever-evolving cyberthreats.

Author: Juraj Janosik

Source: welivesecurity.com
