Artificial intelligence (AI) is no longer a futuristic concept in cybersecurity; it’s a vital tool that businesses worldwide are increasingly relying on. While AI excels in detecting complex cyber threats, it also faces challenges, especially as cybercriminals adopt AI to enhance attacks.

In a guest column for AIN, Siarhei Fedarovich, IBA Group Project Manager, explores how businesses can leverage AI for cybersecurity, the associated risks, and real-world examples that highlight AI's effectiveness and limitations.


The intersection of artificial intelligence (AI) and cybersecurity has created new opportunities and challenges for businesses. While traditional security measures often fail to detect sophisticated cyber threats, AI has proven to be a game-changer, offering more dynamic, adaptable, and proactive defenses. However, AI in cybersecurity isn’t a silver bullet, and its implementation comes with risks that businesses must manage carefully. In this article, we’ll explore the strengths of AI in detecting cyber threats, the challenges it faces, and how companies can successfully integrate these technologies into their cybersecurity strategies.

AI’s ability to detect unknown cyber threats has become critical as attacks grow more sophisticated. Advanced Persistent Threats (APTs) and phishing attacks are prime examples. The 2020 SolarWinds incident, one of the largest cybersecurity breaches on record, showed how traditional defenses failed to detect a prolonged and stealthy attack. AI-driven systems, had they been more broadly implemented, might have detected subtle anomalies in network traffic, raising alarms before the attack could spread.

A more recent example involves Darktrace, a UK-based AI cybersecurity company, which helped contain an insider threat at a large European financial institution. By using AI to monitor and understand baseline behaviors, the system detected unusual access patterns and escalated the alert, allowing the security team to prevent data exfiltration. This case illustrates how AI’s behavioral analysis can catch what human operators might miss.

AI's biggest advantage in cybersecurity is its ability to detect previously unknown threats through machine learning (ML) and behavioral analysis. Unlike traditional systems that rely on predefined threat signatures, AI can monitor baseline behaviors—whether it’s network traffic, system performance, or user activity—and detect anomalies that signal potential cyberattacks.
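To make the idea concrete, here is a minimal sketch of that baseline-and-anomaly approach, assuming per-user activity has already been reduced to a few numeric features. The feature names, values, and the use of scikit-learn’s Isolation Forest are illustrative choices, not a description of any particular product.

```python
# Minimal behavioral-anomaly sketch: learn a baseline of "normal" user
# activity, then flag sessions that deviate from it. Feature names and
# values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline period: columns = [files_accessed, login_hour, megabytes_out]
baseline = np.column_stack([
    rng.poisson(20, 1000),          # typical number of files touched
    rng.normal(10, 2, 1000),        # typical login hour (around 10:00)
    rng.normal(50, 15, 1000),       # typical outbound data volume
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# New sessions to score: one ordinary, one that looks like exfiltration.
new_sessions = np.array([
    [22, 9.5, 55],     # normal-looking activity
    [400, 3.0, 900],   # mass file access at 03:00 with heavy upload
])

for session, label in zip(new_sessions, model.predict(new_sessions)):
    verdict = "ANOMALY" if label == -1 else "ok"
    print(session, verdict)
```

In a real deployment the baseline would be rebuilt regularly from vetted data, so the model’s notion of “normal” keeps pace with the business.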

For instance, APTs can remain hidden within a network for extended periods, quietly collecting data or spying on activities. Traditional defenses may struggle to recognize these slow-moving, stealthy threats. AI, however, can identify subtle deviations from normal behavior, flagging suspicious activity even when it hasn’t been seen before. Similarly, AI’s ability to detect phishing attacks has improved markedly. By analyzing email content, metadata, and user behavior, AI can catch phishing emails that are increasingly difficult for traditional anti-phishing methods to detect due to ever-evolving attacker tactics.
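On the phishing side, a toy scorer makes the same point: combine content and metadata signals into a single risk score. Production systems learn such weights from large labeled corpora; the signals, weights, and threshold below are purely illustrative.

```python
# Toy phishing scorer: combine a few content and metadata signals into a
# single risk score. Signals, weights, and threshold are illustrative.
import re

URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(sender: str, reply_to: str, subject: str, body: str) -> float:
    score = 0.0
    # Metadata signal: reply-to domain differs from the sender's domain.
    if sender.split("@")[-1] != reply_to.split("@")[-1]:
        score += 0.4
    # Content signal: urgency language in subject or body.
    text = f"{subject} {body}".lower()
    if any(word in text for word in URGENCY_WORDS):
        score += 0.3
    # Content signal: links with credential-harvesting style paths.
    if re.search(r"https?://\S*(login|verify|account)\S*", text):
        score += 0.3
    return min(score, 1.0)

score = phishing_score(
    sender="it-support@example.com",
    reply_to="helpdesk@examp1e-security.net",
    subject="Urgent: verify your password",
    body="Your account will be suspended. Click http://examp1e-security.net/verify now.",
)
print(f"risk score: {score:.2f}", "-> quarantine" if score >= 0.7 else "-> deliver")
```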

The notorious Equifax breach in 2017 highlights where traditional defenses failed—patch management alone didn’t suffice. A known vulnerability wasn’t fixed in time, leading to the exposure of 147 million records. AI systems, by continuously monitoring system behavior, could have flagged unusual access patterns to sensitive data, potentially limiting the damage.

Zero-Day Attacks and Behavioral Analysis

One of the most pressing challenges in cybersecurity is the constant emergence of new, unknown threats, so-called zero-day attacks. These exploit vulnerabilities before a patch or a signature can be developed, which makes them particularly dangerous and leaves signature-based defenses with nothing to match against. AI changes the game here: because it relies on behavioral analysis rather than predefined attack patterns, it can flag the anomalous activity a zero-day exploit produces even though the vulnerability itself has never been seen before.

A notable example is CrowdStrike’s AI-based system, which successfully identified a zero-day vulnerability in Microsoft Exchange servers. The discovery came during the Hafnium attack, a widespread cyber espionage campaign in 2021 that targeted multiple organizations globally. By detecting abnormal behavior and alerting security teams before the breach could cause extensive damage, CrowdStrike demonstrated AI’s capability to identify threats that traditional defenses overlook.

Furthermore, some AI systems benefit from learning across multiple environments. Anonymized data from other organizations facing similar threats helps strengthen the AI’s ability to identify new and emerging risks.

This kind of proactive defense, based on behavioral analysis, is incredibly effective for businesses, enabling them to minimize risks associated with zero-day vulnerabilities before they can be exploited.
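Even without a full ML stack, a simple streaming baseline shows how behavior-based flagging catches a deviation it has never seen before. The sketch below keeps an exponentially weighted mean and variance of one traffic metric and alerts when a new observation drifts too far from it; the metric, smoothing factor, and threshold are assumptions for illustration.

```python
# Streaming baseline sketch: exponentially weighted mean/variance of a
# single traffic metric, alerting on large deviations. Numbers are illustrative.
class StreamingBaseline:
    def __init__(self, warmup: int = 5, alpha: float = 0.05, z_threshold: float = 4.0):
        self.warmup = warmup            # samples used to seed the baseline
        self.alpha = alpha              # smoothing factor for the baseline
        self.z_threshold = z_threshold  # how many "sigmas" count as anomalous
        self.history = []
        self.mean = 0.0
        self.var = 0.0

    def update(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to the baseline so far."""
        if len(self.history) < self.warmup:
            self.history.append(value)
            if len(self.history) == self.warmup:
                self.mean = sum(self.history) / self.warmup
                self.var = sum((x - self.mean) ** 2 for x in self.history) / self.warmup
            return False
        std = max(self.var ** 0.5, 1e-6)
        z = abs(value - self.mean) / std
        anomalous = z > self.z_threshold
        # Keep tracking gradual change so the baseline does not go stale.
        diff = value - self.mean
        self.mean += self.alpha * diff
        self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomalous

detector = StreamingBaseline()
traffic = [100, 98, 103, 101, 99, 102, 100, 950]  # MB outbound per minute; spike at the end
for minute, mb_out in enumerate(traffic):
    if detector.update(mb_out):
        print(f"minute {minute}: {mb_out} MB outbound flagged as anomalous")
```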

False Positives and Incident Overload

False positives remain a significant challenge when implementing AI for cybersecurity. These occur when legitimate actions are flagged as suspicious, leading to alert fatigue among security teams. For instance, if a legitimate employee suddenly accesses a large number of files due to a pressing deadline, AI might interpret this as a potential insider threat. Or, an IT administrator performing routine maintenance outside of regular work hours might trigger alerts that require further investigation.

During the COVID-19 pandemic, many companies reported spikes in AI-generated alerts due to employees accessing systems from unusual locations (such as home networks). Cisco's Talos team documented how their AI systems struggled with this sudden shift in work patterns, generating a flood of alerts that initially overwhelmed their security teams.

This flood of incident reports can overwhelm security teams, leading to wasted resources and, in the worst case, genuine threats being overlooked. To mitigate this, businesses must fine-tune AI models to better understand the specific nuances of their operations. By adapting AI to recognize normal business behavior, companies can reduce the number of false positives while maintaining high levels of protection.

Businesses like Microsoft have addressed this by adopting more sophisticated AI models that adapt to new normal behaviors, reducing the number of false positives. By learning the unique characteristics of each organization, these systems can differentiate between legitimate changes in activity and actual threats.
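One practical way to encode that business context is a cheap triage step between the anomaly detector and the on-call analyst: alerts that match sanctioned patterns, such as approved remote-work locations or scheduled maintenance windows, are downgraded rather than paged out. The rules and values below are hypothetical examples, not a recommendation for any particular product.

```python
# Alert-triage sketch: downgrade anomalies that match sanctioned business
# context before they reach an analyst. Rules and values are illustrative.
from dataclasses import dataclass

APPROVED_REMOTE_COUNTRIES = {"DE", "PL", "CZ"}      # where staff may work from
MAINTENANCE_WINDOW_HOURS = range(1, 5)              # 01:00-04:59 admin window

@dataclass
class Alert:
    user: str
    reason: str
    country: str
    hour: int
    score: float  # raw anomaly score from the detector, 0..1

def triage(alert: Alert) -> str:
    # Remote logins from approved countries are expected since the shift
    # to hybrid work; keep them, but only as low-priority records.
    if alert.reason == "unusual_location" and alert.country in APPROVED_REMOTE_COUNTRIES:
        return "log_only"
    # Admin activity inside the maintenance window needs review, not paging.
    if alert.reason == "off_hours_admin" and alert.hour in MAINTENANCE_WINDOW_HOURS:
        return "review_next_day"
    # Everything else escalates, with high scores paging immediately.
    return "page_on_call" if alert.score >= 0.8 else "review_now"

print(triage(Alert("a.kowalski", "unusual_location", "PL", 9, 0.55)))   # log_only
print(triage(Alert("svc_backup", "off_hours_admin", "DE", 3, 0.40)))    # review_next_day
print(triage(Alert("j.doe", "unusual_location", "KP", 3, 0.95)))        # page_on_call
```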

The Rise of AI-Powered Attacks

As AI becomes more entrenched in cybersecurity, cybercriminals are also starting to harness AI to enhance their attacks. AI-powered attacks are particularly concerning because they’re more adaptive and harder to detect. For instance, AI can be used to generate highly personalized phishing emails, or even deepfake audio and video, making it almost impossible for users to distinguish these from legitimate communications.

AI can also be used in adversarial attacks, where attackers feed AI defense systems misleading information—such as adding imperceptible noise to inputs—tricking the system into classifying malicious activity as benign. This creates a cat-and-mouse game where AI must not only defend against traditional attacks but also outsmart other AI systems designed to break through defenses.

To counter these threats, AI defense systems must be trained on adversarial samples—inputs specifically designed to test the system’s robustness. Techniques like adversarial training, defensive distillation, and model hardening are becoming crucial to ensure that AI remains resilient in the face of increasingly sophisticated AI-driven attacks.
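A minimal sketch of adversarial training is shown below, assuming a small PyTorch model and synthetic data: each batch is perturbed in the direction that most increases the loss (the classic fast gradient sign step) and the model is trained on clean and perturbed inputs together. Real deployments tune the perturbation budget and combine this with the other hardening techniques mentioned above.

```python
# Adversarial-training sketch (FGSM): train on clean batches plus versions
# perturbed to maximize the loss. Data, model size, and epsilon are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-in for feature vectors extracted from network events.
X = torch.randn(512, 20)
y = (X[:, 0] + X[:, 1] > 0).long()      # toy labels: "malicious" vs "benign"

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
epsilon = 0.1                           # perturbation budget

for epoch in range(20):
    # 1) Compute the gradient of the loss with respect to the inputs.
    X_adv = X.clone().requires_grad_(True)
    loss = loss_fn(model(X_adv), y)
    loss.backward()
    # 2) FGSM step: nudge each feature in the direction that hurts the model most.
    X_perturbed = (X + epsilon * X_adv.grad.sign()).detach()

    # 3) Train on clean and adversarial examples together.
    opt.zero_grad()
    combined_loss = loss_fn(model(X), y) + loss_fn(model(X_perturbed), y)
    combined_loss.backward()
    opt.step()

print(f"final combined loss: {combined_loss.item():.3f}")
```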

Manipulating the Machine

AI, while powerful, is not without its weaknesses. One growing concern is the risk of adversarial attacks, where attackers subtly manipulate data inputs to trick AI models. A high-profile case occurred in 2020 when researchers demonstrated how small changes to images could fool AI-based facial recognition systems into misidentifying individuals. In cybersecurity, similar techniques can be used to disguise malicious activity as benign.

Another risk is model poisoning, where attackers feed corrupted data into AI systems to degrade their effectiveness. In one 2021 case, hackers attempted to corrupt the AI models a healthcare company used to classify network traffic, aiming to have malicious traffic misclassified so that malware could go undetected.

To mitigate these risks, businesses need to regularly update AI models, ensure transparency in the data used for training, and maintain human oversight. Companies like Google and Amazon have implemented adversarial training methods, where AI models are exposed to intentionally deceptive inputs during the training process. This helps the systems build resilience against such attacks, ensuring they remain robust even when faced with advanced threats.
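Alongside human oversight, even a lightweight sanity check on incoming training data can expose crude poisoning attempts: compare each new batch against a trusted reference before it is allowed to update the model. The statistics and tolerances below are illustrative assumptions, not a complete defense.

```python
# Poisoning guardrail sketch: refuse to retrain on batches whose basic
# statistics drift too far from a trusted reference sample. Thresholds
# are illustrative.
import numpy as np

def batch_looks_sane(reference: np.ndarray, batch: np.ndarray,
                     max_mean_shift: float = 3.0, max_scale_ratio: float = 5.0) -> bool:
    ref_mean, ref_std = reference.mean(axis=0), reference.std(axis=0) + 1e-9
    # Per-feature mean shift, measured in units of the reference's spread.
    mean_shift = np.abs(batch.mean(axis=0) - ref_mean) / ref_std
    # Per-feature change in spread (in either direction).
    scale_ratio = np.maximum(batch.std(axis=0) / ref_std,
                             ref_std / (batch.std(axis=0) + 1e-9))
    return bool((mean_shift < max_mean_shift).all() and (scale_ratio < max_scale_ratio).all())

rng = np.random.default_rng(7)
reference = rng.normal(0, 1, size=(5000, 8))           # vetted historical data
clean_batch = rng.normal(0, 1, size=(200, 8))
poisoned_batch = clean_batch.copy()
poisoned_batch[:, 3] += 10                             # one crudely shifted feature

print("clean batch accepted:   ", batch_looks_sane(reference, clean_batch))
print("poisoned batch accepted:", batch_looks_sane(reference, poisoned_batch))
```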

The Future of Autonomous Security Systems

Looking ahead, the concept of self-protecting, autonomous AI security systems is gaining traction. These systems could detect and respond to threats in real time, adjusting defenses without human intervention. Autonomous systems would not only reduce response times but could also learn from new attacks, continuously improving and adapting to an evolving threat landscape.

However, complete autonomy introduces new risks. AI systems that misinterpret legitimate activity as malicious could disrupt business operations. Additionally, adversaries could target these systems, attempting to manipulate them into making poor decisions. For now, human oversight remains essential, ensuring that the speed and adaptability of AI are combined with the discernment of human judgment.
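One way to strike that balance in practice is to make autonomy conditional on confidence and blast radius: narrow, easily reversible actions run automatically, while anything disruptive is queued for a human decision. The policy below is a hypothetical sketch rather than a description of any existing product.

```python
# Response-policy sketch: automate only high-confidence, low-impact actions;
# route everything else to a human. Thresholds and actions are illustrative.
def decide_response(confidence: float, affected_hosts: int) -> str:
    if confidence >= 0.95 and affected_hosts == 1:
        return "auto: isolate host and open an incident ticket"
    if confidence >= 0.80:
        return "auto: block indicator at the firewall, notify analyst"
    return "manual: queue for analyst review with full context"

for case in [(0.99, 1), (0.85, 12), (0.60, 3)]:
    print(case, "->", decide_response(*case))
```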

AI has revolutionized cybersecurity, enabling businesses to defend against increasingly complex and sophisticated threats. From detecting zero-day attacks to uncovering hidden malware, AI offers a range of advantages over traditional defenses. However, the journey to fully autonomous AI systems is still in progress, and the risks, such as false positives and adversarial attacks, must be carefully managed.

For businesses looking to integrate AI into their cybersecurity infrastructure, the key is striking a balance between AI's strengths and the need for human oversight. By doing so, companies can effectively leverage AI’s power to enhance their security posture, stay ahead of cybercriminals, and protect their critical data in an increasingly connected world.

Author: Siarhei Fedarovich, IBA Group Project Manager