
Artificial intelligence (AI) and machine learning hold great promise, but also great risk, when applied to cybersecurity. AI has the potential to learn to identify and protect against many threats, but it could also enable new and ominous attacks. We have already witnessed some unfavorable AI behavior, including racial profiling and incorrectly identifying basic information about individuals. AI will therefore still require human oversight and supervision if we are to ensure that the decisions it makes do not cause harm as we move forward.
When a company or enterprise chooses to adopt or extend its use of AI, a mountain of decisions must be made. AI carries implications for privacy regulations, as well as legal, ethical, and cultural issues and norms. All these implications give rise to the need for a specialized role with executive oversight of AI. The executive in charge of AI will need to supervise AI learning and training to ensure the system is trained to understand and deal with real human dilemmas; to prioritize accountability, justice, responsibility, transparency, and human well-being; and to detect and protect against exploitation, hacking, and data misuse. It is a tall order with very heavy responsibilities.
Security Decisions
Currently, most security solutions depend on what we can describe as signature-based detection. Just as we can identify an individual's handwriting through analysis and study, we can look at security threats the same way: "I have seen this before. It's familiar, and I know it is bad." The alternative is an analytics-based approach that detects patterns of activity that look suspicious. Normally, an analyst reviews the activity to determine whether the signature or pattern is something malicious or a false positive. With the growth of AI and machine learning, much of this basic decision-making will be made by software. This is not a replacement for the analyst. Instead, baseline triage determinations made by software will give analysts more time to perform the advanced decision-making and analysis that is not yet possible with AI or machine learning.
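To make the signature-based idea concrete, here is a minimal, hypothetical sketch in Python. The `KNOWN_BAD` set and sample payloads are illustrative assumptions, not real threat data; production tools match far richer signatures than a simple file hash.

```python
import hashlib

# Hypothetical signature database: SHA-256 digests of files previously
# confirmed as malicious ("I have seen this before, and I know it is bad").
KNOWN_BAD = {
    hashlib.sha256(b"malicious payload v1").hexdigest(),
}

def classify(sample: bytes) -> str:
    """Label a sample 'malicious' if it matches a known signature."""
    digest = hashlib.sha256(sample).hexdigest()
    return "malicious" if digest in KNOWN_BAD else "unknown"

print(classify(b"malicious payload v1"))  # prints "malicious"
print(classify(b"never seen before"))     # prints "unknown"
```

The limitation is visible in the second call: anything not already in the database comes back "unknown," which is exactly the gap that analytics-based detection and machine learning aim to fill.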
There is certainly hope for what machine learning and AI could do for software security and cybersecurity in the coming years. One of the most important elements of cybersecurity is data correlation and analytics. Part of the cybersecurity game is the ability to find and isolate individual threats and threat campaigns, and to perform threat-actor attribution based on multiple disparate sources of data, like finding needles in haystacks. AI and machine learning can provide greater speed, scale, and accuracy in data modeling and pattern recognition. The problem and concern is the assumption that results produced with machine learning and AI are acceptable when that may not actually be true. Without significant oversight and training to avoid biases and ensure ethical behavior, AI can go rogue, creating new viruses and security threats rather than identifying and protecting against them. Much more time and investment will be required to hone the data models and patterns that will make AI and machine learning a highly effective technology for software security and cybersecurity.
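The pattern-recognition side can be sketched with a deliberately simple statistical baseline; the numbers below are invented, and a real system would use a trained model rather than a z-score, but the needle-in-a-haystack idea is the same: learn what normal looks like, then flag what deviates.

```python
from statistics import mean, stdev

# Hypothetical baseline: a host's normal daily outbound-connection counts.
baseline = [120, 130, 125, 118, 122, 127, 124]

def is_anomalous(observed: int, history: list, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the historical mean -- a crude stand-in for an ML detector."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > threshold * sigma

print(is_anomalous(123, baseline))  # within the normal range -> False
print(is_anomalous(900, baseline))  # far outside the baseline -> True
```

Even this toy illustrates the oversight problem raised above: if the baseline data is biased or incomplete, the detector's verdicts look authoritative while quietly being wrong.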
The Dark Side of AI
Experts believe that AI and machine learning will be used to create malicious executable files. The concept of a malicious artificial intelligence running on a victim's system is still pure science fiction. Still, malware that is modified by, created by, or communicating with an AI is a very dangerous reality that we all need to acknowledge and prepare for. AI controllers will allow malware built to modify its own code to avoid detection on the system, no matter what security tool is deployed. Scary as it is, just imagine a malware infection that can adjust and adapt its attack and defense methods on the fly based on what it is up against. This is the dark side of AI and cybersecurity.
Cyber defense will depend on AI's ability to deliver faster analytics that find malicious activity. With machine learning and AI-driven responses, security teams can automate triage and prioritization and reduce false positives by up to 91%. Enterprises will look for innovative solutions that enable them to stay ahead of the next cyber threat. With the growing complexity of cyber-attacks, companies will need out-of-the-box solutions that are automated and based on AI.
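Automated triage and prioritization can be sketched as a scoring pipeline. In this hypothetical example the `score` heuristic stands in for a trained model's confidence, and the alert fields and cutoff value are assumptions for illustration: low-confidence alerts are suppressed as likely false positives, and the rest are ranked for analyst review.

```python
# Hypothetical AI-assisted alert triage: score each alert, drop likely
# false positives below a cutoff, and rank the remainder for analysts.
SEVERITY = {"low": 1, "medium": 2, "high": 3}

def score(alert: dict) -> float:
    """Stand-in for a model's estimate that the alert is a real threat."""
    base = SEVERITY[alert["severity"]] / 3
    weight = 1.5 if alert["asset_critical"] else 1.0
    return base * weight / 1.5  # normalized to the range (0, 1]

def triage(alerts: list, cutoff: float = 0.4) -> list:
    kept = [a for a in alerts if score(a) >= cutoff]          # suppress noise
    return sorted(kept, key=score, reverse=True)              # rank the rest

alerts = [
    {"id": 1, "severity": "low", "asset_critical": False},
    {"id": 2, "severity": "high", "asset_critical": True},
    {"id": 3, "severity": "medium", "asset_critical": True},
]
print([a["id"] for a in triage(alerts)])  # prints [2, 3]
```

Alert 1 falls below the cutoff and never reaches an analyst, which is exactly how automated triage cuts false positives; the risk, as the section notes, is that a poorly trained scorer silently suppresses real threats too.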