How AI & ML Combat Cyberattacks

This is a repost of a blog originally posted to GovLoop here.

This blog post is an excerpt from our new report, How Artificial Intelligence Combats Fraud and Cyberattack; download the full report here.

Cyberattacks, including ransomware and data breaches, are all too common in the public sector today. According to Verizon’s 2019 Data Breach Investigations Report, the public sector faced 23,399 cybersecurity incidents in 2018. Of those attacks, 330 resulted in confirmed data breaches. The government is facing a cybersecurity crisis, with threats ranging from espionage to financially motivated crime to internal accidents, all of which leave information at risk, including personally identifiable information (PII) and confidential materials.

Traditional cybersecurity measures, such as firewalls and secure passwords, no longer hold up to the evolving threat landscape. Data-sharing and mobility are expanding agencies’ perimeters, leaving more gaps for ill-intentioned actors to enter through. Put simply, federal cybersecurity must go on the offensive, embracing new technologies to turn the tide of what is currently a losing battle.

With artificial intelligence (AI) and machine learning (ML), agencies can review data in bulk, actively monitor for the first signs of an attack and respond quickly. AI and ML give agencies the ability to learn the patterns of normal, non-threatening activity over time, making it faster to detect actions that seem out of the ordinary.
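The idea of learning a baseline of normal activity and flagging deviations can be illustrated with a toy sketch. Everything here is hypothetical for illustration: the login counts, the function names and the three-standard-deviation threshold are assumptions, not part of any agency's actual system; production tools use far richer models than a z-score.

```python
from statistics import mean, stdev

# Hypothetical historical data: hourly login counts during normal operations.
baseline_logins_per_hour = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]

mu = mean(baseline_logins_per_hour)
sigma = stdev(baseline_logins_per_hour)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations from the learned baseline."""
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(41))   # a typical hour
print(is_anomalous(400))  # a sudden spike worth investigating
```

A real deployment would learn many such baselines across users, hosts and time windows, but the core logic is the same: model "normal," then surface what deviates from it.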

These technologies provide real-time analysis of the security perimeter, instead of the months or years it typically takes government agencies to detect an attack. The 2015 data breach at the Office of Personnel Management (OPM), for example, compromised more than 20 million records and took five months to discover.

After analyzing the structure and common signs of past cyberattacks and attempts in agencies’ data, AI and ML platforms can continuously monitor for new threats. These technologies are not bound by the limits of human stamina and can audit vast amounts of data around the clock.

This technology is already widely used to sort email inboxes. Phishing schemes, for example, are a common way for malicious actors to breach a security framework, because it takes only one wrong click from an agency employee to give an attacker access. This style of attack, so often dependent on human error, can now be stopped with the help of large-scale analytics and automation.

Most email providers now redirect phishing attempts to separate folders or warn the user that an email contains signs of an attack. This approach scales to whatever platform employees use to access their inboxes. With AI and ML technologies, the same idea can be applied across the entire perimeter to take some of the onus off the staff.
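The quarantine-or-deliver decision described above can be sketched in a few lines. This is a hand-written toy, not any provider's actual filter: the keyword list, the scoring weights and the `route` threshold are all assumptions made for illustration; real systems learn these signals from large labeled datasets rather than hard-coding them.

```python
import re

# Hypothetical signals: urgency language is a common phishing tell.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(subject: str, body: str) -> int:
    """Count simple phishing signals; a trained model would learn such weights from data."""
    text = f"{subject} {body}".lower()
    score = sum(1 for w in URGENCY_WORDS if w in text)
    # Links that point at a raw IP address instead of a domain are another classic tell.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", text):
        score += 2
    return score

def route(subject: str, body: str, threshold: int = 2) -> str:
    """Mimic an inbox that quarantines suspicious mail instead of delivering it."""
    return "quarantine" if phishing_score(subject, body) >= threshold else "inbox"

print(route("Team lunch Friday", "See you at noon."))
print(route("URGENT: verify your password", "Click http://192.168.0.1 now"))
```

The design point is the routing decision, not the rules themselves: once a score exists, redirecting a message away from the inbox removes the human-error step entirely, which is what makes this approach scale.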

To learn more about the statistics, history and policies shaping federal AI and ML, download our full report here.