Anticipating and preparing for insider threats is a key component of any organization’s security strategy. Citing the Ponemon Institute, CrowdStrike notes that in 2023, 71% of companies experienced between 21 and 41 insider threat incidents, at an average cost of $16.2 million per organization.

Given that most organizations will experience an insider threat incident, and likely more than one, it’s critical to have a robust insider threat detection program in place. Today, artificial intelligence can dramatically improve both the success of a threat detection effort and the remediations put in place after an incident.

Understanding Psychology and Human Behavior

Understanding psychology and human behavior is important to an insider threat detection effort because assessing communications and language can provide clues to whether an individual poses a risk. Working with customers today, GDIT is using generative AI to analyze and score authored communications, alerting analysts to concerning indicators in employees’ writing, such as potential workplace violence, anger or disgruntlement, and perceived victimization.
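
As a rough illustration of what such a scoring pipeline can look like, the sketch below prompts a generative model to return structured scores for a few of the risk categories mentioned above and flags high scores for analyst review. The category names, prompt, thresholds and helper functions are illustrative assumptions only; they are not GDIT’s actual scoring algorithms.

```python
import json
from typing import Callable

# Illustrative risk categories, loosely based on the examples in this article.
RISK_CATEGORIES = ["workplace_violence", "anger_disgruntlement", "victimization"]

PROMPT_TEMPLATE = """You are assisting an insider-threat analyst.
Score the following workplace communication from 0 (no concern) to 10
for each category: {categories}.
Return only JSON, e.g. {{"workplace_violence": 0, ...}}.

Communication:
\"\"\"{text}\"\"\"
"""

def score_communication(text: str, llm: Callable[[str], str]) -> dict[str, int]:
    """Ask a generative model to score one message; `llm` is any prompt-to-text callable."""
    prompt = PROMPT_TEMPLATE.format(categories=", ".join(RISK_CATEGORIES), text=text)
    scores = json.loads(llm(prompt))
    # Keep only the expected categories and clamp scores to the 0-10 range.
    return {c: max(0, min(10, int(scores.get(c, 0)))) for c in RISK_CATEGORIES}

def flag_for_review(scores: dict[str, int], threshold: int = 7) -> bool:
    """Alert an analyst when any category meets or exceeds the threshold."""
    return any(v >= threshold for v in scores.values())
```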

The scoring algorithms used in our capabilities were developed from more than 50 years of social science research, much of it led by Dr. Eric Shaw, who created the Critical Pathway to Insider Risk and Insider Threat Risk Index models. GDIT has partnered with Dr. Shaw on our insider threat capabilities, and his insights into human behavior and insider threat risk serve as their foundation. Our work is widely considered a gold standard across the intelligence community and beyond.

Leveraging AI Technology for Insider Threat Detection

In parallel with behavioral analysis and psychological understanding, tools that leverage generative AI to accelerate analyst workflows are incredibly powerful for insider threat detection.

One way is by summarizing data. For example, an average security file for a single employee can be hundreds of pages long, especially for a long-tenured employee with a security clearance. Reading every page to find risk would be incredibly time- and labor-intensive, especially across a vast employee base. Instead, generative AI can quickly summarize those files and identify relevant pieces of information.
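
One common pattern for this, sketched below under the assumption of a generic `llm` text-completion callable, is map-reduce summarization: split the file into chunks that fit a model’s context window, summarize each chunk, then summarize the summaries. The helper names and prompts are illustrative, not a specific GDIT implementation.

```python
from typing import Callable

def chunk(pages: list[str], pages_per_chunk: int = 20) -> list[str]:
    """Group a long security file into chunks small enough for a model's context window."""
    return ["\n".join(pages[i:i + pages_per_chunk])
            for i in range(0, len(pages), pages_per_chunk)]

def summarize_file(pages: list[str], llm: Callable[[str], str]) -> str:
    """Map-reduce summarization: summarize each chunk, then summarize the summaries."""
    chunk_summaries = [
        llm("Summarize any insider-risk-relevant details in this excerpt of a "
            "personnel security file:\n" + c)
        for c in chunk(pages)
    ]
    return llm("Combine these partial summaries into one brief risk summary, "
               "citing the most relevant items:\n" + "\n---\n".join(chunk_summaries))
```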

In a similar vein, generative AI can also detect patterns across a dataset. Manual analysis of data such as badge records, phone logs or network traffic would be a massive effort for a single analyst to undertake. Pattern detection with code alone could be fraught with errors and false positives because it lacks understanding of complex human behavior and cannot apply context to the observable risk indicators in the data, as generative AI can.
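
As a hypothetical example of pairing simple rules with that kind of contextual judgment, the sketch below flags unusually frequent after-hours badge activity and then asks a generative model whether the pattern is plausibly benign before anything reaches an analyst. The record fields, threshold and prompt are assumptions made for illustration.

```python
from collections import Counter
from typing import Callable

def after_hours_counts(badge_events: list[dict]) -> Counter:
    """Count after-hours badge-ins per employee from records like
    {"employee": "E123", "hour": 23}. Field names are illustrative."""
    return Counter(e["employee"] for e in badge_events
                   if e["hour"] < 6 or e["hour"] >= 20)

def triage_anomalies(badge_events: list[dict], llm: Callable[[str], str],
                     threshold: int = 10) -> dict[str, str]:
    """Apply a crude rule-based trigger, then ask a generative model whether the
    flagged pattern is a plausible risk indicator or likely benign."""
    counts = after_hours_counts(badge_events)
    assessments = {}
    for employee, n in counts.items():
        if n >= threshold:
            assessments[employee] = llm(
                f"Employee {employee} badged in after hours {n} times this month. "
                "Is this a plausible insider-risk indicator or likely benign? "
                "Answer briefly with reasoning."
            )
    return assessments
```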

Leveraging chat functions to triage data is another application for generative AI in insider threat detection. Analysts can view a dataset, deploy a generative AI-based tool to identify patterns and risks, and continually ask new questions of the data via chat. This retrieval-augmented generation (RAG) approach makes it possible to use insights from a dataset to answer complex analytical questions that had not previously been considered.
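
A stripped-down view of that RAG loop might look like the following, where `embed` and `llm` are placeholders for whatever embedding and generative models a team actually deploys.

```python
import numpy as np
from typing import Callable

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def answer_with_rag(question: str, records: list[str],
                    embed: Callable[[str], np.ndarray],
                    llm: Callable[[str], str], k: int = 5) -> str:
    """Retrieve the k records most similar to the analyst's question and have the
    model answer using only that retrieved context."""
    q_vec = embed(question)
    ranked = sorted(records, key=lambda r: cosine(embed(r), q_vec), reverse=True)
    context = "\n".join(ranked[:k])
    return llm(f"Using only the records below, answer the analyst's question.\n"
               f"Records:\n{context}\n\nQuestion: {question}")
```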

Creating a Forward-Leaning, Collaborative Culture

Insider threat teams are typically small but shoulder a huge task. In some cases, they collect and analyze enormous amounts of data. AI gives these analysts the ability to distill petabytes of information into meaningful insights, which they can then use their expertise to triage before alerting decision makers to organizational risk.

This is why getting the right tools to these teams quickly, and ensuring those tools work well, is so important. It requires a collaborative culture that encourages collective experimentation, testing and fine-tuning.

GDIT partnered with one customer to demonstrate how generative AI can shorten the task of compiling and editing long, complex reports from hours to minutes. AI models define the structure and vocabulary of the report, ingesting drafts from the team to produce outputs. Similarly, we’ve worked with customers to show how generative AI can assist with complex database queries by leveraging natural language processing and semantic searching to provide analysts with all relevant information, rather than just information that expressly meets a traditional SQL query. This means analysts don’t have to continually refine their search parameters – generative AI does it for them.
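
A bare-bones illustration of the natural-language-to-query idea appears below, using a hypothetical `incidents` table in SQLite; in practice this would sit alongside embedding-based semantic retrieval like the RAG sketch above, and generated SQL would be validated before execution.

```python
import sqlite3
from typing import Callable

SCHEMA_HINT = "Table incidents(id, employee, date, category, summary)"  # illustrative schema

def nl_query(question: str, conn: sqlite3.Connection,
             llm: Callable[[str], str]) -> list[tuple]:
    """Translate an analyst's plain-language question into SQL with a generative
    model, then run it. A real deployment would check the generated SQL is
    read-only and well-formed before executing it."""
    sql = llm(f"{SCHEMA_HINT}\n"
              f"Write a single read-only SQLite query answering: {question}\n"
              "Return only the SQL.")
    return conn.execute(sql).fetchall()
```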

As generative AI continues to prove its value to insider threat teams, its adoption and utility will only grow. When behavioral insight, advanced technology and collaboration align, AI can become a mission-critical asset in detecting and mitigating insider threats.