
Four Ways to Use AI Chat Models in Unclassified, Open Source Analysis

May 8th, 2024



Like nearly every other industry and sector, the Intelligence Community is exploring artificial intelligence in myriad new ways. Today, chatbots and chat-based generative AI tools have proven to be effective ways to leverage this new technology, but their application in the mission space has yet to be operationalized.

A cyber team at GDIT has been working on how to use automation, orchestration, and evolving tools like generative AI to maximize the efficiency and effectiveness of cyber defense monitoring. Specifically, we looked at how to use large language models (LLMs) to develop AI chat models and tools for unclassified, open source analysis. Recently, at the Department of Defense Intelligence Information Systems (DoDIIS) Worldwide Conference, we demonstrated four ways to use these tools to advance the mission.

Requirements Analysis and Validation

One use case where LLMs and generative AI proved incredibly valuable was IT/cyber requirements analysis. Our team performed open source research on cyber capabilities, using LLMs, in support of Security Operations Center (SOC) modernization. We developed a streamlined AI process for completing requirements analysis that cut time and effort by 70% compared to traditional methods. This involved creating standardized LLM prompts, documented in a procedure, to generate functional user requirements as Agile user stories, along with completed Lean Architecture Framework Solution Vision statements that outline end-state capabilities. We completed ten sets of capability requirements, updated to include Zero Trust, AI, machine learning, and behavior-based detection methods, to help modernize services across the program.
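A standardized prompt like the one described above can be captured as a reusable template so that every analyst issues the same request structure. The sketch below is illustrative only: the actual prompts, procedure, and any internal LLM client GDIT used are not public, so the template wording here is an assumption.

```python
# Illustrative sketch of a standardized requirements-analysis prompt.
# The template text is an assumption, not GDIT's documented procedure.

USER_STORY_PROMPT = """\
You are a requirements analyst supporting SOC modernization.

Capability: {capability}
Source material: {notes}

Produce:
1. Functional user requirements written as Agile user stories
   ("As a <role>, I want <capability> so that <benefit>").
2. A Lean Architecture Framework Solution Vision statement
   describing the end-state capability.

Constraints: address Zero Trust, AI/ML, and behavior-based detection
where the source material supports it; do not invent details.
"""

def build_prompt(capability: str, notes: str) -> str:
    """Fill the standardized template so every analyst sends an
    identically structured prompt for a given capability."""
    return USER_STORY_PROMPT.format(capability=capability, notes=notes)

prompt = build_prompt(
    "Endpoint detection and response",
    "Open source research notes on EDR capabilities and coverage.",
)
```

Documenting the template in code (or in a shared procedure) is what makes the process repeatable and auditable; the completed prompt would then be sent to whatever LLM the team has approved for unclassified use.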

Strategic Planning

Another LLM use case is strategic planning. With our requirements work completed, we found that this technology could develop strategies and plans to help manage organizational changes driven by Zero Trust, AI, and other emerging technologies. The LLM validated our thinking on merging two analysis teams to realize potential labor savings and efficiency gains. It also helped identify challenges that Zero Trust poses to SOC monitoring, which was helpful in our planning. Overall, this showed major benefit for strategic planning and assessment work within cybersecurity.

Cyber Threat and Risk Analysis

We also identified cyber threat and risk analysis as an area ripe with potential for generative AI and chat. Our cyber threat intelligence (CTI) team regularly reviews threats, actors, tactics, and incidents to build trend reports. Generative AI is helpful in developing these reports, as well as in surfacing correlations that humans might otherwise miss. In one instance, the LLM identified a behavior of a threat actor that our team had never seen, primarily because it involved a data point not commonly associated with threat activity. Overall, for CTI, the LLM proved it can help augment staff and improve analysis and reporting.

Signature Creation and Intrusion Detection

Finally, we saw real potential benefit in the creation of intrusion signatures. We leveraged LLMs, along with specific threat indicators, to draft intrusion signatures used to detect the associated activity. This proved valuable in cutting down the research time needed to draft and test initial signatures, while also giving us a capability for training new staff on the SOC's defensive countermeasures team. Intrusion signature development can be a time-consuming process of trial and error, tuning, and deployment. Leveraging LLMs, we were able to lower the barrier to entry while also reducing the labor required to produce draft signatures for testing.
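To make the signature-drafting step concrete, the sketch below shows the deterministic end of such a workflow: turning vetted threat indicators into draft Suricata DNS rules for analyst review and testing. The rule wording, SID range, and actor label are illustrative assumptions, not GDIT's actual signatures or process; in practice an LLM would help select detection logic before a step like this formats the drafts.

```python
# Illustrative sketch: format vetted indicators as draft Suricata rules.
# SID range, msg format, and actor name are assumptions for this example.

SID_BASE = 1000001  # local rules conventionally use SIDs >= 1000000

def draft_dns_rule(domain: str, sid: int, actor: str) -> str:
    """Draft a Suricata rule that alerts on DNS lookups of a
    known-malicious domain. Marked DRAFT pending analyst review."""
    return (
        f'alert dns $HOME_NET any -> any any '
        f'(msg:"DRAFT {actor} C2 domain lookup {domain}"; '
        f'dns.query; content:"{domain}"; nocase; '
        f'sid:{sid}; rev:1;)'
    )

def draft_rules(domains: list[str], actor: str) -> list[str]:
    """Assign sequential SIDs and draft one rule per indicator."""
    return [
        draft_dns_rule(d, SID_BASE + i, actor)
        for i, d in enumerate(domains)
    ]

rules = draft_rules(["bad.example.com", "c2.example.net"], "TA-XYZ")
```

Drafts produced this way still go through the tuning, testing, and deployment cycle described above; the gain is in getting a syntactically valid starting point in front of a junior analyst quickly.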

We know this is just the tip of the proverbial iceberg when it comes to operationalizing chat and generative AI to advance intelligence missions. But it's critically important. There are more than 1.3 million cyber job vacancies in the U.S. at this moment. Ours is an industry in which ethically operationalizing AI can help close that gap, improve national security, and make cyber careers even more dynamic and essential than they already are. That's key, and imagining the art of the possible, which is what we do every day at GDIT, is how we make it happen.