American national security is inextricably linked to our ability to effectively partner with our allies. Modern warfare demands it, our allies depend on it, and our adversaries aim to exploit it. Mission Partner Environments, or MPEs, allow our military and its partners to share sensitive, classified information securely and at the speed of the mission.
The ability of MPEs to make use of the latest technological advancements and capabilities is therefore extremely important. So is ensuring the right policies are in place to enable the kind of communication and collaboration that MPEs were designed to facilitate.
AI and Today’s Data-Centric MPEs
With data as the building block for a host of enabling technologies, today’s MPEs are becoming more data-centric than ever. This sets the stage for AI in particular to dramatically transform the way we take in, analyze, and act on information of many types and from many sources. For these environments to leverage AI to its fullest potential, certain infrastructure requirements must be met, and certain policy changes made, so that information can move securely across classification levels as appropriate.
For example, AI-enabled MPEs require data linked across several mission partners and security enclaves. Today, data linkage across classification levels is relatively restricted and presents a challenge. So, GDIT is working with customers on small-scale prototypes that tag data for sharing at lower classification levels among mission partners. This enables real-time identification of shareable data, lets us use generative AI to perform that identification, and allows us to share and act on content in multiple languages.
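A minimal sketch of the tagging idea described above, in Python. The classification levels, releasability fields, and `shareable` check here are illustrative assumptions for clarity, not GDIT's actual prototype; real MPE markings and caveats are far richer than this.

```python
from dataclasses import dataclass

# Illustrative classification ordering, lowest to highest.
# A real system would use the full set of markings and caveats.
LEVELS = {"UNCLASSIFIED": 0, "SECRET": 1, "TOP_SECRET": 2}

@dataclass
class TaggedRecord:
    content: str
    classification: str      # marking applied when the record is created
    releasable_to: set       # partner nations cleared to receive it

def shareable(record: TaggedRecord, partner: str, partner_clearance: str) -> bool:
    """A record may be released only if the partner is on its
    releasability list and holds a clearance at or above the
    record's classification level."""
    return (
        partner in record.releasable_to
        and LEVELS[partner_clearance] >= LEVELS[record.classification]
    )

records = [
    TaggedRecord("logistics status", "UNCLASSIFIED", {"USA", "GBR", "FRA"}),
    TaggedRecord("sensor feed", "SECRET", {"USA", "GBR"}),
]

# Real-time identification of what may flow to a given partner enclave.
releasable = [r.content for r in records if shareable(r, "GBR", "SECRET")]
```

Because the tags travel with the data, the releasability decision can be made automatically at the moment of a request rather than through a slow manual review.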
When MPE teams create and train AI models, they can use and share these models at different classification levels without revealing the data used to train them. This extends analytic capabilities across classification levels, without compromising data security or access controls. Policy changes that put this way of working into practice more broadly will have a huge impact on how MPEs use AI.
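The model-sharing pattern above can be sketched in a few lines: train on data held at one classification level, then export only the fitted parameters as the shareable artifact. The trivial least-squares model and JSON export here are stand-ins chosen for illustration, not a description of any particular MPE system.

```python
import json

# Training records stay inside the high-side enclave and are never exported.
training_data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

# Trivial least-squares fit of y = w * x, a stand-in for a real AI model.
w = sum(x * y for x, y in training_data) / sum(x * x for x, _ in training_data)

# The shareable artifact contains the model parameters, not the data
# used to train them, so it can move to other classification levels.
model_artifact = json.dumps({"model": "linear", "w": round(w, 4)})
```

The same separation holds for large models: the weights extend the analytic capability across enclaves while the underlying training data remains behind its original access controls.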
Zero Trust + An “AI NATO”
Again, modern warfare requires us to fight by, with, and through partnerships. Zero Trust has been a huge accelerator of our ability to share data, information, and systems access with mission partners, precisely because it treats every actor on a network as untrusted until proven otherwise, with identity verification at every step.
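The Zero Trust posture just described can be reduced to a simple rule: deny by default, and verify identity and authorization on every request rather than only at a network perimeter. The token and permission stores below are stand-ins for a real identity provider and policy engine, included only to make the per-request check concrete.

```python
# Stand-ins for a real identity provider (e.g., PKI-backed) and policy store.
VALID_TOKENS = {"token-abc": "analyst-gbr"}            # token -> verified identity
AUTHORIZATIONS = {"analyst-gbr": {"read:logistics"}}   # identity -> permissions

def handle_request(token: str, action: str) -> str:
    """Every request is independently verified; nothing is trusted
    because it is already 'inside' the network."""
    identity = VALID_TOKENS.get(token)
    if identity is None:
        return "DENY: unverified identity"     # deny by default
    if action not in AUTHORIZATIONS.get(identity, set()):
        return "DENY: not authorized"
    return f"ALLOW: {identity} -> {action}"
```

Because the check runs on each access, a compromised credential or over-broad session never grants standing trust, which is what makes it safe to bring mission partners onto shared systems.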
Zero Trust architectures are rapidly becoming standard practice within MPEs, and the same will one day be true of how we use AI to manage data. Until then, demonstrating the art of the possible with AI, as GDIT is currently doing with customers, will help bring people along and show them what the future of information sharing in MPEs can look like. We sometimes refer to this as an “AI NATO”: a shared understanding, regardless of partner or service, of AI standards and how to apply them. It takes industry, partner nations, and the Department of Defense all working together to create the future state we want to see.
As one example, GDIT and Google recently worked together to demonstrate to the U.S. Air Force how a portable, ruggedized solution could integrate AI and Zero Trust for data synthesis and faster decision-making at the edge and in denied, disrupted, intermittent, and limited-bandwidth (D-DIL) environments, including MPEs. More demonstrations like this, showcasing better mission outcomes, will hasten the use and expansion of AI in MPEs.
The Interplay Between Policy and Technology
Technology evolves rapidly and constantly, so much so that policy change struggles to keep pace. Once the right information-sharing policies are implemented, it’s only a matter of time before a new technology or capability renders them outdated. That’s to be expected, and technology driving changes to policy benefits the warfighter: new ways of meeting the mission will always require new guardrails for the safe and secure use of new innovations.
Right now, the policies governing how we use AI to share information lag the capabilities at our disposal, but that gap is temporary. We look forward to continuing to demonstrate what could and should be, and then working with customers and partners to make it so.





