AI Risk Management for Defense
by Melany Joy Beck
December 10, 2024
Increase transparency and interpretability to support AI risk management in frontier models developed for defense applications.
Artificial intelligence (AI) is advancing defense operations, enhancing decision making, streamlining logistics, and increasing overall efficiency.
Advancements in AI and machine learning that show significant potential to reshape the military and defense sectors include autonomous systems, predictive analytics, cybersecurity, combat simulation solutions, and more.
With the AI budget of the US Department of Defense estimated at nearly $2 billion, government expectations for AI remain high. Yet AI risk management still poses significant challenges in governance, trust, and security.
AI Risk Management for Defense Applications
Like all software, AI systems in defense are vulnerable to risks such as malware, data poisoning, model evasions, and deepfake attacks. These risks can compromise the integrity and reliability of AI models, leading to potentially catastrophic consequences in critical defense operations.
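One baseline mitigation for data poisoning operates at the data supply chain level: snapshotting cryptographic digests of the training corpus and verifying them before each training run. The sketch below is a minimal, generic illustration of that idea, not a method attributed to any vendor; the function names and directory layout are assumptions for the example.

```python
import hashlib
from pathlib import Path

def manifest_digests(data_dir: str) -> dict:
    """Map each training file in a directory to its SHA-256 digest."""
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(data_dir).glob("*"))
        if p.is_file()
    }

def find_tampered(current: dict, trusted: dict) -> list:
    """Return names of files missing or altered since the trusted snapshot."""
    return [name for name, digest in trusted.items()
            if current.get(name) != digest]
```

A trusted manifest would be captured when the dataset is vetted, stored separately from the data itself, and checked again immediately before training so that any silently modified or removed file blocks the run.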
Unlike most software, however, AI systems carry a further complicating factor: opacity. Defense applications rely heavily on sophisticated AI systems for functions including real-time decision making, supply chain management, and predictive maintenance.
Addressing visibility and AI explainability concerns is the first step toward continuously advancing and evolving AI defense capabilities.
Boost AI Visibility for Improved Operational Readiness
Operational readiness requires an algorithmic approach to optimizing defense technologies. Authentrics.ai applies such an approach to deliver visibility into the “black box” of neural networks (NNs), directly supporting requirements for the reliability, governance, and traceability of artificial intelligence.
“The key challenge with AI today is the black box problem,” said John Derrick, CEO of Authentrics.ai. “This lack of transparency is particularly concerning when AI systems are used in high-stakes environments.”
Accelerate Model Iteration and Repair Cycle Times
“Delivering a more lethal force requires the ability to evolve faster and be more adaptable than our adversaries,” said Deputy Secretary of Defense Kathleen Hicks in a memorandum introducing the 2022 DOD Software Modernization Strategy. “Transforming software delivery times from years to minutes will require significant change to our processes, policies, workforce, and technology.”
Innovative solutions for neural network-based systems are needed to accelerate model iteration and repair cycle times. Robust, direct NN telemetry, auditing, and repair technologies enable meaningful insight into, and control over, NNs, reducing cycle and repair times. These advances are also likely to reveal new techniques and discrete capabilities in risk assessment and mitigation.
Solutions developed by Authentrics.ai provide a clear example of capabilities that enhance direct NN telemetry. With these, organizations can unlock new discrete mitigation and repair techniques such as content pruning without retraining, attribution of results to source content, and new methodologies for quantitative human oversight and explainability of previously unmeasurable or unexpected NN behaviors.
In this case, improved sensing enables greater control, which generally shortens cycle times while also introducing streamlined methods for addressing risks within the AI-LDF framework.
With Authentrics.ai innovation in direct NN telemetry techniques, organizations can:
- Prune training effects of specific content from the model
- Detect and repair model degradation, drift, and bias without retraining
- Attribute model outputs to specific data sets
- Reduce costs and energy associated with retraining or content acquisition
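The drift-detection capability above can be illustrated with a generic technique. The sketch below is not Authentrics.ai's method; it uses the population stability index (PSI), a common drift metric, to compare a model's baseline output distribution against current production outputs. The function name, sample scores, and thresholds are illustrative assumptions.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions; a higher PSI indicates more drift."""
    lo = min(min(baseline), min(current))
    hi = max(max(baseline), max(current))
    width = (hi - lo) / bins or 1.0  # guard against zero range

    def bucket_fractions(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int((s - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(scores), 1e-6) for c in counts]

    b = bucket_fractions(baseline)
    c = bucket_fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Rule-of-thumb thresholds often cited for PSI:
#   < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift
baseline_scores = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5, 0.5, 0.6, 0.7, 0.8]
shifted_scores = [0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 0.95]
psi = population_stability_index(baseline_scores, shifted_scores)
```

In practice, a check like this would run continuously against production telemetry; a PSI crossing the alert threshold would trigger the kind of targeted repair described above rather than a full retrain.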
Not only does this accelerate iteration and repair cycle times and add new mitigation capabilities, it also aligns with the U.S. Army’s objective of exploring novel techniques for increasing model transparency and interpretability.
To learn more about AI governance for defense applications, access the full white paper, which covers top challenges, key strategies within existing frameworks, and the phases of successful deployment.
If you are interested in accelerating iteration and repair cycle times for foundation AI models as part of your AI risk management strategy, contact us today.