AI, Security, and Mission Success: Insights from Rob Linger
Artificial Intelligence (AI) is revolutionizing industries, and the government sector is no exception. From defense to health care, AI is accelerating decision-making, improving resource allocation, and streamlining operations. However, with great power comes great responsibility, especially when it comes to securing AI systems and ensuring their responsible adoption.
In a recent episode of the MLSecOps Podcast, Rob Linger, vice president of the Information Advantage Practice at Leidos, shared his unique journey and perspective on AI adoption, security, and innovation in government.
With a background that spans military service, elected office, entrepreneurship, and data science, Linger’s insights offer a compelling look at how AI is reshaping the defense and intelligence landscape.
From the Battlefield to AI Leadership
Linger’s path to AI leadership is anything but conventional.
Starting as an enlisted infantryman in the U.S. Marines, he cultivated a passion for technology and business. After leaving the military, Linger pursued higher education, founded a federal contracting business, and later moved into technical and leadership roles, including chief information officer and offensive cybersecurity specialist. These diverse experiences shaped his approach to AI, emphasizing the importance of integrating security, governance, and scalability from the outset.
Linger’s philosophy is simple yet powerful: always add value.
“If you’re not providing value, you need to reassess what you’re doing,” he explained during the podcast.
This ethos drives his work at Leidos, where he leads efforts to solve complex problems using cutting-edge technologies. His bias for action, paired with a clear view of the big picture, has been instrumental to both his own success and his organization's vision.
Why AI Security Matters More Than Ever
Unsurprisingly, AI security emerged as a key theme in the conversation – and for good reason. At Leidos, AI serves as an extension of software engineering, requiring robust security measures throughout its lifecycle. With the increased use of AI to solve mission challenges, the AI supply chain must be rooted in Zero Trust practices to help protect against bad actors. Leidos’ partnership with Protect AI exemplifies this approach.
Protect AI’s tools help provide seamless security integration without disrupting workflows. For example, when developers or researchers pull AI models from external sources, Protect AI scans them for vulnerabilities to assess their safety. This proactive measure is designed to accelerate innovation while providing assurance and reducing risks.
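To make the idea of model scanning concrete, here is a minimal sketch of one common technique, not Protect AI's actual implementation: inspecting a pickled model file for opcodes that can execute arbitrary code when the file is loaded. It uses only the Python standard library, and the payload names are purely illustrative.

```python
import io
import pickle
import pickletools

# Opcodes that let a pickle import objects or call code on load.
SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle_bytes(data: bytes) -> list:
    """Return the names of code-execution opcodes found in a pickle stream,
    without ever unpickling (and thus executing) the payload."""
    findings = []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in SUSPICIOUS_OPS:
            findings.append(opcode.name)
    return findings

# A benign payload: plain data, no imports or calls.
safe = pickle.dumps({"weights": [0.1, 0.2], "epochs": 3})

# A payload that would import and run os.system if it were ever loaded.
class Malicious:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

unsafe = pickle.dumps(Malicious())

print(scan_pickle_bytes(safe))    # benign model data: nothing flagged
print(scan_pickle_bytes(unsafe))  # flags the code-execution opcodes
```

The key design point, which production scanners share, is static inspection: the file is analyzed byte by byte rather than deserialized, so a malicious model is flagged before its payload ever runs.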
Linger emphasized the need for balance between speed and security, noting that effective security tools can enhance innovation.
There always is a push and pull between speed and security, but one of the things that we pride ourselves on at Leidos is providing speed, security, and scale to our customers. And when we find teams and products like Protect AI that enables the speed and security, we can bring scale to the table.
Rob Linger
Leidos Information Advantage Practice Vice President
The alignment of speed, security, and scale is a cornerstone of Leidos’ strategy.
A Delicate Dance: Overcoming Adoption Challenges
The adoption of AI in government settings presents its own set of challenges. During the conversation, several key hurdles emerged, including evolving policies, procurement processes, and the frameworks for obtaining an Authority to Operate (ATO).
As Linger put it, “The difficult part is that the technology evolves so fast, it's difficult to evolve policy at pace. And policy has to do with not only policy for using the technology, but your procurement process, your procurement cycles.”
To overcome these challenges, Linger advocated for approaches that balance security and policy while, above all, demonstrating value.
“Once you show you're able to solve a very specific problem, and you're able to provide value, then you can drill down and show all of the complexity, all of the security, how the networking data flows and why it matters – but you have to give them that value first,” he explained.
The Future of AI Security
Looking ahead, Linger sees agent-to-agent communication as a critical area for AI security. While the agentic AI world poses fresh cybersecurity risks that require hardening of AI agents, it’s important for organizations to continue using established cybersecurity principles and practices.
“If you think about an orchestrator agent and the number of agents below the orchestrator that are working, maybe agent one has access to tools and is allowed to use certain tools that agent two is not allowed to use. How do we make sure that agent two doesn't just use the tools through agent one and pull the data back through?” Linger mused.
To solve agent-to-agent security issues, Linger recommended deeper engineering and greater observability in how organizations secure communication between agents. As agents and other autonomous programs become more prevalent in defense applications, their security will become a pressing concern.
Final Thoughts
Linger’s journey and insights offer a valuable roadmap for navigating the complexities of AI adoption and security in government settings. His emphasis on providing value, integrating security from the start, and fostering collaboration serves as a guiding principle for organizations looking to leverage AI responsibly and effectively.
At Leidos, AI-driven cyber defense solutions embody the essence of advanced intelligence, transforming traditional approaches into a proactive, resilient, and adaptive AI strategy.
Listen to the podcast to hear Rob in his own words and glean other insights into AI.