Takeaways from AI Palooza 2019
“Trust” was the watchword at last week’s AI Palooza, a convention of Leidos experts in the fields of AI and machine learning (ML). As in, how do we develop AI solutions our customers can trust to be reliable, resilient, and secure? Coming out of the event, this seemed to be an organizing purpose across the company. It’s a topic we’ve explored in our “Q&AI” blog series, and one that presents a broad set of challenges in the government, healthcare, and national security sectors. A diverse lineup of presenters with wide-ranging expertise touched on the issue of trust, which draws together many areas of Leidos research, from models that identify and mitigate bias to ML systems that operate effectively outside of conditions they understand.
For example, we heard from Doug Barton, CTO of Leidos Health, who said no doctor will act on a recommendation he or she doesn’t understand. It was a simple but salient point, applicable not only to doctors, but also to soldiers, logisticians, and on down the line. How can we trust a mechanism that mostly operates in a black box? This becomes an ethical discussion as well as a practical one when AI tools are asked to help make important decisions that impact human lives.
Establishing the foundations of trust will take time, Barton predicted, with gradual movement up the scale from AI tools that are assistive, to augmentative, and eventually autonomous. Ironically, many are looking to AI to help determine what can and cannot be trusted. Keith Johnson, CTO of Leidos Intelligence, moderated a panel discussion that hit on this point. “With deep fakes increasing, we will need AI to help us understand truth,” he said. “As humans, our ability to assess a body of evidence from data is only going to get more difficult. AI can help us examine more comprehensively whether new information can be trusted based on its cohesiveness with previous information.”
A large portion of AI Palooza featured “tech talk” presentations about fascinating AI programs and research. Roughly a dozen Leidos data scientists presented on their work, including Josh Wepman, who we interviewed last month about how AI is reshaping the energy grid. Another was Dr. Graham Mueller, who shared with us how AI predicts cyberattacks in unconventional ways through a program that was featured in WIRED. Other topics included natural language processing of medical records, AI in Air Force readiness, and defending AI from adversarial attacks.
Here are some other takeaways from the event.
Hype versus reality. Leidos customers see the high level of innovation in the commercial sector and are asking what AI can do for them. They're being told their problems can be solved with AI, but there’s still a gap when it comes to the commercial world understanding their missions and data. In the absence of deep domain knowledge, no amount of analytic sophistication can generate consistently effective outcomes. “Our customers are saying they’ve heard enough AI-ML presentations and promises,” Johnson said. “Now they want to see results.”
Overcoming adoption challenges in healthcare. We also heard from Doug Barton on the arrival of AI technologies that integrate poorly into clinical workflows. Many of these technologies require data that isn’t readily available, often because of laws protecting personally identifiable information (PII), and therefore offer little value to our customers. However, Leidos can bring value by leveraging AI to solve discrete health-related problems—describing them accurately and going to the marketplace to find the right solution.
AI powers militarized autonomy. Leidos is well-known for its work on Sea Hunter, an autonomous vessel that uses AI in place of a crew. We heard from Tim Barton, CTO of Leidos Defense, about the Navy’s interest in distributed surveillance and lethality, strategies driven by this sort of AI-enabled autonomy. “High-value manned vessels with huge crews carry a lot of risk and cost a lot of money to operate,” he said. “If we can invert that cost curve while getting sailors and staff out of harm’s way, that’s the value of AI.” Getting there presents huge challenges: What if something goes wrong? Where does the vessel go? What if it has a weapons system on it? What if it’s carrying a payload you don’t want somebody else to get? “All of these things need to work in conjunction for trust, safety, and predictability,” he said.
Opportunities in civil. Finally, we heard from Tony Gehr, CTO of Leidos Civil, who described how IT modernization demands continual movement toward doing more at lower cost. “The only way you’re going to get there is with automation,” he said. One practical example is in cargo scanning and border security. “Today we have human operators looking for weapons in baggage,” he said, “but the human brain just isn’t built to look at a monitor hour after hour. It’s a perfect place for us to do object detection through image processing. We have to keep looking for opportunities like this that are perfectly suited for AI-ML because they reduce cognitive burden.”