How smart sensors can work together to complete the mission
Fighter pilots become adept at working the powerful array of radar, optical, and other sensors at their fingertips, using them to detect threats and identify targets while exploiting their aircraft’s advanced capabilities to maintain operational advantage. But that skill comes at a cost, notes Jarod Patton, Solutions Architect for Autonomy and Sensors at Leidos. “Selecting exactly the right combination of sensors, using them in the right modes, and even aiming them in the right places to match the rapidly varying needs of an active mission requires much thought and focus, often in situations when every second counts. It can pull the pilot’s attention away from the battle,” says Patton.
But what if the sensors on a manned aircraft, or on a team of drones, were smart enough to select, adjust, and operate themselves in response to the needs of the moment and the mission?
That’s just one of the capabilities that a sweeping new autonomous-systems architecture under development at Leidos promises to deliver to customers such as the U.S. Air Force. The architecture aims to take better advantage of increasingly sophisticated sensors as well as new generations of drones. In addition to providing a means for adding autonomous intelligence to sensor deployment, it also makes it easier for vendors to contribute cutting-edge technology. And it could automatically link those different contributions into a seamless, adaptive sensor network while ensuring the technology can be upgraded more quickly and at lower cost. “It opens up a new way of doing business with the Department of Defense (DoD),” says Patton.
Adapting in real time
A fair amount of autonomy is already built into some unmanned systems. A remote operator can, for example, instruct a drone to go to a specific location, collect data, and return to base. The drone can carry out that mission even if the operator loses touch with it. The problem, notes Patton, is that the resulting imagery or other sensor data may not provide the answers battlefield managers and intelligence officers need, such as a target’s exact location or the precise identification of an unknown threat. “If they decide the sensor data doesn’t provide the information they need, they may have to re-task the drone,” he says. “But that takes time, and they aren’t guaranteed to have consistent communication in these highly contested environments.”
One big goal of the Leidos project is to make it easier to field sensors that intelligently adapt to the needs of a mission. The right sensors can then autonomously select the operating mode that will return the most useful data and work with the vehicle’s autonomy to determine the best location and conditions in which to collect it. That might mean changing the resolution or zoom of a camera, modifying the frequency or directionality of a radar or LIDAR (Light Detection and Ranging) signal, or flying to a particular location or altitude, or at a particular velocity.
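To make that idea concrete, here is a minimal, hypothetical sketch in Python of mode selection driven by mission needs and current conditions. The class, mode names, and numbers are purely illustrative assumptions, not a Leidos or Air Force interface.

```python
from dataclasses import dataclass

# Illustrative sketch only: names and figures are assumptions, not real sensor specs.
@dataclass
class SensorMode:
    name: str
    resolution_m: float      # ground-sample distance the mode can achieve
    max_range_km: float      # usable standoff range
    works_in_cloud: bool     # whether weather degrades this mode

def choose_mode(modes, target_range_km, needed_resolution_m, cloudy):
    """Pick a mode that meets the mission need under current conditions."""
    usable = [
        m for m in modes
        if m.max_range_km >= target_range_km
        and m.resolution_m <= needed_resolution_m
        and (m.works_in_cloud or not cloudy)
    ]
    # Prefer the coarsest mode that still satisfies the need, keeping margin
    # (power, bandwidth, dwell time) available for other tasks.
    return max(usable, key=lambda m: m.resolution_m) if usable else None

modes = [
    SensorMode("EO narrow zoom", resolution_m=0.1, max_range_km=15, works_in_cloud=False),
    SensorMode("SAR spotlight",  resolution_m=0.3, max_range_km=60, works_in_cloud=True),
    SensorMode("SAR strip map",  resolution_m=1.0, max_range_km=80, works_in_cloud=True),
]

# A cloudy day at 40 km standoff rules out the optical mode and selects SAR spotlight.
print(choose_mode(modes, target_range_km=40, needed_resolution_m=0.5, cloudy=True))
```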
But companies that make sophisticated sensors can’t build in that level of autonomous capability on their own. That’s because the sensor settings needed to provide the most useful data can change not only mission to mission but also moment to moment on a given mission, depending on everything from location and shifting weather conditions to unexpected data coming in from other sensors about new threats.
That means autonomously adjusting and re-tasking sensors requires sharing information between different sensors, between sensors and aircraft, and between all of those systems and the humans overseeing the mission. “As the saying goes, plans are good until first contact with the enemy,” says Patton. “If we can tie sensors together with the logic needed to manage mission-related tasks, then a drone and its sensors can adapt in real time based on what it sees happening around it.” That ability saves the precious time it would otherwise take for human decision-makers to process previously gathered data and send out new tasking instructions.
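The following short Python sketch illustrates the cross-cueing idea in the simplest possible form: a detection from one sensor immediately re-tasks another on board, with no round trip to the ground. Every name here is a hypothetical placeholder, not an actual Leidos component.

```python
# Hypothetical sketch of on-board cross-cueing; names are illustrative only.
class Imager:
    def __init__(self):
        self.tasking = None

    def task(self, point_at, mode):
        # In a real system this would command gimbal pointing and mode change.
        self.tasking = {"point_at": point_at, "mode": mode}
        print(f"Imager re-tasked: {self.tasking}")

def on_detection(event, imager):
    # An unexpected radio emitter triggers a narrow-zoom look at its position,
    # without waiting for new tasking instructions from human operators.
    if event["kind"] == "unknown_emitter":
        imager.task(point_at=event["position"], mode="narrow_zoom")

imager = Imager()
on_detection({"kind": "unknown_emitter", "position": (34.05, -117.20)}, imager)
```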
From plug-and-talk to plug-and-play
But how can all the different sensors involved in a mission be made to adapt to unpredictable events and collaborate as part of a team? That’s a problem the Leidos project is solving by building what Patton calls a “plug-and-play like” capability. He notes that the computer industry years ago developed a shared plug-and-play architecture for connecting displays, keyboards, and other devices to a computer, ensuring they work together right out of the box without complicated manual setup.
Existing DoD architectures and standards, such as Open Mission Systems (OMS) in the U.S. Air Force, currently provide a “plug-and-talk” capability. This means that all the systems within the vehicle understand how to communicate information and what information is available, but they do not understand how the other systems could be used under new circumstances. The architecture Leidos is developing will allow customers and vendors to build new sensors and other components that can immediately become part of a system capable of adaptive, autonomous operation. That means that when a new sensor is installed, the architecture will immediately recognize its performance capabilities and modes and apply the logic needed to take the best advantage of that sensor in different situations.
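A minimal sketch of that registration idea follows: a newly installed sensor publishes a machine-readable description of its modes, and the mission logic discovers and uses them at run time rather than being hard-coded for each vendor. The descriptor format and field names are assumptions for illustration, not an OMS or Leidos schema.

```python
import json

# Illustrative "plug-and-play" registration sketch; the schema is hypothetical.
NEW_SENSOR_DESCRIPTOR = json.dumps({
    "name": "vendor_x_sar",
    "modes": [
        {"id": "spotlight", "resolution_m": 0.3, "max_range_km": 60},
        {"id": "stripmap",  "resolution_m": 1.0, "max_range_km": 80},
    ],
})

class SensorRegistry:
    def __init__(self):
        self.sensors = {}

    def register(self, descriptor_json):
        # Accept any sensor that describes itself in the shared format.
        descriptor = json.loads(descriptor_json)
        self.sensors[descriptor["name"]] = descriptor["modes"]

    def modes_meeting(self, needed_resolution_m):
        """Find every registered mode that satisfies a resolution need."""
        return [
            (name, mode["id"])
            for name, modes in self.sensors.items()
            for mode in modes
            if mode["resolution_m"] <= needed_resolution_m
        ]

registry = SensorRegistry()
registry.register(NEW_SENSOR_DESCRIPTOR)
print(registry.modes_meeting(needed_resolution_m=0.5))  # [('vendor_x_sar', 'spotlight')]
```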
What’s more, says Patton, the architecture can mix and match different sensors in different modes across different team members to meet mission needs on the fly. “Identifying a target can be like a puzzle, and getting one type of data from one sensor might not be enough,” he explains. “You want to use different sensors in different ways to give you enough pieces to solve the puzzle.” He adds that the Leidos architecture will provide for that sort of “collaborative autonomy,” including taking any available human input into account as part of the collaboration.
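The “puzzle” metaphor can be sketched numerically: no single sensor is conclusive on its own, but independent pieces of evidence combined raise confidence in an identification. The sensors, confidence values, and threshold below are illustrative assumptions, not real parameters, and the independence assumption is a simplification.

```python
# Hypothetical evidence-fusion sketch: combine independent detection confidences.
def fused_confidence(pieces):
    """P(at least one piece of evidence is correct), assuming independence."""
    miss_probability = 1.0
    for confidence in pieces.values():
        miss_probability *= (1.0 - confidence)
    return 1.0 - miss_probability

evidence = {
    "radar_track_shape": 0.6,   # track kinematics consistent with the target class
    "eo_silhouette":     0.5,   # partial visual match
    "rf_emission":       0.4,   # emitter type associated with the target
}

confidence = fused_confidence(evidence)
print(f"fused confidence: {confidence:.2f}")   # ~0.88, versus 0.6 for radar alone
if confidence > 0.85:
    print("identification threshold met; cue the operator for confirmation")
```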
A big advantage of this approach is that it makes it easier for third parties to contribute new sensors and other innovative technology by meeting the interface standards the architecture will specify and providing performance models that describe their technology’s capabilities. That’s a shift from conventional component development, which tends to produce standalone systems that are harder to integrate into an environment where all the different pieces interoperate.
A better way to upgrade
Besides enabling faster sensor deployment and more shared capabilities, the architecture makes it less costly to upgrade components and take advantage of rapid technology advances. “It’s much easier to upgrade when there’s a common foundation to build off,” says Patton. What’s more, he notes, the architecture would allow customers to deploy sensor upgrades without depending on technology providers to obtain recertification of the aircraft, because the upgrade wouldn’t necessarily involve modifying the aircraft itself, only plugging new sensors into the architecture.
Another benefit is that it flips the conventional model of relying on an aircraft manufacturer as the prime contractor on major projects. Instead, programs can be built around a software architecture that brings in the aircraft and all other components as subsystems that plug into it. “It takes the emphasis off of individual aircraft and shifts it to a software- and mission-centric approach that offers better integration, lower costs, and faster upgrades,” says Patton.
Ultimately, the biggest payoff of the Leidos architecture approach would be giving the DoD a means to avoid the capability gaps and obsolescence that can result from the long lead times and high costs of conventional development approaches. “This sort of architecture is the best and possibly the only way to keep pace with adversaries’ new capabilities and emerging threats,” says Patton.
For more information on our innovations with sensor technology, please visit leidos.com/on-a-mission/sensors or contact us.