How does AI improve the way we develop computer systems?
The race to develop faster and more sophisticated computer applications is accelerating as AI and machine learning (ML) techniques continue to mature. While classical computers are capable of analyzing and processing certain types of data sets much faster than humans, the introduction of AI-ML extends research capabilities even further through simulated thinking. We talked with Dr. Alric Althoff, Leidos Research Scientist, who described how processes for developing computer systems are being expedited as smarter algorithms are combined with traditional research methods.
Q: How can AI-ML be applied when developing the next generation of computer systems?
Alric: Companies are developing new processors all the time, but integration with existing systems and components is tricky. A lot of the integration time is spent ensuring that the applications a user cares most about will run fast enough and use a reasonable amount of power. To find a good final design, engineers test various design configurations and record their effects on the final results.
Now we’re searching for good designs much more efficiently by applying AI algorithms to this testing process. Basically, a human can set up the algorithms to explore the possibilities, and the algorithm can run without human intervention to find good designs and hardware/software configurations. In practice, AI-ML can identify good configurations faster than a human can, and it allows non-experts to design better systems more easily.
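The loop Alric describes can be sketched in a few lines. This is a toy illustration, not a real design flow: the design space, the knob names, and the `evaluate` cost model are all invented for the example, and a real flow would call a simulator or synthesis tool instead of a formula.

```python
import random

# Hypothetical design space: each knob is a configuration parameter an
# engineer would otherwise sweep by hand.
DESIGN_SPACE = {
    "cache_kb": [256, 512, 1024, 2048],
    "clock_mhz": [800, 1200, 1600],
    "num_cores": [2, 4, 8],
}

def evaluate(config):
    """Stand-in for running the target application on a candidate design.
    Returns (runtime, power); a real flow would invoke a simulator here."""
    runtime = 1e6 / (config["clock_mhz"] * config["num_cores"])
    power = (0.001 * config["clock_mhz"] * config["num_cores"]
             + 0.01 * config["cache_kb"])
    return runtime, power

def search(budget=50, max_power=25.0, seed=0):
    """Random search: sample configurations without human intervention and
    keep the fastest one that meets the power budget."""
    rng = random.Random(seed)
    best = None
    for _ in range(budget):
        config = {knob: rng.choice(values) for knob, values in DESIGN_SPACE.items()}
        runtime, power = evaluate(config)
        if power <= max_power and (best is None or runtime < best[0]):
            best = (runtime, config)
    return best
```

Random sampling is the simplest possible "explore the possibilities" strategy; the AI-ML methods discussed later in the interview replace it with algorithms that learn from earlier evaluations.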
Q: What are the biggest challenges with using AI-ML in systems development today?
Alric: Many of today’s challenges stem from the fact that AI-ML has received so much attention in certain domains while being relatively ignored in others. Due to the success of AI-ML in domains such as computer vision, there's a perception that AI-ML is ready to be integrated into areas where it hasn't gotten as much attention, regardless of the technical limitations that are still being addressed and overcome. So today, while ML methods are starting to become more mature, there's a lack of support in the workflow—particularly in the way that AI-ML algorithms interact with the tools that people already use. Often the algorithms don't behave well when you're trying to integrate ML into new processes or systems. So that is one of the biggest challenges in systems development—getting the workflow to play well with the ML algorithms.
Q: What are the benefits of machine intelligence in developing systems? In what ways is it superior to human intelligence?
Alric: For humans, there’s a lot of complexity that needs to be managed in developing systems. A human might use a huge amount of time and energy on these things, but the machine has internal models that just keep track of that stuff. So where a human would lose sight of certain requirements, a machine can hold all of that material simultaneously in its view.
The advantage that a machine has over a human being in this situation is that ML can view millions of results and extract information from those results in order to make its next decision. In contrast, for humans, repeated experiments result in some fuzzy intuition about the nature of the problem. So a machine can build up better priors over the loss function we're trying to optimize, even though we may not be able to state the nature of the problem from the outset.
There are specific methods in the field that allow us to leverage this machine equivalent of human intuition. This includes things like Bayesian optimization and reinforcement learning: there's a feedback loop of experimentation and observation, with the ML tool or algorithm adapting at run time to the loss-function values it has observed. Glossing over the details, the takeaway is that Bayesian optimization can achieve similar results with exponentially fewer samples than traditional supervised learning techniques. There are theorems that back that up, and it also aligns with our experience working with these algorithms. So applications where each evaluation of the loss function is very expensive will benefit from this approach.
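The experiment-observe-adapt loop can be sketched concretely. This is a deliberately crude stand-in, not a real Bayesian optimizer: it replaces the Gaussian-process surrogate and acquisition function with a nearest-neighbor prediction plus a distance-based uncertainty bonus, and the `expensive_loss` function is invented for illustration. The shape of the loop, however, is the point: each expensive evaluation updates the model that chooses the next experiment.

```python
import math
import random

def expensive_loss(x):
    """Stand-in for an expensive evaluation, e.g. synthesizing and
    benchmarking one hardware configuration."""
    return (x - 0.7) ** 2 + 0.05 * math.sin(20 * x)

def suggest(observed, candidates, kappa=1.0):
    """Pick the candidate minimizing a crude lower-confidence bound:
    the loss at the nearest observed point, minus kappa times the
    distance to it (distance stands in for model uncertainty)."""
    def lcb(x):
        dist, y = min((abs(x - xo), yo) for xo, yo in observed)
        return y - kappa * dist
    return min(candidates, key=lcb)

def optimize(budget=15, seed=0):
    """Feedback loop: experiment, observe, adapt, repeat."""
    rng = random.Random(seed)
    candidates = [i / 200 for i in range(201)]  # grid over [0, 1]
    x0 = rng.choice(candidates)
    observed = [(x0, expensive_loss(x0))]       # first experiment
    for _ in range(budget - 1):
        x = suggest(observed, candidates)       # adapt: pick next trial
        observed.append((x, expensive_loss(x))) # observe its loss
    return min(observed, key=lambda point: point[1])
```

With only `budget` evaluations, the loop concentrates trials where the surrogate predicts low loss or high uncertainty, which is the sample-efficiency argument Alric is making; a production implementation would use a library with a proper Gaussian-process surrogate rather than this sketch.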
Q: What domain interests you most?
Alric: Counteradversarial AI-ML is a really interesting domain. I have a background in statistical hardware security, and I often feel that the data science community treats threat detection, mitigation, and response more as if it were a data science problem than a security problem. By that I mean: collecting data, analyzing the data, designing algorithms that perform well given the data, and then publishing papers on the results. But on the security side, we know that you need to have a grasp of the underlying principles and design mitigations.
I think that we are going to see security professionals focus on the hardness of the core problems of counteradversarial machine learning. They will be asking how strong the emphasis should be on a stringent analytical approach, in terms of safety and actual effectiveness. That is a big challenge, and I think that people are beginning to realize it. It's similar to the change we're seeing across the machine learning research community as people move toward 'de-biasing' algorithms, for example—toward robust algorithms in contexts where correctness really matters.
Q: What personally motivates your work in this field?
Alric: As human beings we observe and we interact with the world, and it informs us as to how we're doing. So we build up a worldview formed of a lot of little anecdotes and vignettes from our lives. Taking a scientific outlook on the world is rarely a natural process; instead, we rely on a lot of assumptions. I think that's the major motivation for me. In the context of Bayesian optimization work, repeated trials contribute knowledge based on experiments rather than assuming a lot of prior information about the decision space. You can assume, but assumptions—and this is my personal feeling—don't really tend to serve human beings well at large scales. So doing this sort of iterative fine-tuning through automation is essentially applying the scientific method to data collection and experimental design. I think that is an important view to hang on to. It's also important for the future of AI-ML, particularly as we move toward a world where we rely more and more on algorithms to make decisions for us, to be aware of under-sampling, uncertainty, and the danger of applying models to regions where they do not generalize.