How can AI help anticipate human behavior?
We generate several quintillion bytes of data every day, and almost all of it describes our activity in the world and how we interact with each other. That volume keeps expanding as more of our connecting, working, and playing moves online. With so much of daily life and social interaction mediated by computers, you might think it would be easy for AI to predict human behavior. You might be surprised: according to Dr. Jonathan Pfautz, even our most advanced AI still struggles to connect causes and effects in human behavior. How can AI make those connections more visible? To learn more, we welcomed Dr. Pfautz, Chief AI Scientist at Leidos and a former program manager at DARPA. In a recent episode of the Voices from DARPA podcast, he shared his perspective on building computer models of humans to help understand what we do and why we do it. That work led to the creation of a DARPA program called SocialSim, which has relied on the talents of scientists and engineers at Leidos.
Q: Why is it so hard to predict human behavior?
Dr. Pfautz: The science around human behavior is still new, compared to the study of the physical world, where there’s a lot less controversy about how to “do good science.” We use telescopes to study the sky, and microscopes to study cells, but we don’t have the same kind of reliable, calibrated instruments for understanding how humans do what they do.
"AI can help us understand not just how people act, but also why." – Dr. Jonathan Pfautz, Chief AI Scientist
Q: Why has behavioral science lagged behind?
Dr. Pfautz: Anticipating many types of human behavior is what's called a "wicked problem." These problems are complex and hard to do good science on, and even with all this new data, we're dealing with uncertain and evolving situations all the time. People change, technologies change, and the world throws us curveballs. A success in using AI to anticipate an event today might or might not work in two hours, two days, two weeks, two months, or two years. How definable is human behavior anyway? How stable is it? The complicated nature of these questions makes it extremely difficult to create a valid model that could help anticipate future human behavior.
Even with new, large data sets, social and behavioral research still struggles to connect causes and effects. If you rewind behavioral science fifteen years, you couldn't have predicted that everyone would be carrying a smartphone today, or how that's changing behavior. New technologies, like world events, dramatically change how we behave – whether it's how we drive, walk down the street, or interact socially.
Q: How do you see AI beginning to accelerate the science of human behavior?
Dr. Pfautz: AI techniques help in processing huge amounts of data – think billions of tweets, blog posts, and web pages – to form models of "what matters," or "features." Companies already use this data to help anticipate and resolve traffic jams, share relevant local news, and target advertising. Newer techniques such as "transfer learning" – can we use what a model learned on one problem to better understand another? – are among many AI approaches that acknowledge that the problem of understanding and anticipating human behavior from online communications is going to keep changing.
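To make the transfer-learning idea concrete, here is a minimal sketch in Python with scikit-learn. It uses a simple warm-start stand-in for the broader technique: a text classifier learns from an older batch of posts, then continues training on a small newer batch instead of starting over. The posts, labels, and the traffic-related task are hypothetical illustration values, not data from any real program.

```python
# A minimal warm-start sketch of the "transfer learning" idea: reuse what a
# model learned from one stream of online text to get a head start on newer
# data. Posts, labels, and the traffic-related task are hypothetical.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**18)  # fixed feature space shared across batches
model = SGDClassifier()

# Phase 1: learn "what matters" from an older collection of posts.
old_posts = ["traffic jam on highway 5", "road closed downtown", "concert tickets on sale"]
old_labels = [1, 1, 0]  # hypothetical: 1 = traffic-related, 0 = not
model.partial_fit(vectorizer.transform(old_posts), old_labels, classes=[0, 1])

# Phase 2: the world changes; continue training the same model on a small
# batch of newer posts instead of starting from scratch.
new_posts = ["rideshare surge pricing near stadium", "new bike lane opened"]
new_labels = [1, 0]
model.partial_fit(vectorizer.transform(new_posts), new_labels)

print(model.predict(vectorizer.transform(["gridlock near the stadium exit"])))
```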
Q: How do you see this progress improving society?
Dr. Pfautz: The opportunity for improving the human condition has never been more apparent, from encouraging global conversations on climate change to rapid responses to humanitarian crises. One example is the ability to look at data and figure out how to "nudge" people toward the outcomes that benefit them. "Nudge science" uses really simple interventions to shape behavior in positive ways. The classic example is helping users on a government website answer the question, "How much money do I want to save for retirement?" If the default option is the one with the best outcome, people will pick the default more often than anything else. We gravitate toward certain behaviors in certain situations, especially in these computer-mediated situations. AI helps us run those experiments on large numbers of people to nudge them toward doing what's best for them.
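As a toy illustration of the default effect, the Monte Carlo sketch below compares hypothetical enrollment rates when the beneficial option is, or isn't, pre-selected. The 80% keep-the-default rate and the even split among active choosers are made-up assumptions, not measured behavioral data.

```python
# Toy Monte Carlo illustration of the default-option "nudge." All rates
# here are made-up assumptions, not measured behavioral data.
import random

random.seed(42)

def enrolls(beneficial_is_default: bool) -> bool:
    # Hypothetical assumption: 80% of people keep whatever is pre-selected.
    if random.random() < 0.8:
        return beneficial_is_default
    return random.random() < 0.5  # active choosers split evenly

n = 10_000
with_default = sum(enrolls(True) for _ in range(n)) / n
without_default = sum(enrolls(False) for _ in range(n)) / n
print(f"Enrollment when the beneficial plan is the default: {with_default:.0%}")
print(f"Enrollment when it is not: {without_default:.0%}")
```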
Take, for example, a program I ran at DARPA called "Warfighter Analytics using Smartphones for Health," or WASH. Your average soldier sees a doctor at most once a year. How do we help them when there's no prompt to say, "Hey, just go see a doctor"? Well, maybe the general patterns of behavior captured by the smartphone you carry around all the time can warn you: "Get a check-up."
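As a hedged sketch of the WASH idea, the snippet below watches one passively collected smartphone signal (daily step counts) for a sustained drop from a personal baseline. The step counts and the two-standard-deviation threshold are hypothetical; the real program's signals and models are far richer.

```python
# Toy sketch of the WASH idea: watch a passively collected smartphone signal
# for a sustained drop from a personal baseline. Step counts and the
# 2-standard-deviation threshold are hypothetical.
import statistics

baseline_daily_steps = [9800, 10250, 9400, 11020, 9900, 10480, 9700]  # hypothetical history
mean = statistics.mean(baseline_daily_steps)
stdev = statistics.stdev(baseline_daily_steps)

recent_daily_steps = [6100, 5800, 6350]  # hypothetical recent readings

# Flag only a sustained change: every recent day well below the baseline.
if all(day < mean - 2 * stdev for day in recent_daily_steps):
    print("Sustained change in activity detected - consider a check-up.")
```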
Q: Conversely, how is it dangerous?
Dr. Pfautz: AI is a game-changer, for better or worse. The challenge comes when it starts to touch our daily lives. Massive amounts of data about our activities are collected and traded as a commodity, and in the US there is a wealth of private information out there. That means that if this data is useful to us, our adversaries can learn from it too. They can learn a lot about our behavior, creating opportunities for foreign influence. My existential fear is that, with increased AI capabilities, foreign governments that don't share our values about freedom of speech and the right to privacy will be able to understand human behavior better than anybody else. And so it's no longer an arms race; it's a behavioral science race.
Q: What sort of work is Leidos doing to help our customers navigate these changes?
Dr. Pfautz: A few years ago, I started a program at DARPA called "Computational Simulation of Online Social Behavior," or SocialSim. This program focuses on building computer models of how communications play out online. These models simulate what might happen when information or misinformation is shared across a complex network of people. Leidos has been a key contributor to this program, providing data sets from multiple social media sources and supporting the testing and evaluation of the simulations – to see whether we can accurately anticipate how false information might spread online. Other teams on the program have applied AI techniques to create simulations ranging from the spread of information about malware to the spread of misinformation about well-intentioned volunteer organizations in Syria.
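As a rough illustration of the kind of model such simulations build, here is a toy independent-cascade sketch in Python using networkx: each newly exposed user gets one chance to pass the item to each neighbor. The synthetic graph, the seed accounts, and the 15% re-share probability are hypothetical illustration values, not parameters from the actual program.

```python
# Toy independent-cascade model of how an item of (mis)information might
# spread over a social graph. The synthetic graph, seed accounts, and 15%
# re-share probability are hypothetical, not SocialSim parameters.
import random
import networkx as nx

random.seed(0)
graph = nx.barabasi_albert_graph(n=1000, m=3)  # scale-free stand-in for a social network
share_prob = 0.15                              # chance a neighbor re-shares

exposed = {0, 1}            # hypothetical seed accounts that post the item
frontier = list(exposed)
while frontier:
    next_frontier = []
    for user in frontier:
        for neighbor in graph.neighbors(user):
            if neighbor not in exposed and random.random() < share_prob:
                exposed.add(neighbor)
                next_frontier.append(neighbor)
    frontier = next_frontier

print(f"{len(exposed)} of {graph.number_of_nodes()} users eventually saw the item")
```

Running many such simulations under different assumptions is what lets evaluators compare a model's forecasts against how information actually spread online.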