Can we prevent another AI winter?
The conversation about AI mostly centers on its future, but today we’re exploring the lessons of its past. AI history teaches us that sustained progress is not a given. It’s exciting today, but excitement wears off. At least twice before, the field has experienced periods of decline so harsh they’re commonly referred to as “AI winters.” Why did this happen, and how can we prevent the next one? To learn more, we welcome AI Director Ron Keesing, who previously shared how AI helps us fight cancer. At next month’s CES Government 2020, Ron will moderate a panel of senior government and industry executives exploring lessons learned from previous AI winters and how to promote sustainable progress in the field.
Q: If AI progress occurs in seasons, where are we now?
Keesing: It’s certainly clear that we’re in the middle of an AI spring, or perhaps even an AI summer. We’re making a lot of discoveries and there’s a lot of excitement in the field. However, I also see growing irrational exuberance. For every successful AI project, many others fail. I think there’s been too much hyperbole and too many overoptimistic promises. In fact, there are a lot of things AI won’t achieve until we solve some huge challenges related to artificial general intelligence, or AGI, which is a topic for another day. Let’s just say our current systems have a lot of limitations, and there are many aspects of them we don’t even fully understand.
Q: Where does this irrational exuberance come from?
Keesing: When it comes to AI, we seem to have a hard time separating hype from reality. It captures our imaginations because it blurs the line between human and machine intelligence. But too often it’s portrayed as a magic wand that can accomplish anything. The conversation about AI should be a positive one, but also a clear one. AI isn’t easy, and getting value from it looks different in every situation. In fact, research by Gartner shows the vast majority of AI projects will likely fail to deliver. When AI doesn’t live up to its hype, it can lead to widespread disillusionment. When that happens, funding dries up and R&D stalls. The previous two AI winters lasted for many years.
Q: Were they as harsh as the term “AI winter” implies?
Keesing: It’s mostly just a term used to make the point that we’ve been through these downturns before, and they should serve as a cautionary tale. What’s more important is that we’re able to separate hype from reality, and the reality is that building AI that delivers sustained value is not easy. It’s not a given that we’ll make good on all of the promises out there. The message is not to get caught up in the hype, but to navigate this with sobriety.
Q: How can the government and tech community help prevent another AI winter?
Keesing: By proactively addressing the issues that most often cause AI projects to fail. I see at least three major challenges. First, we need a clear-eyed view of what’s possible. I always make the point that AI can solve certain problems really, really well. But we’re far from achieving AGI, or even from building systems that can adapt to new domains and new situations. So it’s really important to invest in AI projects that are actually supported by the technology and not just by hype.
Second, we need to put strong policy in place to govern the use of AI. We need to ensure it’s fair, transparent, resilient, and secure. Right now, many of our AI tools are black boxes, which means they make decisions we can’t understand, scrutinize, or protect. And once we have the right policy, we also need technology that bridges the gap between today’s black-box AI and our policy objectives. We and many others are working hard to build solutions that bridge that gap, but in many cases we don’t yet have them. There’s a real risk we won’t be able to build AI that’s consistent with the policy we’re creating, and that could lead to another AI winter.
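The interview doesn’t name specific techniques for opening up black-box models, but one widely used, model-agnostic example is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, revealing which features actually drive its decisions. The sketch below uses scikit-learn; the random-forest model, the bundled breast-cancer dataset, and the settings are illustrative assumptions, not anything from Keesing’s remarks.

```python
# A minimal sketch of one "gap-bridging" transparency technique:
# permutation importance probes an opaque model from the outside,
# without needing access to its internals.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in data and model; any fitted estimator would work here.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the accuracy drop:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]:<25} importance={result.importances_mean[i]:.3f}")
```

Global feature rankings like this don’t make a model fully scrutable, but they give auditors a concrete starting point for checking whether its decisions rest on legitimate signals.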
Third, building AI systems requires planning for a very different technology life cycle, and it’s important to make smart investments that account for that life cycle. A lot of AI projects start with somebody doing a proof of concept, but there’s a lot more to it than doing data science in a lab. There’s actually a huge step to turn that initial demonstration into a working system that’s connected to real data. And once that AI system is deployed, it needs to be updated over its lifetime as the data and the environment change. Models, data, and tools need to be updated continuously, or the system builds up what’s known as hidden technical debt. When that happens, your AI system stops delivering value and actually becomes an expensive liability.
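To make the life-cycle point concrete: one common way deployed systems catch the quiet degradation Keesing describes is to monitor live inputs against the training distribution and flag drift before it accumulates as technical debt. Below is a minimal sketch of that idea using a two-sample Kolmogorov–Smirnov test; the synthetic data, the significance threshold, and the “schedule retraining” action are illustrative assumptions, not part of the interview.

```python
# A minimal sketch of post-deployment drift monitoring: compare the live
# distribution of a feature against what the model was trained on.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_col: np.ndarray, live_col: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Two-sample Kolmogorov-Smirnov test: True if the live feature
    distribution differs significantly from the training distribution."""
    result = ks_2samp(train_col, live_col)
    return result.pvalue < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # what the model saw in training
live = rng.normal(loc=0.4, scale=1.0, size=1_000)   # the environment has shifted

if feature_has_drifted(train, live):
    print("Drift detected: schedule model review and retraining")  # hypothetical action
else:
    print("No significant drift detected")
```

In practice a check like this would run per feature on a schedule, and a drift alarm would trigger the kind of continuous model, data, and tooling updates the interview calls for.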