Will AI steal our jobs?
Should we fear the automation we also desire? It would be comforting to know a computer couldn't do our jobs, yet we keep building computers that can. With emerging AI, we're discovering that many cognitive functions we once considered exclusively human, from emotional intelligence and interpersonal skill to creativity, problem solving, and critical thinking, are not. Unnerving as they may be to some, these advances will reshape the labor market and change the way we work. According to a report from the Brookings Institution, roughly 36 million Americans hold jobs that are already highly exposed to automation.
This isn't necessarily cause for alarm, according to many experts, including Dr. Alric Althoff, a senior research scientist at Leidos, who believes that, adopted responsibly, AI-powered automation will create economic opportunity rather than diminish it. The transition may require significant reskilling, but, as in previous technological revolutions, he believes we will lean on our adaptability as a species. To learn more, we welcome Dr. Althoff, who previously shared his thoughts on how AI improves the way we develop computer systems.
Q: Let’s start with the big question. Should we be concerned about AI taking our jobs?
Dr. Althoff: We can't say for sure that the things we get paid to do now are safe from automation. But even though automation will have a major impact on the workforce, that evolution will unfold in ways we don't expect. People will still have jobs; the jobs will just look different. We don't usually predict these things well, and it's easy to take a pessimistic view of change. But history gives us reason to be optimistic.
With every technological revolution, the planet's population has increased. The number of available jobs has grown, and so has the range of things there are to do with the new technologies. Even if we don't know exactly what the future will look like, the message is to be adaptable. Lucky for us, adaptability is something human beings have in spades. There's really been no change that we haven't somehow adapted to.
"Even if we don't know exactly what the future will look like, the message is to be adaptable." (Dr. Alric Althoff, Leidos Research Scientist)
Q: To what extent have we already given ourselves over to automation?
Dr. Althoff: We already rely on automation everywhere. Humans don’t make light bulbs or ship packages without automated tools. These things used to seem like incredible innovations, but they’re now ordinary and expected. Today, we’re beginning to see AI algorithms do things we’re used to thinking of as human activity. There’s some anxiety around that. What if we discover the things we used to think of as human are really not unique? Does it mean that we continue to build tools to assist ourselves until we transform ourselves into something completely different? That’s where the anxiety comes in for many.
Music, for example, is something we consider so human, but we’ve largely already given ourselves over to machines to produce a track. We used to think of image recognition as a human task, but the algorithms that do this are actually much simpler than we thought they needed to be. Another example is building computer chips, which is a process dominated by algorithms. There’s a popular and justified belief that no one human being knows how a computer chip is put together from top to bottom. We might understand the algorithms behind it, but it would be a stretch to say there’s one person who actually has a complete understanding of every step.
Q: What’s the most impressive way you’ve seen machines taking on human tasks?
Dr. Althoff: I've worked on a project with someone involved in program synthesis, which is basically where you define the goals of an algorithm and it writes the code for you. It's not yet a fast or mature technology, but the reason it impresses me is that we think of coding as a high-tech job that won't be automated anytime soon. Yet the other day I saw code I would consider reasonably complicated written entirely by a machine, using only a set of tests the program had to pass.
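To make that idea concrete, here is a minimal sketch of test-driven program synthesis. It is not Dr. Althoff's project, just a toy illustration in Python with hypothetical names: the synthesizer enumerates expressions from a tiny grammar and returns the first one that passes every test.

```python
# Toy program synthesizer: enumerate expressions over a tiny grammar and
# return the first one whose behavior matches every (input, output) test.
# Real systems prune the search far more aggressively; this is only a sketch.
import itertools

LEAVES = ["x", "1", "2"]   # terminal symbols of the grammar
OPS = ["+", "-", "*"]      # binary operators

def expressions(depth):
    """Yield every expression (as a string) of at most the given depth."""
    if depth == 0:
        yield from LEAVES
        return
    yield from expressions(depth - 1)
    for op in OPS:
        for left, right in itertools.product(list(expressions(depth - 1)), repeat=2):
            yield f"({left} {op} {right})"

def synthesize(tests, max_depth=2):
    """Return the first expression satisfying all tests, or None."""
    for depth in range(max_depth + 1):
        for expr in expressions(depth):
            # eval is acceptable here because we generated expr ourselves;
            # never do this with untrusted input.
            if all(eval(expr, {"x": x}) == y for x, y in tests):
                return expr
    return None

# The "specification" is nothing but tests: f(2) == 5, f(3) == 7, f(4) == 9.
print(synthesize([(2, 5), (3, 7), (4, 9)]))  # finds an expression equivalent to 2*x + 1
```

Engineering-grade synthesizers layer types, pruning, and constraint solvers on top of this idea, but the core loop is the same: the tests are the specification, and the machine writes the code.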
Q: What big conversations should we have as we hand over more and more responsibility to machines?
Dr. Althoff: We’re at a point, much like we were in the Industrial Revolution, where we can make decisions about the consequences of our actions. In the Industrial Revolution, we could have said, “Wow, there’s all this smoke in the air. We’re generating a lot of pollutants. This isn’t great for the environment. We should probably change. We should probably not do this.” That discussion could have started 100 years ago and prevented a lot of the global warming trends we’re seeing now.
There's an onus on us to ask these kinds of questions about the AI revolution. We have to decide, as a society and globally, what our responsibility is. We need to decide what we want to automate and what we don't. We have to carve out roles for humanity in the future we're building, because underneath all of this is economic pressure based on demand. It's human demand, so we have to decide where we're willing to draw the line between our desires and reality. This is the big challenge, but if we all start to really understand the connection between our decisions and their consequences, we can make better decisions.
We also need to ask ourselves important questions about security and trust. These become major concerns when we remove ourselves from critical decision-making processes and instead trust algorithms trained on data. The more trust we place in automated systems to do things for us, the more openings there are for nefarious actors to inject slight variations into those processes or try to influence them. Preventing this sort of exploitation is really the crux of where counter-adversarial AI is today. So we're working that angle, but we also need to take responsibility and modulate our demands based on an understanding of how robust the state of the art really is.
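As a hedged illustration of those "slight variations" (a toy example constructed for this article, not anything from Leidos's counter-adversarial work), the sketch below shows how a perturbation that is small per coordinate, but aligned against a linear model's weights, can flip the model's decision:

```python
# Toy adversarial perturbation against a linear classifier: nudging each
# input coordinate slightly in the direction that lowers the model's score
# flips the prediction, even though the input barely changes. This is the
# intuition behind gradient-sign attacks on learned models.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy linear classifier
x = rng.normal(size=100)   # a legitimate input

def predict(v):
    """Classify by the sign of the linear score w . v."""
    return 1 if w @ v > 0 else -1

label = predict(x)

# Move each coordinate a small step (0.3, versus typical coordinate
# magnitudes around 1.0) against the sign of the weight vector.
epsilon = 0.3
x_adv = x - label * epsilon * np.sign(w)

print("original prediction: ", label)
print("perturbed prediction:", predict(x_adv))              # almost always flipped
print("max coordinate change:", np.max(np.abs(x_adv - x)))  # equals epsilon
```

Knowing how much perturbation a deployed system can tolerate before its decisions flip is exactly the robustness question Dr. Althoff raises.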