distant reach of the Milky Way. It requires much more complex decision making than chess. There are battles to manage, infrastructure to plan, spying to do, geography to explore, and resources to collect, all of which inform one another. Computers struggled to win at StarCraft, Julian Togelius, an NYU professor who studies gaming AI, told me in 2017. Even when they did beat humans in individual games, human players adjusted with “long-term adaptive strategy” and started winning. “There are so many layers of thinking,” he said. “We humans sort of suck at all of them individually, but we have some kind of very approximate idea about each of them and can combine them and be somewhat adaptive. That seems to be what the trick is.”
In 2019, in a limited version of StarCraft, AI beat a pro for the first time. (The pro adapted and earned a win after a string of losses.) But the game’s strategic complexity provides a lesson: the bigger the picture, the more unique the potential human contribution. Our greatest strength is the exact opposite of narrow specialization. It is the ability to integrate broadly. According to Gary Marcus, a psychology and neural science professor who sold his machine learning company to Uber, “In narrow enough worlds, humans may not have much to contribute much longer. In more open-ended games, I think they certainly will. Not just games, in open-ended real-world problems we’re still crushing the machines.”
The progress of AI in the closed and orderly world of chess, with instant feedback and bottomless data, has been exponential. In the rule-bound but messier world of driving, AI has made tremendous progress, but challenges remain. In a truly open-world problem devoid of rigid rules and reams of perfect historical data, AI has been disastrous. IBM’s Watson destroyed at Jeopardy! and was subsequently pitched as a revolution in cancer care, where it flopped so spectacularly that several AI experts told me they worried its reputation would taint AI research in health-related fields. As one oncologist put it, “The difference between winning at Jeopardy! and curing all cancer is that we know the answer to Jeopardy! questions.” With cancer, we’re still working on posing the right questions in the first place.
In 2009, a report in the esteemed journal Nature announced that Google Flu Trends could use search query patterns to predict the winter spread of flu more rapidly than and just as accurately as the Centers for Disease Control and Prevention. But Google Flu Trends soon got shakier, and in the winter of 2013 it predicted more than double the prevalence of flu that actually occurred in the United States. Today, Google Flu Trends is no longer publishing estimates, and just has a holding page saying that “it is still early days” for this kind of forecasting. Tellingly, Marcus gave me this analogy for the current limits of expert machines: “AI systems are like savants.” They need stable structures and narrow worlds.
When we know the rules and answers, and they don’t change over time—chess, golf, playing classical music—an argument can be made for savant-like hyperspecialized practice from day one. But those are poor models of most things humans want to learn.
When narrow specialization is combined with an unkind domain, the human tendency to rely on experience of familiar patterns can backfire horribly—like the expert firefighters who suddenly make poor choices when faced with a fire in an unfamiliar structure. Chris Argyris, who helped create the Yale School of Management, noted the danger of treating the wicked world as if it is kind. He studied high-powered consultants from top business schools for fifteen years, and saw that they did really well on business school problems that were well defined and quickly assessed. But they employed what Argyris called single-loop learning, the kind that favors the first familiar solution that comes to mind. Whenever those solutions went wrong, the consultants usually got defensive. Argyris found their “brittle personalities” particularly surprising given that “the essence of their job is to teach others how to do things differently.”
Psychologist Barry Schwartz demonstrated a similar learned inflexibility among experienced practitioners when he gave college students a logic puzzle that involved hitting switches to turn light bulbs on and off in sequence, and that they could play over and over. It could be solved in seventy different ways, with a tiny money reward for each success. The students were not given any rules, and so had to proceed by trial and error.* If a student found a