BY DAN SHIPPER JANUARY 19, 2024
Time isn’t as linear as you think. It has ripples and folds like smooth silk. It doubles back on itself, and if you know where to look, you can catch the future shimmering in the present.
(This is what people don’t understand about visionaries: They don’t need to predict the future. They learn to snatch it out of the folds of time and wear it around their bodies like a flowing cloak.)
I think I caught a tiny piece of the future recently, and I want to tell you about it.
Last week I wrote about how ChatGPT changed my conception of intelligence and the way I see the world. I’ve started to see ChatGPT as a summarizer of human knowledge, and once I made that connection, I started to see summarizing everywhere: in the code I write (summaries of what’s on StackOverflow), in the emails I send (summaries of meetings I had), and in the articles I write (summaries of books I read).
Summarizing used to be a skill I needed to have, and a valuable one at that. But before, it was mostly invisible, bundled into an amorphous set of tasks that I’d called “intelligence”—things that only I and other humans could do. Now that I can use ChatGPT for summarizing, I’ve carved that task out of my skill set and handed it over to AI. My intelligence is now the thing that directs or edits the summarizing, rather than doing the summarizing itself.
As Every’s Evan Armstrong argued several months ago, “AI is an abstraction layer over lower-level thinking.” That lower-level thinking is, largely, summarizing.
If I’m using ChatGPT in this way today, there’s a good chance this behavior—handing off summarizing to AI—is going to become widespread in the future. That could have a significant impact on the economy.
This is what I mean by catching the future in the present and the non-linearity of time. If we extrapolate my experience with ChatGPT, we can glean what the next few years of our work lives might look like.
The end of the knowledge economy
We live in a knowledge economy. What you know—and your ability to bring it to bear in any given circumstance—is what creates economic value for you. That economy was built primarily on the advent of personal computers and the internet, starting in the 1970s and accelerating through today.
But what happens when that very skill—knowing and utilizing the right knowledge at the right time—becomes something that computers can do faster and sometimes just as well as we can?
We’ll go from makers to managers, from doing the work to learning how to allocate resources—choosing which work should be done, deciding whether the work is good enough, and editing it when it’s not.
It means a transition from a knowledge economy to an allocation economy. You won’t be judged on how much you know, but instead on how well you can allocate and manage the resources needed to get work done.
There’s already a class of people who are engaged in this kind of work every day: managers. But managers make up only a small fraction—about 12%—of the U.S. workforce. They need to know things like how to evaluate talent, manage without micromanaging, and estimate how long a project will take. Individual contributors—the people in the rest of the economy, who do the actual work—don’t need those skills today.
But in this new economy, the allocation economy, they will. Even junior employees will be expected to use AI, which will force them into the role of manager—model manager. Instead of managing humans, they’ll be allocating work to AI models and making sure the work gets done well. They’ll need many of the same skills that human managers have today (though in slightly modified form).
From maker to manager
Here are a few qualities that managers of today need that individual contributors of tomorrow—model managers—will need as part of the allocation economy.
A coherent vision
Today's managers need to have a coherent vision of the work they want to accomplish. Managers of humans need to craft a vision that is articulate, specific, concise, and rooted in a clear purpose. Model managers will need that same ability.
The better articulated your vision is, the more likely the model is to carry it out appropriately. As prompts become more specific and concise, the resulting work will improve. Language models might not, themselves, need a clear purpose, but model managers will likely have to identify one for the sake of their own motivation and engagement with the work.
Articulating a concise, specific, and coherent vision is difficult. It’s a skill that is acquired over years of work. Much of it comes down to developing a taste for ideas and language. Luckily, that’s a place that language models can help as well.
A clear sense of taste
The best managers know what they want and how to talk about it. The worst managers are the ones who say, “It’s not right,” but when asked, “Why?” can’t express the problem.
Model managers will face the same issue. The better defined their taste, the better language models will be able to create something coherent for them. Luckily, language models are quite good at helping humans articulate and refine their taste. So it’s a skill that will probably become significantly more widely distributed in the future.
If you have clear taste and a coherent vision, the next thing you need to do is be able to evaluate who (or what) is capable of executing it.
The ability to evaluate talent
Every manager knows that hiring is everything. If employees are doing the work, the quality of the output is going to be a direct reflection of their skills and abilities. Being able to adequately judge employees’ skills and delegate tasks to people who can carry them out is a significant part of what makes a good manager.
Model managers of tomorrow will need to learn the same things. They’ll need to know which AI models to use for which tasks. They’ll need to be able to quickly evaluate models they’ve never used before to determine whether they’re good enough. They’ll need to know how to break a complex task up among different models, each suited to its piece of the work, in order to produce a single result of the highest quality.
Evaluating models will be a skill in its own right. But there’s reason to believe it will be easier to evaluate models than humans, if only because models are easier to test: a model is accessible day or night, it’s usually cheap, it never gets bored or complains, and it returns results instantly. So model managers of tomorrow will have an advantage in learning these skills, because the management skills of today are gate-kept by the relative expense of giving someone a team of people to work with.
Once they’ve assembled the resources they need to get work done, they’ll face the next challenge: making sure the work is good.
Knowing when to get into the details
The best managers know when and how to get into the details. Inexperienced managers make one of two mistakes. Some micromanage tasks to the point that they are doing the work for their employees, which doesn’t scale. Others delegate tasks to such a degree that they aren’t performed well, or are not done in a way that aligns with the organization’s goals.
Good managers know when to get into the details, and when to let their reports take the ball and run. They know which questions to ask, when to check in, and when to let things be. They understand that just because something isn’t done how they would do it doesn’t mean it hasn’t been done well.
These are not problems that individual contributors in the knowledge economy have to deal with. But they are the exact kind of problems that model managers in the allocation economy will face.
Knowing when and how to get into the details is a learnable skill—and luckily, language models will be built to intelligently check in at the crucial moments when oversight is needed. So it won’t be entirely on model managers to do this.
The big question is: Is all of this a good thing?
Is the allocation economy good for humanity?
A transition from a knowledge economy to an allocation economy is not likely to happen overnight. For a while, at least, “model management” is going to look like replacing micro-skills—like summarizing meetings into emails—rather than entire tasks end to end. Even where the capability exists to replace whole tasks, many parts of the economy won’t catch up for a long time, if ever.
I recently got my pants tailored in Cobble Hill, Brooklyn. When I pulled out my credit card to pay, the lady behind the counter pointed at a paper sign taped to the wall: “No credit cards.” I think we’ll see a similar pace of adoption for language models: There will be many places where they could be used to augment or replace human labor but are not, for many different reasons: inertia, regulation, risk, or brand.
This, I think, is a good thing. When it comes to change, the dose makes the poison. The economy is big and complex, and I think we’ll have time to adapt to these changes. And the slow handoff of human thinking to machine thinking is not new. Generative AI models are part of a long-running process.
In his 2013 book Average Is Over, economist Tyler Cowen wrote about a stratification of the economy driven by intelligent machines. He argued that a small, elite group of highly skilled workers who are able to work with computers will reap large rewards—and that the rest of the economy may be left behind:
“If you and your skills are a complement to the computer, your wage and labor market prospects are likely to be cheery. If your skills do not complement the computer, you may want to address that mismatch. Ever more people are starting to fall on one side of the divide or the other. That’s why average is over.”
At the time, he wasn’t writing about generative AI models. He was writing about iPhones and the internet. But generative AI models extend the same trend.
People who are better equipped to use language models in their day-to-day lives will be at a significant advantage in the economy. There will be tremendous rewards for knowing how to allocate intelligence.
Today, management is a skill that only a select few know because it is expensive to train managers: You need to give them a team of humans to practice on. But AI is cheap enough that tomorrow, everyone will have the chance to be a manager—and that will significantly increase the creative potential of every human being.
It will be on our society as a whole to make sure that, with the incredible new tools at our disposal, we bring the rest of the economy along for the ride.