There’s a good chance that before November of 2022, you hadn’t heard of the tech nonprofit OpenAI or its cofounder Sam Altman. But over the last few years, they’ve become household names with the explosive growth of the generative AI tool ChatGPT. What’s been going on behind the scenes at one of the most influential companies in history, and what effect has this had on so many facets of our lives? Karen Hao is an award-winning journalist who has covered the impacts of artificial intelligence on society and the author of “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.” She joins WITHpod to discuss the trajectory AI has been on, its economic effects, whether or not she thinks the AI bubble will pop and more.
Note: This is a rough transcript. Please excuse any typos.
(MUSIC PLAYING)
Karen Hao: In the labor case, I think there is possibly the ability to reverse. Like if you make too early of a call and you try to replace all your workers with A.I., and then it doesn’t work out, you hire back the workers. But there are lots of other things that this hype around A.I. is driving forward that are not reversible.
(MUSIC PLAYING)
Chris Hayes: Hello and welcome to “Why Is This Happening?”, with me, your host, Chris Hayes.
(MUSIC PLAYING)
Chris Hayes: I’m willing to bet that before November 2022, you had never heard of a tech nonprofit called OpenAI, and you had probably never heard of an individual named Sam Altman.
Now, maybe you work in tech or in A.I., and you had, but I think the vast majority of people hadn’t. I’ll cop to it: I had never heard of OpenAI. I had never heard of Sam Altman.
And then they introduced this new generative A.I. chat feature called ChatGPT.
And you know, there have only been a few times when there’s been a kind of tech sensation like the introduction of that. It became the fastest growing app in history. It catapulted them both into fame, OpenAI and Sam Altman.
Sam Altman is now one of the most famous people, probably in America, in the world. He was in the White House I think the second day of the Trump administration. He was recently in Saudi Arabia during Trump’s trip there.
OpenAI is at the vanguard of what has been this kind of revolution in tech, in which hundreds of billions if not trillions of dollars have poured into the development of artificial intelligence based on large language models.
And, you know, depending on who you ask, we’re either at the cusp of the most transformative technology in all of human history, one that will unalterably catapult us into a strange new future in which humans enjoy the leisure they’ve been promised since the dawn of the industrial age, when Marx was writing about how, if we captured the surplus, everyone would be fishing and critiquing art, and Keynes was writing about the economic prospects of our grandchildren. What will people do when they don’t have to work all day?
They’re like, finally, this will happen. We finally got the technology, the ones in the previous eras were wrong. But now, we finally got the technology that will allow that. So that’s one way of looking at it, right?
The total transformation of human activity through the portal of a generalized intelligence. Or, as one person described it to me: imagine the nuclear arms race was happening in the 1940s and ‘50s, but it was just a bunch of Silicon Valley CEOs who were developing the nuclear weapons and they were competing with each other. How would that have gone?
Like, so one is endless surplus and bounty. The other is the destruction and end of the world, in which a kind of, you know, Frankenstein’s monster, a “2001: A Space Odyssey” HAL figure, takes over the world and destroys humanity as we know it, enslaves us, et cetera.
Or the third thing that I’ve heard as a possibility for this is, like, remember when there were the nonfungible tokens, NFTs, and that was going to be the future? And there was, let’s say, Web 3.0. And then it was just, like, remember when Zuckerberg’s whole thing was, we are rebranding the company and remaking the company around this new technology, which is the metaverse, everything’s going to be the metaverse? And that lasted for like maybe 12 months. And now we shall never speak of it again.
There were people who were buying, like, I remember reading an article about, like, Snoop Dogg sells waterfront metaverse property for seven figures or something? I was like, I don’t even understand that sentence. But all of that is now just a dream. It’s gone. So maybe it’ll be that. Maybe it’s just all absolute hype.
Those are kind of the three different ways that people sketch this future. And I have to say, I truly don’t know which it is. Like, I really feel, I’m trying very hard these days to get my arms around this, and I don’t know, depending on my mood or what article I read, I can be convinced of any of those three.
So I thought it’d be a good time to spend some time on today’s program speaking with someone who spent years reporting out a book about OpenAI. The book is called “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.” It’s out now. And the author is Karen Hao, who joins me now.
Karen, great to have you on the program.
Karen Hao: Thank you so much for having me, Chris.
(MUSIC PLAYING)
Chris Hayes: So there’s so much to get to here, like in the generalized A.I. discussion, the specifics of OpenAI. So I just want to start with filling in some of what I’ve learned from your book, but for the listeners who have not encountered your book, just filling in, like, the OpenAI backstory. Because it really was one of those things where, from one day to the next, no one knew what OpenAI was or what they were working on, or who Sam Altman was.
And then, all of a sudden, it was like they were going to control the world. I’ve never really encountered anything quite like it. I feel like Google is maybe the closest to it. But even Google felt like a slower burn, to be honest.
Karen Hao: Yeah, so totally.
Chris Hayes: I was there when people were looking for search and word of mouth started spreading, being like, Google. Whereas ChatGPT really felt like from one day to the next.
Karen Hao: Yeah.
Chris Hayes: No one knew about ChatGPT and then everyone knew about ChatGPT.
Karen Hao: Yeah.
Chris Hayes: And what I would love for you to start with is who started OpenAI? How did it start? What was its mission in the beginning? How did this thing come into being?
Karen Hao: Yeah, totally. So, OpenAI started as a nonprofit and it was co-founded by Sam Altman, who we now all know as CEO, and Elon Musk, which is kind of an interesting piece of the history that people aren’t as familiar with.
And they originally started as a nonprofit because they were trying to position themselves as an anti-Google. So, Musk at the time had become really deeply obsessed with this idea that A.I. was going to destroy humanity if the development of the technology fell into the wrong hands. And he specifically believed that Google was accumulating a rather outsized influence on A.I. talent at the time, particularly with the acquisition of DeepMind, and that that was going to lead A.I. development to be done almost purely with profit-driven incentives. And that was what was going to lead to the demise of humanity.
So Altman and Musk join forces and say, well, the best way to make an anti-Google is to just have a non-profit that’s going to be a fundamental A.I. research lab, doing the development of this technology purely without commercial interest, purely for the benefit of the public. We’re going to open source everything, make everything transparent, and that is how it originally begins.
Chris Hayes: How genuine do you think that was on Musk’s part? I mean, it’s a little hard to square the current version of Musk and the behavior I’ve observed from Musk, which does not seem particularly altruistic, with this sort of, he just had this genuine belief that this was a scourge to humanity, and we need to start a nonprofit.
Karen Hao: You know, I think every major Silicon Valley titan always thinks of themselves as being altruistic, and also, is deeply competitive, and also, has a deep seated desire to have a lot of influence and control in the world, you know? So, I think it kind of all blurs and blends together.
And so, do I think Musk believed that he believed that he was doing this for altruism? Yes.
Chris Hayes: Right.
Karen Hao: But, of course, there were other —
Chris Hayes: That’s a great phrase.
Karen Hao: Yeah. Yeah. But, of course, there were other things at play. And it was the same with Altman. I would say the exact same thing about him. He also believed that he believed that this was for the benefit of humanity. But there were other strategic factors kind of in the air.
Chris Hayes: Right. So with the Google concerns, you’ve got the sort of: if this is done by a private firm, solely pursuing a profit motive, it might lead to something that’s genuinely catastrophic for humanity.
Karen Hao: Yes.
Chris Hayes: And also, maybe we don’t want Google to grab the brass ring.
Karen Hao: Exactly.
Chris Hayes: Sort of side by side.
Now, around when was this, that they had these conversations and they started it?
Karen Hao: This was in 2015. Yeah.
Chris Hayes: Okay.
Karen Hao: They started having those conversations. And then at the end of 2015 was when they announced OpenAI.
Chris Hayes: And what’s Altman’s background at that point? And how does he even know Musk?
Karen Hao: Altman was the president of Y Combinator, which was at that point already the most prestigious startup accelerator that you could join in Silicon Valley. If your startup got into YC, you were kind of set.
You didn’t get a lot of capital. The YC promise was interesting. They only gave you a little bit of capital, but you tapped into a huge network, very influential network that would then rapidly accrue you more headlines, more capital, more talent.
Chris Hayes: Yeah.
Karen Hao: So on and so forth.
So he was an investor, he was a startup guy. He came up through Silicon Valley.
Chris Hayes: He was in a Y Combinator class. He had been in Y Combinator.
Karen Hao: He was in the inaugural Y Combinator class. And —
Chris Hayes: With a friend of mine, actually.
Karen Hao: That’s wild. And that’s where he met one of his most important mentors, Paul Graham, who was the original founder of YC. And then after he launched his startup and did it for seven years, it failed. But then Graham thought, you are one of the most unique people I’ve ever met in my life. I want you to take over YC.
And so, Altman at a very young age, in his late twenties or early thirties, ends up taking over Y Combinator.
And at that point, when he took over YC, you know, the startup that he had really been working on was kind of a Foursquare competitor. It wasn’t such an ambitious, big-picture vision.
Chris Hayes: I’ve seen video of him giving his pitch, which is like —
Karen Hao: Yeah.
Chris Hayes: — you never know, you might be right near your friend who’s a block away. And then you find out, and then you’re both there and you can go get a drink.
And it’s like, it’s so funny, because it’s like, it captures a certain era —
Karen Hao: Yeah.
Chris Hayes: — of, like, innocence intact. Like, hey, it’s not like, we are replacing all of you disgusting humans.
Karen Hao: Yeah.
Chris Hayes: It’s like —
Karen Hao: Exactly.
Chris Hayes: — hey, maybe you meet your friend for a drink.
Karen Hao: Yeah. Amazing serendipity. Yeah.
And basically, when he becomes president of YC, he really starts stepping into the shoes of the typical Silicon Valley titan: let me paint you a sweeping vision of the future, and this is why you need to put a lot of money in X, Y, Z, whatever thing that I’m investing in.
Chris Hayes: I think it’s worth noting here. And this is something that does come through in your book about the culture of Silicon Valley. When you talk about painting the sweeping picture —
Karen Hao: Yeah.
Chris Hayes: — that in some ways, because of the models of venture capital, it rewards grandiose storytellers —
Karen Hao: Absolutely.
Chris Hayes: — more than it rewards anything else in some ways. Like there’s lots of people who are great engineers, I mean, and brilliant engineers. And you’ve never heard their names. They’re not the people —
Karen Hao: Yeah.
Chris Hayes: — who are household names. The household names are people who have a particular talent —
Karen Hao: Yes.
Chris Hayes: — for sketching a vision of the future that people hear. And they’re like, I want to be, I want to put billions of dollars towards that.
Karen Hao: Yes. And I wouldn’t say that Steve Jobs invented or was the origin of this culture, but certainly, Jobs was the —
Chris Hayes: Yes.
Karen Hao: — most iconic storyteller that everyone in the Valley looked up to. Like, I worked in Silicon Valley myself, and I remember at the startup that I worked at, everyone always talked about how storytelling was so important, how our CEO was a really great storyteller and that’s what made him a really good CEO. That’s how people discussed it.
And it was because Jobs was just so successful at executing that storytelling talent and showing all of these young founders in that generation what it could be like, how much transformative power you could kind of accrue by painting those sweeping visions.
Chris Hayes: Now, in the case of Jobs, I would say that he also had a talent for product development that was unparalleled. And in some ways the story of Apple was the, you know, the pairing of his ability to do both. Like —
Karen Hao: Yeah.
Chris Hayes: — it truly is the case. that the products they developed were genuinely transformative, unlike anything else totally changed the world.
I think, you know, the iPhone, more than anything, is one of the most transformative devices created in my lifetime.
So he had kind of both. I think one of the questions we’re facing now, with a lot of Silicon Valley, is like, is this all just, am I watching “The Music Man”? And the guy who’s like —
Karen Hao: Right. Right, right.
Chris Hayes: — doing the song and dance —
Karen Hao: It’s all story, no substance. Right.
Chris Hayes: Can they produce? And on that back end, I wanted you to talk a little bit about ChatGPT, because it is transformative. How did that happen, what made it happen, who developed it, and how did it kind of come out of nowhere?
Karen Hao: It’s so interesting, because as someone who was covering it, it did not feel like it came out of nowhere. It felt like there was a really clear, kind of slow boil up until the release of ChatGPT, to the point where, I have to admit regularly now, when people ask me, what were you doing during the ChatGPT moment? I don’t remember, because I didn’t consider it to be a moment.
When ChatGPT first came out, I was like, oh, seen that, like, whatever. And that was a huge miss on my part, in that I didn’t understand that even though the technology had previously existed, the fact that you now slap a super easy and free user interface on it completely changes the game.
But I guess to go to your question of where did that come from, I mean, what I write about in the book is how a lot of people now associate all of A.I. with ChatGPT, but it’s actually a very specific pathway of A.I. development that OpenAI decided to take.
So early in its nonprofit heady days, it was trying to figure out how do we become a leader in A.I. because we want to beat Google. Like how do we surpass Google as a scrappy nonprofit?
And what they hit upon was this thesis: oh, we need to scale existing A.I. technologies to an unprecedented degree, as fast as possible, and continue scaling faster than anyone else. And in order to do that, we are going to need an extraordinary amount of data, to the point that we will be scraping the whole Internet, and an extraordinary number of supercomputers, like the largest supercomputers that have ever been built in human history.
And when they hit upon that, they then transitioned into a nonprofit with a for-profit arm underneath because they realized there’s no way we can raise the amount of capital that we need under this nonprofit. So, let’s create this other fundraising vehicle where we can promise investors returns.
And then they set off on this trajectory where they partnered with Microsoft, who was going to build them the supercomputers. And then they started scraping all of that data and just glomming more and more pools of data onto their servers for training these models. And in that time, in the A.I. research world, because ultimately OpenAI, at its core, was still a research project at that point.
In the A.I. research world, there had been a debate about how to actually make A.I. progress. Like what, what are the things, the shapes of the technology that you should be building?
Chris Hayes: What’s the path forward to get us to the next level?
Karen Hao: Yeah. Like what is the thing that we should be scaling? That was the question at OpenAI at the time.
And within the field, there was this hypothesis that maybe language models could be a fast way to get rapid progress, because, the theory argues, people communicate all the knowledge that they ever have through language. So, if you can take all the language from the Internet, it should be able to give you a pretty good approximation of all the knowledge in the world. There’s a lot of debate about that. I mean, also —
Chris Hayes: But this was the theory, right? Like —
Karen Hao: But this was the theory.
Chris Hayes: — large language models, because you have a base, you have a data set that exists that you can go and scrape.
Karen Hao: Exactly.
Chris Hayes: We could talk about the copyright stuff in a second. And we have a technology that can suck in a lot of data, use the connections between those data points to create neural networks that can kind of predictively find patterns.
Karen Hao: Yes.
Chris Hayes: I mean, I’m oversimplifying how a large language model works.
Karen Hao: No, yeah. I mean, that is exactly what it is. Like, you pour all this data in and then the large language model starts to learn the patterns of human language, and then it spits it back out at you. And you can use those generations to test whether or not it’s actually starting to learn sophisticated concepts about the world, because it’s spitting it out in a way that is human-legible.
Chris Hayes: Yeah.
Karen Hao: And so, that’s the research case. But there’s also a great commercial case for it, in that when you create systems that can talk to you, I mean, what a compelling product.
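For readers who want to see the mechanic Hao is gesturing at, here is a toy sketch of "pour the data in, learn the patterns, spit them back out" as a crude bigram model. It is purely illustrative: real systems like GPT are transformer networks trained on internet-scale data, and nothing here reflects OpenAI's actual code.

```python
# A toy sketch of the "learn patterns, then spit them back out" loop:
# a bigram model, the crudest possible language model.
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Count which word tends to follow which word in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Repeatedly sample a plausible next word; that is all 'generation' means here."""
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigram_model(corpus)
print(generate(model, "the"))  # e.g. "the cat slept on the mat and the cat sat"
```

The point of the toy is only that "training" here means counting patterns in the data, and "generation" means sampling from those patterns.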
Chris Hayes: Well, this to me is, so there’s the thing about ChatGPT, there’s two breakthroughs happening. There’s the fact that they bet on this scaling issue. We’re just going to, like, we’re going to get more compute than anyone. And this is just fundamentally, this is just a numbers problem. Like —
Karen Hao: Yeah.
Chris Hayes: — basically, if you get enough data and you run enough GPUs, you can get to something like the human mind.
Karen Hao: Yes. That is the hypothesis, yeah.
Chris Hayes: That the human mind is just like a massively parallel processor. It’s only beating our computers because of its cellular structure. But if we just do the electronic version that gets to that scale, you’ll produce stuff that’s like what humans do. That’s basically the theory, right?
Karen Hao: That is the theory. Exactly.
Chris Hayes: And then they started going about —
Karen Hao: Doing that.
Chris Hayes: — doing it, right?
Karen Hao: Yeah.
Chris Hayes: The biggest data set, the most GPUs. They’re training on the data. They’re scraping it.
But to me, part of the genius is the interface.
Karen Hao: Yeah.
Chris Hayes: Because if you go back to, you know, this very famous logician, mathematician, computer scientist, and philosopher named Alan Turing, you know, Alan Turing came up with this thing, when he was thinking about artificial intelligence, called the Turing Test.
Karen Hao: Yeah.
Chris Hayes: And the idea of the Turing Test was like, you can call a machine intelligent if you can chat with it and not know it’s a machine.
Karen Hao: Yeah.
Chris Hayes: And basically, they took the Turing Test and they were like, we’re going to use that as our main interface. Right? I mean —
Karen Hao: Yeah. Well, you know, there were a couple reasons that they did that. First of all, science fiction was a really big reason. Altman had always been really obsessed with the movie “Her” and this idea that you can chat with a model through a language interface, I think in part because he felt like this was the clearest articulation of what AGI could possibly look like, in a void of articulations about what this technology is and what it should do.
Also, you know, it’s not a coincidence that ChatGPT was not the first chatbot to start creating waves of hype around A.I. development. The very first chatbot to do that was ELIZA, and it was built in the mid-1960s.
And it was built by an MIT professor named Joseph Weizenbaum, who was actually doing it as an experiment to see how easily humans can be duped, when a computer starts talking to them, into believing that there might be intelligence hidden beneath the surface. And he was really alarmed that people actually were. It was a much simpler system. It wasn’t —
Chris Hayes: Much. Much, much, much like —
Karen Hao: Yeah, it wasn’t statistical calculation scraping the Internet. It was rule-based. It literally followed a method of Rogerian psychotherapy, where you say, hey, I’m feeling bad today, and then there were these rules behind ELIZA that would say, why are you feeling bad today? Like, it would just copy and paste what you were saying, and then flip the “you” to the “I”, and “your friend” to “my friend”, and your dad to my dad.
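Hao's description of ELIZA's rules translates almost directly into code. Here is a toy sketch of the pronoun-flipping trick; the real 1960s ELIZA used a much richer keyword-and-script system, so this only shows the flavor.

```python
# A toy sketch of ELIZA-style rule-based reflection: flip the pronouns,
# then hand the statement back as a canned Rogerian question.
REFLECTIONS = {
    "i": "you", "my": "your", "me": "you", "am": "are",
    "you": "I", "your": "my",
}

def reflect(statement: str) -> str:
    """Flip the 'you' to the 'I', and 'your friend' to 'my friend'."""
    words = statement.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(statement: str) -> str:
    """Copy and paste what the user said, reflected back as a question."""
    return f"Why do you say {reflect(statement)}?"

print(eliza_reply("I am feeling bad today"))
# -> "Why do you say you are feeling bad today?"
```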
Chris Hayes: Yeah.
Karen Hao: Like it was, it was following all these rules and people back then —
Chris Hayes: It’s like a wall that you’re play — like you’re hitting a tennis ball off of basically.
Karen Hao: Yes. And people back then were declaring that AGI’s already solved. I mean, they weren’t using the term AGI, but they were effectively saying that AGI’s already solved. We will not need doctors anymore. Psychotherapy’s going to be automated. Teachers will be replaced. All jobs will go away.
Chris Hayes: Wow.
Karen Hao: And you know, that obviously sounds incredibly familiar in the moment that we’re in. But it really hits upon the fact that it’s not a coincidence that ELIZA and ChatGPT function similarly. I was watching a lecture from a computer science professor at Notre Dame, who was saying, humans have anthropomorphized anything and everything for thousands of years.
Chris Hayes: Yep.
Karen Hao: And there is something deep there, it taps into our psychology to have things talk to us. So I think that’s why OpenAI ultimately chose an interface that was language-based.
Chris Hayes: And what’s amazing is so much of the discourse around it recalls the Oracle at Delphi, like this oracular consultation, where you go to the Oracle and you say, what should I do with my life? You know, people like —
Karen Hao: Yeah.
Chris Hayes: — asking, you know. Even the other day, we were doing a story about Elon Musk’s A.I. model Grok, which is embedded in what used to be called Twitter. And it had started spouting all this craziness about white genocide and kill the Boer —
Karen Hao: Yeah.
Chris Hayes: — because someone had clearly mucked with the code.
Karen Hao: Yeah.
Chris Hayes: And at some point in this, someone asked Grok like, well, what happened to you?
Karen Hao: Yeah.
Chris Hayes: And Grok gave some answer. And we were talking about this in our editorial meeting, and I was like, well, we can’t credit the answer. Like, it doesn’t actually know.
Karen Hao: Exactly.
Chris Hayes: Wait, it’s not like —
Karen Hao: Exactly.
Chris Hayes: — oh, now it’s telling the truth. There is no it there to tell the truth, like —
Karen Hao: Exactly.
Chris Hayes: It could be right, or it could be wrong. But the impulse we have to view it as sentient is so overwhelming.
Karen Hao: It is so overwhelming. I mean, the fact that we can anthropomorphize a rock —
Chris Hayes: Right.
Karen Hao: — in cartoons —
Chris Hayes: Totally.
Karen Hao: A rock is not talking to us. So, the —
Chris Hayes: Right.
Karen Hao: — the moment that you have a chat interface, that seems to be empathetic, that seems to understand your emotion, I mean, that’s even more powerful.
Chris Hayes: So part of the story of OpenAI, among many stories, is a kind of classic story of, oh, we’re going to do this to save the world. We don’t want a profit-driven A.I. model. We’re going to be a nonprofit.
And now, like that’s basically all fallen apart. Right?
Karen Hao: Yeah. Completely. I mean, OpenAI is potentially the most capitalistic tech company ever. They just finished raising $40 billion at a $300 billion valuation. And that’s the largest fundraise of private tech investment money in the history of Silicon Valley.
Chris Hayes: So, they changed the structure?
Karen Hao: They nested the for-profit arm under the nonprofit. And for a while, they called it a capped-profit arm, because they were capping the amount of returns that investors could get from their initial investment. They’ve now changed it to a PBC, a public benefit corporation, which is a for-profit arm that is supposed to also have a bit of a social mission around it. But the core difference between what previously existed and what exists now is that there’s no cap to the returns anymore.
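For concreteness, here is the arithmetic of a capped-profit structure. The 100x multiple used below matches what was publicly reported for OpenAI's earliest investors; later rounds reportedly carried lower caps, so treat all the numbers as illustrative rather than a statement of OpenAI's actual terms.

```python
# Illustrative arithmetic of a capped-profit structure: returns above the
# cap flow back to the nonprofit. The 100x figure is the reported cap for
# OpenAI's earliest investors and is used here only as an example.
def split_proceeds(invested: float, gross_return: float, cap_multiple: float = 100.0):
    """Split an investor's gross proceeds between the investor and the nonprofit."""
    cap = invested * cap_multiple             # the most the investor may keep
    investor_take = min(gross_return, cap)
    nonprofit_take = max(gross_return - cap, 0.0)
    return investor_take, nonprofit_take

# A $1M early investment whose stake becomes worth $250M:
investor, nonprofit = split_proceeds(1e6, 250e6)
print(f"investor keeps ${investor:,.0f}, nonprofit gets ${nonprofit:,.0f}")
# -> investor keeps $100,000,000, nonprofit gets $150,000,000
```

Removing the cap, as in the PBC conversion Hao describes, simply deletes the second branch of that split: all upside stays with investors.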
(MUSIC PLAYING)
Chris Hayes: More of our conversation after this quick break.
(MUSIC PLAYING)
(ADVERTISEMENT)
(MUSIC PLAYING)
Chris Hayes: Who owns it?
Karen Hao: You know, honestly, that’s a really great question. And one of the things that I realized through my reporting is, whereas Elon Musk is someone who plays legal offense, and he just tries to sue everyone and everything in his way, Sam Altman plays legal defense, where he creates such deeply convoluted structures that it becomes impossible to fully understand who owns what, and where’s the money flowing, and what is the tax situation. OpenAI is actually just one example of the different types of structures that he’s spun up throughout his career.
And this is something that even employees have mentioned to me, like it takes an army of lawyers to understand what is going on. And there have been moments when employees have felt really in the dark themselves about like what even their rights are as employees. And they have to spend a lot of money on legal services to try and comb through all the documentation that they get, because there’s just so many nested entities.
Chris Hayes: Given that Sam Altman is now one of the most powerful people in the country, which I think it’s fair to say, and maybe the world: they’re starting to do development with countries. They just announced a partnership with the former Apple designer Jony Ive today.
You know, they say, I mean, again, one of the craziest things, if you listen to the people generating this technology, they’re like, yeah, it’s definitely going to be the most dangerous thing that’s ever hit human beings. Like all of them say that.
There’s a line in your book, and it was in the excerpt in “The Atlantic”, about, you know, we’re going to have to all get into a bunker before we release, you know, artificial general intelligence. We could talk about what that means.
Can you just give me a little character sketch of Sam Altman? I find him a very difficult person to get a read on.
Karen Hao: You’re not the only one.
Chris Hayes: Yeah. I mean, I think that’s his shtick basically.
Karen Hao: That is, you know, the thing that was weirdest to me when I was reporting. Everyone that I interviewed, I would ask them: So, what do you think about Sam?
And no matter how long they had worked with him or how closely they had worked with him, no one could actually articulate what this man believes and what his deal is. And I got to a point where I learned in my reporting to stop asking people, what do you think about Sam? I would be like, what did Sam tell you in such and such meeting? Or like, what did Sam tell you about what he believed in this particular situation? What did Sam tell you to motivate you to do X?
And what the sources would say is, Sam would tell them exactly what that source believed. What he said changed from person to person, based on that specific individual. And so, no one really has a grasp. But he is a once-in-a-generation storytelling talent who has a loose relationship with the truth, and he’s incredibly persuasive.
So he can paint the vision of the future, and he has this really profound ability to understand what people need to hear and what they want, and to frame everything as: you join me because I can get you what you want. And he can sort of say whatever he wants in that meeting, because it doesn’t necessarily need to be grounded in reality.
Chris Hayes: There’s also been a series of whistleblowers and dissidents, and this is chronicled in the book, people who have been really alarmed by the direction he’s taken it.
Tell me a little bit about who those folks are and what are the concerns they’ve raised about the trajectory of OpenAI under Sam Altman?
Karen Hao: Yeah. So, you were saying in the beginning that there are these dramatically different narratives about A.I. dominating discourse today, which is, AGI will bring us utopia or AGI will kill us all. And colloquially, they’re called the boomers and the doomers, with the boomers being the positives and the doomers being the negatives.
To me, those are actually two sides of the same coin, because both camps believe in the AGI religion, as I call it: that AGI is almost here, that it’s achievable, that it’s just around the corner, and that it’s going to profoundly affect society.
Chris Hayes: Would you just, can we just stop there and can you just explain what AGI means?
Karen Hao: Okay. AGI refers to artificial general intelligence, and this is an incredibly poorly defined term. The very hand-wavy summary of what people typically describe it as is an A.I. system that can ultimately do anything that humans can do. But this is a really challenging measure, because there’s no scientific consensus on what makes humans intelligent. And so, when you’re trying to capture in software something that we don’t understand about humans —
Chris Hayes: Right.
Karen Hao: — you end up with lots of different opinions about how to do it, what it should look like, who it should serve, all of those things. And so, throughout the decades of A.I. research and development, all the way from the 1950s until present day, there have been just tons and tons of debates, egos clashing, opinions clashing about these kind of core questions of what A.I. and AGI ultimately is.
The way that OpenAI has specifically defined it is highly autonomous systems that outperform humans at most economically valuable work. And so, they’ve specifically defined it as a labor-automating machine. That is also a really key, important dimension to understanding the truly, deeply capitalistic nature of OpenAI.
Chris Hayes: Yeah.
Karen Hao: And also to understanding the trajectory that they’re taking as a company. Ultimately, they’re trying to build systems that they can sell to CEOs for a lot of money —
Chris Hayes: To replace humans.
Karen Hao: — to say —
Chris Hayes: To automate human work.
Karen Hao: Exactly. To automate it away, yeah. If they’re trying to build systems that outperform humans at the thing that makes people want to pay you, you’re no longer going to be paid. They’re just going to opt for the A.I.
Chris Hayes: And to your point about the boomers and doomers, the kind of utopians and dystopians, being two sides of the same coin: that coin, the belief that this level of generalized intelligence that outperforms humans at economically relevant tasks is achievable and around the corner, is the core faith that unites them.
Karen Hao: Yes. And that in and of itself, whether or not artificial general intelligence is achievable and whether it even has a well-defined definition, that is a scientific debate. Like, there was a really great story in “The New York Times” by Cade Metz that was just highlighting this, and the headline was “Why We’re Unlikely to Get Artificial General Intelligence Anytime Soon”.
And there was a specific line: in a recent survey of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, more than three quarters of respondents said the methods used to build today’s technology were unlikely to lead to AGI.
That’s why I call it an AGI religion. There is a faith that people have, not grounded in scientific evidence, that AGI is coming, that it’s coming soon, that it’s imminent, and that it’s going to be dramatically transformative, either utopia or dystopia. And ultimately, the conclusion that both camps have, both the boomers and the doomers, is that because it will have such high consequences for humanity, because it has the potential to bring humanity to heaven or to hell, they’re the ones that have to control it, right?
Chris Hayes: Right, right.
Karen Hao: Like, the boomers are the ones that have to control it. The doomers are the ones that have to control it. The adherents of their subsects of this religion need to be the ones that usher humanity to the next era.
Chris Hayes: I mean, I want to take a step back, because one of the things that’s hard to disentangle is the mythmaking and the storytelling and the projections of the future from the actual technology itself.
Karen Hao: Yeah.
Chris Hayes: And I really find this difficult, because I do find myself feeling a little bit like one of those lame centrists who’s like, well, both sides get it wrong. But I do feel that way a little bit about the discourse here, where there’s a certain group who are like, this is all slop and schlock and B.S. and garbage, and it doesn’t do anything useful. And then there’s people who are like, you’re going to marry an A.I., you know?
And neither of those sort of seems right to me. Like, it very clearly seems like a profoundly useful and incredibly impressive and sophisticated technology. It already does stuff that was not conceivable to me three years ago. Like, it is capable of doing things I did not think computers could do three years ago.
It has a million different use cases, many of which, very obviously, I think are really worrisome for the human welfare and labor market implications. But I guess, like, how do you think about the technology itself, independent of these —
Karen Hao: Yeah.
Chris Hayes: — the religion and the story and the projections of the future?
Karen Hao: Yeah, totally. I mean, to me, there’s a clear explanation for why there’s such a fragmented set of opinions about whether this technology is useful or not. It’s because the technology can be highly useful for very specific things, and it can fall apart very quickly for other things.
Chris Hayes: Yeah.
Karen Hao: For example, if you switch out of the English language, it suddenly becomes much more difficult to use and much less effective. Yeah.
Chris Hayes: And that’s just because the dataset of English it’s trained on is so enormous.
Karen Hao: Yes. And also because most of these companies only do their testing in English.
Chris Hayes: Right. Yeah.
Karen Hao: So they only do their content moderation in English. They only do their refining and tuning and all of these things in English.
And they also, you know, like OpenAI will use test cases internally where they will test the model on its ability to talk about A.I. research topics. So even the topic selection for like the stress testing of their technologies —
Chris Hayes: That’s very funny.
Karen Hao: — becomes hyper specific. And so, people within the A.I. world are like this technology is incredible because everything that it does is stress tested on the things that they need done.
Chris Hayes: It’s so funny you say this, because the thing that I have most used A.I. for is to teach me about A.I., and it’s actually been pretty amazing at it.
Karen Hao: Yeah.
Chris Hayes: It’s actually the one thing, like I have had a long running sort of self-tutorial about A.I. with ChatGPT and DeepSeek, and it’s been very good.
Karen Hao: Yeah.
Chris Hayes: Like, you know, basically I started with, what’s a neural network? I don’t understand this, how does this work? And walking me through, and then building up from that. And then, you know, I want you to explain: there’s a famous paper in A.I. that sort of transformed things, called “Attention Is All You Need”.
Karen Hao: Yeah.
Chris Hayes: And explain this paper to me. Like I don’t have a computer science degree, like explain the paper to me.
Karen Hao: Yeah.
Chris Hayes: And then asking more questions. And it’s been amazing at that, I have to say.
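The paper Hayes mentions, "Attention Is All You Need," centers on one formula, scaled dot-product attention: softmax(QK^T / sqrt(d_k))V. Here is a compact NumPy sketch of just that formula; real transformers add learned projections, multiple heads, masking, and much more, so this is a teaching sketch, not a working model.

```python
# A compact NumPy sketch of scaled dot-product attention from
# "Attention Is All You Need": softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each token attends to each other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V  # weighted mix of the value vectors

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8  # 4 tokens, 8-dimensional vectors
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```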
Karen Hao: You have hit upon the exact use case that is the most, like if you had to use these technologies for anything, this is like the one use case that it is going to knock out of the park time and time again, because that is exactly what these companies stress test the technologies on.
But if you move towards, you know, like a particularly esoteric piece of cultural analysis, I don’t know, or like historical —
Chris Hayes: Yes.
Karen Hao: — fact of a particular tribe in a certain region of the world that is far from Silicon Valley, like that’s when you’re going to get some like really wild stuff coming out of the models that has no bearing on reality.
Chris Hayes: I mean, at a philosophical level, here’s part of what I think we’re wrestling with, because it really does stoke profound questions about what human intelligence is. To your point about like —
Karen Hao: Yeah.
Chris Hayes: — the question of what, whether we have artificial generalized intelligence is in some ways dependent on how we understand what —
Karen Hao: Yeah, exactly.
Chris Hayes: –human intelligence is. And that’s an unsolved problem. But I want to give an example of an early version of A.I. that we’ve all been living with for a long time, that I’ve thought about many times as a sort of philosophical experiment. And that is the algorithm banks use to detect fraud.
So what that’s doing is, it’s just scanning a ton of data and it’s making predictions.
Karen Hao: Yeah.
Chris Hayes: And then when you do something that deviates so much from that prediction, they’re like, ah, red flag, right?
Karen Hao: Yeah.
Chris Hayes: So it’s just predictive. It’s scraping a bunch of data. It’s building a bunch of models using neural networks that say, oh, this is usually how these people operate. This is usually what their behavior is.
Karen Hao: Yeah.
Chris Hayes: And what’s always amazed me was how good they were, because sometimes I would make a purchase where I was like, that was kind of an impulse purchase and the bank wouldn’t flag it, and I’d be like, huh, I thought I had human autonomy and human agency when at a whim, I walked into this place and bought this thing.
But it turns out that wasn’t enough of an autonomous decision to so wildly deviate from what some algorithmic prediction of my behavior would be, that it would flag a fraud thing. And then at the same time, when someone once got my credit card and bought like 30 pairs of shoes at a place, you know, in, I forget where like —
Karen Hao: Yeah, they did flag it.
Chris Hayes: They flagged it. They like —
Karen Hao: Yeah.
Chris Hayes: — they knew. And so, there’s something about that that’s kind of profound.
Karen Hao: You know what’s so funny? I get flagged for fraudulent activity all the time, incorrectly. And this is, like —
Chris Hayes: So, you are, this means that —
Karen Hao: — so frustrating.
Chris Hayes: — you’re a fully formed unique subject and agent.
Karen Hao: No, no.
Chris Hayes: I am just a lame predictable —
Karen Hao: No, no.
Chris Hayes: — walking automaton. No, I’m serious. There’s a profound philosophical point here.
Karen Hao: No, what I, no, what this means is that there is a specific profile. I mean, this is, this has been studied for a long time. There’s bias embedded in all kinds of A.I. systems —
Chris Hayes: Right, right, right.
Karen Hao: — and they train on certain types of data and it will work really well when you match that training data.
Chris Hayes: Right.
Karen Hao: But it will not work once you drift out of that distribution. And it’s called drift when you’re applying an A.I. system to something that it wasn’t actually intended to be used on. And this is a challenge with all types of A.I. systems. But predictive A.I. systems, as you were talking about with the bank fraud, that is an easier problem to solve, because it is a very specific use case. You can stress test it with as much imagination as possible of all the ways that it could go wrong.
Chris Hayes: And you get definite concrete feedback. Like in every case you confirm, this was fraud, this wasn’t fraud.
Karen Hao: Yes.
Chris Hayes: So you go back to the training model and you say, you got this right, or you got this wrong. And there’s always this definitive answer, as opposed to, like, write a paper. Right? Like —
Karen Hao: Yes.
Chris Hayes: — is it good? Is it bad? Like this is just a, it’s a binary thing. You got it right or wrong.
Karen Hao: Yeah, exactly. And the challenge with generative A.I., or what people now increasingly call general A.I. systems, is that they’re meant to be everything machines. They’re meant to do everything for anyone. But actually, they only do some things for some people.
Chris Hayes: That’s good. That’s a great point. Right.
Karen Hao: Yeah. There are people that exist out of the target audience that are doing things that are out of the target tasks, where everything starts to fall apart.
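The fraud example maps onto a standard anomaly-detection pattern: learn a profile of normal behavior, then flag points far outside the training distribution. Here is a minimal sketch using scikit-learn's IsolationForest; the features, numbers, and model choice are illustrative assumptions, not how any actual bank's system works.

```python
# A minimal sketch of the predictive flagging in the fraud example:
# learn a profile from past transactions, then flag anything that
# drifts far out of that distribution.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Training data: (amount in dollars, hour of day) for a predictable spender.
history = np.column_stack([
    rng.normal(40, 10, 500),  # typical purchases around $40
    rng.normal(14, 2, 500),   # usually made mid-afternoon
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A routine coffee run vs. thirty pairs of shoes at 3 a.m.
routine, suspicious = [45, 15], [2500, 3]
print(model.predict([routine, suspicious]))  # 1 = looks normal, -1 = flagged
```

The same mechanic explains both experiences in the conversation: a spender who matches the training distribution is never flagged, while one whose legitimate behavior sits outside it gets flagged constantly.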
(MUSIC PLAYING)
Chris Hayes: We’ll be right back after we take this quick break.
(MUSIC PLAYING)
(ADVERTISEMENT)
(MUSIC PLAYING)
Chris Hayes: So if you’re a boring Brooklyn dad who just, like, lamely goes through his life, and he brings his kids to sports, and he buys the stuff that he buys, it’s like, ehh, you train on the target data, then it works for me.
Karen Hao: And then you’re specifically asking it about A.I., the exact topic that they stress test the model on —
Chris Hayes: Right, exactly.
Karen Hao: Yeah. Then, it’s seamless.
Chris Hayes: That’s right. But if you’re, you know, a more unpredictable person, or you’re not in this incredibly nailed target demo, right?
Karen Hao: Yeah. You’re not in the norm of the society that Silicon Valley defines to be the norm —
Chris Hayes: Right.
Karen Hao: Then things start to fall apart and it doesn’t, you know, like —
Chris Hayes: That’s a great point. That’s, yeah, this is actually a perfect illustration of that, right? For you to say, I get flagged with this all the time —
Karen Hao: Yeah.
Chris Hayes: — and me to be like, well, that never happens, it just kind of knows me backwards and forwards. Like, it does actually kind of perfectly illustrate the point.
Karen Hao: Yeah. Yeah. This is actually one of my greatest frustrations when I’m traveling, because it’s so automated now that you can’t tell your bank, no, that was not fraud. Like, allow me to make this damn purchase.
Chris Hayes: Yeah.
Karen Hao: Like I want this thing. And now I just have to carry like three credit cards with me at all times to like make —
Chris Hayes: And this to me —
Karen Hao: — my purchases.
Chris Hayes: OK. So this to me is actually a place where we’re zeroing in on what I think is, again, sort of an important middle place, which is this idea that it’s going to be a sort of panacea, a universal toolkit and all things for all people.
Karen Hao: Yeah.
Chris Hayes: For the reasons you enunciated: A, how difficult it is to achieve that. And B, the fact that the training data really matters: what data it’s training on, and for whom, and where in the distribution, right?
Karen Hao: Yeah.
Chris Hayes: That said, it also just seems clearly the case that there’s this, like, labor replacement project, right? Let’s say, you know, that’s the project economically.
Karen Hao: Yeah.
Chris Hayes: Which we can talk about the implications of. You know, like, okay, fine: brief writing. There are a lot of first-year associates in this country. They make a fair amount of money. I mean, you know, not as much as the people above them, but it’s a relatively remunerative job.
Let’s just train it how to do first-year associate work. And that seems like, yeah, it seems like it’s probably up to that task.
Karen Hao: Yeah, 100 percent. So, there’s a part in the book, I got access to a trove of documents that OpenAI used to detail how the model should be trained and what tasks it should be trained on, and there’s a part of the book that writes about this.
And there’s literally a line that I quote in one of the documents that says, think about all of the things that people might want to do that are economically valuable, like creating art, writing film scripts, writing emails, summarizing briefs. And, think about the industries. And it basically starts listing all of the lucrative industries: entertainment, media, finance. And so, even as they’re trying to paint this internally as, it can do anything —
Chris Hayes: Right.
Karen Hao: What they’re actually doing is enumerating a list of the most economically valuable tasks that they want to focus on.
And so that is, yeah, you’re exactly right, like that is ultimately their stated goal. That is ultimately what they’re doing internally. And that is the impact that it will have.
Chris Hayes: I mean, that’s where we end up. To me, the thing that seems most likely, and in some ways the most unnerving, is actually building a quite effective white collar labor automation machine.
And what I find pretty unnerving about this, and I’ve been thinking about this a lot, and I’m actually kind of working on maybe writing something about this: as we sit here, we look at the last 30 or 40 years of automation and neoliberal trade policy that basically took the country’s manufacturing base, the possibility of a unionized, stable job with a high school degree with which you could, you know, support a family and buy a house and maybe go on vacation, right? And kind of destroyed that. And it dislocated people geographically, and it annihilated entire towns and areas.
And as we look back on that 40 years later, we think, ah, maybe that didn’t go so well, right? That’s now the new consensus. You got yelled at if you said that in 1999, but now, new consensus. And at the same time we’re saying that, it’s like, hey, what if we do that to all the white collar workers, too?
Karen Hao: Yeah. Yeah.
Chris Hayes: Which is the explicit project they’re trying to do.
Karen Hao: Yeah. And the thing that I’ll add, I guess, to go back to your question of what I actually think about this technology, is I don’t think it requires this technology to be that highly performant in order for this reality to come to fruition, because —
Chris Hayes: You’re saying the substitution effects like the labor automation.
Karen Hao: Yes.
Chris Hayes: Yes.
Karen Hao: Yes. Because if you are a worker going to the negotiating table and the CEO on the other side —
Chris Hayes: Yes.
Karen Hao: — and you on this side, both believe that A.I. could replace you, even if it can’t actually effectively do so.
Chris Hayes: Interesting.
Karen Hao: You have no more bargaining power. And you’re going to get fired and replaced.
And we’re seeing that already, literally, with companies announcing, I mean, Microsoft just announced that they want to make 50 percent of their codebase A.I.-generated. And then they laid off 6,000 of their workers.
You know, they’re going to do all the rhetorical maneuvering possible to say those are not correlated, but, of course, they are. And there are other companies that have been much more explicit, saying, we are now entering an A.I. era. We need to shrink headcount. We need every single team to start increasing their output by using A.I. tools. And they’re doing layoffs at the same time.
And there was recently a story where one of these companies fired all the workers and then went, whoops, A.I., it turns out, is not that good. Can you guys all come back?
Chris Hayes: Okay. But see, this is where actually I do think that it’s not all just storytelling, right? Like in the end, it is going to matter whether it can do the job or not. Don’t you think?
I mean, like, if you’re a law firm and you’re the one that gets rid of all your first-year associates, and then you start producing briefs, like, there was famously a brief circulating recently that had a bunch of citations to cases that don’t exist.
Karen Hao: Yeah.
Chris Hayes: Like, that’s a big risk. You know, someone’s paying you a thousand or two thousand dollars an hour to produce work. They’re not going to be happy if you do that.
Karen Hao: If it’s sloppy, right. So in the labor case, I think there is possibly the ability to reverse. Like, if you make too early of a call and you try to replace all your workers with A.I., and then it doesn’t work out, you hire back the workers.
But there are lots of other things that this hype around A.I. is driving forward that are not reversible.
So, for example, the number of data centers and supercomputers that are being built all around the world to power these astronomically large models: once the brick is laid, you can’t just suddenly go, you know, oops, we didn’t actually need that, let’s just delete the data center.
It’s already there. And it’s already changed the way that a country or a town has designed their utility grid. It’s changed the positioning of power plants and how they’re distributed around the world. It’s changed people’s access to fresh water, because these data centers actually need to be cooled with fresh water resources.
So that to me is actually the most concerning impact that we’re seeing with the current Silicon Valley quest for AGI: it’s actively terraforming the earth, reforming political alliances, reforming economic structures in ways that will be extremely hard to reverse, even when the bubble pops, if the bubble pops.
Chris Hayes: It sounds like you think it will.
Karen Hao: I think the most likely scenario is that the bubble will pop, but I do not want to underestimate how good Silicon Valley is at continuing to perpetuate the bubble with even more intense storytelling. And there is, I think, a very, very narrow path in which the bubble won’t pop, because of the rhetorical footwork that is done. But that specific path, I think, will be very dangerous.
Chris Hayes: Karen Hao is the author of “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI”. It’s a great book. It is out now. I learned a lot from it, and that was really, really, really enlightening. Thank you so much.
(MUSIC PLAYING)
Karen Hao: Thank you so much, Chris.
(MUSIC PLAYING)
Chris Hayes: Be sure to check out full WITHpod episodes on YouTube by going to msnbc.com/withpod. You can e-mail us at withpod@gmail.com. You can get in touch with us using the hashtag #withpod. You can follow us on TikTok by searching for WITHpod. You can follow me on Threads, on what used to be called Twitter, and on Bluesky with the username chrislhayes. New episodes come out every Tuesday.
“Why Is This Happening” is presented by MSNBC, produced by Doni Holloway and Brendan O’Melia, engineered by Bob Mallory, featuring music by Eddie Cooper. Aisha Turner is the Executive producer of MSNBC Audio.
You can see more of our work including links to things we mentioned here by going to msnbc.com/whyisthishappening.