In an exclusive interview with The Verge, Sundar Pichai spoke after Google I/O 2023 about generative AI and the challenges the company faces in Search. Here is the full transcript of the interview.
Sundar Pichai, you are the CEO of Alphabet and Google. Welcome to Decoder.
Nilay, it’s a pleasure to be here.
I’m very excited to talk to you. There’s a lot to talk about. I have some big Decoder structure questions because you made some big structural changes; it’s real Decoder bait. But I want to start with the news. Yesterday was Google I/O. You gave the keynote. You announced, I would say, generative AI features in every Google product that I can think of. What’s your favorite?
It’s got to be the new Search Generative Experience we are working on bringing to Labs. It’s our most used product, our most important product. So the chance to make the product better through an evolution like that was one of the more exciting product challenges. I think the team has risen to the challenge, so I’m definitely very excited about it.
I think that’s very exciting. I think you know I want to talk to you about Search a lot. There are two demos that caught my eye. One, you asked for a refund from an airline in Gmail with the Compose feature, and Gmail just wrote the email for you. And then, later, Dave Burke wrote an email to Rick Osterloh saying Rick had done a good job. How would you feel if one of your employees wrote a suck-up email to you using generative AI?
It’s a question I’ve been reflecting on, particularly in a personal context. Do you end up… I think there’ll be a societal norm, which will evolve over time. People will decide where it’s appropriate versus not. The last thing you want is an AI-generated email getting responded to by AI. I think that’s fine for the airline voucher case. It’s definitely not fine in a personal case. Though, I’ve had friends who have said there are moments where they aren’t the best at writing those types of emails and they could use some help. But I think, over time as a society, we will figure out where the right norms are.
Do you think, even in that airline case, there’s an element of it that’s programmatic, where, if you say the right words to the airline customer service agent, you might get a refund, and the AI might know those right words, and that airline might say, “Look, we’re going to have an AI just scanning emails for these correct words and giving refunds,” and that might actually be the loop?
I’m worried — maybe airlines are already using AI to look at your email, so maybe it gives the humans a chance to get through. There are times it’s okay to view it as there is efficient brokering in those cases so that two people can efficiently complete a transaction. And I think that’s fine. But it depends on what the use case is for. But you’re right, I think there’ll be cases in which people will figure out there is a better, efficient way to handle this back-and-forth, and maybe that’s okay.
Well, you’re on the bleeding edge of this here, so I’m wondering, as you see those norms changing, have they changed in your work here at Google? Or are you saying, “We’re going to give it to a lot of people and see what happens?”
I went through this with Smart Reply and Smart Compose. At first, I would feel weird using it. Later, it included emojis in those suggestions. So you would see the emoji, and I’m like… But over time, I think I’m better now at only using an emoji which I genuinely feel. So I think people adapt to these things better than we think, too. I think humans very quickly learn how to use these technologies, too. So I think that makes it different. But I think we’ll go through a similar journey like that with this.
I want to zoom out. You said something in the keynote that really caught my attention. You said that AI is a platform shift, and I think I agree with you. But it struck me that it’s important to understand exactly what you mean by a platform shift. Why do you think AI is a platform shift, and what does that mean to you?
Well, definitely I see it as an extraordinary platform shift. Pretty much, it’ll touch everything: every sector, every industry, every aspect of our lives. So one way to think about it is no different from how we have thought about maybe the personal computing shift, the internet shift, the mobile shift. So along that dimension, I think it’s a big shift.
But I think it’s deeper than that. In some ways, I’ve called it the most profound technology humanity is working on. I think it’ll tap into the essence of… Even to your starting questions, there’s a reason you asked about that. I think that shows the nature of what AI is. And so, I think it’ll touch everything we do.
So I view it as one of those deeper shifts that way. It’s tough to find the right words, but I view it as… Even as an industry, like a very traditional industry, you wouldn’t say the internet affected healthcare a lot. Did it affect healthcare? I’m not fully sure. But with AI, I’m like, it’s going to affect healthcare a lot over time. Along those terms, I think there’s a deeper meaning to the word “platform shift” here, as well.
On the smaller definition, there weren’t personal computers and then there were; there wasn’t the internet, and then there was.
That’s right.
There wasn’t mobile; there wasn’t cloud. And now, mobile and cloud, in particular, just changed the way we behave across every dimension you can think of. That’s what I think of a platform shift as. Right? It’s very narrow… It’s much more parochial than yours.
It’s still very big. It’s still very big.
I think we can get into philosophy.
That’s right.
We can get into the philosophy of: can computers communicate with us? And we will. But just on that level, okay, it’s a platform shift. A lot of people are going to change their behavior. That’s usually when companies emerge, and it is when institutions tend to fade. Google is a company that emerged, particularly with the internet, and with the shift to mobile, I think, became a dominant player. Do you see that risk for Google inside of this platform shift?
I felt the risk more with mobile. Here’s why. We developed Android, and I think we had to adapt to mobile as a company. We were built on the internet. We weren’t, by any stretch of imagination at the time, what I would call a mobile native company. So mobile was something which came, and we had to adapt to it hard across our products. And it was a disruptive moment. People were using applications now directly. You could install apps on your phone and so on. So there were a lot of questions. With AI, I feel like this is our seventh year as an AI-first company. I feel we are AI native. Part of the reason yesterday you saw all this… Pretty much most teams at Google intuitively understand what it is to use AI in our products.
Every year, I remember… Also, we’ve driven the state of the art. In some ways, we’re helping drive this platform shift. So I feel we are AI native. We deeply understand what it is to both drive the state of the technology and incorporate it into our products… All these shifts are disruptive, but I look at the scale and size of the opportunity ahead with AI, and I feel like we have worked and invested so deeply in AI for a while, and we have clarity of not just building AI into our products but also providing it to the rest of the world. And we have planned for that from the very beginning. So it makes me excited about this moment.
You said it’s been seven years of being an AI-first company. I’ve seen you demo LLMs in the past. I’ve seen other generative AI tech at I/O in the past. You’ve been talking about it for a long time.
In 2015, one of the biggest debates we had coming into Google I/O was I wanted to show… We were launching Photos. I wanted to show that these were powered by deep neural networks. There was such a debate coming into the keynote because we were taking a frog and showing how the network would figure out it’s a frog, and people were scared. They were like, “Why are you showing the legs of a frog?” It first understands by breaking a frog into its component parts. But I felt it was important to explain to the world that there’s a shift called deep neural networks, which was going to change everything. Anyway, it made me reflect on that. We’ve been talking about this for a long time.
I remember very clearly, you once had a conversation with Pluto.
That’s right.
And no one could quite figure out why you were talking to Pluto. And you skip ahead to now, and it’s like, “Oh, that was the technology, and that was the demo.”
We had built LaMDA because it wasn’t an accident we were building a conversational dialogue based on AI. Because we had built Google Assistant, and we realized the limitations of our approach. At the end of the day, we had this vision for where it could go, but it was a handcrafted system. So we knew we would need a deeper AI approach. And so yes, by talking to Pluto, we were effectively conversing with LaMDA internally, but from a safety standpoint, we had restricted it to be Pluto.
So here’s the criticism. The platform shift that is occurring, that everyone can see, was not kicked off by Google. It was kicked off by OpenAI and ChatGPT and Microsoft, to some extent. And it’s because you were being responsible; you were being cautious. I think maybe the kickoff, this platform shift, was an accident. I don’t think that OpenAI was gunning for a moment like this. What made it so that Google is reactive to this moment instead of proactive in kicking off the platform shift?
I would argue some of what drove the platform shift was the work we did in Transformers. So a lot of the underlying technology, too. What I think changed the point of inflection is how ready users were. It’s almost like that moment where you realize… Because these technologies have pitfalls, they have gaps, but you realize you’re at a moment in time where people are ready to use it. They understand it, and they’re adapting to it. So that’s the moment, and we realized it, and we started working on it. I just think we took some time to get it right. And for us, that was important. I think it was important given our products are used by so many people, and in important moments, I thought it was important to get it right. So, to me, it was just that. Let’s say you go back all the way to the internet. Google wasn’t even there when the internet shift happened. So I think there is this notion that one of the deepest platform shifts on day one is what sets it. I just don’t subscribe to that.
Do you think the level of hallucination or error that you see in something like ChatGPT, is that just unacceptable for you, as sort of the head of all product at Google?
We have to figure out how to use it in the correct context, right? For example, if you come to Search, and you’re typing in Tylenol dosage for a three-year-old, it’s not okay to hallucinate in that context. Whereas if you’re just coming and saying, help me write a poem on some topic, it’s okay if you get it wrong. All I mean about getting it right is getting those details. And we’ve made progress on the hallucination problem in the context of Search by grounding it, corroborating what we do there with our ranking work. It just takes time. Things like that is what I meant. And it’s a research problem. We will all make progress on hallucination. I’m not saying it’s not usable; it’s just that we had to take the time to get it right.
But what I would say here is OpenAI is very much the disruptor here. They have a product that isn’t quite as reliable as Google Search in answering questions, but on some set of queries, it’s better. It’s more interesting to use. It’s a different paradigm. The users were ready for it, but then, it gets things wrong left and right. One of my favorite examples here: people are walking into libraries asking to check out books that don’t exist because they’ve asked for a list of books. That would not be acceptable, I think, as a result in Google Search.
I looked for some products in Bard, and it offered me a place to go buy them, a URL, and it doesn’t exist at all. Right? And so, all these models have the same underlying problem. There are plenty of use cases which we all get excited by. So I think both can be simultaneously true.
But do you see that sort of classic disruption curve? This is a bad example, and I’m just going to use it, but forgive me for it. Google Search is the mainframe, and AI is the PC. This is a classic disruption example. It doesn’t do everything that the big computer can do, but it’s cheaper, more accessible, maybe the results are more useful in certain contexts, but it’s also worse on a host of other variables.
No, I don’t see it that way because Google Search is evolving with what you’re seeing. Google Search [wasn’t] always going to be where it was. For many years, we didn’t evolve beyond the 10 blue links, too. And people would ask us, “Why are you doing it?” We always would say, “This is what users are looking for.” The debate is sometimes users want answers. So we are always trying to get it right for users. This is a moment in which user expectation is shifting. We’re going to adapt to it. We’re also doing Bard, and we are now making Bard more widely available, and that gives us the sandbox where, in an unconstrained way, we push the frontiers of what’s possible, too. So between Search, the new Search Generative Experience, Bard — this feels so far from a zero-sum game to me. And that’s how we see it today. People are using Search, trying out new things, which is why I’m excited to push out this new experience, too, because I think people will respond to it.
So a few months ago, I was at the launch of Bing, powered by ChatGPT. I saw Satya Nadella there. And I’m sure you know this, but he said, “I have a lot of respect for Sundar and his team, but I want Google to dance.” And then he said, “I want people to know that Microsoft made them dance.” One, I just want to know how you felt when you heard him say that. And two, do you think you danced? Are you dancing?
Look, I’ve said I have a lot of respect for Satya, and the team as well, and I think he partly said that so that he would ask me this question.
I’m pretty sure that happened.
For me, maybe I’ll say it this way. We started working on this new Search Generative Experience last year. To me, it’s important in these moments to separate the signal from the noise. For me, the signal here is there is a new way to make search better and a way we can make our user experience better, but we had to get it right. And to me, that’s the North Star. That’s the signal. The rest is noise to me. So to me, it was just important to work and get it right, and that’s what we’ve been focused on.
So let’s talk about Search as it is now, and Search where you want it to go: the Search Generative Experience. The search business is incredibly lucrative. The European Union has spent two decades trying to introduce competition in search. There’s a browser ballot or a search ballot on Android. Google is still dominant, but it has decayed over time. Have you ever done a search and ended up on some horrible SEO content farm? Does this happen to you?
Yes, but it’s happened to me over 20 years. So, to me, it’s like Gmail and spam. Search has always been finding high-quality content from others. So there are moments where we feel like, okay, there’s a direction in which we are not getting it right or falling behind, but then we work hard to fix it. Search has been that way.
Do you have an entire team–
We quantitatively measure these things, though, right? So our work on search quality is about… Internally, we work hard to quantitatively measure user satisfaction with Search. How are users finding Search? And we are seeing that over time. So in some ways, when we did the BERT work, the MUM work, all that led to some of the biggest quality improvements we saw in Search in a long time. So you are right. I’m not saying… I’ve run into content farms, and there are times users have said, “Look, I want more unique voices and perspectives.” And we’ve been working on how to get that right. That’s part of how we will evolve Search for certain use cases as well. But I feel like today in Search… You have a good example. I mean, you did a great redesign of The Verge. I think it’s about a year since you guys did it.
Closing in on that. Yeah.
You’re not designing it with any view of what Google Search wants you to do.
That’s in there. Our designers care about SEO.
But in a good way. But I think I see the StoryStream, and the most popular feed, I use it. I go there to see what’s important. And I think The Verge has done well through it, right? I think it’s still very possible to do great work. I also think the information ecosystem is so large. I think people constantly underestimate it. When I look at the world of Facebook, Instagram, YouTube, TikTok, destinations like yours, The New York Times, The Wall Street Journal, The Washington Post, in terms of news, I look at the sports destinations I go to. I think it’s way richer than people fully estimate. But this is not to say there’s not always hard work to get it right.
But what I’m saying is you’re only as good as the web.
Always.
At the end of the day, Google Search can only really show you what’s on the web.
As good as the richness of the web. Right.
Yes. But if you’re a new creator, and you just want to communicate with some audience, it is far more likely that you will end up on TikTok, Substack, or Instagram, maybe YouTube, which you have access to. But those platforms, they’re not so visible to the average Google Search user. So the new stuff, the high-quality stuff, the more interesting stuff, maybe, is ending up on platforms that Google Search can’t see. And the web is being pushed toward the incentives of Google Search. When was the last time you tried to get a new credit card?
It’s been a while.
Just that experience is a totally optimized experience. It’s almost not human-readable anymore, in a particular way. And I’m just wondering, do you see Search Generative Experience as your opportunity to change those incentives? Do you see that as creating better incentives to create for the web again?
It’s going to be a fact of life. I think mobile has come, video is here to stay, and so, there’s going to be many different types of content. The web is not at the center of everything like it once was. And I think that’s been true for a while. Having said that, it’s ironic that all the recent launches of these products are all like Bard, ChatGPT. They’re all web-based products. And that’s how–
I can do 30 minutes on mobile app stores and why the innovation’s on the web. But I don’t think I have enough time.
And I worked on Chrome. I’ve cared about the web for a long time, but I think the web belongs to no one. So there is inherent value in that. And there are aspects of the web which are stronger than most people realize. But I won’t underestimate with AI. As AI becomes multimodal, this distinction we feel between text and images and video blurs over time. So today, we feel those walls. At Google, we’ve always tried to bridge these things. We did universal search. We try to bring all these forms together. With AI, I think, so what? Maybe a young content creator creates it in the form of a video, but down the line, maybe there are ways by which you can consume it in the context of Google. Obviously, all the details have to be figured out, and there are business models–
But these platforms have to let you in, right? You can’t search against Instagram.
But we can against YouTube.
Yeah.
That’s true. As an example. And maybe there are other platforms. We have to create the incentive for them to put it up there, and then, that’s on us to do. I look at it as: is user need for information going up or down?
There are more sources of information than ever before. Just somehow, I feel more optimistic over time because these same questions were very deep a few years ago. I remember people asking me about it. But I come all the way to today, and I think, if anything, I’m using a lot more of the web still. I go to The Verge website all the time.
We’re believers in the web. We’re the last ones.
Yeah, I understand. But my usage still tells me… I go to websites directly every day where each day they’re trying over and over and over again to get me to download the mobile app. They’re getting me to agree to some cookies and all that stuff, but the web has worked its way through all that. I hope it gets better, but I’m optimistic.
As you answer more and more of the questions in the Search Generative Experience, I think you gave an example of essentially an automated buying guide, right? I think it was for a bicycle. And then you asked a follow-up question: “I want it in red,” and it showed you the right colors. Do you think that you’re going to send out as much traffic from the search engine as you have in the past, or is it going to reduce it?
It was a big part of our design goal, when I talk about getting it right. I think people come to Google with many different intents. There are times you just want an answer. I’m going to go to New York tomorrow. I want to know whether it’s raining. You want the answer. But there are many times, particularly to Google, people come to explore, to discover. And I think that’s true. People want to read reviews.
So our Search Generative Experience, in fact, we really didn’t want it to just be a place where you come and talk to an LLM. That’s why we did Bard separately at first. And in the Search Generative Experience, you will see a lot of links. You can click expand. We go through and show, for each thing the LLM has generated, what the supporting sources are.
So one of our design goals is making sure people come and experience the richness of the web because I think it’s important for us to create that win-win construct. That’s something we have put a lot of thought into, so I’m optimistic we’ll get it right there.
I want to ask some Decoder questions. There’s a big one here. I joke that it’s a show about org charts. You are the CEO of both Alphabet and Google. You made a big decision about your org chart. You had a company called DeepMind that was part of Alphabet. You pulled it into Google. You combined it with Google Brain, which is the AI part of Google. You picked a new leadership. Walk me through that decision in the context of, “I wanted to make these products, and I needed to change my org chart to get there.”
For a while, obviously, I felt fortunate we had, arguably, two of the top three, maybe, research teams in the world. I mentioned this at I/O. I think if you go back and look at the… I didn’t even put up all the list of the things they had done. If you look at the 10 to 20 seminal breakthroughs, which led to this moment where we are, those two teams together account for a large number of them.
But it was clear to us that, as we started going through this journey to build more capable models, one of the things that was holding us back was the computational resources that we would need. And so, we would need to pull them together. In some ways, I think the good thing is the teams themselves came to the realization. So Gemini predates the combining of the two teams. So they started working together jointly on Gemini. And that was a great experience because seeing… It’s almost like being able to pull together two great teams and seeing that and how well it’s working. And I think conversations with Demis [Hassabis] and Jeff [Dean] naturally led to that moment.
So I think it was a good time to do it, as we are also pivoting more from research to commercial production-ready models at scale and also needing to do it safely and responsibly, which means you have to dedicate a lot of resources to testing and safety. The combination of all of that made it the right moment, I think. So that’s what led to this moment.
So that’s the strategy side, right?
Yeah.
You’ve got two teams. They have redundancies in their resource needs and their infrastructure needs. But then, there’s you actually deciding, “Okay, I’ve got two leaders on two teams. I’m going to pick one. The teams have different cultures. I want this culture. I would like to get more of this output and less of this redundancy.” How did you make those decisions?
You’re right, it’s always about… I think the most important thing is having clarity about what you’re trying to accomplish. And once you do that, everything else follows from that. In this case, Jeff clearly had expressed a desire for a while to be more of a chief scientist. Jeff, before, he literally has built some of the most important systems we use at Google today. He’s, at least to me, without a doubt, the best engineer Google has ever had… and his desire [was] to spend more time doing that.
And Demis is an extraordinary leader of teams. He has been working on building capable AI systems from the first day I met him. This is what he has been wanting to do. He’s super excited. So the combination of knowing the people you have, what makes sense, all falling from the first principle of what you’re trying to accomplish, is what leads to the other decisions. So in some ways, it was a clearer set of next steps from there.
Did you make a phone call? Did you have a meeting? Did you have Gmail write a note for you?
No, no. We had a lot of good meetings. James Manyika played an important role because, in the context of Gemini, we were bringing these teams together anyway. And Jeff naturally was spending time doing a lot of the engineering work. And so, it all made sense. And just a few conversations led to the right outcome.
I would say bringing the two teams together is indicative of a larger change in Google and the tech industry at large, which is getting smaller, more efficient, less redundancy. You and I have told many jokes about Google’s six messaging apps in the past. Are you focused on tightening up, on more focused execution here?
Yes. And the one thing I would say, look, I do think it’s one of our strengths. It’s not an accident we have 15 products with the scale we have or six products over 2 billion users each. These are products for which we have committed for a long time. But clearly, I think we all are trying to do more with constraints now. And areas where you can be more nimble, we have been very focused on that. We’ve always done things. At one point, we had YouTube Music and Google Play Music, so I had to combine the two teams and said, “No, you’re going to be one music team.” There are always moments like that…
But that’s the default for Google, right, is you have multiple shots and then you combine them in the end?
So, in some cases, but you think about Search or you think about Maps or you think about Photos or you think about Gmail, you think about Workspace, you think about the focus we have had on cloud, you think about the fact we bought YouTube in 2006 and how we have executed since then to make YouTube what it is today. I think we have some high-profile areas like messaging, but even there, if you look at our last few years, the focus on both Google Meet and Chat and the platform side of RCS, the fact we have relentlessly focused on it, starting from zero. And I’m confident… I mean, we announced RCS is now over 800 million users.
There was a big applause line yesterday.
Yeah.
I think Dieter [Bohn] was the loudest cheer in the room.
I heard him. I literally think I could pick out Dieter’s applause from the others. I think we are committed to being deeply focused. I mean, even AI is an example of that. We’ve been focused on AI for over a decade. And in the case of AI, it was a deliberate decision to make because of how important it was. We were fine with having that exploration that came from two different teams because those teams had different strengths. DeepMind were early believers in reinforcement learning in a way Google wasn’t. So, to me, that diversity was important, too. But there are moments where you say, “It’s time to approach it a bit differently.” But I think these are decisions you need to make.
I have a couple more, and then I have a wrap-up question, and then we’re going to keep going for another hour.
[laughing]
There’s another challenge for Google inside of all this, right? If you believe it’s a platform shift, this might be the first platform shift that regulators understand because it’s very obvious what kind of labour will be displaced. Lawyers, mostly, is what I gather. They can see, okay, a bunch of white-collar labour will go away, like a C-plus email about a transaction, entire floors of those people can be reduced. And they seem very focused on that risk. And then there’s the general AI risk that we all talk about.
When Google first did Search, it was an underdog, right? And it won a lot of court cases along the way that built the internet: the Google Books case, the image search case with Perfect 10, the Viacom case with YouTube. It was an underdog, but it was obviously delivering a ton of value. Now, you’re at the White House having an AI summit. I’m confident you’re going to end up in government capitals around the world talking about AI. Do you think you’re in a different position now than that scrappy underdog inventing the internet? You’re the incumbent. Are you playing a different role?
Two parts to the question. On the first part, briefly, for 20 years of tech automation, people have predicted all kinds of jobs would go away. Movie theaters were supposed to end and–
They kind of did.
Uh-huh. But movies are thriving more than ever before and–
There’s a writers strike, right? I mean, the labor cost paid to writers has dropped so precipitously, they’re on strike right now.
No, but there have been writer strikes before, and those things will continue, right?
Yeah.
There’s always going to be…
Unemployment over the last 20 years of tech automation hasn’t fully… Twenty years ago, when people predicted exactly what tech automation would do, there were very specific pronouncements of entire job categories which would go away. That hasn’t fully played out. So I think there’s a chance that AI may actually… Because I think the legal profession is a lot more than… There’s a chance you know more about being a lawyer. Which is why I can’t opine on it because I don’t know a lot about it. But something tells me more people may become lawyers because the underlying reasons why law exists and legal systems exist aren’t going to go away because those are humanity’s problems. And so, AI will make the profession better in certain ways, might have some unintended consequences, but I’m willing to almost bet 10 years from now, maybe there are more lawyers, I don’t know.
So it’s not exactly clear to me how all this plays out. I think too often we think… there are new professions constantly getting created. I don’t mean too lightly… I do think there are big societal labor market disruptions that will happen. Governments need to be involved. There needs to be adaptations. Skilling is going to be important. But I think we shouldn’t underestimate the beneficial side of some of these things, too. And it’s complicated, is maybe how I would say it.
On your second question, I think governments and legal systems will always have to grapple with the same set of problems. There’s a new technology. It has a chance to bring unprecedented benefits. It has downsides. I think you are right. With AI, people are more trying to think ahead than ever before, which gives me comfort because of some of the potential downsides to this technology. I think we need to think about it. We need to anticipate as early as we can.
But I do think the answers for each of these are not always obvious to me. It’s not clear to me that holding back AI in a straightforward way is the right answer. It has geopolitical implications. So it’s, again, a complex thing we will grapple with over time. I think, from our standpoint, we are a bigger company, so I do think we will come to it in a more responsible way. There are places where we will engage and try to find what the right answers are. And so maybe our approach will be different for sure, I think, as we go through it.
One of the things I think about a lot is that set of cases I talked about: Google Books or Viacom on YouTube… You were distributing more information than ever before. And there were a bunch of media companies who said, “No, that’s ours. You can’t have any of it.” And you had to go fight it out, and access to the information was so valuable to people that Google was able to win. This is a different turn, right? Publishers around the world, media people, Hollywood artists, Drake, are saying, “Hey, that’s mine. And you took it and you trained an AI on it,” and now there’s fake Drake singing songs on YouTube, and they’re going to try to stop you, right? There are already copyright lawsuits. Do you think that, as the incumbent, you have a bigger responsibility to that conversation than some of the startups who might be running that original Google playbook of saying, “We’re going to ask for forgiveness and not permission”?
I do think we have a bigger responsibility. One of the things YouTube has done well, I think, is the work on Content ID, and it’s built a deep framework by which content rights holders… The system works. So I think our responsibility there is making sure that this new wave continues to help artists and the music industry. It’s something we would think about deeply as we go through this.
Do you think that you’ll have to share revenues with publishers and musicians? Because this is the thing they’re worried about the most.
In the case of YouTube, we already, clearly, directly do, but I also think it’s important in these moments. We aren’t the only player. These are big disruptions coming. Our goal will be to partner with the music industry and help them. So that means maybe giving artists choice and control over transformative works, giving them a say in it, and figuring out those right answers.
Do you think you have to get ahead of the law? That’s the question here: you’re saying, “There are a lot of players, and maybe government should make it so Drake can get paid when AI Drake sings a song.” Or do you think you have to get ahead of the law to be a good partner?
We all have to meet where users are going and where trends are evolving. So when we say bold and responsible, I mean it. You will see us be bold in some of these cases, but underlying all that is a responsible direction. We want to get it right.
I’m being told we have to wrap up. So I want to ask the biggest picture question of all. Google is 25, you took over from the founders, you turned it into Alphabet. You’ve had a very intense and very successful run at Google, especially if you look at just the business. It’s grown immensely. We’ve talked here about big decisions: restructuring, appointing new leaders, moving people around, changing the culture to be more focused, navigating regulators, competitors asking you to dance. All of that requires a very particular kind of ambition and focus. So, just for you personally, what is that ambition? What is driving you personally to take the company through this moment?
It’s really about having clarity from first principles. I believe in our mission. For me, getting access to technology made a big difference in my life. So the driving force for me has always been about bringing information and computing to more people to benefit society. And out of that comes the clarity for all the stuff I need to do. So, working from first principles in some ways, it becomes simpler.
But it’s an exciting time. Look, I’ve been preparing for this moment around AI for 10-plus years. It’s not an accident at Google [that] we brought Geoff Hinton in, or we built Google Brain, or we acquired DeepMind. We made the investments that were needed. We built TPUs. We announced TPUs at I/O maybe six years ago now. This is something I’ve anticipated for a long time. I’m excited that it’s an inflection point. But to your earlier question, because we have been doing this for 25 years, we know how important it is to be responsible from day one, which is why at Google I/O you heard about our early work on watermarking and metadata in images.
I could have done an hour on metadata. You should be very happy that I didn’t.
Some other time. So both parts are important, but it’s an exciting time.
Well Sundar, thank you so much for being on Decoder. I look forward to talking again soon.
Thanks, Nilay. Appreciate it.