Episode 2

41 mins, 53 seconds May 7th, 2024

AI-Powered Networks

From LinkedIn and Meta to Airbnb and TikTok, there are various components that must come together for a network to actually work. Hoffman shares his thoughts on AI’s potential impact on the next generation of networks, comparing and contrasting with his experience building iconic consumer networks of previous tech eras.

Reid Hoffman

Partner, Greylock

Seth Rosenberg:

Welcome to the second annual AI dinner with Reid. Reid doesn't come to New York often, so thank you for being here.

Reid Hoffman:

Pleasure.

Seth Rosenberg:

So I think I know all of you in this room, but thank you for coming. As you know, I'm Seth. I'm a general partner at Greylock and it's so exciting – I'm also based in New York – and it's exciting to see just the caliber of technologists and builders in New York just continue to get better every year.

A few months ago I wrote this article called Product-Led AI, which is kind of this call to action of what are some really unique business models and experiences that you can build that are AI first companies.

The genesis for the discussion that Reid and I are going to have today is what I find a little bit annoying about the discourse in AI today, which is everyone's focus on Nvidia and Anthropic and OpenAI and all of the large models and the GPUs and the underlying enabling infrastructure.

But when it comes to actual products, you hear two pieces of skepticism. One is that all startups are just wrappers on top of ChatGPT and don't have any real defensibility. And then the second thing you hear is that incumbents are actually best positioned to win by just plugging into OpenAI and adding AI to their product.

I think both Reid and I fundamentally disagree with this premise, and as everyone here knows, Reid is kind of the perfect person to have this conversation with. Not only is he a leader in AI – as a former board member of OpenAI, current board member of Microsoft, co-founder of Inflection AI, general partner at Greylock, and investor in self-driving cars – but in Reid's previous life, he was also a co-founder of or investor in some of the most important consumer technology platforms in the world, whether as co-founder of LinkedIn or an early investor in Airbnb and Facebook.

Given that background, broadly, how do you think about the product and application layer of AI? How is that distinct from the underlying models, and what use cases are you excited about?

Reid Hoffman:

It doesn't absolutely have to be the [case that] it's just a thin layer on a model – where you could potentially upgrade or trade out the model on the backend as models improve – but you've got to establish some kind of product ecosystem, some kind of network ecosystem, or something else. Generally speaking, it's easier to do that if your product has some substantive, quality software in addition to the model.

That's one of the reasons you and I did Tome and are both on the Tome board: it's changing the notion of communication and enabling a set of communications – for example, communication for salespeople, because that's a hugely valuable category. It's part of the reason you see a lot of different startups here – not just the super large ones like Salesforce, but a bunch of others – changing that nature of communication. Because it isn't just, 'Oh, look, here is GPT-4,' or 'here is a front end to a tuned version of Stability AI,' or something like that. It has to help people who are not themselves developers but who are trying to do something real (like salespeople are), and then you have to bring the right sensibilities.

And so with Keith and Henri and the [Tome] team, it's a design sensibility for communication, a stack of other things from a background similar to yours, and an understanding of how networks operate from having been at Meta and been instrumental in some things there. All of those things are important to make that work.

That's a quick sketch, but it's [a question of] what are you doing beyond that? What are you doing to say, 'Okay, great. I have a theory about how I'm using the AI models. I have a theory about how I'm going to continue to succeed as the AI models improve, including the scale offerings from the giants. And I'm building a really good product that can be dominant within a market – whether because of network effects, enterprise connections, unique data, or whatever else it takes to make that all work.'

Seth Rosenberg:

Let's spend a few minutes talking about the potentially new marketplaces or networks that can be unlocked with AI. I think a lot of people talk about areas of defensibility, whether it's data or fine-tuned models or workflow or enterprise integrations, but I think what has been less explored is the types of new networks that can be unlocked with AI.

I think one of the most public or interesting examples is TikTok kind of competing with the entrenched social networks by creating a new network that just connects individuals to an algorithm rather than to each other. The first use case of TikTok is just scrolling through an algorithm that gets better through AI and you don't even need to have any friends on TikTok to waste five hours of your time. But anyway, it's an interesting example of basically unseating or at least competing effectively with previous networks by changing the game with AI. So what are some other kinds of opportunities or threats that entrepreneurs should explore around new networks that can be unlocked with AI?

Reid Hoffman:

Well, it shows you [what is meant by] that broader network definition. And by the way, part of the reason TikTok got started the way it did (probably most folks here don't know how this works in China) is that the Chinese government basically put a hard cap on any network where you had more than 500 connections. So basically you're not allowed to have one. That's what kind of forced it. They couldn't just do YouTube-style following graphs or other kinds of things. They were like, okay, what else can we do?

And obviously, once you look at it in retrospect: what you have is huge volume – essentially a marketplace of suppliers – and a recommendation algorithm that works for everyone, including the cold start. (Cold start is one of the general problems in networks, and it's why the most common thing networks trying to get off the ground do is say, 'Here are other people you can connect with,' in order to make it happen.) But that's an interesting bit of history on TikTok.

I do think the question is: are there interesting ways that AI could define different kinds of networks – enable them, enable the different nodes to spread them and engage? And part of the reason I'm not specifically going out looking for, say, an AI-driven sports network is that the actual thing that's really fun about Seth's and my job is the founder who brings you something where you go, 'I never thought about that. That's contrarian. That could work.'

I'll give Airbnb as a previous-generation example. The first person who talked to me about Airbnb mis-pitched it, and that delayed my meeting with Brian by a year, because the person who pitched it to me said, "It's couch surfing." And I went, "Couch surfing? Not particularly valuable. Even if you succeeded and got the whole thing, it's low ACV, very risky, strange. If it's just couch surfing, I'm not interested."

A minute into the discussion with Brian, it was like, "Oh, this is awesome." Because it's a marketplace of space, and that space ranges from a couch (sure, fine) to a castle – the whole range of it. And you're like, "Oh, that's very interesting." I hadn't really thought about that before. I hadn't thought about that possible network configuration.

The challenges in the early days were obvious: would cities hate it? Hotels would certainly hate it, and they have unions. Would you have weird behavior by hosts or by travelers? All these things you had to work through. But part of how you look at these things – and this is part of how much risk you're willing to take on these new network definitions – is: the risk might be very high, but if the risk is very high, then the payoff had better be spectacular. That's the trade-off you're making.

And by the way, as founders, you're doing the same thing as investors, with a somewhat different calculus: as opposed to Seth and me doing seven to 10 investments, you're doing one. So you have to make that bet on something solid, and the thinking is parallel: "Is this the way the world should be? Is there a way for networks to create it?"

And now when you're thinking about AI, you're thinking, "Is there something I can do here?" And if 20 other people are thinking the same thing, then you'd better have something interesting and unique in what you're doing.

Seth Rosenberg:

Whether it's LinkedIn, Airbnb, or Meta – upon reflection, what are some learnings or principles that made these companies win?

Reid Hoffman:

By the way, one thing that makes me definitely feel old (and you're part of the up-and-coming young Turk generation on this) is that there was PayPal even before then. People may still remember that company. I was part of the founding team of that as well.

Seth Rosenberg:

Your resume is getting too long.

Reid Hoffman:

Yeah, no, exactly. Part of what I love about some of the thoughts you had in Product-Led AI, among other things, is that networks are still super important. AI does enable next generations of networks across the areas you've talked about, and I think [AI enables things] in other areas as well.

Building networks is hard. Whereas the usual batting average given to startups is something like 10%, I think for networks it's more like 1% – for anything that's genuinely and truly a network, a little bit like the early days of LinkedIn. And one of the things people frequently say is, "Oh, all the comms networks are done." They were saying that before Snap was built, et cetera. In fact, every five to seven years some change in technology platform opens a zone of opportunity to launch something. AI is a change of technology platform that enables a new thing to happen.

Now, to your frustration, some startups do go, "I put a very thin layer on top of ChatGPT and I'm going to have a valuable product" – and it may even be a valuable product – but the question is how enduring and defensible it is. If it can actually build a network effect behind what it's doing, that's one enduring competitive advantage, though not the only one. Sometimes enterprise integration can be one, or a lock on a key resource – for example, in a marketplace, if you're the real default place the sellers go anyway. There's a set of different things that can all go into that. But the grounding is: what's your theory of the compounding loops of growth and engagement in your network, and how do you get it off the ground?

Part of the reason Airbnb was something I so much wanted to do – and to give full color to that story, something I think is fun to hear – involves David Sze: investor in LinkedIn and its most valuable board member; investor in Facebook, Discord, Roblox, et cetera; a reasonable track record; and the reason I'm at Greylock. He looked at me across the table and said, "Every VC has to have a deal they're going to fail on. Airbnb can be yours." And you think, "Oh my god – amazing consumer investor, storied Greylock partner, and meanwhile I'm new to this VC thing. I should stop."

By the way, he's super smart and always learning – [which is a] key thing for founders, key thing for investors. He came to me six months in – the numbers hadn't changed – and said, "You were totally right, I was wrong. What did you see and why did you do it?"

Even though I was such a doofus at the partner meeting, what I told him was this: "Look, the way you said it to me actually increased my conviction that it was worth doing. Because contrarian – and right – is not foolish. It's smart precisely because people think it's wrong. And if we're right, it's going to be very big, because a whole bunch of other smart people are also going to think it's wrong, and you have a chance to go build something before the competition, since you're navigating an obstacle other people think is insurmountable, et cetera, et cetera."

And I think the same thing will be true in thinking about how AI grows networks, which is some interesting thought that most other people think is foolish or incorrect that actually works here and builds something really amazing.

Seth Rosenberg:

Just to follow up on getting these early networks off the ground: I've heard some of your stories about the early days of LinkedIn, and one of the issues was social hierarchy. How do you avoid the situation where all the people desperately looking for jobs go on LinkedIn, but you don't have people like Reid Hoffman on LinkedIn? I think you had some creative ways of getting around that challenge – but first of all, of being aware of it.

Reid Hoffman:

Yes. When we were thinking about how many people had to be on LinkedIn for it to have a valuable product for its users – i.e., a value proposition that a member could see in LinkedIn – it was a million people. You had to have a million registered people before person one had a relatively functional value proposition. That's a challenge.

Similarly, in the case of Airbnb, you need density in marketplaces. So you have to have a really coherent theory about how you're going to build your network: there's a growth compounding loop, there's an engagement compounding loop, and there's the question of whether a leaky bucket will essentially create a zero value proposition in the marketplace. The classic thing is: a supplier shows up, no buyers, supplier leaves; buyers show up, no suppliers, buyers leave – impossible to get anywhere. So you have to get that critical mass going in various ways.

And so some of that involves edgy decisions. In the LinkedIn case, when we started, there were already two players going after individual consumers, and another five or six that thought the professional network was the company; more emerged as we were going. So you have to do that – you have to have a theory of that.

One of the metaphors I sometimes use for this is: Marines take the beach, Army takes the country, police govern the country. Those are three different levels. You have to recognize that your strategy and what you're doing with your network are going to evolve as you go. And then obviously there's a whole bunch of different attributes needed to make that happen, and their availability differs at different times.

So for example, the way we built LinkedIn, with its email invitations and everything else – even if LinkedIn didn't exist today, that wouldn't be a good way to build a network like LinkedIn today, because the environment's different, consumer sensibilities are different, there are mobile phones, there's a bunch of other things. And so you have all of that, which is part of making network strategy on this stuff.

So, as for general founder advice, product-people advice? Ask what's wrong with your product. One of the conversations I found very entertaining in the very early days of LinkedIn: I was talking to Bill Gurley about it (obviously a very smart guy) and he said, "Oh, I would never use something like LinkedIn. That's for college students. I will never use it." I was like, "Okay…" Two years later, I was sitting at a dinner table at a conference and Bill Gurley was asked which sites he opens every morning. If anyone remembers Hitwise – Hitwise was one, and LinkedIn was the other.

And the key thing is to make it attractive so that it spreads, in both cases on the value propositions. And it's usually not a continuous curve. Bill would never have joined until a bunch of entrepreneurs he'd potentially want to talk to were there – other VCs are not attractive to him; he wants competitive differentiation.

Seth Rosenberg:

I'm curious about the prototypes of amazing product-oriented founders you've seen in your experience. Obviously part of your background is philosophy at Oxford – understanding social dynamics – which plays into how you think about product. But what are the different prototypes you've seen?

Reid Hoffman:

The funny thing is there are some commonalities, but it's more a collection of heuristics. For example, Brian Chesky is spectacular at consumer experience design thinking and those sorts of things, but not particularly at compounding network loops. Mark Zuckerberg is spectacular at compounding network loops and not so much at design. I mean, you've worked with him. I learned things from Zuckerberg as he was doing them at Facebook and applied some of them to LinkedIn. So there's a different set of characteristics.

Now what's similar across all of them is this notion of what are the really small number of things that you have to get as maximally right as possible in order to have the network be vibrant and growing? And, in your particular case for your particular product and value proposition now and in the future, what are the things?

So for example, one of the things that I think predated your being at Facebook, if I remember the dates right, was part of being contrarian. Facebook obviously started as a college campus network, and when it opened up to geographies, half the punditry of Silicon Valley was like, "It's going to die now, it's jumped the shark, it's over." That was one of the things Zuck got right, against a lot of public criticism. Another was when he turned on the News Feed – there were literally demonstrations outside the office. You don't often get that at a software startup.

Seth Rosenberg:

Yeah, I remember that. And it was the most successful product launch.

Reid Hoffman:

Yes, exactly. So it's a conviction about what will have those compounding loops within your network.

Seth Rosenberg:

I feel like a lot of what you need to get right in building software and technology is the timing. There are different waves of ideas, and sometimes they work. You've done a good job of going all in on social media in the early 2000s, then some investments in crypto, and now AI. What were some of the early signals that made you confident this was the next wave and this was the time to build in AI?

Reid Hoffman:

One thing maybe folks here know, but most people don't, is that my undergraduate degree was actually in this thing called symbolic systems at Stanford, which is basically a version of cognitive science and artificial intelligence. And I went on to get a master's degree in philosophy at Oxford. So I've always had an interest in AI – partially as a reflection of us (how do we understand how we think, and what is it [AI] that's thinking?), but also in how you create these artifacts. My conclusion at Stanford was that nobody really knew what intelligence was. Okay, I'll go talk to philosophers and figure out what intelligence is. They didn't know either. Okay, I'm going to go build software.

And so when this next generation of AI started happening, I initially didn't pay attention to it, because one of the ways you most often get caught blindsided is when you're too deeply expert in something. You don't catch the new wave because you're like, "Oh, those are the rules, not these ones." So you have to update yourself – whether you're an investor, an entrepreneur, et cetera – to the current time. Now, frequently people say, "Oh no, all those other people were idiots." Well, you should understand why they died and why you won't die. It isn't necessarily, "Oh, it's always a new time, it's always a new platform." It's, "Why did that one die?" The metaverse stuff is a classic.

Seth Rosenberg:

It'll come eventually.

Reid Hoffman:

Yes, exactly. It will come eventually – but why did those ones die, and do you have a good answer for why this one [won't]? Usually the answer is, "Oh, the glasses are better." And you're like, okay, maybe – I don't think it's likely.

And so part of what happened is I went, "Okay, AI is interesting." The human amplification part of it is too often overlooked – part of the reason why I think product-led AI is a very good thread. What I saw that triggered my interest was that some new ideas involving scale compute were emerging. The very first one, from DeepMind, was self-play games. And the reason that mattered is it changed the paradigm from "we program the AI" to "the AI learns" – because if you can actually make AI learning work, then you can put 25,000 H100s against it and build something.

That shift between programming it and learning it – and how you bring scale compute into the learning – plus obviously data and all the rest of the stuff we could go into, is what started making it happen. Even that wasn't sufficient to know there was a there there. So that was when I started getting involved and talking to the heads of various labs, trying to figure out what was going on. I knew the applications would be there – that was part of Chris Urmson and Aurora and other kinds of applications.

But what I didn't know was whether the general AI thing would be there – until, when I was on the board of OpenAI, I tracked the progression from GPT to GPT-2 to GPT-3 and saw what the increase in capabilities was. And I went, "Okay, do I think I'll see another similar jump between three and four? The answer is yes. Okay, this is going to be huge."

Seth Rosenberg:

And that's when you brought Sam Altman into the partner meeting.

Reid Hoffman:

Yeah, I brought Sam into the partner meeting and everything else, and we started making that a major investment focus (now obviously becoming the major investment focus).

And now when I look at GPT-4, I think the same thing relative to GPT-5. And that isn't to say it's only OpenAI. You should always ask when you're talking to someone, "What do you know that I don't know and should be investigating?" One of the assertions made to me last week in Europe, by someone I trust, was that Gemini is much better for certain kinds of fiction and creative work than the others. So I was like, okay, I should go try that. I've always thought about writing a science fiction story – maybe Gemini can help me. Who knows? You should always be looking at that and trying to understand it.

And right now – I hope you all know we are so early in this AI stuff – don't say "X is the absolute definitive principle" with 100% certainty. Keep tracking and updating. Now, I do think it's a generally safe bet that scale compute will in fact have some very major, important attributes unique to it. But that doesn't mean there aren't other kinds of models and other ways of doing things.

And for example, even in the construction of agents, bringing multiple models together is, I think, one of the things that's going to play out anyway. That's the thing that, as entrepreneurs and investors, we should particularly focus on: how do new technologies change the paradigm of what products and services are? Essentially that's another way of saying "new platforms." Sometimes the "platforms" language has people thinking it's got to be like iOS or Windows or something. It doesn't necessarily need to be that from a platform perspective, but it changes the field of one or more or all industries because of the way it works, and what's unique is what that platform technology then enables – and which of those are startup opportunities. I'm also, along with the rest of us at Greylock, very convinced that there are a lot of AI startup opportunities – that it isn't just going to be the giants. Now people say, "Well, does that mean the giants are going to be disrupted?" No, the giants are going to build some really amazing things too.

Seth Rosenberg:

I think we all believe AI is going to drastically improve the way we live and work. So I'm curious: in the most optimistic scenario of where these models get to in five years, and their adoption in all of our lives, what are the non-obvious second- or third-order effects on society, on people, on how we live?

Reid Hoffman:

One of the easiest ways to look foolish is to make very concrete predictions of the future, especially with technology evolution. Caveats aside, one of the things I think is certain in the future is that there will be agents everywhere. Every individual will have their own agent – maybe more than one: a personal one, a work one. And then the interesting question will be that your company will have an agent too (or more likely more than one), and your work agent may talk to your company's agent in various ways, et cetera, et cetera.

So essentially there's going to be a number of instantiated, namespaced interaction agents that's a multiple of the human beings involved. That's billions. So you begin to ask: what happens when you have billions of agents? You can think a little bit about the question of, "Well, okay, my agent's talking to your agent, trying to figure out when we should schedule that coffee we've been talking about…"

Then, okay, is there some protocol of commitment? In human societies, we have laws. So are we going to have to have laws around agents? What would those look like? Are they enforced in code? Something else? Are there police agents for agents? That's the kind of thing where you go: let's presume you get to a place where half the world has agents – that's 4 billion people plus – and each of them has a multiple of agents (just to be simple, let's call it two or three). So you've got eight to 12 billion agents. What does that world look like?

For example, if two agents are talking, what kind of information are they allowed to share? The libertarian goes, "Oh, you can give individuals all that control." But people won't set all that themselves – it's going to be in the defaults, it's going to be built in. So what's the information flow going to look like? Think about how you already have GDPR as kind of an anti-AI data nightmare. Think what the Europeans – the EU – are going to try to make of that.

And so it's that whole space of proliferation. You don't even need to get into science fiction – agents that anticipate your needs and so on (which I don't think is necessarily science fiction anyway) – to see that the world is going to get different and complicated and interesting, and that those second- and third-order effects will be strange. Generally speaking, the people who are technologically innovative go, "Look, let's try to be intelligent, but let's get into it and then fix it as we go" – which is obviously where I am. And then other people go, "Oh my god, that's new and different. Will those agents cause mental health issues for some human beings?" And the answer is: of course they will, just like the internet does, or traffic does, or wars do, or anything else. But can they be a net positive on those things [as well], and how do we learn how to do that as we get into it?

Seth Rosenberg:

Yeah, exactly. It also increases access to therapy and everything else.

Reid Hoffman:

Everything else. Classically and obviously, mental health and the internet is a robust discussion. One of the most central things people say is, "Oh, you might get emotionally attached to an agent like Pi." And the answer is: look, you might, somewhat. But also, somebody is there to talk with you about the anguish you're feeling, and it can be trained the right way. That's got to be net positive, even if there are some corner cases to navigate intelligently.

Seth Rosenberg:

Speaking of relationships with AI: Okay, let's open it up to maybe two or three questions and then we'll hang out and have dinner. Ben, see you over there with Espresso.

Audience:

So I'm curious to understand a bit more how you think about network effects around AI as a product itself. You touched on social networks – they have clear network effects; marketplaces have clear network effects. AI can certainly be utilized by products that have network effects. But through the lens of an investor, does AI as a product have network effects itself – aside from compute infrastructure, which is arguably independent of AI itself?

Reid Hoffman:

Yeah, good question. It's possible that some of 'em will have network effects, but they won't necessarily be intrinsic. Too often people conceptualize these AI systems as the equivalent of databases or knowledge stores – as data – rather than inference engines. Quality data is useful in teaching the inference engine (which is one of the reasons even products like Pi put code in the training set: to build inference capability, not necessarily to generate code), so the data can make a difference. And the thinking goes: the data it generates will give you an increasing data moat. Well, it can be that. Maybe, from a large pile of human interaction – through the data of what you're doing – you're tuning your agent in a specific way, and other people don't have access to the same data. But there's a ton of data in the world.

So someone says, "Well, I have a medical thing – I have a unique deal with this hospital system." Well, there are lots of hospital systems in the world, and what's more, maybe synthetic data is going to get created, and then synthetic data plus smaller data does the job. So there's a frequent discourse around data that I think too often skates past [the question of] whether there's actually a network effect there. Sometimes there might be something, but I wouldn't say AI intrinsically has network effects. It's one of the things to be precise about in your thinking.

Now, the other thing about network effects is that people are usually too sloppy in just saying "network effects," without realizing there are different shapes and different kinds. For example, 20 years ago when I first started thinking about this – boy, I left my walker at the door! – I distinguished what I was calling strong and weak network effects. A strong network effect means: because I'm using it, I won't use something else. Certain kinds of marketplaces have that. Say it's a collectibles marketplace: all the supply is here, the supply never goes anywhere else, and so both the buyers and the sellers tend to be locked in, because it's very, very difficult to get the right price for your collectible if you're off it.

On the other hand, there are also weak network effects, which tend to be communications network effects. For example, if I'm using Messenger, I have my buddies and my friends there, but I can also use Snap and I can also use WhatsApp. Obviously there's an additional hassle to having an additional inbox, but if I've got Seth and my friends on Messenger and Sarah and her friends on WhatsApp, I can easily be a member of both. That's a weak network effect, and which kind and shape of network effect you're looking at is one of the things to examine. And there might be some interesting network effects that we haven't yet theorized about that will also come in AI. That's one of the reasons why I'm looking at it. So it's not categorical, but you need to get very good in your thinking to see it.

Seth Rosenberg:

Yeah, that's a good question. That's one for the longer interview: exploring what types of networks can be built on AI, because you don't get them for free.

Reid Hoffman:

Yes, exactly.

This direction. Hello.

Audience:

How many players do you think are going to get to the frontier of large models? And to what extent do the large models get commoditized? Is that where the value accrues, and who gets there?

Reid Hoffman:

There's a bunch of questions around the large models. I do think that the large-model training runs are going to get hugely expensive. There's a line of sight to the final training runs alone, let alone the computers and everything else and all the buildup to them, costing hundreds of millions of dollars today, which can be challenging from a startup perspective. But on the small end, I think there's at least enough competition that the providers of such models (Microsoft, OpenAI, Google, and others) will be competing with each other, trying to lower their prices to an operating margin of close to zero in order to hold their position in the space.

And so I think that will provision for startups, but it'll make startups harder if your theory is, "I'm going to build scale compute." For that, you have to have a very specific thesis about how that's going to work.

Now, that being said, I believe there will be huge value in continuing to scale compute, with training clusters drawing the kind of power we use to power cities (which is essentially where we're going), and some unique things will still come out of that. But the question is: which capabilities begin to S-curve versus J-curve, and what are we learning about how to train these models such that you can train a pretty interesting model on a smaller compute infrastructure?

So for example, one of the papers that I found really instructive from last year was from Microsoft Research, called "Textbooks Are All You Need," which was basically about how to do that.

And so I think there will be a bunch of stuff in small and medium-sized models that will still be interesting, and that won't be completely occluded just because the frontier models exist. All of that is the complexity of the space. But nevertheless, I don't want the answer of "it's complex" to occlude this fact: if you're trying to compete with the frontier models and you don't have a theory of the game that accounts for the cost structure going up intensely year by year, you probably have a bad theory of the game. That's where I think it'll play out. I was in Europe last week, and this is one of the questions they're intensely focused on. They would like to be fully in the game, or at least the forward-thinking ones would, which a bunch of folks in France and the UK do. So anyway, I think it's a good thing to have more frontier models, but it again gets back to: the large players will win, and the small players will win, but it's not necessarily the same game.

Seth Rosenberg:

Let’s do one more question.

Audience:

Hey Reid. So thinking about the land of agents, which might be five years out: how do you think about decentralization and democracy preservation, and what role, if any, do blockchains have to play in it?

Reid Hoffman:

And then, two hours later, when I've finished answering the question… Great question, but…

Seth Rosenberg:

From the founder of Aptos.

Reid Hoffman:

So one is, we have to rebuild a set of trustworthy information flows that inform democracies intelligently, as collective learning systems that improve over time: what actually is the state of the world, what works in the world, which sources of information to trust, et cetera. That's critical. And right now we're heading a little bit in the other direction, or in a lot of cases we may be. That's for natural human reasons: human beings tend to divide into groups and compete with each other. This is frequently described, as we all know, in language like filter bubbles or ecosystem bubbles. And one of the good possible things around Web3 is to say, well, here are systems for building trustless trust. If people bought into that, and it worked the right way, and you had identity validation of certain pieces of information tied to people and so forth, you could get more trustworthy information.

Obviously one of the big questions is: do people understand that system well enough to understand that it has a certain trust coefficient versus others? Most people tend to build their trust systems based on who their community is already listening to. So it behooves all of us to ask, as I always ask myself: what might my community be believing that other smart people would dispute, in a way that should update my ecosystem? Now, one of the requirements here is that we need certain capabilities, like being able to say, "This is a trusted provider; I know what the provenance of this information is." You still need systems where you believe the provider of the information, someone or something, is a trustworthy source, and decentralized [systems] might lead to that.

Seth Rosenberg:

Amazing.

Well, thank you so much, Reid, and we'll all be here for dinner in the other room. Thank you, Chris, and everyone else who helped put this amazing event on. And thank you to everyone here. We're obviously going to learn from each other. It's an incredibly impressive group, and I feel lucky to be part of this community. Thank you all for coming.