Seth Rosenberg:
Hi, I am Seth Rosenberg. I'm a partner at Greylock and the host of Product-Led AI, a series exploring the opportunities at the application layer of AI.
I'm excited today to have Aravind Srinivas, who's the CEO and Co-founder of Perplexity. Perplexity has made the bold move to take on search, but instead of links, it provides a cohesive answer – plus links. It's an interesting approach that has resonated with a lot of people, including me. Perplexity is one of the few AI tools that I use on a weekly and often daily basis and has exploded in popularity over the last couple of years. Now it's at tens of millions of monthly active users.
So I'm very excited to have Aravind on today to discuss the product and the future of AI applications, including Perplexity. So Aravind, thanks for joining.
Aravind Srinivas:
Thank you for having me Seth.
Seth Rosenberg:
So maybe to get it started, we'd love to just hear about your background and what are the series of experiences that led you to start Perplexity.
Aravind Srinivas:
I'm originally from India, just like a typical nerd doing all these olympiads and cracking the entrance exam to the IIT. I got a degree there in electrical engineering and stumbled upon AI and machine learning during my undergrad (I used to borrow gaming rigs from students in the lab to train neural networks). I got excited about the DeepMind Atari result and started trying to replicate it. I wrote a few papers that got me into Berkeley for a PhD in AI. I wrote a paper there that got me an internship at OpenAI under John Schulman, the guy who went on to be the lead scientist of ChatGPT. Little did we know we would all be doing the same things later, but at that time it was not clear at all. AI was still a pretty research-[heavy] discipline. Then I got an internship at DeepMind. A year later we pivoted to unsupervised and generative models, because the writing on the wall was clear that generative AI was getting really, really successful, even back in the GPT-2 days.
It was during those days that I would really think hard about what the next PageRank is. I would even read all these books in the Google library on how Google works – In the Plex and all sorts of books written on the early days of Google.
The startup fever hit me too. I was in Silicon Valley already, not exactly the core of Silicon Valley, but Berkeley is pretty close to San Francisco. So I would hear about startups, and I watched the TV show Silicon Valley. I initially thought it was funny, but then people told me it's quite close to reality.
Seth Rosenberg:
Definitely.
Aravind Srinivas:
So I was pretty excited about all that and tried to start companies (literally) on lossless compression.
Seth Rosenberg:
That's a little too close to Silicon Valley, I think.
Aravind Srinivas:
I know, I know. But it's actually a good idea, and generative models are the way to do that. I think it'll be done in the near future, when the hardware gets better.
But it's sort of the thing where I was really thinking hard about how to build a company Google-style, where there is a core underpinning of research, but the expression of that is an end-user-facing application. We really wanted that sweet spot where it is a product that people can use every day, and the more people use it, the better the AI gets, and the only way to improve the user experience in a dramatically significant way is to make advances in AI.
This duality is essential, and I really tried hard to think about the categories in which this could actually be done. Larry Page was famous for saying that the mission of Google stays relevant until AI is solved. Solving search is an AI-complete problem.
Seth Rosenberg:
Yeah, that's super interesting.
Aravind Srinivas:
He truly saw search as not just giving you 10 blue links, but actually getting you whatever you wanted really quickly. I was very inspired by that. Then Transformers were really beginning to take off. I was excited about them, but it was still research stuff.
So I worked with the people who built Transformers at Google, and then I went to work at OpenAI full-time, because it still felt like it was not the moment yet, at least in 2021. And then in 2022, DALL-E and GitHub Copilot and all these things began to take off, not just in terms of people using them, but as revenue-generating machines. That's when I thought, okay, this is pretty serious. This is no longer a thing we just did research on in labs and wrote papers about; people can actually put front ends around it where the core model itself is the product (of course, the model alone is not sufficient, but the model does a lot of the heavy lifting behind the product, and then you can build a lot of cool UX around it and collect data and make it better).
So I reached out to Elad Gil and Nat Friedman around that time. I DMed them and both of them responded, and within a week or two, they both committed to putting in (together) around $1-2 million. My friends were like, "Look, you're going to regret it if you don't ever try. And these two are legendary investors, so you're not losing anything. Worst case, you go back and keep doing your research at Google and you'll get money anyway. So this thing is worth trying."
Seth Rosenberg:
But the initial idea was not what Perplexity is today, right?
Aravind Srinivas:
I mean, I always wanted to disrupt search. I'm obsessed with search and Google. That obsession carries over even today. But I think Elad was right in pushing me to think more concretely: "Hey, concrete ideas. Don't just talk in abstracts."
So we started to look at how it would look if you searched over data sets you own, or very narrow data sets. That went on to prototypes of searching over Twitter, searching over LinkedIn, GitHub. I even once met Reid Hoffman in those early days. I asked him, "Hey, there's this LinkedIn search prototype, but LinkedIn's known for suing people who scrape their data. Is it okay?" And he just laughed. Then we used to show these demos to people, and we were getting pretty excited, but one day we just had this thought: okay, what if we just searched over the whole web and built a tool for ourselves to ask questions about anything? It came out of our own need. We just did not have any company-building or product-building experience. So we built a GPT-3.5 Slackbot, and that was working reasonably well, except that it was hallucinating. It just made up stuff.
So then we saw how to solve the hallucination problem: combine a search index and LLMs. Use search-engine retrieval, and then have the LLM do the summarization. That became Perplexity. From the day we had it, a few people were using it and there was a lot of interest. I was using it myself regularly, but the fear of launching was there: "A lot of people will think I'm an idiot, or too ambitious." So I was not sure.
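The retrieve-then-summarize pattern Aravind describes can be sketched roughly as follows. This is a minimal illustration, not Perplexity's actual code: the search backend is stubbed with canned results, the URLs are made up, and in a real system the final prompt would be sent to an LLM API.

```python
# Minimal sketch of the "answer engine" pattern: run a search first, then
# ask an LLM to summarize ONLY from the retrieved snippets, citing each
# source. Grounding the model in retrieved text is what curbs hallucination.

def search(query: str, k: int = 3) -> list[dict]:
    """Stub retrieval step: a real system would query a web-scale index."""
    corpus = [
        {"url": "https://example.com/a", "snippet": "Paris is the capital of France."},
        {"url": "https://example.com/b", "snippet": "France is in Western Europe."},
        {"url": "https://example.com/c", "snippet": "The Eiffel Tower is in Paris."},
    ]
    return corpus[:k]

def build_prompt(query: str, results: list[dict]) -> str:
    """Assemble a grounded prompt with numbered sources for citation."""
    sources = "\n".join(
        f"[{i}] {r['url']}: {r['snippet']}" for i, r in enumerate(results, 1)
    )
    return (
        "Answer the question using ONLY the sources below. "
        "Cite each claim with its [number].\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}\nAnswer:"
    )

prompt = build_prompt("What is the capital of France?", search("capital of France"))
```

The LLM then completes the prompt, and the same source numbers double as the clickable citations shown under the answer.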
And then when we sent the prototype to people, people would be like, "Look, I think this answer idea is cool, but you should still put the links. I want to be able to pick a link or the answer depending on what works." But we felt like if we did that, we would not differentiate at all. It would just look like another Google, except with some summary at the top, and we were pretty sure they would do that themselves. There is no differentiation, right? You really want to make it prominent that you're different.
Seth Rosenberg:
Right.
Aravind Srinivas:
We thought, okay, the real difference is it's no longer a search engine – it's an answer engine. It just gives you an answer and there are sources. That's it. The sources thing comes from our academic background: we always cite things when we write papers. We thought it was worth the experiment. At least when they want to ask questions and get answers, people will come here, and for regular searches, they'll go to Google. That was the thinking.
Seth Rosenberg:
I feel like you're taking advantage of two things. One is this new technology that enables a new way of basically retrieving information for people. And two is kind of the innovator's dilemma of the business model for companies like Google, where you have this window of time to build a new behavior and become the new default, as you say, answer engine.
Aravind Srinivas:
Only later, after we launched, did we realize that this is actually disruptive to their business model. There was always this question: why would Google not do Perplexity? Google can do a search generative experience. They can put the experimental AI summary at the top, which honestly makes the page even more clunky and slow. When you go to Google, you want the 10 blue links and to quickly get away from there.
But the business reason is that people actually just get the answer. What's their incentive to click on links anymore? It's much lower. The fraction of link clicks will be much lower, and they make money off people clicking on links, which means they don't have an incentive to lower link clicks. They only have an incentive to increase them. So even if they do the SGE thing for every query, they don't have an incentive to do it for commercial queries.
So we felt like, okay, clearly there's an angle here, and ChatGPT was not using the internet for all its queries anyway, and it was starting to do everything in one chatbot. So we felt there needed to be one product focused on just web search in the form of an AI conversation, and that became Perplexity.
Seth Rosenberg:
I've heard you talk about how everyone has natural curiosity, but not everyone knows how to ask the right questions.
Aravind Srinivas:
It's hard to ask a question, right?
Seth Rosenberg:
So how do you solve that in the product?
Aravind Srinivas:
So I think there are a few ways we try to address this. The first step is suggesting questions to ask when you come to the search bar itself, which is something search engines already do, but we don't do it as auto-suggest, where as you keep typing it keeps changing in the dropdown menu, because we didn't find that users liked it. That was more tuned for the search-engine use case.
And then we have Discover – Perplexity Discover – which has interesting threads of recent news and other scientifically interesting things. You can look at that and get a feel for what questions to ask. We also do this thing with Pro search, where you start with a question, use the Pro search toggle, and we iterate with you: we ask clarifying questions and then we expand your question together. For example, you can start with "Trip to Japan," and then we ask you, "Which part of Japan are you interested in? What are you interested in doing in Japan? Is it culture, or nature, or food and sightseeing?" We ask all these additional questions, and that helps us get you to a much more refined version of the question than what you started off with.
Seth Rosenberg:
Yeah, I like your perspective that not everyone needs to be a prompt engineer and solving this problem of answers includes solving the problem of asking the right questions.
Aravind Srinivas:
That's sort of the direction to take all these AI assistants towards, where we don't blame the user for not being able to come up with a good question. Instead, try to have the AI work for the user to get them to a good point. Start with something really vague. It could even be a single word. I think that's the sort of thing that we want Perplexity to be – the ultimate app that keeps working for you to make you smarter and smarter.
Seth Rosenberg:
What are people using Perplexity for today? How do you see the segmentation of all search queries in the world?
Aravind Srinivas:
So we see Perplexity in three dimensions: accuracy, latency, and readability. This is talking more about the readability part, and readability should be tuned for different query categories. I think Perplexity started off being very useful for fact checks and deeper dives into scientifically interesting topics. The traditional search use cases – getting the latest score in an ongoing NBA game or a tennis match, or converting US dollars into euros or Indian rupees – are not covered well in Perplexity today. That's the truth, and we have to improve here. Google does an amazing job with this today, giving you all this information really quickly, in a way that is visually tuned for you to consume it very fast. Basically, what I'm trying to say is that summarization – giving you answers in whole paragraphs and having everyone just converse and chat in this UI – is not the right model for every query.
If I'm asking about you, Seth Rosenberg, I should get a knowledge panel about you with the salient facts highlighted. If I ask about Greylock, I should get the same thing. But if I ask about Greylock investments, I probably want a table of top investments and which stage you invested in. [I should get things like] What's the current AUM, or the market cap of your companies? Or if we talk about the AI strategy of Greylock, I should just get all the AI companies Greylock has invested in, and the rationale behind it, pulled from your blog.
These are the kinds of things you can only do if you're ultra product-focused and working backwards from the user. What would the user want? And then, how do you do it without hard-coding all these things? You don't want to hard-code it for all these cases because that's just not scalable. Maybe you'll be better on these queries, but that's not the way to build a hundred-billion-dollar company either. So the question is how to do it in a way where you're really building a nice taxonomy of things and optimizing all these use cases really well, while still giving the 80/20 on any query in general. This is the challenge, and that's why building Perplexity is a lot of hard work.
Seth Rosenberg:
Definitely.
You recently launched a pro subscription. How are these users interacting with Perplexity differently from the normal users?
Aravind Srinivas:
Look, we are not very actively trying to convert users to become paying users. If we actually did that, we'd be making a lot more revenue than today. The right way to think about this is to get people who are anonymously using it today to become signed-in users. We see a very high correlation between signed-in users and weekly active users. Very high correlation, almost like they're the same thing. And then we see a very high correlation between daily actives and paying users.
So the right way to think about getting more users to pay for the product is to convert them to DAUs, and the funnel for that is anonymous users to signed-in users, and then signed-in users to daily active users. That's the way we're thinking about the funnels. We are not thinking about it as, "Oh, we have 10 million users and probably around 100K paying users, so that's just 1%, which is not that big, so let's insert more paywalls." No, that's not how we will think about it.
Seth Rosenberg:
If you zoom out for a second to how, in five years, Perplexity is going to win and be a $50-$100 billion company, I can see at least three potential paths.
One is there's this new technology and new way of consuming information, and you go for the large consumer opportunity and you live alongside Google and ChatGPT. With the combination of model orchestration and UI and fine tuning, you're able to occupy a distinct use case for the masses. That's one path.
The second path is maybe go more enterprise focused or more vertical focused, where you really go deep on a couple of use cases.
Then, maybe option three is Perplexity evolves into more of the agentic use cases, where it becomes a router between different agents that eventually exist in the world.
Aravind Srinivas:
I think all these are interesting scenarios. The challenge for us is to not try to do everything at once and fail at all three.
I don't even think about it as "consumer, router, enterprise." The way I think about it is: how many queries per day do we have today, how many do we want to have five years from now, and how do I maximize that? That's it. That just means we are going for growth and usage, even in an agents world.
My belief (maybe Sam Altman or somebody else might say something else), but my belief is people will still want to ask questions and make their decisions themselves based on the answers the AI gives them, and then task the AI to go execute their decision. You're doing your research on Perplexity and it helps you to arrive at a decision of what you want to do, and the agent takes the part of like, “Okay, go do this for me,” as the last step.
And then there's another vision, which says: why do you even want people to do the research? It's one single thing – you go and ask, "Oh, figure this out for me," and it does the research, arrives at the decision, and executes the act. I don't think people want that, just in terms of the basic human agency aspect of it. Even if you could give humans that superpower – some AI that understands them so well that it just runs their life – that's not going to be the case, basically.
That's why I think Perplexity is fine even in a post-agents world; we just have to integrate all these agent steps within our own app, so that we are ready for the agent workflows and for the enterprise.
A very simple explanation for why we even want to do the enterprise side: there are people who use Perplexity just like a regular product at work and are getting banned by employers for it, because people are afraid that data is leaking to an AI. That shouldn't happen, basically.
By the way, there are browsers that offer this and make hundreds of millions in revenue – really just like Chrome, but with a lot more of the security that enterprises want. (I'm talking about the sort of use case where it's literally Perplexity running on your work laptop, in whatever browser is supported and tracked by the employer, with the employer comfortable with it.) So that's the start for us.
And then there are going to be use cases that are very specific to your work: uploading files and asking questions, or connecting to your Notion, your Slack workspace, your Gmail. We just see these as extensions of Pro use cases. There's the free Perplexity, and there's Pro, and Pro is meant for even deeper research – harder questions where the value of the answer is a lot higher – and we see this as a natural way to transition there. So that part we'll definitely work on.
And for the consumer, it's just covering the biggest surface area of queries that we can get. So we have to do all three, honestly, but I don't see it as three different projects. I just see it as a way for us to maximize our surface area of queries, volume of queries, and what you do after you get an answer.
Seth Rosenberg:
How would you describe the tech stack of Perplexity? Where is your team focused on building?
Aravind Srinivas:
So we have a very, very strong backend team that works a lot on the core latency. Even today people tell me Perplexity is still the fastest. This also speaks to the "wrapper" criticism: if we were just a wrapper, there would be no advantage in our infrastructure, our systems, or our product over somebody trying to build a clone of it. I always see a lot of open-source Perplexity clones. There are a lot of people who do these things, and obviously they get a lot of interest and hype on social media, and when I try to go and use some of them, they are just so slow. And I'm not even talking about the scale of millions of queries a day; it's literally just a hundred or a thousand people testing it.
To actually build production grade infrastructure, you need to work hard on many details. It's not just the first step. As Elon Musk says, prototypes are easy, production is hard. And that's what it is. So we have a lot of people working on that.
And then there's the underlying stack, which includes the models, the search indexes, the retrieval engines. We have to build a lot of it in-house over time, but not for the purpose of building a moat. If you talk about building moats, then we'd all have to go build our own Stripe, our own AWS, our own Nvidia. Obviously we can't do that. The reason we are building our own models and our own indexes is mainly that it's the only way to make the product even better in the long run, not building them for the sake of it.
You can think of companies that use models in three categories: taker, shaper, maker. A taker just takes an existing model, prompt-engineers it, and puts it into production. A shaper says, "I'm fine-tuning it. I'm post-training it a lot on my own data and specializing it for my product." A maker builds the core foundation model itself. I would put us in the shaper category: we can take a base model that people open-source and do the SFT and RLHF steps ourselves, as well as take an RLHF fine-tuned chat model and train it even further to specialize it for our product. We do this with both open-source and closed-source models. We fine-tune GPT-3.5, we fine-tune Anthropic models, we fine-tune Llama. We fine-tune and deploy a bunch of them in our product.
We are not in the maker category, building our own foundation models from scratch. It's very expensive, and I also think it's a race to the bottom there unless you're funded with many billions of dollars. And we are no longer in the taker category. We started off there, because that's the right thing to do when you're just launching a product, but we are not in the taker category anymore.
Seth Rosenberg:
I'm curious: if you take, let's say, five parts of the stack that you're building – the first is the underlying infrastructure and latency, the second is the search index, the third is the model orchestration layer, the fourth is fine-tuning the models themselves, and the fifth is the UI – out of those five things, are there one or two that you believe will be the core IP moat for Perplexity, or do you really believe the magic is in how it all works together?
Aravind Srinivas:
I think the magic is in how it all works together, but I do believe that what is hard to create is usually the most valuable. The reason the orchestration is hard to create today is that it requires a unique combination: people in design, user experience, and product; extremely good backend orchestration; a good understanding of AI models (at least to the extent that you can shape them for the product); and a good understanding of search indexing and retrieval infrastructure too. In which company do you see those five different skill sets together? It's very hard. That's a unique, special thing, but not incredibly hard to recreate. On a longer-term basis, I think the even harder thing to recreate is going to be our search index.
You can ask, "Why is it hard? You're not even scraping the whole web, you're not even indexing the whole web; you're only doing it on part of it." But that's the thing – the whole web has never really been useful. Only a part of it has been very useful. How do you know which parts of the web are useful? It's almost a chicken-and-egg problem. The parts of the web that are useful are those that can be used to answer people's day-to-day questions in the form of an AI chatbot. That is our definition. But how do you know that before having a product, having it used by people, seeing where it fails and doesn't fail, and curating your scraping according to that? The only way to know is to have a product used by a lot of people and then go back and redesign your crawling and indexing infrastructure.
So as we keep succeeding at getting more users, and that feeds back into our index, crawling, and ranking over a period of multiple years, that becomes basically impossible to recreate unless you are also an equally big consumer application. Even for Google it's hard to recreate, because even though they have an index, they're not extremely curating it for the AI chatbot use case. They do not have an incentive to do that. They have an incentive to keep the index as general as possible for the search-engine use case. So this is a game that can only be played if you're committed to it for the really long term. And if we succeed at it, I think that becomes a real moat.
Seth Rosenberg:
Yeah, I totally agree, and it's a very unusual thing for companies to be great at this depth of the stack. I think that's part of the reason why it's hard to build a breakout consumer AI application, but it's also part of the maybe non-obvious moats that you can build here.
One more question on this thread on the model orchestration layer. There's obviously one version of the world where OpenAI's models become more and more powerful, and really just a single model is able to serve the majority of use cases at the highest quality. There's another version of the world where routing to many different models that are specialized for certain use cases is the optimal situation for the next several years. I'm curious what your perspective is.
Aravind Srinivas:
Models are becoming somewhat of a commodity. It's obviously easier if you just have to serve one model at a time. From our perspective too, if there's a single API, it's a lot easier. The reality is that the best model is the cheapest model for a certain capability, and that's constantly changing, so you are incentivized to make the best use of anything that's out there. I don't think there are going to be models that are great for one query category but not another; I think the split is going to be by query difficulty. You don't want to use Opus or GPT-4 to answer, "What's the capital of France?" At the same time, you do want to use Opus or GPT-4 to answer, "What will happen to the world if interest rates go down in the next few months?" Or, "What would happen if there were an actual World War III?"
Those kinds of questions are where you'd want the most capable reasoning model in the Q&A workflow. So my sense is we constantly think about the difficulty level and use many models together, including the smaller models that clarify user intent. You don't need the largest models there, so by design you have to have an orchestra of models. Some of them will be open source, some will be closed. That is our advantage too: we are not tied to any one model family, we are not tied to one infrastructure. We can adopt whatever is best in the market today, and there's a certain level of focus and benefit you get by being an application company.
Seth Rosenberg:
Definitely. Yeah. So it sounds like in a world where cost and latency still matter (which is likely the world we live in), routing each question to a model of the appropriate power is the optimal experience.
Aravind Srinivas:
Yeah.
Seth Rosenberg:
I'm curious: In a world where Perplexity answers everyone's questions, what does that do to the value of an underlying website or application?
Aravind Srinivas:
There's no way to cook a good meal without the right ingredients. I see Perplexity as the final cooked meal, but the ingredients come from different websites that keep creating interesting content. I think getting cited on Perplexity is going to be an important thing – whether the content on your site actually becomes part of a Perplexity answer – and there'll be a whole citation economy created out of it.
This is already true in professions like academia, where the number of citations you have is tracked, and whoever is highly cited is very respected. So domains that are highly cited in Perplexity answers to day-to-day human questions would, I think, be really respected. They'll build brand authority and trust through that. In the new citation economy, this is the equivalent of the monthly-visits metric for websites – the whole PageRank idea sort of reemerging. PageRank was inspired by academia: pages that are highly cited are important. So websites that get highly cited are important too, except that used to be determined through the backlink structure; now it's determined by whether the website's content got cited in an actual answer, and that can be used to build a new, different PageRank of the web.
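The domain-level "citation economy" ranking sketched here can be computed in its simplest form by counting how often each domain's pages appear as citations in answers. The URLs and counts below are invented for illustration; a real system would weight by query volume, recency, and answer position.

```python
# Toy sketch of the citation economy: rank domains by how often their
# pages are cited in AI answers, analogous to how PageRank ranked pages
# by backlinks. Each inner list is the citation list of one answer.

from collections import Counter
from urllib.parse import urlparse

answers_citations = [
    ["https://nytimes.com/a", "https://arxiv.org/abs/1"],
    ["https://arxiv.org/abs/2", "https://nytimes.com/b"],
    ["https://arxiv.org/abs/3"],
]

def domain_citation_counts(answers: list[list[str]]) -> Counter:
    """Count how many answer citations each domain earned."""
    return Counter(urlparse(url).netloc for ans in answers for url in ans)

counts = domain_citation_counts(answers_citations)
```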
And of course, how do you ensure a new website can still catch up and become highly cited? It's just like how a new scientist can come along, create amazing papers and discoveries, and become highly cited too. These are things you should think about, but I feel like that's what's happening, or going to happen, to the web in the near future.
I also think the whole notion of browsing on the phone will change a lot. Web browsers are still fine; they're not going away anytime soon unless we completely redesign around desktop native apps – instead of opening Chrome, you just have, say, a Perplexity app for Mac and do all your work there. I don't think that's going to happen, because we're used to using the browser, and we still need regular browsing to get to a link and so on. But on the phone it's a different scenario.
In an earlier era, people were worried about Google losing market share because things were moving to the phone. That didn't happen to them. And that's often the thing Sundar Pichai says – like, "Oh, we survived the mobile thing, so we are going to survive the AI thing." But actually, with AI, mobile becomes an even bigger problem than before. When mobile came, there was no technology where people could directly ask questions and have AIs work for them, instead of going to websites and manually entering details, which still happens on the phone. Now all these things are quite close to possible, or already possible to some extent. So the necessity to open the Chrome app on your phone is much lower, and that will definitely decrease the surface area of ads on the mobile browser. That's definitely going to impact the whole CPC and CPM economy on the phone. The mobile web is going to get disrupted more, and people are going to use native mobile apps more directly.
Seth Rosenberg:
Yeah. One really interesting thing you said about the citation economy: there's one way to view the future of Perplexity as commoditizing webpages, but there's another way to view it where it actually increases access for the long tail, because navigational Google search is really winner-take-all for every individual search – you click the top link, and that's usually where you stay. Whereas in a world of Perplexity, if there's an interesting paragraph on page 30 of the Google search results, it may actually be elevated into part of the answer.
Aravind Srinivas:
That's right. Yeah.
Seth Rosenberg:
Two closing thoughts. One, I'm curious how business models will evolve when you're delivering an answer. Obviously you want to stay true to the integrity of the answer, but that could also end up being a valuable auction process, in a way that's different from the existing Google business model, which is famously making almost a billion dollars a day today. So I'm curious how you see these business models evolving in a world of Perplexity.
Aravind Srinivas:
People are expecting immediate changes. That's not going to happen. These are empires that have been built over a decade or two. AdWords started around 2001 or 2002, I believe, so it's been 22 years or something. It's not going to come crashing down immediately. Whatever advertising revenue Instagram or Facebook is making – these are all things that ideally should have been Google's, right? You can make a case for that. Because they created a new way for people to discover content, they created a new ad economy around it too.
And that's going to happen with the AI chatbots as well. Right now a lot of these are weekly-usage tools, but they're starting to become daily-usage tools and part of your workflow: the way you work changes, the way you live changes, how you make your decisions changes, where you get your information from changes.
I think all the people currently advertising on the traditional platforms will also rethink their strategy a lot. And this requires education from our end to them, that [message of], "Hey, look, you can also come and advertise here, and this is how you do it. These are the ad units you're going to bid for, and this is how the economy is going to work." People automatically change. And like I said before, it might be a lower-margin business – it's not going to be an 80%-margins business – but that's the thing: for a newcomer, any positive margin is fine, while for an existing incumbent, even a marginal reduction in existing margins is really bad, because it's happening at scale. That's why I think this is one of the very rare moments in computing history where the startups have the advantage over the incumbents.
Seth Rosenberg:
I totally agree. Okay, well, let's leave it at that.
Aravind, thank you so much for joining. Thank you for building this product that I use all the time. I’ve really enjoyed getting to know you over the last several months and appreciate you spending some time today on Product-Led AI.
Aravind Srinivas:
Thank you so much, Seth.