From Startup to Exit
Welcome to the Startup to Exit podcast, where we bring you world-class entrepreneurs and VCs to share their hard-earned success stories and secrets. This podcast is brought to you by TiE Seattle. TiE is a global non-profit that focuses on fostering entrepreneurship. TiE Seattle offers a range of programs, including the GoVertical Startup Creation Weekend, the TiE Entrepreneur Institute, and the TiE Seattle Angel Network. We encourage you to become a TiE member so you can gain access to these great programs. To become a member, please visit www.seattle.tie.org.
Gen AI Series: Gen AI from a VC perspective, A conversation with Vivek Ramaswami from Madrona Ventures
Generative AI has taken the industry by storm, as we all know. Besides OpenAI, a number of players like Meta, Google, and Anthropic have announced new foundation models. In this episode, we get Vivek's perspective on the competitive landscape and how it will likely evolve. Will open-source models start to match OpenAI's GPT-4o in terms of capabilities? We also discuss Apple's recent announcements around "Apple Intelligence".
Hundreds of millions of dollars of VC money have poured into Gen AI startups, but who is really making any money? Vivek gives us his insights, as well as which layer of the AI stack Madrona invests in. Finally, we ask Vivek to make one big prediction for the industry over the next year. Tune in to listen to his insights and predictions!
About Vivek Ramaswami
Vivek has spent his career investing in high-growth companies across software and fintech. Prior to Madrona, Vivek helped launch the venture capital arm of Steadfast Capital, where he led investments in Zapier, Klaviyo, Lucid, Forethought, Sendbird, Jumpcloud, Algolia, Wealthsimple, and other high-growth companies.
Prior to Steadfast, Vivek was a Principal at Redpoint for five years, where he helped source or work on investments in SentinelOne, Nubank, Bright Health, Cockroach Labs, Hashicorp, and others. Vivek started his career in technology investment banking at Goldman Sachs, advising clients on M&A and IPO opportunities in the software and consumer internet sectors.
Vivek Ramaswami joined Madrona in 2022. As part of the Madrona investment team based in California, Vivek partners with ambitious founders across enterprise software, cloud infrastructure, data tools, intelligent applications, and fintech. He focuses on helping founders who have achieved product-market fit scale to the next level.
Brought to you by TiE Seattle
Hosts: Shirish Nadkarni and Gowri Shankar
Producers: Minee Verma and Eesha Jain
YouTube Channel: https://www.youtube.com/@fromstartuptoexitpodcast
Welcome to the Startup to Exit podcast, where we bring you world-class entrepreneurs and VCs to share their hard-earned success stories and secrets. This podcast is brought to you by TiE Seattle. TiE is a global nonprofit that focuses on fostering entrepreneurship. We encourage you to become a TiE member so you can gain access to these great programs. To become a member, please visit www.seattle.tie.org.
SPEAKER_03: Welcome back to another episode of our podcast, From Startup to Exit. My name is Gowri Shankar. I'm on the board of TiE Seattle and a serial entrepreneur based in Seattle, Washington. My co-host, Shirish Nadkarni, who is also a board member of TiE, is a serial author, and I'll get to that in a second. We are very excited to have our next guest on our generative AI series. He is from the Bay Area, so he'll bring a unique perspective on the generative AI movement that we are in at the moment. This podcast is brought to you by TiE Seattle, and TiE is a non-profit that fosters entrepreneurs all over the globe. Shirish has written two books. We borrowed the title of his first book, From Startup to Exit, and his second book, Winner Take All, is also out wherever books are sold. We thank you all for subscribing and supporting us through the one-year journey we've been on doing this podcast. Please subscribe and share; the podcast is available everywhere you can listen to podcasts, and it's also on YouTube, where you can watch it on video. With that, let me hand it over to Shirish to introduce our guest and get this episode underway. Shirish.
SPEAKER_04: Thank you, Gowri. I'd like to welcome Vivek Ramaswami to our podcast. Vivek is an investor at Madrona Ventures, which is a premier firm based out of Seattle and now in the Bay Area as well, where Vivek is based. Vivek also writes a really awesome newsletter, which I would highly recommend, called Aspiring for Intelligence. It covers AI and the latest trends in the enterprise. So today we'll be exploring the space of generative AI. Vivek has written a lot about it, so we'd love to tap into his expertise. Welcome, Vivek.
SPEAKER_01: Hey Shirish and Gowri, thank you so much for having me on. Great to be here.
SPEAKER_04: So let's start with your background. Tell us a little bit about your journey from Goldman Sachs to Madrona Ventures and what led you to join Madrona in the Bay Area.
SPEAKER_01: Yeah, absolutely. So I'm actually not a Bay Area native myself. I'm originally from Canada, from Edmonton, Alberta, and I went to university on the East Coast, near Toronto, before making my way out to San Francisco to work at Goldman Sachs, where I started my career and worked in the tech investment banking group for a couple of years. That was in 2013, and it was a really fun time to be at the bank. Facebook had just gone public, and there were some really amazing startups and tech companies that were growing and scaling. It felt like everywhere around me in San Francisco, people were building startups and products. And I thought, okay, this is really interesting. I'm more of a finance-background type of person, but I want to find a way to get involved in these startups and companies. What's the best way to do that? So I started looking into venture capital as a career, and one of the things I was doing at the time was reaching out to people who were a couple of years older than me or had just left Goldman and were at venture capital funds. One of the gentlemen I was close with had just joined Redpoint Ventures, and he was telling me a little bit about it and about VC, and I thought, oh, this sounds really cool. You get to meet really awesome founders and entrepreneurs all day. You get to look at different industries and at the tech landscape, see what's changing, and try to make investments here and there. That seemed like a really interesting field, so I ended up pursuing it and went to work at Redpoint in 2015 on the growth team. We were the $400 million growth fund at the time; that was fund two when I joined, and I think they're up to fund four or five now.
But it was a terrific place; I spent about five and a half years there. We were mostly investing at the Series B and Series C stage, and in terms of where I was focused: enterprise software companies and cloud infrastructure. We were investors in companies like Snowflake and HashiCorp and SentinelOne and a bunch of other great businesses. I did that for about five and a half years, learned a lot, and had a really good time. Then, at the end of 2020, I was looking to do something a little bit different and a little more entrepreneurial. That's when I linked up with my current partner, Karan Mehandru, who is one of my partners at Madrona. At the time he was coming from Trinity Ventures and I was coming from Redpoint. We joined a large hedge fund called Steadfast. They had been looking at launching a venture fund, and we started that and ran it for about a year and a half, making about 13 investments, mostly mid- to late-stage software companies. That's when we heard from Madrona, and we had known the Madrona folks for a little while. Tim and Soma and Matt, who you mentioned, were leading the fund and doing a lot of really incredible investments out of Seattle. They had been thinking about opening up a Bay Area office and starting a team there, and Karan and I were chatting with them, and they said, hey, why don't you come join us? Madrona had just raised two new funds. We felt like, hey, we've got an office here, we've got our networks here, and increasingly, in the areas where Madrona had an early lead, AI and AI infrastructure, where they had been investing for the past 10 years, we felt there was a lot of opportunity in the Bay Area and really beyond, across the US and the East Coast as well. So the fit was really good.

So Karan and I joined in late 2022, and we've been here for the last almost two years. Okay, great.
SPEAKER_04: So let's talk about what's happening in the field of generative AI. I'm sure you must have heard and read about Apple's recent announcements: enhancements with Siri and integration with ChatGPT, writing tools, summarization tools, image generation, and so forth. What were your thoughts about the announcement? Was it what you were expecting? Any surprises?
SPEAKER_01: You know, I would say it wasn't incredibly surprising that Apple realized they needed to start talking about AI. You mentioned that I write this blog, Aspiring for Intelligence; I co-write it with my colleague Sabrina Wu. Sabrina and I were chatting about what we should cover, I think two months ago, and we were saying, hey, Apple is the one giant everyone would expect to incorporate AI into their products in some way, but they haven't been as loud about it. Now, at the same time, in a way they have been incorporating AI, right? Siri was an early iteration of using data and models to act as a personal assistant. But then I think they certainly got lapped by Microsoft, especially through its partnership with OpenAI, and by Google and all these other giants. So in a way, we were all expecting Apple to do something with AI; we just weren't sure what it was going to be. And I think they're doing the smartest thing, in that the best advantage Apple has over everyone else today is distribution, right? The one thing that OpenAI or NVIDIA or even Microsoft does not have is a billion-plus devices and users worldwide, where everyone is buying those products and putting the most personal details and information into them: iPhones, MacBooks, iPads. What Apple realized is, hey, we have this incredible distribution advantage. We have all of these devices globally and an incredibly committed user base. What's the best way we can incorporate AI into those products and make it useful for everybody, in a way where they may not necessarily have to release a foundation model and open-source it the way Google or Meta or others do? So I think they are taking a different tack in the AI game.
At the same time, with a lot of the demos they're showing, people are asking, well, when is that actually going to be usable? What are they doing that I can use today? I think that's going to be the next question: for all the really cool things they were showing that Siri can do, that you can do on your iPad, that you can do with Apple Notes, the real question is when does that actually become generally available, and how are people going to use it? But partnering with OpenAI, I think, was a strong move by them.
SPEAKER_04: Right. So overall, generally in line with your expectations, the only question being when the features will be made available.
SPEAKER_01: Yeah, I think the biggest surprise with Apple is frankly that they weren't talking about AI 12 months ago, that they were this late to the party. Apple is known for being well ahead of the game most of the time on consumer devices and things like that. And so, in a year, we might say Apple was waiting until they found the perfect opportunity, got it right, and then decided to implement it. We were all questioning when Apple was going to start talking about AI in a meaningful way, and now they finally have. The next question will be how this actually gets implemented in their products and what people are going to think. Right. Okay.
SPEAKER_04: So the space of LLMs has also been heating up. You have new models from Meta and from Google, and OpenAI has had a new model since their original announcement. The space is heating up with a lot of competition. What is your sense of how these new LLMs compare to OpenAI's?
SPEAKER_01: Yeah, it's interesting. I would say that for a while now, for at least the last six to twelve months, within Madrona, in our Monday partner meetings, we often talk about all of this, right? What's happening in the world of AI? Everything changes every week; every couple of days is different. Our view for a while now has been that we're not going to have one single model that rules everything and is significantly better than everything else. We think the world is moving toward a mix of closed-source and open-source models. Open-source models are getting better all the time, which is great. But right now, it is still GPT that is the best model, right? There was an article today, I think in The Information, citing some recent benchmarking examples showing that GPT-4o is just truly state of the art, significantly better than the competition. What's going to be interesting to see is that OpenAI isn't going anywhere, right? GPT is going to be, if not the best model, a top-three model for a very long period of time, and a number of enterprises are going to adopt GPT. But there are a lot of users of models out there. So we easily think there could be one or two very large proprietary models, and companies built off of proprietary models, whether that's OpenAI and Anthropic and maybe one or two more. And then there are going to be a number of folks who use Llama, and a number of folks who use Mistral and Mixtral. So I think we're going to see a proliferation of these models. For the time being, OpenAI has the lead, and GPT, especially GPT-4o, is just very, very good. OpenAI has the best mix in terms of an abundance of capital and an abundance of talent, and those two things put together are what currently allow them to stay ahead in the model game. I think they know that is not going to last forever.

That's not a perpetual lead; it's a lead that is going to be short- to mid-term, which is why they're apparently exploring areas like agents and applications and how far up the stack they move. Because at some point the models will get more commoditized, their lead will shorten, other folks will get better, and then the question is what you do with those models, right? What is the way that enterprises, or consumers, actually make use of them? I think that's going to be the next big paradigm shift: going from a race to see who has the best model, to a world where three to five models all look pretty comparable. They're all roughly the same on price and performance, with some small deltas here and there. Then the question is what you do with those models once you have them.
UNKNOWN: Right, right.
SPEAKER_04: Now, one of the ways that OpenAI has been able to create a really good model, from what I've heard, is that they've literally scraped the web to train it. But now you have lawsuits from the likes of the New York Times, etc., and they're paying for licensing deals with Reddit and so forth. We also spoke to the Transparency Coalition, a nonprofit that is pushing for disclosure of where model makers are getting their data. So as more and more data has to be paid for, or is not available, what do you see happening to the quality of these models?
SPEAKER_01: Yeah, I think that's definitely a question that's still open right now, and all these model players are figuring out who actually owns the data. Reddit's a great example, right? Reddit has a licensing deal where it is getting paid by OpenAI for the data it provides. At the end of the day, if you scroll through Reddit, as I often do on a daily basis, you will see how many posts there are from users saying, wait a second, Reddit doesn't actually deserve that money; the users deserve that money, because Reddit is all, or mostly, user-generated data. So some of these users are saying, wait a second, if my data is being used for the model, but Reddit, the platform, is getting paid, then that system feels broken to me. Do I continue posting on Reddit? Do I go somewhere else? I think you're going to see the same thing happen with the New York Times, with user comments there, and with other publications that have user-generated content. So there's a whole new question of who is actually responsible for the data going into the model, who should get compensated for it, and what rightful compensation looks like. These are all open questions being figured out. And in the meantime, that hasn't stopped any of the model makers from trying to collect the most data they can to train their models. Because once you're done training on the corpus of the internet, then you go into enterprises, right? I think Jensen at NVIDIA said this in a recent podcast or speech: every enterprise is sitting on a gold mine, because they have proprietary data that's not available on the internet or to anyone else but that enterprise.

So how do you make that enterprise data the most usable, something you can actually leverage in a high-quality, highly valuable way? Going back to your original question of how I think this all plays out: I think expenses will probably go up, right? All these model players are going to have to pay for data, but they're making so much money hand over fist that it's going to be a drop in the bucket for them. I think the bigger debate is going to be who deserves to get paid. And the enterprises don't necessarily care about getting paid for their data. They want to ask, how do we use our data to make our product better? They're not going to strike a license agreement with OpenAI; there's going to be some other fashion in which they use these models to make their products better and, eventually, hopefully, help their end customers as well. So to sum it all up, I think data continues to be the lifeblood of models, and high-quality data, not just quantity of data, is going to matter the most. Would you pay as much for the 14th page of some random chat group about automotive? Do you want data from that? Or is the quality data sitting inside of Kellogg's way more valuable to you? Probably that.
SPEAKER_02: Yeah.
SPEAKER_04: So speaking of enterprise data, I recently spoke to some Google researchers, and what they are trying to do is use small language models and train them on enterprise data, as opposed to taking a large language model like OpenAI's and training that on enterprise data, because the cost of training a small model is going to be a lot less. What are your thoughts on using small language models versus large language models for enterprise data?
SPEAKER_01: I think that's a growing trend for sure. As far as we've seen today, the models that capture the most attention are still the large language models. But small models are going to get better. And I think the other thing is small models suited to the task that the enterprise or the use case is trying to achieve; that's where they make the most sense. We did a podcast recently with a company, Predibase, which is helping fine-tune open-source models. The founder, Dev, had a great line: some folks don't need a model; they need a model that solves a specific task they really care about. They don't need the model to write a sonnet for them or write a classical piece of music. There's a bunch of capability that is not necessary for the specific task I'm trying to accomplish. That might be related to usage-based pricing, right? A very specific model that's good at usage-based pricing. Or it could be a vertical model related to construction software: just taking construction data and getting very, very good at that specific task. I think what we're also trying to figure out is how much generalized information a model needs in order to get really good at something specialized. Do you have to be trained on a general body of data first? Think about it like humans, right? We go to kindergarten, then grades one through twelve, where we learn all the subjects; then we specialize in college or university, and then we specialize even more at our jobs. Does a model need to look like that, starting general and then getting very specific? Or are we going to get to a point where you can have a spun-off model that's just specific to construction, or just specific to music, or something else?
SPEAKER_04: Right, that makes sense. Got it. One last question before I hand it over to Gowri. You've written that open-source models will, by the end of this year, be fairly close in performance to closed-source models like OpenAI's. Did I capture that correctly? Do you still feel that way?
SPEAKER_01: You know, I think what we may have written at the time is that open-source models are definitely going to catch up. The open question is how long that will take. OpenAI continues to churn out incredible models, right? GPT-4o is great, and like I mentioned earlier, it is state of the art. So whether it's at the end of this year or sometime next year, it's hard for me to give an exact date, but it's going to happen that open-source models get just as good as closed-source models. I think we can safely say that over time models are going to get much closer; they're going to get more commoditized. At some point in a few years, we're not going to think, hey, do I use this model for this because it's significantly better than the other? One might generally be better for a specific purpose. So I do think open-source models are getting much better, whether it's Llama or Mistral or Gemini or others, and I could see them catching up to the proprietary models we're seeing today in the not-too-distant future. Cool. Over to you, Gowri.
SPEAKER_03: Oh great. Hey, thanks, Shirish. Vivek, you're from Edmonton, so I'm sure you're partial to your Oilers; they're trying to make a comeback, so that's a feel-good story. Now, you talked about Apple entering AI. Is it more of a comeback story? I mean, Siri was an also-ran, but it was the first AI, right? In some ways, that's how we were introduced to it. Are they being careful and cautious because they don't want to make a Siri mistake? Or did they miss a turn, and so they had to go with OpenAI as opposed to building an LLM on their own? At one point we heard rumors they were building a car; they could have been building an LLM instead. So is this a wait-and-watch by Apple, do you think?
SPEAKER_01: It could be, and I don't have any proprietary knowledge there, but I would say that it's a mistake to underestimate a close-to-three-trillion-dollar company, right? I think that often, myself included, we make the mistake of thinking of Apple like a startup: hey, these startups are moving really fast; how come Apple is not keeping up? And then, oh wait, we forget: that's a company with a three-trillion-dollar market cap, billions in revenue, hundreds of billions of cash. I think we often make that mistake of asking why Apple isn't moving quickly. People often viewed Apple as the leading edge of innovation, with the iPhone and the Mac and everything they came out with. I think what's happening is that they found some products that worked really, really well for them, their install base is really big, and they probably weren't thinking about AI in its current iteration as much. As you said, they were early with Siri; they had Siri for a long time, and it was probably the first real personal assistant. Then when we saw ChatGPT and others come out, we asked, hey, how come I don't have that on my phone? How come that's not running locally? How do I get that to run on my device, and how can I get it incorporated into all my tools? We can't underestimate Apple. At the end of the day, they're a giant company with a lot of resources and a lot of really smart people. And as you said, they had the self-driving car and decided to ax it. Okay, AI is a thing. They probably let AI bake for a couple of years, saw what was working and what wasn't, and then decided, okay, maybe now's the time to enter.

And my personal opinion is that I don't think Apple really cares anymore whether people think it's on the leading edge of innovation. They care about what works. Yep. Right? They're a mature company; they have a lot of devices out there; they've got a reputation to keep in mind; they've got trust that they need to engender and maintain. They can't just roll out a product and say, oh, sorry, it didn't work. That's just not how it works anymore, is my sense of Apple. If it's going to go out publicly, it's going to work, and people are going to like it, and then they'll get it out there. So you're right; I've often fallen into the trap of asking why Apple isn't moving as quickly, and then realizing, oh, they're bigger than most startups we look at.
SPEAKER_03: Let me shift to the enterprise space that you talked about. Enterprises are going to absorb AI; they're all going through their own stages of watching, learning, and incorporating. Agents have obviously become such an important nexus of discussion within the enterprise, because enterprises have all this data and the agents can act on it. Where do you see enterprises really navigating toward as the entry point into AI? The obvious ones, like customer care, we hear about a lot, but those are still, I feel, on the edges or the periphery of the enterprise. They're cost-saving, but they're not yet clearly changing the direction of the enterprise's destiny. From your vantage as an investor, where do you see startups going and saying, hey, enterprise, here are agents that can change the trajectory of your commerce, so to speak?
SPEAKER_01: Yeah, it's a good question. As you say, the first few areas where we're seeing enterprises actually roll out AI are things like customer service and CX, and back-office products and tools: legal, operations, finance. I think those are areas where we're going to see a lot of AI, and it's already happening quickly. The thing I'm most interested in, and that we're most interested in collectively, is how enterprises take AI and transform their products and revenue to do things that are completely different from what they were doing before. It's not just about making things a little faster and a little cheaper, but completely different in terms of the products and services they provide their end customers. I'm thinking about this in the sense that the number one advantage enterprises have over other companies is the data they have. I keep going back to this data piece, because what they're going to find is that the data JP Morgan has been sitting on, the petabytes they've had for a long time, is suddenly going to let them transform the way they underwrite insurance, the way they underwrite new customers, the way they enter new markets. We're going to see that with small companies and big companies. And that's going to take time, because you can't just use your enterprise or customer data in any form or fashion. I think the other thing is that people are often willing to take a bet on, hey, can we make my legal department a little bit better? My finance department? My CX department?

But the minute you touch end customers, your own customers, especially new prospects and the customers that make up your top 20% of revenue, and ask how to transform the relationships that you as an enterprise have with them, that's where I think there are going to be really interesting ways of using AI. And you mentioned agents. Agents is a space we've been thinking a lot about and spending time in. We're looking at agent infrastructure companies doing really cool things using the web browser, and at AI agents that are trying to complete and automate tasks. I think this is where enterprise is going to be really interesting, right? If ServiceNow can augment its own products with AI agents, does that mean the IT admin using ServiceNow doesn't actually have to do that work anymore, and you have an agent running in their place? Or think about the person managing a Salesforce instance at a big company. If they have an agent that does that, and Salesforce provides you the agent, the nature of what Salesforce is selling you looks very different, right? They're no longer selling you the workflow software; they're selling you the end outcome. They're almost selling you a person, or a business function, that does this. And so that's what I'm most interested in: how do you automate the actual work itself, as opposed to just providing the workflow?
SPEAKER_03Let's shift then to your thesis on the companies you look at, right? As an investor today, you look at the AI stack. For the sake of this discussion, let's just say NVIDIA is at the foundation of everything, so we fork over a lot of money to them. I recently read somewhere that they add an Australia to their total market cap every couple of months. But in the AI stack, there are a few large players at the foundation layer, you know, Microsoft, Google, Meta, etc., and the startups are all depending on that foundation. As an investor, where would you say startups are congregating? Where do you see open white spaces, and where are a lot of me-toos that someday OpenAI will just flip a switch and wipe out? How do you evaluate that?
SPEAKER_01The last question is something every investor has been thinking about for the last 24 months: what happens? The question used to be, what if Google does this? And now the question is, what happens if OpenAI does this? And you're right, the way we think about it is there's a stack. There's NVIDIA, which is making money from everybody, and then you have the foundation models, that's the OpenAIs and Anthropics and Coheres of the world. Then you have an infrastructure and middleware stack, which is developer tooling: how do I make the best use of these models? And then on top of that, you have the application layer. As a fund, Madrona has not participated in the foundation model layer. There are some reasons for that, but we think the kind of capital being consumed by those types of companies right now is different from the kind of capital we like to see companies consume. But we have been active at both the infrastructure and the application layer. At the infrastructure layer, we're investors in companies like Unstructured and OctoAI and Deepgram. These are all companies saying, hey, how do we help enterprises, companies, and consumers make the best possible use of these models, whether that's pre-processing data, making your data actually more usable, or creating new speech-to-text and text-to-speech models, things like that. We've been pretty active at the infrastructure layer, and that's honestly where we've seen a lot of activity over the last couple of years, because when any new tech paradigm shift happens, the first thing you see is a lot of developers starting to experiment with it, trying to figure out the best tools to make use of those models.
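The data pre-processing work Vivek attributes to this infrastructure layer can be illustrated with a tiny example: splitting a long document into overlapping chunks so each piece fits a model's context window. This is a generic sketch of the idea, not the actual API of Unstructured, Deepgram, or any vendor mentioned; the chunk size and overlap values are arbitrary.

```python
# Toy sketch of LLM data pre-processing: split a long document into
# overlapping character chunks so each fits a model's context window.
# Generic illustration only; chunk_size and overlap are arbitrary.

def chunk_text(text, chunk_size=100, overlap=20):
    """Return overlapping chunks; the stride is chunk_size - overlap."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

doc = "x" * 250
chunks = chunk_text(doc)
print(len(chunks), [len(c) for c in chunks])
```

The overlap is the design choice worth noting: it keeps sentences that straddle a chunk boundary visible in both neighboring chunks, at the cost of some duplicated tokens.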
So that stack is pretty interesting. But I think the most value over the next 10-plus years is going to come from the application side. That's the end user applications, whether business-to-business or business-to-consumer, the actual apps that users are using. I think we're not even scratching the surface of what's possible in that area. Right now we're seeing a lot of, you know, I don't like the term, and we've written about this, we don't love the term "GPT wrapper," because that actually takes away from what a lot of these companies do. UI matters, the way you actually present this data, the way you structure this data. There's a lot that goes into it. But right now we're seeing kind of this V1 of: okay, there's a model, you have a current task, how do we use the model to make the task better? I think there are going to be some pretty interesting companies that come out of that, and there are applications we haven't even thought of yet, possibilities we don't even know about for when the models get really, really good and all the developer tooling is there. If we look at the last cloud wave, we saw the hyperscalers, a few of which got really, really big and did really well; you had a number of developer tools that did really well, the Twilios and Herokus of the world; and then you got way more applications, 10x-plus applications, everything ranging from Salesforce to Netflix to ServiceNow to Workday, in every function. I think the same thing is gonna happen with AI, where we're gonna have some really good infrastructure companies and some great middleware tooling, because that's gonna be necessary, but then the app market will be 10 to 100x bigger.
And so we're seeing more and more companies that are now starting to build on the app side, vertical apps, horizontal apps. And the thing we always have to keep in mind is, okay, not only what happens if OpenAI might do this, and there's always a "might," OpenAI has a hundred different things they have to do, but also the incumbents, right? You have Microsoft, and obviously Google and Apple, building AI into their own products. You also have winners of the last generation like Canva doing really amazing things with AI. So it's not like you're just creating new white space where you go ahead and create a product on its own. That product has to stand on its own against the incumbents and against some of these other folks. I think that's really important. The other layer we're really focused on, I would call it a layer, but it's really almost perpendicular, because it cuts across all of these, is security. There are new security vulnerabilities, new security paradigms we're starting to see open up, whether that's things like non-human identities or deepfakes or AI-powered security analysts. There's a whole new generation of security companies that we believe are going to be birthed in this current era. That is not gonna go away. Security threats are not gonna go away, so we're very much focused on those as well. But yeah, for us at Madrona, as you know, day one for the long run investors, where we're investing from seed and formation-stage companies to Series B, Series C, and beyond, those are the areas we're most focused on.
SPEAKER_03Got it. Yeah, it's fantastic that you mentioned the incumbents are not just going to roll over and say, oh yeah, all you startups, come use us. The incumbents, such as Snowflake, Salesforce, ServiceNow, and others you mentioned, are also going to tool up their platforms with AI capabilities that enterprises can already use, and the developers have to come in with them, around them, or integrate with them, right? So when you look at startups at this point, they're all grabbing whatever is available, but eventually some of that will narrow into alleys for startups to really come through. But security, as you just mentioned, seems quite wide open, because the vulnerabilities are unclear to all of us at this point. It could be anything; we really don't know where it's coming from, and you could go after that. So maybe you can expand on that. Would that not be the hyperscalers' focus and responsibility, or theirs to lose, quote unquote, because they know it all, in some regards?
SPEAKER_01You know what's funny is, I was having a conversation about this with a founder a little while ago, a security founder, and he said, you know what the funny thing is? Security has been a top-three budget item for CIOs for many years. We have billions of dollars every year being spent on R&D by Palo Alto Networks and Zscaler and all these companies. You have even more billions being poured into the space, and yet we still have hacks every week. So security is one of those things that's just not solved, right? It's just not solved. And we're seeing, again, new attacks, new paradigms, new vectors all the time. So, to your point about hyperscalers, I think they will protect their environments as much as they can, but there are always gonna be holes. That's why in cloud security, even though Amazon and Google have products, you have a company like Wiz that's just done incredibly well and has become almost the go-to cloud security resource that CISOs turn to. So no matter how much security you have from the hyperscalers, each application is gonna have its own security. Snowflake had their own security; they had a vulnerability. Everyone is gonna try to secure their own app and environment as much as possible, but that's not enough. There are gonna be gaps, there are gonna be vulnerabilities, there are gonna be holes, and that means you need third-party tools. So then the question is, okay, once I've got my Palo Alto and my Zscaler and my Wiz, what else do I need? And what else do I need to use? The reason I said we're interested in the space is that there are always new security companies being built and scaled, because, as you say, there are always new problems. The problem of deepfakes, for example, is something we think about a lot.
And one of our venture partners, Oren Etzioni, has been very involved in this space up in Seattle, as you know. But that's a new problem that isn't solved at all, and we're just starting to see the issues that come with identity fraud related to deepfakes. I believe there's gonna be a new business created to solve that problem. Same thing with non-human identities. When AI agents proliferate, there are gonna be 10 to 100x more machines and agents inside of an organization sharing passwords and keys and things like that, that are not humans or tied to a human identity. I think there's probably gonna be a new company built on that. So I think that as much security as all of the incumbents say they implement in their own products, it's not enough, and every enterprise is gonna have to think about what else they need to protect their own environments.
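The non-human-identity problem Vivek raises, agents sharing long-lived human passwords, can be sketched as the alternative he implies: issuing each agent its own short-lived, narrowly scoped credential. This is a toy illustration of the concept, not any real identity product; the agent IDs, scope strings, and TTL are all invented for the example.

```python
# Toy sketch of non-human identity: each agent gets a short-lived,
# narrowly scoped credential instead of a shared human password.
# Agent IDs, scope names, and the TTL are hypothetical.
import secrets
import time

ISSUED = {}  # token -> credential record (in-memory toy store)

def issue_agent_token(agent_id, scopes, ttl_seconds=300):
    """Mint a random token tied to one agent, a scope set, and an expiry."""
    token = secrets.token_hex(16)
    ISSUED[token] = {"agent": agent_id, "scopes": set(scopes),
                     "expires": time.time() + ttl_seconds}
    return token

def authorize(token, scope):
    """Allow only known, unexpired tokens whose scope set covers the request."""
    rec = ISSUED.get(token)
    if rec is None or time.time() > rec["expires"]:
        return False
    return scope in rec["scopes"]

tok = issue_agent_token("crm-agent-7", ["crm:read"])
print(authorize(tok, "crm:read"))   # in-scope request is allowed
print(authorize(tok, "crm:write"))  # out-of-scope request is denied
```

The short TTL and per-agent scoping are the point: a leaked agent credential expires quickly and can only do one narrow thing, unlike a shared human password.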
SPEAKER_03Right. Especially, you know, my thinking is that there'll be a ton of AI deployed for drug discovery, really things that save humanity, right? If big pharma, or small pharma, wants to deploy AI-based models to make their discovery better, it's probably faster than ever before. And that changes the way we look at things like identity, because who is calling on whom to solve a problem in the drug discovery sequence that can, I don't know, cure cancer one day, right? That kind of thing. You're exactly right that deepfakes, or security in general, may become the tip of the spear, where enterprises have to deploy a lot more. Shifting a little bit: obviously you're seeing the two sides, right? Those who are running scared that a ton of jobs will be taken away, and those who are saying more jobs will be created. You're in the business of creation; you guys wake up and invest so that you create products, which leads to jobs, which leads to prosperity. Has the mindset of startup founders changed in the way they approach this problem now, as in saying, hey, I can eliminate this or automate that? Or are they still deeply connected to how they solve the problem? What is it that you're seeing day-to-day as an investor, from a startup mindset?
SPEAKER_01It's a great question. I think it's more of the latter, and I would say that certainly we are there to support the creators as investors, and we certainly can't take credit for what our founders do. But right now there's been talk about, hey, you can go create a company or a business and you only need two people, you don't need a hundred people, and you don't need to take venture funding and all that. I think that's gonna happen, right? I think there will be smaller companies that can be quite valuable using AI. But in this current period, venture funding is going up, not down. The number of new startups is going up, not down, and the capital being consumed is going up, not down. And I think a big reason for that is that we're still in the part of the cycle where creation is happening and things haven't been completely figured out. You still need great talent to build a really good company. When we meet a seed, Series A, or Series B founder, a lot of the questions we ask are still the same questions we asked in the pre-AI era: who are you hiring? What customers are you going after? Where are people finding value from what you're doing? And yes, over time, I think we'll get to a point where you need fewer people to build a hundred-million-dollar business, maybe a lot fewer than before. Even to create a $10 million or a $1 million business, you'll need less investment, because there are more tools available to you, more products available to you. You might have agents doing a lot of this work. I think a lot of that is going to happen.
But that's more three to five years out, as opposed to the next six to twelve months, where the number of pitch meetings we have keeps growing. I mean, we're meeting tons of founders. It's really fun, actually. It's a great time, because we had a bit of a lull at the end of the cloud era, before this current AI era, where you might have seen a lot of me-too companies, the 12th version of a marketing SaaS company or something like that. Now you have real innovation happening, and all these really, really talented entrepreneurs trying to build new businesses, and they still need great talent. They still need capital to do it. But do I think in the next five to 10-plus years we're gonna see many more companies that don't take any venture dollars, that are five people and a few tools, or one person and a bunch of tools, doing what it used to take 50 to 100 people to do? Absolutely. I think we are heading that way. And I think we're probably heading to a future where we're gonna have more founders and more entrepreneurs, not fewer. Whereas maybe an entrepreneur in a previous era needed to have a network, needed to know people, needed a certain background, needed to be able to raise money, that's not gonna be necessary when you can have AI agents running around doing a lot of this work, or a lot of it has been automated by some other product. So that's the future state that I can see happening, and I think it's gonna be really exciting.
SPEAKER_03So, you know, it's a unique moment in time where, quote unquote, corporate venture funding is at an all-time high, whether it be Microsoft investing in OpenAI, let's just take that as the biggest venture investment they've made, as a bet, so to speak, even though they couch it differently. And that's not going to stop, and it isn't just Microsoft. Obviously Amazon has invested in Anthropic, and so on and so forth. Everybody is going to make a lot of investments like a venture fund. Is that a new thing? Do you see it differently, because they are essentially investing off their balance sheet, as opposed to how you operate, where you answer to an LP base? Are corporates coming to you and saying, hey, what are you guys doing? Or is there a mindset difference? Because corporates have never been leaders the way they are at this moment.
SPEAKER_01Yeah, you're right. And we've co-invested and co-led investments with a number of them: Salesforce Ventures, NVIDIA's venture arm, Databricks Ventures. Often each of these CVCs has a venture arm and then a corp dev arm, so they may even operate separately; the venture arm makes certain kinds of investments, and corp dev maybe makes more strategic and bigger investments. You're right, at the end of the day, the two constituents that matter most for VCs, and obviously for Madrona, are the founders we work with and invest in, and the LPs that invest in us. One of the ways a lot of the CVC money has shifted the landscape a little bit is that some of these CVCs, especially if they're investing from a strategic perspective, don't have to care so much about financial return. They're just saying, we want to invest in something we think we'll be great partners with, or we want to help propagate them, partner with them, maybe think about acquiring them at some point, but we want to be involved at some level. And if that creates the round, and sets a certain price for the round, then we have to look at our models and say, hey, do we think we can get a financial return at this price and round size, as if we were leading the investment? So I think that's probably the biggest way we've seen things shift: suddenly you have NVIDIA and Databricks and Snowflake and all these folks putting money in, and that's great. I think that's healthy for the ecosystem and great for founders. But then we have to think about whether the round construct is different from what we would have liked to see, or what we have seen.
In some cases we have co-invested with them; in some cases we've decided to pass on those rounds. And again, we have to make sure there's viable visibility to a great return for us and our LPs. But it's certainly interesting. In the eight or nine years I've been doing this, I've never seen the amount of capital being invested by corporate VCs that I see today. And like you said, they never used to lead rounds. It used to be, we don't want this corporate venture arm to lead our round, because then they have more rights, more information rights, maybe they'll try and acquire us, and all that. But at this point, I've seen them play very fairly, and in many ways they can see the future almost more clearly than many venture capital funds, because they're seeing what's happening inside their own organizations, and that's getting them excited.
SPEAKER_03That's a bit of an advantage they have, I guess; if they invest, you can be sure they have seen something that makes them want to come in. One last question from me. We kicked off this generative AI series in January with four investors making a prediction, so we're not going to let you off the hook without making one for us. One of those came from Tim Porter, no pressure, by the way. So what would be your 12-month prediction, one that we will record and then come back to you in 12 months and say, Vivek, this is what happened? What's your prediction, Vivek?
SPEAKER_01That's a great question. I should have gone and listened to Tim's prediction again before answering. I would say, going back to one of the things we were talking about with open source models: I think 12 months from now, open source models will be just as good as GPT-4o and the other proprietary models, to the point where maybe on the leaderboard there's a little bit of a difference, you might see some delta, but it will be very negligible. And I think that's just gonna open up the possibilities for the kinds of applications and companies being created. At the rate of progress we see on the open source side, I think in 12 months we'll be at a state of maybe not exact parity, but close to parity. So that's how I'll let myself off the hook: I won't say it's exactly there, but very close.
SPEAKER_03But you know, we're gonna hold you to it; we'll come back. And I agree, that is the most likely thing to happen. Open source models will prevail and will get better. I agree with you completely. And we're only gonna hold you to the 12 months. So there you go. Shirish, back to you to close it out.
SPEAKER_04Great. Thank you so much, Vivek. This was a fascinating conversation. I look forward to reading more of your Aspiring for Intelligence newsletter; again, I would highly recommend it to our listeners. It's on Substack. And I look forward to talking to you again in a year's time to see how your predictions turned out.
SPEAKER_01Great. Thank you so much, Shirish and Gary. Really appreciate it. And yes, we'll chat in 12 months and see what happens.
SPEAKER_03Thank you.
unknownThank you.
SPEAKER_03Thank you for listening to our podcast, From Startup to Exit, brought to you by TiE Seattle. Assisting in production today are Isha J and Mini Verba. Please subscribe to our podcast and rate it wherever you listen. Hope you enjoyed it.