A blueprint for AI acceleration
Breakout talks, Trends and inspiration
AI/ML has entered the mainstream, but for many businesses it remains unclear how to identify and capitalize on new possibilities. Learn how AI/ML enhances Stripe’s products and infrastructure, and hear from users about how these technologies are accelerating their business growth.
Speakers
Emily Sands, Head of Information, Stripe
Aravind Srinivas, Cofounder and CEO, Perplexity
EMILY SANDS: Hi, folks! I'm here to share what we've been up to with AI at Stripe. You've heard how Stripe processes over a trillion dollars annually for the millions of businesses that run on us. Our AI stack is all about harnessing that wealth of data so our users, all of your businesses, can increase revenue and decrease cost. Let me start with three examples. The first is transaction fraud. We all know the pain of fraudulent buyers. You let them through, you lose money with disputes and chargebacks, or you block them with imperfect signals. And unfortunately, you also lose money because you're blocking some legitimate buyers too.
Radar is our AI solution for fraud. It's assessing over 1,000 characteristics of each transaction to identify and block just the truly risky ones, to the tune of over $400 million in fraud blocked last year alone. It does all that on the charge path, so in less than 100 milliseconds. And because it's powered by deep neural nets trained on billions of transactions across the Stripe network, Radar is correctly clearing over 99.9% of legitimate charges.
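The score-and-block pattern described here can be sketched in a few lines. This is a toy illustration only, not Radar's actual model: Radar uses deep neural nets over roughly 1,000 features, while this sketch uses a hand-weighted sum over invented signal names.

```python
# Toy sketch of score-and-block fraud screening. Signal names and weights
# are invented for illustration; the real system is a trained neural net.

def risk_score(signals: dict, weights: dict) -> float:
    """Combine whatever signals are present into a single risk score."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

def decide(signals: dict, weights: dict, block_threshold: float = 0.8) -> str:
    """Block only transactions whose score clears the threshold, so the
    vast majority of legitimate charges pass through untouched."""
    return "block" if risk_score(signals, weights) >= block_threshold else "allow"

WEIGHTS = {"mismatched_country": 0.5, "velocity_spike": 0.4, "disposable_email": 0.3}
```

Tuning `block_threshold` is exactly the trade-off named above: lower it and you stop more fraud, but you also start blocking more legitimate buyers.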
The second example I want to speak to is merchant risk and compliance. Each day, thousands of users, hopefully more than thousands today, come to Stripe and need to be onboarded quickly and safely to get going with their business. Last year, we incorporated large language models in our onboarding systems, and LLMs do a brilliant job of unstructured, open-ended tasks, like helping us verify that each user meets our terms of service.
With them, we're now able to auto-clear over 80% of supportable merchants instantly so they can get started with their businesses. And I tell this story because these kinds of merchant risk models aren't just about making sure good businesses can onboard at scale to Stripe. The 13,000 platforms running on Stripe also need to onboard businesses, whether it's theCut, which is a platform for barbers, or Playtomic, which is connecting racket sports players to clubs.
And that's why we empower platforms with much of the same intelligence we have internally, like surfacing merchant fraud signals directly in the Stripe Connect product. And then, there's optimizing payments, because AI at Stripe isn't just about playing defense. Take Stripe's Optimized Checkout Suite. We now offer over 100 payment methods, and we're dynamically surfacing the highest-converting ones to your users.
Under the hood, our recommender system is considering the location, but also the currency, the device type, the browser, the transaction amount, even the activity of that particular customer across the Stripe network. The result is a 3% boost in conversion for all of you, and a 7% increase in average transaction value. And that's just one of dozens of AI optimizations at work, many of them behind the scenes, like the Smart Retries feature in Billing.
For those of you running subscription businesses, you've probably noticed that a full quarter of lapsed subscriptions are unintentional cancellations. Maybe there's not enough money in the customer's bank account, or maybe their card has expired or they've been issued new card details. The good news is that involuntary churn is largely avoidable. Our algorithms figure out the best time to retry the payment, like right when the paycheck is most likely to have been deposited or when the new card is most likely to have been issued.
That means more of your customers can keep their subscriptions, and you can grow your revenue. In fact, Smart Retries on average recovers $9 in revenue for every $1 you spend on our Billing product. You all are a bunch of business decision-makers. So, who's not excited about a 9X ROI? I got at least some.
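The retry-timing idea can be sketched as a search over candidate times. This is a guessed heuristic, not Stripe's model: the hard-coded features (payday windows, daytime hours) are stand-ins for patterns a trained model would learn from data.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of retry-time selection, not Stripe's actual model:
# score candidate retry times and pick the one most likely to succeed.

def retry_score(candidate: datetime) -> float:
    """Toy heuristic: favor common payday windows (funds more likely to be
    in the account) and daytime hours (new cards more likely issued)."""
    score = 0.0
    if candidate.day in (1, 2, 15, 16):   # common payday windows
        score += 0.5
    if 9 <= candidate.hour <= 18:          # business hours
        score += 0.2
    return score

def best_retry_time(failed_at: datetime, horizon_days: int = 14) -> datetime:
    """Scan the next `horizon_days` at 6-hour granularity and return the
    earliest highest-scoring candidate retry time."""
    candidates = [failed_at + timedelta(hours=6 * i)
                  for i in range(1, horizon_days * 4)]
    return max(candidates, key=retry_score)
```

For a payment that failed in the small hours of April 25, this picks the morning of May 1, the first daytime slot in a payday window.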
All right, so that was a quick sampling of how Stripe AI can power your business, and we're seeing our AI stack get even more powerful as we incorporate more generative AI. I think we all intuitively know what generative AI means at this point, but it never hurts to ask the LLM. And one of the most common questions I get asked these days, perhaps second only to my toddlers' incessant “Whys?”, is “how does Stripe get going with generative AI?”
So, I want to start there, and then, I'll cover how gen AI is manifesting in our user experience and how we're seeing companies take gen AI products to market more broadly. So, this particular generative AI story opens early last year, when we, like many of you, were witnessing the amazing technical breakthroughs of large language models and wanted to get started fast. We decided to equip every employee at Stripe with LLM tooling and ask them to apply it to their work. So, this is our LLM Explorer. It's an internal-facing web app with a ChatGPT-like interface where Stripes can engage with various models, and we built it critically with the right security and privacy controls for our users' data so that we could confidently unleash it to the entire company.
We're glad we did, because within just a few days, LLM Explorer was being used by over a third of Stripe employees, and then we quickly added prompt sharing and discovery features to make sure we were really building on each other's work, and soon found ourselves with hundreds of reusable LLM interaction patterns. So, one example is this popular Stripe Style Guide. It transforms any text input to match the tone we use in our user-facing material. So, this is an account exec who is writing an email to a prospect. It works equally well for a marketer who's working on website copy—honestly, even for folks like me who are preparing to speak in front of all of you.
We also wanted to support the development of more advanced applications by engineers, so we started offering the backend service internally in the form of an API. It abstracts away LLM access, it supports over a dozen models, and it comes with a whole host of developer experience, security, and reliability features. Think things like auto-model selection based on context size, or logging for auditing purposes, or back-offs when hitting rate limits. And today, these APIs power over 60 LLM applications across the company.
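The shape of such an internal gateway can be sketched as follows. Everything here is an assumption for illustration: the model names, context limits, and exception type are invented, and `provider` stands in for whatever client actually calls the model.

```python
import time

# Hypothetical sketch of an internal LLM gateway: auto-select a model by
# context size, log each call for auditing, and back off on rate limits.
# Model names and limits are illustrative, not any real configuration.

MODELS = [                      # (name, max context tokens), cheapest first
    ("small-model", 8_192),
    ("large-context-model", 128_000),
]

class RateLimited(Exception):
    """Raised by the provider when we hit its rate limit."""

def select_model(prompt_tokens: int) -> str:
    """Auto-select the cheapest model whose context window fits the prompt."""
    for name, max_tokens in MODELS:
        if prompt_tokens <= max_tokens:
            return name
    raise ValueError("prompt exceeds every model's context window")

def call_llm(prompt: str, provider, max_retries: int = 5) -> str:
    """Route to a model, retrying with capped exponential backoff."""
    model = select_model(len(prompt.split()))   # crude token estimate
    for attempt in range(max_retries):
        try:
            response = provider(model, prompt)
            print(f"audit: model={model} prompt_len={len(prompt)}")  # logging hook
            return response
        except RateLimited:
            time.sleep(min(2 ** attempt * 0.01, 1.0))  # backoff between attempts
    raise RuntimeError("rate-limited after all retries")
```

The value of the abstraction is that every application gets model selection, auditing, and retry behavior for free instead of reimplementing them.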
I'm going to cover several examples in a moment, but in a nutshell, we're seeing generative AI applications at Stripe doing two things. The first is enabling nontechnical folks to do things they couldn't do at all before, and the second is enabling technical folks to move an order of magnitude faster. And that's true both for our own employees and for the employees of our users.
So, here's an internal example. We regularly receive automated emails from financial partners, and we used to process them manually, one by one. So, like computers sending emails to humans to kick off the next step. Now, we have LLMs as a bridge between the emails and the API calls, and that allows us to automate responses to disputes, it allows us to trigger downtime notifications to users, and we can inject human oversight where needed.
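The bridge pattern looks roughly like this: an LLM turns an unstructured email into a structured action, and ordinary code dispatches it. The extractor below is a regex stub standing in for the real LLM call, and the endpoint path is hypothetical.

```python
import re

# Illustrative sketch of an email-to-API bridge: extract a structured action
# from a partner's automated email, then map it to an API call, with a
# human-review fallback. The extractor is a stub in place of an LLM prompt
# like: "Extract the action type and dispute ID from this email as JSON."

def extract_action(email_body: str) -> dict:
    """Stand-in for the LLM extraction step."""
    match = re.search(r"dispute\s+(\w+)", email_body, re.IGNORECASE)
    if match and "evidence" in email_body.lower():
        return {"action": "submit_dispute_evidence", "dispute_id": match.group(1)}
    return {"action": "escalate_to_human", "reason": "unrecognized email"}

def dispatch(action: dict) -> str:
    """Map the structured action onto a (hypothetical) internal API call;
    anything unrecognized gets human oversight instead of automation."""
    if action["action"] == "submit_dispute_evidence":
        return f"POST /v1/disputes/{action['dispute_id']}/submit"
    return "queued for human review"
```

The key design choice is that the LLM only produces structured data; the side effects stay in deterministic code, which is where the human oversight hook lives.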
I started with some internal examples because while most of us first dream of user-facing applications, internal use cases are often where we actually start. They provide a sandbox to build confidence and comfort in our model outputs, and they're an easy space to get groundswell. I'm sure everyone in this room has at least one brilliant idea about how LLMs can improve your own personal work life. But of course, we're also applying gen AI to our products, and right now, that's in two primary dimensions: information retrieval and writing code.
I'm going to start with information retrieval. So, we've long invested in excellent documentation to help users get started on Stripe quickly. Now, LLMs sit on top of that documentation and make it even easier to find what you're looking for. So, instead of reading pages of content, you can get instant answers to your specific Stripe questions. You may have already interacted with this assistant, which is on our support site, or with a related assistant in our Stripe Docs.
Under the hood, we use retrieval-augmented generation, or RAG. And for the non-AI practitioners in the room, RAG is just an efficient way to improve LLM output with specialized context. So, I like to think of LLMs kind of like a librarian: you wouldn't walk in cold to even the most knowledgeable librarian and trust them to choose a book for your next vacation. You'd more likely first give them a bunch of context and color on you, and then ask what they recommend.
And RAG is doing the same. Because LLMs are trained on vast volumes of data, they sometimes present overly generic information or pull from non-authoritative sources. RAG allows us to extend the LLM's capabilities to specific domains and knowledge bases without needing to retrain the models. The assistant I just showed is one of dozens of RAG use cases that have emerged at Stripe, and transparently, we actually didn't have great shared foundations for RAG from the start.
But once we spotted multiple teams managing their own RAG architectures, we quickly moved to a shared knowledge layer like the one here. So, underlying datastore, ingestion pipelines, search and lookup APIs. Trying things out bottoms-up has a lot of value, like covering ground quickly, but it also demands we keep an eye out for shared needs and leverage points and build against them quickly. We have a more developed assistant, and I'm super excited: it's coming soon to your Dashboard. It's going to answer questions not just about Stripe, but also about your business more broadly. Like here, it's providing you guidance on subscription pricing models.
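In code, the librarian analogy reduces to: retrieve relevant passages first, then prompt the model with them. Here is a minimal sketch over a toy knowledge base, with keyword overlap standing in for the embedding search a real knowledge layer would use; the document texts are invented.

```python
# Minimal RAG sketch. Real systems use a vector store and embeddings;
# word overlap stands in for retrieval, and the corpus is illustrative.

DOCS = {
    "radar": "Radar scores each transaction for fraud risk on the charge path.",
    "billing": "Smart Retries in Billing retries failed payments at optimal times.",
    "sigma": "Sigma lets you query your Stripe data with SQL.",
}

def retrieve(question: str, k: int = 1) -> list:
    """Rank docs by word overlap with the question; return the top k."""
    q_words = set(question.lower().split())
    ranked = sorted(DOCS.values(),
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Ground the model with retrieved context instead of retraining it."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The point of the pattern is the last function: the model never answers from its generic training data alone, only from the authoritative passages you hand it.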
And then, there's writing code. So, you heard earlier in the keynote about Sigma Assistant. Sigma is our SQL-based reporting product that allows businesses to get insights directly from their Stripe data. And for most of our users, that's all of their revenue data. So, which customers are buying what for how much? Who's retaining and churning on their subscriptions? A lot of good insights to be gleaned. With Sigma Assistant, our users' employees no longer have to speak SQL to get access to these insights. They just use natural language to ask questions of their data. So, this employee wants to know the per-customer value of last year's cohort. You see the SQL generate, and it's not a simple query. I may be revealing too much about my technical capabilities, but I'm pretty sure it would take me 10 to 15 minutes to write this and debug it. You get a natural language summary of the analytical approach, and then the results render.
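The natural-language-to-SQL pattern behind an assistant like this can be sketched as a prompt plus a safety guard. The schema, table names, and prompt wording below are illustrative assumptions, not Sigma's actual implementation.

```python
# Hedged sketch of NL-to-SQL: give the model the schema and the question,
# then gate whatever comes back so only read-only queries ever execute.
# Schema and prompt text are invented for illustration.

SCHEMA = """charges(id, customer_id, amount, currency, created)
customers(id, email, created)"""

def nl_to_sql_prompt(question: str) -> str:
    """Assemble the prompt an LLM would turn into a SQL query."""
    return (
        "You write read-only SQL for the schema below. "
        "Return a single SELECT statement, no commentary.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Question: {question}\nSQL:"
    )

def guard(sql: str) -> str:
    """Reject anything but a SELECT before execution."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only read-only queries are allowed")
    return sql
```

The guard is the important part: generated SQL is treated as untrusted input, never run with write permissions.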
And I like this example because I think it highlights both themes: LLMs helping nontechnical folks do things they couldn't do at all before, since a business user could run this without knowing SQL, and helping technical folks move an order of magnitude faster. It doesn't matter how much of a SQL ninja you are; that was faster than you could produce it. So, all of that is what's live now. But where are we headed with gen AI from here? Patrick earlier today mentioned domain-specific foundation models. They're showing promise across industries, and payments is no exception.
In this morning's keynote, Will mentioned the near-infinite optimization opportunities in the payments space. Each payment involves many micro-decisions. What's your gateway choice? Network token or PAN? BIN-level ISO message formatting? Cross-gateway retries? Dynamic MCC selection? We're already using AI to make these decisions to maximize your profits, and we're experimenting with whether a foundation model trained on Stripe data can execute them even better.
One level up for those of us who geek out on software-defined financial services, there's a broader opportunity beyond payments. As an economist by training, I'm very interested in what a gen AI model trained on Stripe data could do for all of our users. Could we become more of an economic operating system for all of you? And you can imagine all sorts of ways this could be productized. Like the assistant we showed earlier could proactively serve you business insights. You could hit an API to get customer-level predictions, their LTV, their propensity to churn, their propensity to buy. Maybe you even turn important personalization decisions you're currently making for your customers, pricing, discounts, recommendations, on autopilot with Stripe.
And then, even further up, at the macro layer, businesses around the globe rely on economic signals like the Consumer Price Index for tracking inflation or the Small Business Index. But these are lagging indicators. So, can real-time data at Stripe scale speed the time to insight? The point is, we're already abstracting away the need for our customers to worry about payments and refunds and disputes. And we can do all of that because of our scale in data.
With LLMs, it's not hard to imagine a world where we're able to also execute a bunch of higher-order tasks, really hyper-personalized to your business and the needs of your customers. Built on payments data, yes, but well beyond the payments space itself. Of course, AI isn't just powering financial services, it's powering a whole host of industries. So, I want to step back and look at how AI companies are doing business.
Stripe has worked hand-in-hand with the builders of several technology waves. The current wave is AI, and this wave looks different. So, first, unlike some of the past generations of software companies, AI companies face substantial compute costs straight out of the gate, and that puts pressure to build monetization engines quickly. Second, a lot of these companies are seeing global appeal for their products. They're building models, infrastructure, digital art and music, super borderless things. Plus, LLMs are great at translation, so increasingly, companies are going global from day one. Third, many AI companies are iterating fast on monetization models. Is it fixed-fee subscription? Is it a pay-as-you-go usage-based model? Is it a credit burndown? And that's not just because of those high compute costs, but also because the market is scrambling to figure out that intersection of supply and demand.
And then, fourth, because today's AI companies are generally monetizing at a much earlier stage, we have very lean teams needing to operate like very real businesses. And all this manifests in how AI companies are using Stripe. Our payments product is a fast way to get revenue in the door, especially for small dev teams who are really focused on just shipping the core product.
Last year, twice as many AI companies went live on Stripe as the year prior, and revenue from AI companies on Stripe more than tripled. We're also seeing high adoption among AI companies of products and features like Stripe Billing to experiment with those subscription pricing models and usage-based billing. And more than half of new startups adopt at least one product in our Revenue and Financial Automation suite within their first week on Stripe, which includes offerings like Tax and Revenue Recognition and really speaks to their need to scale their revenue journeys.
I am thrilled now to be joined up here by the CEO of one of those users, Perplexity. Perplexity is an AI research tool that is increasingly known as a replacement for Google Search. Its founder, Aravind Srinivas, was previously on the research team at OpenAI, and at DeepMind before that, and started Perplexity in August of 2022. Since then, in a very short window, Perplexity has already reached more than 10 million monthly active users, and very recently achieved unicorn status. So, please welcome Aravind.
You're a busy man. Thanks so much for coming.
ARAVIND SRINIVAS: Thank you. Also, I just looked you up on Perplexity, and like you're a PhD student of Larry Katz.
EMILY SANDS: I am a PhD student of Larry Katz, busted.
ARAVIND SRINIVAS: Larry Katz was the chief economist of the US Department of Labor, so that's pretty impressive.
EMILY SANDS: I'm impressed that Perplexity could find out those deep details about me. Did it tell you about my toddlers? No, we can get into that another time. I'm curious if you can maybe just give the room the essence of what is Perplexity and what motivated you to build it.
ARAVIND SRINIVAS: So, Perplexity is an answer engine. It just directly gives you an answer to any question you ask. There were a lot of motivations, but long story short: when we started, we were trying to build LLM-based search products, including a version that we tried to use to search over all the Stripe data in Stripe Sigma.
EMILY SANDS: He built Sigma Assistant before we did. We were inspired.
ARAVIND SRINIVAS: Yeah. And that was because my investor, Nat Friedman, was an advisor to Midjourney, and he had all the Midjourney subscription data. He was like, “Hey, I have no idea how to make sense of this. Build this integration with Stripe Sigma, and I just want to ask in English.” And we built it for him, and he was like, “I love this. Why is Patrick or John not doing this?” So, that's how we got started.
But at that point, we had no company-building experience. All of us were real noobs, both on the operational side and the engineering side. So, we just had to have someone we could ask all our inane questions without the fear of being judged, and whom we could disturb at any point. Saturday night, Monday morning, doesn't matter. And only AI can do that, not humans.
So, we had a Slackbot that would just plug into GPT-3.5 and answer our own questions. It was amazing. It felt like the problem was solved, except it would hallucinate and give incorrect information, and that's a big risk, right? So we had this idea: what if we took one principle from academia, which two of our cofounders come from, including me? The principle is that you're only allowed to say something you can cite.
EMILY SANDS: Yes.
ARAVIND SRINIVAS: You know this, too. You're a PhD. If you write a paper, every sentence you write has to be supported with a citation. So, what if we baked that principle into a chatbot and always made it go look up links and answer? Yes, it'll be boring. It's going to talk like a scholar all the time, but that's fine, it's useful. In fact, when we launched, some people even called it ChatGPT's educated uncle. It's fine, it works. And that's what led to Perplexity. We initially thought maybe enterprises would want to work with us after seeing our execution, but that nobody would use it as a consumer product. Famous last words.
EMILY SANDS: Yeah, and here we are. You know, earlier I said, “Internal applications are a great place to get groundswell,” because everyone has an idea about how an LLM can improve their own personal work life. I had no idea that that's part of how Perplexity arose, to improve your own personal work life.
ARAVIND SRINIVAS: Yeah, yeah. And Paul Graham is famous for saying this too, that you have to build products that you care about. Only then the users will care about it.
EMILY SANDS: So, fast forwarding, for a bunch of noobs, you have a pretty amazing product, including that, it's very snappy. Tell me a little bit about how you built the infra, how you make it run.
ARAVIND SRINIVAS: Yeah, initially we just started with the simplest thing: connect GPT-3.5 and Bing and launch it. Yes, a wrapper. But when you have $1 or $2 million of funding, what do you expect us to do, build our own infrastructure before we launch?
EMILY SANDS: But you were trying to get to product-market fit before you over-invested—
ARAVIND SRINIVAS: Exactly, that's the best way. First get users. See if, after a week, people are still using your product. That's the first step. For us, it was real because we launched on December 7, 2022. After a week or two, people were going on vacation. So, a new product from an unknown startup, why should anyone use it? But our usage sustained throughout the winter vacation. That's when I knew it was a real thing. It's not a fad. It's not a viral product either, because it just literally gives you an answer with references. So, that's when we decided, "Okay, there is something here. Let's commit to scaling it." And we raised our first venture round after that.
EMILY SANDS: I've always been curious what people are thinking about during the winter break. They're like workaholics, suddenly they're on winter break, they don't know what to do with themselves. So, they're like on Perplexity, asking questions. I have to hear later what those logs look like. Well, you have a lot of folks in the room who are wondering how they too can get started with AI. You've already given some tips, including get to product-market fit before you overbuild the infra. There's no shame in wrappers. But, what advice would you give to a founder?
ARAVIND SRINIVAS: I think this is the one thing I've said every time this question was asked, and I want to repeat it: you want to work on something you really care about and would commit yourself to for many years, not just what the market wants. Obviously, if you want to succeed as a business, you will be driven by the market to do things users want anyway. But it's hard to motivate yourself with that alone. So, the core DNA of your product or company should be something you care about. For us, it's being scholarly, knowledgeable. We're all nerds. We love learning about new things all the time. I spent several hours as a kid on Wikipedia, so all of that is coming back, and we're pouring that passion into Perplexity. But it's hard for me to work on something like a sales copilot. I just don't care, right?
EMILY SANDS: Someone does, someone does. It's just not you. Yeah.
ARAVIND SRINIVAS: Yeah.
EMILY SANDS: Yeah, yeah, awesome. There are also a fair few enterprises in the room, and they're going zero to one with some of their AI products. Any sort of advice for them?
ARAVIND SRINIVAS: For going zero to one, I would say just launch it ASAP if you're a product company. If you decide to be an infrastructure company, that's a separate thing, and I've not run an infrastructure company, so I can't advise you there. But if you're launching something, go directly, talk to the users, get it in the hands of people, watch them use it.
This “watch them use it” thing is something that I actually took from the Collison brothers. They would go to the office of a potential Stripe customer, watch them get started with the APIs, see how they used them, make notes of all the issues they ran into, and go and fix them. That was the extent of the obsession and attention to detail. When you start, the obsession, the attention to detail, and the ability to work Friday nights and Saturday nights, that's your strength. That's the thing big companies don't have. It even extends to closing people: you can literally close candidates on weekends. Big companies cannot do that, because the recruiter or HR wouldn't be working at the time. So, that's—
EMILY SANDS: Culture.
ARAVIND SRINIVAS: Yeah, the hunger, and that's your moat. Everybody asks for your moat. We still don't have a moat. There's still no moats. And it takes a decade or two to build a moat. So, until then, your moat is your relentlessness, your hunger, and your intensity.
EMILY SANDS: I mean, I think a decade ago, people would have said payments was a commodity. You've got to have the hunger. The growth of Perplexity has been incredible to watch. You mentioned the product. Where are you taking the product from here?
ARAVIND SRINIVAS: Yeah, so we really want to think hard about what can make asking a question even easier. I have this thesis that answering a question accurately is pretty hard, of course, but the even harder thing is asking a good question. We're all used to typing in one or two words, so making it as easy as possible for people to ask questions is the right thing to do. The alternative vision is training people to become better prompt engineers, but I think that's not the natural way a product should be built. A product should be built for the user; the user is never wrong. So, the AI should take on the work of clarifying with you: ask you clarifying questions, build your prompt together with you, iterate with you, allow you to ask questions by voice, give you concise answers.
The way answers are presented should differ by query category. If you're asking about a person, you want a nice knowledge card about them. If you're asking about a restaurant, you want some visuals. I think all that specific, verticalized work in a generic horizontal product needs to be done. Only then are you a real product; otherwise, as the saying goes, you're just yet another chatbot. So, we have to work hard on that, and we also have to work hard on the question of what comes after the answer. Once you've got a great answer, you earned it. You've actually asked some good questions. So, you should be able to do something beyond just reading the answer and going back, you know? And we're thinking of ideas around that, and how you can use the answer in your regular workflow at work.
EMILY SANDS: People are so proud of the answers they write on Quora. They should be like, “Oh, here's the answer I helped generate with Perplexity AI.”
ARAVIND SRINIVAS: Yeah, that's a core insight that we are tapping into for launching something very soon. And I also want people to be able to do something after the answer on the app. We will be moving from answers to agents. It's still not working that well yet, but the agentic workflow has to be part of the app too.
EMILY SANDS: We're seeing some shifts in how generative AI companies especially are doing business. I'm curious how you're thinking about your own business model. And any tips for folks here thinking about monetizing gen AI applications?
ARAVIND SRINIVAS: Yeah, so subscriptions is how we started. Honestly, don't ask me why, we just copied ChatGPT for that. I wish they had started at $40 a month. All of us would have made more money, and you guys would have made more money too.
EMILY SANDS: We're all doing okay. We're all doing okay.
ARAVIND SRINIVAS: You know, it's sort of the thing: because they fixed it at $20 a month, everybody else has to work with that. But it's still a good enough price point, actually.
EMILY SANDS: But Larry Katz might have to give a little lesson on supply and demand, yeah.
ARAVIND SRINIVAS: Yeah. But it's a good price point. It works. We can actually make things more efficient in the model layer, and models are getting cheaper and better. As you see with Llama 3, even the 8-billion-parameter model is so good. And the 70B is very close to GPT-4. So, things are getting really good in terms of making this actually profitable.
And I would say that with the Enterprise Pro we launched yesterday, the key one-liner is: a version of Perplexity Pro that your boss lets you use at work, because they're not worried about data leakage anymore. That will be charged at twice the price, but we're also going to give more for it, because it's very useful for you at work. And, again, that will also run on Stripe.
And then, the next version of that: we're going to do APIs. People want to build custom versions of Perplexity for themselves, for their work. For example, you just want search over specific domains, or you want to customize it and integrate it with your sheets or with Notion, or build your own Slackbots. We're going to support that, and we're going to bundle it with Enterprise Pro.
And the last part, which we're only going to do at scale, is ads. I think that's the highest-margin business mankind has ever invented in software. So, I think we've got to figure out a way to do that without corrupting the truth of the answer, the correctness of the answer. We're not going to do it anytime soon, but we want to try it out later. So, those are the four ways we...
EMILY SANDS: I didn't know all that was coming. Super thrilling. We are coming up on time. So, maybe just one last question, sort of zooming out. What do you think is underleveraged about LLMs? What is the industry at large underappreciating? What are the collective opportunities we're missing?
ARAVIND SRINIVAS: I think the truth is we are still in a bubble here. The rest of the world still doesn't use LLMs as much as we are all excited about them. So, even the products that exist today can reach a lot more people. If you find an average person on the New York City subway, they're probably not even using any of these tools today. And it's not because the tools don't work. I think the awareness and the habits are still yet to change, and the products have to work and earn that. Don't blame the user.
The second thing: don't discount all the stuff that doesn't work today. Agents are not working well yet, but maybe by the end of the year or early next year, we'll all start having agents. And when we do, so many amazing things can happen. This whole business model of making people click on a link and charging per click, which is Google's model, can be disrupted even more if people don't even have to click on a link. They can just have call-to-action buttons natively in your app, and you're charging for the transaction. That's it. You're not actually charging for the referral. And that way, Stripe becomes a bigger business than it is even today, because agents can execute natively in AI chatbot apps. These things are not being thought about very actively today, but they will definitely happen in the near future.
EMILY SANDS: Okay. So, I'm hearing democratization and build for the future.
ARAVIND SRINIVAS: Exactly.
EMILY SANDS: Yeah, okay. Incredible insights to close us out. Thank you so much, Aravind, and thanks to all of you for joining.
ARAVIND SRINIVAS: Thank you.