Anthropic Co-Founder Daniela Amodei on AI adoption, Claude 3 and impact on payments

Joining us now is Daniela Amodei, the co-founder and president of Anthropic. It’s great to see you. Thanks for being here. Thank you so much for having me. Great to chat with you, Kate. This is really full circle, because you started your career at Stripe. That is right. That’s amazing. And then OpenAI, which we’ll get into with your whole career path. But very cool that we’re back here at a Stripe event. Yes. But Anthropic is the topic for today. I want to talk about financial services and artificial intelligence. That’s really the theme of this conference. How do you think AI will affect financial services, banking, payments? What’s your take on that? So something that I think has been really incredible to watch is just the adoption of this technology across so many industries: financial services and healthcare and legal services and technology, right. Some of these businesses are not necessarily the first adopters of something like generative AI. But I really think it’s been interesting to see this inversion, right. So many enterprise businesses have really flocked to use tools like Claude to help transform their businesses and how they operate. In financial services in particular, businesses like Jane Street and Bridgewater Associates use Claude for everything from financial analysis to helping with investment decisions. And I really think there’s going to be a huge amount of opportunity for generative AI at places like Stripe and beyond to transform not just the sort of core bottom line of the business, but also, on the back end, how people communicate, and really help save time with administrative tasks. It’s interesting on the Wall Street aspect. I don’t think of a Jane Street as someone who’s maybe using AI in a chatbot version. Is it chatbots, or is it other uses for artificial intelligence, for trading, for example?
So one of the things that I think is so interesting about this technology is just how wide-ranging and broad the different applications of it are. So the chatbot application is of course one that’s used for things like customer support, or even helping with things like developing marketing materials. But there are so many different applications of a tool like Claude. So this really ranges across everything from helping programmers to write code, right, to financial analysis of complex public information, right. So really going through and looking at what the market trends are and helping to analyze them. And so I think, again, part of the opportunity of this tech is just that it’s able to do so many different things. What are some of the more surprising use cases that you’ve seen in the corporate world for Claude and for AI in general? So I think, again, there’s really this theme of just so many different applications. And so something I think that is really amazing to watch is the way in which different businesses use Claude across multiple different types of applications. So some businesses use Claude 3 Haiku, which is a really fast, very price-competitive model, for things like customer support. But then elsewhere in the business, if they’re a healthcare company or a research company, they might also use Claude to help them look for things like genetic markers of cancer, right. So this sort of broad application really never ceases to surprise us, right? We have programmers that write in and say, hey, you’ve really transformed the way that I do my work. We’ve had creative people write in and say, wow, Claude has really been helping me get unstuck in writing this part of my book. So again, I think this kind of broadness of the tech is something that is both an opportunity and something that continues to drive innovation.
And so you have companies using different versions of Claude, you’re saying, within the business, depending exactly on what they’re looking for. That’s really interesting too. I wonder, I think of that book analogy, and I just think about copyright issues. What data are you using to train the models? So we train on a wide variety of publicly available information, and this ranges from literature to history to science material. And then on top of that, we also use a technique called constitutional AI, which really helps us take the information and the data that we’ve trained on and, on the output end, help ensure that it’s really aligned with human values. So we use founding documents like the UN Declaration of Human Rights and over a dozen other inputs to really help give Claude this sense of ethics, right. So to say, here’s how we want you to respond in ways that are helpful, honest and harmless. Really interesting. I mean, on training in general, you’ve seen companies strike these partnerships, whether it’s with the New York Times or the AP. Any sort of partnership possibility for Anthropic when it comes to some of this content? Part of what I think is so amazing and inspiring about this space is just that everything is sort of being developed, right, really all around us. And Anthropic is a very partnership-based company, right? We work really closely with some very large enterprises, but also with startups and with individuals. So I think there’s a huge range of possibilities for something like the type of partnerships that you’ve talked about. And does that ever make you worried? It’s such a new space, and I wonder how you navigate what could be sort of a minefield of copyright infringement when it comes to training. It feels like it’s just such a big potential risk for a lot of these companies. Again, I think that’s something that is just so interesting to watch and observe here.
So much of the newness of this space, I think, means that there are really just new precedents being set as we go. And I think something we’ve found really interesting to watch is that a lot of this case law is just literally being written as this technology is being developed. That is such a good point. I think it’s sort of like the analogy of building the plane while you’re flying it, and I’m sure that’s hard for you guys to navigate at certain points. I want to ask you about this debate of whether, or when, AI will become smarter than humans. We had Elon Musk predicting recently that AI will outsmart the smartest human by 2025, or at least by the end of 2025. Do you agree with that? I think, in the way that we really try to conceptualize these tools and their progress, we absolutely think the models will continue to push the frontier in terms of intelligence and capability. But really, at the end of the day, I think you can think of these as best used as a very helpful assistant, right? When we look at the ways that businesses and individuals are using Claude today, it’s always best when used in concert with a human in the loop. So in some of the examples I gave you, right, around researchers doing things in the scientific research field or in financial services, all of those customers of ours use Claude to help supplement a human’s skills and abilities. And I think really our goal in developing Claude is to make this helpful, honest, harmless assistant that can partner with you to help achieve whatever it is you want to do. It’s interesting. I mean, I feel like this debate seems so black or white. It’s either this awful worst-case doom scenario or, you know, it’s amazing for the economy. It just seems like there are two sides of it. I wonder where you sit on the potential risk of all of this.
It seems like a lot of the messaging from Anthropic has been about trust, has been about safety. What are your thoughts on just that debate in general? I think our view here is that, just like you said, so much of this technology is so new, right? It’s so untested. We really believe truly that there’s an incredible amount of benefit that can be derived from working with these transformative AI tools like Claude. And again, I sort of keep coming back to this healthcare and science application, but there’s a huge amount of potential to really transform and help make people’s lives better, both from a really blue-skies perspective, but also just from a day-to-day administrative, time-saving perspective. I think our view is really that in order to be able to realize those potential benefits, we really have to get the safety stuff right, right. This is why we’ve always invested so heavily in building these tools in a way that’s trustworthy and really in line with human values. I think if we can really nail the reliability, trustworthiness and safety, the potential positive benefits of this are just vast. Interesting, on sort of the human values: it feels like there are certain values that are sort of subjective. Google had some controversy around Gemini and the way that it trained those models, and some of the image generation. How are you avoiding some of those same mistakes? So Anthropic has really been sort of a pioneer in much of the technological safety research that we do. So much of what we look at is just broad ways of applying safety techniques to our technology. And as I mentioned before, we use this technique called constitutional AI, which sort of sets these guardrails, really to say, what is it that we want the models to be responding with. We also use a technique called reinforcement learning from human feedback, which takes individuals’ inputs.
Most recently, we actually developed a new, well, it’s not quite a technique yet, but we ran it as kind of a pilot, a concept called collective constitutional AI. So rather than just taking written documents, we gathered information from a representative sample of people in America and said, what are the values that you think are important for these AI systems and tools to really represent? And what we found was that this led to a richer, more diverse, more complex set of inputs that we’ve now used. So you’re sort of using average human values. And we all have our own human bias; that must be hard to not insert into certain models. You’re kind of going at the average value. And is there a way to quantify that? It just feels like such a complex problem. We actually have separate teams that also work additionally on the societal impacts of AI and work extensively on topics related to fairness and bias. I think so much of what we’ve seen is that it’s really important to consider, from day one, what is the training data you’re using? How do you clean it? And then also, what are the reinforcement learning techniques that you use on the back end? Of course, it’s impossible to perfectly remove bias. Not everyone even necessarily agrees on what that means. But so much of what we invest our time in doing is figuring out how we make these models as unbiased as possible in the responses that they give. Yeah, that’s a big challenge. Speaking of Google, that’s one of your big investors and partners, same with Amazon. How do you balance having some of these big tech companies, who are also competitors, as your main cloud computing partners, and really partners in general in things like chips? It just seems like sort of a balance. How do you think about that? So we are very grateful both to GCP and AWS for being providers to us, both of the compute that we use to train our models, but also as channel partners for us.
So all three Claude models, the Claude 3 model family of Opus, Sonnet and Haiku, are available on GCP’s Vertex AI and also on AWS Bedrock. So we’re very grateful to be able to partner with them, and to also just help really extend the reach of where Claude is able to operate within different businesses. It’s a really capital-intensive business too. So I imagine it helps to have some of those partnerships, and to have cloud computing companies also as investors. Is it fair to say that, with a company with cloud computing capabilities, there’s also some upside of having them as an investor? I think really the thing you’re gesturing at is that it’s quite expensive to train these models. And so much of what I think has been interesting to see, just from a market perspective here, is that the amount of investment required to make Claude and other frontier AI systems is, you’re right, extremely vast. So we talked a little bit about your time at Stripe. You also were an early OpenAI employee. Seven of the founders of Anthropic came from OpenAI, is that right? Yes, that’s right. So what went down there? Why did you guys end up leaving, and now they’re one of your biggest competitors? Talk to us about sort of that career journey. So, really, this group of people that went and founded Anthropic had this vision of being able to develop these tools in a way that put interpretability and steerability and reliability at the center of everything that we did. And we felt very passionately that this was a great idea, and we really wanted to be able to go off and do it on our own. And I think we felt very happy to see that this really is something that the market wants. So many of the businesses that we talked to really expressed some of the same feelings that we had, right. They said, this technology, the potential is amazing, but how do we know that it’s going to be safe and reliable, right?
If it’s something that ultimately these Fortune 500 companies are exposing their end customers to, how do they know that it’s going to be something that’s trusted and reliable? And so I think that sort of guiding vision was something that really propelled us towards founding Anthropic. And there have been some questions about OpenAI as a nonprofit. Anthropic is a public benefit corporation, so created for social and public good. I think of Patagonia and Ben & Jerry’s as sort of the examples of that. How did you come up with that structure? Why go with the public benefit corporation versus a nonprofit versus a for-profit? Talk to us a bit about that. So you’re totally right. A PBC is about as close to a C corporation as any other kind of entity. The reason that we chose a PBC was we really felt it was important. We’re a traditional company in the sense that we have investors, we issue equity, but we also wanted to really incorporate, in our founding documents, this commitment to public safety, right. We really feel very strongly that we have a responsibility to develop this technology in a way that’s good for everybody. And so it’s very helpful to have that very clearly baked into our incorporation documents. And really, just when we’re talking to candidates and employees, to be able to say, this is really something that is an important value of ours, and we really try to live that value every day. Does that complicate things if and when you ever wanted to go public? I don’t know if that’s in your road map, that you have plans to be a public company, but some of those are private companies. Would it ever get in the way of a potential IPO? So conveniently, PBCs, aside from this kind of social mission being part of the original incorporation documents, are very similar to a C corporation. So there are many public PBCs in the world. It wouldn’t preclude anything like that.
The Ben & Jerry’s of AI. The Ben & Jerry’s of AI, that’s great. In terms of OpenAI as well, I mean, there’s so much competition out there. You talk about some of these LLMs that are growing. Do you think there are going to be new LLMs coming up from new companies? I just wonder about the startup space. Like you said, it is so capital-intensive. Give us kind of a broad view of the competitive landscape and how you think about it. There is. So I think our view is there’s so much happening in the generative AI space, and as wild as it sounds, I think we’re actually still very early in the journey of how this will impact the economy and the market, right. Generative AI has really only been around, on even the most generous version of the timeline, for less than a decade, right. And I think it really exploded onto the scene over the past two to three years. So my sense is there’s quite a lot of room and space still for new upstarts to get involved and develop their own market niche or part of the ecosystem. And I think we will see a lot more innovation still to come from businesses that are building on top of foundation models, but also innovating on their own. What about the global landscape? Where does the US, in your mind, sit versus China, for example? There’s a lot of global competition as well. Something that I think has been really interesting to see is how quickly AI became a global phenomenon, right. We see a lot of market adoption obviously in tech hubs like the United States and Asia and Europe. But it’s been really fascinating to watch that Latin America and Africa and other parts of the world are also very quickly jumping on the AI train, both from a business perspective and from a consumer perspective. So my sense is that we will see continued innovation really across the globe in AI.
And there’s been a lot of interest from sovereign wealth funds as well, Saudi Arabia, there have been reports about them raising massive funds. I was talking to sources who did say that Anthropic really did not want to take money from Saudi Arabia because of the potential national security risk. What would be a national security risk of maybe having the Saudis on your cap table? So, as part of the FTX bankruptcy estate sale, we actually didn’t control the sale, but we were able to weigh in and say who we’d prefer to have on the cap table. What would be, even if it’s not the Saudis, the potential national security risk of any investor on your cap table? I just wonder broadly about AI, if there is a national security risk, what is it? I really think, from our perspective, again, because this technology is just so new and developing, we’re just trying to be as thoughtful and cautious as possible about where we draw those boundaries. And because there are so many possibilities, like you’re saying, I think a lot of it you’re probably figuring out as you go. That’s right. I wonder about that FTX stake. There is still sort of a sliver left. Sources I was talking to said the bankruptcy estate wants to hold on to that, because there’s a thought that Anthropic will raise more money and become more valuable. Any thoughts on that remaining stake? Is it something you’re involved in? I would just love to get your take on that slice of the company that is still technically up for grabs. So obviously we’ve been coordinating closely with the estate, fully supportive of whatever decision they choose to make. But obviously we don’t get to weigh in on or control any of their decisions. But again, we’ve very happily been coordinating with them on that. Yeah, that makes sense, it was a long process. Really interesting.
And then in terms of the future for Anthropic, what should we expect on the road map for this year, whether it’s Claude or other potential models coming out? So we listen very closely to what it is that our customers are asking for. That’s something that we’re always aiming to do. I think the things that never change are that we’re always trying to make the core Claude models better. We’re trying to make them smarter, more reliable, cheaper, available in more places, and I think we’ll see continued improvement on those dimensions over the course of the year. Additionally, we work closely with enterprise customers and startups to really hear what product features are most missing from the Claude offering today, and really try and respond to the kind of market feedback that we’re hearing from them. What about the broader economy? Any thoughts on what AI could mean for the labor force, for inflation, and the things we talk about on CNBC every day? I really think that there’s, again, so much uncertainty here, right? I can imagine things going many, many different ways, and none of them feeling completely surprising in retrospect, right. My sense is that we will continue to see adoption of this technology continue over the course of the year. I think in 2023 it was really about people testing, right? It was the first year that businesses really were starting to understand and wrap their heads around how this technology works, right, and how it can be implemented into their day-to-day workflows. In 2024, so much of what we’re seeing on the business side is that businesses are just getting much smarter, right? They know exactly what their use cases are. They’re figuring out more how to optimize and deploy, whether it’s with one model or across multiple models. And so I expect we’ll see a larger scale-up of the technology really over the remainder of the year to come.
It seems like productivity maybe as well. I mean, that would be a big economic implication if the workforce just got more productive. Is that something you’re already seeing? I think this is actually a really, really interesting and very perceptive point. I think so much of how we’re seeing tools like Claude used today is in helping people to save time, right. So, hey, can you help me update these marketing materials to make them a little bit snappier? Can you do this quick calculation for me? Can you just help me optimize this line of code? So I really expect that productivity is going to be one of the first places that we see this economic impact start to happen. Really interesting. Well, we’re watching the Fed very closely, so that could have big implications. I do want to ask you about running a company with your brother Dario, who we’ve heard from recently as well on CNBC. What is that like? You know, we’ve all got siblings. A lot of people can’t imagine running a company with their brother. What has that been like? You know, Dario and I have always been very close since we were little kids, and I think we always dreamed of doing something big together. And I was saying this to you earlier, but being back at Stripe, where I saw John and Patrick work together so beautifully as siblings but also as co-founders and leaders of the company, I think it must have really planted a seed in my mind. And so Dario and I have been very fortunate to be able to work together across two companies now, and being able to lead Anthropic with him is a genuine privilege. It’s fascinating. We’ve been looking up at and seeing John and Patrick, and then you guys are another great example of that. So it’s been really interesting to follow. Daniela, thank you so much for your time. It’s great to see you here. Thank you so much, Kate. Really appreciate it.
