Cornell Keynotes

AI, Innovation and Risk: Can the U.S. Maintain Its Market Dominance?

Episode Summary

Former U.S. Chief Technology Officer Aneesh Chopra joins Lutz Finger, a visiting senior lecturer at Cornell, for a conversation on the future of AI-driven innovation.

Episode Notes

Lutz Finger, a visiting senior lecturer at the Cornell SC Johnson College of Business, and former U.S. Chief Technology Officer Aneesh Chopra explore AI’s transformative potential, the competitive landscape and policy imperatives for maintaining U.S. leadership.

The two address ways to balance opportunity with risk across three key lenses—citizens, enterprises and government—and discuss how AI is reshaping the workforce, daily life and privacy. The conversation also covers digital literacy, operational transformation, regulatory challenges, national security and public sector innovation.

What You'll Learn

The Cornell Keynotes podcast is brought to you by eCornell, which offers more than 250 online certificate programs to help professionals advance their careers and organizations.

Learn more in our AI certificate programs, including Designing and Building AI Solutions, authored by Lutz Finger.

Did you enjoy this episode of the Cornell Keynotes podcast? Watch the full Keynote.

Episode Transcription

This transcript was generated using AI and has not yet been edited for clarity and accuracy.

Chris Wofford: On today's episode of Cornell Keynotes, we explore the busy intersection of technology, policy, and business. Our guest host today is Lutz Finger, who is Visiting Senior Lecturer at Cornell's Johnson College of Business and the creator of the Designing and Building AI Solutions Certificate from eCornell.

Joining Lutz is Aneesh Chopra, who is Chief Strategy Officer of Arcadia and the first-ever United States Chief Technology Officer. How cool is that? And as a current member of the National AI Council for Congress and former Secretary of Technology for the state of Virginia, Aneesh brings sharp insights and really cool perspectives from both the public and the private sectors, and he is a great guest.

Lutz and Aneesh share a great rapport from their working experience together, and here they tackle some of the most pressing questions in tech policy. Among them, how should America approach AI governance in today's highly competitive global landscape? In addition to AI [00:01:00] governance and policy, they discuss healthcare transformation and the future of public and private partnerships in technology.

Check out the episode notes to learn more about Lutz Finger's Designing and Building AI Solutions Certificate from Cornell University. Gentlemen, take it away.   

Lutz Finger: Today I would like to talk about AI innovation and risk, and whether the U.S. can maintain its current market dominance. And for that, I'm so pleased to welcome our guest, Aneesh Chopra. He is chief strategy officer of Arcadia. He has built a successful data company in healthcare. He served as the first U.S. Chief Technology Officer, and he is currently a council member of the National AI Council for Congress.

Thank you so much Aneesh for joining us.

Aneesh Chopra: Oh, my friend, it's a pleasure to be with you, and we're gonna have a great discussion. 

Lutz Finger: Yeah. So what does a council member of the [00:02:00] National AI Council actually do?

Aneesh Chopra: Well, that's a great question. I'm new in my role, probably three months in, so a little bit of context. Toward the end of the Trump administration's first term, Congress worked together on a National AI Act. The premise being, as with most jobs and industries of the future, the U.S. aspires to win economically. And so Congress called on the administration to establish an expert advisory council that could report out both to the administration and to Congress on where we are, kind of set the baseline: how competitive is the American economic muscle around AI, and what strategies could we put in place to put us on a more competitive footing going forward?

So we have lots of examples in our country where we have kind of a generational consensus that we want to make progress on a certain set of topics, while different presidents might have very specific advisors guiding [00:03:00] them on how their ideology or approach can make a difference. So we're kind of in that expert-panel, take-the-questions-give-answers role. We meet probably once every other month, and we recently concluded a report on recommendations for the incoming administration: a number of key initiatives that could be done in the first hundred days to keep the momentum going on our AI economic success.

Lutz Finger: This is pretty cool. Essentially, the government takes a bunch of experts like you, who have worked in data, who have worked in AI, who know an industry, in your case healthcare, extremely well, and asks them for statements. Is the EU doing this as well? Is China doing this as well? Do you have your counterparts?

Aneesh Chopra: You know, I don't know if this particular thing is the most important question of the day for how we organize these issues. Our democracy is unique in that we have this kind of shared view of government: executive branch, legislative, judicial. [00:04:00] I think in China, they've got a little bit more of a top-down point of view, as we'll get into in a moment.

And Europe is in a unique place. They seem to have been really enamored with the idea of regulating their way towards these new technologies, and so that's a different perspective. My view is, it tells you something that there are topics that transcend party and ideology, where we want to get everyone moving in the same direction.

The natural extension of that was the work we did on the CHIPS Act, which was maybe a very big shift in terms of putting dedicated resources into jobs and sectors of the future.

Lutz Finger: I mean, okay. So let's like, since we're diving directly into Europe, so J. D. Vance goes to Munich 

Aneesh Chopra: Yes, he 

Lutz Finger: and he talks. 

Aneesh Chopra: let's go to Paris first for his 

Lutz Finger: Yes, yes. He goes to Paris first. Actually, let's start with the Paris speech, that's right. He gets asked by President Macron, who early on had said, I'm sure we'll regulate ourselves out of the market.

So [00:05:00] Macron asks him to give a speech, like to do a quick speech, and that wasn't planned. And he goes and he talks about AI for the American people. Very patriotic phrase, but what does it mean in practice? What's your sense?

Aneesh Chopra: Well, let me just say, I actually found JD Vance's speech quite refreshing and clear. To some degree, when you've got a hundred problems to solve, and as we're going to get into, AI has them: economic competitiveness, infrastructure investment, issues around free expression, safety, multi-stakeholder input. There are a lot of issues. You almost want to have clarity about what's your top priority. And we heard that very clearly first from the Trump executive order on AI, but we heard it very clearly from JD, who put color and texture on it. The AI Safety Summit, which is what it was entitled when London held it, really makes a statement that the [00:06:00] main reason for gathering is AI safety.

And maybe what we saw in the speech and in the aftermath is: maybe we want to maximize the use of this technology for good. Framing it as AI opportunity versus AI safety probably speaks a bit more to my perspective, which is that you want to do that responsibly, but you really want to diffuse the benefits of this technology to the greatest good.

If all we get is better social media algorithms to get us even more addicted to our phones, that is not the end game. We want it to improve our educational system, make our healthcare system much more effective, advance small businesses. So, as I think about it, going back to the Obama administration: how do we maximize the value of technology, data, and innovation to advance the big priorities of the country?

I heard in AI opportunity a lot more of the gas pedal moving, and I hope [00:07:00] we're going to get into this. It was not a statement that we don't need safety. It was simply a statement that we're going to lead with opportunity and we want to work with the industry on getting there, which, by the way, isn't that dissimilar to policymaking around public-private partnerships to begin with, which we will probably cover.

Lutz Finger: Now, what the audience doesn't know: you are actually part of the online certificate that I created for Cornell. You came on, and we cover technical topics, and then you and I have a discussion around applications. For me, that discussion is probably one of the highlights of the course.

And at that point, you focused a lot on saying, look, if the government gets involved, it actually has three stakeholders. I'll just repeat this from the course. The three stakeholders you outlined are the citizens, the [00:08:00] companies with IP rights or innovation, and the nation states with national security. If you put JD Vance's speech in that context and compare it, for example, to Europe or China, how do you see this differently?

Aneesh Chopra: Well, here's how I would characterize it. The speech from JD Vance was very economic and industry focused. The supply side, to some degree: how do we make sure that the model developers, the labs, are racing faster and stronger towards, let's not call it AGI, but some future state where we've got a flywheel, where our models are the models and anybody that wishes to use these models will rent those models from the U.S.?

I think it was a heavy economic stakeholder pitch. China clearly has a bit more of a thumb on the scale on the security and nation-[00:09:00]state use cases. So as it builds its AI capacity, it's got a big customer already, which is to advance their national capacity, advancing the state.

And, 

Lutz Finger: Alex Karp would say the same thing probably for the U.S., right? Alex Karp is the CEO of Palantir Technologies.

Aneesh Chopra: To some degree, let me set this at the outset. President Obama quipped, I think in one media story, that Washington is a battle within the 40-yard lines, in U.S. football terms, meaning we're largely going in kind of a similar direction, but we're going to see a slight tilt. So my commentary is not that we're having 100 percent economic focus, zero on security and zero on citizenry, and then vice versa.

You know, it may be 35, 30, 32 [00:10:00] or whatever. So it may be a slightly nuanced twist. For sure, the U.S. government itself wishes to be a consumer of AI. And within that, the DoD has obviously a very big voice. You see this in drones and all of the activity happening, even in how the war in Ukraine is playing out.

But you're also acknowledging that that is just one component. For us, it's creating an industrial base that can serve every sector: banking, healthcare, real estate, you name it. Europe confuses me, and it has confused me for a little while. If you were to sort of map out where all the unicorns have come from over the last 20-plus years, they don't really concentrate much in Europe.

We see that heavy concentration in the U.S. We see some obvious success stories in [00:11:00] Asia.

Lutz Finger: Which, by the way, just to put it down: it's sad, because Europe actually publishes quite a lot. So content-wise, Europe creates good knowledge, but unicorn-wise, Europe struggles.

Aneesh Chopra: To some degree, the American story is a story of the dynamism of a model that takes talent and knowledge and grinds it through a competitive process to generate productive goods and services. And that, for lack of a better term, innovation pipeline management, we're really good at it in the U.S. And that's why the best talent comes here if they wish to commercialize their ideas.

That's as opposed to an academic focus, where you can certainly rank the best universities and thought leaders and find a high concentration of talent. And I'm not trying to put down the Europeans, I have a lot of respect for the Europeans, but this idea that you're going to be the greatest place to start a business by having the strongest regulatory [00:12:00] regime doesn't strike me as the winning message.

And so we sort of struck a balance in the U.S. I thought the Biden approach was actually reasonable, but I appreciated the JD Vance commentary that says, well, I'm gonna tilt it this way, so everybody knows the flag is on opportunity, while we still encourage industry efforts to do so responsibly. And maybe we'll get into responsible AI in a minute.

Lutz Finger: It's actually interesting, two comments to this. If you have those three entities, you're saying it needs a balance, and very well said. My understanding from the JD Vance discussion was not so much "I put the flag on opportunity" but more that the balance is decided by us, the U.S., and not by Europe.

Aneesh Chopra: Well, the implication, let me parse that a little bit, because even I'm frustrated about this issue. So if our tech [00:13:00] firms are building global platforms, and they are subject to rules in a given jurisdiction, at some point those rules become the foundation for what those platforms have to adhere to, and effectively do become the standard.

So to some degree, Europe has positioned itself to say, we may not be the home of the companies, but we're going to be the ones to set the rules for how those products are going to make it into our market.

The question is whether those rules end up hurting American companies or, in my view, misallocate which problem to solve with the intervention.

So, if I were to think in cost-benefit terms, it may be that we have similar objectives. We want to reduce disinformation and we want to promote competition. I mean, these are basic rules, like mom and apple pie. But if the how-to-get-there is a very complex regulation, where you always have to ask the courts, did I get it right, how far do I have to go, you could [00:14:00] find yourself spending a lot more resources to get an incremental improvement on a baseline service. So, was the juice worth the squeeze, would be the question. And I think JD was sort of speaking to Europeans by saying, look, we've tolerated your taxing the American entrepreneurial spirit, the innovation ecosystem, but only for so long. At this point, get out of the way, let us do what we're going to do, so you don't harm the West against the shared concerns around China's leadership. And with DeepSeek and all the activity that happened in the last few weeks, there's clearly a sense that the two countries may be a lot closer in terms of their advances.

And so, look...

Lutz Finger: You are actually a...

Aneesh Chopra: Democrat. Yeah, I'm a Democrat. I care deeply about societal good, and I appreciated J.D. Vance's commentary around, you know, we're gonna plant the flag on [00:15:00] opportunity. And I hope the corollary to that is: oh, by the way, the industry should self-regulate, we'll collaborate, create standards. So there's a second and third sentence after this speech that I think will kind of fill in the blanks to keep this within the 40-yard lines.

Lutz Finger: Let's pivot for a moment to China, and then we can look at how the China policy has evolved from Obama to Trump to Biden to Trump now. And you asked me in the prep for this call where I...

Aneesh Chopra: ...how much of a head start do we have?

Lutz Finger: Sorry?

Aneesh Chopra: Yeah, how much of a head start does the U.S. have on the models?

Lutz Finger: And my answer is: not a lot anymore. We are at an equal playing field in models. Yes, they are incrementally better, but you know, this is like the difference between a PlayStation and a computer, right? It's an arm's-length race. And [00:16:00] at the moment, okay, Grok came out today.

Amazing. But two weeks ago it was actually DeepSeek. So I don't think we have a big head start anymore on the math level, on the computational level.

Aneesh Chopra: Well, here's the pushback. My view is, if I were in the seat, okay, so back to my time, let's say cloud computing, just to go backwards. One could argue, circa 2005 to 2010, as cloud computing was growing up, it was a jump ball. In theory, you could have seen leaders around the world step up and build platforms that could sell SaaS services globally.

But the U.S. really kind of spread its wings and became the flywheel for SaaS. I don't know what the share of revenues is, but it's a [00:17:00] dominant platform. You know, one of the areas where we tried to help build that strength was to say, look, commercial companies can buy cloud services, and there isn't as much regulatory risk or other operational risk.

And so they could thrive, but maybe in a smaller TAM, because for the rest of these markets, going to the cloud would be complicated. So we worked on FedRAMP to create industry standards towards a more secure cloud. And to some degree, banking and others piggybacked off of that, so we kind of continued the growth engine by bringing it into the public and regulated markets.

So in this China model race, the question to some degree is: there may be a specific computational task on a benchmark that shows parity. But if you lift it up and ask, no, no, what about the guts of the systems and the processes and the whole [00:18:00] package and the application developer ecosystem? That piece feels like America has got a little bit more of its act together.

And so the probability that the industry vertical solutions are going to be U.S.-born and scaled still feels right to me, even if on a specific benchmark, mathematically, we're at parity. And in that context, whatever, I mean, what is the value of these benchmarks? Lutz, I don't even know. Does winning the benchmark get you, like, a prize?

No, it gets you nothing. You've got to win the game. You've got to have products and services that can be scaled.

Lutz Finger: I'm so much with you. And we should probably, we have a question from Elvira about the technology singularity; we should touch on this in one second. However, I know that you and I, and that's also the general idea of the online certificate, believe it is the application that matters.[00:19:00]

And if you buy that, let me do a test with you. I actually didn't prep you, so this is live here, ladies and gentlemen, let's see. If I say the application matters, then I say the money that companies currently have matters, because they have an enterprise value, because they have customers, they have data.

If I now take the enterprise value of the top 10 companies that are doing a lot of AI in the U.S., and the enterprise value of the top 10 companies in Europe, all summed up, and let's make this easy: the U.S. is my height, 6 foot 4. How tall, do you believe, are the top 10 companies together in Europe?

Aneesh Chopra: Six inches? A foot?

Lutz Finger: That is so good. It is this. It is 4 inches. This is really the size of the European...

Aneesh Chopra: And you held up a little soldier, a knight from a European order, like back when it had the military muscle to do all this great work. Anyway...

Lutz Finger: Well, I don't want to make an [00:20:00] advertisement for your favorite online retailer, but I tried to buy something which is four inches tall, and they offered me, obviously, an Iron Man. I was like, I cannot buy an Iron Man. And so I paid $2 extra to get this knight.

Aneesh Chopra: I love it. 
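For readers following along with the numbers, here is a minimal sketch of the scale comparison Lutz is making, assuming only the two heights stated in the conversation (the 6-foot-4 and 4-inch figures); the rough 19-to-1 ratio it prints is illustrative arithmetic on the analogy, not a precise market-cap calculation.

# The height analogy from the conversation, made explicit: 6'4" for the combined
# enterprise value of the top-10 U.S. AI companies vs. 4 inches for Europe's top 10.
us_height_inches = 6 * 12 + 4      # 76 inches
europe_height_inches = 4           # 4 inches
print(us_height_inches / europe_height_inches)  # => 19.0, roughly a 19-to-1 gap on this analogy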

Lutz Finger: Now let's stop. But the idea here is: AI will help with applications.

That brings me to one of the questions from the audience. And audience, if you're listening to this, this is your moment to get prime time with Aneesh, so type in your questions and send them out. Elvira, thanks for your question: what are the conversations around the technology singularity, and how will this affect jobs?

Let's come to the jobs part a little bit later, but let's touch on this: what is your view on the technology singularity idea?

Aneesh Chopra: Well, the [00:21:00] beautiful thing as a policymaker is that it's not my place to have a view on the current state of the technological evolution and when machines might match humans. There's a set of experts that will decide when and if we're going to see such a thing. I don't have the expertise or knowledge to say it's coming by this date or that date.

The question I come back to is: what is the role of government in an environment where this technological progress is taking place? And I believe the Obama framework for this holds even in the Trump administration world. We referred to it as our Strategy for American Innovation, and it basically asks the question: what can we be doing collectively to ensure that the jobs and industries of the future are [00:22:00] designed and scaled, with the outcomes and the benefits accruing to the American people?

Again, if there is a singularity, what are the implications of this? We'll get to the jobs issue in a minute. We've historically made investments as societies in infrastructure: roadways, railways, and runways. The new obvious infrastructure looks a lot more digital. R&D, computing infrastructure, hence Stargate and some of the questions about what public-private partnerships can...

Lutz Finger: It's financed.

Aneesh Chopra: Yeah, and human capital, talent. So if you think about this question of singularity, it speaks to: what's the infrastructure that facilitates people thinking, doing, acting, investing, and building towards this kind of future? Well, you need to have a robust foundation.

Lutz Finger: I actually like this. Let me answer the clarity [00:23:00] part from my point of view. I like the approach that you describe: no matter what, whether it is there or not, the singularity is also there to serve us, the people. And for that we need regulation and we need a structure, and I like this structure, and the structure should be focused on opportunity, coming back to the earlier point.

But let's talk about the singularity. During my time at Google, I also had the pleasure to meet Demis Hassabis, the Nobel Prize laureate, as he worked with Google on AlphaFold. And he just recently doubled down on saying, guys, the AGI, I don't see it yet. He actually said there's a 50 percent chance that we might see something as generalizable as a human in five years' time, which is still some way out.

Other people would give it more time. So [00:24:00] just because a tool like ChatGPT, DeepSeek, Gemini, or whatever sounds intelligent does not necessarily mean it's generalizable. Just because a car can drive by itself doesn't mean it's generalizable. Just because my chess computer beats me doesn't mean it's generalizable.

And you can actually tell this when you go on OpenAI and have to select between five different models: the model for planning, or the model for search, or the model for something else. That's not generalizable. Now, we are on a path, but first we need to get AGI to be like a human, and then there might be a singularity.

And that is even further out there. So that would be my technical take on it. But no matter what, we need the supporting structure of the nation state to thrive.

Aneesh Chopra: Yes, and so to finish that part of it, it needs three things. The first is this infrastructure layer. The second [00:25:00] is, and this is where you get into the Europe versus China versus the U.S. question, rules of the road to promote competition and outcomes, basically to facilitate greater results that accrue to the benefit of the American people.

Historically, this has been antitrust rules, net neutrality rules around, you know, how do we leverage the pipes to our homes. And what we're hearing right now is an active debate: should AI safety be rules and regulations by a nation state? Should it be self-regulatory, benchmark-driven industry norms?

So one can have a pretty healthy discussion about when we stay light touch and when we graduate, but that's in the second category, rules. And then the third area, I think, is where we come back together again as the American people, and that is: what are the key priorities as a country? So [00:26:00] is going to Mars the priority, right?

Then you get an all-hands-on-deck approach. Is it getting healthcare to the American people at an affordable price? Is it everyone gets access to a Cornell education at a fraction of the price?

Lutz Finger: please.

Aneesh Chopra: So whatever the national priorities are, even if not controlled by the government, there's a call to action by the government that fosters investment and opportunity.

If you put those three things together, the American dynamism, relative to Europe or China at the moment, is really clear: we've found a way to bring harmony to these critical components. And I think, look, I think the Trump administration can be framed in the same way. They're going to be investing in infrastructure; the National AI Research Resource, or NAIRR, came out of a Trump administration effort. So there are investments in R&D, there's recruiting the best and the brightest, this whole debate about H-1B [00:27:00] versus O-1 visas and all that kind of thing. So you see a lot more similarity than differences when you take that overall American strategy into consideration.

America has a different approach to innovation than Europe. 

Lutz Finger: And I like this framing of continuity. I mean, you have been in the White House with Obama. You know Biden. You are now in that setup where you are looking at the council members and discussing with them. And you stress here this continuation of the American approach to innovation, which, I mean, has definitely paid off if we look at the outcome. But DeepSeek is now hitting the market, right?

I could argue all this investment from Google and OpenAI, all of this money, and the debt-funded Stargate announcement might depreciate very soon, because we have open [00:28:00] ...

Aneesh Chopra: All right. Deep 

Lutz Finger: ...open-source models competing. This could be Meta, thanks to Zuckerberg. This could as well be the Chinese DeepSeek. And Dr. Barnhill, Benjamin Barnhill, had a question about this, which I think fits perfectly into this context. So first of all, thanks for sending this in. And audience, please put your questions down; we are completely reachable on all channels. From a government perspective, what is the implication of AI/ML-based models, specifically thinking about DeepSeek LLMs, in terms of data sovereignty? Or, like, I would cross out the data sovereignty part.

And you said this is nothing new; you see it as a continuation. How do we see this? How did we do it in cloud? How do we do it now?

Aneesh Chopra: Yes. So herein lies the debate. Will the economic value in the era of AI accrue to the applications that bring it to the last-[00:29:00]mile sectors of the economy? In which case, the more open-source foundation models that can be run on an AWS cluster at low cost, the better. I love that Satya Nadella introduced this Jevons Paradox to the dialogue.

I had never heard of it,

Lutz Finger: Amazing.

Aneesh Chopra: It's an amazing observation.

Lutz Finger: Explain it quickly.

Aneesh Chopra: Yeah. So the basic principle is that as the price of a good falls, demand for the good will far outstrip it, and the aggregate pie grows so big that it actually is net positive to the industry. So while I can't speak to the economic return on investment of whatever the $100 billion Stargate plan looks like, or its version of, you know, how it compares to Colossus or whatever Elon did in Tennessee or wherever that was, in our risk-taking culture it almost doesn't matter. Companies are going to have cash, and they can [00:30:00] put that cash to its highest and best use. And if they've come to the conclusion that it's going to be in building out the foundation, power centers and data centers and putting 100,000 machines connected to each other, well, God bless, our economy allows for that to be a bet. And it's not going to, like, ruin the world if that bet was off by an order of magnitude. Then there's DeepSeek's introduction and availability. The fact that this Chinese, whatever, hedge fund, whatever it is, open sourced it. What is it? It's something.
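For listeners who want the Jevons Paradox made concrete, here is a minimal sketch, assuming purely hypothetical numbers (the price drop, the demand elasticity, and the baseline figures are illustrative, not from the conversation): it shows how, when demand is elastic enough, total spending on a resource can grow even as its unit price collapses, which is the "aggregate pie grows" point being made about cheaper models.

def total_spend(unit_price, elasticity, baseline_price, baseline_demand):
    # Constant-elasticity demand curve: quantity rises as price falls.
    # If elasticity > 1, total spend (price * quantity) grows as the price drops.
    quantity = baseline_demand * (unit_price / baseline_price) ** (-elasticity)
    return unit_price * quantity

# Hypothetical illustration: inference price falls 10x, with a demand elasticity of 1.5.
before = total_spend(unit_price=1.00, elasticity=1.5, baseline_price=1.00, baseline_demand=100)
after = total_spend(unit_price=0.10, elasticity=1.5, baseline_price=1.00, baseline_demand=100)
print(before, after)  # 100.0 vs. ~316.2: cheaper models, but a bigger aggregate pie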

Lutz Finger: It was a research center, and they essentially couldn't build the models the usual way because, with the U.S. export restrictions, they didn't have the hardware. So all the design decisions were like, make it work cheaper.

Aneesh Chopra: Right. So, MacGyver it, you know. I'm a huge believer in Indian frugal innovation. There's a lot of debate whether India should be in this.

Nandan Nilekani, whom I respect deeply, is sort of the godfather of a lot of [00:31:00] the digital infrastructure in India. When he announced the India strategy here, it was all the way on the other side: on the use of AI, not on the model supply.

And so, back to this issue of where the economic value will accrue. If it turns out that model availability is ubiquitous, open, cheap, and it works, then a whole swath of our economy can become much more productive at much lower investment. So that's great! India is going to push on use cases that are cheap.

So if the supply side's cheap and use cases are cheap, we could actually have an unbelievable gift, which is that every single person in your country can have a supercomputer that makes them superhuman, right? If everyone from every neighborhood, even the poorest neighborhoods, has the [00:32:00] same capacity as the richest kids with all the resources, man, what a story for economic opportunity for everyone.

So thank you to the folks who've been democratizing this, so that it can actually result in more economic opportunity, more American dream. So I'm, I...

Lutz Finger: Thank you, Zuck, in this case, right? Because...

Aneesh Chopra: If he hadn't made that decision, yeah, right. Here's an interesting thing that happened, back to this nuance. There was a bit of a debate, I felt, with the approach in the Biden administration, so maybe I'll highlight this as an example.

So when the executive orders and the initial statements around what the administration was going to do were being contemplated, one of the first actions was this idea of a frontier model commitment. And this got a little bit wrapped up in some of the [00:33:00] complaints by JD and others. The Biden team wrote in these documents: if the size of your model is greater than X, then you are expected to communicate with our AI Safety Institute and other government agencies. And whenever you set a technical standard into a regulatory framework, it is always going to look in hindsight like it was a ridiculous thing to do, because we've blown past whatever that milestone was, is my best understanding.

It did also emphasize collaboration with all the proprietary models, and it took a little while to pivot to recognize and to celebrate open source. I do believe it would have been nicer to have a bit more encouragement of open source from the outset, because, as we're seeing now, it facilitates a lot more diversity of [00:34:00] app developers. And in America, that diversity and democratization wins. So I wish we had had more celebration of open source in the early days. A lot of the politics on these things is that the military, the national security crowd, they kind of want a few throats to choke, if you will.

So they kind of prefer a smaller number of people that they could get in a room and engage on really complex issues. So I can imagine being in the room. Now, the Biden team never had a CTO, so I don't know...

Lutz Finger: Not like you; you were the CTO on the Obama team.

Aneesh Chopra: Trump had a CTO. Now Trump has, like, promoted the CTO times 10; they have, like, four of them with all the AI officers and so on. So there was an acknowledgement that you want to have the economic council with its equities around how we grow the economy. You want to have the security people say, here's what our equities are, to make sure that we don't give our [00:35:00] enemies an undue advantage and that we can harvest some of this stuff for our own use. You want to have a technological point of view: what's the science and research agenda? And then you want to have a roadmap: how do you put all these pieces together? And that debate should have resulted in more support for open source.

Lutz Finger: But let me push back a little bit on that statement. We should discuss public-private partnership in this respect. So I applaud Zuckerberg for going open source, but it follows from his business model: Meta, as a platform, wants open source because open source allows more creators to create content, which more people consume, which makes his platform more successful.

So it gives the public the tools to create another metaverse so that Meta can really be Meta, right? This makes sense if you think about it. And I think we discussed this in my course, in the program: [00:36:00] how many companies, after the mobile revolution, if we take the last big thing that got Silicon Valley excited, and I'm sitting here in Silicon Valley...

That was the mobile revolution, right? I remember I was at LinkedIn at that time, and we stopped everything in order to catch the mobile train and change LinkedIn into a mobile solution, right? How many companies actually entered the S&P 500 after, let's say, the start of the mobile revolution with the Apple iPhone? After the iPhone came out, seven. Seven new companies entered, because the most value from this mobile revolution was generated for the companies that had access to data and access to customers.

If the same thing happens now with AI, because AI is a very neat interface, then we would see that the private corporations using open source will generate the most value. Now, you are a big [00:37:00] supporter of public-private partnerships. How does this play in? And if I want to be a fly on the wall in one of your meetings, how do you balance between how much public versus how much private?

Aneesh Chopra: Well, this gets to where in the stack you want public-private partnerships. R&D is inherently public-private. You have research centers, computing clusters, and you want to incorporate access for researchers who don't have the budget to cover the cost of these sorts of things. So developing infrastructure is critical.

In the world of standards, to me, the most powerful, highest-leverage point for public-private partnerships is in technical standards development. So I would argue AI safety is the place for public-private partnerships. I would argue the Model Context Protocol, MCP, that Anthropic released with Claude is an ideal public-private [00:38:00] partnership.

So we'd have a common protocol that the industry could rally around for how we allow a machine to interact as a human on websites, and data portability, and so forth. So if you identified market competition as a key priority to drive more economic diversity and economic success, you'll find that if you can get standards developed through public-private partnerships, you increase the possibility for more market actors.

Now, that's easier said than done. I don't know, in the mobile computing blip, when everything was hot, whatever it was, 2014, '15, whatever that time frame was, whether we got that mix right. You know, we got the infrastructure right. We opened up a whole bunch of spectrum; we released more spectrum in the mid-to-late [00:39:00] 2010s than in the prior decade.

And I think that represented a bit of a public-private partnership to spur the applications in mobile. But the competition part wasn't as great, to your point. We had a few winners. And maybe antitrust could have played a little bit more of a role, you know, back to this debate about whether Meta got too big and whether the Instagram acquisition, in hindsight, should have been allowed.

That's a harder legal question. But we win when we have open standards, more competition, and applications that, to your comment about specialized models, can go to market. So there may be lots of players who build products and services on AI that are not the Mag 7.

Lutz Finger: So the NAIAC report, the National AI Advisory Council report, is a long document. It's [00:40:00] worth at least a summarization from ChatGPT, but it's worth a read. You talk about boosting AI adoption for businesses. You talk about training and funding. Mark Logan, one of the people on the stream, asked the following question:

What's one regulatory change you would push for to help U.S. companies innovate with AI without drowning in red tape or losing against less-regulated competitors like China? So if you take the NAIAC report, what's your favorite thing?

Aneesh Chopra: Well, I'm a healthcare guy, so I'm a little biased in that I want to unleash innovation in healthcare, so I might speak to that one in particular. Look, at the moment we don't have a lot of regulations on AI. Commercial AI is not a regulated space. In fact, that's a little bit of the discussion J.D. Vance was having in Paris: I don't want to go down that road. So I can't [00:41:00] say, oh, turn off something that doesn't really exist yet. It's more like, don't turn on the thing that would get us in trouble. Let's keep moving is kind of the headline. So, the one action; there are several actions in the report.

I think in threes. To me, the commitment to keep funding the science and the R&D speaks to the investments in infrastructure. We have to do more. So I think that part of the recommendation, like keep boosting the quality of the supply, is there. The multi-stakeholder encouragement around governance, and ensuring that we have more of that public-private voice, I still think is the right answer.

And in fact, I do think you could make that work even with JD Vance's speech in Europe, because you want the industry to self-regulate, and you want it to have a set of guiding principles so that people aren't competing on shipping unsafe products. You want them to ship stronger products, and then iterate [00:42:00] when they get feedback that there's a problem, so that you can have a flywheel where the private sector can self-organize.

But in the healthcare space, I will say, in the report, my number one focus area, the chapter I spent the most time on: the AI models today are trained on the internet, and internet data represents, whatever, 20 percent, some number, of the global data pie every year, and only a bare fraction of that pie has been used in any AI training. So I am eager to bring healthcare data into the training program, and even more eager to ask the question: how do we build the equivalent of an AI supercomputing doctor, so that we can diagnose disease as early as possible, and we can actually identify interventions that can keep that disease from progressing further, so people can live longer and healthier lives?

Lutz Finger: Actually, let's do a [00:43:00] quick stop here, and let's explain a little bit of your background. I met you during my LinkedIn time, but then we started getting connected. I joined Google Health as one of the early members; we were trying to build it up. And you had built a data company in healthcare...

Aneesh Chopra: I did.

Lutz Finger: ...and you sold it later. It was a very successful one. But explain a little bit to the audience what the premise of CareJourney was.

Aneesh Chopra: Well, the Affordable Care Act, in hindsight, is a gift for anyone who wants to make the healthcare system better, whether you're a liberal or a conservative. There's something in the ACA where you can say, wow, I want to double down on that, because that's how I'm going to make the system better, because nobody believes the current system is giving us the biggest bang for the buck.

We've got exceptional medical device innovation and pharmaceutical innovation and top-talent surgeons and everything else. But for whatever [00:44:00] reason, we're not able to extend the benefits of all that to as many people as we should. And so there's a lot of room for improvement. The thesis in the ACA that I wanted to highlight was that, prior to the Obama administration, it was illegal for any commercial entity to access and use the claims history of government program beneficiaries, Medicare and Medicaid.

So say I was trying to figure out which doctor has the best experience caring for people that have diabetes, by helping them avoid the hospital or the emergency room unnecessarily and getting them care at the lowest cost. You could track this. If you had access to the data, you could basically build a cohort of diabetics, track all over the country what their journeys look like in aggregate, and say, who's got the least number of those avoidable conditions, and is it a [00:45:00] statistically relevant signal?

So we shifted the policy from no to yes when it comes to this issue of opening up government data. In 2015, Andy Slavitt, then the head of CMS, the Medicare and Medicaid program, said: if you have a commercial idea, tell us about it, and we'll go through a Privacy Review Board process, but we'll authorize you to have de-identified access to the Medicare data for purposes of building models that can make the system better.

And so CareJourney's thesis was that we should model for high-value care, and we should model for low-value care, and we should shine light on organizations that are high, and maybe, a bit more scary, highlight those that are not so high. And so CareJourney built, I think, one of the most effective physician quality rating engines in the country. U.S. News licenses it [00:46:00] in a more public way, but a lot of our customers are payers and providers who incorporate it into their referral programming and network development programming, and soon, I hope, extending it into consumer navigation and search.
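To make the cohort idea concrete, here is a minimal sketch in Python, assuming a hypothetical de-identified claims extract; the column names, the avoidable-event flag, and the significance test are illustrative assumptions, not CareJourney's actual method. It groups diabetic patients by their attributed physician, computes each physician's rate of avoidable hospital or emergency room events, and asks whether a low rate stands out from the cohort-wide baseline or is just small-sample noise.

import pandas as pd
from scipy.stats import binomtest

# Hypothetical de-identified claims extract: one row per diabetic patient-year,
# with the attributed physician and whether an avoidable hospital/ER event occurred.
claims = pd.DataFrame({
    "physician_npi": ["111", "111", "111", "222", "222", "333", "333", "333", "333"],
    "avoidable_event": [0, 0, 1, 1, 1, 0, 0, 0, 1],
})

# Baseline rate across the whole cohort (a stand-in for a national benchmark).
baseline_rate = claims["avoidable_event"].mean()

# Per-physician counts and rates of avoidable events.
per_physician = claims.groupby("physician_npi")["avoidable_event"].agg(["sum", "count"])
per_physician["rate"] = per_physician["sum"] / per_physician["count"]

# Is each physician's rate statistically distinguishable from the baseline,
# or is the difference explainable by small numbers?
per_physician["p_value"] = per_physician.apply(
    lambda row: binomtest(int(row["sum"]), int(row["count"]), baseline_rate).pvalue,
    axis=1,
)

print(per_physician.sort_values("rate"))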

Lutz Finger: And I think the core principle here is the sharing of good information, right? People talk about AI as something which is out there and taking over the world, and that always sounds like a nightmare to me. AI is there to support us. AI is there to help us. And in order to do that, it needs data, right? Just as I go to my doctor and tell my doctor everything so that the doctor's AI, called the human brain, hopefully comes up with something that helps me, we need to enable data access, and there are so many good examples in what CareJourney has done. And I at one point had the pleasure to work with Marty Makary, [00:47:00] who, I think, is becoming the FDA...

Aneesh Chopra: FDA commissioner, that's right.

Lutz Finger: And he also built a database to show quality metrics and build out a quality metric. So he was like an application layer on top of your data. If you make data available, then you can measure, and if you can measure, you can see something like quality and improvements.

Aneesh Chopra: Yes. In fact, we licensed Marty's analytical package to bring it in, in a weird way. He and I both built applications on the base data layer; we went in one direction, he took it in a different direction, and we were able to bring his constructs into ours. So you sort of have a better library of insights that we took to market, to the payers and providers and some of the life sciences companies, with his package. It's a wonderful thing that you can do. And in many ways, it's the only job you can feel comfortable doing [00:48:00] to bend the healthcare cost curve that doesn't make you feel like you've cut people's access or boosted costs.

Lutz Finger: Let me push against that. So we started off with J.D. Vance's talk, and we said it's about opportunity, and we talked about there being a balance between those different entities. Now, if you look at healthcare, that balance in general has been tilted way more towards the patient: do no harm. Which is not entirely true.

When new medications are tested, we try to limit harm, but harm does happen in the process of research, right? We try to limit it, but it's happening. Now, what's your point of view? And this is a question from Stephanie, who asks: do we need an ISO for AI and industry-standard guidance?

Is there something needed in order to prevent harm?

Aneesh Chopra: So about a year and a half ago, with about 40 health systems and [00:49:00] health plans, I launched healthcareaicommitments.com.

Lutz Finger: Nice. Healthcare...

Aneesh Chopra: ...AI commitments. With an S: commitments, plural. And the idea was, the Biden administration was issuing the first regulation on electronic health record systems that wanted to introduce AI models on the data sets, like our personal health data, that sit in the health systems' systems.

And so the idea here is that if we could strengthen the governance of the use of AI locally, we can get to a better place where there's a feedback loop about what's considered useful and what's not useful. I'll give you a silly example. There was a New York Times article; "that email from your doctor may have been written by AI and not your doctor" was sort of the headline.

[00:50:00] And you know, one of the debates in AI communities is when to watermark. One of the health systems quoted in the article, UCSD, said: even though we have lots of templates and macros and tools in our daily lives where we don't say, oh, I used this template to start my email or my memo to you, this felt different, and to earn trust, maybe it made sense to disclose that this feature, which allows a doctor to respond in a pre-populated way, could be a productivity tool and not a lazy, just-send-whatever-the-machine-says shortcut. So UC San Diego describes their governance model that allows them to sort of see whether this is, you know, down the fairway.

And their conclusion was: we want to add a passage at the end that says, this message was partially automated and reviewed by Dr. Smith, your clinician. So that [00:51:00] watermarking meant that they were trusting their customers to figure out whether this was happening and whether it was good. But other health systems were like, you know what? We concluded that you don't have to say anything, so we chose not to disclose that this message was partially automated. And the New York Times kind of shone light on this: these two organizations came to different conclusions, having gone through a similar process of governance. I think it's kind of good that we have, you know,

Lutz Finger: I agree.

Aneesh Chopra: ...different organizations attempting different things, and the market will provide feedback, and we'll see. But they can iterate much faster.

Lutz Finger: There is a very fascinating study done by Erik Brynjolfsson. He is a professor and a senior fellow at Stanford. He looked at productivity, and he figured out that the productivity gains for junior folks, the less knowledgeable folks, are enormous. And we have seen something like this in healthcare before.

If you show a complicated X-ray to a general [00:52:00] doctor, they don't know. And there was a test done where you sent doctors exactly the same description of an X-ray; one time you said a human doctor wrote the description, and the other time you said it was an AI, and then you asked the doctors how they felt about it. It was published in Nature. The general practitioners said, oh, thank you, this is so helpful. The X-ray doctors, obviously, said, no, no, no, I don't trust this, the AI is wrong. Now, the point is, AI can scale up, and why would we need a disclaimer? We should just deliver a better service. And obviously we are responsible for the service, because we humans have agency.

We just use an AI tool. That's the essence of the course. Nobody needs to do the certificate, but yes, that's it. We are running into the last five minutes, Aneesh. An outlook to the future: obviously nobody knows, and I will hold you [00:53:00] responsible for what you say now and tell you forever that you got this wrong. But where are we going? What does the future of AI look like?

Aneesh Chopra: To me, it's probably the most exciting time if you want to solve big problems in our world, because AI will make us that much more productive. And if the productivity gains can accrue in health, energy, education, the sectors where productivity has been flat for the last decade or longer despite huge capital investments, then that's going to wake up a lot of sectors of the economy. So I love this SaaS, software as a service, versus service-as-software framing.

I don't know if you've tracked all this. 

Lutz Finger: Yes.

Aneesh Chopra: I love it, because we're going to shift from selling IT systems through the IT department to more like labor [00:54:00] substitutes through the business units. And I think it's going to be a real positive. I think it's going to be great that we're going to see nursing units incorporate AI into their planning process, et cetera.

So to me, I feel very bullish that we're going to see health, energy, and education markets benefit the most. And it'll only go faster with the more open-source models that come. So I'm quite bullish on the impact it'll have on problem solving in the next two to three years.

Lutz Finger: Yeah, me too.   

Chris Wofford: Hey, thank you for listening to Cornell Keynotes. Be sure to check out the episode notes for the details on Lutz Finger's Designing and Building AI Solutions Certificate from Cornell University.

Thank you for listening, friends, and as always, please subscribe to stay in touch.