Cornell Keynotes

AI Today: Laws, Ethics, and Protecting Your Work

Episode Summary

In this fourth episode of our “Generative AI” series, Cornell Tech professor Karan Girotra pairs up with professor Frank Pasquale from Cornell Law to discuss the laws and ethics of generative AI.

Episode Notes

Karan Girotra, a professor at the Cornell SC Johnson College of Business and Cornell Tech, and Frank Pasquale, a professor of law at Cornell Tech and Cornell Law School, discuss the laws and ethics of generative AI while looking at performance guarantees as well as unintended consequences and outcomes.


The conversation highlights how organizations in finance, health, education, media and manufacturing are using these technologies in clever ways and charts a path for the next generation of use cases — ones that go beyond using assistants to enhance individual productivity.

What You'll Learn

The Cornell Keynotes podcast is brought to you by eCornell, which offers more than 200 online certificate programs to help professionals advance their careers and organizations. Karan Girotra and Frank Pasquale are authors of the Generative AI for Productivity certificate. Additional online and in-person programs from these Cornell faculty members include:

Learn more about all of our generative AI certificate programs.

Follow Girotra on LinkedIn and X.

Did you enjoy this episode of the Cornell Keynotes podcast? Watch the Keynote.

Episode Transcription

Chris Wofford: Today on Cornell Keynotes, we bring you part four of our series, AI Today. Cornell Tech Professor Karan Girotra pairs up with Professor Frank Pasquale from Cornell Law to discuss the laws and ethics involved in using generative AI. Frank and Karan look at the many issues that users, as well as vendors and providers of generative AI tech, need to consider when using texts, images and videos, massive sets of copyrighted materials, in machine learning processes and AI models.

 

Chris Wofford: They look at several legal use cases, which hopefully will inform how you choose to protect your organization, while also encouraging you to find new ways to be creative and work smarter. So be sure to check out the episode notes for info on Frank and Karan's Generative AI for Productivity certificate program, and Cornell's other course offerings in this space.

 

Chris Wofford: Here's Frank and Karan in conversation.

 

Karan Girotra: Thanks so much, Chris, for having us, and thank you to my guest today, Frank, who is a guest for the show but not a guest to me: he's my colleague at Cornell Tech. This moment itself would not be possible at any institution but one like Cornell Tech, because Frank is a professor who'd traditionally sit in a law school.

 

Karan Girotra: I'm a professor who'd traditionally sit in a business school. At Cornell Tech, our founding mission is to build technology that matters, and we recognized from the very start that you can't do that alone in a business school, alone in a computer science department, or for that matter alone in a law school. To do it right, you've got to bring all of these disciplines together, which means bringing faculty from different places together, which means I get to have interesting friends like Frank.

 

Karan Girotra: And as a business school professor, I get to speak to things around technology with perspectives from science and the engineering side, from economics and the business side, and also from the legal environment: the frameworks within which society permits us to use these technologies.

 

Karan Girotra: And today, not a big surprise, our focus will be the legal environment around AI. As Chris mentioned, in our previous shows we've covered a lot of different things: open source, business transformation. I think we're on the fourth or fifth episode of what is becoming a series now.

 

Karan Girotra: Today, we'll focus on legal aspects. Before Frank, who is a world-leading expert on these issues, gets us to a better understanding of this, I think it's important to contextualize a little: every technology that comes in has a lot of promise and a lot of peril.

 

Karan Girotra: As we've discussed in previous episodes of this show, and as we've all seen in different waves of technology, our own intention is obviously to go for the promise and not the peril. But as a society, to ensure that we can maximize the benefits, we need regulatory frameworks.

 

Karan Girotra: We need legal frameworks that help us get there. As technology advances, sometimes these frameworks fall behind. Sometimes they're inadequate, but many other times they're adequate and we need legal scholarship to help us connect the pieces. That's exactly what Frank does, and that's what our focus in this keynote today will be.

 

Karan Girotra: Let me get started by directly asking the biggest question on the minds of every leader I talk to, CTO or CEO, who's thinking of using generative AI or other forms of AI: what are the big legal issues? We'll do it in two parts: first, companies that offer these AI products, and then companies that are users of these offerings.

 

Karan Girotra: So first, the companies that offer these technologies, companies like Microsoft, OpenAI or Anthropic. Then we'll talk a little bit more about, say, a bank that's trying to use these technologies. So Frank, what are the legal issues today that you see affecting companies offering generative AI?

 

Frank Pasquale: Oh, well, thanks so much, Karan, for that terrific introduction. The feeling is mutual, and it's just fantastic to be a law professor amidst people with your expertise in business and technology, and our wider faculty. Great to be here. So if I were to think about the key legal issues facing generative AI, starting, as you asked, with the firms that are the providers here, the vendors of generative AI:

 

Frank Pasquale: The first thing that comes to mind is copyright, because all of these large language models, or foundation models as they've lately been designated, are reliant on huge amounts of data. It could be texts, images, videos, sounds. There are so many things that can be fed into these foundation models, and a lot of that material is covered by copyright.

 

Frank Pasquale: And so we've seen lawsuits pop up in the US and other jurisdictions, where courts are going to decide whether unlicensed use of that material for either training or output is okay, or whether they're going to say, no, wait a second, we're not going to give that our seal of approval as fair use or under some other exception, say for text and data mining.

 

Frank Pasquale: So that's a really interesting issue that's come up, and the lawsuits keep coming. We saw a recent one, I believe by the New York Times and Wall Street Journal, against Perplexity. There were many last year involving other firms. And I think this is something to really watch, because even though there are some very good precedents on the side of the vendors providing the generative AI, there's also been a lot of pushback over the past decade or so in the courts.

 

Frank Pasquale: So that's just an initial, foundational issue. Now, assuming that all gets settled, there are all sorts of other issues that can arise because of problems with large language models in particular, or foundation models in general. One being that a large language model is a text predictor.

 

Frank Pasquale: It's a language engine, not necessarily a knowledge engine. And so that leads to things that are commonly called hallucinations. I like to call them fabrications, because I think the hallucination term is a little too anthropomorphic. There's not really a mind in there. It's really something that is predicting, say, what the next word would be, what the next pixel would be.

 

Frank Pasquale: And we've already seen lawsuits involving people who were defamed, right? Cases where the models would confuse the biographies of, say, a security researcher and a terrorism suspect and just say, oh, this is the person. Those sorts of things. Or there was a law professor who was somewhat controversial.

 

Frank Pasquale: And when a chatbot was asked which professors would abuse their students, it mentioned that professor, perhaps because of some correlation between words involving controversies he was involved in and abuse in general. So you're going to see these problems arise, and it's going to be an issue in any sort of mission-critical setting, where we're going to have to worry about the level of reliability.

 

Karan Girotra: We talk of performance guarantees with models all the time on this show. Let me double-click on the first one, copyright. I'll give the layman's argument: sure, these models are trained on a lot of data, but every writer is inspired by other writers.

 

Karan Girotra: Sure, this is at a much bigger scale, but help us understand the argument here, because from a layman's perspective, you're being inspired. It's no different than a painter being inspired by previous generations. What it produces is not exactly the same, at least by and large, not in all cases.

 

Karan Girotra: In many cases, it produces a lot of new stuff. So how is this inspiration different, and what would be the legal grounding for arguing that this is different from what we all do all the time when we learn from the materials we consume?

 

Frank Pasquale: Excellent point. And I'll start by saying that the legal analysis here really goes in two steps. One step is the training process, or the so-called machine learning process; that's the step directly responsive to your question, and I'll answer it in a sec. But the second step is production.

 

Frank Pasquale: So even if I can say, well, I got inspiration from various sources, if I were to produce something that is substantially similar to a copyrighted work that I had access to, I'd still be liable for copyright infringement, right? It's a determination of substantial similarity.

 

Frank Pasquale: And if it's identical or very, very similar, that's going to get me in trouble. So that's one thing in the production phase that this objection doesn't really meet. But to get to the training phase, because I think that's quite interesting: there was actually just a statement released today from 10,000 creatives, including, I think, Kazuo Ishiguro, Radiohead and others, who say that that training is not fair.

 

Frank Pasquale: That it's not fair to just feed their works into the computer. If I had to explain why they would consider it unfair and why it should be a copyright violation, I'd give a few answers. One would be: I can go to the library and, let's say, for a week I read poetry from the 1950s, which is still in copyright, and I might myself write some poetry based on that.

 

Frank Pasquale: I could not actually recite all of the poems that I had been consulting, right? Maybe there are some savants who can do this, and more power to them, it's quite something. But the typical person couldn't just go to a library, read poems for a week, and then, snap, instantly give you back all the poems they've read. Whereas these large language models can, right?

 

Frank Pasquale: They have that sort of capacity once prompted in the correct way, or prompted in various ways. That's been the crux of the New York Times lawsuit against OpenAI. And so I think this is one of those issues where the difference between human mental capacities and, say, a socio-technical system is so great that it would be unwise for courts to just say, well, people learn in certain ways and machines learn in certain ways.

 

Frank Pasquale: And so it's all analogous. I think it's quite disanalogous. That was actually something I explored recently in a paper called Debunking Robot Rights with Abeba Birhane, who does a lot of work in the cognitive science field, and Jelle van Dijk, a philosopher. We were exploring this alleged parallel.

 

Frank Pasquale: And what we found is that there are just so many dissimilarities between these machine learning processes and how any given human would approach the work that it would be pretty unwise, at a policy level, for courts to treat them in the same way.

 

Karan Girotra: So what I'm hearing, just to make it clear to myself: there are two parts, the production part and the training part. On production, as long as there's no substantial similarity, as long as it's not reproduction, there's potentially some middle room, but it has to be determined.

 

Karan Girotra: And then on the training part, you're saying that this is a fundamentally different way of training than reading, or how humans train, and therefore it should not be fair use. Not to go too much into the weeds, but another response to this would be: how is that different from a Google search index, which has also imbibed the entire internet?

 

Karan Girotra: So sure, 20 years back we already agreed to let computers learn everything. It's a different internal structure than an indexed database of the internet's content, but, and I'm playing the robot's side right now, what would be the argument to that?

 

Frank Pasquale: It's a terrific question, and it's something I've thought about: I actually wrote in favor of fair use for a lot of Google search functions in 2007. I published a piece on copyright and information overload, because I thought this was quite a useful function for Google to play, to help people find works.

 

Frank Pasquale: In that respect, it was really valuable. And that's part of the legal analysis here, because under fair use analysis we look primarily at whether a use of a given copyrighted work is transformative, and at its effect on the market. My sense was that, overall, the Google search engine and similar search engines had very positive effects on the market for works, because they were this finding aid.

 

Frank Pasquale: They would help people find works. The difference we see today, and this is something I wrote up this year in a piece with Professor Haochen Sun called Consent and Compensation, is that now we see the potential for a complete displacement of the underlying work.

 

Frank Pasquale: So whereas with something like the Google search engine, I might be looking for images, say, of contemporary art involving flowers, right? And say there's an artist named Bickmore. I could find Bickmore's paintings there and maybe buy one of them.

 

Frank Pasquale: But now you have the situation where, with generative AI, someone could say, hey, could you paint me a picture of a dahlia and a rose in the style of Bickmore? And if Bickmore's material is part of the training set, it's very easy for it to simply substitute for what Bickmore had produced.

 

Frank Pasquale: So this is why I think it's an entirely different legal analysis. In the search engine case, we had a search engine pointing people to work that they could buy, whereas in the generative AI case, you have the potential to entirely substitute the work created by the generative AI for the author it's trained on.

 

Karan Girotra: Very good, Frank. So now we're getting really into the weeds, and that's why our Cornell Tech lunches sometimes take too long, because we'll have these kinds of discussions. But Frank, what I understood from what you said is, in the end, like many things in law, first there is a subjective determination of what is substantial and what is similar.

 

Karan Girotra: And then a lot of the standard depends on the overall harm and the benefit to individuals. Just double-clicking on that point: displacement might be terrible for that artist, but what if I make the argument that displacement is great for society? Because not everybody can own a Picasso, and now you can have your daughter, your kids, painted in Picasso's style.

 

Karan Girotra: I know very little about art and law, so excuse my crazy examples, but one could argue displacement is a net societal positive. So what is the legal standard? Is it societal good, or the good of the individual owner or creator? Because I could make arguments on both sides.

 

Frank Pasquale: A fantastic question. I think one way to think about this is both on the constitutional level and on the level of a fair use defense. On the constitutional level: one of the rationales for the Constitution giving Congress, in the United States, the power to write laws regarding intellectual property, including copyright, is to promote the progress of the arts and sciences.

 

Frank Pasquale: Now, one interpretation of promoting the progress of the arts and sciences could be producing as many works as possible, right? And if that were the case, if we really believed that was the goal, we'd probably be very happy about this stuff. But even in that situation, I have to add a caveat, because there is, I think, an impending problem for generative AI called model collapse.

 

Frank Pasquale: The problem with model collapse is this: at present there's a huge amount of human work the AI can train on. But imagine, 5, 10, 15 years down the road, writers, artists and others who produce expression see the writing on the wall and say, wow, I don't think I'm going to become a writer anymore, because it seems people are buying for five cents the novel I wanted to sell for $24 a copy.

 

Frank Pasquale: At that point, one of the issues we have to wonder about is that the labor of the creators was very valuable and involved things these generative models can't do, right? I mean journalists and other writers; let's focus on nonfiction for now, but I think it applies very generally.

 

Frank Pasquale: They're sensing and looking at the world. And the worry is that if generative AI starts displacing all of the human creations with its own creations, then to the extent it is misinterpreting, or in other ways failing to capture with great fidelity the value of what was in the human expression, that failure ends up getting multiplied.

 

Frank Pasquale: The effect is multiplied because more and more generative AI will be in the training set. So that's what people worry about with model collapse. I know there are people fighting back against that idea, but it's a concern. With respect to the fair use point, the statutory defense point, that really does look at the effect of the use on the market for the copyrighted work.

 

Frank Pasquale: There may be room to think about overall societal impact in the first fair use factor, on transformativeness. But the fourth factor, the effect on the market, looks at the work itself. And that's where I think generative AI has a big hurdle to overcome.

 

Frank Pasquale: If it wants anything like the positive fair use treatment that was given to search engines or other finding aids.
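To make the model collapse dynamic concrete, here is a minimal toy simulation, not anyone's actual model, just an illustrative sketch of the feedback loop Frank describes: each generation of a trivial "model" is fit only to samples from the previous generation, and because generated output under-represents the tails of its training distribution, measured diversity shrinks.

    import random
    import statistics

    # Toy illustration of "model collapse": each generation's model is fit
    # only to the previous generation's synthetic output. Generated samples
    # under-represent the tails (mimicking how generative models drop rare
    # content), so diversity shrinks generation over generation.
    random.seed(0)

    def fit(works):
        # "Train" a trivial model: estimate the mean and spread of the works.
        return statistics.mean(works), statistics.stdev(works)

    def generate(mu, sigma, n):
        # Sample n synthetic "works", clipping extreme draws to mimic
        # the loss of rare/tail content in generated output.
        out = []
        while len(out) < n:
            x = random.gauss(mu, sigma)
            if abs(x - mu) <= 1.7 * sigma:
                out.append(x)
        return out

    works = [random.gauss(0.0, 1.0) for _ in range(1000)]  # human originals
    for gen in range(7):
        mu, sigma = fit(works)
        print(f"generation {gen}: diversity (std) = {sigma:.3f}")
        works = generate(mu, sigma, 1000)  # next gen trains on synthetic data

Each pass loses roughly a fifth of the spread under these assumed numbers; the point is the direction of the feedback loop, not the specific rate.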

 

Karan Girotra: Very fascinating. So what's your prediction? What's going to happen here, settlement, or can they get away with what they're doing? Because from what I hear from you, we've worked through several arguments, and, not to put you on the spot, it seems there is a big hurdle to cross, to use your words.

 

Karan Girotra: Will they find a way to cross it, or will they have to go to licensing? What happens next? Predictions are always a dangerous game, but also a fun game to play. We won't hold you to the prediction, but what do you think will happen here?

 

Frank Pasquale: Well, I'm going to start with my hope and then make my prediction. My hope is that legislatures around the world will follow the guideline that Professor Sun and I developed in our Consent and Compensation article, where we laid out how we believe the problem ought to be addressed.

 

Frank Pasquale: That includes allowing opt-out for creators, and also creating a levy on the profits or revenues of generative AI firms to help compensate creators broadly. That's a model that has been used in other areas. How I think it's actually going to turn out: in many jurisdictions there won't be the type of consensus necessary to legislate a new solution here. My guess is that you're going to see the training phase be immunized by many courts, because they're basically going to say, we really want to see this technology go forward. But I think you're going to see more responsibility, and less favorable copyright treatment of the AI firms, when it comes to the production phase.

 

Frank Pasquale: And there, particularly if there are bad actors using the generative AI to blatantly infringe on characters, say Disney characters or the characters of some of the larger copyright holders, they'll be sued, and then there'll be secondary liability for the providers of the generative AI.

 

Frank Pasquale: That's what I would predict. But of course I don't know how the courts will turn out. It may well be that they go in the opposite direction. But if I had to guess, that's the direction I'd guess it would go.

 

Karan Girotra: So to summarize: on training, they can continue what they're doing, largely because, for whatever good arguments have been made, people want to immunize this awesome technology. Sometimes I feel half the PR is for that purpose, to show the promise and brilliance of these technologies, to hype them up so that they can get the immunity.

 

Karan Girotra: So training would be immunized, but the production phase would probably face stricter standards on being responsible for the content and not doing classic reproduction of, for example, a Disney character. We'll talk more about reproduction, because that really matters to the users of these things.

 

Karan Girotra: But before we move on to the users: in your article, you mentioned a licensing regime, some sort of licensing-and-micropayments type of regime. Now, putting on my economics-of-business hat, or the statistician's hat: in statistics, we can calculate the influence of an article in our data set, how much it influences things.

 

Karan Girotra: The unfortunate thing here is that if we actually do the numbers, everything is so small in this vast, vast ocean of the internet. The New York Times might be prioritized content, for example, but there are millions of people who've written blog posts about it. So if you really look at what the inspiration is, this attribution of who gets what would be very hard.

 

Karan Girotra: And second, I'd say: okay, if there are a trillion documents in my training set, then whatever money I make from each reproduction, I'm willing to share one trillionth of it with the New York Times. And that will not even be peanuts, because remember, all of these companies are just giving this stuff out for free.

 

Karan Girotra: It's kind of the problem search had before it became a monopoly and lucrative: if there's not much money to split, what do you really talk about? Many of these technologies tend to be deflationary, which means they're not stealing from you and selling at a high price. They're stealing from you and giving it away.

 

Karan Girotra: So at that point, I don't know how the licensing would work. In our dream scenario, governments get smart; you said there won't be political consensus, I'd say not smart enough to listen to you. But if they do get smart enough to listen and actually implement the scheme you're describing, how would that even work?
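To put rough numbers on Karan's back-of-the-envelope worry (all figures here are hypothetical, chosen only to illustrate the orders of magnitude, not taken from the conversation):

    # Hypothetical levy arithmetic: how small a flat per-document share gets.
    levy_pool = 1_000_000_000          # assume a $1B/year levy on AI revenues
    training_docs = 1_000_000_000_000  # assume ~1 trillion training documents
    publisher_docs = 10_000_000        # assume one large publisher's archive

    per_doc = levy_pool / training_docs
    print(f"flat share per document: ${per_doc:.4f}")                        # $0.0010
    print(f"large publisher's total: ${per_doc * publisher_docs:,.0f}/year") # $10,000

Under a flat split, even a billion-dollar pool pays a ten-million-document archive only about $10,000 a year, which is why the allocation question Frank takes up next (equal shares among registrants, or influence-weighted attribution) matters so much.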

 

Frank Pasquale: Fantastic question. And certainly there are lots of people who are skeptical about the workability of such a scheme. I have one response that I would make from a legal perspective, as opposed to, say, a business perspective. From a legal perspective, even if there were no way to allocate this money fairly, even if, and I know this will sound extreme, but I actually think it follows from the principle I'm about to state, even if all the money were taken and burned, it would still do something called the rectification of unjust enrichment. In legal terminology, a lot of times we are concerned about the idea of unjust enrichment. If someone has unjustly enriched themselves, then the courts have the power to rectify the unjust enrichment. Now, of course, there might be a very complicated, controversial allocation of the money taken pursuant to the unjust enrichment.

 

Frank Pasquale: Yeah, there could be a lot of arguments about that allocation, right? We saw arguments about how, for example, with the 9/11 trust fund, you would allocate for the life of a 25-year-old bond trader versus a 47-year-old dishwasher at the Windows on the World restaurant, etc.

 

Frank Pasquale: The law has dealt with these extremely difficult questions of allocation before. That said, I don't think I have to take such an extreme position with respect to unjust enrichment. With respect to trying to do this fairly, one way of doing it is simply allocating equally among all those who have registered works in a database to claim compensation.

 

Frank Pasquale: So that would be one way of doing it. Now, of course, people could game that system. They might create, say, 1,500 poems online that have themselves been generated by generative AI. But I think there are also modes of judgment one could apply to say: this is a sincere and actual addition to the database, versus this is a manipulative effort to get money out of it without producing something one would have produced absent the compensation scheme.

 

Frank Pasquale: So that's one way of doing it. It could be divided equally, or there could be ways of trying to determine influence. I think there are actually some AI professors at the National University of Singapore who are trying to figure out this question of attributing an output from generative AI back to given documents.

 

Frank Pasquale: One thing I always thought would be very valuable would be that for every output the generative AI produces, it might be required to also give you access to the five documents that are most like it by some sort of algorithm: say, word matches plus length, using the classic methods that have been pioneered for thinking about document similarity.

 

Frank Pasquale: I think that sort of requirement would be quite illuminating. So those are the sorts of things that I think could help us make a fair allocation.
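As a rough sketch of the "classic methods" Frank alludes to (a hypothetical illustration, not any actual product's attribution system), a top-five most-similar-documents requirement could start from something as simple as word-overlap similarity:

    import re

    # Minimal word-overlap (Jaccard) similarity, one of the classic
    # document-similarity ideas mentioned above.
    def tokens(text):
        # Lowercase word set for a crude bag-of-words comparison.
        return set(re.findall(r"[a-z']+", text.lower()))

    def jaccard(a, b):
        # Similarity = shared vocabulary size / combined vocabulary size.
        ta, tb = tokens(a), tokens(b)
        return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

    def most_similar(output, corpus, k=5):
        # Rank the k corpus documents most similar to a generated output.
        scored = [(jaccard(output, doc), doc_id) for doc_id, doc in corpus.items()]
        return sorted(scored, reverse=True)[:k]

    # Tiny stand-in corpus for the training set.
    corpus = {
        "bickmore_notes": "a dahlia and a rose painted in a bold modern style",
        "finance_story": "stock markets fell sharply on fears of rising rates",
        "garden_poem": "the rose and the dahlia bloom in the painter's garden",
    }
    generated = "paint me a dahlia and a rose in a bold style"
    for score, doc_id in most_similar(generated, corpus):
        print(f"{doc_id}: similarity {score:.2f}")

A production system would need something far more robust (stemming, weighting of rare words, scalable indexing), but even this crude ranking shows how an output could be required to surface its closest training-set neighbors.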

 

Karan Girotra: Yeah. In any case, not finding a way to do it is not a good enough excuse. That's what I heard: even by the rules of the playground, "I took the candy, I ate it, I can't give it back now" is not a good enough excuse to escape punishment, in legal terms. So, transitioning a little into the users of AI: we have several audience questions on different kinds of users and what their concerns might be.

 

Karan Girotra: Perhaps we could do it in a few contexts. Some of the big use cases of AI, at least the ones that get the most attention, are often in medicine. In the medicine space, we've seen a lot of mental health apps that give you advice and counseling therapy in some form or another. And AI companions, or honestly AI girlfriends, are probably one of the most lucrative parts of the space right now.

 

Karan Girotra: So, AI in the mental health context: say I'm a mental health company using AI to provide some of my services. What are my risk exposures? What do we need to worry about, and what should we think about in those kinds of contexts?

 

Frank Pasquale: That's a terrific question in terms of thinking about the nature of current and future uses of AI. One initial divide I would make is between devices that are, say, general health and wellness devices, versus devices that are actually marketed for or indicated for curing a particular mental health condition.

 

Frank Pasquale: Right. And I think a lot of firms in the area have been trying to stay in the safe haven of a general health and wellness device, because there's not much health regulation specifically of those. There are other forms of regulation, though, with respect to those sorts of devices.

 

Frank Pasquale: So even if you're just a general health and wellness device, you're going to be subject to rules about unfair and deceptive acts and practices, either at the federal level as enforced by the Federal Trade Commission, or at the state level under state UDAP laws, unfair and deceptive acts and practices laws.

 

Frank Pasquale: And I think that's an important area for those who are developing AI, and those who are using it, to be aware of. The worry is that sometimes promises are being made for these devices, these interfaces with generative AI, that don't really match up with results, or that haven't been validated in a systematic way; that haven't been substantiated, in the words of the Federal Trade Commission.

 

Frank Pasquale: Now, on the side of actually trying to provide medical care, there are certain technical FDA regulations on software as a medical device that are very important to this field. There's also a growing sense that to the extent any sort of chatbot might be proffered as a substitute for a therapist, a counselor or a psychiatrist, it ought to be licensed, right? There's not yet state legislation saying we're going to license chatbots the way we license therapists at the state level. But there was a recent proposed law in Massachusetts that, if passed, would require that when a psychiatrist or therapist prescribes the use of AI in a therapeutic setting, they have training and they register this with the Medical Board of Massachusetts.

 

Frank Pasquale: And I think you'll see increasing interest in this as it becomes more popular. As we see certain bad effects or bad outcomes, there's going to be an increasing push for regulation in that area.

 

Karan Girotra: But what about existing regulation, like malpractice? And in healthcare there's always the payer, who has strong control over what happens. It's not really a free market when you have very big buyers, either the government or very big insurance companies. Would they put some restraints on these issues beyond existing laws?

 

Karan Girotra: Given the existing industry structure of how payments and buyers are organized, where buyers are not the consumers, would that place some restrictions, or does it heighten the need for regulation here, or limit it? Because maybe these companies, with existing regulations and existing payment schemes, will already be doing some of these things.

 

Frank Pasquale: Yeah, this balance between regulation and tort law, such as malpractice, is really important, right? We want to make sure we have a good balance between those things. Essentially, regulation comes in especially in high-stakes areas where we don't want to have a really bad outcome. For example, there was a researcher who was testing a mental health chatbot, and as a test she typed into it, quote, I feel like I might as well go out to the Eldorado Canyon and throw myself off it. She says that to the chatbot, and the chatbot responds: wow, it's great that you're finally taking care of your health and wellness, assuming she was going out to exercise or something like that.

 

Frank Pasquale: Right. And that sort of outcome is something no certified therapist would produce, right? Because, again, it's the difference between a language model and a knowledge model. The certified therapist knows from context that someone throwing themselves off a canyon is suicide, suicidal ideation.

 

Frank Pasquale: So when you have that sort of situation, we might worry. You're absolutely right to say that looks like a classic malpractice case, where essentially they're offering this service. But one of the things about malpractice is that that law developed in a particular medical context involving medical professionals.

 

Frank Pasquale: And so courts are going to have to decide: are we going to treat the app as a medical professional and hold it to the relevant malpractice law, or are we going to think about it as something else? Then there are possibly going to be terms of service involved here. In the medical context, it's often hard to disclaim liability with terms of service, because of a case called Tunkl and other cases.

 

Frank Pasquale: But outside the medical context, it's hard to tell. Maybe the courts would say this is a really useful thing, and we think it's okay for it to include in the terms of service a disclaimer of liability for situations like this.

 

Frank Pasquale: So it gets complicated very quickly. Yeah.

 

Karan Girotra: When you mentioned medicine, I was thinking of driving, which has almost exactly similar issues. In our course, and in examples in our classes, I often talk about driving, which of course has rules on how one is supposed to drive, but how do you treat these things?

 

Karan Girotra: In terms of liabilities, it's pretty serious. Now, shifting gears, but in the spirit of covering different areas. Medicine is so fascinating we can come back to it, because it provides both sides of AI, the high stakes and the high benefits. Going to our audience: Jack has asked about the use of AI in an R&D setup, where I use the tool to help me at some point.

 

Karan Girotra: Does that create issues? I suspect there are two. You're using a tool to help you, maybe to write a document describing a brilliant idea you have. One issue is, of course, just the leakage of that information. The other, larger issue is whether at some point one of these companies starts claiming ownership of the property you create.

 

Karan Girotra: So, what are the legal issues around using AI tools for R&D in particular? Any thoughts there, Frank?

 

Frank Pasquale: Yeah. This issue has many dimensions. One thing I would start with is just the problem of trade secret protection, right? There are firms that have already warned their researchers and employees: do not put proprietary company information into these generative AI interfaces, because we really can't verify the extent to which it's used or not used in future iterations of the model, or kept somewhere, or something along those lines.

 

Frank Pasquale: Now, I will say, I think I may have seen that ChatGPT was offering corporate clients an assurance that they would delete things, or other assurances about not compromising proprietary information. But my attitude toward even something like that would be trust, but verify, right?

 

Frank Pasquale: You want to try to make sure that's not happening, because I'd be very concerned about it, in part because so much of the digital business model nowadays is just getting as much information as possible. So that's one thing. The second thing I would think about is that, as these things advance, they might, for example, produce patented processes. There might be a patented process that it would just generate.

 

Frank Pasquale: That's possible, right? Or the output might be so similar to an existing patented process that, under the doctrine of equivalents, it would be deemed a patent infringement. And this is very important, because in patent infringement there's no volitional element.

 

Frank Pasquale: It's not a question of whether you wanted to or not; if you did it, you're liable for the patent infringement. So that would be another area of R&D where I'd be very cautious about using some of this stuff. On the other hand, I should say, we just saw the Nobel for Demis Hassabis, and we've seen incredible promise in these areas, for example in proposing, say, different molecules for drugs, and in other areas.

 

Frank Pasquale: So I think it's certainly going to be a growing field, but cautions are in order with respect to some of these IP dimensions.

 

Karan Girotra: On trade secrets, the story I've been telling CTOs and others has always been: clearly, the worst case is any kind of free product, where of course the terms of service don't even prohibit that. But essentially, when you pay your $20 a month to Microsoft, in some ways that's the insurance you're buying.

 

Karan Girotra: In a way, it's no different than using Outlook to send documents. Everything is in the cloud. If you trust them with Outlook, you probably trust Microsoft with putting your documents into Microsoft Copilot. Now, it might be a little different with OpenAI and other newer companies, for whom the business-model pressure to have the most data for the next model is existential, because they live by raising funding on showing the next state-of-the-art model.

 

Karan Girotra: So in some ways, one business model that AI has enabled is literally just indemnification. You pay people not to do the bad things they could do with your data. And insofar as you trust them, insofar as those companies are substantial legal entities you could potentially go after, that is probably the safest bet to move forward with.

 

Frank Pasquale: Oh yeah, but if I could just add one footnote to that: there were lots of people trusting Google not to keep their search history when they were in incognito mode, and there was recently a big lawsuit over that. Ironically, one part of the lawsuit was that there was someone at Google who was setting their own chats to actually delete, potentially in relation to this very thing. So you're right to say that, and the email analogy is very good, but I think there's something different about generative AI in the sense of just the wildness of it.

 

Frank Pasquale: I don't expect people to be poking around my email information, whereas they might be poking around areas my company is in, or something like that. Just an idea, I don't know, but it's a really interesting analogy. I agree. Yeah.

 

Karan Girotra: No, I think you're right, given what I see from being inside these companies or talking to them. I think it's quieted down a little bit, but if you were Google a couple of years back, you were really behind, and at that point it's an existential threat, and it's like, everything is okay.

 

Karan Girotra: Everything is okay. Some of that mentality is certainly there. I don't think anybody would say it, but you can see that people are willing to take more risk than they would otherwise, which might involve going back on their terms of service. And of course, in the business world, if it's a $100 billion opportunity, or some very large opportunity, then sure, we can spend $2 billion on legal fees to make it work.

 

Karan Girotra: And by that they mean they can find some way around the previous terms of service, or go back on those things in some way. So I agree with you: more than the wild west, I think the existential stakes around this for some of these big technology companies are so huge that there's a real risk of people going back on things they might not originally have thought of going back on.

 

Karan Girotra: So on the patent issue: what do you think will happen for a practical user right now? Should I really be worried about using it in my R&D process? There's the legal advice about what might happen, but now I'm asking you something that's probably a little unfair. The potential is so high, like you mentioned with AlphaFold, for example, that I don't want to fall behind, and the risks seem relatively small.

 

Karan Girotra: So, not as an external lawyer, but as a general counsel in a company who wants to move with the business objective: what would be a safe way to move forward?

 

Frank Pasquale: Well, I have to confess that I like to opine on a lot of things, but I think here you've reached the limits of my willingness, or ability, to really talk about patent law, because I have to admit I have not taught it since, I believe, 2010. I just raised it as an issue because I wanted to be sure people were aware of it.

 

Frank Pasquale: But I would advise someone that if it's a relatively well-patented field, you've got to have patent counsel review what the generative AI is proposing in terms of potential processes to improve the business. I can't really say much beyond that.

 

Karan Girotra: You can see Frank is the law faculty member, the lawyer who is very careful about his words, compared to me, who was probably being a little loose with mine, as we often are in the business or entrepreneurial world. But point well made: these are technical issues that we need real experts to look at.

 

Karan Girotra: And not just listen to non-experts on these issues. Coming back to medicine, where you pointed out the issues are more serious, not just therapy, but thinking of a medical professional or a doctor: are there some special concerns they should keep in mind, over and above what you mentioned?

 

Karan Girotra: Because I can make the alternate argument also. AI can have some bad outcomes, as in the driving case or that particular therapy issue, but the average AI may be better than the distracted medical practitioner who isn't able to give everyone full attention. Then I could make the other argument: by not using AI, you're not giving me the latest standard of care.

 

Karan Girotra: You're not giving me the state-of-the-art treatment I could get at the price I'm paying. So one could make another argument on healthcare as well. What do you think about that argument, about the concerns of not using AI? Because I could clearly make that argument in driving.

 

Karan Girotra: By not letting us use full self-driving, on average we're probably costing lives right now. There's the odd case where a machine will do worse, one-off, than a human, but on average, machines are already outperforming very distracted humans, and very overworked doctors, in many cases.

 

Karan Girotra: So, what are your thoughts on the other side, the risks of not using some of these technologies?

 

Frank Pasquale: Yeah. I think this is a really important issue, and there has been some good legal scholarship on it, but there remains a lot more to be done in the area. To explain the issue as a lawyer might see it: people are always cautious about bringing in new technology, especially in a medical context.

 

Frank Pasquale: They fear that it might be different from the current standard of care, and there's a common interpretation of malpractice law as requiring a doctor to abide by the standard of care, penalizing only deviations from it. But what we've seen is that malpractice has evolved so that the physician is really responsible for reasonable treatment, and that may well involve the newest thing, or something that's relatively new.

 

Frank Pasquale: There's a terrific article by Michael Froomkin, Ian Kerr and Joelle Pineau on this problem: doctors might eventually become very reliant on AI, because as the AI becomes more and more vetted, more and more reliable, you get in trouble for not using it. I think of this in some very practical settings; I always get my yearly skin exam with a dermatologist.

 

Frank Pasquale: Right. And I have yet to find one who is, say, taking pictures and running them through an AI program that could identify melanomas. That, to me, is the quintessential use case for AI in medicine: this ability to spot patterns in massive amounts of data. I mean, they have millions of moles in datasets.

 

Frank Pasquale: They have hundreds of thousands that are cancerous. There are many ways in which AI can probably, just as you say, be much more effective than the typical doctor looking for asymmetry, discoloration, the ABCDE elements of spotting melanomas. So that, I think, is really important, and I hope to see a world in five to ten years where that just becomes de rigueur.

 

Frank Pasquale: You'd just be expected to do that. Now, one thing that's probably holding it back is the ongoing drumbeat to contain healthcare costs, right? And this is another thing I think is quite important about AI and medicine: realizing that the promise here is probably less low-cost medicine than higher-quality medicine, if we're willing to invest in it.

 

Frank Pasquale: I sincerely believe that, and I think it's going to be a real trend as well. But you're right to say this is one of those areas where, like pulse oximeters during surgery, which became the standard of care in the 1980s, as identified in a case called Washington v. Washington Hospital Center, you're probably going to see more and more AI become the standard of care in different areas as it gets vetted.

 

Frank Pasquale: The driving point, I think, is a very powerful one, and one I really want to think more about, in terms of how we vet it and to what extent the results in a place like San Francisco or Phoenix are replicable around the country. But in general, I'm with you on that. I think we've probably reached the point where you could save a lot of lives, and trying to figure out that legal transition is a really important issue.

 

Karan Girotra: You mentioned AI is high quality in medicine, and then I think the standard of care argument stands. But what I understood was: as of now, it's not really the standard of care, so there's not much liability in not using it, though it might not remain like that. As a doctor, maybe ethically you're obligated to try the best thing you know, but legally you're required to do only the typical thing everyone else is doing at this point, and so far AI isn't there.

 

Karan Girotra: Maybe I misunderstood.

 

Frank Pasquale: No, this is a really important point to make. The point about malpractice is that even legally, you can be required to go beyond what most people are doing if something is clearly better, right? Because otherwise things would never change; there'd be no way to change things.

 

Frank Pasquale: There's some chronicling of this by legal scholars and others. Admittedly, it's a field in a lot of ferment. Moreover, it's very hard to find adjudicated cases in many situations, because they so often settle. You've got insurers who are just saying, let's settle this.

 

Frank Pasquale: Let's just get rid of this as soon as possible, because they see verdicts of $5, $10, $20 million in malpractice cases. They just want to settle. But in general, even on the legal side, not just ethically but legally, it's not enough to say, I just did what everybody else did.

 

Frank Pasquale: If it's becoming clear that something else needs to be done, there's still a possibility of a malpractice lawsuit.

 

Karan Girotra: That's great to hear, because what that tells me is that law can be a force to move things forward, too. It doesn't have to just hold things back, which is sometimes the perception. And again, it becomes the same question that we as scientists always adjudicate: is the thing good enough to replace what existed before it?

 

Karan Girotra: And I love what you said: in driving, you said you're convinced, and I think in the end it's going to be a judgment by experts in these areas about when it is good. Then the law almost requires you to use it, rather than merely permitting you to. We're coming close to the end of our call, so I'll go a little personal, maybe, about how this stuff will...

 

Karan Girotra: So far, we've been talking about the impacts of AI on the people offering it and the people using it. But now, about the legal profession itself: what will change? What do you expect to happen? There are already these AI chatbots that say you can dispute every parking ticket you have, or things like that.

 

Karan Girotra: That sounds like a crazy world. What are we looking at? Any predictions you want; we won't hold you to them, but any interesting hopes and predictions, as you said before?

 

Frank Pasquale: Sure. The main thing I would say there is that there is a real opportunity in law, but there are also a lot of dangers, right? With respect to the opportunity: so much of what lawyers are doing is producing documents, or going through thousands of pages of documents in a given case, or thousands of pages of case law when writing a brief or something along those lines.

 

Frank Pasquale: So the promise of generative AI is that to the extent you can have reliable summaries, to the extent you can have reliable, souped-up finding aids for precedents, that's really valuable. But there are also some real worries here. Very recently there was a Federal Trade Commission action against a company called DoNotPay.

 

Frank Pasquale: This was the company, you mentioned the parking tickets, that got famous for saying, we can dispute your parking ticket, we can dispute all these other things; there were lots of other things it was promising. To make a very long story short, what the Federal Trade Commission essentially found was that they were not adequately backing up their claims about what they could do.

 

Frank Pasquale: And so they got in trouble with the commission. They had to settle the case; I think that is now settled. But one of the things that was quite remarkable is that the commission also said there was not adequate consultation of lawyers, or maybe in many situations any consultation of lawyers.

 

Frank Pasquale: And that, I think, is not the way to automate law, or to try to automate law. Now, in terms of the long term, what's really fascinating here, and I'm the co-editor of a journal on cross-disciplinary research in computational law, so I've been thinking about this a lot with others, especially in Europe, who are into this.

 

Frank Pasquale: And it might surprise you to know that there's a lot of enthusiasm for this in Europe, a lot of people looking into it there. It's possible that you will see a real leveling of the playing field, in the sense that people who couldn't afford a lawyer have an app that can act as their lawyer, or can at least give them some form of legal information, if not legal advice, because of unauthorized-practice-of-law regulation.

 

Frank Pasquale: But there's also the possibility that as soon as the playing field is leveled on one side, those on the other side counter with far more sophisticated apps. So, for example, there were apps that would help tenants fight evictions, but now there's also a firm called ClickNotices that has created, in effect, an eviction-as-a-service platform.

 

Frank Pasquale: So I don't know. Many of my colleagues think this is going to be the solution to the access-to-justice crisis. I don't know about that. It's interesting to try to play out where it's going in the future.

 

Karan Girotra: So, access to justice, but again, the folks who have more money will also build the better apps, the better AI. The structural differences in power are not necessarily changed by technology, because in the end, capital or political power determines what kind of technology, and how much of it, you can get.

 

Karan Girotra: So for the legal profession: unclear. Probably early gains in access to justice, but we might end up in the same place. What I did hear is that I'll need to pay my parking tickets, because I was planning to use DoNotPay. Well, I'll need to find better parking spots, because we work on Roosevelt Island, and when driving in New York City, DoNotPay could have been a way out. But I shouldn't rely on that or other such operators.

 

Karan Girotra: So overall, Frank, thanks a lot for walking through all these issues. We discussed lots of really cool stuff: at the core, the differences between the training step and the production step and the different laws around each; and for users, what they need to worry about in regulation and tort law, and what the standards might be in different fields.

 

Karan Girotra: To me, there was a very positive message: law is on the side of doing the right thing, even if the right thing means changing rather than always playing it safe. It can accelerate adoption if something is unambiguously good, exactly as it should. For the legal profession, it's probably a little less clear how things will play out, but to be sure, there's lots of opportunity for transformation to reach this new equilibrium of everyone having their AI lawyers.

 

Karan Girotra: There are a lot of businesses to be founded there, and a lot of business transformation that needs to happen.

 

Chris Wofford: Thanks for listening to Cornell Keynotes, and check out the episode notes for more info on eCornell's many certificate programs and course offerings in generative AI and other technology-related programs. Thanks again for listening, and please subscribe to stay in touch.