This episode explores AI in finance as a human-AI partnership rather than a replacement, offering practical strategies for effectively integrating AI tools while maintaining essential human judgment in financial decision-making.
The AI in Finance certificate program starts April 13, 2026!
AI is reshaping finance, but it's not about replacing people — it's about empowering them to work more effectively. As these tools become part of our everyday roles in financial services, knowing how to successfully partner with AI while leveraging human judgment has become key to success.
Join Cornell Professor Victoria Averbukh and Andrew Chin, Chief AI Officer at AllianceBernstein, for a practical look at AI in finance. They'll explore how AI can enhance financial decision making and discuss why human expertise remains essential. Through real-world examples, they'll explain how professionals can confidently work with AI tools to achieve better outcomes as partners in the process.
What You'll Learn
Chris Wofford: On today's Cornell keynotes, we are exploring how artificial intelligence is reshaping financial services. But this isn't about AI replacing human workers. It's about partnership and how these tools can amplify human judgment and enhance decision making. The real story is how financial professionals can successfully collaborate with AI while preserving essential human insight.
This shift is happening now across trading floors, investment committees, and client meetings worldwide, and it's raising practical questions like: which tasks should be delegated to AI, and how do professionals maintain their edge?
While everyone has access to the very same tools, we have two exceptional guests to guide us. Cornell Professor Victoria Averbukh brings decades of experience educating quantitative finance talent and has witnessed how required skills are evolving. She's joined by Andrew Chin, who is Chief AI Officer at AllianceBernstein, one of the first executives to hold this title, putting [00:01:00] him at the center of this transformation. Andrew has been a longtime partner in educating Cornell's financial engineering students, and his evolution mirrors the entire industry's journey, from classic quantitative finance in 2014 to pioneering work with large language models.
Today, together they will share real-world examples of successful AI integration, exploring the balance between technological capability and human expertise, and they'll also offer practical strategies for working confidently with AI tools. Andrew and Victoria co-instruct Cornell's new AI in Finance certificate program, which launches in April 2026 through a partnership between Cornell Engineering, the SC Johnson College of Business, and industry leaders.
So check the episode notes for more details on that. And now here's our conversation with Professor Victoria Averbukh and Andrew Chin.
Victoria Averbukh: When I look back at the syllabi for the courses that you've taught [00:02:00] here, and the projects and topics that you've sponsored for our students, I can literally see the evolution of the industry over the years.
In 2015, the project was a classic finance one, focused on hedging with options. By 2017, the project was all about financial text mining. In fact, I have to say it was probably revolutionary at the time, and I think our students were ahead of the curve under your guidance.
So if I fast-forward to today, our students are now experimenting with large language models to generate factors, build portfolios, et cetera. So clearly you've been an incredible guide to our students. I should also probably admit publicly in this forum that you don't just challenge our students.
You also challenge me. You are really the reason why I finally agreed that we definitely need an AI in finance certificate [00:03:00], which you and I have been collaborating on and will co-instruct when it launches in the spring. That initiative is a true collaboration between the College of Engineering and the SC Johnson College of Business, along with industry experts like you and some others. And I think that's a theme we may come back to during this conversation: what is AI really bringing to the industry, and what kind of talent is going to inherit the future in finance? But before we get to all of this, you really are helping to lead the shift that artificial intelligence is bringing, as one of the first chief AI officers in the industry.
So can we start with you telling us what that shift is, and what it is that you really do as a Chief AI Officer? What does the role entail?
Andrew Chin: Yeah. Thanks, Victoria, and it's good to see everybody. I should also note that you've been at the forefront of building [00:04:00] quantitative talent for the financial services industry. So that's been fantastic from a collaboration perspective.
So my role right now is Chief AI Officer for AllianceBernstein. In that role, my job is to help the organization really transform how we do business by leveraging AI to hopefully empower us to make better and faster decisions.
Victoria Averbukh: This is leadership for 21st-century finance. Before we get into the weeds, let us start by defining the foundational thesis for today's conversation. I often hear opinions around AI that go along the lines of: all right, this is all about humans versus the machines. But I really think this is a very narrow approach, and it can miss the real opportunity. Because, much like in many aspects of the human world, I think we should be more focused on collaboration, on working together. So [00:05:00] from your point of view, in your ideal world as a Chief AI Officer, is there a place for a human-machine partnership? What would it look like?
What do humans do best, and what can technology do better?
Andrew Chin: Yeah, I think that's a fantastic question. I've been in the industry for over 30 years now; I may not look it, but I have. And what I would say is we've been using AI ever since I joined the industry.
So let me define what I mean by AI first, right? I usually group AI into three categories. The first is traditional AI, and these are the machine learning techniques that we all know and love, pattern recognition types of techniques. Secondly, there's gen AI, which came about really with ChatGPT.
So I think we know what that is. And then the third one I call autonomous AI, or agentic AI, right? Those are terms to describe these agentic frameworks for using AI. If you haven't heard about agentic AI, you will, because it will be everywhere in the industry [00:06:00] for at least the next couple of years.
So if we think about it that way, we've been interacting with AI for quite a while, right? And in all those cases, we really partner with AI, and AI is really augmenting what we do. Now, having said that, we've seen so much progress and so much acceleration around AI in the last two or three years.
We see these tools becoming very powerful, in some areas doing more than what humans are capable of doing, and faster. So that's been exciting. Having said that, I think the other side is that with such powerful tools, we do have to consider what the impact is on ourselves and society, and of course, in investments, what the impact is within the financial services industry.
So looking at both of these sides, looking at these trade-offs, I think, is critical, right? Being able to manage this revolution, knowing how to use these tools effectively in a responsible way, is really critical. So when I think about this machine-human [00:07:00] collaboration, the analogy that I always like to draw on is the Iron Person analogy.
Have you seen Iron Man, Victoria? Have you seen the movie? Okay. So it's really based on that: Iron Man, or iron person. The point behind the iron person is that you have a smart subject matter expert inside, surrounded by armor, AI armor, that hopefully gives you fast synthesis to help you make better and faster decisions.
But the critical element there is the human. The human inside must be a subject matter expert, in our case, in finance. You must be a great investor, you must be a great technologist, you must be a great accountant; it doesn't matter what it is. That subject matter expertise is foundational. But then you have these tools around you that hopefully can help you be even better, right?
So that's why I like the iron person analogy. And another reason why I like it is because it's clear who has accountability for decision making: it's the human inside, right? It's not the armor that says, hey, Andrew, do this or [00:08:00] that; it's the human inside who's accountable. At the end of the day, you are responsible for the decision. So I think that's how I'd imagine future investors, future accountants, future salespeople to look: like this iron person.
Victoria Averbukh: So in other words, we should view AI as a tool that strengthens our decision making, but it doesn't replace the human responsibility, right?
Andrew Chin: Yeah.
Victoria Averbukh: And when human judgment is being replaced, that's when we probably see the potential problems with AI utilization, right?
Andrew Chin: Yeah. I mean, these tools are becoming really, really powerful, as we said, for routine problems and certainly some reasoning problems.
But humans bring judgment, intuition, and experience that these tools do not yet have. So I think that is why it's critical to use these tools in that way and not have them run [00:09:00] autonomously, especially in financial services, where we have a strong fiduciary responsibility to our clients.
Victoria Averbukh: We want to be correct a hundred percent of the time, right? Like, we don't want 99%. We want a hundred percent.
Andrew Chin: That's right.
Victoria Averbukh: So if we accept this partnership between human and artificial intelligence as the goal, then I'm guessing the next challenge would be turning the information that AI can help us extract into something that's genuinely useful.
You know, many companies believe that they're very data driven. But how many actually succeed in translating that data, and what even counts as data these days, right? Turning it into action still seems difficult. So where do you think the most challenging implementation issues arise?
Where do you see the [00:10:00] disconnect, or is there a disconnect? And what should leaders do differently? How should they be viewing this AI-driven decision making?
Andrew Chin: You're definitely right. Data is the lifeline of these tools, right? Without good data, these tools are pretty much useless. So it all starts with data. I would say, first, having a good data governance program in your organization is critical. This comes from the top down, saying: look, here's how we think about data. It is a lifeline. Data is an asset within our organization.
And so it's everything around that: ownership, lineage, cleansing, storage, publishing, all those things that you normally do to make sure you have good hygiene around data. Hopefully most organizations have that.
But even with that, AI does present some new challenges. For example, right now there is a lot more unstructured data. Structured data, think of it [00:11:00] as stuff you can put into Excel or a database easily. Unstructured data is text, audio, video, stuff that you can't easily put into an Excel spreadsheet, let's say.
So because of all this unstructured data, I think we have to figure out: well, what's the best way to ingest this information into our organizations so that we can use it for decision making? And along with that come lots of different problems, right? Like, what tools do I use to store this information?
What tools do I use to synthesize and understand this information? So I think that's part of the consideration on the data side. Once you have the data, then you have to think about: well, what tools will I now use to really access it, and to make it accessible to my employees to use, right?
So this is definitely where large language models and AI tools come in. What's the right platform for my employees to have so that they can access these tools easily, get insights from them easily, and things like that? So I think that's an important aspect.
What I usually also like to say is: yes, we are in [00:12:00] the age of AI, but let the questions really drive the tools that you use. It may turn out that AI is not the right thing. And as you know, sometimes a simple linear regression may solve the problem. And you know what?
That's perfectly fine, right? So understand your problem and make sure that the tools you use are fit for purpose. Don't think everything's a nail when you have a hammer. So that's what I would say.
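Andrew's "fit for purpose" point can be made concrete. The sketch below is purely illustrative, with invented daily returns and a hand-rolled one-factor least-squares fit, no AI tooling at all: sometimes a plain regression, here estimating a stock's beta to the market, already answers the question.

```python
# Illustrative only: the simplest tool that answers the question.
# A one-factor ordinary least squares fit over hypothetical returns data.

def ols_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical market vs. stock daily returns (percent):
market = [0.5, -0.2, 1.1, 0.3, -0.8]
stock = [0.9, -0.1, 1.8, 0.6, -1.2]

beta, alpha = ols_fit(market, stock)
print(f"beta={beta:.2f} alpha={alpha:.2f}")  # prints beta=1.56 alpha=0.12
```

If a baseline like this already explains the behavior you care about, reaching for a large model adds cost and opacity without adding insight.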
Victoria Averbukh: You just made me think about the data aspect, the first part that you were talking about.
It's almost like AI is putting pressure on organizations to review their whole data infrastructure, right? The whole pipeline process: what's my data, where is it stored, how is it stored, how can I use it? And then all the way down through the tools to the insight.
So we're not limited by data. We're limited by clarity.
Andrew Chin: Yeah, I usually think about the four V's to describe what data means in this age of AI. The first V is [00:13:00] volume: there's more of it, as we just discussed. The second is velocity: it's coming at you much, much faster, right?
And you have to respond much, much faster. That's certainly how the markets move and certainly how our clients view interactions with us today. They demand an answer immediately, right? Velocity.
Andrew Chin: The third V is variety. Data comes to us in many different formats, whether it's text, audio, video, images, or whatever.
So certainly those three V's. But the fourth V, which is veracity, truthfulness, is the most important, because to your point, if you don't have good data, it doesn't really matter, right? These tools really depend on high-quality data. And the first comment that you made, about organizations really having to come back to data before they use AI, is a hundred percent correct.
Right? So many of us are trying to make sure that we have strong, foundational, golden data so that we can then start to use it. One simple example could be a research analyst, right? They've been doing research for quite a while. They, like myself, have [00:14:00] accumulated research over, let's say, 10 or 15 years, right?
If you just blindly turn a large language model onto this, well, where does it look? Does it look at something from 15 years ago? Does it look at last week? I may have made some errors in my model a couple of years ago that I cleaned up, but which version does it look at? It might not know which is the right one. So cleansing that information, and structuring it in a way that these tools can use, is actually critical.
And that's part of the data hygiene process.
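The hygiene step Andrew describes can be sketched in a few lines. Assuming a hypothetical archive of research notes with company, version, and publication-date fields (all invented for illustration), one simple rule is to keep only the most recent note per company before any model sees the archive:

```python
# Illustrative data-hygiene sketch: drop stale or superseded research notes
# so a model only ever sees the current version. Fields are hypothetical.
from datetime import date

notes = [
    {"company": "ACME", "version": 1, "published": date(2012, 3, 1), "text": "old model, later corrected"},
    {"company": "ACME", "version": 2, "published": date(2024, 6, 5), "text": "current model"},
    {"company": "Globex", "version": 1, "published": date(2023, 1, 9), "text": "initiation note"},
]

def latest_per_company(docs):
    """Keep only the most recently published note for each company."""
    best = {}
    for doc in docs:
        key = doc["company"]
        if key not in best or doc["published"] > best[key]["published"]:
            best[key] = doc
    return sorted(best.values(), key=lambda d: d["company"])

clean = latest_per_company(notes)
print([(d["company"], d["version"]) for d in clean])  # -> [('ACME', 2), ('Globex', 1)]
```

Real pipelines layer on deduplication, error flags, and lineage tracking, but the principle is the same: curate before you ingest.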
Victoria Averbukh: Exactly. That's a very good term, data hygiene. I love it. So let's say data hygiene is in place, along with the responsible use of infrastructure and data, and finally you get to, again, implementing that whole process from data to extracting valuable insight.
In my opinion, you cannot get far in that process if you don't have the right environment. So it's almost like there's a cultural shift, or a kind of [00:15:00] cultural awareness and acceptance, that determines whether an AI initiative is going to succeed or stall. So really, in my opinion, it comes down to people being able to communicate the challenges and constructively discuss what it takes to get, again, from data to the insight.
So if we do accept that AI adoption is as much a cultural shift as a technical one, what does it take to build a culture where business leaders, engineers, and data scientists trust each other and can speak a common language?
Andrew Chin: Yeah, transforming through AI is not just a technical challenge. That is a part of it, certainly, but it's also a cultural transformation, like Victoria described, and it starts top down, meaning with what the senior leadership believes AI's place is within the organization, right?
And that sets the tone for the organization. That sets the risk appetite for the rest of the organization: how much of these tools you want to [00:16:00] use, how you want to use them, how you want to include humans in the feedback process, what interactions you would want. That sets the tone for how you want to use these tools, right?
So that cultural aspect is really important. Once you have that governance, the vision at the organizational level, what you have to do is then execute on it. And I would say training is a huge part of it, a huge part of transformation. Helping employees understand how to use these tools, how to leverage them, and really tailoring that training for individual teams and individual roles.
So, for example, I may teach the organization about prompting, right? And we can all do that. But what does prompting mean for an accountant? What does prompting mean for a salesperson? They mean different things, because the way you write the prompt is different, because what we want to get out of the large language model is going to be different, right?
So you have to customize that training. And in addition, when these individuals see training that's adapted to what [00:17:00] they do, they actually get a lot more excited. Like: oh, I know how to use that prompt for my client X now. So it becomes a lot more tangible for them. So that sort of customized training, I think, is a very important part of it.
So hopefully by doing all of these things, and then, by the way, by rewarding the use cases that really work, right? Highlighting the successes, making sure that you bring them out: hey, here's what the organization rewards. This is a great use case by Victoria. This is fantastic.
Look what she's done, right? So by reinforcing those successes, you build the culture that you want. All of those elements, I think, are an important part of an AI journey.
Victoria Averbukh: Okay. So the real differentiator is not the algorithm itself; it's how people communicate about the algorithm among themselves and across their different business functions.
Andrew Chin: Look, you know that in our industry there are some powerful models, techniques, and tools, right? And many times the hurdle is not the tools or techniques or models themselves. It's how we apply them, how we internalize them, [00:18:00] and hopefully how we adapt them to the practical, real-life world that we live in.
So that is always the challenge.
Victoria Averbukh: That's right. So, in that statement you just made, you brought up, implicitly maybe, the breadth of AI applications and implementations within a large enterprise, within a large industry.
You mentioned accountants; you mentioned the investment side. Let's talk a little bit about that breadth. Most of the public conversation, or at least the world where you and I revolve, focuses on investing. But it's not just our world; it's also that this is the most visible part, right?
It's easy, it's fun to talk about. But I wonder whether much more of the true transformation is occurring, or should be occurring, in areas that are [00:19:00] maybe less glamorous but absolutely essential. So where do you see AI making the biggest impact, maybe in the near term, or maybe over, I don't know, five years? Though it's hard to predict five years. Yeah.
Andrew Chin: Yeah, five years is a really long time. Maybe next week is more likely, right? Look, I think you're right that when these tools first came out, and even when I started using them, I was focused on the investing side. And the reason is because, as I said before,
we've been using traditional AI for decades, right? So this was an evolution of that. So we were thinking: can these tools help us make better forecasts? That's why the starting point is in that area. Now, having said that, my role does encompass the whole organization, and we've seen lots of impact across the different areas, right?
So, a few areas I would point to. One would be coding. Being a developer, that is an area where we've seen measurable impact: having these tools to help you code better, code faster, and then, I think more importantly, document your code. So there's really real impact there.
And not just in finance; I think in general, people have seen lots of impact there. [00:20:00] Secondly, I would say the distribution side: client engagement, marketing. Those are areas where these tools can really help us understand our clients better, understand our business models better, and really help our advisors think about what the next best action is in terms of engaging, hopefully better, with our clients.
There are lots of opportunities there, and these tools are definitely used for things like that. Risk management is an extension of what we just said about investing; lots of applications there. Not just understanding the markets better, but tools like synthetic data generation, right? Giving us more ways to think about what a crisis might look like.
Can we use these tools to simulate that? Obviously we don't want to see an event like COVID again, but if we want to simulate how our models may work, to be...
Victoria Averbukh: Ready. We want to be ready.
Andrew Chin: Exactly. This may be a way. And so as you think about these impacts across different areas, you might want to think about: well, how do I measure impact across these different areas?
I usually like to describe impact in three ways. The first [00:21:00] is linear. Linear impact is: these tools save me, say, 5, 10, or 15 percent of my time, because now I can summarize emails faster. That's great stuff. Linear. The next one is exponential. Now we're moving further ahead.
Linear is about doing the same things, but faster. Exponential is about being able to do things that you have not been able to do before. I'll give you one example. I'm a research analyst, and usually when you ask me to cover a company, I look at these 20 websites.
Great. And so by using AI, maybe I can do that faster. That's linear. Exponential is: instead of just looking at those 20 websites, maybe these large language models look at 50 websites. Now, 30 of them may be useless, but maybe there are a few that may be very powerful that I haven't seen before, right?
So now I'm able to do more than what I've done before. That's an exponential impact. Maybe the impact is in the 30, 40, 50 percent range.
The last type of impact I like to talk about is transformational. This is what we [00:22:00] all aim for, even though it will take some time. Transformational is really about redesigning a whole workflow, right?
Meaning you don't need to do things the way you've always done them. And our industry is prone to this: we've done things pretty much the same way ever since I joined. We have the various groups; we do the functions that we do. Do these tools give us opportunities to reimagine what our business looks like, to reimagine how we engage with clients, to reimagine how we make investments, and therefore transform the way that we run our business?
I think there's an opportunity there, but that's further out.
Victoria Averbukh: It sounds like the value is in the plumbing. Something that you don't really see, but that you cannot function without.
Andrew Chin: Yeah, certainly it's in the plumbing. And I would say it's also in the imagination of the leaders, right?
Andrew Chin: Which is why creating leaders who understand the potential of these tools, as well as their risks, hopefully can bring out their imagination, and [00:23:00] therefore new business models and opportunities going forward.
Victoria Averbukh: Right. You just brought up large language models.
Everybody's talking about large language models, whether you're investing or not, right? They've moved from novelty to everyday utility. They help us make more informed investment and risk decisions, and accelerate research, exponentially or linearly.
They're changing the day-to-day expectations for many roles. So you've talked about the right talent already, but how should finance professionals think about their role, whether it's research, communication, or productivity inside the firm? Talk to us.
Andrew Chin: This is a topic we're passionate about, because I really believe that large language models are going to transform how our industry runs, how we do research, and how we do investments.
And so I'll give you one simple example of how I think LLMs are going to really change [00:24:00] how investment teams operate. Right now, for example, we usually have two types of researchers: we have discretionary, or fundamental, managers, and then we have quantitative, or systematic, managers.
Both of those teams are typically separate, and they do things differently. Discretionary managers are the ones who really understand, let's say, stocks really, really well. They do deep research into companies, right? Traditional, fundamental research.
What they don't have, by the way, is breadth, meaning they can only do it for a few specific companies, because it takes a lot of time. How do they do it over a range of companies? LLMs give them that breadth. They can now take their insights and use these tools to apply them across a wider range of companies.
So that's the opportunity for discretionary managers. On the quantitative and systematic side, as you all know, they have breadth, meaning they use models that can be applied across all the securities in the universe, but they don't have much depth. Usually the factors they look at are momentum and value, which is great, but these tools give them [00:25:00] more nuance.
You can now understand a lot more about companies. You can look at earnings calls or filings, and understand things like, obviously, sentiment, but also management quality, deception, evasiveness. You can use these tools to understand nuances about the companies. Therefore, these tools give systematic managers depth, right?
So this is how these tools can potentially help. They give discretionary managers breadth so they can cover more, and they give systematic managers depth so they can understand the assets they're investing in a lot better. So I see this evolution happening and empowering all types of investors to be better. But it also means, by the way,
that the edge everybody has is eroding, and all of us have to get better to get to the next phase.
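As a toy illustration of the "depth" signal Andrew describes: in practice this would come from an LLM, but even a simple word-list tone score over an earnings-call snippet shows the shape of the pipeline. The word lists and the transcript below are invented for illustration, not a real research signal.

```python
# Illustrative sketch of scoring the tone of an earnings-call transcript.
# A real systematic signal would use an LLM; this toy version just counts
# words from small hypothetical positive/negative lists.
import re

POSITIVE = {"growth", "strong", "record", "improved", "confident"}
NEGATIVE = {"decline", "weak", "headwinds", "uncertain", "miss"}

def tone_score(transcript):
    """Net tone in [-1, 1]: (positive - negative) / (positive + negative)."""
    words = re.findall(r"[a-z]+", transcript.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

call = "We delivered record growth and remain confident despite some headwinds."
print(tone_score(call))  # 3 positive words, 1 negative -> 0.5
```

Run across every call in a universe, a score like this gives a systematic process the kind of per-company nuance that used to require a human reading each transcript.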
Victoria Averbukh: Right. That's right. But also, thinking from my vantage point as an educator: because foundationally LLMs, much like predictive analytics, are highly technical concepts, there is a misconception out there that everybody needs [00:26:00] deep technical expertise.
But perhaps educating people the right way, giving them just the right level of familiarity, is enough: enough to understand what these tools can and cannot do, enough to be able to question an output, to recognize when something looks off, and to use the technology in a way that actually strengthens rather than shortcuts their judgment.
Andrew Chin: Yeah, that's exactly right. And you just mentioned some of the critical skills that leaders need to leverage these tools well: knowing how to use them well, not necessarily knowing how they're coded or how they were developed, but knowing the assumptions, where they're weak, where they're not so good.
And then knowing the use cases where they are most powerful. A leader hopefully will understand that and therefore apply those learnings in the appropriate areas.
Victoria Averbukh: And there is also this inherent risk aversion. Again, we want to be right a hundred percent of the time.
And given how quickly AI seems [00:27:00] to be scaling across business lines, firms probably have to start establishing some boundaries, right? And expectations among different units about the uses of AI in real time. The challenge seems to be: how do we ensure uses that are safe, consistent, and defensible, especially across different teams?
So let's talk about understanding this risk. What should the decision-making process look like when a financial institution sets its AI, and hopefully responsible AI, policies?
Andrew Chin: Yeah, it starts with the governance framework and the governance process, right? We talked a bit about the firm's risk appetite and the need for senior management to articulate it.
I think that's the beginning of it. Once you have that, then you can have a governance process: a governance committee, let's say, that vets use cases and ensures that the tools are being used appropriately. They set the policies and [00:28:00] procedures that you expect employees to follow.
I actually find that having this governance structure stimulates innovation rather than stifling it, because once you have the rules of the road, employees know what to do, and they can be very creative and very intelligent in terms of how they want to use these tools. So I think that's actually helpful, having a good framework in place, right?
And obviously we talked about training and things like that. I would also say that making sure the organization has the foundational tools that you want your employees to use is critical. You want to build those tools so they don't need to go home and use an LLM at home or something like that.
You want to make sure they have the foundational tools at work that you want them to use. And of course, then teach the employees to use those tools: get them the tools, teach them how to use them. Now, on the policy side, I would just highlight data. Data is the biggest issue, especially in financial services, right?
We care a lot about data. We want to make sure that the data is private. And as you all know, there are regulations around the world that really define what data can and cannot be [00:29:00] used. So I would say that having a clear policy on how and what data can be used within these tools is a critical area that organizations have to think through, right?
Because you don't want to lose any of that data; you want to protect it. And some data, by the way, is proprietary. You don't want to share that information. A clear data policy is important.
Victoria Averbukh: That's right. But ultimately, responsibility is not just a technical feature, right? It's a combination of communication, judgment, and very clear decision making.
And then of course, on the other end of the spectrum, there's another question. Once you have the guardrails in place, once you establish this. You know, responsible use phone data and the focus shifts to competitiveness, to, you know, differentiating factors. And now that everyone has access to powerful LLMs, the same LLMs, can the firms still meaningfully differentiate themselves, or are we just gonna end up having the [00:30:00] same insight?
Andrew Chin: If we all have the same insight, that'd be quite boring. Right. I think that's why it's important for us to become educated on these tools, right? Especially within finance. 'cause the way we use these tools in different industries is different. And so having a way to learn how these tools can be used in finance is critical.
Right. So I would highlight a couple areas maybe, where, um, folks can differentiate themselves even as these tools become very similar across vendors. Right. One is around proprietary data. I think that is an edge that organizations will have because if you have data in terms of how you do research, how you make the investments, how do you engage your clients, doesn't really matter what it is.
That is data that's really useful. 'cause you can then turn that data, feed it back into your models, by the way, and then fine tune the models so that the models will now be aligned with real philosophies. Right. And your way of thinking. I think that process, fine tuning is gonna be a way that every one of us can differentiate ourselves, right?
So learning how to do that. Learning when it's [00:31:00] important, when is not important is a Important competency the executive should have. And then we touched on prompting earlier, right? So obviously that's also important. Learning how to prompt well within the area that you're in is important.
So learning the skills and capabilities to provide context to set up the question correctly to help each other define the problem. I think those are ways you can differentiate yourself in terms of using these tools. And then over time I would imagine each of us. will have our own areas of emphasis and, uh, and competency, and therefore we'll use these tools differently going forward.
Victoria Averbukh: So, not much different, but on different tools, different uh, areas, where people uh, refine, interpret tools and data. But much like one company, one global bank, one asset manager, one hedge fund differentiates themselves from the peer group. That pretty much still stays.
Andrew Chin: Yeah, I think so. I think so. Otherwise I [00:32:00] won't have a job.
Victoria Averbukh: Neither would I. Right. In educating our students and speaking of education, I, like I said in the beginning, I definitely see the shift and more and more of our students are interested in data science and ai.
should we shift to the audience questions?
Andrew Chin: Yeah. Yeah. I will address the, one of the first ones. We talked about human in the loop, right? There's a question about whether there are instances where a human in the loop may not be needed, right. It's an excellent question.
I would say in finance, Most areas will require human in the loop because of our fiduciary obligation, right? So on the investment side and also on the client facing side, I would expect that there'll be a human in the loop all the time, right? Because that's what our clients expect, that's what our regulators expect.
Now, there may be some areas where [00:33:00] very simple low risk use cases where you may not. So for example, if Victoria sends me an email every single day. And I wanna summarize it, every week. And then I wanna suggest an email that I wanna send back. Okay, maybe I don't need a human in a loop there. It is low risk. I know Victoria well, and if there is an issue, she's gonna call me and say, Andrew, why did you say that? Right? So it's low risk, so I would imagine there are use cases like that, that are probably more internal facing, and more efficiency oriented the way you may not need a human.
But I think anything that touches investments, touches clients, will require human the loop.
Victoria Averbukh: There's a question about security concerns when dealing with client data for personalization purposes. Open source versus closed source data. I think you touched on that when we were talking about responsible ai.
Andrew Chin: Yeah. Yeah. I mean this question about the open source, closed source models and then data privacy. One of the reasons why we thought we wanted to, um, create a program and a certificate [00:34:00] around, you know, AI in finance is because we wanted leaders, future leaders to really understand the trade offs of what these tools can do and the responsibility that we all have, right? As, as corporate citizens and as citizens of society, right? So, uh, part of the learning will, will be around that. Like how do we choose when we use proprietary closed source models and when do we use open source models, right?
So, these closed source models from all the big vendors that, you know, they're really, really good, right? And for some types of things where, where high quality, fast response is needed, those tools are excellent, right? There are other ones, I'll give you one example where we, we have, um, processes where we are trying to look for alpha factors or high returned factors.
Within different types of media, different types of audio, video, text, and uh, things like that. You can imagine there's a lot of texts out there, right? And so we have these, agents which are trying to look for signals. You can imagine that that process takes a long time [00:35:00] and costs a lot of money, right?
So in those cases, what we do is we would leverage open source models to try to see if we can replicate or do some of the same things that the more powerful models may be able to do. Because it's a more cost efficient way to do it. And the point about data is exactly right. There are some cases where we are very uncomfortable loading sensitive information into some of the the closed source models, right?
So we prefer. To have the open source model within our own network and then using, processing the data within our own network because we know the data won't leave our organization. Right? So those are trade offs that I would expect future leaders to be able to dissect through and then think about what the right decision is.
Victoria Averbukh: Thank you, Andrew. How do you see the world models of physical AI playing roles in the finance investment business process?
Andrew Chin: Yeah. Yeah. It's, even though we, we've been talking about, AI for a bit and we've been hearing a lot about it in the press, we're still very early.
We're still like in the early stages of it. [00:36:00] So for those of you who are, who know, baseball, we're still in the very early innings, you know, of the, uh, this AI revolution. Which means by the way that we, we, we can't, uh, you know, really envision exactly how things will play out. Some of the things I'm saying to them may be completely wrong, you know, in, in, in two years, right?
So it's important to remember that. It also means, by the way, that. We should be very adaptive and nimble in terms of the models that we use because there might be different, more powerful models in the future. Now, having said all that the, uh, questions around different types of models.
I, you know, the, these models that we see out there, the large language models are purpose built. They're meant to be good at many, many different things. Right. Some of them are like geared towards like solving math or Olympia problems, but in general, you know, the, the good at very general things.
I suspect that we're gonna have to fine tune some of these models for finance, Because we look at the world slightly differently. We value, different things, right? So I, so I suspect companies will improve the quality of the models for their own purposes by fine tuning those.
On the physical side, you know, [00:37:00] I, I, I, I don't know. I mean, most of what we do in, uh, finance, you know, is, is not on the physical side. Right? But I would expect Ai, you know, impacting robots and, uh, things like that, that will happen. I don't know to the extent will happen in finance, but as a society as a whole, I think that will certainly happen.
And then, in terms of these role models and other, you know, broad, broad models. Yeah, certainly lots of things that we can learn from those general purpose models and trying to adapt into finance.
Victoria Averbukh: I'm almost thinking you know, the flow trading with many, many screens, whether, you know, like this can be somehow optimized for human consumption and take some of the pressure off the human traders.
Maybe that's what comes in. I don't know. But I think, at this point, I think we're only limited by the limits of our creativity. That's it. Right? And, uh, human creativity has no bounds. As, uh, you know, history shows,
Andrew Chin: I see a, uh, question around, cultural change, right? And, um and how to [00:38:00] think through that and how to communicate. The impact of these changes. That is a, a sensitive one. Right. And it's definitely not easy. Yeah.
Victoria Averbukh: It's very important.
Andrew Chin: Yeah. And so you know from, from a leadership perspective, you know, this is part of it, technology is one piece of it. Cultural transformation and leading that change. Change management is also another important part. I think the first thing that we all have to do is, um, it comes from the top.
So what is your vision for how these tools are, are gonna work right from the very start? Victoria and I talked about these tools as augmenting humans, hopefully amplifying what humans can do today. So if you position it that way, hopefully every employee will see that look we're trying to help every single employee hopefully get better at their roles, right? That's important, but then you have to back that up. And how do you back that up? You have to train them, right? You can't just say that and not teach the employee how to use these tools and really leverage the tool to be more effective in their particular roles.
Right? So doing that training is the way you back up your words and say, [00:39:00] look, I'm really invested in you. I really wanna develop you so you can use these tools better. But I also continue to meet you as a subject matter expert. You still need to be a great accountant. You still need to be a great research analyst, but I'm gonna help you use these tools, And then you continuously train that individual and monitor their progress and then get their feedback, right? And, and then if you do that, it's a joint partnership. A collaboration to help the individual. That takes time, that takes resources. But I think that is a way that you can really, um, make sure that your employees are aligned with the vision that the broader organization has.
Victoria Averbukh: That's right. There's certainly some burden shifting maybe to the shoulders of more junior people to learn, to educate themselves, to like, you know, using the word upskill themselves too. Interpret and explain the intuition behind the tools to senior managers. And I think this is where we are beginning to see that convergence between [00:40:00] technical and non-technical talent.
And in my opinion, it's very important for junior people and also for senior people to recognize that. For the, for the very long time students who come out, you know, out of my pro alumni coming out of my program, which is, you know, effectively master in financial engineering, we were viewed as a geeks with formulas, right?
And then there was like other finance people who are, you know, the regular finance people. We were either parallel or even like at some point divergent sort of, right? Like there was no crossover in industries. I think this now we are merging, right? Like, I don't think you can be truly successful in this AI transformational finance, financial industries, AI transform financial industry without being able to cross over and explain the intuition, you cannot just rely on the model.
On the formula. And the same way for people who are not trained in technical terms, they need to be able to develop the, you know, [00:41:00] some sort of sense, some sort of language and ability to communicate with technical people more closely. Not just, relying on a project manager who goes and talks to, you know, my IT people.
So I, I think that is very, very important.
Andrew Chin: I, I think you highlight two important elements of this iron person vision that I was, I was saying, right. You need domain expertise.
Victoria Averbukh: That's right.
Andrew Chin: And then you need fluency around how to use these tools well. So both of those are critical if you want to be successful.
And interestingly, LLMs actually give our younger employees or younger, graduates, let's say an opportunity to actually, you know, do it much faster, Be because these tools really help you, decrease the, uh, learning process, right? So I, I used to think by being in industry for 30 years, you know, I have so much experience, so much insight that it's so hard, you know, for somebody to really erode my edge, but that's no longer true.
Recent grads can come out, they can learn a lot of things that I already know and gain some experiences you know, very quickly. And then [00:42:00] therefore that's why I, I talk about the traditional edge that we have being eroded over time by using these tools. So even for myself, I have to continue to learn.
I have to continue to, uh, to get better. It is no longer by, well, I learned X in school and therefore that's enough, and that just kind of continue, right? We have to continuously learn and, and programs like these and also, large language models can hopefully get us there.
Victoria Averbukh: We are really, I think, moving forward towards the world where everyone needs to have this sort of like, shared baseline of understanding. And uh, I think the sooner, especially junior people, but not just the junior people, the sooner professionals. On both sides, embrace that, I think the stronger they will be in their careers.
Chris Wofford: Thank you for listening to Cornell Keynotes. If you are interested in learning more about Cornell's new AI and finance certificate program, check the episode notes for details. I wanna thank you for listening, and as always, [00:43:00] subscribe to stay connected with Cornell Keynotes.