Two tech experts chat about how AI is shaking up the workplace, sharing real-world tips on how to ride this wave instead of getting wiped out by it.
Cornell Keynotes brings together Keith Cowing, executive coach and lecturer in Cornell’s MBA program, and Dan Van Tran, CTO of Collectors Holdings, to consider questions with huge consequences: What is AI doing to teams, talent, and labor? How can you leverage AI today to improve your product, your company, yourself, and your team? What happens if you don’t and your competitors do?
In a fast-moving conversation, Keith and Dan share how today’s sharpest leaders are adopting AI-native workflows, collapsing the talent stack, stripping out middle management, and rewarding leaders and team members with multiple specialties.
You’ll come away with a concrete playbook to sharpen uniquely human skills, pair them with AI superpowers, and deliver the kind of exponential impact today’s market demands.
The AI Talent Shift
Chris Wofford: [00:00:00] On today's episode of Cornell Keynotes, we are looking at how AI is reshaping our professional landscape and looking at the shift from knowledge work to AI augmented roles. And it isn't just another tech revolution we're talking about here. It is a complete re-imagining of how we work, how we learn.
Ultimately, how we lead. Guest host Keith Cowing, executive coach and lead faculty for Cornell's executive product leadership program, is joined by his former colleague Dan Van Tran, CTO of Collectors Holdings. Together, the two consider questions with huge consequences: What is AI doing to teams, talent, and labor?
How can you deploy AI today to improve your product, your company, or even yourself? And ultimately, what happens if you don't and your competitors do? So Dan and Keith tackle all of this and more as they talk about the essential skills, the team dynamics, and the mindset shifts that are needed to thrive in [00:01:00] this new era, where success isn't just about technical capabilities
but about combining unique human skills with AI-powered tools. A lot of what is discussed during the podcast is covered in an immersive two-day program in New York City, led by both Keith and Dan, called Executive Product Leadership: Unlocking the Potential of AI, which takes place on September 18th and 19th, 2025, at Cornell Tech in New York City. And you are invited.
So we've dropped a link in the episode notes for you to learn more. And now here's Keith and Dan.
Keith Cowing: So let me set the stage a little bit and then we'll jump in, in terms of why this is such an important shift for people to really be aware of. The world is transforming. Think about a few select innovations over the past 200 years that have completely changed societal dynamics, economics, and the skills that we need.
If you look back to the steam-powered locomotive that came [00:02:00] out in the 1800s, and then the invention of the tractor in 1892, it shifted the US and the world completely. In this country, 40% of jobs were in farming in the late 1800s, and now it's 1%. We have far more jobs, but everything shifted.
And so this led to industrialization. It led to people moving to urban environments and cities and fueled the Industrial Revolution. It also led to the need for new skills that hadn't existed before. Capital formation, investing in technology, and maintaining the technology meant getting bigger, managing money, and managing at scale.
So as the Industrial Revolution hit and factory jobs became a major part of the economy, you ended up with the first MBA program in 1908, so that you could learn how to manage at scale. You had the high school movement in the '30s and '40s; before that, not everybody had access to free public, high-school-level education.
And this was really a rollout of: we need new skills, we need new education in the country. And then the GI Bill, where [00:03:00] people returning from World War II could get free college education, was a major aspect of that as well. And then you had a couple of innovations that completely changed that, where in 1947, the transistor was invented.
Shout out to New Jersey, where I live; it's one of our claims to fame that the transistor was invented here at Bell Labs. And then in 1975, the programmable logic controller. Between the two of those, you could now automate and control factory work and machines in a way that wasn't possible before.
And from there until now, factory jobs just dropped tremendously, from more than half of the jobs in the country to a single-digit percentage. And along with all of this came the rise of knowledge work. The knowledge work economy in the US is by far the majority of jobs at this point, defined as people who are not mowing lawns or cutting hair or building houses, but communicating, writing documents, managing meetings, moving around ones and zeros.
And as a part of that, we've had four major shifts: the PC revolution, the [00:04:00] internet in the 1990s, the cloud, which really kicked off in earnest in 2006, and the iPhone in 2007. There had been mobile devices before that, but that was really a seminal moment that changed everything. And then 2012 started the age we're on the brink of now, which is AI.
So there were some really big deep learning breakthroughs, which led to OpenAI forming in 2015. The famous paper written at Google in 2017, called "Attention Is All You Need," defined the transformer architecture, and that paper turned into what GPT is today and what all of the generative AI platforms are based on.
And then, obviously, 2022 with ChatGPT. And so if you just look at these curves, we are on the brink of a complete shift in how we think about the value in the economy of the skills we bring to the table. And I think it's really important for people to put that in context. So when we talk about AI, it's not just another iOS app or another feature or another device.
It's a shift in how we have to think about our skills and how we build those [00:05:00] skills. And if you turn that lens on knowledge work: in the past, we brought our knowledge, our tasks (you're asked to do something and you do it), our judgment, our leadership, and our creativity to the table. But a lot of the work we did was based on our knowledge and our day-to-day tasks. Going forward, there's a total shift, where the value that we bring as knowledge workers is really gonna be about our judgment and our leadership and our creativity.
And if we're not bringing those to the table, then, you know, what is it that we're doing here? And so I think it's both a big shift, but also really interesting and really exciting if we approach it in the right way. And so that's what we're gonna jump into today on the show. With that as a transition and just setting the stage, Dan, I'd love to get a little bit of an intro to Collectors.
If you could share with the audience what it is, at a more detailed level, that you do, then we can get into some of these skills that people are developing and show a few examples; there are 100x improvements out there in the world today, which is just unbelievable. And we'll cue it up that way.
So [00:06:00] tell us a little bit about Collectors.
Dan Van Tran: Sure, I'll try to keep it short. A lot of times my intro takes 20 minutes because there's a lot of change happening in the company. But Collectors is, as mentioned, the leading authenticator and grader of collectibles in the world. We deal with trading cards, coins, video games, and other memorabilia.
The company, which was founded in 1986, was taken from public to private in 2021 because they had hit a ceiling in terms of their success. They were hitting limitations in how many collectibles they could process, but the demands of the pandemic world were very high.
A lot of people got into collectibles and started sending a lot of things to us, and so they needed a way to rebuild. And that's when I came in as the Chief Technology Officer, to try to reimagine a world that was more tech-centric within the collectibles space, both internally and externally.
And so from there, the company had been acquired for $850 million, and a year later, with some [00:07:00] changes in technology and processes and the help of a lot of other executives, we brought the valuation up to $4.3 billion. That was due to a lot of change, and not yet AI, but building the foundation for AI.
And so it actually taught me a lot about what was needed to not just rebuild the legacy technology, but actually set it up so that you could then transition to these new AI solutions and really enter this new AI world.
Keith Cowing: So one area of that I'd love to dig into, you talk about this transformation, you talk about the foundation, you talk about the skills.
So let's start with you, for example, with your skill sets. I have a hypothesis that we're entering an era of multispecialists. In the nineties, if you think about technology development, it was an era of generalists: there were six or ten engineers on a team, there was a manager who barely had a title.
They were just the boss, and everybody did everything, and you just built stuff. And then we got into an era of hyper-specialization, where you had a front-end engineer and a back-end engineer. And this is [00:08:00] with the technology lens, but I think a lot of different industries had that. You had a designer, a product manager, a user researcher, and all these people would work together and were so specialized; there were a lot of people at the table doing a lot of different things. And now what I see is multispecialists, where people are bringing multiple skills to the table. It used to not matter if you had multiple skills, because unless you were the founder or the CEO, you didn't have time to do two jobs.
And so it wasn't relevant. Now it is, because people are doing, not quite two, but I see one and a half a lot. They're doing one and a half jobs, and I think it's headed toward two. And my belief is if you can do two jobs, you're three times as valuable: for one, you're doing two jobs, and then you get another x on your 3x because there's one less person to negotiate with and you move faster.
And in this world of change, velocity wins. So taking that as a lens: if you have a one-two punch with two skill sets, maybe we can get personal for a second and share with the audience, what do you think your one-two punch is? What are the skill sets you're personally working on as you look forward and think about an AI [00:09:00] future? What's important to you?
Dan Van Tran: Yeah, that's a great question, and definitely a world of change happening. AI is equalizing everyone's skill sets across a lot of different areas, bringing everyone up essentially to almost entry level for every single role that's out there. For me personally, when I came out of school, I was a comp sci major.
I was actually a dual major: computer science and electronic media, arts, and communication. So maybe I started my duality of multiple skill sets then, in college. But I came out, and I was a mediocre engineer at best, slightly above average. And so I decided I needed to be a lot better at what was predominantly the primary part of engineering back then,
in the early and mid-2000s, which was back-end engineering. I wanted to learn how to build systems and design a lot of fundamental infrastructure. And so I surrounded myself with a lot of smart people to do that. And then front-end engineering came along, as you mentioned, all these new front-end toolkits, but they required a different sort of thinking and way of working.
And so I started to adapt to that. But by the time I [00:10:00] joined Flatiron, the second company I worked at, it was actually a startup filled with a lot of people who were polyglots. They knew a lot, had a lot of different skills, and came from a lot of different organizations where they combined those skills.
And it was just amazing to me to see an organization full of these sorts of diverse people. And I did not have that. I really had defined myself as a back-end engineer. And so at that point, I started from the bottom, cleared everything out, and said, I'm gonna learn from all these amazing people.
And I actually built a lot of skills in just being an entrepreneur and learning how to take an idea and build it using my strengths in engineering, but really just to understand what questions I should be asking and how I should be thinking about this, and if I needed help, to find the experts who could help me in those different ways; to really rebuild myself, instead of just an engineer, as a problem solver.
And so I think that, for me, because of the nature of that [00:11:00] organization, it crafted me in that way. But I do feel like now, with AI, everyone, no matter what their role is, is gonna have to have some degree of this level of thinking. To your point around creativity, leadership, and judgment: once the tasks and knowledge are offloaded from us, we don't have to worry about those parts as much anymore.
We now all have to, to take a concept from technology, shift left: move away from more of the hands-on tasks to more of the strategic tasks.
Keith Cowing: And so what is your one-two punch underneath all that?
Dan Van Tran: The one-two punch? I really think technology is still first and foremost, but beyond that, it really is that level of critical thinking.
I love pioneering and imagining the future, and so I really pride myself on being a bit of a visionary and a bit of a futurist in thinking through what might be possible, even beyond what might be available today, thinking forward to quantum computing or other aspects of technology. Because at the end of the day, AI is still a [00:12:00] tool.
It's still a hammer, and the problems, though, are still the real focus points. You may have a much better hammer now, one that also converts to a screwdriver or any other tool in the toolkit, but it's still a tool. You still need to have the right problems to solve, and at the end of the day, you're still solving them for people.
Keith Cowing: And when you think about that level of thinking, your entrepreneurial mindset mixed with your technical background, combined as a one-two punch, we still have a lot of the day-to-day tasks. I see a world where these tasks are largely automated, but right now we're in a weird transition where we know they're not gonna be valuable in the future.
We still have to do most of it right now. So how do you actually manage the day-to-day to invest time in building the skills that you know are gonna be incredibly important tomorrow, so that your one-two punch is still valuable in a world that has shifted?
Dan Van Tran: So, to be blunt about it, I think for myself and a lot of others at this point in our careers, we have decades of experience building things in the pre-AI world.
The part that I worry about is that AI makes it [00:13:00] easy for everyone to feel like an expert in a lot of different areas. But to your point, right now the technology just really isn't fully trustworthy. Even with specialized technologies, you still have to question the solutions and answers it's giving you.
And so you still have to lean on the fundamentals a lot of the time. For us as experts, it does accelerate us, because what it produces for you, you can quickly review and challenge. But someone who's brand new to the area they're exploring or asking a question in may not have that level of experience or instinct.
And so ChatGPT, or another AI solution, may give them something to move forward with, and then they just go ahead and blindly move forward with it. And if you've ever done this before, you know, I'm not a plumber, I'm not an electrician. I always run these things past an expert, and they almost always catch something: well, you could electrocute yourself there, or you could hurt yourself.
And so I think we're in a weird time where you do still need to lean on experts a bit. And for us, we're still fortunately in a time where, as experts, we [00:14:00] do have something to lean on. Now fast-forward a bit: at some point it's gonna catch up, and likely it's gonna be very quick.
If you fast-forward all the way to where AI, with quantum computing, thinks through everything and can solve everything, I do essentially believe it will get to that point. And people love asking me the question: what do you think will happen when AI becomes sentient and, you know, we have Skynet and all that?
Maybe it'll work out the same way as Terminator. Hopefully not. I actually think that what will happen is we will essentially become pets for the AI. So if you think about the world of animals and other things...
Keith Cowing: We're here to uplift the audience, Dan. This isn't supposed to be a scary show. What are we talking about?
Dan Van Tran: So the whole thing here is, I think people are so worried about AI replacing them. The ball's already started rolling, and whether it does or it doesn't, I don't think there's a world where AI just completely eradicates us. I think that we still will have a purpose and a meaning in the world.
Think [00:15:00] about the world of animals: we'd have the power to eradicate entire species or do other things like that, and in the past people have tried, but why would we do it? What would the actual purpose be? I actually think that if AI developed to a point where it could replace a lot of what we do,
I don't think it would want to replace us. I think, if anything, it would just leave and go to another planet or something, just leave us behind, and we're back to where we were. But honestly, I think it's more that it causes us as a society to rethink: how do we define our own purpose?
How do we give our lives meaning? And it will hopefully ground us back to interacting with each other, leaning more on each other, bringing the humanity back to things. Right now we're in this weird state where there's so much technology, people are staring at their phones, and it's just so much of what we do.
I think that the technology will evolve to a point where hopefully it frees us up by taking these tasks away and then allows us to actually then go back to communicating and collaborating and socializing with each other.
Keith Cowing: [00:16:00] I love that focus on being grounded in human connection. Storytelling is a timeless skill from being around the fire pit back in the day to now.
I think it's more important than ever to really be able to draw lines for people. And so you went through a bunch of stuff; let me tease apart a few different lenses there. One is the long term: what happens five years from now, ten years from now? What does it look like long term in the future?
Then there's what's right around the corner, you know, 12 months away, 24 months away. And then there's what you can do right now. If you take the what-can-you-do-right-now lens: if you're coaching somebody, I think the OODA loop, developed in the Air Force in the 1950s, is really effective here. It's the old story from Top Gun, where somebody can have a bigger, better fighter jet, but if your pilots are better at self-awareness, identifying what's happening in real time really quickly and then adjusting, they can fly circles around the competition. And I think that's really true for humans and teams and people right now, where OODA stands for observe, orient, decide, act.
And so we don't [00:17:00] know where the world's gonna be in five years, but right now we can see what happened last month. We can see what happened today. We can orient ourselves, we can decide, we can act. And the people that do that the fastest are really gonna be successful. It's the old Charles Darwin quote of, it's not the most intelligent of the species that survive nor the strongest.
It's the one most adaptive to change. And so in a world of adapting to change, how do you coach people on the ground floor right now to be able to, on a daily basis, practice some of the skills that are gonna help them with that observe, orient, decide, act so that they can build up this skillset and just be ready for it?
Not needing to know what the future looks like, but knowing that they're taking action today.
Dan Van Tran: I'll start with the two things not to do. The first is not embracing AI in some way, really fighting it, or just not understanding the changes that are happening. And I understand there's a lot that's coming really quickly, but there are a lot of things that you can do to just experiment; we'll get into that in a minute. The [00:18:00] second thing is leaning too much on AI: treating it as if, okay, I'm gonna have it write my emails and do all these other things, and not really challenging or questioning it. Or, even if you're doing it in a way that's effective, not changing what you do fundamentally, because that's the other failure case, where you're allowing AI essentially to take your job. So what you really should do, first, and I challenge everyone to do this no matter what their role, responsibility, or job is today: think about which of the things you're doing today, personally and work-wise, may be replaced by AI or are already starting to be replaced by AI.
And start to identify some of those. Then, the same way that you would coach someone through leadership or through their career, think about your own strengths. What are you strong at? What are your talents? And then think through, with AI as a tool you could use to help you,
what could you offload to AI, and what do you retain yourself, where hopefully you're leveraging your [00:19:00] strengths to do something, with the help of AI, better than you can do alone? You hear about this a lot: we're in the era where we're gonna see the first one- or two-person billion-dollar company. I don't know if it's gonna be next year, two years, five years, but we're now in that era and it's gonna happen.
And if that's the case, that person is obviously not gonna be doing everything themselves, but they're gonna be leveraging their talents, and they likely have redefined themselves already to really lean into this. There's an easy way for people to start to make this sort of transition, especially now, where we're still in the gray area.
There's still a lot of time. If you haven't used ChatGPT or any of these LLM tools, and I've still met people who haven't, definitely start to do so. The easiest one, I would say, outside of ChatGPT even, is something like Perplexity, something that actually replaces Google searches. Even Google itself has started to embed and infuse AI, because they've [00:20:00] realized that they're about to be antiquated as well if they don't replace the way that they're used. But in the future, we're not gonna have to do these closed searches, where you do a search and then you still have to hunt down the website or the materials you're looking for.
You'll be able to just engage with essentially an assistant: hey, I wanna learn more about this, or look more for this. Perplexity actually is research-based, so it will link back to the sources, but it's just an easy step from where we are today with Google searches toward using AI to help you with something that's relatively mundane.
There are a lot of other things, depending on what area or job you're in or what else you want to do; there are already specialized AI tools that you can leverage there too. But I'd suggest doing something like that to start.
Keith Cowing: And we're talking about purposeful practice: getting used to things, working with tools, improving your skills. On that thread,
purposeful practice is something I find fascinating right now, because you can simulate things in ways that were never possible before because of [00:21:00] AI. And there are other technologies as well; it's not just AI. Look at a couple of examples. The eFoil was developed in surfing, where it used to be that if you spent three hours surfing, you'd get about five minutes of on-wave time, maybe as little as two, because you paddle all the way out,
you wait for Mother Nature to send the right wave, you hope you're next in line, you wait in line, you hopefully catch the wave, you get about five seconds on the wave, and then it's all over and you do it all again. You get about five minutes of practice for three hours. And they invented these eFoils, where you can go out, and it looks like a wakeboard, but you're just on your own.
It's an electric surfboard, and you can go and surf. It is not the same as being on a wave, it is simulated, but you can get a hundred times the practice reps and the feedback loops for your body to develop the ankle strength and the balance and the momentum. And you still need the waves; it doesn't replace them.
But if you're not getting that 100x repetition and feedback, you just can't compete anymore. It completely changed the game. I saw this recently with Duolingo Max, where [00:22:00] now they have an AI conversationalist. You can put yourself in a scenario and say, I wanna go to Spain and be at a restaurant, and it'll put you in that situation, and you talk to an AI bot. It is good enough to simulate it. It's not the same as immersing yourself, but in a weekend you could crunch as much as you used to be able to in a whole year of high school Spanish.
And that's really incredible. It wasn't possible before. There are these big, literally 100x unlocks. What kind of 100x unlocks are you seeing?
Dan Van Tran: I'll give you an example along the lines of what you were just saying. A few weeks ago I was on a panel, and I wanted to practice.
I knew roughly what would be discussed and the types of questions. So I actually gave ChatGPT the LinkedIn profiles of the people that were gonna be on the panel with me, as well as the host. I gave it the list of tentative areas and questions, and I just simulated the panel. I said, okay, act as if you were the host, act as if you were all the other panelists, and let's engage in conversation. I put it in voice mode, and for a good portion of a [00:23:00] day, I just rehearsed going through this panel over and over, and each time it was different. So I had to think on my feet. But by the time I got to the actual panel, I was so prepared. I had no nerves. I just went on there, and every question that came out, I had anticipated or had an idea of what I wanted to talk about.
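Dan's rehearsal setup boils down to a prompt-construction step. Here's a minimal sketch of what that might look like, assuming the chat-message list format most LLM APIs accept; the host name, panelist bio, and topic below are hypothetical placeholders, and a real session would hand these messages to a model in voice or text mode:

```python
# Build the system prompt and message list for a simulated panel.
# All names, bios, and topics are made-up placeholders.

def build_panel_simulation(host: str, panelists: dict[str, str],
                           topics: list[str]) -> list[dict]:
    """Return a messages list asking the model to role-play a panel.

    `panelists` maps each name to a short bio (e.g. pasted from a
    LinkedIn profile, as Dan describes doing).
    """
    roster = "\n".join(f"- {name}: {bio}" for name, bio in panelists.items())
    agenda = "\n".join(f"- {t}" for t in topics)
    system = (
        f"Act as the host, {host}, and as every panelist below. "
        "Ask me questions in turn, react to my answers, and vary the "
        "questions on each run so I have to think on my feet.\n"
        f"Panelists:\n{roster}\n"
        f"Tentative topics:\n{agenda}"
    )
    return [{"role": "system", "content": system}]

messages = build_panel_simulation(
    host="Alex Rivera",
    panelists={"Sam Lee": "CTO of a fintech startup"},
    topics=["AI and the talent stack"],
)
```

Rerunning the conversation from the same seed messages gives a different session each time, which is what makes this kind of rehearsal repeatable.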
It was fantastic. And I've done role-playing before; I've practiced with other people prior to going on a panel. This was by far the most efficient preparation I've ever had. A lot of the other 100x unlocks I've seen depend, again, on the area. For technology itself, we're getting to a point with tools like GitHub Copilot. If you haven't heard of it, it's a tool from GitHub, which is a code repository where you submit your code base.
They developed an AI-powered tool that actually writes code for you. But the thing is, it's trained on open-source code, code that's publicly available. And so it is, to be blunt about it, mediocre. It's basically middling. [00:24:00] It takes the average of all the code that's out there, but it gives you a pretty good starting point.
Keith Cowing: So, you're telling people, Skynet is not here yet.
Dan Van Tran: It is not yet. But soon. Soon. The thing beyond that: there's another startup that Collectors works with called Augment. They developed a tool that basically learns the context of the code you're working in.
And so it can answer specific questions: tell me more about what this function is doing, and it'll actually scan the rest of your code and tell you what it's doing. You can also ask it to help you write things and complete code or make certain types of changes, and it will do so, again, in the context of your actual code.
And so you combine this with the next level of AI tools out there: agentic technology using AI agents. Just as a quick background here, there are two types of AI systems available today. One is more workflow-related: inputs and outputs. That's more like ChatGPT: you give it a prompt, it gives you an answer.
You then have [00:25:00] AI agents, which you give access to data and to APIs and tools. When you give an agent a prompt, a task, it will essentially use a combination of the data and tools, however it sees fit, to solve the problem. So it better mimics what a human or an expert would do, and that's the next evolution of AI.
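The workflow-versus-agent distinction Dan draws can be sketched as a loop: the model repeatedly picks a tool, observes the result, and stops once it can answer. A minimal sketch; `fake_model` is a scripted stand-in for a real LLM, the tools are toy functions themed on Dan's collectibles domain, and none of this is any real product's API:

```python
# Minimal agent loop: the "model" chooses tools until it emits a final answer.

def lookup_price(item: str) -> str:
    # Toy tool: pretend price database.
    return "$120" if item == "rookie card" else "unknown"

def grade(item: str) -> str:
    # Toy tool: pretend grading service.
    return "PSA 9"

TOOLS = {"lookup_price": lookup_price, "grade": grade}

def fake_model(task: str, history: list[str]) -> dict:
    # A real agent would ask an LLM which tool to call next,
    # feeding it the task plus the observations so far.
    if not history:
        return {"tool": "grade", "arg": "rookie card"}
    if len(history) == 1:
        return {"tool": "lookup_price", "arg": "rookie card"}
    return {"final": f"Graded {history[0]}, valued at {history[1]}."}

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        decision = fake_model(task, history)
        if "final" in decision:
            return decision["final"]
        result = TOOLS[decision["tool"]](decision["arg"])
        history.append(result)  # observation fed back into the next step
    return "gave up"

print(run_agent("appraise my rookie card"))
# → Graded PSA 9, valued at $120.
```

The contrast with the workflow style is the loop itself: a ChatGPT-style call is one prompt in, one answer out, while the agent decides its own sequence of tool calls before answering.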
So you combine that with the code writing, which a lot of different people have already done. Devin is a tool that is agent-based, but even Augment and a lot of these other tools are all agentic now. It basically simulates entry-level engineers. And so these are the things where, truly, I actually mentor a lot of college students and a lot of people who are new to the industry,
and I do worry about their jobs, 'cause from day one, what they came out learning and were prepared to do in the workforce has evolved. Now I'm trying to coach them to say, look, you have to understand that you now work alongside these AI agents. Your job is not to write the code; it's to learn how to use the AI agents and guide them.
So essentially you have to start out as an MBA in [00:26:00] engineering, without the MBA: training them on how to coach and mentor and use these AI tools. I think that we will see 100x in terms of throughput, and a lot of the technological jobs, the software jobs, are gonna evolve very quickly.
Keith Cowing: And tying that back to a previous thing that you mentioned: I love the example of you preparing for the panel in a rapid way that never would have been possible before. They even have tools where you can practice public speaking: you give your speech, and it will look at your tone and your intonation and your timing, look at your topics, and decide how compelling it might be to an audience.
It is simulated, it is not the same, but it used to be that you required people's permission to do a lot of things. It took somebody's permission to get on stage and do a presentation, and you only got so many opportunities to do that. I think back to something as simple as setting objectives and key results as a product manager in those Flatiron days. I remember the first time I did it, I pre-wrote too much of it, and I came to people and they're like, what?
You didn't even have the respect to ask us what [00:27:00] we thought before you wrote down these objectives and key results. And then a quarter later, three months later, I came and said, okay, blank slate, let's start from scratch. And people are like, what? We're in trouble. Our product manager doesn't even know where we're going.
It's like, all right, finally another three months. The Goldilocks of, Hey, I've got the objectives, but not the key results. We have our strategic direction down. We know where we're going, but there's a lot of details we'll fill in together. Let's go. And then that resonated. But it took me nine months to get there.
And nowadays, I think you could simulate that: say, play a skeptical group of, you know, engineers and data scientists and really smart people that are mission-oriented, that are in a room, and I'm gonna present this. What kind of challenges could I face? And you could do that in a weekend. It wouldn't be the same, and it wouldn't have gotten me there the same way, but maybe I get there in a week instead of nine months.
And I think for people that have spent decades building up their skillset, because you got dripped these opportunities, we have to change our mindset. The folks that are learning today, that drip is not the same. You can turn it into a fire hose if you want to.
Dan Van Tran: Yeah, it's extremely powerful. Uh, I think that the line that you have [00:28:00] to be careful about there is making sure that you use AI as a way to advise and mentor you, but not as a decision making tool yet for most things.
I do think that we're now at a point, I was reading this morning, where something like 14% of decisions will be automated by AI within the next couple of years. Right now it's zero, but we're getting there and it's getting better. Right now we are not yet there. And so do not trust AI to make decisions. It'll drain your money.
I think you could use it to advise you and get input. But yeah, to your point, I think that especially for people who are aspiring to be leaders, and at this point, given where AI is, we all should aspire to be close to leadership or some aspect of leadership, I think that it can help to prepare you for that, even if you have no connections or anything else. It's, again, a great equalizer in that way.
Keith Cowing: And you mentioned trust there and whether or not you trust the ai, that's a great segue to our next topic, which is about trust within teams. And as part of that, a big part of trust is listening and understanding what your team, what your audience, what they care about, not what you [00:29:00] care about.
And so, to put our money where our mouth is, we'll do a poll right now and we're gonna put for anybody who's live a QR code on screen and you can respond to the poll. And we're going to ask a very simple question of which aspect of AI implementation concerns you the most as an individual. And you have a few options to choose from.
And we'd love to see the results, 'cause we'll use that to really thread through what we talk about in the second half of this conversation, because it really is all about listening to people, not what you care about, but what they care about. And that's a different orientation as a leader. So Dan, we're starting to get some results in here. What are you seeing?
Dan Van Tran: Definitely, the top two themes that are coming up are data privacy and security, and the learning curve. And data privacy and security is an interesting one, because there's been something like a 20 to 23% increase in AI-related legislation worldwide since 2023, the past two years. [00:30:00] And Europe, I think, was one of the first in the world to actually develop a wholesale set of AI and LLM rules and regulations around privacy.
That makes sense given that they were also leading with GDPR and a lot of data privacy in general. But you're starting to see states like California and a lot of other states coming up with specific ways to help protect their people around data and user privacy, healthcare data, deepfakes, things along these lines.
It's a very scary thing. And then on top of that, you now have a lot of security risks. Back in the early days of ChatGPT, there were some Amazon engineers who used it to help them code, but they didn't change the little checkbox to keep those inputs from becoming part of the training set for ChatGPT.
And so for a period of time, you could actually see some of AWS's code in ChatGPT. And so data privacy and security is a really big thing. As we ask people and encourage people to [00:31:00] use a lot of these AI tools, we also need to help them be mindful around the security aspects and how to protect themselves.
So at Collectors, one of the things that we did early on was actually build some policies around this. We also use automation to detect when people may be inputting things into ChatGPT, especially company information, and we intercept it. We actually black out the page so they can't submit it.
But we, again, we've been thoughtful about it. A lot of people have not. And this is part of that change transformation: we're gonna require people to be educated about it. And to be honest, the people who are building the tools, they do have a large amount of responsibility.
And I know they're working with lawmakers and others to help regulate that. So on LinkedIn and other social media networks, for example, you will see the watermarks. ChatGPT does add a watermark when you use Sora, the video generation platform that they have, or generate an image.
With DALL-E, they actually have watermarks that a lot of times are imperceptible, but these [00:32:00] other social media platforms can detect them, and they label that it was an AI-created piece of content. And so that's nice, but I think in the meantime, we can't trust that these things are there.
Definitely we need people to be a little more mindful about it.
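Dan doesn't spell out how Collectors' interception automation works, but the idea, scan outbound text for company-sensitive content before it reaches an external AI tool, and block the submission when something trips, can be sketched roughly like this. Everything here is hypothetical: the patterns, names, and block behavior are illustrative, not Collectors' actual system.

```python
import re

# Hypothetical patterns a data-loss-prevention check might scan for
# before text is submitted to an external AI tool. Real deployments
# use much richer classifiers; these regexes are purely illustrative.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),         # explicit markings
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),         # AWS-style access key IDs
    re.compile(r"(?i)internal\.example\.com"),   # internal hostnames
]

def flag_sensitive(text: str) -> bool:
    """Return True if the text matches any sensitive pattern."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def intercept(prompt: str) -> str:
    """Block the submission when sensitive content is detected,
    mirroring the 'black out the page' behavior Dan describes;
    otherwise pass the prompt through unchanged."""
    if flag_sensitive(prompt):
        return "BLOCKED: possible company data detected"
    return prompt
```

The point of the sketch is the policy shape, a detect-then-intercept gate in front of the tool, rather than any particular detection technique.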
Keith Cowing: And data privacy and security was the top one, and then learning curve and job security were right behind it as the big ones. And we'll touch on each of these. A big part of what you were just talking about is trust. I don't know if you saw it, I thought it was mind-blowing:
a recent Anthropic article that came out where they were working on their bleeding-edge model that's not released yet, and they saw that if they went to turn it off, it would try really nicely to keep itself from being turned off. And they gave it information about an engineer on the team, hypothetical, fake information, just to test it, that he was having an affair.
And then they went to turn it off, and it tried all the nice routes, and at the end it was willing to throw that person under the bus and blackmail them and say, I'm gonna release it to [00:33:00] everybody that you had this affair if you turn me off, so leave me on. That changed my mindset a lot about what I put into these things, and that just came out.
I mean, these things are changing all the time, so I do think we have to be very careful about it. But one thing that's interesting there, and this is Anthropic's approach as well to build trust, is to say, hey, sunlight is the best disinfectant. Let's just tell you exactly what's happening. Let's go deep.
Let's not pretend there are no risks in these things. Let's tell you what the risks are, let's share it, and we're gonna share how we're trying to combat it, because the cat's outta the bag. You're not gonna prevent this stuff from happening. But we can be open about what's going on, we can talk about how we're gonna try to protect against it, and we're gonna try to arm you so that you can be thoughtful about it.
And I thought that was really interesting. I don't know if you've seen other things on this front that viewers should be aware of, but there's been some really crazy stuff that's come out recently.
Dan Van Tran: Yeah. The thing that we have to keep in mind is that it's not some type of magic.
It's trained on human content and humans, and so it's gonna be a reflection of us as a society, which, if you look [00:34:00] back in history, includes a lot of things that we are ashamed of. Just like humans in general, it's both beautiful, but it can also be very ugly. It's something that we need to keep in mind: it's not going to necessarily reflect the best of us. It's just gonna reflect us. And so we need to treat it accordingly and not act as if it's the best decision maker or a silver bullet for things. But I do feel like, as we learn to incorporate it into our lives, we will find what that line is and how much trust to have in it. Right now,
we should be skeptical of it and a little bit distrustful of it. But that's healthy. And I think that, again, it's part of the journey of figuring out how we incorporate it into our lives to make things better, to make society better, but not to treat it as if it's going to be our savior.
Keith Cowing: A lot of what we're talking about here is trust. And so I'd love to transition this to the next part of the conversation, which is on trust.
And I think that we're at a seminal moment where trust has always been important, but it's now [00:35:00] that much more emphasized. And there's a very specific reason, which is that trust unlocks the truth. If you have trust on the team and you have collaboration, and people are willing to share information, then you have all the information.
With all the information, you can make good decisions. Without all the information, your decisions will perpetually suck, because you aren't informed, and only with trust do you get all of the information. And the people with the fastest iteration loop, I think, clearly win in this environment. So it's epically hard, because you're up against fear and discomfort, and people on this call said, hey, job security bothers us.
And if people are worried about job security, then it's really hard to get underneath the covers and really allay the fear, such that you can have trust, such that you can make good decisions, such that you can actually make sure they still have jobs in the future. And so it's a self-fulfilling culture thing.
And so I'd love to hear from you, what are you seeing the most successful leaders do right now that builds trust even in the face of rapid change and uncertainty?
Dan Van Tran: The thing that was always a problem, actually, in a lot of the [00:36:00] organizations I've been in, were the people who used information hiding, the fact that they had information that they weren't sharing with others, as power.
This almost completely goes away with AI. With tools like Glean, even though there's role-based access control, so you only have access to what you would've had access to, you suddenly have access to all the information that everyone else in a similar role would have access to. A lot of managers can't really hoard the information as easily anymore.
And so the ones who are leaning on that to control people, or to be in their positions of authority, or to be irreplaceable, that now goes away. And so now it's an even bigger forcing function for people to go to where they should have been focusing all along, which is looking at your strengths and looking at how you can provide value in an organization.
And a lot of times, it's by uplifting people. So, for example, at Collectors, one of the things that we've done for over a year now is concentrate on helping everyone, not just people in product and tech but the entire [00:37:00] organization, to learn how to use AI tools. Because this is one of the things that everyone's gonna have to learn how to do.
It's just like using a computer, learning how to type, learning how to write emails. Everyone's gonna have to learn how to use AI tools, and we want our people to be part of the journey with us, not for us to move away from them with these tools. And so we're hoping that by empowering them on how to use these tools, we will help them to actually be more comfortable with it.
But then beyond that, we want them to help us pioneer how to use these tools. So this year we started the second part of that journey, which is building an innovator's incentive program, to allow people, no matter what role they're in, to provide ideas of what we should invest in, or to propose novel problems for us to solve, even if the AI tool may or may not exist.
Just to start highlighting: this may be an area where, if there were a piece of technology or an AI tool that could help, it would help us greatly. Because now they're in control of how that shift happens, where it starts and how it evolves. And they know their roles better than we do as [00:38:00] well. But as a result of this, I think that there is less mistrust of how we're using AI. We're not using it to displace people or remove them from their roles. We're actually using it to help them be better, faster, more organized, more efficient, and to focus on the more strategic things, or the things that AI can't yet replace. And as long as they continue to think in this way, they have now developed the skill that will make it so that they are very unlikely to be replaced by AI.
And so at Collectors, we're trying to make that happen. Again, there's still people who are change-resistant or who aren't gonna come along for that ride. That's fine too. There's still organizations where they'll be happy, where they can continue doing other things. But we're hoping that,
Keith Cowing: we hope. We hope.
Dan Van Tran: Yeah. We hope, I mean, we will see. It'll always be a little like, going back to, well, it's a little different, but almost like people who appreciate old technology, phonographs, cassettes, things like that. But I do feel like people will, and have been, really creative and really embracing this [00:39:00] change, which has helped the organization for sure.
But more importantly, the individuals. It's what's made it so that they do have more trust in us as leaders and where we're heading.
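The Glean point from a moment earlier, enterprise search that respects role-based access control but surfaces everything a role is entitled to, is worth making concrete. This toy sketch is not how Glean is implemented; the roles and documents are invented purely to illustrate the mechanism Dan is describing.

```python
# A toy model of role-based access control (RBAC) in enterprise search:
# every document carries the set of roles allowed to see it, and a query
# returns only what the caller's role permits, but everything that role
# permits, which is why information hoarding stops working.
DOCS = [
    {"title": "Q3 roadmap",     "roles": {"pm", "eng", "exec"}},
    {"title": "Comp bands",     "roles": {"exec"}},
    {"title": "Oncall runbook", "roles": {"eng"}},
]

def search(role: str, docs=DOCS) -> list[str]:
    """Return the titles of every document the given role may access."""
    return [d["title"] for d in docs if role in d["roles"]]
```

An engineer querying this index sees the roadmap and the runbook, never the comp bands; but crucially, they see those documents whether or not a manager chose to forward them.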
Keith Cowing: Speaking of all that, I wanna take a couple questions that have come in from the audience. So Jorge has a very interesting question: do we really need technology to communicate and have social skills?
And I think this is fascinating, where if we're not careful, by accident the technology could make us less human. But if we're very thoughtful and intentional, we could actually use the technology to get the non-human stuff out of the way and focus on being more human. And I'll give you an example with my coaching, where it's not even just AI: I invested in having a teleprompter so that I can look and make direct eye contact with my coaching clients.
And then I'll turn all my other devices off and I'll have AI take notes. During the meeting, I'll also have AI do prep for every meeting and look through the last year's worth of transcripts and identify what the patterns were in those conversations and where I can do better as a coach and how this [00:40:00] person has progressed towards their goals and what kind of prompts might be helpful.
So very quickly I can prepare and then I can sit down and just be in the moment and truly listen and be more human and be more focused on, okay, what's going on in your company? What's going on with you? And how do I present different perspectives that might help you connect dots so you can be more successful in your job?
And that's the essence of making that human connection and maybe the storytelling to pull it all together. And I'm using AI to get the other stuff outta the way so I can focus on being human versus doing everything else at once, and then I'm less in the moment. And so that's an example from my side.
I'd love to hear how you would react to that. Do we really need technology to make us better humans at the end of the day? And communication and social skills.
Dan Van Tran: What we need to be better humans is more interactions with humans. And I've seen the same thing: I run a side boutique consultancy, Myth MVP.
We help organizations to navigate this changing landscape, building new technology and helping founders to learn what to build and how to build it, [00:41:00] also helping companies to learn how to scale and embrace AI. But even with that business alone, it's mostly me and a couple of other people, and I'm making heavy use of AI for multiple things:
writing product requirement documents, writing SOWs, communicating back and forth with organizations, helping to navigate new technologies I may not have used before but where I have an idea, at the high level, of the principles and best practices they may be missing. And to your point, along that same vein, the reason why I use all these tools isn't just to offload work and not interact with people as much.
I use them so that I can take a task that would take me hours and do it within minutes, and then spend the rest of my time actually either helping more people out and meeting with more people, or talking to the team and going over what was built, what the AI helped guide me toward in terms of some choices, and then doing the human aspect of, let me talk to you about some of these options and teach you how to think about them.
And so I think that, again, going back to [00:42:00] something I said earlier, AI is great as long as we use it as a way to free ourselves up to spend more time with each other. I really do think that that is the way things should go. Right now, we still sit in front of our computers programming and sending emails, and even a lot of the time in meetings goes to note taking or action items and things like that.
I think that once AI replaces a lot of these tasks, and it's already almost there, we should just be spending more time talking and imagining and thinking about: what should we be doing next? What challenges should we be solving? Getting to know each other, and going out and doing things.
Let the AI code, we'll go out to dinner. Like these are the sorts of things where I think that hopefully this is the change we'll see as AI starts to take over more of these tasks for us.
Keith Cowing: Yeah, that's moving to higher ground. And you've seen that in a bunch of different fields. I was listening to a fascinating podcast about the history of CFOs and a hundred years ago, your whole job in the accounting department was literally just to write down the numbers.
'cause there were no PCs. And if you got the numbers [00:43:00] right, you won. That was the whole thing. To sum up all the numbers and do the arithmetic at scale. It was incredibly hard. And then technology and PCs came along and that got a lot easier. And then spreadsheets came along and now you can run scenarios.
Now you have financial planning and analysis teams to say, well, what if we invested in this and what if we leased this equipment? Or what if we bought this equipment? What if we bought this company? What if we raised money? And you can run all these scenarios. And that took a lot of time and energy and was hard.
And then, now with AI, you can actually automate a lot of those scenarios. And so you're just moving to higher and higher ground, spending much more time thinking strategically, or maybe combining a COO-type role with a CFO-type role. And so I think all environments are moving to higher ground. With my background in product management, a lot of that's gonna be about customer discovery: really understanding what people need and how to make them successful, rapidly giving them product to engage with.
And a lot less of the internal wrangling that was built on assumptions that, you know, may start falling apart, that we don't need anymore. And so I think if you approach it in the right way with the right mindset, it's actually super liberating and exciting.
Dan Van Tran: [00:44:00] I agree. The last example I'll use there is, there are teams in technology called platform teams.
They help develop the systems that you build your software on and deploy it onto. And a lot of times these teams are internal. They're mostly just focusing on building the technology, talking to some of the engineers, but really just focused on the tasks. And there are tools now, there's one called Tessell, T-E-S-S-E-L-L.
They're a great startup. They build a lot of automation around this kind of work. One of the things that these teams often have to do is: oh, the software's getting bigger, there are more consumers, we have to resize it, we have to shift things, scale things. The tools take care of that for you.
So now, as a result, our platform team is actually going around talking to more of the engineers and understanding the business problems. And it's along the point of something you and I, Keith, have talked about before: a lot of the engineers, they're no longer just engineering. They have to learn product and sales.
They have to understand the business more because that's where they're now shifting to that higher ground. They now have to understand the problems that they're solving at a more [00:45:00] fundamental level, which is great because now instead of just focusing on building technology, they can focus on how can we build the technology more efficiently or understanding the problem so acutely where they can propose different technology to build instead of what was originally proposed that would better solve that problem.
And so I do feel like this is a great move in terms of where AI is helping to shift a lot of these organizations.
Keith Cowing: Another question coming in I think is fascinating: how can you have trust if you cannot determine what data is real or fake? This is coming from Joe, and this is one of those ones where I do not think there's a silver bullet, but I have a framework for trust in general that I think can be really helpful for people, which is this: trust psychologically has been broken down into four distinct components that come into play when you decide if you trust somebody. We could start one step earlier, determining if the data's fake or not,
but let's say Dan and I were deciding if we trust each other. The first component is integrity: what are your values and what are your principles as a human? The second component is intention: in this environment, in this moment, what are you optimizing for? What are [00:46:00] your goals? And are they aligned with other people's?
And then there's competence: do you show rigor, and are you actually good at what you do? And then the final one is consistency: do you do what you say you're gonna do? And the data one is crazy; I would love to get your thought on that. I'll start with the human one first, which is: in an environment where we don't know what we can trust
And what we can't trust. Let's just use that as a premise for a second. Thinking about how people perceive your behaviors, your product launches, everything along those four lines can actually really help you identify behaviors that erode trust and behaviors that build trust. Because we do not have certainty.
That is a fallacy; we don't have it and we can't give it to people. And people crave certainty, but we can't give it. But what we can give them is clarity. Clarity on: this is what I stand for, this is what I'm optimizing for, here's the goal. I have competence; I'm using my rigor to say, here's how I've assessed the situation.
Here's the context, and here's the knowns, and here's the unknowns. And guess what? There are unknowns. And this is a calculated bet. And it may work and it may not, but this is why we're doing it. This is what we believe in. And you're not always gonna have bets [00:47:00] that pay off, but you should never have a bet where you can't clearly define why you're making it and what you're betting on.
And then the final one, consistency: it's almost impossible to have consistency with results all the time. But you can have consistency with, this is what I'm gonna do next and this is why, and then I do it. And so you have consistency with your actions, let's call it mini promises. And then long term, you hope that builds up and drives actual outcomes.
And so I think when people think about it that way, it can really help them orient how they deal with information. And I do think there's a lot of ways AI can help with that because it could do a lot of the extra work that we used to not get to on a Wednesday night. You may need to make a decision, you just make a decision, you post it on Slack and you're done.
As opposed to doing the extra work of collecting everybody's input and writing a document and sharing it out and sharing the context other people don't have, so they understand why you made that decision. They don't just connect dots in their head with some scary scenario that isn't true and the AI can do a lot of that work to help be more transparent.
I think that helps a lot. And so that's a human view. On the data side, you know, this is your world and expertise. How do you even think about approaching that, let alone solving that?
[00:48:00]
Dan Van Tran: Yeah, for a lot of it, don't trust it. You cannot just trust what AI's telling you, right? But in general, just as an anecdote, we are testing a few systems at Collectors to help us navigate our financial data.
There is so much of the finance team's time that's spent building up forecasting or the quarterly earnings reports and things along these lines, and a lot of that should theoretically be able to be automated. However, it requires a hundred percent certainty and consistency with the data; it needs to be exact.
And we are seeing that it's right 80% of the time. 80% of the time, that's pretty high. At the same time, I'm not going to stake a company's success and its financial reporting on being accurate 80% of the time. And so one of the things that we just learned is that now's not the right time for that sort of use case.
Question everything. But the thing is, Keith hit the nail on the head. A lot of what you should be using AI [00:49:00] tools for, and just in general, regardless of whether it's AI or something else: you use all this data to help you determine the direction that you're moving, what to say no to and why, to really think through the strategy.
But at the end of the day, with the data, don't trust the exact numbers; trust the direction those numbers are going. Today, when you see a survey or an article saying something like three quarters of people do X, Y, Z, how many of us actually dig into the data? But when you do dig into it, you realize, oh, it's a sample size of like 50 people, and it's the wrong demographic.
So even that level of data we should have always questioned to begin with, and should continue to question. However, it doesn't mean that these data points aren't helpful toward your instincts around direction or decision making. Just don't rely on the exact data.
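Dan's 80% anecdote suggests a simple discipline behind the decision: backtest an AI system against known-correct answers and only automate when its accuracy clears the bar that the use case demands. A minimal sketch of that gate, with the numbers and threshold entirely made up for illustration:

```python
def accuracy(predictions, actuals) -> float:
    """Fraction of predictions that exactly match the ground truth."""
    matches = sum(p == a for p, a in zip(predictions, actuals))
    return matches / len(actuals)

def should_automate(predictions, actuals, required: float) -> bool:
    """Gate automation on a minimum backtested accuracy. Financial
    reporting might demand essentially 1.0; a drafting assistant
    can tolerate far less."""
    return accuracy(predictions, actuals) >= required

# Hypothetical backtest: five forecasted figures vs. the real ones.
preds   = [100, 205, 310, 400, 512]
actuals = [100, 200, 310, 400, 512]  # one mismatch, so 4/5 correct
```

With this data the system is 80% accurate, which passes a 0.8 bar but fails a 0.99 one, which is exactly the distinction Dan draws: good enough to advise on direction, not good enough to sign the quarterly numbers.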
Keith Cowing: Now, taking that: we talked about individual skills, we talked about how to prepare for the future, we talked about some components, we talked about trust. Now let's get to the final part of what we [00:50:00] wanted to talk about today, which is probably the hardest, which is mindset. And I have an analogy for this, which is: it used to be that in your career you started out looking like a growth stock, where everything was about learning in your twenties and earning in your thirties.
It's not about the decades per se, but you learn and you iterate, and you're a growth stock. And then you get more and more valuable based on your personal growth. And then later on in your career, you look more like fixed income where you get dividends and interest and you can reap the rewards of your past success because you've built knowledge, you've built industry expertise, you've built relationships, et cetera.
And you can grow if you want to, but you don't have to. And I think we're in a world now where we have to at least play out the scenario of: what if those dividends and interest went away, and we're only as good as our next move? We were growth stocks, and then we became fixed income, and now we're back to growth stocks, where we don't have a choice.
We can't reap the rewards of our old knowledge because it's no longer relevant. And that's very hard to get your mind around. But if you can, I think it's a huge unlock. And it's not always gonna be true, but I think there'll be pockets of it where it's [00:51:00] probably more true than not. And if we have that mindset of, okay, we're beginners again, let's go,
that's exciting. As opposed to, where are my dividends? Maybe they're not coming, so let's focus on the other stuff. And with that as a premise, I'd love to hear from you. Everything is learnable, but not everything is teachable. And so sometimes you have to facilitate an environment where people and teams can just discover things for themselves. What are you seeing leaders do to facilitate that kind of environment in today's world?
So you can develop these skills really rapidly and discover what you need for the future.
Dan Van Tran: We actually went through something similar to what you were saying around OKRs. I think that a lot of companies go through similar journeys with OKRs that are applicable here, where the outcome is what you should be focusing on, and the way you get there needs to be adaptable.
And so, tying back to what you were saying earlier about adaptability: the tools we use to get there, the ways we approach it, the different strategies, it's gonna shift. Things always shift. As long as we're open to that, but consistent and thorough about the outcomes we're [00:52:00] striving for, it opens you up to just figuring out, okay, what's the best way to get there today?
And I think that that is the mentality that most people, and especially the leaders of teams, need; once they have that with their teams, it's extremely important and powerful. For example, with my leaders, a lot of times I strive very hard at this, 'cause I am old school in a lot of ways. I'm a C++ developer.
I'm back-end, as I mentioned. I have a certain way of viewing things and thinking about things that was good at one point. But I acknowledge it's not good for every situation and may not be good now. And so what I instead focus on are the outcomes: here is what we're trying to do.
We're trying to make this more stable, more efficient. Here's my way of thinking about it, but you understand my outcome; what are some ways that we could get there? And then trusting that they are going to come up with a lot of different options, and then they'll present it, and then we'll talk about it, and then I'll sign off on it.
But I have to keep an open mindset in that the way that I thought about doing things may not be the best way. As long as I give them a clear and crisp outcome, most of the [00:53:00] time they will come up with something different than what I would've, but better. There are a million ways to get anywhere. And so with adaptability, you'll be able to find the best solution at this point in time, given today's technology, tools, et cetera.
Chris Wofford: Thank you for tuning into Cornell Keynotes. If you are interested in staying ahead of the AI curve and are interested in what we've discussed today, be sure to check out the episode notes for details on Keith and Dan's two day immersive program on executive product leadership, which takes place on September 18th and 19th of 2025 at Cornell Tech, which is down in New York City.
So again, all of you are invited to join. I want to thank you for listening, friends, and as always, please subscribe to stay in touch.