This podcast features Cornell Professor Dirk Swart and AT&T Business executives discussing how companies can navigate the practical challenges of scaling AI from innovation to mainstream adoption, covering topics like building capabilities, data governance, talent development, and enterprise-wide implementation based on AT&T's real-world experience.
Artificial intelligence is transforming every sector of the global economy, from healthcare and manufacturing to financial services and telecommunications. We have passed the initial hype stage: AI exists in many organizations as an established business competency and is now being deployed to solve specific business problems.
So what current issues should companies be focusing on? What challenges are organizations facing as AI moves past the innovation phase of the technology life cycle and into mainstream adoption?
In this Keynote, Professor Dirk Swart of Cornell Engineering is joined by Luke Corbin and Jonathan Huer from AT&T Business to explore these developments. They examine how emerging AI solutions are being deployed across industries and what lessons can be learned from AT&T's experience as an early adopter of AI technologies.
This discussion covers practical aspects, such as building AI capabilities, managing data governance, scaling solutions enterprise-wide, and developing AI-ready talent and culture. You’ll gain insight into how AT&T's current AI initiatives compare with industry standards and consider common challenges faced during implementation as well as future possibilities in AI-driven business transformation.
Professor Swart’s Technical Product Management Certificate - https://ecornell.cornell.edu/certificates/technology/technical-product-management/
AI for Digital Transformation Certificate - https://ecornell.cornell.edu/certificates/technology/ai-for-digital-transformation/
Generative AI for Productivity Certificate - https://ecornell.cornell.edu/certificates/technology/generative-ai-for-productivity/
Chris Wofford: [00:00:00] On today's episode of Cornell Keynotes, we are going to explore how organizations can effectively deploy and scale AI solutions. Now, while many companies are past the initial AI hype, the challenge now lies in transforming this technology from theoretical potential into actual practical business solutions.
Professor Dirk Swart from the Cornell College of Engineering, who is also faculty author of the Technical Product Management Certificate, is joined here by AT&T Business executives Luke Corbin and Jonathan Huer. In a special collaboration between eCornell and AT&T, the three will explore critical questions about AI implementation.
Like how do we move AI from research environments to operational infrastructure? What metrics should we use to determine success? And how do organizations balance rapid innovation with practical constraints? So our guests are gonna break down these real challenges of scaling [00:01:00] AI in today's business landscape.
So you'll find information about Professor Swart's Technical Product Management Certificate program, and also some information on AT&T's current initiatives, in the episode notes. So now, here's Dirk, Luke, and Jonathan.
So let's get right into it. Big picture. Dirk, I wanna start with you. Let's, let's level set here. Where is artificial intelligence right now? Like in a sentence or three, how would you characterize where we're at?
Dirk Swart: Thank you, Chris. Yeah, I mean, I think at the moment we are seeing pretty obviously broad, but more importantly habitual, adoption, if we use the technology adoption lifecycle curve as a way to think about this problem.
We're past the innovators and early adopter stages, and we're firmly established in what we call the early majority, right? This sort of one standard deviation from the mean. And I think there's broad but early enterprise adoption. A few companies like Morgan Stanley are sort of leading [00:02:00] the way. But finally, I think that governance and regulation, legal regulation, is quite far behind.
And the reason I mentioned that is I sort of see that as kind of a canary in the coal mine as how well AI is being adopted into the enterprise and into enterprise business processes. And I think that's a place where there's still a decent, a decent way to go. We are not quite there yet. So AI in organizations is still to some extent its own thing, or it's used extensively, but at an individual level.
So I still think there's huge business potential. You know, we've seen that 61% of Americans have used AI in the last six months, so that's still a lot of people who haven't, but I think the 61% are the 61% in business.
Chris Wofford: I'm thinking about the headlines. I'm thinking about a lot of the chatter that we hear within our organizations, and without, we're hearing about a transition that's taking place, from generative AI,
thinking about that 61% use rate that you were discussing, towards something called agentic AI, which is essentially moving beyond content creation [00:03:00] to goal-oriented action and automation of processes. Some of us are beginning to use it at work, but again, I want to kind of set the table here and get some clarity on what we're talking about when we talk about agentic AI.
What's that all about?
Dirk Swart: I mean, to me, how you partition problem solving and problem-solving complexity in an organization is a very interesting topic. How do you divide and allocate work? And the greatest algorithm of all time is divide and conquer, right? You take a problem, you break it down into smaller parts until each part is tractable, and then you do that part. And agentic AI is in some sense AI's way of doing this, right?
If we extend that idea up the abstraction pyramid a little bit, and also get more abstract, it's a way to give AI some autonomy. So you create a series of usually task-specific machine learning models with a goal, not just of breaking the task down into smaller parts, sort of divide and conquer, but [00:04:00] also of mimicking some idea of human-level autonomy and decision making, for whatever that means right now.
So each one of these things is an agent, and they're connected with endpoints that can take actions, creating what we hope as designers is a system that behaves less like a computer and more like a human, or a group of humans, might behave.
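The divide-and-conquer picture Dirk describes, task-specific agents wired to an orchestrator that routes subtasks, can be sketched roughly as follows. This is a minimal illustration, not any specific framework's API: the `Agent` and `Orchestrator` names and the capability-matching scheme are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    capability: str  # the kind of subtask this agent handles

    def act(self, subtask: str) -> str:
        # In a real system this would call a task-specific model or an
        # external endpoint; here we just record that the work was done.
        return f"{self.name} handled: {subtask}"

@dataclass
class Orchestrator:
    agents: list = field(default_factory=list)

    def solve(self, goal: str, subtasks: dict) -> list:
        # Divide: the goal arrives already broken into (capability, subtask)
        # pairs. Conquer: dispatch each part to the matching agent.
        results = []
        for capability, subtask in subtasks.items():
            agent = next(a for a in self.agents if a.capability == capability)
            results.append(agent.act(subtask))
        return results

orchestrator = Orchestrator(agents=[
    Agent("summarizer", "summarize"),
    Agent("scheduler", "schedule"),
])
print(orchestrator.solve("handle support ticket", {
    "summarize": "condense the customer's issue",
    "schedule": "book a follow-up call",
}))
```

The interesting design question, which the discussion returns to later, is who does the dividing: in simple pipelines it is the designer, while in more autonomous agentic systems a planning model proposes the breakdown itself.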
Chris Wofford: Jonathan, Luke, I want to turn to you. So, uh, tell us first of all, a little bit about your roles.
We'll go in sequence, Jonathan, and then Luke and about some of the challenges or opportunities that you've come up against when implementing AI solutions. We're talking from the high level theoretical to actually making things operational. So talk about some of the challenges, opportunities, and maybe how you overcame them in recent days.
Jonathan Blake Huer: Yeah, I think actually Luke, why don't, why don't you go first?
Luke Corbin: Sure. So just a brief introduction: Jonathan and I are on the same team. I lead what we call our technology transformation and AI transformation for AT&T Business here. [00:05:00] And just to build on the theme we were already discussing, around the shift from gen AI to agentic AI, I think that's a little bit at the heart of your question, Chris.
Gen AI proved to be really exciting. We all knew that we could create really interesting information. We could generate content, we could summarize content. And yet I think what larger enterprises struggled with a little bit is how you really turn that into outcomes. How do you turn that into measurable productivity gain?
How do you turn that into driving growth for an organization, whatever the KPIs or objectives are that you have? And so one of the things, and this might be a good way to segue to Jonathan here, one of the things that we've tried to do really hard with that technology is to create education, to drive evangelism, to teach people, to try to demystify what it is.
And we've seen really good results in the increased adoption rates of [00:06:00] employees within our organization simply by making it tangible for them. And maybe with that, I'll hand this to Jonathan, who is our AI evangelist, and he can introduce himself.
Jonathan Blake Huer: Yeah, I appreciate that. As Luke said, my role is AI evangelist, and the way that I think about it is, if you think of the Gartner hype cycle, I usually try to say I wanna make it into a flat line.
Right? It's not gonna do all the things that you want it to, but it's also not going to be completely worthless or overhyped, right? There's a tremendous amount of opportunity that we continue to find every day. And I think part of the real challenge and opportunity is getting as many of my colleagues to utilize gen AI as we can, to help our customers as well as our other colleagues.
Right? Because the reality is this is such a new technology that we're finding new, interesting opportunities every single time someone uses it. And there's no way that I or Luke, or anyone else on our team [00:07:00] has all of the answers because there's such a wide range of things that it can do. And there's such a wide range of opportunity within the business to utilize AI and then, you know, on top of that agents to, to improve things.
So really it's just about getting as many people using it as quickly as we can so that we can all learn, we can all stay on top of all the changes that are happening really quickly and then contribute back with ideas and opportunities to advance the business.
Luke Corbin: I think exactly what John just said is a really important point: unlike maybe some traditional technologies, where you just teach point use, hand over the manual, and go, this is significantly more complex than that. And so how do you open up not just instruction, but also a collaboration platform where people can share ideas with each other? I think that's really central to that progression.
Chris Wofford: Dirk, I want to turn it to you. How should teams determine if a particular task or a business unit will actually benefit from applying [00:08:00] AI? Are there criteria or some kind of viability test that we should run through?
Dirk Swart: I'm, I'm smiling 'cause that's, that's the nine figure question, right? That's the big bucks question that we all wanna know the answer to.
That's a huge topic, far beyond what we have time for today. So what I would wanna do, I think, is take that in two directions. The first direction is a sort of cultural direction, and the second one is slightly more pragmatic. First of all, the cultural one. I mean, as managers, we should think about humanness, right?
We should think about our teams and AI. To sort of riff off what Luke was saying, AI is this new layer of abstraction, right? This new layer of abstraction we're sort of putting on top. I guess you remember Maslow's hierarchy of needs, right? It had food and water at the bottom and self-actualization at the top, or something.
If we have a pyramid, not of needs but of abstraction, right, this is another, it's not a new thing, it's another whole level of abstraction. And that means we need to teach our [00:09:00] staff, and we need to think abstractly, right? If you think about skills that are gonna benefit you in the new world of AI, that's definitely one of them.
So at every level of technology that we have, there are people who are left behind, and we have to think about that. There's this TV show called That Mitchell and Webb Look, and you should definitely go to YouTube, or wherever you watch them, and look at their sketch on bronze orientation: Bronze Age cavemen basically having this orientation class about bronze instead of stone.
It's hilarious. I mean, they're brilliant, brilliant comedians. But it also captures this idea perfectly, right? So should we be creating a system which includes as many people as possible? I would say, if you're a CEO, a project's ability to move the overall culture of your organization in the direction you want is a key factor.
And that's why you have people like evangelists to move it, and I think [00:10:00] that's important. But, more pragmatically, I think we could create a scoring metric that we could use, right? If you want a scoring metric, I think the project management discipline has got this pretty well thought out.
You know, think of things like RICE, right? Reach, impact, confidence, effort, those kinds of scoring mechanisms. We could create a model for assessing the benefits and the wins of AI. So I guess if you're listening to this, what would you include, right? What are things that you would include in a scoring model to assess whether or not a business unit is a good fit? I thought about this, and I have four things that I would include. The first one I would say is task characteristics. Is it a rule-based or pattern-driven system, or is it sort of more organic? 'Cause pattern-driven systems benefit from AI really easily.
We're at a place where patterns work well. Even if the patterns are hard for humans to detect, they can still work well with AI. The second thing I would think about is probably some kind of absorptive capacity [00:11:00] metric. That's the ability of a firm to recognize new value and new external knowledge and to assimilate it, right, and adapt it for your business, like the rate of ingesting new information in some sense.
So the ability of an organization to convert business problems into technology problems, 'cause business problems are hard and multidisciplinary, and technology problems are often less so. Then I'd look at the underlying data, probably as number three. You know, AI's only as good as the data you're gonna ingest.
And finally I would look at business impact, right? You don't wanna end up with a solution looking for a problem, and I kind of worry that a lot of AI implementations are falling into that category. And after you've got that, you just apply the world's most reliable algorithm, divide and conquer, right?
I mean, I personally come from the front end, so I have a pretty lenient attitude to project failure. To me, if you're meeting those metrics and you're moving culture in the direction you want, that works well. I mean, I guess I would throw it out to you guys at AT&T. What do you think?
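The four-factor model Dirk describes, task characteristics, absorptive capacity, data readiness, and business impact, could be sketched as a simple weighted score in the spirit of RICE. The equal weights and the 1-to-5 scale here are illustrative assumptions, not anything Dirk specifies.

```python
def ai_fit_score(task_pattern: int, absorptive_capacity: int,
                 data_quality: int, business_impact: int) -> float:
    """Score a candidate AI project on four factors, each rated 1-5.

    task_pattern: how rule-based / pattern-driven the task is
    absorptive_capacity: the firm's ability to ingest and adapt new knowledge
    data_quality: readiness of the underlying data
    business_impact: expected effect on real business KPIs
    """
    # Equal weights are an assumption; a real model would tune these.
    factors = {
        "task_pattern": (task_pattern, 0.25),
        "absorptive_capacity": (absorptive_capacity, 0.25),
        "data_quality": (data_quality, 0.25),
        "business_impact": (business_impact, 0.25),
    }
    for name, (value, _) in factors.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be between 1 and 5")
    return sum(value * weight for value, weight in factors.values())

# A pattern-heavy task with good data and clear impact scores high:
print(ai_fit_score(task_pattern=5, absorptive_capacity=4,
                   data_quality=4, business_impact=5))  # 4.5
```

A low business-impact score flags exactly the "solution looking for a problem" case Dirk warns about, even when the other three factors look strong.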
Luke Corbin: [00:12:00] I was sitting there resonating with exactly what you're saying and particularly your point about a solution looking for a problem. And I think that's a really good framework to evaluate the value of what you're trying to achieve and what you're trying to build, and whether or not it's academic or even exploratory, which is great, right?
We're all learning this new technology. There are some things we're going to do where we're like, hey, I think I can do this thing, I'm just not quite sure entirely where it applies. And so from an innovation perspective, that's awesome. But when it really comes down to thinking about your business and the way you're operating: where do I have some of those more pattern-based solutions that are very AI friendly, where your data is available, et cetera?
Those are the ones, at least I know we have prioritized early because we know that we can go get some benefits. We can put some wins on the scoreboard in terms of having impact to the overall operations.
Chris Wofford: You know, that brings me to my next question. So, Jonathan or Luke, can either of you describe [00:13:00] some specific business cases that you've targeted for AI implementation?
Anything ongoing? Anything that you're gonna be hitting in the near term?
Jonathan Blake Huer: Yeah, there are so many things that we're looking at. It really runs the gamut, and I think there are obviously some specific use cases which may not necessarily be the most appealing, like call center type things, right? Helping a customer when they call in, automating those kinds of things.
But I think the reality is that's a frustrating experience for a lot of people; they're already frustrated before they call in, or they wouldn't have to call, right? So a lot of the actual opportunities are finding ways to avoid the call in the first place: finding ways to use agents, or to use AI, that can be watching 24/7, very specifically, to make sure that problems don't happen.
And if they do, they're fixed before anybody noticed that there was a problem. And so those are the more interesting opportunities in many ways, to help customers before they even knew there was a problem, right? And so, you know, those kinds of [00:14:00] layers. Thinking about how can we help solve the problem faster, but also how can we just avoid the problem?
Those are a lot of the really interesting cases that we're looking at.
Luke Corbin: Yeah, I agree. I would add to that: think about intelligence as well. We sit on a treasure trove of information, especially in large enterprises like AT&T, whether that's sales opportunities and leads, or orders that were taken, or trouble tickets that a customer had, or even transcripts when someone calls into a center. And so we're able to apply AI, and by the way, some of that's just traditional AI, right? So there's still going to be this spectrum of traditional AI versus gen AI versus agentic that will help us solve many different business problems.
And the thing that I think is really interesting now is being able to unlock all this data that we've been capturing but really didn't have the time or the ability to go understand. So how do you correlate [00:15:00] perhaps an NPS score from a customer after an interaction with you with all of the transcripts they had from calling into a call center, and all of the touch points that your sales organization had, and put that all together in a way that can drive next best action or recommendation? It's really being very smart about the customer experience and the lifecycle they have with you, because now you have that intelligence, and you can put that intelligence in the hands of anyone in your organization, including your marketing teams.
Jonathan Blake Huer: And just to add real quick to what Luke's saying, here's a perfect example. We all have beliefs about how the business works, right? And we all have beliefs about what customers want and what they need and what their challenges are.
And those have evolved over the years. We can actually go back and look at every single call transcript, for example, and reclassify what the problems are. And one of the things that we've said is, a person doesn't have time to read a million transcripts or a million surveys or [00:16:00] a million anything. The volume is just massive. But that's one of the huge opportunities with gen AI: we can go back and look at all of that and reevaluate it. So, perfect example, just going back and looking at every conversation and reclassifying with better detail and better fidelity to understand what the problem is, so we know why someone had to call in.
Perfect example of where Gen AI can do something that we just couldn't do before.
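The reclassification Jonathan describes, batching an archive of transcripts through a model and tallying why customers called, could be sketched like this. The `classify_reason` function here is a placeholder keyword heuristic standing in for an actual gen AI call, and the category names are made up for illustration.

```python
from collections import Counter

def classify_reason(transcript: str) -> str:
    # Placeholder heuristic. In practice this would prompt a gen AI
    # model, which is exactly the step a person could never do by hand
    # across a million transcripts.
    text = transcript.lower()
    if "bill" in text:
        return "billing"
    if "outage" in text or "down" in text:
        return "service_outage"
    return "other"

def reclassify(transcripts: list[str]) -> Counter:
    # Batch over the full archive and count call reasons.
    return Counter(classify_reason(t) for t in transcripts)

calls = [
    "My bill doubled this month",
    "Internet has been down since Tuesday",
    "Question about my bill",
]
print(reclassify(calls))  # Counter({'billing': 2, 'service_outage': 1})
```

The point of the batch shape is that the taxonomy can be revised and the whole archive rerun, which is what lets old beliefs about why customers call be retested against the data.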
Chris Wofford: Hey, Dirk, any particular applications or use cases that you're working on in your space?
Dirk Swart: Um, well, I come from an embedded background, so very, very different to AT&T. Um, and in fact, one of the things my students work on is taking AI and putting it down onto very, very resource constrained, embedded devices.
So a little bit different, but there are a lot of machine learning benefits to that and to getting the data right. The usual philosophy is to get the data into the cloud, or onto the big machine, as soon as you can, and we're sort of fighting that a little bit and looking at places where there are [00:17:00] motivations for not doing that: motivations for doing learning on the edge, looking at very quick response times, for example, or at highly sensitive information we don't want to be sending backwards and forwards. So we are looking at those kinds of applications, but it's very different from this big picture
I get of AT&T mining this enormous data warehouse. Of course, ours is much more focused, and partly that's because it's tractable for students. But to me it's just a very interesting area.
Chris Wofford: You know, pulling back out to the high level, things are moving so fast that to some of us it's pretty dizzying, right?
The prediction business, of course, is very tricky. But over to our AT&T friends: how do you envision AI transforming the way that you do business over the next five or so years, or five days? What are you doing now that may pave the way for these changes that you're considering or working toward?
Luke Corbin: Yeah, I don't wanna be flippant in my response, but I think it's [00:18:00] almost, how will it not change? I can't think of an area of our business where AI in some form or fashion won't change what people do and the way that they do it. That is especially true with agentic AI. And so what I would tell you is, historically, when we've thought about investment for technology, as an example, we tend to categorize it.
We tend to say, hey, that is something that will improve the customer experience, or that is something that will help drive growth, or that is something that maybe will help us reduce opex. What we're finding, as we go in and look at the opportunity that agentic has to transform entire organizations or entire operating models, is that you can actually check all three boxes at once and change the either-or conversation into an overarching one. I'll give you a simple example: when you think about customer support, historically, all of us have thought about this tiered structure. I have tier one support versus tier two support [00:19:00] versus tier three support. And the reality is that we do that because there are limitations in what knowledge or system access or authorization we have for different work groups within the business.
And so we built processes that are used to taking things through that machinery and through those different work teams. When you start to deploy and think about agentic flows, suddenly those boundaries of what any given work team might have knowledge or experience in go away. And you can start to look at an entire end-to-end process and transform it: maybe I should be really over-indexed with my employees in that tier three space, on the most complex, most anomalous problems or challenges, and I can go do agentic on the first part. What that does for you is not just helping you repurpose where your employees are, perhaps moving them to different high-value activities, or even reducing opex where it makes sense. It's [00:20:00] also accelerating your time to respond to your customers.
It's giving your employees less frustration in trying to interact and hand over through those different silos. And so you're seeing yourself now hit on all cylinders and that's what I think we're most excited about. You could take that same play and you could go look at every part of the business and do that.
In fact, that's consciously what we are doing is we're starting with business process transformation. We're starting with rethinking the way we operate before we're even going and deploying the technology.
Jonathan Blake Huer: Luke's completely correct on all that. One, one thing that's nice about being with such a large organization is that we have a lot of colleagues outside of the business that are really deeply thinking about this in lots of interesting ways that surprise me. So like even just in terms of like, data is one of the, the obvious ones. How do we manage our data? Where do we put our data? How do we move our data? You know, all of those kinds of things.
So that we can in the future keep pace with what's [00:21:00] happening, and also, just to be clear, so that we stay outta trouble, right? There are some people internally who think very, very deeply about the risks of this. And so, just making sure that we're working through all of these processes in terms of, you know, thinking deeply.
And, collaboratively about like how we can move at the pace of technology, but also just foreseeing some of the risks that might be there. And so, that's just one of the things that there's, there's just a lot of people thinking very, very deeply about every aspect of the ai, as we work through it.
Chris Wofford: You know, I'm so glad you mentioned that. That's kind of where I was going. I thought for our audience, because I have you three in the room today, it'd be worthwhile to better understand how innovation occurs at the nexus between research institutions like Cornell,
industry like AT&T, and government. We don't have a policymaker or legislator in the room here, but we kind of have that going on to some degree. [00:22:00] Luke, you had some interesting perspective on how innovation and adoption, kind of what Jonathan mentioned, are actually happening with AI from where you sit. What do you think about that?
Luke Corbin: Yeah, I, I think the reality is that we're in such an experimental state with this and nascent in understanding what the technology can do. Maybe the traditional kind of linear lifecycle of innovation where research is happening and then it's being applied, and then we're learning from that.
It's not linear. It is a combined activity right now that I see that in an ecosystem. To build off what Jonathan was saying. We're doing some work here at AT&T and I'm sure many other companies where it's very cutting edge and you're figuring out how to use this technology in advance of your partners.
Conversely, we have partners who are working with many customers, whether that's service partners or even product partners who are seeing that new innovation. And they're working then back to the broader ecosystem to figure out how to share it. And then of course, and I'll let Dirk talk about this [00:23:00] more, but, you're gonna continue to have research and innovation occurring within academia. I just don't know right now at the pace of innovation that's happening within large companies in this space that any of us can declare a linear path to bringing these things to market. We're going to continue to see, I think all of these things popping up in parallel and working through synthesizing them to what makes sense for anyone's specific business problems.
Chris Wofford: Dirk, what are you seeing from your side? What do you think about that question?
Dirk Swart: I mean, a couple of things. First of all, it's interesting to note that the use of AI is changing, right? The Harvard Business Review had an article in its latest July edition on how the usage of AI is changing.
So last year, people were using gen AI for things like content creation and for technical assistance, and those kinds of things were top. Now people are using it for support, either business or personal, and for learning, and for generating [00:24:00] ideas. So I think we're seeing an evolution that's, first of all, more collaborative, which is great, right?
And we are seeing a move towards using it for more innovative uses, I guess is a way to phrase that. So I think that's an interesting thing that is actually happening right now, and we are seeing a more, I guess, thinking approach.
I mean, I wanna use that word extremely cautiously. I'm not proposing that AI thinks. But I think we are seeing that a lot more, so that is helping with innovation. I always wanna be a little cautious, because innovation is something that happens collaboratively, right?
The idea that you have somebody in a basement who beavers away for a year or so and comes up with a new invention, that's total garbage, right? That's not how innovation works. 99.999% of the time, innovation is a collaborative effort. So we are seeing AI as a collaborative tool.
As a business, you can put things into it as a sort of, you know, big-daddy collaborative tool. And I think that is a very exciting way it's gonna change the way people think, for good and for bad, and it is gonna change it. I think we as innovators have to be very careful, because if you interact with AI in the way it's been trained to be prompted, you get great answers.
But you can get AI to go off the rails pretty quickly, which means that AI has a channeling effect. It has an effect of encouraging you to think in the manner in which AI, quote unquote, thinks, right? I'm a little worried about that, honestly. It's like saying, hey, I want to eat out tonight, and hearing, oh, you can have whatever you want, so long as it's Burger King or McDonald's.
It's the illusion of choice, right? I don't want us to be channeled into that illusion of choice and forget that AI doesn't solve all problems. It is not an innovative tool in the same sense that people are innovative.
Chris Wofford: Yeah, great point there. Luke, I want to ask you: [00:26:00] internally, how do you manage the great need, the urgency, the opportunity around rapid innovation inside an organization like AT&T?
Luke Corbin: Yeah, I think we've probably learned a little bit from historical innovation and technology transformations, and I would even throw things like RPA, robotic process automation, into that category: at least at the early stages, just doing it broadly and opening up the kimono to everyone to go do what they want to do is not the right way to do this. So what we've done at AT&T is we've stood up a very specific organization, in partnership with our technology teams, that is dedicated to this function. We call it the Agent Foundry, and we've co-located some of our technology resources with our business resources, and most importantly, we've co-located them with some of our operations teams.
So you have end users who are able to bring in ideation and participate in the [00:27:00] transformation along with teams like mine and our true technologists who are writing code. And while it may, out of the gates, be creating a little bit of structure, right, it's not gonna be a free-for-all.
It's actually enabling what I think is the objective of your question, which is how do we move faster? And if we set the right foundation, if we are building the right platforms, we're building the right pipeline for building some of these agentic flows. What we're seeing already is that now you're creating a flywheel and you're creating the flywheel that's gonna let you do more and more and more.
And maximizing the reuse of these agents. And going back to a point that Dirk made, right: while we might all like to believe that these agents, or gen AI in general, are a hundred percent accurate, and that all you have to do is go build an agent and deploy it and it's gonna solve all your ills and your business problems,
the reality is that observation and evaluation matter. And governance, right? We are a regulated [00:28:00] company. We have to be really careful about the way that we ingest data, the way that we use data. Every company has to be really smart about the private information that their customers entrust with them.
And so having that type of governance that ensures that as we go down this path we're doing it smartly, while it can at times feel bureaucratic, and I certainly have days where I'm like, I can't believe we're still stuck in this governance process. But the reality is that doing it now, and doing it the way that we are doing it through a centralized organization, is the foundation to open this up and to be able to move faster in the long run.
And you know, I think there are probably lots of philosophies around how much this will eventually become democratized technology, and whether you will see citizen-led development in this space. The reality is that for large organizations with as much data, and as much concern about how that data is used, as AT&T has, that question is worth punting down the road. What we're trying to figure out [00:29:00] more so now is how do we apply the technology in the smartest way, and we think doing it through a centralized organization, at least out of the gates, is the best way to do that.
Chris Wofford: Perhaps you've partially answered a question that came in from Carlos who asks, how has data quality and governance impacted the ability to use AI within the organization?
Will bad-quality data or siloed data sources impact the ability to use or deploy AI solutions? Or should organizations fix data first and then worry about AI? Anything to add on that? Where were you going with that, Carlos?
Luke Corbin: I love this question, and it's been one of our biggest focus areas over the last year.
What I would tell you is that I think our notion of data quality historically has been fixing data. In fact, it was there in the question. What we have found, more so, is that most data is actually relatively good; it's just highly disaggregated. And so when you think about a company the size of ours, one that's gone through years of M&A and has systems still around that have been here for [00:30:00] decades, systems that represent that M&A, trying to portray a clean view of, say, Customer 360 is not easy.
And so we've actually invested in and built an internal product we call the Data Sphere. It is a collection of all of that different information and data that we've been very purposeful about consolidating over the last 12 months into a platform that can then be used to enable greater and greater automation on top of it.
This is at the heart of everyone's largest challenge, I think, at least at enterprises of our scale and with some of the history that we have. And I would tell you, I think it's a fool's errand to try to go fix all your data. You're going to have to be realistic: maybe I can't fix it, but I can aggregate it and I can stitch it together.
That makes it, as a whole, much better than it is independently. And then the second thing is that we should also be realistic about the promise of this technology. Our employees work with inaccurate data all day long. And [00:31:00] being able to deploy agentic flows that mimic what the employee, the human, does when they find those data anomalies and inaccuracies is an area where AI, I think, is helping us actually scale and supplement, even from a productivity perspective, what our employees would've had to do otherwise very manually on their own.
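Luke's aggregate-and-stitch approach can be sketched in a few lines. This is a minimal illustration, not AT&T's Data Sphere; the source systems, field names, and most-recent-wins merge rule below are all assumptions made for the example:

```python
from datetime import date

# Toy records from two hypothetical source systems (names are illustrative).
billing = [
    {"customer_id": "C1", "name": "Acme Corp", "phone": None, "updated": date(2023, 1, 5)},
]
crm = [
    {"customer_id": "C1", "name": "ACME Corporation", "phone": "555-0100", "updated": date(2024, 6, 1)},
]

def stitch(*sources):
    """Aggregate siloed records into one view per customer: for each
    field, keep the most recently updated non-null value."""
    merged = {}
    for source in sources:
        for rec in source:
            cid = rec["customer_id"]
            view = merged.setdefault(cid, {"customer_id": cid, "_updated": {}})
            for field, value in rec.items():
                if field in ("customer_id", "updated") or value is None:
                    continue
                last = view["_updated"].get(field)
                if last is None or rec["updated"] > last:
                    view[field] = value
                    view["_updated"][field] = rec["updated"]
    # Drop the bookkeeping before returning the stitched view.
    return {cid: {k: v for k, v in view.items() if k != "_updated"}
            for cid, view in merged.items()}

view = stitch(billing, crm)
```

The point of the sketch is Luke's: neither silo is "fixed," but the stitched whole is better than either source independently.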
Chris Wofford: Yeah. Dirk, do you have anything to add to that?
Dirk Swart: I think that's spot on. I mean, AI can help with that, right? The first thing we should do is put AI to the task of working on that. But, I mean, my first job out of college was as a data warehouse consultant, so clearly I think that collecting and aggregating data is a valuable proposition.
But I think it's gonna be a challenge, because AI is very acquisitive with data, right? It wants to gather more data to it. And you always have this urge to give in to the acquisitiveness of this great data source. But in fact, you can do a lot without it.
So the issue is one of defining your domain, defining your [00:32:00] problem, getting it to be good enough, and working from there.
Jonathan Blake Huer: Yeah, and just to add to this, taking a giant step back, I think there's also this broader cultural context where just understanding data, utilizing data, and interacting with data, gen AI and AI in general make that a lot easier.
But it is a cultural shift to think about data-driven decisions on such a daily, moment-by-moment basis versus feeling it out, so to speak. We've never had so much data available to us, and so there's this kind of cultural shift where you can ask a question and get a data-driven answer
versus just having a fun debate about it, right? And that's a cultural shift that's happening around all of this.
Chris Wofford: So moving from data to measurement and management, and I'll open this to the floor. Whoever raises their hand first gets to go. Are there metrics or benchmarks, key [00:33:00] performance indicators, what we call KPIs, that you use to evaluate the success, the relative success, of AI as it's being implemented?
Luke Corbin: Yeah, I can start and open this up. Certainly our CDO organization is very proud of the work that they've done in enabling the business with gen AI and with large language model support. They participate regularly in benchmarking the evaluation and accuracy of the responses of our platform and the way that they've tuned it.
And I think that's a fundamental best practice that you have to have in your organization. This is not: deploy a large language model once, build some code on top of it, and hope that it's gonna continue to be accurate. To Jonathan's point, there are some black-box activities occurring in the gen AI response and in agentic flows. And so you have to continuously be in the process of: how do I evaluate that I'm getting the responses I want, and how do I continue to [00:34:00] upskill, to use sort of a personified term, my agents and my AI, the same way that I do with employees.
So I think that's your fundamental KPI. And then I think the second one is adoption, which we talked a little bit about earlier. When we look at the spectrum of our employees, we can see very clearly, and it's a fairly consistent pattern: there's the third of employees who are very much AI-first.
They want tooling and they use the tooling to solve business problems through AI capabilities that have been enabled for them. There's that middle tier that's sitting there, and they're like, I kind of like it; maybe when I have to do my end-of-year review, I'll summarize my content with gen AI.
And then there's about, you know, the other third that's just resistant. And that resistance is not necessarily rebellious; it's resistance maybe out of a lack of understanding or a lack of awareness. And so measuring your adoption, and then figuring out how to have programs like what Jonathan leads with evangelism, I [00:35:00] think is really important.
And then I wouldn't be a business person if I didn't add the third, which is ROI. The reality is that we wanna make sure the things we do have a return on investment. And I'll go back to the point that I made earlier: I'm super excited about this technology because I think large organizations, at least AT&T and those of us who are in competitive markets, et cetera, often think of ROI as a reduction in cost: am I able to control my opex?
I'm most excited about the ability to measure by top-line revenue growth and my opportunity to compete in the market by using this. So those are the three we look at pretty regularly. I'm sure there are others.
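The continuous-evaluation practice Luke describes, regularly replaying a benchmark set against the deployed model rather than deploying once and hoping, can be sketched as a small harness. Everything here, including the eval cases, the stubbed model, and the containment-based scoring, is illustrative, not AT&T's actual setup:

```python
# Minimal sketch of a recurring evaluation harness for an LLM-backed flow.
# The model call is stubbed; in practice it would hit your deployed endpoint.

EVAL_SET = [
    {"prompt": "What plan tier includes international roaming?", "expected": "premium"},
    {"prompt": "Which port does the gateway use for management?", "expected": "8080"},
]

def fake_model(prompt: str) -> str:
    """Stand-in for the real endpoint (canned answers, illustrative only)."""
    canned = {
        "What plan tier includes international roaming?": "The premium tier includes roaming.",
        "Which port does the gateway use for management?": "Port 8080.",
    }
    return canned[prompt]

def run_eval(model, eval_set, threshold=0.9):
    """Score each response by whether it contains the expected answer,
    and flag the deployment if accuracy drops below the threshold."""
    hits = sum(1 for case in eval_set
               if case["expected"].lower() in model(case["prompt"]).lower())
    accuracy = hits / len(eval_set)
    return {"accuracy": accuracy, "passed": accuracy >= threshold}

result = run_eval(fake_model, EVAL_SET)
```

Run on a schedule, a harness like this turns "hope it's still accurate" into a tracked KPI with a pass/fail gate.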
Chris Wofford: Viewer Alex wants to go deeper on this one. He hits us up and says: what KPIs beyond content volume are you using to prove measurable productivity gains from agentic AI? Part two of the question, and this is cool: how do you define levels of autonomy for enterprise AI agents?
Who wants that one?
Luke Corbin: I'm [00:36:00] happy to jump in. I'll start maybe with the latter one. You have to think about agents in terms of what we've characterized as sort of four different quadrants, if you will. So: a single agent supporting a human, where the autonomy is clearly not autonomous. Then a single agent supporting a human autonomously and making some decisions.
And when we look at those, those aren't enterprise business processes, right? This is like, hey, when I come in in the morning, I want you to give me some feedback on the quality of my emails from yesterday. These are very personalized examples. They're not driving outcomes.
It's when you get into agentic flows, and you start thinking about teams of agents that are driving business processes, that you get much more into the overlap, I think, with our technology and our IT organizations. And to be honest with you, right now we haven't opened that up to full autonomy. So we are continuing to see processes that will have humans in the middle who will be supporting and evaluating checkpoints on the outcomes, and making [00:37:00] sure that it's occurring as we expect.
I think that's critical. There will be a point where we will open that up for some lower-risk processes, or, you know, as the technology evolves, maybe even higher-risk ones. Certainly there will be an evolution where you will need less of that. But that's where we're starting.
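One way to picture the autonomy tiers and humans-in-the-middle checkpoints Luke describes is as an explicit level gate. The tier names, the refund action, and the approval policy below are all hypothetical; this is a sketch of the pattern, not AT&T's implementation:

```python
from enum import IntEnum

class Autonomy(IntEnum):
    """Illustrative autonomy tiers, loosely following the four quadrants
    described above (the names are ours, not AT&T's)."""
    ASSIST = 1           # single agent supporting a human; human decides
    AUTO_PERSONAL = 2    # single agent acting autonomously on personal tasks
    TEAM_SUPERVISED = 3  # teams of agents on business processes, with checkpoints
    TEAM_FULL = 4        # fully autonomous business processes (not yet enabled)

def execute(action, level, approver=None):
    """Run an agent action, inserting a human checkpoint for anything
    at or above the supervised-team tier."""
    if level >= Autonomy.TEAM_SUPERVISED:
        if approver is None or not approver(action):
            return "blocked: human approval required"
    return f"executed: {action['type']}"

# A checkpoint that approves refunds under $100 (hypothetical policy).
approve_small = lambda action: action.get("amount", 0) < 100

r1 = execute({"type": "refund", "amount": 50}, Autonomy.TEAM_SUPERVISED, approve_small)
r2 = execute({"type": "refund", "amount": 500}, Autonomy.TEAM_SUPERVISED, approve_small)
```

As trust in the technology grows, loosening autonomy becomes a matter of raising the gate's threshold rather than rearchitecting the flow.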
Chris Wofford: You know, as we kind of round out our conversation, thinking about driving people toward next steps, how can I take action in my organization? I have a really cool question that came in from Pete, who asks: what are a few applications that are considered low-hanging fruit, safe and effective to implement? We're talking about dipping our toes, right? Early-stage kind of stuff. Presentation notes, meetings, HR resumes, project management, AI agents, et cetera. Any thoughts on that? Quick wins. Where might the opportunities generally be? I see, Jonathan, you're gesturing.
Jonathan Blake Huer: Yeah, I was gonna say that the resume one always makes me kind of laugh a little bit, because it's a very, very common example, but it's also very easy to accidentally have the AI hallucinate and give you skills that you didn't initially [00:38:00] have as it tries to line up your resume with the job description.
So it's a very, very common one. I think the most interesting thing is that a lot of people are trying to transfer their Googling skills from the last 20 years into prompting. And that's been one of the most important things: not treating the gen AI as a search engine, because it's not one, right?
It can do some searching, but there's all sorts of interesting nuances to that. It's not a Google, and I think the most important thing in terms of utilizing it is making sure that you are not just using it for recall, but that you're really using it for transformation, for classification, for all of these different types of skills.
And that's, I think, the most important thing to kind of break that habit of just thinking of the AI as a replacement for a search engine. Really just try to push it. And if you don't know, you can always just say, this is what I'm trying to do. How could you help me?
And even that kind of open-ended thing, you're gonna [00:39:00] start getting responses and the AI can walk you through it. And one of the things that I always just try to tell everybody is, you're not gonna break it. You can just go in and say hello and nothing else, and it will just start interacting with you.
And so that's where, you know, you really just gotta get your hands on and start using it.
Chris Wofford: Dirk, any thoughts on Pete's quick win question?
Dirk Swart: Yeah, thanks. First of all, I think it's an excellent question; I really like the question. I would say two things. First, just riffing off what Jonathan was saying, I think he's spot on, in that if we have this continuum, starting at the individual level and going up to the systemic or enterprise level, the very, very simplest thing you can do is start at the individual level, right? Even if it's something as simple as habituating your staff to using an interface, 'cause there's that training process as well.
One of the things you can do is just give your staff 10 prompts that they can literally cut and paste: use this prompt and then type your question in afterwards. People who've used this a lot have learned this. I certainly have a list [00:40:00] of prompts that I use a lot. One prompt I use a lot is asking the AI to self-reflect: what would you do in this situation?
Come up with the top 10 questions I think I should be asking. Right? Get the AI to self-reflect. Teach your staff to think differently. It isn't a search engine, and if you're dealing with frontline staff or people for whom that's not something they're habituated to doing, absolutely start as simple as that.
The next level up is personalized, but there are some things involved. So content creation, for example, technical assistance and troubleshooting, right? Personal-level support. Those are sort of the next wins, things that you can do. Once you've got that, you've kind of got learning and education; you can get it to help people learn.
After that, you're doing more specific things, right? Either you've got domain specifics, like improving code, which everyone's talked about. But really at the top, the things you do last are DSS, decision support, research and analysis. Those things you hold off on, and you don't do them quite yet, [00:41:00] till you've habituated your staff.
So I wanna characterize all that by asking: where in your organization do you think there is the best absorptive capacity to ingest this information, to change the way they do business, and to produce an ROI? I mean, at the end of the day, it's all about a good ROI and being able to showcase success.
Chris Wofford: Terrific perspectives. I've got a tough one, a really good question from Susie, taking us in a bit of a different direction, but totally worthwhile. Are larger organizations thinking about AI in the context of their mission and values? If so, are they also thinking about internal AI policies and codes of conduct? This is a bit of a callback to governance, but any thoughts on Susie's question here?
Luke Corbin: I think the answer is a hundred percent, and you have to. You know, within our organization we are absolutely focused on that, which involves which tools are available to you, how you are using them, and how you are monitoring this behavior, including looking for plagiarism, or ensuring that you're not getting [00:42:00] back inappropriate responses or providing inappropriate prompts internally. And so for organizations like ours that have built their own LLM infrastructure, part of the value of that is to be able to have the type of governance and visibility into the activity for exactly, I think, what Susie's question is asking.
Jonathan, what would you add?
Jonathan Blake Huer: Yeah, I think Luke's spot on. The only thing I would add, to concretely answer the question, is that we have incredibly rigorous internal policies. Some things even involve an ethical review, right? To make sure that with the models, even when things are done with the best of intentions, there isn't accidentally an outcome that might potentially not be in line with what our company values are and with serving our customers and all of those things. So yeah, I don't know if most people would understand how much rigor we actually have around that in making sure that as we adopt the technology, we're not accidentally doing anything that might be out of bounds. [00:43:00]
Chris Wofford: Professor Swart.
Dirk Swart: Susie, thank you for that question. First of all, that question resonates with me. I worry about this a little bit, because I feel that it's the orphan stepchild of AI, but it really shouldn't be; it should be front and center. I mean, you can come up with and deploy a RAG solution in about an hour.
You know, nowadays it's really not a huge deal; two years ago, it was a huge deal. Which means you're gonna start seeing these things springing up in organizations as sort of little projects that people are working on, 'cause it's gonna help their productivity. And depending on your organization, you may want to encourage that or not. But I think we have to have rules in an organization.
So at the university we have human subjects testing, right? You have to go and do human subjects testing training. And that's good, because there are certain things you need to know, or things you have to get approval for, right? You can do certain things and you can't do other things. There need to be very bright lines explaining to people who want to run off and do their own projects, who want to do [00:44:00] their own thing:
this is what you can do, but if you want to do A, B, or C, you have to interface with our central organization, right? You have to have an ethical review, and you have to deal with it in a different way. Not because it's an intractable technology problem, but because it has deeper social and other implications.
A great question. I gotta tell you, I'm a little worried about it. We'll see how that really rolls out in organizations.
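Dirk's point that a RAG solution can now be stood up in about an hour follows from how simple the pipeline's shape is: retrieve relevant context, then ground the model's prompt in it. The sketch below fakes retrieval with simple word overlap and stops short of the actual LLM call; a real deployment would swap in embeddings and a model endpoint, and the documents here are invented:

```python
# Bare-bones retrieval-augmented generation (RAG) skeleton: retrieve the
# most relevant documents for a question, then build a grounded prompt.
# Scoring by word overlap is a toy stand-in for embedding similarity.

DOCS = [
    "Expense reports must be filed within 30 days of travel.",
    "The cafeteria is open from 7am to 3pm on weekdays.",
    "VPN access requires multi-factor authentication.",
]

def retrieve(question, docs, k=1):
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, docs):
    """Assemble a prompt that instructs the model to stay grounded."""
    context = "\n".join(retrieve(question, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("When must expense reports be filed?", DOCS)
```

The ease of building exactly this kind of skeleton is why, as Dirk warns, these little projects spring up all over an organization faster than governance can catch up.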
Chris Wofford: You know, Dirk, since you have momentum, I want to ask you one of our final questions here. So, big picture, are we underestimating AI? After Dirk, anybody's free to chime in, but are we properly assessing what's going on here?
Dirk Swart: I love this question, 'cause there's sort of a flippant answer, but let's think about it. I think that we are radically underestimating it. The pace of AI is increasing, not decreasing. And you look at everybody's estimates: things are happening sooner, and change is happening faster.
And humans are just not [00:45:00] structured to understand geometric growth, right? For those of you who are a little older, maybe you remember this with the Human Genome Project. The Human Genome Project started in 1990, and by, I think it was 1999, they had got 10% of the genome sequenced, right? And then
one guy, Venter, came along and wrapped the whole thing up in like three years. Right? Did the other, mostly the other, 90%. Think of things doubling every year, right? You have a five-year project, your productivity doubles every year, and you're four years into your project.
What does that mean? It means you're halfway. Well, if you came to most people and you said, I've got a project that takes five years, I'm four years in, and I'm halfway, they'd be like, dude, you're toast. But in fact, in the world of geometric growth, you're exactly on track, right? Ten-year project, nine years in, you're halfway.
That's where you should be. We are not estimating that correctly. We are not estimating the rate at which this change is happening.
[00:46:00] There really isn't an understanding, 'cause human brains are just not good at understanding this. So I would say it is a big concern, and, to go back to Susie's question, it is one of the reasons why it's so difficult for that governance to catch up.
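Dirk's doubling arithmetic checks out. With output doubling each year, the years of a five-year project contribute 1, 2, 4, 8, and 16 units, so after four years you have 15 of 31 units, just under half, because the final year alone produces more than all previous years combined:

```python
# Verifying the "halfway with one year left is on schedule" claim
# for a process whose yearly output doubles.

def fraction_done(years_elapsed, total_years):
    """Fraction of total output completed when output doubles each year:
    year y contributes 2**y units (y = 0 for the first year)."""
    done = sum(2 ** y for y in range(years_elapsed))
    total = sum(2 ** y for y in range(total_years))
    return done / total

frac_5yr = fraction_done(4, 5)    # 4 years into a 5-year project: 15/31
frac_10yr = fraction_done(9, 10)  # 9 years into a 10-year project: 511/1023
```

Both come out just under 0.5, which is exactly Dirk's point: under geometric growth, being "only halfway" with one year to go means you are precisely on track.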
Luke Corbin: One comment I might add to that, and it just reminded me:
I saw Alan Greenspan speak like 10 years ago, and someone had asked a question about why GDP consistently grows at about 3% per year. His answer, which was interesting to me, at least at the time, was that it's not tied to economic growth; it's tied to the capacity of the human being to expand at about 3% per year.
That's a really interesting thought: that as a society we could expand at 3% a year. Now throw AI into this, and who knows what that number is. And so to your point, Dirk, the rapid acceleration, and our inability really to realistically predict that [00:47:00] rate, is exciting. Maybe a little scary.
I think we'll figure it out.
Chris Wofford: Thank you for tuning into Cornell Keynotes. If you are interested in Professor Swart's Technical Product Management certificate program, check out the episode notes for details.
As always, thank you for listening, friends, and please subscribe to stay in touch.