The European Commission unveiled its AI Continent Action Plan, aiming to establish Europe as a global AI leader through initiatives in manufacturing, data infrastructure, and talent development, while maintaining a focus on democratic values and collaboration.
The European Commission launched its AI Continent Action Plan, signaling a commitment to a more proactive role in the global AI ecosystem.
Join Cornell’s Lutz Finger and Lucilla Sioli, Director of the European Commission’s AI Office, as they explore the EU’s bold and optimistic vision of becoming the “AI Continent.”
From cutting-edge AI factories and gigafactories to trusted data spaces and talent development, this Keynote highlights how Europe is building a uniquely powerful AI ecosystem rooted in collaboration, excellence, and democratic values.
Chris Wofford: [00:00:00] On today's episode of Cornell Keynotes, we examine the European Union's ambitious vision to become the world's AI continent. We are joined by Cornell Professor Lutz Finger and special guest Lucilla Sioli, director of the European Commission's AI Office. Together the two unpack the groundbreaking AI Continent Action Plan, which you'll hear more about shortly.
Lucilla and Lutz explore Europe's big AI strategy, from establishing AI factories and secure data spaces to building an ecosystem that balances innovation with democratic values, all while examining how the continent aims to position itself as a leader in the global AI economy. Be sure to check out the episode notes for links to the complete AI Continent Action Plan and related eCornell courses on AI strategy and implementation.
Now here's our discussion with Lutz Finger and Lucilla Sioli.
Lutz Finger: Before we dig into this, let's talk a little bit about Europe as a [00:01:00] whole. If we look from the US, we know where Europe is, obviously, but the question is: what is the European advantage in terms of AI?
Is there a European advantage in terms of AI? How is Europe positioned in that race? It's a global race. We see China, the US, India, and Europe. How is Europe positioned?
Lucilla Sioli: I think Europe is not badly positioned, because Europe has some strengths that characterize it. For example, it has a lot of talent.
There are more engineers per capita in the European Union than there are in the United States or in China. Europe has a very large and rich single market. It's not as integrated as the United States, for example, because the European Union is not a federal union. It's a group of 27 countries that want to cooperate closely, and they have [00:02:00] established a single market, where we have freedom of movement, of trade, and of capital.
But it is a single market, and it is large. It's bigger, if you want, than the United States. There are more people in the European Union than in the United States, and it's particularly rich on average. Europe is also very strong from the point of view of research. The number of publications is very, very high.
We have excellent universities, and so on average the European population is very well educated and therefore likely to take up technology.
Lutz Finger: Nice. So it's essentially good engineering talent, strong research, and a big market. There is an interesting factor with large language models: a large language model, or an embedding as we call it, takes [00:03:00] information and embeds it in a computer representation. Let's keep it at that high level. But that means all those large language models are multilingual by definition, right? For my startup, if I'm providing sales advice to one of the companies, it doesn't make a difference whether they talk in Italian, German, or English to my model.
My model automatically knows those languages, meaning Europe actually is a big market. So far there was always a problem of multilingualism, and that is actually breaking down with the current technology.
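Lutz's point, that one model handles many languages because meaning lands in a shared vector space, can be sketched with a toy example. The three-dimensional vectors below are invented purely for illustration; a real system would get high-dimensional vectors from a multilingual embedding model.

```python
import math

# Toy "embeddings": in a real multilingual model, sentences with the
# same meaning land near each other regardless of language.
# These 3-dimensional vectors are invented purely for illustration.
embeddings = {
    "I need sales advice":    [0.90, 0.10, 0.20],  # English
    "Ho bisogno di consigli": [0.88, 0.12, 0.21],  # Italian, same meaning
    "Das Wetter ist schoen":  [0.10, 0.90, 0.30],  # German, unrelated meaning
}

def cosine(a, b):
    """Cosine similarity: close to 1.0 means same direction (same meaning)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

same_meaning = cosine(embeddings["I need sales advice"],
                      embeddings["Ho bisogno di consigli"])
different = cosine(embeddings["I need sales advice"],
                   embeddings["Das Wetter ist schoen"])
print(same_meaning > different)  # the cross-language pair with shared meaning is closer
```

This is why language stops being a market barrier: the model compares meanings, not words, so an Italian query and its English equivalent map to nearly the same point.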
Lucilla Sioli: Certainly the technology is one element that will help in making the continent more homogeneous.
Of course there are also some differences between countries. I'm thinking, for example, of employment legislation, or capital that is not yet flowing between countries [00:04:00] exactly as we would expect it to yet. But we're working on these aspects, and so we are trying to make the continent much smoother in terms of the single market than it is now.
But then there are another couple of elements that I think are unique to the European Union for AI. One is the fact that many developments are brought forward by startups and scale-ups, and they tend to be very strong at B2B, so you will not see in the European Union many models for consumers.
As you see, for example, in the United States, you already have social media, all these platforms that work very much on consumer models. In the European Union, it tends to be more for businesses. And secondly, we have some infrastructure which is probably unique for AI, because we [00:05:00] have a network of public supercomputers, and these are very advanced supercomputers.
We are even upgrading them now with more GPUs, and so with AI capabilities. And this is a public network, meaning that universities, startups, and companies can use it, and that should facilitate more developments over time, because the computing resources are made available basically for free, at least to the startups and the universities.
Bigger companies will have to pay for it, but everybody else can access it for free.
Lutz Finger: Which is a super fascinating discussion, because we saw early on VC firms actually acquiring infrastructure like Nvidia A100 chips and then offering this capacity to startups and scale-ups, saying: we do invest in you, but [00:06:00] what we really do is give you access to infrastructure that would otherwise have been scarce.
And that model seems to have worked. Now, I saw that the EU, under the idea of the AI continent, actually focused a lot on this infrastructure play, which is an interesting discussion to have, because the question is how much infrastructure is needed for a company to scale up.
It's definitely an interesting discussion for research, if you think about large transformer models, developing new drugs, or building out new foundation models. But, you know, we saw OpenAI was late with their number five, right? So the progression of investment actually went down. I personally made a lot of jokes about [00:07:00] Stargate when it got announced, saying: yes, there was a lot of investment, this is cool, but do we really need that?
How does the EU think about this investment area? How much can we really gain from having a network of supercomputers? For research, no question, but for developing a thriving economic ecosystem, essentially?
Lucilla Sioli: Well, you know, for the startups and the scale-ups, having access to computing resources for free is very, very important.
Lutz Finger: Yes.
Lucilla Sioli: I mean, even American companies that are very strong in AI, like the one you mentioned, were initially supported in terms of computing resources by another company, and that is all fine. So here what we are trying to do is give these startups and scale-ups the possibility to have computing resources for free.
Now, [00:08:00] it's true that the bigger and more advanced these models are, and the more reasoning we have, the more GPUs they will need. And it's good that Nvidia is producing the Blackwell chips, which are more efficient, so maybe you don't need many more in terms of numbers, but you still need more computing capacity as we go on.
And in fact, in the AI Continent Action Plan, on the one hand we say: look, we are now upgrading our supercomputers with more AI capabilities through the AI factories. But we are also planning to set up a limited number of what we call gigafactories, which are very, very strong supercomputers integrated with data centers.
And if one needs to train a very advanced frontier model right now, [00:09:00] well, we need to have this kind of computing infrastructure. I do not know if Stargate is too much, or exactly what the plans are and what will happen. We will see. But obviously, if you want to train a model with trillions of parameters, you do need a very, very powerful supercomputer, and that's what we call the gigafactory in the European Union as well.
Lutz Finger: Yes. You said something earlier about the strong B2B landscape in Europe. And that's true, right? Now, as we saw in other technology revolutions, the mobile revolution, for example, it was the companies who had access to data and to customers who gained most of the benefits. So there is a good argument to be made that Europe is actually extremely [00:10:00] well positioned, with its company landscape, to reap the benefits. Is there anything from your side, as a policymaker, that you want to stress or put in place to enable the existing company landscape to be most effective?
Lucilla Sioli: Well, on the one hand, we need to make sure the companies have access to the best infrastructure. In the US, all these developments are driven by platforms that do have computing capacity. Think of the cloud capacity that you have in the United States. They have the data, so they have what is needed to bring forward the data revolution. In the European Union,
because these developments are driven by startups and scale-ups, it's important that policies make this infrastructure available as much as possible to these [00:11:00] companies. So computing capacity, we spoke about it, but also data. You need high-quality data, and a lot of data, to train these models.
So what we are trying to do in the European Union, for example, is to make sure that we have what we call European data spaces with high-quality data sets, and we have mechanisms for companies to pool data in competitive ways, if there is a need for training models with data coming from different sources.
So we are linking these data spaces to the supercomputers, and we are creating what I called earlier AI factories, which will also provide services in terms of training, for example.
Lutz Finger: Is it fair to say, just to jump in there, that essentially Europe [00:12:00] does not have an established ecosystem like the US? In the US you have big players creating an ecosystem.
They are offering resources, they are offering access, and the way you think about it as a policymaker is that you're saying: in order for us to be competitive, we are trying to provide the infrastructure as well as the space for data sharing to happen.
Lucilla Sioli: Yeah, this is exactly it. We are trying to provide the necessary
infrastructure, computing capacity, and data for companies to facilitate their developments in AI. And then there is another element of infrastructure, I call it an element of infrastructure, which is skills and talent. So we also support more master's courses and more PhDs, in AI in particular, to make sure that we have enough talent in the European Union
for this kind of development. [00:13:00] And then, once the models are developed, we also want companies and our key industrial sectors to use them, because, if you want, the economic comparative advantage of the European Union has always been in traditional sectors like manufacturing, automotive, aerospace, and so on.
So we want these sectors to make use of AI, to make sure that they remain competitive and drive innovation, because innovation is not only the development of AI; innovation is also the use of AI by other industrial or service sectors. And so what we try to do with our policies is to facilitate what we call the application of AI. We call it the Apply AI approach, or the Apply AI strategy, whereby we try to facilitate the adoption of AI by all these key sectors of our economy. [00:14:00] And we probably need to facilitate that because our ecosystem is a bit less integrated than it is in the United States.
So we have to make sure that companies in these key sectors are actually making use of AI, and, if possible, AI made in the European Union.
Lutz Finger: Got it. AI made in the EU, that's a nice new slogan. So there is, I think, a three-pronged approach, if I follow you correctly:
enable infrastructure, enable data access, and train, to enable companies to actually use AI more effectively. And all together that should create a vibrant ecosystem. In the past, as you said, Europe has had many engineers and has done a lot of cool research, but I think it's fair to say that past technology revolutions have scaled up in the US and not in Europe. [00:15:00] So you're trying to create a system to actually enable the ecosystem. If we go to the data spaces: AI is built out of data. The data is then ingrained in the weights and biases, right? These are the model weights, and those create a competitive advantage. If you went to Google or Amazon or OpenAI or Mistral, or any of them, and said, give us all your weights, they would not be happy, unless they publish an open-source model.
If we create data spaces, how is that creating a competitive advantage? Because essentially, if you have a space where everybody shares their data, then there is no competitive advantage for the companies to gain. How does the EU think about that in a regulatory framework?
Lucilla Sioli: Data spaces are not only about publicly available data sets; [00:16:00] they're also about ways of sharing data.
We have legislation on the way companies can share data with each other. It's called the Data Act; you may have heard of it, and it's about facilitating the sharing of data between companies. And so what the data spaces do, when it comes to business data, for example, is to offer ways and examples of facilitating the sharing of data, maybe along the supply chain of, for example, manufacturing.
So they're not necessarily about having data sets that people can actually use. Of course, the public data sets are open to everyone. We have some important ones; the Copernicus data sets, for example, are also used by many American companies for Earth [00:17:00] observation data.
But other data spaces are also subject to certain regulations. We are setting up the health data space with its own regulation, and so on. So it is very much about facilitating an exchange of data between European organizations and actors, subject to certain kinds of data regulation.
Lutz Finger: I like this a lot, because if we look at the US industry, it's dominated by players who have access to data. And we see this actually in the current AI discussion: the ones who have access to data are way better positioned in this part of the revolution.
So by you creating a way to share data, you might actually leapfrog: you don't need to have a huge [00:18:00] conglomerate with all the data, because smaller companies together can create the same value as somebody who owns all the data in one place.
Lucilla Sioli: Yeah. And then I think what is also going to be important in the future is that we support the development and production of more synthetic data, which I think is going to be extremely important for the world of AI. Because, you know, everybody is scraping the internet; at one point the internet is scraped and you just need to find more data.
You will have to rely a lot on synthetic data. So I think that both in the United States and the European Union, there will be a lot of attention to the development of synthetic data.
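One simple flavor of the synthetic data Lucilla mentions can be sketched as follows: fit the frequency of each column in a small real dataset, then sample new rows from those frequencies. The records here are invented for illustration.

```python
import random

random.seed(42)

# Tiny "real" dataset: (age_band, diagnosis) pairs, invented for illustration.
real_records = [
    ("30-39", "healthy"), ("30-39", "healthy"), ("40-49", "diabetes"),
    ("40-49", "healthy"), ("50-59", "diabetes"), ("50-59", "diabetes"),
]

def fit_marginal(records, column):
    """Estimate the frequency of each value in one column."""
    counts = {}
    for rec in records:
        counts[rec[column]] = counts.get(rec[column], 0) + 1
    total = len(records)
    return {value: n / total for value, n in counts.items()}

age_dist = fit_marginal(real_records, 0)
dx_dist = fit_marginal(real_records, 1)

def sample(dist):
    """Draw one value according to the fitted frequencies."""
    values, weights = zip(*dist.items())
    return random.choices(values, weights=weights, k=1)[0]

# Generate synthetic records that match the per-column statistics
# without copying any individual's row from the real data.
synthetic = [(sample(age_dist), sample(dx_dist)) for _ in range(1000)]
print(fit_marginal(synthetic, 1))  # diagnosis frequencies close to the original
```

Note the limitation: sampling each column independently preserves the marginal statistics but destroys the correlations between columns, which is exactly the tension between utility and privacy that real synthetic-data generators have to manage.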
Lutz Finger: Yes, totally. And we see this already happening, right? It's a tricky balance, by the way, because synthetic data should contain the tiny clues the model wants to learn, so we need to know what tiny clues we want to train on. Very often we see [00:19:00] this in privacy-constrained areas like healthcare, but I expect way more to actually happen in the whole area of image generation.
We see a lot of synthetic data companies coming up at the moment. So let's talk about trustworthy AI. So far we talked about the ecosystem, right? You described several approaches. You as an EU regulator, or, as I called you earlier, the spokesperson for the scale-ups,
are putting together access to infrastructure, better ways of sharing data, as well as helping with education and making use of the talent that is there in Europe. Now, in the last announcement on the AI continent, you talked a lot about trustworthy AI, and from my point of view, it's a double-edged sword, right?
If you say trustworthy AI, who decides what is [00:20:00] trustworthy, and how big is the hurdle you need to jump over to actually be trustworthy? Talk me through this. What does it mean to be trustworthy AI? And maybe before you do, just an acknowledgement to the audience: there is no 100% in
humans, and there is no 100% in AI. Meaning AI, like humans, always has a percentage where it's wrong. It's the same whether it's a human brain or an AI brain. So talk to me about trustworthiness.
Lucilla Sioli: Trustworthiness is about making sure that the AI we use is a system or a technology that one can trust. And the reason there are situations where one cannot trust it is because AI is of course trained on data, and there are contexts where we [00:21:00] don't want an AI which exacerbates the biases we may have developed historically.
I'll give you an example. If I'm a company looking for an engineer, and I do that, for example, on LinkedIn or through any other application for hiring, then if the AI that is looking at the CVs and offering me the profiles found on the market is only trained on past data, it is likely to just give me, in the European Union, white male engineers, because this is what we have in the majority of engineering positions at the moment.
Of course, nowadays, with the new generation, there are more women on the market. Only, the AI will probably think that in the past I've been hiring a lot of male [00:22:00] engineers, and therefore it will give me the CV of a male engineer. So I think it's important that for certain kinds of applications,
where discrimination, for example, matters, certain characteristics are kept in check. So this is one example. I think another...
Lutz Finger: It's a very good example. And by the way, in my course, the public eCornell course where I talk about models, I use this very example as a bias one, right?
Because it's easy to see. Now the question, meaning: if an AI selects candidates and, as you said, it was trained on biased data, then it will select with bias, right? The AI just reflects the data it was trained on, and if the data is biased, the AI is biased. It's always the data. So now, the question for me is: what is trustworthy?
Let's keep that hiring example for the moment. And [00:23:00] those numbers are totally made up, so don't quote me; I'm just making my numbers up. Let's say in the EU, 70% of the engineers are men and 30% are women. Right? What is a trustworthy AI? Should it be 50-50? Is a trustworthy AI one which gets you 50 women and 50 men?
Or is it one which is 70-30? Or is it one which selects the best talent for the job, meaning the highest success ratio? What is trustworthy in this case? And why is the EU deciding this, and not the company who's building the model?
Lucilla Sioli: I think that a trustworthy AI would be an AI that does not consider gender as a relevant factor for the choice, but is based on the capabilities reflected in people's [00:24:00] CVs.
Now, we think it's important that for certain applications, where this kind of discrimination can really have a negative impact on our society, these things are kept in check. And what we are asking with our regulation is that companies that put this kind of AI on the market first check for that bias.
We also ask them to document how they have developed the system, and to provide information to the user on these kinds of elements, and on whether the user has to be attentive to anything in particular. Sometimes I think this is what any serious data scientist developing an AI for a field which is likely to show, as in this case, discrimination should do: [00:25:00] a serious data scientist would be documenting what she or he is doing.
And it is in this sense that I don't think the AI Act is particularly burdensome, because it simply asks for documentation before the AI is put on the market, and this requirement is limited to certain applications. We spoke about the recruitment case. Other examples are universities, when they use AI to select the students that may have access to university in the European Union.
We had incidents where universities would base their selection on the high schools the students come from. High schools based in richer areas tend to have a higher number of students normally going to university, and therefore [00:26:00] this kind of AI would be really discriminatory towards the maybe less well-off students who have studied in schools where fewer students normally go to university. So this is another example of a discriminatory AI. We had incidents like this in the European Union, and so we have decided, for example, that the AI used by universities in this sense is to be considered high risk. And we have done the same for when AI is used in healthcare, for example,
and for the AI systems that may have an impact on safety.
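The kind of bias check discussed here can be made concrete with one common fairness metric: the selection-rate (disparate-impact) ratio between groups. The screening outcomes below are invented purely to illustrate the metric, and the 0.8 cutoff is the US-origin "four-fifths" rule of thumb, not an AI Act requirement.

```python
# Hypothetical screening outcomes: (gender, was_shortlisted) pairs.
# The data is invented purely to illustrate the metric.
outcomes = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def selection_rate(records, group):
    """Fraction of candidates in `group` who were shortlisted."""
    in_group = [shortlisted for g, shortlisted in records if g == group]
    return sum(in_group) / len(in_group)

male_rate = selection_rate(outcomes, "male")      # 3/4 = 0.75
female_rate = selection_rate(outcomes, "female")  # 1/4 = 0.25

# Disparate-impact ratio: rate of the lower-rate group over the higher-rate
# one. The "four-fifths" heuristic flags ratios below 0.8 for review.
ratio = min(male_rate, female_rate) / max(male_rate, female_rate)
print(f"selection-rate ratio: {ratio:.2f}")  # 0.33, well below the 0.8 heuristic
```

As the conversation notes, the metric alone does not settle whether 50-50, 70-30, or pure merit is the right target; it only makes the disparity measurable so that it can be documented and kept in check.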
Lutz Finger: This all makes sense. And then technically, and I won't go into technical details here (come to the course if you want to know more), there are many ways we can actually manage bias. But the fact is:
all data is biased, right? And [00:27:00] no model is unbiased. Essentially we want a model to figure out tiny clues. If we like those tiny clues, we say the model is good. If we don't like the tiny clues, because they actually point to something we didn't intend to see, then we say it's biased, right?
But it is the same mathematical effect. Now the question comes down, for me, to who decides that threshold and who decides what to focus on. Right? Let's take self-driving cars. Self-driving cars are going to be awesome. They're going to be in the market. I'm living in San Francisco; we have them on the street.
There seem to be more self-driving cars by now than normal cars. And they will come. But self-driving cars make mistakes. They are not infallible, right? So we will have situations where self-driving cars kill people. That means a robot kills a human. What level is accepted versus not?
This [00:28:00] is an error rate. Bias is an error, right? Bias is an unwanted error, which you might reduce. But what is acceptable and what is not, and why is the EU deciding this? Because it's a safety issue. But how do you come to the conclusion?
Lucilla Sioli: Well, the error rate or the accuracy will be decided in standards, which are not decided by the EU or by us.
They are decided by industry, normally, working on the standards. As I said, the AI Act only addresses a very limited number of applications, and so we will require, for those applications, accuracy rates that are decided by the industry players who work and participate in the standardization organizations.
Just to say that you're right: there will never be an AI that is infallible. And so our system [00:29:00] of checking, if you want, is about minimizing the risk of accidents. Once the AI is on the market, accidents can still happen. In that case there will be authorities that will intervene and look at the accident, but if they can see that the developer or the deployer of the AI system has done its homework, in terms of making sure that the system minimizes bias, that the documentation is there, and that the information given to the user is correct, then there will be no consequences.
Lutz Finger: Got it.
Lucilla Sioli: It's a mechanism to minimize the risk that an accident happens. You cannot avoid it completely.
Lutz Finger: Got it. There are two questions now in my mind about minimizing risk. First, the risk level depends on where you as a regulator have put your [00:30:00] focus. And second, once a risk materializes, what happens to the company?
Let's do them in two separate steps. If you want to minimize risk, and you compare another country outside of the EU versus the EU, and the other country is more relaxed with risk, then it doesn't apply the same minimization requirements. Meaning: okay, you launch it, maybe something goes wrong, we will improve later. Versus the EU, which says: okay, I have a certain risk standard. Essentially,
how much do you focus on innovation versus how much do you focus on risk avoidance? These are, in my mind, at least two areas in trade-off. How does the EU deal with it, and what does it mean for the ecosystem you described earlier that you're trying to support?
Lucilla Sioli: Well, as I said, the EU is looking at risk and [00:31:00] probably setting a higher standard than other parts of the world. Although, let's admit it, there are regulations on AI in other countries nowadays, not only in the European Union. And it may set a higher bar than other parts of the world, but only for some applications of AI, not for the whole market.
I would say it probably targets, I don't know, 10% of the market. And that's very important. The other important element is that it does that across the European Union, so it's one set of rules across the European Union. Recently I was reading the Stanford AI Index, which indicated that in the United States there are many, many different pieces of legislation on AI in different states.
So a developer who wants to sell in the United States probably has to be careful in each state to [00:32:00] meet the requirements of the relevant legislation, while in the European Union the idea is: if you are developing an AI that may be considered high risk, you need to put certain checks in place before you put it on the market.
You do this once, in any country of the European Union you want, and after that this AI can circulate freely in the European market.
This is very important to keep in mind.
Lutz Finger: So while there is a trade-off, and I totally get it, there is a trade-off,
the value you actually see here is that it's at least a standardized trade-off across all the different countries. You are in the field, talking to startups and scale-ups. What's their reaction to the AI Act? What do you hear back from [00:33:00] European companies?
Lucilla Sioli: I get a mixed reaction, in the sense that there are some startups who think the rules may be burdensome.
But I can tell you that the majority of the startups I talk to totally get it, and they see it as an advantage. They see that the AI they sell can be referred to as trustworthy because it's compliant with the AI Act, and in this way they find it much easier to sell. You know, we live in a world where the media talks very negatively about AI: AI is going to kill everyone and take over the world.
And the fact that people, organizations, and business users can actually trust that technology, they consider an advantage. So the [00:34:00] trustworthiness of the technology can be used as branding, a commercial branding, and therefore can be turned into an advantage.
Lutz Finger: Awesome. Thanks.
This is actually an interesting one. We are living in a world where AI is seen as something dangerous. That depends on where you look: if you look at Silicon Valley, then obviously people are way more positive about all the value that can be created with AI. I personally have said many times that I don't believe in AGI ruling the world.
Technically, in my view, that is still a lot of BS. Moreover, it is still us humans who have agency. So let's move into the last part, where we think a little bit about how you change the perception of AI. How do you enable people to be trained to build, to actually [00:35:00] create the AI continent? Because all you can do as a regulator is set the rules.
You can also set up the conditions for people to then create amazing things.
Lucilla Sioli: Well, our main message has always been that artificial intelligence is good for the world, good for the European Union, good for everybody. So our main objective is to stimulate the development and the use of AI across the European Union,
and that's why we put in place all the actions that you can find in the AI continent strategy. We want AI developments in the European Union. We want to have more technological sovereignty, so we want to be able to make these developments in the European Union and not just depend on other parts of the world. And we want to use it. But to use it, again, we need to create trust,
because there is sometimes a lack of [00:36:00] trust towards this technology, exactly because it's a black box and you don't know how it's going to behave. And we need people to understand that AI is actually a positive technology, and to use it with more, let's say, friendliness than is suggested in the news very often.
And that's why I think, at the end of the day, even a very targeted piece of legislation can be useful to support investment. But alone it cannot do much, and that's why we also have to support our infrastructure, our talent, and the adoption of AI in general in the European Union. This is going to be key in the next few years.
Lutz Finger: Got it. No, this makes sense. Now, what would you tell people like me? I told you my story before, right? I tried to come back to Europe a few times. I'm an AI guy. I realized that I couldn't really build a lot [00:37:00] of products.
There was a lag. I now have my own startup, and my advisory board tells me: look to the US, don't look to Europe, despite there being a lot of Europeans on the advisory board. What do you tell innovators and people who build stuff, like me?
Lucilla Sioli: Well, I would tell you to come to Europe now. Now is the time, because we are really trying to build this AI continent. Not only will you find the infrastructure that you need, and this infrastructure will be made available to you for free,
but you will also find the talent you need. And we are setting up certain elements to facilitate the life of startups. For example, we are strengthening what we call the capital markets union, so the possibility to have much more venture capital and more equity than has ever been possible in the past.
And we are also simply facilitating the [00:38:00] life of the startups and the scale-ups. We will be publishing a strategy pretty soon to put in place what we call the 28th regime, a regime only for startups and scale-ups, to facilitate their growth in the single market, basically smoothing out
the differences between countries in some of the administrative activities that companies have to handle when they are active in different countries. So I think the moment really is now, and I really hope that the brilliant minds of Europeans all over the world come back to Europe, because we need them and we would love to work with them.
Of course.
Chris Wofford: Thank you for listening to Cornell Keynotes. Be sure to check out the episode notes for more information about eCornell's AI, technology, and innovation certificate programs. I want to thank you for listening, friends, and please subscribe to stay in [00:39:00] touch.