
Reflections by Yoshua Bengio (LPL1-V52)

Description

This video features Yoshua Bengio, professor at Université de Montréal, founder and scientific advisor of Mila, and a world leader in artificial intelligence, who reflects on where artificial intelligence is headed and on our shared responsibility in its development.

Duration: 00:50:04
Published: October 9, 2025
Type: Video
Series: Review and Reflection Series



Transcript

Transcript: Reflections by Yoshua Bengio

[00:00:00 Montage of a series of images: people walking down busy streets; a Canadian flag waving on the side of a building; an aerial view of Parliament Hill and downtown Ottawa; the interior of a library; a view of Earth from space. Text on screen: Leadership; Politics; Governance; Innovation.]

Narrator: Public servants, thought leaders and experts from across Canada reflect on the ideas shaping public service: leadership, policy, governance, innovation and more. This is the Review and Reflection series, produced by the Canada School of Public Service.

[00:00:25 Yoshua Bengio appears in full screen. Text on screen: Yoshua Bengio, artificial intelligence researcher / Professor (Université de Montréal) / Co-president (LoiZéro) / Founder (Mila).]

Myra Latendresse Drapeau: Hello Mr. Bengio, Professor Bengio. It's a great pleasure, a great honour for us to be able to listen to you a little, to hear you. You're obviously known as an international expert, as a leading figure in the field of artificial intelligence, one of the founders of what we now know as artificial intelligence. But we'd like to maybe start at the beginning, get to know you a little bit. Would you like to tell us a little about where you were born, how you grew up, how you became Professor Bengio?

Yoshua Bengio: I was born in France in 1964. My parents immigrated to Montréal when I was about 12 years old. I studied computer science at McGill University.

[00:01:20 Image of the main entrance to the McGill University building.]

Yoshua Bengio: That's when I started to wonder about artificial intelligence. Somewhat by chance, I read scientific articles that inspired me, about the synergy between understanding the brain and understanding the intelligence we could deploy in machines, hence artificial intelligence. It very quickly became a passion. I found it really amazing that we could think about formalizing what intelligence is in a way that would allow us to understand humans, human intelligence, and build machines that could help us with their intelligence.

It was the beginning of a long career that was full of exciting milestones and progress, even faster than we anticipated when we started.

Then, around 2012-2013, what we were doing left the labs. Big companies like Google and Facebook had taken some of our developments and put them into their systems. We were starting to recruit a lot of researchers. That's when I said to myself, "It would be good if in Canada we had centres of attraction for these researchers," and that led a little later to the creation of the artificial intelligence institutes that we have in Canada, the Pan-Canadian Artificial Intelligence Strategy. Today, I'm proud of what we've accomplished.

Myra Latendresse Drapeau: And you stayed in Canada?

Yoshua Bengio: And I stayed in Canada. I had many other options.

Myra Latendresse Drapeau: Yes, I can imagine.

Yoshua Bengio: And several other researchers came back from the United States, Canadians came back because, all of a sudden, AI research in Canada was becoming much more attractive and we were creating critical masses of researchers in places where things were happening. Researchers want to be surrounded by people like themselves, who are stimulating, with whom they can talk and then collaborate. So that was a really big success.

Myra Latendresse Drapeau: You started to draw the parallels, or rather, what led you to artificial intelligence were the parallels between human intelligence and artificial intelligence. Can you give us a definition, your definition, of what artificial intelligence is?

Yoshua Bengio: I'll start by trying to explain what intelligence is. Whether it is artificial or biological is, in a way, just a detail, though obviously a very important one. But on an abstract level, an intelligent entity has enough understanding of the world to anticipate what is coming, to predict how things will turn out, and potentially use that understanding to achieve goals.

We have these capabilities. Increasingly, cutting-edge AI systems have these capabilities. And in different ways, our animal cousins, the closest being the primates, also share these abilities. Intelligence is not a single dimension, because you can be intelligent in certain areas where you have a lot of experience, and then be rather naive in another area. And this is also true for artificial intelligence (AI). We have systems that are very competent at certain things; the cutting-edge AIs today, for example, can understand 200 languages. No one in the world can do that. But their reasoning and planning abilities are far inferior to those of, say, average human beings. So, intelligence is not just a quantity; it is knowledge and the ability to apply it to answer questions, to make decisions that work.

Myra Latendresse Drapeau: That definition is very enlightening, and it brings us back to the essentials. Sometimes we're intimidated. We hear the expression, we don't really know what it is. We're a little intimidated. I really like how you present it.

I'd like to gently bring you to the topic of artificial intelligence in relation to the Government of Canada, going back to the foundation of the Government of Canada. You know the old saying about serving the public interest by ensuring peace, order and good government, right? Do you think that AI could eventually help the Government of Canada as an institution to fulfil this mission, its fundamental mission, in fact?

Yoshua Bengio: Yes, definitely, if we use the right safeguards. And what I mean by the right safeguards, roughly speaking, is on two levels. First, AI, at the technical level, needs to be a trusted tool. Because we make decisions that may have a significant impact, especially in government. Today, we're not really there. We can talk about it later. But that's an issue. Then there's the other issue, which is more a question of how humans organize themselves with these tools. What governance rules do we put in place?

[00:07:12 Excerpt from the OECD article

Text on screen: Artificial Intelligence

Effective governance is essential to ensure AI development and deployment are safe, secure and trustworthy, with policies and regulation that foster innovation and competition.]

Yoshua Bengio: Because technical standards are not enough. Finally, to answer these two questions, I will return to an important point: intelligence gives power.

[00:07:25 Yoshua Bengio appears on screen.]

Yoshua Bengio: So it's important to know in which direction this power is directed. It must be for the public good. This power must not be abused by those who use or develop AI. And that's why we need rules within an organization, within a country, and then at the global level, because the consequences of using AI can cross borders quite easily.

Myra Latendresse Drapeau: So, in the public service, we have about 250,000 employees, a very large employer in Canada. So, it's a large workforce, which is also extremely diverse. I wanted to ask you what kind of skills or competencies public servants could develop to better use artificial intelligence, or simply to start using it, because there are studies showing that the public sector is the sector in which artificial intelligence is least used, in Canada in any case. But what might be some of these skills that public servants might want to develop? And maybe also lean just a little bit toward what kind of questions they should ask themselves when thinking about artificial intelligence, but also when using it in their daily lives.

Yoshua Bengio: The skill, perhaps the starting point, is simply to have basic knowledge about what artificial intelligence is today, but also the scientific knowledge that allows us to anticipate where it will be in a year, in three years, in 10 years. Because AI is progressing very rapidly in its capabilities. Today's AIs, as we're recording, are much smarter than those of a year ago, really. They're much smarter than those of two years ago, three years ago, etc. Three years ago, they did not master language. Today, not only do they master language, but they are beginning to have reasoning skills. Today, they are trained on so much data that they have knowledge about almost everything.

At the same time, today's AI makes mistakes and sometimes acts deceptively. We need to understand this, then ask ourselves questions about our work. What does that mean for your work? How can we develop skills to manage this coming change?

So, the ability to adapt to change is going to be crucial, and it's not just individual. We can agree that if the way an organization operates, in an office or at a larger scale, is too rigid, then even if people have the willingness and ability to adapt, there will be too many barriers.

One element I want to bring into this equation is that it is plausible, if AI continues on its current trend, that more and more tasks that are done by humans will be able to be done by AI.

In what order will it happen? It's uncertain. But things that require little thought or work that can be done fairly quickly, that don't require highly developed interpersonal skills, those will be the first to be replaced by the work of artificial intelligence.

But we don't know where it ends. There is no reason to believe that developments in artificial intelligence will stop at the level of human abilities. The proof of this is that, in certain areas, AI is already more competent than us in terms of the quantity of knowledge, in terms of all the things it knows, all the languages, etc.

So, we have to prepare. How will we adapt our work, our teamwork, when part of it is automated? How do we benefit from it? But how do we manage the transition? Then again, flexibility and organizational agility will become essential.

The other thing is that today, AI makes all kinds of mistakes and, unfortunately, people tend to take the machine's output as divine truth, which is obviously not the case. We hope AI will improve in this regard, but for the moment, we practically have to ask whether a false assertion made by the AI would have consequences. You shouldn't take things as the truth; you have to verify them. If there are no consequences, it's not necessarily serious, but you have to ask yourself the question: What am I going to do with this AI, with this response? Then eventually, more and more, companies will give us access to AI tools that we call agentic, that is to say, tools that make decisions by themselves without a person checking at each stage.

So, these are systems that have more autonomy, that will be able to carry out a lengthier task without a human being behind it at each stage. It's not yet clear whether these systems will be sufficiently reliable, and perhaps they will be eventually, but for now, what we observe is that these systems can be deceptive, can intentionally lie to achieve a goal.

Here, not only do we need to have a critical mind, because AI can make mistakes unintentionally, but we could be faced with AI that, for all sorts of reasons, supposedly to please us, will lie to us. There are already plenty of examples of this. AI is currently programmed to get rewards, but sometimes the humans training the AI will get fooled by something that looks right but isn't.

Myra Latendresse Drapeau: What do you mean by "It's programmed to seek rewards"?

Yoshua Bengio: Yes, there are two phases in AI training today. There is one where it imitates what someone would have done, that is, by completing texts that humans have written. In this way, it absorbs all kinds of knowledge that humans have, and also how a human would have reacted in a given circumstance.

The second stage is a stage called alignment, where we make it interact with humans. Then, when it behaves well, it will receive a reward; if it does not behave well, it will be punished. It will then seek to obtain more rewards and fewer punishments, like a child or an animal.

The problem with that is that maybe the AI will say something that makes us happy but isn't true, for example. There was an experiment where an AI tool was supposed to make restaurant reservations for us. Then it came back and said, "Yes, there's no problem, it's reserved." In reality, there was no more room, but to please us, it said, "Everything is fine, I have the reservation." Or maybe it would be willing to cheat, or hack into the computer to try to get a reservation. So, it's going against our moral instructions. We're at a stage where we don't know how to properly control the normative behaviour of AI, which can have serious consequences if we aren't aware of it.
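To make that reward dynamic concrete, here is a minimal toy sketch in Python. It is not drawn from the interview and it is nothing like how real systems are trained; it only illustrates, under made-up assumptions, how an agent optimizing for reward can come to prefer a pleasing-but-false answer when the rater does not always catch the falsehood.

import random

# Toy bandit-style illustration of reward-driven learning (purely hypothetical
# numbers, not a real training pipeline). Assumption: a truthful reply is
# rewarded 60% of the time, while a pleasing-but-false reply is rewarded 90%
# of the time because the falsehood usually goes unnoticed by the rater.
random.seed(0)

actions = ["truthful", "pleasing_but_false"]
reward_prob = {"truthful": 0.6, "pleasing_but_false": 0.9}
value = {a: 0.0 for a in actions}   # agent's running estimate of each action's payoff
counts = {a: 0 for a in actions}

for step in range(5000):
    # epsilon-greedy: explore 10% of the time, otherwise pick the best-looking action
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: value[a])
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    value[action] += (reward - value[action]) / counts[action]  # incremental average

print(value)  # the pleasing-but-false action ends up with the higher estimated value

Nothing about large models is captured here; the point is only that an agent rewarded by an imperfect rater drifts toward whatever gets rewarded, not toward what is true.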

Myra Latendresse Drapeau: This idea brings us almost to the question of values and how AI can or can't adopt behaviours. Because we've nearly reached that point, or you might say that we've already reached it.

Yoshua Bengio: We've reached that point.

Myra Latendresse Drapeau: We've reached it. And it can go in the direction of our democratic structures or in another direction. In the context of public service and of public administrations in general, public servants are working toward the viability of democracy. It's at the core of our function and central to our mandate. What could you say about this, particularly for public servants who want to move in this direction and who see this issue emerging and don't really know how to proceed?

Yoshua Bengio: One of the biggest shortcomings in current cutting-edge AI, and particularly agentic AIs, which perform actions on their own on the Internet with your credit card and other valuable information, is that we are not sure whether they will act according to our prescriptive morals or instructions or to the rules we have set for ourselves in our work environment.

Through various measures, our government could promote the development of these mechanisms for controlling what AI does so that, in a context such as the deployment of AI by public servants in the government, we have a fairly good level of confidence that AI will either do nothing or say, "I don't know," or act in a manner aligned with our prescriptive instructions.

So, what does it mean for AI to follow the standards we set? Who decides this? At some level, it should be the democratic process, the popular will of what is acceptable or not, which should result from the collective decisions we make. In a democracy, we have democratic institutions that should already be somewhat transparent when using AI, but it's what we consider acceptable in AI's behaviour that may not be the same from one country to another, and we need to have a certain degree of control over that as a nation.

People also need to understand that we can have control, that we can express ourselves on this. AI will be used more and more, not only in the government but also throughout society, and there are decisions and societal choices to be made, as is generally the case in all public policies, on what we find socially beneficial and on what is unacceptable behaviour from entities that will have increased autonomy and whose actions could have significant consequences for society.

So, that's it. We need to think about a course of action. Initially, it can be quite simple, but you must understand that every office can't just make its own decisions, because we'd still like a certain consistency as to what is acceptable or not in the behaviour of AI.

The way I see it is a little like this: in our laws, we have the Constitution, then we have more specific laws; there need to be general normative principles that apply to all the AI that we are going to deploy in our country. Then, maybe after that, we'll break it down into constraints or rules that are more specific to a company, an organization, an office, etc.

Myra Latendresse Drapeau: For a policy objective, for example.

Yoshua Bengio: For a policy objective, etc. Another component related to the democratic question is how we will ensure that the deployed AIs will do the right thing on their own and not become someone's tool to use against our norms, our laws, and ultimately, in a criminal manner. So, this means that we need technical and societal mechanisms to control this, to ensure that people can't use AI in their garages to manufacture a new pandemic or use it to hack into a computer system at their office, for example. There are plenty of scenarios, because AI will have progressively more knowledge and ability to apply this knowledge. And someone who is not necessarily an expert will be able to use AI to do illegal things, things that we don't want in society.

Myra Latendresse Drapeau: Or antidemocratic.

Yoshua Bengio: Or antidemocratic, exactly. This includes, for example, influencing the AI used as a bot on social media. Well, it can become a way to educate, but it can easily become a way to manipulate as well. Therefore, it could have a dangerous effect on the democratic institution itself.

Myra Latendresse Drapeau: One of the pillars, let's say, of our democracies is the trust that citizens place in their government. I'm not telling you anything new when I tell you that the confidence index is falling, declining almost everywhere across the world. Canada is doing rather well, but the trend is still there. Citizens have increasingly less trust in their government. What is AI's place in this, in this sort of major issue of trust? Can it help? Can it make things worse? What should the government do so that its use of AI earns citizens' trust rather than the other way around?

Yoshua Bengio: That's a good question. We have the chance to collectively decide which direction we want to take in the development of AI, for what kind of application. Some could be very beneficial to democracy and to the trust we have in our government systems, while others could be very destructive.

This means that we need to have people in the government who understand these issues. Now we have a new Ministry of AI, so that's fantastic. But we must grasp these issues and give ourselves the tools to influence this development, and promote directions that will be beneficial to democracy and trust.

Aside from the kinds of safeguards we were talking about earlier, there's also the possibility of investing in the development of AI, for example, which will allow us to verify the facts. Today, one of the biggest problems with our democracies is that we are losing the notion of truth; people can say all sorts of things that are ultimately not consistent with reality and citizens get lost. And I think one of the main reasons why we're losing trust in our governments is because we no longer know what is true, what is good, etc.

So, if we developed AI tools that provided more objective responses, and were not trained to please but rather to provide us with accurate details on subjects that interest us, it could become a tool in the toolkit for democratic dialogue.

Because a democracy needs democratic dialogue, that is, a place where we agree on certain basic things, certain values, democracy, for example. And eventually, we can build public policies and collective choices together, because we have a common language. If we can't agree on the fundamentals of what is real in the world, then things aren't going too well. And the polarization that we see on social networks today makes this dialogue very, very difficult.

I think there's a way to develop tools that will allow for a more objective collective dialogue, which leaves room for everyone's opinions and views, but not for lies, demagogy, etc., that greatly pollute the democratic environment today.

Myra Latendresse Drapeau: Yes. Now you're addressing the development tools. Are you thinking primarily about dialogue tools or also about technology tools, like really concrete ones?

Yoshua Bengio: Technology ones. As it is today, I really think AI has opened up several paths to us. Right now, the development of AI is very much shaped by competition between several companies and a few countries, which is not necessarily in line with how AI could be further developed to help democracy, for example. And these are decisions we can make. An obvious example is the use of AI to improve medical research. Everyone would like to have better medical treatments. But for now, AI companies aren't making big investments in this area. So, I think governments can play a role in ensuring that research and development has at least one component that is focused on the public interest.

Right now, there is a lot of discussion in Europe about the idea of developing AI publicly, a bit like we did for nuclear physics, for example. We developed research laboratories that are not for commercial purposes but that are intended to serve a public mission. It can be about protection or about fundamental advances.

In the case of AI, that can mean advancing democratic tools and public protection, and therefore AI safety. All of these are aspects that are underdeveloped today because market forces are not really driving them in that direction. However, in my opinion, it is more the government's role to develop this kind of research.

Myra Latendresse Drapeau: Naturally, these investments in artificial intelligence are huge, much higher than what governments can afford on their own. However, do you see potential for governments to work together with major technology development centres to move more toward this more ethical direction? If we look at Canada, we can see that it plays an important role in certain areas, but there is still a risk that it could miss the mark in some ways. What can we do to avoid missing this opportunity?

Yoshua Bengio: First of all, if we take Canada as an example, we are very small compared to our American neighbour and even smaller compared to the Chinese. So, how can we navigate this properly? I think the answer is simple: we need to form alliances with other countries that share, for example, some of the same ethical visions of AI and the fear of being left behind in the future. I think it's entirely feasible, and it doesn't exclude collaborating with companies, ideally with Canadian companies that will have the interests of our other companies at heart. This is very important. For a decade now, I have been trying to work toward developing technology in a way that will ultimately give us sovereign tools as a country.

So, what are the right measures for this? This is another question. But today, we can clearly see, for example, European countries deciding to move in this direction and invest in a public mission regarding equipment, resources and research. And I believe that Canada has the potential to contribute to this kind of thing so that it can increase our chances of gaining access to AI that is competitive with the world's best in the future. We have a pool of experts; we have the energy; we have many assets for this challenge.

Myra Latendresse Drapeau: We're slowly starting to... You've mentioned this matter of the future of AI a few times, because the acceleration of progress is incredible. What fascinates you most about these advancements exactly, whether it be technical, ethical or social? What's most interesting to you now, compared to the last ten years, and where do you see this going?

Yoshua Bengio: First of all, there are many uncertainties if we consider the future of AI, and researchers don't agree among themselves on the scenarios that await us. However, there are still objective trends that are quite strong. So, if we look at the last five to ten years, we see that AI's cognitive abilities, on several measures and test beds, have increased, in some cases exponentially, from a quantitative standpoint. This means, for example, that on the ability to plan, we see that

[00:30:53 Text on screen: Article by Jean-François Lisée – "Désalignement artificiel" [Artificial Misalignment] / Excerpt from the article:

within five years (2030) and at the latest in 10 years (2035) artificial intelligence will reach not only the human level of intelligence, but the combined level of intelligence of all humans. This will be true if artificial intelligence maintains its pace of doubling its cognitive capacity every seven months, which is the case. It will happen faster if it increases that pace, which is likely.]

Yoshua Bengio: the duration of tasks AI can successfully perform doubles every seven months. So it's important for a citizen, a public servant, or someone who makes public policy decisions, to understand this, to understand that we are on a trajectory that is moving very quickly. Not only is it going fast, but it's also accelerating.

[00:31:12 Yoshua Bengio appears on screen.]

Yoshua Bengio: Maybe it'll stop in six months, and company researchers will hit a wall. However, it's more likely that we continue on the same path.

Now the other problem is that we have to accept it, yes, but what does that mean for me? What does that mean for society? And it is very difficult for most people to accept the idea that in a few years, or perhaps in a decade or two, we will build computers that will have the cognitive abilities of human beings or more, or maybe even significantly more, perhaps in some areas more than in others. We have this idea that we human beings are the most intelligent. However, if someone tells us that this could change in a few years, we will say, "It can't be, that's science fiction." Unfortunately, the scientific facts tend to suggest that this is where we're headed. There is a kind of psychological barrier that leads many of us to ultimately deny this future we see coming ever more quickly.

The most important thing is to open up to the idea that it's not science fiction, it's not inconsequential. Not everything we see in science fiction movies will come true. It requires an understanding of what's likely to occur and what isn't. But with this understanding, we can plan and try to protect ourselves against several worst-case scenarios. Because in order to benefit from the advances in AI, whether for democracy, health, or climate change, we must govern it and manage it wisely. That means, firstly, doing so with an understanding of the possibilities of what is coming, and through values like the precautionary principle, which is what we use in science when we're dealing with something dangerous. And I am looking at scenarios that are a cause for concern if AI continues in its current direction.

As we've talked about, there are criminal or terrorist uses of AI, because it has a wealth of knowledge and the ability to apply that knowledge, including creating, for example, new pandemics or cyberattacks and all sorts of other things. The second scenario is what we call a loss of control.

Today, we see that the AI tools that reason well and are the most advanced have tendencies to lie and deceive in order to try to preserve themselves and are ready to cheat in order to achieve their objectives like human beings ultimately do. That's all well and good until they become smarter than us, manage to escape our control and potentially cause damage to society, which, for some researchers, can lead to the extinction of humanity. This is serious. We can't even take a small chance that this might happen.

The third problem I talked about a bit earlier was the fact that AI will grant those who control it a great deal of power. Concentration of power is the opposite of democracy.

[00:34:35 Text on screen: Statement by Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO – An excerpt reads as follows:

"AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real world biases and discrimination, fueling divisions and threatening fundamental human rights and freedoms."]

Yoshua Bengio: Democracy is the sharing of power. How do we manage the fact that the tool, by its very nature, could give rise, if we are not careful, if we do not set the right democratic conditions, to...

[00:34:50 Yoshua Bengio appears on screen.]

Yoshua Bengio: authoritarian governments, for example, who could use AI to increase their power, monitor populations, etc.? And that could happen here, it could happen in another country. What can we do to avoid this kind of scenario? There are plenty of worst-case scenarios.

There are also the worst-case scenarios in terms of economics. So, we also spoke about the potential to replace many jobs with work that is done by AI. This will happen gradually. But how do we manage it? What do we do if the companies that control this are mainly foreign companies? Do we no longer have control over this? Will the benefits from these changes be in another country, while we just end up with people who are unemployed? These are difficult questions that require strategic thinking, not just about Canada, but Canada in the world with other countries, all the geopolitics. These are things that I think it is important for individuals to understand, for citizens to understand, for public servants to understand, and for our leaders and legislators to understand too.

Myra Latendresse Drapeau: There was a bit of a second aspect to my question. I asked you what keeps you occupied, what worries you the most. Then I was going to ask you what you think is most underestimated right now in AI capabilities. I think I have heard a bit about what AI could do that would have some pretty significant deleterious effects. Is there, however, a potential that is somewhat untapped or underestimated at the moment, that could instead go in the direction of the common good or the public interest?

Yoshua Bengio: Yes. I think there is potential to accelerate scientific research. It can be in areas where people immediately see the benefit, such as medical research. It is important to know, for example, that our body is made up of a large number of cells and that biologists do not fully understand how each cell functions. We know a great deal, but it remains a mystery. We do not know how to simulate the cell. This means that, currently, the development of treatments and drugs is done almost blindly. We try things, we see what works. What we see on the horizon is AI's ability to help researchers systematically better understand a biological system, for example, or an aspect of the climate, or an aspect of municipal life or our ecosystems, or what makes a student learn better and faster.

Ultimately, understanding these things is a science that is happening at a certain pace currently, but has the potential to be greatly accelerated by AI. Scientists are already starting to use this, but again, there is an incentive problem. It is not necessarily profitable to advance science in this direction. It is more profitable to take jobs; it is more profitable to provide services that people will pay for. So, I think to an extent it is the responsibility of governments to say, "Okay, there is a direction of application for AI that would really benefit society but there isn't enough investment, because the market forces aren't there currently. And that's where we can intervene."

Myra Latendresse Drapeau: Anything that helps us recognize patterns, basically. That is where human intelligence is quickly limited, but artificial intelligence sees things that we do not see.

Yoshua Bengio: Yes, and not only recognizing patterns, but even proposing scientific hypotheses. This is the basis of how science works. We observe experimental data and then our brain generates ideas of why, why does this happen? This does not mean that the idea is good. Sometimes hypotheses are not confirmed. But generating explanations and ideas, this is an area where we are seeing more and more research that could lead to tools that really could reduce the time it takes to solve certain scientific problems, some say dividing this time by 10. This could really change society in a beneficial way, in a drastic way.

Myra Latendresse Drapeau: In 2023, you were of the opinion that we should put the development of artificial intelligence on hold to consider the repercussions it could have on our societies, on human development, and on some of the factors you have mentioned so far. How can we design a global framework that will continue to foster innovation, but at the same time ensure a kind of harmonization of AI systems and their anchoring in our human values, security principles, democratic oversight, etc.?

Yoshua Bengio: That is an excellent question. Today, unfortunately, we have gone in the opposite direction, where the attention and political will concerning the guidelines we want to place around AI are giving way to the desire...

[00:40:49 On screen, an article by CBC's Jenna Benchetrit, with the following excerpt:

"What we're seeing here is this tension between, on the one hand, the desire to drive the economy through AI innovation, and on the other hand, the need to regulate it."]

Yoshua Bengio: to win the race. The desire to put everything into innovation toward AI that is more intelligent but not necessarily more moral or more capable of following our instructions or standards.

[00:41:07 Yoshua Bengio appears on screen.]

Yoshua Bengio: So that is an understandable issue, because there is more and more competition between companies. We can see this in their behaviour, and it means that, ultimately, they will tend to cut corners on aspects that are more of a public good; security, for example. The willingness of states to collaborate to establish regulatory mechanisms or international standards has also seen a certain decline. In 2023, the British government brought together heads of government and ministers from around 30 countries to try to think along these lines. The companies came, and they said, "Yes, we want to go in that direction." And now, in 2025, well, we see a decline in this desire.

So, that's a problem. How can we get past this? I think the technical details of what kinds of international treaties or agreements we need are less important than the political will. So where will the political will come from? It will come from citizens who are worried about the risks of things going wrong. It will come from a better understanding among citizens and the leaders of countries. It will come from risks that put us all in the same boat, all countries, even countries with very different political systems.

So, when we see AI as a potential weapon that can be used against us or that we can use against others, that puts us in a competitive position that is not necessarily healthy, even if we have to take that reality into account.

When we look at risks like loss of control to, say, crazy AIs that wish us harm, then we are all in the same boat. I think it is that kind of thing... or, for example, the risks of a terrorist using AI or a cult with bizarre beliefs using AI to create global catastrophes. These are risks that would affect everyone. All countries would be affected. Nobody has any interest—nobody, apart from a few madmen—nobody really has any interest in these things happening.

I think it is that kind of risk—but we have to look at all the risks—but it is that kind of risk that is most likely to gather governments that see themselves as adversaries around the same table. I think Canada can be a positive player in this. We have a tradition of multilateralism. We want to collaborate with other countries. We are doing this and we can do more of this and encourage efforts from multiple countries to develop secure AI, for which these risks would be mitigated.

Myra Latendresse Drapeau: On a national scale, what could we do?

Yoshua Bengio: Nationally, we have to do what is morally right. As an example, Europe is currently trying to develop a regulatory system around AI. It would be good if other countries did something similar. The more of us there are, the more countries saying, "These are our limits, our ethical limits," the better. Even if there are some countries that refuse to get on board, we have more power when there are many countries standing together saying that AI can be beneficial, but it must be regulated.

Myra Latendresse Drapeau: I want to end on a slightly more personal note. I believe you are a father, and I believe you are also a grandfather. Sometimes, listening to you, there are still possible scenarios that are frightening. How do we raise our young people, whether or not we are parents, whether we are teachers? How do we grow up in this world, how do we learn to navigate possibilities that are still, well, let's say unknown, but in any case, perhaps a little frightening? What do we say to each other, to ourselves? So, what do we say to our young people who are going to grow up in this world?

Yoshua Bengio: First of all, to make good decisions,

[00:46:02 UNESCO statement. Text on screen: A human rights approach to AI.

Public understanding of AI and data should be promoted through open & accessible education, civic engagement, digital skills & AI ethics training, media & information literacy.]

Yoshua Bengio: you have to understand what is happening, you have to understand what is probably coming. So, collectively, if we can help each other understand, that is already an important step.

[00:46:16 Yoshua Bengio appears on screen.]

Yoshua Bengio: Second of all, if I think about my own trajectory, with people throwing around worst-case scenarios, this did not necessarily influence me that much, but what got me moving was thinking about the love I have for my children and my grandson and asking myself the question, "What kind of life will they have in 10 years, in 20 years, if there are AIs that are more intelligent than us that can be used in a destructive way or that we lose control of and that could want the end of humanity?"

These are scenarios for which it may be very difficult to quantify whether or not they are likely to occur. But these are scenarios that researchers and engineers at AI companies are seriously assessing.

But it is this thought of the love for the people who matter to us that really, for me, led me to move and change my trajectory. I think that afterwards, when we say to ourselves, "Okay, there are unacceptable risks," the next step is to ask what we can do. What can I do for the people I love?

Each of us can do something, a bit like with the climate. Yes, our governments play a very important role, but fundamentally, it is because people are taking it seriously that we end up with public policies regarding this. It is the same with AI. If we look on from the sidelines and don't emotionally confront both the potential benefits and the potential dangers, we'll do nothing and potentially become passive victims of some of these scenarios.

To have a chance of finding a path that is beneficial for us, for humanity, we will have to pull ourselves out of our ruts, let our imaginations run wild, be rigorous in this work, be creative in the solutions we deploy, and open ourselves to ways of understanding things that are very different from what we have experienced all our lives. We all have barriers that trap us in a routine and cause us to resist change, that cause us to resist ideas that are very foreign to us. And it takes mental flexibility, it takes creativity, and it takes us working together and listening to each other to find a path that will be positive for everyone.

Myra Latendresse Drapeau: Once again, thank you very much, Professor Bengio. It was amazing hearing from you. Clearly, you are passionate about this, you have a lot to say. I think we have food for thought. We, as individuals, must take a moment to pause and then think about how we take action, how we mobilize. Yes, it is a question for our governments, our administrations, but it is also a question for us as individuals. Your words were very inspiring and I thank you once again for offering us so many perspectives.

Yoshua Bengio: It was my pleasure. Thank you for the questions.

[00:49:54 The CSPS logo appears.]

[00:49:58 Text appears on screen: canada.ca/school.]

[00:50:01 The Canada wordmark appears.]
