Adopting Artificial Intelligence with Security in Mind (DDN2-V46)

Description

This video captures insights from government, academic and industry leaders on the topic of artificial intelligence (AI) and how Canada's security interests must be protected as the world adopts this powerful new technology.

Duration: 00:22:36
Published: March 27, 2024
Type: Video



Transcript: Adopting Artificial Intelligence with Security in Mind

[00:00:00 Video opens with title page: Security from the Start: Artificial Intelligence.]

[00:00:08 Caroline Xavier appears full screen. Text on screen: Chief, Communications Security Establishment.]

Caroline Xavier: Hello, my name is Caroline Xavier and I am the Chief of Canada's Communications Security Establishment–codemakers and codebreakers for over 75 years.

My team is working together with partners in National Defence, elsewhere in the Government, and the world, to establish how we can provide for safety and security in a world where artificial intelligence (A.I.) promises to disrupt everything, including our assumptions about security.

We have pulled together some colleagues working in the field of artificial intelligence to discuss how Canada's security interests can be protected as the world adopts this powerful new capability.

They will be telling you about:

  1. how A.I. is an opportunity and a vulnerability;
  2. how Canada has a proven track record;
  3. how our adversaries are moving quickly;
  4. how the policy, legislative, and governance framework needs us to act now;
  5. how we're all in this together; and
  6. how the future can be bright.

[00:01:10 Slide. Text reads: AI is an Opportunity and a Vulnerability.]

Caroline Xavier: 'A.I. is an opportunity and a vulnerability.'

[00:01:13 General Paul Nakasone appears full screen. Text on screen: Commander of US Cyber Command, Director of the National Security Agency, and Chief of the Central Security Service.]

General Paul Nakasone: I understand the head of the Public Service is convening Canadian government leaders to look at A.I. Security. Caroline Xavier asked me if I could share perspectives from some recent N.S.A. (National Security Agency) efforts, focused on A.I., specifically A.I. security.

[00:01:30 Anne Keast-Butler appears full screen. Text on screen: Director of the Government Communications Headquarters.]

Anne Keast-Butler: Caroline asked me just to reflect for a couple of minutes on how the U.K. is thinking about A.I., as you get together and reflect on how Canada wants to tackle this challenge for our generation.

[00:01:44 Jen Easterly appears full screen. Text on screen: Director, Cybersecurity & Infrastructure Security Agency.]

Jen Easterly: The internet was never created to be safe and secure. The internet was created to move pictures of cats. And then came software, and software was not created to be secure. And then there was social media, and social media was not created to be secure. Quite frankly, it's why we have an internet that's full of malware, we have software that's full of vulnerabilities, we have social media full of disinformation. With this next generation of technology, artificial intelligence and large language models, we have to ensure we're not making those mistakes of speed over safety. And so we have to ensure that security is prioritized at the design phase.

[00:02:29 Marilia Araujo appears full screen. Text on screen: Partner, Data Analytics and A.I. at PwC Canada.]

Marilia Araujo: One interesting finding that our global Hopes and Fears survey brought to light is that only 31% of Canadians think that A.I. will have an impact on their professional roles or their personal lives. So, I find that the lack of awareness is a big threat when it comes to national security and when it comes to Canadians really being prepared for what's coming next for them.

[00:02:56 Slide. Text reads: Canada has a proven track record.]

Caroline Xavier: 'Canada has a proven track record.'

[00:02:59 Eric Bisaillon appears full screen. Text on screen: Director General of Cyber Defence Capabilities at the Canadian Centre for Cyber Security.]

Eric Bisaillon: There really is an immense amount of talent in Canada in terms of artificial intelligence. Certain research centres, collaboration centres, A.I. ecosystems are rooted in Montreal, Quebec City, Quebec in general, and elsewhere in Canada, where there are really top A.I. researchers and eminent experts who are also French-speaking. So, it's really interesting for us to tap into this community, to work with these people. And to recruit them if some of them want to come and work for the federal government and help us with these challenges.

[00:03:38 Dr. Joel Martin appears on screen seated next to Rajiv Gupta. Text on screen: Chief Digital Research Officer and Chief Science Officer of the National Research Council Canada.]

Dr. Joel Martin: The Government of Canada has had A.I. expertise for many decades, in Statistics Canada, in C.S.E. (Communications Security Establishment), in many different organizations. I know best what's at the National Research Council of Canada (NRC).

Our first A.I. project was delivered with a company in 1991. For large language models, we've been working with language models since 2003. We made them large around 2010, and then we added Transformers (which is what the T is in G.P.T.) in about 2017. All across the Government of Canada we have A.I. experts—you might not have heard of them yet—but they already are building A.I. projects inside the government.

[00:04:25 Rajiv Gupta appears on screen seated next to Dr. Joel Martin. Text on screen: Associate Head of the Canadian Centre for Cyber Security.]

Rajiv Gupta: Similarly, from a C.S.E. perspective, we have been doing machine learning, artificial intelligence, and these sorts of big data analytic initiatives for decades in many different ways. From a cyber defence perspective, we put into operation our very first "no-human-in-the-loop" A.I. defence system in 2018, running on Government of Canada networks. So that was very exciting for us.

We have been doing this for a while. We have the Tutte Institute for Mathematics and Computing, with many A.I. researchers. So, there is lots of expertise in this space, and we have a long track record in the domain.

[00:04:54 Slide. Text reads: Our adversaries are moving quickly.]

Caroline Xavier: 'Our adversaries are moving quickly.'

[00:04:58 Jen Easterly appears full screen.]

Jen Easterly: Generative A.I. has captured the imagination of the world over the past year. It is the responsibility of all of us, as leaders, to ensure we can leverage that power of imagination, and avoid the failure of imagination.

[00:05:15 Eric Bisaillon appears full screen.]

Eric Bisaillon: It's always difficult to know exactly what adversaries are doing, but based on our threat assessments, especially of threats from foreign countries, we know that emerging technologies are going to be used more and more—for good, but also to try to undermine the security or wellbeing of Canada.

Operations are really progressing at machine speed, so we absolutely need the right tools, because the new threats, and the new challenges we're looking at, are no longer at the scale of human capacity to handle. So, we absolutely have to arm ourselves with these tools to be able to really react and operate in the field of modern security.

[00:06:06 General Paul Nakasone appears full screen.]

General Paul Nakasone: Not surprisingly, our adversaries are moving quickly to develop and apply their own A.I. and we anticipate they will explore ways of using A.I. against our national security systems and defence industrial base soon, and against our partners as well.

This is why we must think about A.I. security now, protecting A.I. from learning, doing and revealing the wrong things, as well as protecting it against attack, theft, or damage. We must build a robust understanding of both A.I.'s vulnerabilities and threats to A.I. systems, and help ensure foreign actors don't steal America's and our partners' A.I. capabilities.

[00:06:46 Anne Keast-Butler appears full screen.]

Anne Keast-Butler: We, as a SIGINT (Signals Intelligence) organization, have unique insights into the threat of our adversaries using A.I. to augment or change the sort of specific threats that they can pose to national security. And so, we're using our insights in our intelligence collection to help shape thinking on that. And that's all coming together in the U.K. through the Joint Intelligence Committee and through the temporary focus they have put on A.I. That is bringing in all sorts of intelligence, and G.C.H.Q. (Government Communications Headquarters) is just part of that.

[00:07:18 Rajiv Gupta appears on screen seated next to Dr. Joel Martin.]

Rajiv Gupta: From a Cyber Centre perspective, we're very worried about the use of generative A.I. in terms of generating false information or misinformation or disinformation. We did warn about that in our National Cyber Threat Assessment. As we develop new technologies, we have to make sure that we have guardrails in place, so that the societal benefit is not outweighed by the potential harms that can happen.

[00:07:37 Jen Easterly appears full screen.]

Jen Easterly: I know what the end state should be. The end state should drive down the potential for these very powerful capabilities to be exploited by threat actors. That's what we want to get to. Just like what we're trying to get to in the secure-by-design revolution: dramatically reducing the number of flaws in the software that we all rely upon every day that can be exploited by threat actors. That's the end state.

[00:08:05 Slide. Text reads: The policy, legislative and governance framework needs us to act now.]

Caroline Xavier: 'The policy, legislative, and governance framework needs us to act now.'

[00:08:11 Dr. Joel Martin appears on screen seated next to Rajiv Gupta.]

Dr. Joel Martin: Because we're uncertain, we can't really predict what OpenAI is going to release a year from now. If we try to codify something into law right now, we might get it wrong.

Rajiv Gupta: Absolutely. That's the challenge of regulation. So, we have to keep it high level, framework-driven, principles-based, and leave room for the agility necessary to continually evolve.

Dr. Joel Martin: So that agility implies that we are revisiting it on a regular basis.

This is technology that nobody understands. I mean, there are pockets inside the government and academia and industry that do understand these technologies, but by and large the users of the technologies are not going to understand. And the only way they can trust it is with some type of certification: some trusted body saying "Yep, this has these characteristics, and we can trust it in these ways".

I don't think we're very far along in being able to do all of the tests that we want to be able to do on, say, a large language model to be able to trust it in specialized edge cases.

We would like to have rules with the light touch that we were talking about earlier, so we have guidelines coming out on a regular basis and being re-examined on a regular basis. The guidelines that stick around for a long time are what we put into the regulations and laws. But we also need to be detecting. We need a group to be looking to see what's coming next. Hopefully, C.S.E. is doing some of this for us already, but we need to recognize that with A.I., what we need to be detecting is changing a little bit.

Rajiv Gupta: Well, I don't think it's a governance change, you know—there's no Minister of A.I. that's going to fix the problem. But I think it's a philosophical, or a cultural, shift in terms of how we develop policy, and an understanding that we don't have the timeframes that we used to. Things are changing much more rapidly, and we need a different approach to how we're going to manage an incredibly rapidly changing technological system that has many, many different implications across society as well. So, we need a mindset shift in terms of how we actually approach the governance challenges here.

[00:10:20 John Weigelt appears full screen. Text on screen: Chief Technology Officer and Responsible AI Lead at Microsoft Canada; Lead, AI and ML working group, Canadian Forum for Digital Infrastructure Resilience at ISED.]

John Weigelt: When we start to put together the frameworks for these tools, it's important to take a principles-based approach, looking at the potential harms and being very technology agnostic. Where we see technologies finding their way into policy, those policies tend not to last a long time. And so we manage this discordance, this different pace of change between policy and technology, by having a principles-based approach. We've seen that done by government before in examples like PIPEDA (the Personal Information Protection and Electronic Documents Act), which lasted us a very strong 20 years.

[00:10:50 Slide. Text reads: We are all in this together.]

Caroline Xavier: 'We are all in this together.'

[00:10:53 Anne Keast-Butler appears full screen.]

Anne Keast-Butler: Our thinking on A.I. needs to bring together government, security, academia, industry and be really global in its approach. And we're leading that through our Department for Science, Innovation and Technology. But given the scale, and the need to move at pace, they set up a task force, which I'm privileged enough to be on the steering group of, and which has led our thinking on safety and security in A.I.

How you partner and how you get voices of expertise and a really broad range of voices, including from civil society, is central to the U.K.'s approach.

[00:11:28 Dr. David Perry appears on screen seated next to Dr. Alex Wilner. Text on screen: President, Canadian Global Affairs Institute and Co-Director of Triple Helix.]

Dr. David Perry: This is a space where I think there's a real need for ongoing dialogue between stakeholders in government and folks in the private sector and outside of government. So, we're looking to have opportunities for regular engagements to try and keep the dialogue going. It's a really fast-moving area of technology, and it's important to keep people on the same page as much as possible and keep lines of communication open.

[00:11:49 Dr. Alex Wilner appears on screen seated next to Dr. David Perry. Text on screen: Professor, Norman Paterson School of International Affairs at Carleton University and Co-Director of Triple Helix.]

Dr. Alex Wilner: And from an academic perspective, it's critical to get our grad students thinking about future considerations, not only for their own careers but also for the good of Canada. And so we're trying to train people a little bit differently. We are social scientists by nature but we're trying to add a science and tech lens to the way that we're doing things within academia, which is why these kinds of engagements are so critical to the success of Canadian academia.

[00:12:12 Frédéric Demers appears full screen. Text on screen: Manager, Communications Security Establishment Research Office.]

Frédéric Demers: I believe the partnership aspect is important too: partnership with the Canadian academic community, which is excellent in the field, and partnership with Canadian industry, which also has immense strengths in the field, to guide us in learning to tame these tools. I think this is an important aspect too. We're talking about regulating use, but Canada can also become a leader and project our own values, so that the use of these tools is regulated in a way that represents Canadian values. I think we have a position to take internationally in this respect. Our agency is well positioned, since we can see what's going on abroad.

[00:12:52 General Paul Nakasone appears full screen.]

General Paul Nakasone: I would like to provide some context on A.I. from a U.S. perspective. The U.S. is a world leader in A.I. In addition to being on the forefront of its development, U.S. industries and companies are leading efforts to find ways of applying it for the benefit of our society. Our adversaries around the world recognize the importance of A.I., and will no doubt seek to co-opt our advances and corrupt our application of A.I. for their own purposes. Our nation's leaders understand this, and emphasize A.I.'s importance in every national-level strategy released over the past two years, spelling out how A.I. will be consequential for our military, economic, technological, and diplomatic efforts, as well as those of our allies and partners.

So, what are we doing in A.I. security? In September, I announced that N.S.A. is consolidating its A.I. security-related activities into the A.I. Security Center. This center will: advance the science of A.I. vulnerabilities and mitigations; drive bidirectional exchanges with industry and experts; and develop, evaluate, and promote A.I. security best practices.

The A.I. Security Center will contribute to the development of best practices, guidelines, principles, evaluation methodology, and risk frameworks for A.I. security, with the goal of promoting the secure development, integration and adoption of A.I. capabilities within our national security systems and the defence industrial base.

[00:14:26 Jen Easterly appears full screen.]

Jen Easterly: What we don't want is to crush the innovative spirit that can really catalyze these incredibly powerful capabilities to do stunning things, like helping to solve intractable diseases. But we need the experts to come together collaboratively to ensure that we are putting the right mechanisms in place, and the right guardrails in place, that can effectively govern these capabilities so that security and safety are top of mind, not a bolt-on as they have been in software since it was first invented many decades ago.

[00:15:06 Charlotte Duval-Lantoine appears full screen. Text on screen: Ottawa Operations Manager and Fellow at the Canadian Global Affairs Institute, Executive Director and Gender Advisor at Triple Helix.]

Charlotte Duval-Lantoine: Because of the pace of our adversaries, we often see those frameworks as impeding our development of A.I. But I do think that that's going to be an operational enhancer because the more diverse and inclusive our implementation is, the less biases we will face, the better operational picture we're going to have, the more operational effectiveness we can achieve. But also, we are going to reduce potential collateral damage.

[00:15:33 Eric Bisaillon appears full screen.]

Eric Bisaillon: I think some basic principles apply. In other words, where does the data that we're going to insert into these tools come from? Is it going to remain under our control within the government? Or is it going to be exported to third parties or companies that manage these models? What's going to happen to the privacy aspects of this data, or the confidential aspects of this data? So, these are important considerations that we need to look at in our efforts.

[00:16:13 Frédéric Demers appears full screen.]

Frédéric Demers: C.S.E. has lots and lots of data. Both on the intelligence side and the cyber-defence side. So, using these tools could give momentum in fulfilling our mandates. It's something we need to look at and tackle. At the same time, Canadian values mean that we want to respect the rights and freedoms of Canadians, whether through A.I. models or the humans behind them who use these tools. I think that a human approach is important to supervise the use of these tools, so that they're not left to their own devices, making recommendations without a human being involved.

[00:16:50 Dr. Joel Martin appears on screen seated next to Rajiv Gupta.]

Rajiv Gupta: So, from a critical infrastructure perspective, I'm always worried about even the sensors themselves, whether they're measuring seismology or temperature or traffic rates or whatever. We have to consider that the data is coming from some source, and you can have the smartest algorithm in the world, but if the data that is being fed to it doesn't have the integrity that it needs, then you have a problem. If you're not storing the data properly, if you don't have the right data governance regimes in place, that's incredibly worrisome in many ways. So basically, baking layers of data governance and cyber security and trustworthy computing into the data analytic spaces is important. So, it's your whole supply chain you have to look at, right from where you're collecting the data, through to where you're processing the data, to where you're actually taking the action or interacting with a user at the end point. Which means you do need a common standard right across the whole supply chain, because one weak link in that chain will result in risk to your overall organization, which is really the point of A.I. or cyber security in the end.

Dr. Joel Martin: The National Research Council has been managing astronomical data for Canada since 1986 at the Canadian Astronomy Data Centre. And this is a huge resource. As soon as data comes off of telescopes, it can go into this resource within 15 minutes. And so, astronomers from around the world can come and look at it, and there's all sorts of code around it. And it's all open science. So you think, "OK, well, we don't need to worry about cyber security there because it's all open; anybody can come get it anytime they want." But the problem is that we need to protect the integrity of this data. It's scientific data, a lot of it can only be captured once ever in the history of the universe, so we need to keep that data trustworthy. It also reflects back on the N.R.C. and on Canada. So, if we have bad astronomical data, people stop coming to us, we stop getting funds, we stop being part of the international telescopes, all that sort of stuff. Even open science data must have its integrity protected.

[00:18:51 Slide. Text reads: The future can be bright.]

Caroline Xavier: 'The future can be bright.'

[00:18:54 Dr. Joel Martin appears on screen seated next to Rajiv Gupta.]

Dr. Joel Martin: We're also going to be able to use generative A.I. in the science space—N.R.C. is doing this, but not just N.R.C.—many companies in Canada are working in the space of using A.I. to search for better materials. One area where that's going to work, in Ontario and Quebec at least, is searching for battery materials: better materials to make batteries that use critical minerals from Canada, and then designing electric vehicles using generative A.I. This is happening already, in Canada.

I was at a conference in the United States where they were saying, "Oh, we're falling behind Canada." So this is a place where Canada is succeeding in A.I., in using generative A.I. It's going to increase. But, as I said before, there are some things that we have to pay attention to.

[00:19:44 Jen Easterly appears full screen.]

Jen Easterly: I want to ensure that the generations that come after me are able to benefit from these technologies without suffering the very considerable harms that they could bring if they are not created with safety and security top of mind.

[00:20:06 Anne Keast-Butler appears full screen.]

Anne Keast-Butler: How does A.I. help me and my organization be better? So that's better in how we run ourselves corporately, what insights can it give us? How do we benefit from it in the way that everyone else does? How can we use it to drive productivity, to drive our insights into where we're burning carbon and improve our ability to sort of hit our net-zero targets? How else can we use it to run a business, and how can we use A.I. to improve our collection capabilities and really get after that? And we're experimenting a lot in our own unique secret environment and we're sharing a lot of that experimentation with Caroline and C.S.E. so that it's not a U.K. only learning, it's something that we're doing together in partnership.

But we're also thinking about the future. So, we're making investments that make us ready for the disruptive effect of A.I. in the future, even if none of us can quite see what that is.

And all of that came together at the recent summit, which I was pleased to attend and which Minister Champagne chaired brilliantly: a conversation around the threat. And I know that Canada itself is a leading voice through experts like Yoshua Bengio and others too. So, know that we want to keep this as an active conversation, and wish you really good luck in working through Canada's approach to A.I.

[00:21:32 Caroline Xavier appears full screen.]

Caroline Xavier: What we have heard here is that we can do this. And we must. But we don't have time for delay. The policy, legislative, and regulatory work needs to be advanced quickly or we will miss the opportunity to put the elements of success in place. We don't want to suffer from a 'failure of imagination', and we want to make sure we leave space for Canada to innovate and be competitive.

I believe every public servant will end up playing a part in terms of building security in from the start. We have risen to disruptive challenges before, and I believe that, with the strength of our partnerships and our home-grown talent, we are starting from the front. And with your help, we can stay there.

[00:22:16 Slide. Text reads: Disclaimer: This video is a compilation of stakeholder views collected over a two-week period in November 2023. Many thinkers from the Government of Canada, industry and partner countries will contribute to Canada's A.I. security but are not featured here. Some people have changed positions since this video was filmed, but their perspective remains important for this product which is intended to prompt discussion only.]

[00:22:32 Canada watermark appears, and fades to black.]

[00:22:36 End of video.]
