Transcript: Artificial Intelligence Is Here Series: AI and Machine Learning in Foreign Intelligence
[The CSPS logo appears on screen alongside text that reads "Webcast".]
[Martin Green is sitting in an office facing his webcam.]
Martin Green: Hello, everyone. Welcome to the Canada School of Public Service. My name is Martin Green. I'm the Assistant Secretary to Cabinet for Intelligence Assessment at the Privy Council Office, and I will be your moderator for today's event titled Artificial Intelligence, Machine Learning, and Foreign Intelligence. I'm pleased to be here with you today and want to welcome all of you who chose to connect to this event.
I would like to acknowledge that since I'm broadcasting from Ottawa, I am in the traditional unceded territory of the Anishinaabe people. Today's event is the seventh instalment of our Artificial Intelligence Is Here series, which the Canada School offers in partnership with the Schwartz Reisman Institute for Technology and Society, a research and solutions hub based at the University of Toronto that is dedicated to ensuring that technologies like A.I. are safe, responsible, and harnessed for good.
To date, this event series has covered topics that include the basics of A.I., how and when to use it in government, economic impacts of A.I., issues related to bias, fairness, and transparency, and the global effort to regulate A.I. Today, we will be turning our attention to the role of A.I. and machine-learning when it comes to foreign policy and intelligence.
The format of the event will be as follows. First, we will watch a lecture featuring Janice Stein, a professor at the Munk School of Global Affairs and Public Policy, who will discuss the potential for using A.I. and machine-learning to inform foreign policy decision-making. Following the lecture, Janice will join me live along with our other guest panelist, Jon Lindsay, from the Sam Nunn School of International Affairs at the Georgia Institute of Technology for a panel discussion in which we will go more in-depth on A.I. and foreign intelligence. Finally, we will have a bit of time near the end for audience questions, so please feel free to submit those questions at any point during today's event. To do so, please go to the top right corner of your screen and click the Raise Hand button and enter your question.
The inbox will be monitored throughout the event.
Before we play the lecture, please note that simultaneous interpretation is available for our participants joining us on the webcast. To access simultaneous French interpretation, please follow the instructions provided in the reminder email which includes a conference number that will allow you to listen to the event in the language of your choice. Without further ado, let's play the lecture.
[A graphic appears with the text "ARTIFICIAL INTELLIGENCE IS HERE Series". Text appears that reads "AI, machine learning, and foreign policy." Janice Stein, Professor of Political Science, University of Toronto, Founding Director, Munk School of Global Affairs & Public Policy, is standing next to a photo of puzzle pieces connecting in a circle.]
Janice Stein Lecture:
Intelligence is critical to the making of better policy decisions. It always has been.
[Text appears on the photo that reads "How can AI improve decisions around foreign policy?".]
Today, I want to talk about how A.I. can improve intelligence and help to reduce the likelihood that decisionmakers will be surprised and make poor policy decisions as a result.
[A photo is shown of multiple black pins surrounding one brown pin.]
What is intelligence?
[Text appears on the photo that reads "Intelligence estimates are predictions about the future."]
Intelligence estimates are predictions about the future. They are forecasts of what people will do or what a virus will do.
Intelligence analysts use information about the past to make these predictions about the future and these predictions inform decisionmakers' choices when they have to make hard calls among different options.
[A photo is shown of multiple white arrows pointing in different directions.]
Now, as recently as 100 years ago, all intelligence was exclusively human-generated. Spies are as old as history and they sent back secret information about the capabilities or intentions of their adversaries...
[Text appears on the photo that reads "Predictions are generated by information drawn together from multiple sources."]
...and analysts drew together the information they got from multiple sources and made predictions about the likely behaviour of their adversaries in the future.
[A photo is shown of multiple green chat boxes.]
That their predictions are sometimes right and sometimes wrong says little about their capabilities as analysts...
[Text appears on the photo that reads "Predictions are always at best estimates with different degrees of probability."]
...since information is always incomplete and predictions are always, at best, estimates with different degrees of probability...
[00:05:24 A photo is shown of multiple people holding puzzle pieces around an incomplete puzzle.]
...but over time, we've learned a great deal about how humans make decisions, and here's the surprise.
[Text appears on the photo that reads "Humans have limited cognitive capacity, and use patterns as short cuts." Text appears on the same photo that reads "As a result, our capacity to predict is subject to systematic error."]
The evidence is really strong that humans are limited in their cognitive capacity. All of us, to some degree or another, use cognitive patterns as shortcuts to save energy and cut through the complexity in our environment, and as a result, we make errors, especially when we're making predictions. We often see the patterns that we expect to see.
So, what we've learned is that our capacity to predict is subject to systematic, not random, error.
[A photo is shown of white steps leading to the top of an arrow in front of a blue wall. Text appears on the photo that reads "So, how can we improve our capacity to predict?" Text appears on the same photo that reads "We can draw on better data".]
So, how then do we improve our capacity to predict? Well, we need help. The important first step to improve predictions is to draw on better data. In the last few decades, we developed two different ways of improving prediction. The first method has been around for several decades.
[A photo is shown of red, blue, yellow, and 3D geometric shapes. Text appears on the photo that reads "Modelling".]
What do we do? We build models that make explicit the assumptions we are using to predict these outcomes, and then we test the models against outcomes in the past and against whatever data we have in the present.
[A photo is shown of different coloured cubes. Text appears on the photo that reads "We can "exercise" models by varying assumptions to see how the expected outcomes change." Text appears on the same photo that reads "Modelling is very useful in predicting events that are rare and complex."]
One of the real values of models is we can exercise these models. We can vary the assumptions and we can see how a change in the assumptions will change the expected outcome.
So, modelling is very useful when we are trying to predict events that have two attributes: they're rare and they're complex, and we do a lot of that in global politics. I'll return to modelling later when I talk about the challenges of predicting...
[A photo is shown of soldiers in Afghanistan with text that reads "Kabul, Afghanistan (2021)".]
...the fall of Kabul to the Taliban in the summer of 2021. Almost nobody got it right, and that certainly qualifies as an event that was both complex and uncommon. How could we do it better the next time?
[A photo is shown of lights in the shape of a brain.]
A second tool to improve predictions is...
[Text appears on the photo that reads "Artificial Intelligence".]
...artificial intelligence, and A.I.s can be trained to see patterns embedded in very large amounts of data that arrive really quickly from multiple sensors and multiple sources, and our world looks increasingly like that, studded with sensors and drawing on multiple sources.
There are patterns embedded in the large amounts of data that are flowing in now to intelligence analysts but these human decision makers are unlikely, on their own, to uncover these patterns. Artificial intelligence can be a real help here.
[A photo is shown of a circuit board. Text appears on the photo that reads "Artificial intelligence can be trained to see patterns in large data sets, and its predictions can be tested to determine the error rate."]
So, artificial intelligence can be trained on these very large data sets. Its predictions are then validated against another data set, and then, here's what's really valuable: these predictions can be tested against new data so that analysts can see the error rate. That is really the promise of A.I. to improve intelligence. A.I. can generate predictions that can help decisionmakers make better decisions than they would without help.
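The train, validate, test workflow Professor Stein describes can be illustrated with a minimal sketch. Everything here is invented for the example: the synthetic data, the split sizes, and the simple threshold "model" are assumptions, not a real intelligence system. The point is only the shape of the workflow: fit on one set, tune on a second, and report an error rate measured on data the model has never seen.

```python
import random

# Toy illustration of the train / validate / test workflow:
# fit a predictor, check it on a validation set, then estimate
# the error rate on held-out test data. All data is synthetic.

random.seed(0)

def make_example():
    # One numeric "signal" feature; label is 1 when signal > 0.5,
    # with 10% label noise so prediction can never be perfect.
    x = random.random()
    y = int(x > 0.5)
    if random.random() < 0.1:
        y = 1 - y
    return x, y

data = [make_example() for _ in range(1000)]
train, validate, test = data[:600], data[600:800], data[800:]

def fit_threshold(examples, candidates):
    # "Training": choose the decision threshold that makes the
    # fewest errors on the training set.
    def errors(t, rows):
        return sum(int(x > t) != y for x, y in rows)
    return min(candidates, key=lambda t: errors(t, examples))

threshold = fit_threshold(train, [i / 20 for i in range(21)])

def error_rate(rows):
    return sum(int(x > threshold) != y for x, y in rows) / len(rows)

# The validation set guides model choice; the test set estimates
# the error rate an analyst could report to decision makers.
print(f"validation error: {error_rate(validate):.2f}")
print(f"test error:       {error_rate(test):.2f}")
```

Because the labels carry 10% noise by construction, a well-fit threshold lands near 0.5 and the measured error rates land near 0.1, which is exactly the kind of quantified uncertainty the lecture says human analysts cannot produce for their own judgments.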
[A photo is shown of the Earth with text that reads "How might we use AI in global politics?"]
How might we use A.I. in global politics? There are many, many potential applications and let me just talk quickly about six, but they are six of hundreds that I could identify.
[A photo is shown of the top of a red globe.]
First, A.I. can help...
[Text appears on the photo that reads "Analyzing voice communications."]
...in the analysis of patterns in huge data sets of voice communications that intelligence agencies routinely collect from people abroad. These patterns can be very helpful in predicting what adversaries are likely to do.
[A photo is shown of computer code on a screen. Text appears on the photo that reads "Analyzing metadata."]
Analysis of metadata, the second one: metadata of who is talking to whom can be helpful even when we have no access to the content of the conversation. This kind of analysis, done with the help of an A.I., would allow the identification of networks that would otherwise be invisible to analysts. This kind of analysis is so powerful that in democracies, we put safeguards in place on the collection and analysis of these kinds of data because analysis of their patterns is so revealing.
[A photo is shown of piles of coins shaped to resemble the continents. Text appears on the photo that reads "Analyzing financial transactions."]
Here's a third area where A.I. can be helpful.
It can help in the analysis of large volumes of financial flows across borders to track the funding of criminal and terror networks. Following the money is one of the most effective ways of identifying ransomware networks as well as predicting the illicit activities of criminal networks and drug smugglers.
[A photo is shown of a road with three red pins along it. Text appears on the photo that reads "Analyzing movements of people."]
A fourth way: analyzing satellite data to track movements of people across borders to improve predictions of patterns of migration.
[A photo is shown of a smartphone next to an array of icons covering a map of the world. Text appears on the photo that reads "Analyzing web data."]
Another one that we know has worked: analyzing web data and Google searches with an A.I. to predict outbreaks of viral illness and demand for health care advice. Had we had that in place, the trajectory of the pandemic might have been quite different.
[A photo is shown of airplanes on an airport landing strip. Text appears on the photo that reads "Analyzing transportation networks."]
And finally, analyzing transportation networks on land, at sea, and in the air, again with the help of A.I., would improve the prediction of critical disruptions in supply chains.
We've seen how disruptive these breaks in supply chains can be.
[A photo is shown of a blue pathway with text that reads "What is the challenge to using AI to develop predictions?" Text appears on the same photo that reads "We need large, accurate data sets."]
So, what's the big challenge to using A.I. and large datasets on all these big issues of global politics?
We need large, accurate datasets and that is still a work in progress but improvements are coming very, very quickly.
[A photo is shown of an eye with text that reads "While there are risks involved, the benefits of AI for improving predictions are significant."]
When I think about it, of course there are risks, but the benefits of A.I. for improving predictions in intelligence analysis are significant, and I compare it to what analysts would find if they were using only their naked eye.
[A photo is shown of multiple white staircases. Text appears on the photo that reads "Data sets will reflect the biases of the people who construct them." Text appears on the same photo that reads "If analysts are unaware of these biases, their predictions will reproduce them."]
Now, let's talk for a minute about the risk of harm.
We all know that datasets will reflect the biases of the people who construct the training algorithms for the A.I. or even the datasets.
If analysts are not aware of these biases, these predictions will simply reproduce these biases and we already have...
[A photo is shown of the front page of Machine Bias by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, ProPublica. Text reads "There's software used across the country to predict future criminals. And it's biased against blacks."]
...considerable evidence of racial bias that is built into the A.I. systems that were constructed to predict the likelihood of criminal re-offending. It is really important that we pay a great deal of attention to the implicit bias that is baked into the datasets.
[A photo is shown of multiple white staircases. Text appears on the photo that reads "Data sets can be biased not only by the inclusion of data, but also by the exclusion of specific kinds of data."]
Datasets can be biased not only by the inclusion of data but also by the exclusion of specific kinds of data and if these omissions are systematic, the predictions will be biased.
[A photo is shown of a figure representing a human, standing at the entrance of a maze with text that reads "Analysts are often unaware of the limits of their data or the bias embedded in it." Text appears on the same photo that reads "When this happens, analysts will be overconfident in their predictions."]
Analysts and decisionmakers are often unaware of the limits of their data or the bias that is embedded in their data. When that happens, analysts especially will be overconfident in the predictions that they generate, and I know from research on poor decision-making that overconfidence is one of the chief errors that analysts make.
It is a trap because they convey that overconfidence to the policymakers that they are advising.
[A photo is shown of a hand moving a black chess piece along a chess board. Text appears on the photo that reads "We need to pay attention to error rates of predictions..." Text is added to the previous text that reads "while remembering that unaided human decision-making is biased as well, but is harder to correct."]
So, as we move ahead in this field which is changing dynamically in real time, we are already in the world of A.I. and intelligence.
We need to pay extraordinary attention to the error rates of predictions and to the systemic biases in the data that are used to train A.I.s all the while remembering that unaided human decision-making is biased as well but we can't see it and it is harder to correct.
[A photo is shown of balls placed inside a Venn diagram with text that reads "The usefulness of modelling".]
Let me talk about a second tool that is especially useful for predicting rare and complex events. That tool is modelling, and let me illustrate the usefulness of modelling by...
[A photo is shown of soldiers in Afghanistan. Text appears on the photo that reads "The fall of Kabul, 2021".]
...retelling the story of the fall of Kabul in the summer of 2021. Certainly, engraved in my memory are pictures of U.S. aircraft...
[A photo is shown of an article from The Guardian which features a photo of multiple Afghans surrounding an airplane.]
...taking off from Kabul Airport with Afghans clinging to the underbelly of the planes in a desperate attempt to leave the country...
[A photo is shown of an article from CNN which features a photo of people gathering near a plane in Kabul.]
...before the Taliban consolidated their power. The Taliban had advanced from the south of the country to the north in a rapid sweep...
[A photo is shown of an article from Fortune which features a photo of Afghans crowding Kabul's airport.]
...and, within ten days, took control of the capital city, Kabul, on August 15th, and cut off escape routes for Afghans who were trying to flee the city.
[A photo is shown of Afghans waiting in line to board a plane.]
If you saw those pictures, you probably asked, as did I, how could this evacuation have been...
[A photo is shown of a large crowd of people in Afghanistan.]
...so chaotic, disorganized, and frankly shambolic? The answer that came back from U.S. officials when they were asked is...
[A photo is shown of an article from Newsweek which features a photo of U.S. soldiers in Afghanistan.]
...we were surprised. They said that they had asked their intelligence agencies and were told that, yes, the Taliban would take over Afghanistan but not quickly.
[A photo is shown of an article from USA Today which features photo of a map of Kabul.]
One large agency predicted that the current government, the government of President Ashraf Ghani, would stay in power for at least two years after the U.S. withdrawal. Another agency predicted that the Afghan government would survive for at least a year after U.S. forces withdrew.
[A photo is shown of an article from CBC which features a photo of President Joe Biden speaking into two microphones.]
Those intelligence estimates were updated in early August by all the big agencies when the President asked. The new prediction: the Afghan government might fall by the end of 2021.
[A photo is shown of President Joe Biden speaking to officials.]
So, there were disagreements within the intelligence community but all of the agencies that were advising President Biden agreed there was time, at least several months. So, the President made a critical decision. He agreed to President Ghani's request not to start the evacuation because that would undermine confidence in the staying power of the Afghan government.
[A photo is shown of the ground with many cracks in it.]
Now, what happened here in this story?
[Text appears on the photo that reads "Predictions drove decisions."]
Predictions drove decisions as they almost always do.
[A photo is shown of an article from Global News which features a photo of Prime Minister Justin Trudeau speaking from behind a podium.]
Now, was it different in Canada, where we might think intelligence analysts were less motivated to be optimistic because they had less skin in the game? After all, we in Canada had no troops on the ground. We had withdrawn our forces in 2014...
[A photo is shown of people walking down the street in Afghanistan.]
...but what we did have were thousands of interpreters who had helped the Canadian Armed Forces and, in so doing, put their lives and the lives of their families at risk were the Taliban to take over, and we had fixers who worked with Canadian journalists and with Canadian NGOs and who also put their lives at risk. So, were the predictions of Canadian intelligence any better? It seems not.
[A photo is shown of an article from Maclean's which features a photo of a crowd of people at Kabul's airport.]
Now, it's true that we in Canada get much of our intelligence data and analysis directly from the United States. It's also true that we had limited, independent capacity to collect and interpret intelligence on the ground in Afghanistan.
Nevertheless, as one of our senior deputies said, we were surprised. We thought we would have time to organize an evacuation of all the Afghans who had helped Canadians. We didn't have time and so we left behind hundreds if not thousands of Afghans who had risked their lives for Canada. Many hid in safehouses for months, waiting for help that was now much harder to provide...
[A photo is shown of black puzzle pieces. Text appears on the photo that reads "Immigration, Refugees and Citizenship Canada was swamped by thousands of files by people stranded outside Canada." Text appears on the same photo that reads "Better predictive analytics would have made a huge difference."]
...and IRCC, our immigration and refugee agency, was swamped by the imperative to process thousands of files from people who were stranded outside Canada, desperate for help. Yet, these people had to be examined as potential security risks. Better predictive analytics would have made a huge difference...
[A photo is shown of five red pins on a map. Text appears on the photo that reads "Predicting "when" is much harder than predicting "whether" an event will happen." Text appears on the same photo that reads "Even if we could draw on all historical cases, many would not be relevant."]
...but the real challenge was better prediction on when the Taliban were likely to take over Kabul. Predicting when is much harder than predicting whether an event will happen. This kind of forecast is really hard. We have very limited data to train an A.I. even if we could draw on all the available historical cases, and many of those cases would not have been relevant because the context was so different.
[A photo is shown of multiple black question marks and one yellow question mark. Text appears on the photo that reads "What could we have done to better inform time-urgent decisions?" Text appears on the same photo that reads "We could have built a model."]
So, what could we have done in early August in the face of uncertainty and the need to generate predictions to inform time-urgent decisions?
We could have built a model, making explicit the assumptions of the factors that would drive the return of the Taliban, and in advance, specify the indicators of each of these factors so intelligence analysts would have known what to look for.
[A photo is shown of digital geometric models.]
We could then have fed that model, on an hourly basis...
[Text appears on the photo that reads:
"Data relevant for predictive modelling:"
"Location of Taliban forces"
"Distance from the capital"
"Methods of transportation"
"Capacities of Afghan special forces"
"Rate of Taliban advance".]
...data relevant to the indicators of those factors, data like the location of the Taliban forces, their distance from the capital, road conditions, their methods of transportation, where Afghan special forces were deployed and where they were stretched, and the rate of the Taliban advance.
Using assumptions about the Taliban's preferences, we could have asked the model to predict the pace of the Taliban advance and then we could have asked the model to update the predictions as new data were generated.
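A model of the kind described above can be very simple and still useful. The sketch below uses invented numbers: it estimates days until the capital is reached from two of the indicators mentioned (distance remaining and observed rate of advance) and revises the estimate as each new observation arrives. The value is not the specific figures, which are hypothetical, but the explicit, inspectable assumption that the advance continues at the average rate observed so far.

```python
# A deliberately simple predictive model: days until the capital
# is reached, estimated from distance remaining and the observed
# rate of advance, and updated as each new observation arrives.
# All numbers are invented for illustration.

def days_to_capital(distance_km, rate_km_per_day):
    # Core assumption, made explicit: the advance continues at
    # the average rate observed so far.
    return distance_km / rate_km_per_day

# Hypothetical daily observations:
# (distance remaining in km, km advanced that day)
observations = [(500, 20), (450, 50), (370, 80), (260, 110)]

rates = []
for distance, advanced_today in observations:
    rates.append(advanced_today)
    avg_rate = sum(rates) / len(rates)
    estimate = days_to_capital(distance, avg_rate)
    print(f"distance {distance} km, avg rate {avg_rate:.1f} km/day "
          f"-> ~{estimate:.1f} days to capital")
```

Run on these invented figures, the estimate collapses from roughly 25 days to roughly 4 as the advance accelerates, which is precisely the kind of rapidly updated warning the lecture argues decisionmakers lacked in early August 2021.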
[A photo is shown of a red and yellow map titled Taliban Advances.]
The first two provinces in the north of Afghanistan, the heartland of traditional resistance to the Taliban, fell with no fighting whatsoever on August the 5th. How diagnostic were these indicators? Well, to me, they were very diagnostic and I said quietly to myself, the fall of Kabul is coming imminently. Had our agencies treated these variables as diagnostic based on an argument that we could have had beforehand about how important they were...
[A photo is shown of people standing in line at Kabul's airport.]
...we could have had aircraft on the ground in Kabul a week before the Taliban took control of the city.
[A photo is shown of soldiers helping lift a child onto a platform.]
Many of the Afghans, to whom we were and are obligated, could have been safely evacuated. Time really mattered and it is accurate prediction that gives decisionmakers time.
[A photo is shown of a red puzzle piece in a white puzzle.
Text appears on the photo that reads "Models to test assumptions working together with human judgement could have improved predictions.", and then fades out. Text appears on the same photo that reads "The question is not whether we have a perfect model or AI, but rather: How do these models compare to human decision-making?"]
In this really difficult case, models to test assumptions and human judgment, working together, could have improved predictions. When we think about how a model could have helped predict the fall of Kabul, we need to compare it to human decision-making, which is also biased. That is always the default.
The question is not whether we have a perfect model or whether we have a perfect A.I. but how do these compare to human decision-making that is often also flawed for reasons that we know.
[A photo is shown of a tall, illuminated ladder next to shorter light blue ladders against a light blue wall. Text appears on the photo that reads "Building a model forces us to make our assumptions explicit, which is a powerful de-biasing tool."]
So, here's why building a model helps.
We're forced to make our assumptions explicit, and we're forced to do that in some kinds of A.I. as well. That is a powerful de-biasing tool. We can test alternative assumptions against the data that we have. Considering alternative assumptions is another de-biasing tool. If it wasn't the rate of the Taliban advance, well, were there certain provinces that, if they fell, told us that the fall of Kabul would be likely?
[A photo is shown of rectangular prism of different colours and different heights. Text appears on the photo that reads "Alternative models can generate a range of predictions, broadening what analysts can say to policy makers."]
Alternative models can generate a range of predictions, each expressed with a varying level of confidence, broadening what analysts can say to policymakers. Expressing contingency and alerting them to the ranges is so valuable to decisionmakers, who have to make these decisions, at best, in a world of probability but, more often, in a world of uncertainty, and make these decisions with really grave consequences.
[A photo is shown of multiple black question marks. Text appears on the photo that reads "A difficult choice: Starting an evacuation immediately and undermining confidence or waiting and risking that an evacuation would be rushed."]
So, let's go back for a moment and put ourselves in the position of decisionmakers in Ottawa between August the 5th and 15th.
Ministers had to choose between starting up an evacuation immediately and undermining confidence in the Afghan government, or waiting and risking that an evacuation would be rushed and put at risk many Afghans who had been on application lists for years. Which harm would you have advised our government to minimize? What evidence would you use? What would you have based your prediction on? I'm going to leave that one with you.
[A photo is shown of a helicopter flying over land.]
Let's consider a second problem, a prediction where A.I. could really have helped. After that frantic evacuation, Afghans who had risked their lives for Canada were dispersed throughout the city of Kabul.
[A photo is shown of a red arrow made of puzzle pieces. Text appears on the photo that reads "Could AI have helped generate predictions to identify and provide visas to Afghans who helped Canadians?"]
Could A.I. have helped our officials to generate predictions rapidly within a 48-hour period that they could've used to identify and provide documents and visas to Afghans who claimed they had helped Canadians? Many Afghans had applied years before to come to Canada but processing was very slow, and those Afghans who congregated at the gates of the airport needed one document, a visa from Canada, before they could get into the airport.
Could A.I. have helped us here?
[A photo is shown of numbers and buildings. Text appears on the photo that reads "An AI could have been trained on a data set of previously accepted refugees and then tested for accuracy."]
Here, large datasets of past immigration decisions are available, or could easily have been constructed.
An A.I. could have been trained on a dataset of refugees that Canada had accepted from war-torn countries and those that had been rejected. A validation set could have been constructed on refugees from other countries. Smart analysts could have paid real attention to the biases that might have been built into those datasets and then the A.I. could have been run against a test set and its error rate established. It's almost inevitable that the predictions generated by the A.I. would have reflected some of the biases of previous decisionmakers but at least the decisionmakers in IRCC would have known in advance something about the biases and more about the error rate.
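One concrete way to "pay real attention" to the biases described above is to measure the error rate separately for each subgroup rather than only in aggregate. The sketch below does this on invented records; the group names, labels, and numbers are all hypothetical and stand in for whatever categories a real audit of past decisions would use. An overall error rate can look moderate while concealing that the model fails far more often for one group than another.

```python
from collections import defaultdict

# Hypothetical past decisions: (group, true_label, model_prediction).
# A single overall error rate can hide the fact that a model is
# much worse for one subgroup than another.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 0),
    ("group_b", 0, 0), ("group_b", 1, 1),
]

def error_rates_by_group(rows):
    # Tally errors and totals per group, then convert to rates.
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, predicted in rows:
        totals[group] += 1
        errors[group] += int(truth != predicted)
    return {g: errors[g] / totals[g] for g in totals}

rates = error_rates_by_group(records)
overall = sum(t != p for _, t, p in records) / len(records)

print(f"overall error rate: {overall:.1f}")
for group, rate in sorted(rates.items()):
    print(f"{group}: error rate {rate:.1f}")
```

In this made-up example the aggregate error rate is 0.4, but it decomposes into 0.2 for one group and 0.6 for the other: exactly the kind of disparity that decisionmakers in IRCC would need to know "in advance," as the lecture puts it, before acting on the model's predictions.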
[A photo is shown of soldiers in Afghanistan. Text appears on the photo that reads "What was the cost of those delays?"]
So, back again to the real policy problem that officials faced. If they'd had predictions generated that way, they could have traded off the error rate against the two-year human-generated delays in issuing visas that failed to reach many of the Afghans in time to be evacuated.
What was the cost of those delays? Well, Afghans who congregated at the gates of Kabul Airport but who did not have visas from Canada were turned back. Virtually everybody who was turned back is now at considerable risk. Funding for safe houses has run out and there are Afghans stranded inside the country as well as around the world as they await Canadian visas.
It's very likely that officials who are working overtime now to process these files are nevertheless making errors that all human beings make. We all make those mistakes, so that is not in any way a criticism of our officials, but when humans make unaided decisions, we can't calculate those errors with any degree of accuracy nor can we examine the biases.
[A photo is shown of a pattern of multiple small circles. Text appears on the photo that reads "Reframing the policy problem: Train an AI to generate lists of Afghans to be evacuated on a time-sensitive basis, prioritizing efficiency over accuracy, or not generate any lists and strand many Afghans with no access to help."]
So, think of the policy problem this way.
We could have trained an A.I. to generate lists of Afghans who could be evacuated on a time-urgent basis, and here, we would be using A.I. to improve efficiency, knowing that we were going to sacrifice some accuracy because of the imperatives for quick action.
The alternative, not to generate any lists because of the short time available and the complex data requirements and the risk of error which is a major factor for human decisionmakers, resulted in stranding many Afghans with no access to help. How would you weigh the relative harms and benefits and what would you advise the Minister?
[A photo is shown of puzzle pieces connecting in a circle. Text appears on the photo that reads "Good intelligence is critical to decision making." Text appears on the same photo that reads "AI and models can help in different ways, and are suited to different policy problems." Text appears on the same photo that reads "We can know the error rate of their predictions, but rarely know our own." Text appears on the same photo that reads "We pay attention to the biases of AI, but far less to the biases of all human decision makers."]
In summary, good intelligence is absolutely critical to decision-making. It always has been and it is becoming more and more so. Intelligence is fundamentally a prediction problem.
A.I. and models can each help with prediction, although they do it in different ways and they're each suited to different kinds of policy problems.
We can know the error rate of their predictions. We rarely know our own error rate.
We pay a great deal of attention to the systematic biases created by an A.I. but far less to the systematic biases that all human decisionmakers have.
[A photo is shown of one open box emitting light in a group of closed boxes.]
So, what are the challenges that we face as we move into a world of A.I.-generated intelligence? How best to combine the benefits of human and artificial intelligence to improve the predictions that are essential to better policy? When do we use A.I. to improve accuracy and when do we use it to improve efficiency, and how do we trade the two off? What kinds of prediction problems are best suited to A.I., and what kinds are best handled by models that manipulate assumptions explicitly? In the next few years, we will learn more and get better at meeting these challenges, but we are already in the world of A.I., models, and intelligence. Thank you.
[A graphic appears with the text "ARTIFICIAL INTELLIGENCE IS HERE Series".]
Martin Green: Well, thank you, Janice, for that marvellous video. I'd like to introduce our two guest speakers for this afternoon. We're privileged to have Janice Stein, Professor, Munk School of Global Affairs and Public Policy, and Professor of Conflict Management, Department of Political Science, University of Toronto, and we also have with us from Atlanta, Jon Lindsay, Associate Professor, the School of Cybersecurity and Privacy, Sam Nunn School of International Affairs at the Georgia Institute of Technology.
I'd like to remind folks that you can submit questions. We're going to have a few brief chats with our guests and then we're going to go to 'Q's and 'A's. So, to do so, you need to use the top right corner of your screen, click the Raise Hand button, and enter your question. So, please feel free to do that and the second part of this hour will be devoted to your questions.
So, to kick off on the panel discussion, people have been talking about A.I. for a number of years, lots of coverage, how we're on the cusp of the fourth industrial revolution or indeed in it, and I was wondering if I could ask both of you for your thoughts sort of contextually on where we're at.
Is A.I. really differentiating right now from the more traditional intelligence collection approaches?
[Text appears on the screen "AI and Machine Learning in Foreign Intelligence" "Policy and regulations." Prof. Lindsay is sitting in an office facing his webcam. Prof. Stein is sitting in an office facing her webcam.]
[Text in French appears on the screen: « L'intelligence artificielle et l'apprentissage automatique en renseignement étranger » « Règlements et politiques ». Janice Stein apparaît dans une fenêtre de clavardage vidéo. Jon Lindsay apparaît dans une fenêtre de clavardage vidéo.]
Where are we at in the use of these technologies? Are they still considered nascent or are we well down the road? This is to provide some context for folks about where this fourth industrial revolution may or may not be at. I thought I would ask you, Professor Lindsay, if you wouldn't mind starting off.
Jon Lindsay: Absolutely. So, no, there's no shortage of documents from the United States, from European countries, certainly from China, that paint A.I. as the new silver bullet for competition whether that be in the economic realm or the military realm. This idea that this transformation and the ability to analyze and predict data is going to lead to kind of wholesale reforms in national competitiveness, and maybe that's especially in the military realm, right, and there's no shortage of kind of science fiction prompts to get us thinking in that way.
Of course, the reality is far more complex. The context, as you mentioned, is far more complex and I think it's important to understand this thing that we're calling the fourth industrial revolution in terms of a much longer process of switching physical labour and physical activity and human interaction and building and doing things to more cognitive activities, and before you even start thinking about what the machines may be doing in the cognitive realm, we have lots of people that are doing what looks more like office work rather than factory work that are working in headquarters rather than fighting on the battlefield, and that's been a long term trend that really isn't about A.I. and that creates these incredible information management challenges.
So, I think some of the excitement about A.I. is kind of just recognizing the difficulty of this informationally dense and interpretively challenged environment that exists for firms, for militaries, and for intelligence agencies, and there's perennial hope that there will be a technological solution for it but I think rather than really seeing a fourth revolution, we're seeing a continuation of the same process which means that humans and energy are having to deal with increasingly complex problems and having to sort through a lot of the problems that those very solutions will create in the process.
Janice Stein: So, I might just add to what Jon just said. If we move way out along the frontier, into science fiction, the most interesting science fiction right now is about what we call AGI, artificial general intelligence, and that's what's gripping the popular imagination. Are we humans about to be replaced entirely by machines? That's way, way, way out at the end of the frontier, I think, and how close we are to that is what we hang around coffee machines talking about all the time.
I think it might be useful to make a distinction here, and it'd be interesting, Jon, to hear what you think about this. One is this argument about replacing human cognition with machine cognition, because that's what artificial intelligence is, a really advanced capacity to process enormous amounts of data and to derive patterns from that data. And then let me add a loaded word here, a word we don't know the meaning of. Believe it or not, after 2,000 years, we still don't know: learn. That's the critical word here, and we do not have a consensus in neuroscience, cognitive science, philosophy, or artificial intelligence on what that one word means, because that's what machines would have to be able to do.
They would have to have an autonomous capacity to learn in order to replace us. And then there's the much more pragmatic approach, as Jon suggested, which is that machines are going to help us do a lot of the things that we do. They're augmenters, and so A.I. will augment. We don't have to look very far. When we pick up our smartphone, mine has already sent me a message about something because it's figured out by pattern recognition that I'm interested. Even worse, my espresso machine at home. I just had to buy a new one because my old beloved one imploded, and this one is a smart espresso machine. So, there are 12 drinks on the one machine. I've only used it for three days. It knows that I only want that one, and the other 11 have now disappeared from the display board.
So, is that machine thinking? Is it doing really smart pattern recognition? But, it's an example of A.I. in everyday life. So, from that perspective, it's here. We're not- it's not a process, it's here. It's been here for a long time and it's getting better and better and better at pattern recognition all the time. There is still a big, huge question, can it learn? That's the big one and we don't have an answer to that.
Martin Green: Well, you know, some of the A.I. stuff out there is very polite. I know that I yell at the voice command on my television on occasion when it doesn't do what I want, and it's unfailingly polite in its response. It's sort of like, I don't understand, Martin, did you want to do something else?
In my job, we do a lot of work on A.I. just as a subject, you know, what country's doing what, where the technology's at, and sadly, one of the things that quickly becomes apparent is all of the opportunities, A.I. in health services, A.I. in social services, the opportunity is endless, so too is the dark side on all of this in terms of what some of these technologies can be used for, and I'd like to ask both of you, in terms of your studies on this, where do you think the major sort of policy debates are going to be around the use of A.I.? We know that authoritarian countries, given their command and control and, I would suggest, lack of restraint, are able to use some of the newer technologies, including A.I., basically for suppression or to get rid of dissent on the dark side, but in terms of, I guess, you know, the West, for lack of a better word, and especially Canada, where do you see some of these key policy debates with respect to the application of A.I.? Did you want to start again, Jon, or is-
Janice Stein: Over to you, Jon?
Jon Lindsay: Sure, sure. You know, I think there's kind of four big areas and they certainly pop out when you're looking at the authoritarian countries. I mean, China's interested in A.I. because it wants to make money and it sees that this is a huge emerging market. It wants to, you know, have a more effective censorship and information control regime, and that's certainly a place where it can use it.
It wants to have, you know, a more modern and effective military force, and so throughout the last 20 years, the Chinese were talking about informatization. Now, they're talking about intelligentization. It sounds better in Chinese, right, and that makes a lot of sense for China because China wants to build a military that looks a lot like Western militaries that are very high tech, very networked, but they run on people that are very highly educated and have a lot of initiative to go and do things, and that's a little bit scary for the Chinese People's Liberation Army, right?
Because if you've got really smart, highly-educated people with lots of initiative, how do you know that they're loyal? So, if you could replace that loyalty problem with A.I., maybe you could get military effectiveness and, you know, loyalty to the regime, and that solves the coup-proofing problem that, you know, I think we're seeing on display in Russia right now, quite frankly, right? I mean, an abysmally ineffective military, but one that has been made loyal. So, you know, maybe that's a win domestically. I digress.
And then, the fourth bucket is your world, right, the world of intelligence, right, trying to make sense of this mass of information, and I think there are some serious challenges there as well.
So, you know, you can drill down to all four of these categories but I think one thing I want to emphasize is that in the kind of economic and political history in all of these areas, whether we're looking at economics, kind of institutions, military power, intelligence, the key thing that determines the effectiveness of technology is not the technology itself, not whatever it substitutes for, but the complements, right, the organizational skills and the human capacity that that country or that firm can mobilize to get the most out of its technology.
So, I would suggest that when we're kind of looking and comparing, you know, what North America can do with what China can do or Europe, right, we need to be doing that full net assessment and looking not just at the technological substitutes but those super, super vital human complements.
Janice Stein: I think that's a really important point that Jon has made: if we isolate technology from the organizational context and the larger systems in which it's embedded, that's really a problem. We're going to get the analysis really badly wrong. On the other hand, let me just drill down at the other end of the spectrum for one minute, to make a point that doesn't get made often enough when we look at the challenges of A.I.
One of the problems, and this is the most frequently cited challenge in the literature, is that A.I. is biased. Now, why is it biased? That's a really interesting conversation, and there are huge amounts of evidence here. I just saw a really interesting piece on the A.I.s that come out of DeepMind, and it's because DeepMind scrapes the whole of the web and gathers it up into, you know, massive amounts of data.
But then, of course, what it doesn't take into account is how much junk is on the web and how much bias is reflected in what's on the web, and all of that's scooped up and incorporated and used to train A.I.s. You know the old expression, garbage in, garbage out? Well, you get bias in, bias out.
But what this whole argument misses is that if you left me with the problem, you'd get bias too, because every single one of us brings bias, unconscious bias for sure and sometimes conscious bias, and the real problem is the unconscious bias. There isn't a human being who is an exception to that rule. Every one of us has unconscious bias because we have scripts, cognitive scripts that we develop to manage the problem that Jon talked about, huge amounts of information. Our cognitive processing would break down if we did not have these scripts, but we don't think about our own bias when an analyst in your shop, Martin, provides a piece of analytic work.
So, our conversation, if I think about it as a seesaw, is tilted all the way to one side: all the bias is on the side of the A.I. and none of it's on the human side, and that's just a completely inaccurate discussion to have every time.
So, my concern is not that we shouldn't be worried about bias in A.I. but it's a much bigger concern. We need to be worried about unconscious bias in all human information processing, whether augmented or not, and when we put the problem that way, that's, I think, a better context for thinking about A.I.
Martin Green: Thanks. It's interesting, you know. The bias conversation is very interesting. In the lead up to the Russian invasion of Ukraine, you know, it was pretty obvious to everybody who was reading the newspapers that the United States, you know, in an unprecedented fashion, gave some of its more exquisite intelligence to the public domain because it was pretty much indicated that the invasion was coming.
But a whole bunch of people including myself had this optimism bias which was, no, he's never going to do this, like, it's wrong, and actually, it was one of those times when my analysts were telling me, you know, get your act together, Martin, and it's very- you know, as a human being, it's very hard to see and I still have those moments when- of complete disbelief in what we're seeing.
I also remember that- you know, and it fits in what you were saying, Professor Lindsay, is, you know, the biggest ingredients in innovation in any country are basically demographics, technology, and the skills of people, and the more that I'm looking at A.I., everybody is saying the foundation piece of this has got to be people and their talent if you really want to, you know, have A.I. that produces social benefits.
But one of the things that we're also seeing that I alluded to, and I'm wondering how Western governments, you know, get around this or if it's sustainable. We believe in a free and open Internet, you know. We have charters of rights, privacy laws, all of this, and one of the things about A.I. is the more data you can get, the more data on your citizens and the better that data is, the better your A.I., you know, could- you know, the better the results that the A.I. may produce for you.
So, do you think it's sustainable, you know, in the U.S., Canada, in the West, that we have, I guess what is a slightly laissez-faire approach to the internet and how A.I. is used, or do you think we're going to have to intervene more?
Jon Lindsay: Well, I mean, I think we already have plenty of intervention going on. I mean, you know, data doesn't just produce itself. It produces itself because, you know, people in societies and firms want it that way, and if people start to get upset, then you get, you know, more privacy restrictions and comprehensive regulation like you have in Europe, and now, you know, for the first time ever, we're seeing that the NSA in the United States is going to have to have a FOIA-like process where it will review European complaints about surveillance, right? I mean, this has never happened.
You've had an American intelligence agency that's now going to be shaped by foreign laws that Congress hasn't ratified but that's the price of making sure that Facebook and Google get to do business in Europe.
So, you know, I think that this is a really, really live debate that we're going to continue to see, you know, and it's not the case that simply more information makes A.I. better. I mean, you know, I think the days of information wants to be free and we'll all get more, you know, educated and smart on the internet, like those are long gone. We see a lot of noise and it's only going to be in these particular applications where you can say, okay, this is the data that we have on a problem that we can understand, we understand where the data's coming from.
And now, right, okay, take Uber, Lyft, right? There are several kinds of data. There's all the data that goes into making live traffic maps, there's all the data that goes into ridership patterns, and there's kind of this massive consensus amongst people who build maps, use maps, and use roadways, who, you know, want to get from point A to point B in an efficient way. Everybody can kind of understand what they want to do and what a solid performance of the A.I. supporting this is going to look like.
So, you end up getting, in this really well-institutionalized environment, a really nice niche kind of route predictor and price predictor set of A.I. applications to support that Uber application, you know. That's not a general purpose A.I. that you can just stick out somewhere else, nor would you necessarily want to dump a bunch of other extraneous, you know, information that wasn't totally relevant to that prediction. So, I think when you kind of think about, like, needing more data, it's data about what- it's never going to be unbiased. It's going to be a bias that's useful for the kind of application that, you know, that firm or that society has decided that it wants to engage in, if that makes sense.
Janice Stein: You know, just to elaborate on what Jon just said, think back to the early days of computers, when there were punch cards. That's where the sector really started, and there was all kinds of quality control, right?
Because you knew that no matter what, there was error, and in order to get reliable outputs, you had to control the quality of the inputs. I think that's where we're going with A.I. too, and that's really the import of your remarks, Jon: it's not about hoovering up massive amounts of data about your citizens, because you're going to get a lot of garbage, and you know you're going to get a lot of garbage, so it's not going to be that useful to you. It's going to become more refined.
You have to have a more precise definition of what you want to do, Martin, and then there's going to be a big investment in the quality of the data that goes in because you're not going to get very smart A.I. if you don't have really- if you don't have quality control around the data that's going in. So, you know, that project that I talked about that hoovered up this massive amount of data from the web, produced garbage, and that's what they found out, well, good.
So, there's a lot of hype around mass surveillance, frankly. It's really wasteful and expensive and doesn't give you the kind of results you want.
There was another gem in Jon's remarks there that's worth thinking about for a minute which is the role of European regulation of data standards, and particularly with respect to privacy.
What's really interesting is that there are no big European companies. They're not in the top 20, or rather there are two, and they're way down at 18 and 19. The rest are all Chinese and American, which really tells you something about the Europe of the last 20 years. There's Nokia, there's Ericsson, but, boy, it's a small number, and totally disproportionately small given the European industrial marketplace. It's a really bad performance. Now, the French are just beginning, and it's very early days, where the French may begin to agglomerate enough.
So, how is it that Europe is able to export its data standards and Facebook and Google comply? Well, they comply because it's just more efficient to have one standard and to conform to that one standard, and you treat the whole world as a global market now.
So, that's what gives Europe the power to regulate even though it has no horses in the race, right? How long is that true? I think that's the bigger question.
Are we pretty well at the end of a single internet? Are we on the verge of a divided internet, a fragmented internet where the firewalls grow and the spaces shrink? And when the spaces shrink, does Europe lose some of that leverage to set standards in a much smaller internet than the one that we have now? That's speculation about where we're going in the future, but it's something that some of us think about now.
Martin Green: That's a perfect segue to sort of a third part here, which is- I would mention that right now, half the world's leading A.I. experts work for American companies, and the U.S. attracts the majority of high-quality talent.
[Text appears on the screen "AI and Machine Learning in Foreign Intelligence" "Policy and regulations."]
[Text in French appears on the screen: « L'intelligence artificielle et l'apprentissage automatique en renseignement étranger » « Règlements et politiques ».]
Canada is no slouch in this regard. We ranked fourth globally in attracting high quality talent in 2017. So, we're a strong niche player in this game, just so folks know.
One of the questions about this is the impact of A.I. technology on geopolitics. We've seen a lot of debate recently that the U.S. is very, you know, upset with the Chinese for investing in reverse engineering and stealing U.S. patents and technology, the same here in Canada, and there has been a lot of talk about bifurcating, decoupling, these extraordinarily complex supply chain systems, but I was wondering if I could ask both of you to offer some thoughts with respect to the geopolitics.
A lot of folks are talking about, you know, there's three centres of technology right now. One is China, two is the U.S., and three is Europe, as, you know, the major players but I'd love to hear your thoughts on where this is going geopolitically.
Janice Stein: Jon?
Jon Lindsay: (laughs)
Martin Green: Just in a sentence or two.
Jon Lindsay: Just a sentence or two. I'm trying to, you know, decouple the geopolitical question from the A.I. question. Obviously, this is not a new Cold War that we're entering; this is something totally different, right? This is not nuclear superpower bipolarity. There are multiple centres.
You've got simultaneous competition on multiple dimensions with an unprecedented degree of integration and even if there will be a little bit of decoupling in some areas, like, that integration is going to continue to be the theme and we'll watch it ebb and flow in a couple of different areas, and so how do you compete while you're cooperating or how do you cooperate while you're competing, right? Like, those- that's going to be the overarching theme that we're going to continue to try and deal with and I think it puts a huge tension on the A.I. question because on the one hand, you're like, oh my gosh, China or whoever, you know, is going to have the A.I. thing that's going to enhance their military power and allow them to overcome a lot of their deficits. Well, that's kind of a classic let's look at the power of individual countries as they, you know, impact others.
But if what makes A.I. really useful is access to a lot of this data that is shared to, you know, a high degree and, you know, involves kind of more collective problems, then, you know, you need some degree of integration to even make that, you know, data possible and available.
So, while we have these important kind of geopolitical trends, don't expect any kind of sharp rupture because you're still going to be stuck in this interesting kind of superposition between cooperation and competition, and I think A.I. just really, really- you know, it's almost the poster child for that because it works- when it works well, it works because there is an institutional structure for it, right? I gave the Uber, you know, self-driving example, right? Like, that is kind of a stable economy where everybody kind of agrees this is generally something we, you know, want to have happen.
And the military situations are the exact opposite, right? The data is terrible. The judgments are controversial, right? Really, really hard to find things for A.I. to do other than, like, really, really niche, you know, applications.
And so, if we're looking at emerging multipolarity, so multiple centres in a heavily interdependent world, you kind of have both of those things going on, and so I think it's going to make A.I. very attractive but also really, really hard to operationalize.
Janice Stein: I agree with everything Jon said. I'm just going to make a really provocative comment now about one tiny piece of your question, Martin, which is that when you opened up the question, you said there's a history of thievery of intellectual property where China is concerned, and that's certainly been a dominant trope, and we in Canada have passed regulations in order to deal with that problem.
I think that is probably going to prove to be one of our biggest own goals, and why is that? Because the story is that China is behind and is sending its graduate students and its faculty to the West to steal intellectual property. And you don't even have to steal it; you can just do it by being present, right, because you walk out of the lab and the ideas are in your head.
That story is old and outdated, and, as Jon just said, we should be sending our faculty and our graduate students to China, to its best universities, to learn, so that we can catch up and be as good in some very important areas where China is now in the lead.
So, this kind of visceral reflex, keep people out, might have been a story that would've worked 20 years ago. It is a self-defeating strategy for a country like Canada to take into the 2020s, that is for sure. I know there is almost nobody in the Canadian government who agrees with me on this but if we say it often enough and loud enough, it may begin to make some sense to people who actually have deep knowledge of what China is doing in some areas.
And that's where Jon is so right, because this is such a differentiated story and you have to know some areas of A.I. and some areas of content where China has made both massive investments and really significant advances.
So, we have to let go of the story that's 20 years past its best before date, is all I can say.
Martin Green: I certainly agree with the part about having more Canadian students study in China, you know. I think, as I recall the numbers about five years ago, there were 150 some odd thousand Chinese students studying here, a lot in post-grad. I think we had 1,300 or 1,500 studying in China, you know. So, if you do want to work with and, you know, profit from the biggest economic centre of gravity, it probably makes sense that we know more about it.
As for the rest of the trope, we will discuss that at some other time, because there are a lot of things going on and we have put some safeguards in place. We're at the juncture where we get to go to 'Q's and 'A's from the audience, and a number have popped up. For one of them, I'm going to use, you know, my perch as moderator, because it's something I've wondered about myself, and my analysts are always asking.
In terms of A.I.- and we have, you know, a growing intelligence analyst community at the federal level, foreign service officers, a number of whom collect foreign intelligence. As we're moving towards perhaps a more technology-driven future which includes A.I., what would you counsel young public servants to do with respect to preparing themselves for a newer world?
Jon Lindsay: Read about the older world. I mean, I think the more that you can learn about history and human nature and what states and state leaders actually want, the better off you're going to be, because again, you know, A.I. will help you identify: is that a tank? Is that a ship? What do the migration and refugee patterns look like? All of that's really, really useful to inform your assessment.
But when it really comes down to what that dictator is going to settle for, or when they're going to negotiate, right? That's going to come down to these basic questions of fear, wealth, status, and all the Thucydidean concepts, and those are issues of value, judgment, goals, objectives, stakes. I think the more that we automate what human beings do, the more the things the machines don't do, these moral and intellectual qualities, become really important. So the comparative advantage of analysts is going to be understanding the nature of human beings and human societies.
Janice Stein: I think that's absolutely right. I might add just one more, because let's put A.I. in the queue of a long wave of technologies that have come on board and improved our capacities to understand and to analyze problems. When we use these technologies, there's always a risk of blind mysticism. I can remember, going back a very long time ago to my graduate school days, when I learned a new statistical technique. Well, it was magic, right? It just produced results, and there's a tendency to believe the result because there is this magic happening. Well, technology is never magic, and I think Jon and I, and you, Martin, have said in entirely different ways this afternoon that what's really important is to understand the quality of the data that is going into any technology whose results you use, and just to take that step back as an analyst and ask those hard questions. What kind of data, right? What kind of data informed this analysis? What did you do to the data? You don't need the capacity to do it yourself, because, as you know, we're never going to be as skilled as the people who do it full time, but you do need the capacity to ask the hard questions about quality and to bring skepticism, always bring skepticism, to the result of any analysis that you don't understand. You know, it makes me think just for a minute of derivatives, and a CEO who said, "No, no, I'm not doing that in my bank." They asked him why, and he said, "Because I don't understand it, and I'm not going to sign on to anything I don't understand," and he was one of the few whose bank survived 2007 to 2009 without very serious consequences.
So, bring your wits to the game, bring your skepticism, bring your own intelligence, and ask hard questions about the quality of the work that you're doing, and that never changes, no matter what the technology is.
Jon Lindsay: I think maybe if I could add one more practical piece of advice and this is something you could easily implement, I think if you're interested in A.I., come down out of the stratosphere from the big kind of debates about A.I. competitiveness and go look at one particular A.I. system in use and just kind of look at the difficulties that people encounter.
So, I have a neighbour who works in computer vision. He's got this company, right, and he's trying to put together a system that will just sit there and stare at beer cans going by on the production line and identify when one of the beer cans has been crushed, because you want to pull that off the line. Right now, humans will come in and spot-check the beer cans, and, you know, you miss most of them, and so a bunch of them end up going through. So this seems like a perfect A.I. application, super stable: all the beer cans are standardized, it's a mass-production line, and all it has to do is look and go 'bing'. And you would not believe how hard this problem is.
Beer cans that look the same are not the same. They're rotated slightly differently, they've got beads of sweat on them, the lighting gets all funky, and other machines are suddenly glinting all over the place. The amount of tweaking it takes to solve the easiest possible computer vision problem is, I think, just useful to understand. Okay, now you want to scale this up to doing a national intelligence estimate? Give me a break, right? So, immerse yourself at the lowest level to understand how a couple of these systems work, and then go back up with that knowledge and start to think: okay, how is China really going to implement this? How is it really going to work? I think it will be a revelation.
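[Editor's note: The fragility Jon describes is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not his neighbour's actual system: a naive detector that flags a "crushed" can by comparing pixel brightness against a fixed reference template, and which breaks the moment lighting changes, exactly the failure mode he points to.]

```python
# Toy illustration of why "easy" computer vision is hard: a naive
# crushed-can detector that compares a frame against a fixed template.
# Images are simplified to flat lists of pixel brightness values (0-255).

REFERENCE = [100, 100, 100, 100, 100, 100]  # an intact can, ideal lighting

def looks_crushed(frame, threshold=60):
    """Flag the can if the total pixel difference from the template is large."""
    diff = sum(abs(a - b) for a, b in zip(frame, REFERENCE))
    return diff > threshold

# An actually crushed can: the dent shows up as dark pixels.
crushed = [100, 100, 20, 25, 100, 100]
print(looks_crushed(crushed))   # True - correctly flagged

# An intact can under a glint from a neighbouring machine: every pixel
# brightens by 15, and the naive detector flags it as crushed too.
glinting = [p + 15 for p in REFERENCE]
print(looks_crushed(glinting))  # True - a false positive from lighting alone
```

[Real systems have to normalize for lighting, rotation, and scale before any comparison, which is the "amount of tweaking" at issue even in this simplest of settings.]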
Martin Green: I think that's great advice. We have a number of questions in from the audience, and I thought I would start with this one: do the panelists think that decision makers don't trust A.I. as much as traditional intelligence collection? (Implying the rule is to always trust your instruments over your judgment.)
Jon Lindsay: I love this question, because in all of the debates over lethal autonomous weapons systems, the concern is, oh my God, you're going to give Terminator the authority to start killing human beings, this is terrible, we need to have a human being in the loop. And sometimes the question is, well, why do you trust Sergeant Schmuckatelli more than the machine, right? You can end up with Srebrenica or Mỹ Lai or any of these terrible human-made atrocities, where the machine would probably say, no, this is obviously illegal, this is not a good idea, we're not going to do this, this is not a situation I want to be in. So, there may be situations where turning things over to the machines gives you more predictable results. But we don't want to do that, because we have this more romantic notion of what humans could bring to it.
So, I love the spirit of the question. But again, I think it comes down to the specific thing that you're trying to solve. Instruments on an airplane work because we understand the environment they're in, the physical principles on which they're operating, and what they're actually sensing, right? How far above the ground you are, which way the wind's blowing. That's great: you've got a fairly stable problem and a solution that is super harmonized with it. Most of the intel problems you're looking at are not that at all. They are messy, noisy, difficult problems, and you have made all of these biased assumptions about what you're looking at, so you need somebody who is smart about both the problem and the solution to mitigate that gap.
Janice Stein: And let's push this analogy one step further. How many passengers would be okay if it were announced, buckle your seat belts, an A.I. is flying you today from Toronto to Atlanta, and there's no pilot aboard this plane? They would not be happy at all. But what do the data tell us? Ninety-seven point something percent of crashes are human error, not machine error. It's when pilots actually take the controls that we get into a lot of trouble.
Jon Lindsay: But what about Captain Sully?
Janice Stein: I'm sorry?
Jon Lindsay: Human romanticism in this space is huge.
Janice Stein: I know, and the reason, as Jon says, it works so well is because we have a stable system and humans mess it up. So there is no general rule here, right? You've got to fit the problem you're trying to solve to the environment you're in, and look at the relative advantages of a machine versus a human, because both make errors, and you're just trying to get a good handle on who's more likely to make what kind of error in this environment.
[Text appears on the screen "AI and Machine Learning in Foreign Intelligence" "Policy and regulations."]
[Text in French appears on the screen: L'intelligence artificielle et l'apprentissage automatique en renseignement étranger » « Règlements et politiques ». ]
Martin Green: Another question from the audience. If western governments are limited in the extent to which they can gather and share information, but leading A.I. companies, for example Google, are not, is the future of intelligence production in the hands of the private sector?
Janice Stein: Look at (inaudible) satellites, right? Jon and I eat and sleep this stuff, and we couldn't see what you were seeing five years ago, but we can see a chunk of what you're seeing now, because we have private satellite companies that are sending out images in real time for us to look at. That's a different world, right? You're not sharing it with us voluntarily; a private sector company is doing it, and we're not even paying them.
Jon Lindsay: That ties into another question sitting in the queue, on open sources in general, not just companies: are we able to learn things outside of states? This is not an A.I. story; this is a data and information story. One aspect of the war in Ukraine that is so fascinating is the degree of specificity available in the open source world. Think about what's available to you in open sources: it is far more detailed than the stuff I was seeing in Kosovo 20 years ago, with all of the sources behind the green door. That is amazing, right? The public is now able to track events. I would think there are probably many places where open source intelligence is providing better information about what's going on on the ground than the Russians themselves have, and I'm sure the Russians are using that information as well. We know that the Iranians, after Stuxnet, benefited tremendously from all the open source analysis that was happening. So, there are lots of interesting ways in which that's going on.

But when we see companies or open source movements replacing what the intelligence community used to do many decades ago, that doesn't necessarily tell us about what you guys are doing now. Your value added is always bringing some kind of private information advantage to bear against that additional context, and hopefully you're bringing in enough of that public information that you have the context and are not getting distracted by secrets. Spies love secrets, but we need to be focused more on the whole picture. So, I think the overwhelming theme of complementarity comes to the fore again here. Yes, lots of substitution happening, lots of cool stuff going on, but there's still a lot we don't know.
The outbreak of wars is the greatest piece of evidence that there are things we don't know: if we knew everything that we needed to know, there wouldn't be war in the first place. So you still have to have these ongoing, hard analytical processes.
Martin Green: For me, this is just an endlessly fascinating question. When you couple that with A.I. and some of the technologies available in the private sector, I think we're at a really nascent stage in terms of governments working with the private sector, and I would include academia in that. I think we have to do that. The other part that you alluded to is open source. There are cultural barriers within the intelligence community, a sense that if it's open source, it's somehow a lesser source, and I've never really understood that, because they clearly go hand in hand, and it's one way to confirm things. If you do have the more covert or exquisite intel, it quite often helps one move from "something is likely" to "something is very likely" when you can confirm it through OSINT sources. So it's a potentially huge discussion. Sorry.
Janice Stein: It's also very interesting, Martin, because you started us off this afternoon by saying the Biden administration did an unprecedented thing: it shared intelligence. That's true. It did that in order to shape the information space. But one of the really interesting things was that it was able to do that because there was so much open-source intelligence, battlefield intelligence. There was so much available on Russian troop deployments up against the Ukrainian frontier. The risk that it was going to betray, or risk betraying, a source was much less, because so much of what was released would be confirmed by open source anyway. Not all of it could be, but a lot of the other stuff could be, and that was very helpful to them.
Martin Green: Well, one of the things we're seeing too, in terms of bad actors and threats, is that a lot of the targets are now private sector companies and individuals who may not have much to do with government, and I think we've got to have a hard look at the obligation to keep people informed. The more open and transparent you are, the better; maybe that's pollyannaish, but I think it actually gives you the high ground in the long run. So, several great questions here. As the world moves towards a world of unique, erratic autocrats who are sole decision makers for their countries, can A.I. and models really work? Does psychological analysis fit into A.I. and models?
Jon Lindsay: Professor Stein, that's all you.
Janice Stein: I will just answer that question and then I'm going to have to excuse myself, Martin, because I teach a class at the university, and I need to give myself just enough time to log off and log on. Psychological models come in many varieties. We have a rich literature on general patterns of decision making that are predictable; they're not random, they're predictable. But you can have very strong predictive value when you're capturing what two-thirds of the population do, and that doesn't tell you whether the person you're really interested in studying is among those two-thirds or in the other one-third. That is an illustration of the more general problem we all have: general findings don't give you point predictions.
So, when you want to know what Vladimir Putin is going to do, there are no general models that really help you answer that with precision. You're fitting together so many different pieces of the jigsaw puzzle and drawing on multiple sources, and that speaks to the point you made earlier, Jon: that's where human analysts are, and will continue to be, absolutely invaluable, because it takes an understanding of the history, the culture, the context. If people did not read the article that he published last summer, that 7,000-word article, somebody would have had to. Feed it into a machine and it would get lost, because it would be only 7,000 data points in a 10-million data point collection. It wouldn't get the weight that it deserved. So, when we're dealing with point prediction, which is often what we're dealing with in our field of international security, I think there's a lot of job security for human analysts, unlike radiologists, for instance, whose job security is much more limited. A.I. does a really great job reading mammograms.
Jon Lindsay: And I wonder if I can jump in on this, just to tie together some of the themes we've talked about, because Ukraine is on all our minds, and the intelligence community did a fantastic job assessing what the buildup looked like and the operational intentions to launch it. But there were two categories where we got things radically wrong. One was the actual balance of power between these two forces. We radically overestimated the Russians, because we didn't understand how bad their doctrine was, how terrible their experience was, and how they really weren't able to operate all of this kit that they had, let alone the condition it was in. And number two, we didn't really appreciate that the Ukrainians had gone to school for the last seven years: they have learned how to shoot, move, and communicate with a high level of skill, and they've got incredible morale and courage in some really difficult situations. So, again, that magical human factor X on both sides was really, really different from the material balance of power, and that meant huge disagreement about the outcome of this war, which we're now in the process of measuring the hard way.

The other category, and Janice just touched on this, was: what was this war about, right? As security people, we tend to think about security as this rational process: do you need a buffer, are you worried about escalation, are you worried about NATO? And I think the more we learn, the more we say, that's not really what this war was about. This is a war about Russian identity and Russian prestige and status in the international system as a decaying, alcoholic power. That's a different set of analytical concepts that are now becoming more and more salient, because how else do you explain Russian willingness to completely break their geopolitical future on this particular conflict?
So, again, those are deeply, deeply human and strategic questions and that's the future of A.I.
Martin Green: Listen, this has been extraordinary for me and, I hope, for the people listening. Can I ask one final thing from each of the professors? Do a little self-advertisement: where should folks who are listening go if they want to learn more about your research?
Janice Stein: The easiest way is just to find me on the Munk School website. There is an email address there, and I would be delighted to hear from all of you. And I'm going to say goodbye, because my class is in a minute. So, Martin, thank you so much, and Jon, who is a valued friend and colleague, thank you.
Jon Lindsay: Bless you. And the Jon and Janice Show will be touring again in Toronto on Friday, if you happen to be around: 2:00 at the Munk School.
[Text appears on the screen "Upcoming Event!" "Mainstreaming Novel Policy Instruments" "April 26, 2022" "Visit Canada.ca/School"]
[Text in French appears on the screen: « Événement à venir! » « Répandre le recours aux nouveaux instruments de politique » « 26 avril 2022 » « Visitez le Canada.ca/Ecole». ]
Martin Green: Okay, so I'm going to wrap it up here. Thank you, Jon, and thank you, Janice, who's off to teach a lucky class. These have been invaluable insights. I would like to point out that this series has covered a lot of topics, including the basics of A.I., and there are several more events coming up. I don't have my list in front of me, but you should go to the Canada School website and look those up. And unless the organizers walking me through this think I have missed something big, I'll wish everybody a great day.
Jon Lindsay: Great. Thank you so much, Martin. Appreciate your hosting and thanks for inviting me. This was a lot of fun.
Martin Green: I look forward to meeting or talking to you again. Thank you.
[The video chat fades to CSPS logo.]
[The Government of Canada logo appears and fades to black.]