Transcript: Artificial Intelligence Is Here Series: How AI Is Transforming the Economy
[The CSPS logo appears on screen.]
[Erica Vezeau appears in a video chat panel.]
Erica Vezeau, Canada School of Public Service: Hello, everyone. Welcome to the Canada School of Public Service. My name is Erica Vezeau, and I am the Director General of the Digital Academy here at the school. I'm really pleased to be with you today and welcome all of you who chose to connect to this event. Before proceeding further, I would like to acknowledge that since I am broadcasting from Gatineau, I am in the traditional unceded territory of the Anishinaabe people. While participating in this virtual event, let us recognize that we all work in different places and that therefore we each work in a different traditional indigenous territory. I'd invite you to please take a moment to reflect on this and acknowledge it.
Today's event is the fourth instalment of our Artificial Intelligence Is Here series. The school is offering this event series in partnership with the Schwartz Reisman Institute for Technology and Society, a research and solutions hub based at the University of Toronto that is dedicated to ensuring that technologies like AI are safe, responsible, and harnessed for good. So far in this series, we've provided participants with an overview of the AI landscape, including how AI is likely to transform decision-making, issues around citizen consent, and when and how AI could be used in a government context.
Today, we will turn our attention to the topic of how AI is transforming the economy. The format of today's event will be as follows. First, we will watch a pre-recorded lecture delivered by Professor Avi Goldfarb, Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing at the Rotman School of Management at the University of Toronto. Following the lecture, Avi will join the audience live along with our guest panelist, Alex Scott, Group Business Developer for Borealis AI, the research and development arm of the Royal Bank of Canada. Please note that our panel today was also supposed to include Pamela Snively, the Chief Data and Trust Officer for TELUS Communications, but unfortunately, she was unable to attend today due to illness. We wish her a speedy recovery. After introducing themselves, Avi and Alex will engage in a conversation that builds on some of the themes and topics addressed in the video lecture.
But before we begin, here are a few housekeeping items to mention, as we have a great event planned for you and we want you to have the best possible experience. Firstly, to optimize your viewing, we recommend you disconnect from your VPN or use a personal device to watch the session if possible. If you are experiencing technical issues, we recommend that you relaunch the webcast link that was sent to your email. Simultaneous translation is available for participants joining us on the webcast. For those who wish to access simultaneous French interpretation, please follow the instructions provided in the reminder email, which includes a conference number that will allow you to listen to the event in the language of your choice. Audience members are invited to submit questions throughout the event using the Collaborate video interface on which you're viewing this event. To do so, please go to the top right corner of your screen and click the raise hand button and enter your question. The inbox will be monitored throughout the event and questions will be taken nearer to the end. Now without further delay, let's start the video on how AI is transforming the economy.
[Erica fades away to a title screen that reads "Artificial Intelligence is Here Series.”]
["How is AI transforming the economy?” Avi Goldfarb stands in front of a blue background, as he speaks, illustrative images and headlines appear behind him.]
Avi Goldfarb: Hi, I'm Avi Goldfarb. I'm the Rotman Chair in Artificial Intelligence and Healthcare and Professor of Marketing at the Rotman School of Management at the University of Toronto. And I'm going to talk to you about the simple economics of artificial intelligence, based on my book, Prediction Machines, with Ajay Agrawal and Joshua Gans. So, Ajay, Joshua, and I are professors at the University of Toronto. Proud to be there; it's a good university. But about 20 years ago, something special happened at the university. And that was the discovery or invention, depending on how you think about it, of a number of technologies that have driven the current excitement around artificial intelligence. Coming through the University of Toronto between 15 and 20 years ago were the future heads of AI research at Facebook, at Apple, at OpenAI, and elsewhere. And very recently, two of those people who were at the University of Toronto 15 years ago won the Turing Award, which you can think of as the Nobel Prize of computer science. One of them, Geoffrey Hinton, is still here.
Now, the way the university usually works is I sit in the Management Department, in the business school, I open up the virtual or real school newspaper, see, "Oh wow, look at the exciting stuff going on in the computer science department," then leave it at that and go about my daily business. But Ajay, Joshua, and I run an organization called the Creative Destruction Lab, which is a program for helping science-based startups scale. In our very first year in the Creative Destruction Lab, roughly ten years ago, we saw a company that called itself an artificial intelligence company for discovering new drugs, pharmaceuticals. And that was interesting, a little bit out there. The next year we had a couple more companies calling themselves artificial intelligence companies in different fields. And a year later, and two years later, we had this flood of companies, coming largely out of the computer science department at the University of Toronto and a handful of other schools in Southern Ontario, that called themselves artificial intelligence companies in all sorts of different fields. It was at that point that we realized this is a thing. It's exciting, and it was worth our time to try and get our heads around it using our previous hat as scholars in the economics of technology.
And so, what this talk is about, and what the book's about, and what our research has been about since around 2014, is trying to understand this change in artificial intelligence and what it means for businesses and other organizations. And the thing that you see over and over again: you know, this started way back in 2015, and the hype kept building, and here we are in 2021 and we still see all this hype and excitement around AI. AI adoption is expected to surge. Robots are on the rise. Wait, robots are on the rise, but what does that mean for us humans? We should expect record job losses. Professor Daniel Kahneman, a psychologist who won the Nobel Prize in Economics, said, you know, "AI is clearly gonna be able to do everything we can do. What's left for us humans?"
Now, I think it's very important to recognize that there is a particular technology we're talking about when we're talking about artificial intelligence today. It is not like the robots from, say, Star Wars, the droids, who can do just about everything we can do and do a couple of things better than humans can. So, for example, C-3PO is a droid in Star Wars, and C-3PO can do two things better than humans. The first one is something you see in the movie, which is that C-3PO can translate between languages better than any human possibly could. The other is less obvious, but it actually makes the droids in Star Wars particularly useful relative to the rest of science fiction, which is that C-3PO listens to humans; the droids from Star Wars tend to listen to what their humans ask them to do. The rest of science fiction, of course, is filled with robots who can do just about everything we can do, do several things better than we can, and don't listen to us, and that's how you get Skynet from The Terminator, or the robots from The Matrix, or HAL from 2001: A Space Odyssey.
I don't think this idea of an artificial general intelligence is crazy. I used to think it was, but then I heard a handful of debates and discussions between Professor Kahneman and Yann LeCun and some others that convinced me that this is possible someday. But it's important to recognize that artificial general intelligence has been 20 to 50 years away since the very first AI conference at Dartmouth College in 1956, and it continues to be 20 to 50 years away. This is not the technology we're talking about today. The reason we have the hype about artificial intelligence today is that a very particular technology, a branch of computer science and of computational statistics called machine learning, has gotten much better.
And so, when you're trying to understand the impact of AI and how it's going to affect your work and your organization, you need to recognize that AI is prediction technology. What do we mean by prediction? It's using information you have to generate information you don't have. That's what's changed. Prediction has gotten better, faster, cheaper. In order to understand why it's a big deal that prediction technology, or any technology, has gotten better, faster, cheaper, it's worth jumping back a generation to 1995.
I was an undergrad in 1995, and there was an exciting new technology called the Internet. And there was a lot of hype about it. It was the year that the last aspects of the public Internet, the NSFNET, were privatized. It was the year that Bill Gates wrote his "Internet Tidal Wave" email saying to Microsoft, "We missed this technology; the internet is the future of computing." And it was the year that Netscape had their IPO and was valued at over a billion dollars without a nickel in profit. The hype kept building through '96, '97, '98, till people stopped talking about the internet as if it were a new technology. They started to describe this thing called a new economy. This is when I was starting grad school, and my professors very aggressively pointed out: no, no, no, it's not a new economy. You still need to take your economics courses, and you still need to buy our economics textbooks. What they said, though, is that we can understand the impact of this technological change by understanding what's different, what has gotten cheaper. And in the context of the Internet, we can see that search had gotten cheaper, communication had gotten cheaper, and copying had gotten cheaper. And once you understand that, you can map out a whole bunch of consequences. For example, cheap copying. What did that mean? It meant that protecting copyright was gonna be a big deal, and it was going to be trouble for the music industry in various ways, at least for the music publishers.
At the same time, cheap copying meant that we're going to start paying a lot more attention to privacy than we did before. Because once anything you say to anyone anywhere can be instantly broadcast to everyone everywhere around the world, you're going to be a little bit more careful about what you say. So, it's not a new economy but cheap copying transformed the way we can think about the opportunities from digitization from the internet.
To jump back another generation and really get your head around this idea that cheap creates all sorts of new opportunities, think about your computer, about the semiconductor. What does the computer really do? Well, it feels like a computer does all sorts of things, but really your computer only does one thing. It just does it really, really, really well. Your computer does arithmetic. And we can represent the computer as a drop in the cost of arithmetic. We're economists, so thinking about things in costs comes naturally to us. But once you start thinking about things in costs, you can recognize, well, when something is cheap, we do it more. Demand curves slope downward. That's economics 101. When arithmetic got cheap, we started to do more arithmetic. At first, we used machine arithmetic on good old-fashioned arithmetic problems. For example, in World War II and shortly after, we had cannons; they shot cannonballs. And it's a really difficult arithmetic problem to figure out where those cannonballs are going to land. The movie Hidden Figures was about teams of humans, whose job title was "computer," who figured out those trajectories. Then machine arithmetic came along, and we no longer have human computers doing the arithmetic and calculating the artillery tables. But we still have plenty of humans in those jobs, because the people who could do the arithmetic were also well-suited to interpret the arithmetic.
Similarly, an accountant. If you asked an accountant from the 1940s, '50s, and well into the '60s and '70s what they spent their time doing, they spent their time adding. My fellow professors in the accounting department at the business school here at the University of Toronto remember a time when their professors would ask them to open up the phone book, the white pages, to, say, page 962. And they'd say, okay, here's a column of tens, if not hundreds, of numbers, and they'd say, "Add up those numbers." Why? Well, partly because we professors are like that. But partly there's more to it. The students put up with it. The students put up with it because they knew they would be spending their lives adding up columns of numbers. And so, machine arithmetic came along, and we no longer have accountants who spend their time adding up columns of numbers. Still, there are plenty of accountants around, again because the people who were good at the arithmetic also happened to be well-positioned to leverage that arithmetic for a company's strategy and tax policy. And as arithmetic got cheaper, we started to realize there are all sorts of other applications for machine arithmetic. It turns out games are arithmetic, mail can be reframed as arithmetic, music too. Pictures we used to solve as a chemistry problem; Kodak was a chemical company. But with cheap arithmetic, we started to reframe pictures, images, as an arithmetic problem.
[A graphic shows a complex series of connections for a moment before returning to Avi.]
So that gets us to today's technology. This is a representation of a convolutional neural net, one of the technologies underlying the current excitement in AI. You should think about this as cheap prediction. Same graph, different axis. Artificial intelligence, this transformation we've seen over the past few years, can be represented as a drop in the cost of prediction. And just like with arithmetic, the first applications are good, old-fashioned prediction problems. Right? You walk into a bank, and you want a loan. The bank has to predict whether you're going to pay back that loan. Increasingly, banks are using machine-learning tools to make those predictions. The insurance industry is also predicting whether you're going to make a claim or not. They used to do it with other, old-fashioned statistics, and increasingly they're using these machine-learning tools.
But as prediction gets cheaper, we're starting to realize a whole bunch of other things can be re-framed as prediction problems that we didn't think of as prediction problems, like medical diagnosis. Medical diagnosis is prediction. What does your doctor do? They take in data about your symptoms, and they fill in the missing information of the cause of those symptoms. That's prediction, and increasingly, researchers are applying machine learning tools to make better, faster, cheaper diagnoses. And image recognition is prediction. How do you recognize a familiar face? Your eyes take in light signals and fill in the missing information of a label and a context. That's prediction technology, and increasingly, machine learning tools do that as well as, if not much better than, humans.
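The loan and diagnosis examples are both the same operation: use cases you have seen to fill in the label you have not. Here is a minimal sketch of that idea as a nearest-neighbour predictor; the data and function names are made up for illustration, and this is not the machine-learning tooling banks or hospitals actually use:

```python
# Prediction as "using information you have to generate information
# you don't have": given past loan applicants with known outcomes,
# fill in the unknown outcome for a new applicant by looking at the
# most similar past example. All figures are invented for illustration.

def predict(known, new_point):
    """Return the label of the known example closest to new_point."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(known, key=lambda ex: dist(ex[0], new_point))
    return closest[1]

past_applicants = [
    ((80, 10), "repaid"),     # (income, debt) in thousands -> outcome
    ((75, 15), "repaid"),
    ((30, 40), "defaulted"),
    ((25, 35), "defaulted"),
]

print(predict(past_applicants, (70, 12)))  # -> repaid
print(predict(past_applicants, (28, 38)))  # -> defaulted
```

Modern machine-learning systems use far richer models than a single nearest neighbour, but the input-output shape is the same: known examples in, a predicted missing label out.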
So, up until this point, we are on day 1 of Econ 101. As something gets cheap, we do more of it. This is the idea that demand curves slope downward. When the price of something falls, we buy more of it. If coffee is cheap, we buy more coffee. The fears around AI are about substitutes. Yes, as coffee gets cheap, we buy more coffee. But we also buy less tea, as tea and coffee are substitutes. And so, to the extent that your job or your organization is focused on human prediction, as machine prediction gets better, faster, cheaper, your job and your organization are going to have to change.
That's kind of bleak. What's left? What's left are complements. When coffee gets cheap, yes, we buy less tea, but we buy more cream and sugar. And so, the question you should be thinking about for yourself and your organization is: what gets more valuable as prediction gets cheap? What are the cream and sugar? What are those complements to prediction? In order to understand that, you need to know what prediction is for. Prediction is valuable for one reason. It's valuable because it's an input into decision-making. And decision-making is a big deal. A prediction without a decision is useless. The core is to identify decisions and figure out how to insert your skills and your organization into those decisions. Because prediction is not decision-making.
What is decision-making? Well, a decision has a whole bunch of parts to it. It has the prediction here at the center, but it also has data, which allows you to make a better prediction. It has judgment, which you can think of as which predictions to make and what to do with those predictions once you have them. And it has actions and outcomes, which are what happens, what you can do with the combination of prediction and judgment.
Let's talk about judgment. There's this movie, I, Robot. It's a fine movie with a fantastic scene. Will Smith plays the protagonist in I, Robot, and he hates robots. You can kind of figure out where the movie's going to go. Why does he hate robots? There's this flashback scene. Will Smith and a little girl are driving, there's a car accident, and for whatever reason they're both sinking in a river, and it's pretty clear that both Will Smith and this little girl are about to drown. Then a robot comes along and saves him and not the girl, and that's why he hates robots. What's interesting is that because it's a robot, he could audit the machine. He could figure out why the robot saved him and not the girl. And what he found is the robot made a prediction. The robot predicted that he had a 45% chance of survival and that the girl only had an 11% chance of survival, and that's why the robot saved him and not the girl. But there's something missing there. That's the prediction. The judgment is to say, well, a 45% chance that Will Smith survives is better than an 11% chance that the girl survives. To the extent that he thinks the robot made the wrong decision, what he's saying is that the girl's life is worth more than four times his life. Judgment is the reward to a particular action in a particular situation. It's the payoffs that you get for making those decisions. And that, for now at least, is inherently human.
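The I, Robot scene can be written out as a tiny expected-value calculation: the machine supplies the predictions (the survival probabilities), and judgment supplies the payoffs (how much each outcome is worth). The probabilities below are the ones quoted in the scene; the value weights are illustrative, since they are exactly the human judgment the movie is about:

```python
# Prediction vs. judgment: the robot's choice is the option with the
# highest expected payoff, i.e. survival probability times the value
# placed on that life. The probabilities come from the movie scene;
# the "value" numbers are illustrative judgment calls, not data.

def best_action(options):
    """Pick the option with the highest expected payoff."""
    return max(options, key=lambda o: o["p_survival"] * o["value"])

# Judgment 1: both lives valued equally -> the robot's decision.
equal = [
    {"who": "Will Smith", "p_survival": 0.45, "value": 1.0},
    {"who": "girl",       "p_survival": 0.11, "value": 1.0},
]
print(best_action(equal)["who"])     # -> Will Smith

# Judgment 2: weight the girl's life more than 0.45/0.11 (about 4.1x)
# and the same predictions lead to the opposite decision.
weighted = [
    {"who": "Will Smith", "p_survival": 0.45, "value": 1.0},
    {"who": "girl",       "p_survival": 0.11, "value": 5.0},
]
print(best_action(weighted)["who"])  # -> girl
```

The point of the sketch is that changing the judgment flips the decision while the predictions stay identical, which is why judgment remains a human complement to machine prediction.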
How does this play out in an organization? How do you think through: "Okay, well, we have these predictions. We have prediction machines. We're going to insert machine prediction instead of human prediction. We're still going to need lots of humans to judge." How does that all play out? We can think about it this way: every organization has a workflow, which is a series of tasks and decisions within each job. And what you do is identify which tasks and which decisions are predictions. And, to the extent possible, you take the machine and replace the human. That leads to a more macro, policy-level pessimistic view, which is that, well, there's a whole bunch of things that machines can predict. And eventually, machines might even be able to predict what the human would do. And so that means that AIs are going to replace jobs, leaving little for us. There's a different pessimistic view that says AI is wonderful and influential, but as Bob Gordon of Northwestern University has pointed out, it's not as exciting as the technologies that diffused between 1870 and 1970. AI is great, information technology generally is wonderful, but it's not the flush toilet, and it's not antibiotics.
Now, Joel Mokyr is another Northwestern professor; he was my history professor a long time ago. And he said, "Well, the good news is that both of these pessimistic predictions can't be right. And the even better news is they can both be wrong. We can't both have AIs that massively increase productivity, leaving little for us to do, and AIs that don't matter." It's one or the other. But we could have AIs that both make our lives much better and give us time, and give us the ability to continue to do interesting and new work.
So, big policy question number 1: is this the end of jobs? Well, let me ask. You've seen the movie The Matrix. In The Matrix, every human has a job from the day they're born to the day they die. They're not good jobs. What are humans doing? They're the batteries, providing energy. "Will there be jobs?" is the wrong question. The major victory of the 20th century in labour was not that we had more work, but that we had less. We got to retire. We got weekends and eight-hour workdays. "Will there be jobs?", "Will there be work?" is the wrong question. The right question is: if AIs fulfill their purpose and massively improve productivity and generate wealth, how will that wealth be distributed? Will a small number of people gain, in which case, yes, maybe on balance we have more wealth, but we have massive inequality? Or do we think about ways to spread that wealth so that everybody can take advantage of the increase in productivity?
Now, moving on from the society-level trade-offs, where AI is going to increase productivity and the question is whether we all get to benefit or only the people with the right skills, and who own the machines, get to benefit, let's return to how AI increases productivity. What does it mean? What does it look like for a particular organization to change because of prediction? To answer that question, I'm gonna ask you another question, which is: if you wanted to look something up, say, 30 years ago, where do you imagine you would've gone? For those of you who weren't looking stuff up 30 years ago, try to think through what the previous generation would have done if they wanted to look something up. Well, they might've gone to an encyclopedia. And failing that, they would've gone to a beautiful big building that might look like this.
[An image of the Robarts Library appears behind Avi.]
This is Robarts Library, one of the largest, most imposing buildings on the University of Toronto campus. And if you walked into the Robarts Library 30 years ago, walked up to a librarian, and said, "What do you think the most exciting industries in the world are gonna be in 2020?", I bet you none of those 1991 or 1992 librarians would've said, "You know what? The industry of the future is library science." And yet, it turns out that if you think about what's arguably the most exciting company in the world today and what they're doing, they're a library science company. Google helps you look stuff up. That's what they do. And they do it better, and faster, and cheaper than used to happen in the library. Librarians might argue not as well. But it turns out that once accessing information, once looking stuff up, is cheap enough, there are all sorts of commercial opportunities in looking stuff up. We don't have to subsidize the library anymore; the place where you look stuff up can make plenty of money on all sorts of searches that maybe we wouldn't have subsidized before. It turns out that cheap looking stuff up can replace the library and the yellow pages and the classified ads, and do it much more efficiently, in ways where there's plenty of money to be made, plenty of revenue.
So how do you think about this in the context of your own organization and your own work? What does it mean for AI to transform the way you operate? We like to think of this as a dial, where you can turn up the dial on prediction. And at some point, you're gonna turn up that dial enough that you can change the way you operate. For example, here are Amazon's predictions about what you might want to buy. Now, Amazon has hundreds of millions of items in their catalogue. And as far as we can tell, they're right something like one in every 20 times. That's extraordinary. Think about that. Hundreds of millions of different items. They predict what you're gonna buy, and they're right something like five percent of the time. It's extraordinary, but it's actually not transformative to Amazon's business. What do I mean by that?
Amazon is in the same business they were in when they started back in the 1990s. In fact, they're in the same business that the Sears catalogue was in well over 100 years ago. Amazon is a catalogue company. They're an excellent catalogue company. But fundamentally, they're a catalogue. You go to their online catalogue, you tell them what you want. They send that information to their warehouse, and they ship it to your door. Just like Sears would've done a long time ago. Now, what would it mean for these recommendations, these predictions of what you're going to buy, to transform the business? Well, instead of five percent, let's say they were 20% or 50% or 70% accurate. At some point, Amazon can do something totally different. For example, maybe they don't need to wait for you to buy. If their predictions are good enough, it might be worth it for them to just ship it to you. And then you open a box at your door, take what you want, and leave the rest. And Amazon will be selling you so much more stuff, because they're getting it to you when you want it, that it's worth it for them to invest in the infrastructure to deal with the 10, 20, 30, 50 percent of stuff you don't want, and to deal with those returns. Now, let me be clear. We have no idea if Amazon is really going to do this, or if this is going to be feasible. But we do know they've thought about it.
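The "turn up the dial" intuition here can be made concrete with a back-of-the-envelope calculation: shipping before you buy pays off on average once prediction accuracy clears a break-even threshold set by the product margin and the cost of handling a return. The dollar figures below are invented for illustration and are not Amazon's numbers:

```python
# "Ship before you buy" as an expected-value calculation: with
# prediction accuracy p, each speculative shipment earns the margin
# with probability p and incurs a return cost with probability 1 - p.
# Margin and return-cost figures are made-up illustrative assumptions.

def expected_profit(p, margin, return_cost):
    """Average profit per speculatively shipped item."""
    return p * margin - (1 - p) * return_cost

def break_even_accuracy(margin, return_cost):
    """Accuracy above which speculative shipping pays off on average."""
    return return_cost / (margin + return_cost)

margin, return_cost = 10.0, 15.0
print(round(break_even_accuracy(margin, return_cost), 2))  # -> 0.6
print(expected_profit(0.05, margin, return_cost) > 0)      # today's ~5%: False
print(expected_profit(0.70, margin, return_cost) > 0)      # dial turned up: True
```

Under these assumptions, today's roughly five percent accuracy loses money on every speculative shipment, while 70% accuracy clears the 60% break-even point, which is the sense in which better prediction changes what business model is feasible.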
[The screen fills with a patent form.]
Here's a patent from 2013 for anticipatory package shipping. This idea of moving from shopping on the website in the catalogue and then shipping to your door, to shipping to your door and then shopping as you open the box, has been around for a long time.
So, whatever industry you're in, think about the predictions that happen now, and think about what you'd do differently as you turn up that dial. As machine prediction gets better in terms of diagnosis, the healthcare industry can change. Maybe you don't need all those years of training of primary care doctors, and the selection on your ability to write the MCAT and memorize things. Instead, maybe we need to train our doctors differently. And the whole primary care experience could change from someone we think of as a doctor to someone who has skills more like a psychologist or a social worker.
We spent a good chunk of 2020 and parts of 2021 inside, looking outside. Why? Because of a prediction problem: we didn't know who was infectious. And because we didn't know who was infectious, we had to treat everybody as if they were infectious. A better prediction of whether a particular individual was infectious would have allowed most of us to just go about our business as we had before, while those few people who were infectious stayed home. Lockdown was a direct consequence of bad prediction. And better, faster, cheaper prediction, in this case of infectiousness, could have transformed the way we lived at that time. So, whatever industry you're in, you can run through this experiment. Imagine we get better and better and better at predicting something fundamental to what you do. How does that allow you to do something different? Thank you.
[The title screen fades back in. Avi appears in the video chat with Alex Scott.]
Welcome, everybody. I'm Avi Goldfarb. I think you just saw a video from me. I'm here with Alex Scott. Alex is a Group Business Developer at Borealis AI. And he's gonna tell us his insights into how to think about the Canadian AI ecosystem and Borealis' role in it. I'm looking forward to what should be a lively half-hour discussion followed by lots of Q&A. So, Alex, I'm going to let you give a deeper introduction to your own expertise and what you're up to, and then tell us about what you do and your perspective. Alex, go ahead.
Alex Scott, Group Business Developer, Borealis AI: Thanks, Avi. I'm Alex Scott. I work at Borealis AI, which is the Royal Bank of Canada's artificial intelligence research institute. I've been at Borealis for just shy of two years now, and my role at Borealis is really about bridging the gap between machine learning, between artificial intelligence researchers, and the business itself. RBC is one of Canada's largest, oldest organizations. We have lots of data, lots of opportunities for machine learning. And a big part of being successful in this space is finding that amazing ML talent, finding really great analytics problems at the business, and marrying those two in a really effective way. So, I consider myself to be a bit of a translator between our business executives at RBC and our excellent AI researchers at Borealis.
My background is actually in both of those things. I did an undergrad in business many, many years ago and then did a master's in analytics and artificial intelligence a few years back. So, I really do try and marry those things together. And I started my career in analytics back when we just called it statistics, really, back when we were just being a little bit more data-driven wherever we could be. I spent a lot of time in management consulting, helping the public sector and other financial institutions up their data and analytics game, and slowly moved over to industry, where I've been able to help Borealis deliver some excellent projects across RBC. Just for a bit of background into what we do at RBC: we have projects ranging from work in capital markets, where we're trying to help our traders make better decisions on the fly, to work in our risk space, where we're trying to make better credit allocation decisions. And we do work that some of you may have encountered if you use the RBC mobile app, where we're trying to help consumers take charge of their own financial future: giving them insight into when to expect direct debits, when to expect upcoming bills, and how much to expect those to be. So, we're trying to give people a little bit more information about their financial life and their financial future. And we do that by using the excellent, massive amount of data we have about our customers here in Canada.
Avi Goldfarb: Well, thanks, Alex. That was a great introduction to thinking about Borealis' role. Before we dig into the Canadian landscape questions, I'm wondering if you can dig into one of the use cases. I understand there are some things you might not be able to say, but just so we have a sense of what it means to use AI in the context of a bank. So yes, we've seen it in our mobile apps, but what exactly have we seen, and what's going on behind the scenes to make it actually work?
Alex Scott: Yeah. I'll talk about the one whose front end you see in your mobile app, because it's one that many of you may be familiar with. We call it NOMI Forecast, and in the mobile app, if you're an RBC customer, you can go in and it will tell you our predictions for the next payments you'll have to make, whether that's a credit card bill, a mortgage payment, a utility bill, something like that. On the surface, it seems quite easy, because you maybe make a mortgage payment once a month or your utility bill comes in once a month. But there's actually an incredible amount of complexity that goes into this, and into thinking about all of the data we have available. So, let's think about an RBC customer like me, who has a credit card and a checking account with RBC. Most of my financial life goes through RBC in some way. And we're keeping track of all of those transactions, all the ins and outs.
In the background, we're pulling in all that information, which at RBC scale is in the terabytes per day. Think about the tens of thousands, hundreds of thousands of transactions that are coming in, and trying to match those up with an individual customer and what's coming up in the future. So, a mortgage payment might be a very easy one, right? They happen on the same day every month. But there's another layer of complexity here. What happens if your mortgage payment is going to fall on a weekend? What do we do? Now we have to start making some predictions around where we're going to shift that. Is it going to be pulled back to a Friday or pushed to a Monday? Are you going to be making early mortgage payments? Are you accelerating that mortgage payback? How do we make predictions about that? Well, we try to understand more about you as a customer, more about your spending patterns, more about your financial situation. Of course, if you're a really great RBC customer, you might have investments with us as well, and we can see how you're using those investment accounts. Pulling all of that data together in the background allows us to build a better picture of you today, and hopefully understand a little bit more about what you'll be doing tomorrow and next month. From a technology perspective, we have a very large stack of technology that includes very, very powerful computers, lots of data storage, and of course the actual machine learning algorithms that our developers create to make those predictions.
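The weekend-shift logic Alex describes can be sketched in a few lines. This is a hypothetical illustration, not RBC's actual system; the function name, the "same day next month" recurrence rule, and the shift convention are all invented assumptions:

```python
from datetime import date, timedelta

# Hypothetical sketch of the weekend-shift problem: given a payment's last
# due date, predict the next one, shifting weekend dates the way a payment
# system might. The naive "same day next month" recurrence is an assumption.
def next_due_date(last_due: date, shift: str = "forward") -> date:
    month = last_due.month % 12 + 1
    year = last_due.year + (1 if last_due.month == 12 else 0)
    predicted = date(year, month, last_due.day)
    if predicted.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        if shift == "forward":
            # push to the following Monday
            predicted += timedelta(days=7 - predicted.weekday())
        else:
            # pull back to the preceding Friday
            predicted -= timedelta(days=predicted.weekday() - 4)
    return predicted

# A mortgage due 2024-05-15 recurs on 2024-06-15, a Saturday,
# so the "forward" rule shifts it to Monday, 2024-06-17.
print(next_due_date(date(2024, 5, 15)))  # → 2024-06-17
```

A real model, as Alex notes, would have to learn from the customer's transaction history which shift rule and which recurrence pattern a given biller actually uses.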
Avi Goldfarb: Okay. And what kind of a team puts this together? So you have uh, there's a data team, there's a machine learning team. Uh, there's a product manager, I assume. How does that all fit, like, how many people were required to get NOMI up and running?
Alex Scott: Yeah. I always describe analytics projects and AI projects as a team sport, and you've hit on a couple of big ones there. From a Borealis perspective, there are always four pillars to a project. We have a research team, full of PhD data scientists, very, very bright folks in the field, many of them trained right here at home. We have an engineering team; the way I describe it, they make this real. Our research team builds the model and gets the predictions, but then the predictions have to go somewhere, and our engineering team is the one who figures out how to get this built into the mobile application. They figure out what data needs to be available to make those predictions, and they help us maintain that data and update it constantly. There's always a product manager on the team. This is the person who coordinates between research and engineering, making sure that as we're developing these very complex algorithms on the research side, the engineering team knows exactly what's going on, so they can build the technology to pull that in and make the data available. And then we have a business lead. That's my role. The business lead is the one who makes sure that we're solving the right problem for, in our case, our customers and RBC.
So, how do we make sure that RBC employees know what we're doing, why we're doing it, and that we're solving the right problem? Those four pillars are just on the Borealis side. We then have a team of almost equal size on the RBC side, helping us understand where the data is stored and how the data works. Is it nice, clean data, or is it actually kind of hard to work with, so that we need lots of metadata about it? There are engineering teams on the RBC side that help us sync everything up to make sure we can get the data from them to us. So all in, how many people did it take to make this real? Probably upwards of 40 or 50 people over the course of the year and a half it took to develop something like this. These are large-scale projects to make real. I'm not saying that all machine learning projects need 50 people to be successful; there are much smaller ones where a few developers and a few engineers can do something really great. But when you want to roll something out to 14 million customers on a mobile app, it really does take a village to bring it all together.
Avi Goldfarb: Okay, so that's quite a scale. And you said that you, as business lead, make sure you're solving the right problem. What does that mean? How does an organization, or how do RBC and Borealis, decide what the right problem is? Why NOMI and helping people understand that, instead of some other prediction?
Alex Scott: Yeah.
Avi Goldfarb: That those 50 people could've been working on.
Alex Scott: That's a great question, and it's a really hard question to answer. It's something we struggle with every day at Borealis, because there's so much opportunity here and being able to make these predictions really does change the way we do business. There are a few ways to look at this. RBC is a profit-seeking organization; we have a responsibility to our shareholders to produce profit. So, do we just tackle the most profitable problems? Can we just lean in on capital markets and really optimize our trading there? That's one thing to think about. But we also have a responsibility to our customers and our communities, so how can we make sure that when we're seeking profit, we do it in a responsible way? This is the constant balance that we're thinking about. One of the reasons we took on NOMI is that we know Canadians are worried about their financial future, and anything we can do to help them better understand where they are today and where they're headed tomorrow is going to mean a better financial future for all Canadians. We have a better relationship with our customers, we're building trust with them, and hopefully over the long term we can deepen those customer relationships as we continue to help Canadians build a better life. At Borealis, though, the one other thing we're always thinking about is how we can push the frontier of AI. How can we do more novel research and solve harder problems? That's a big part of our DNA: making sure that we're pushing the research frontier in artificial intelligence.
Avi Goldfarb: Okay. And so, NOMI is now launched. I assume you are continually improving it, but in effect, it's out there. How do you measure whether something like that works, or achieves whatever goals you had for it in advance?
Alex Scott: Yeah. So, there's the technical way we can look at success here, which is: we predicted that Alex is going to have a mortgage payment this Friday for this amount. Did that happen? With that we can come up with a very quick and dirty percentage accuracy, if you will. That's the easiest way to describe it. But this is very much a technical view of success, right? You can have 100% prediction accuracy, but if it's not interesting to customers, that's not that useful. So, we also look at engagement with the app. How many people are looking at this forecast? Are people changing their behaviour on that basis? One of the things that we really like to see is when customers go in, see this forecast, realize that their checking account might be a little bit short to cover that upcoming bill payment, and are able to move money from savings to make sure that they don't overdraft. In our mind this is a huge success, right? We'd never want to see a customer go into an overdraft position on their account. So, if we can give them the information ahead of time to say, as long as you move that $200 from your savings account, you'll be in good shape, we really, really like seeing that. That's really where success is for us. Of course we need to have high prediction accuracy, but having that engagement is really, really important.
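The "quick and dirty percentage accuracy" Alex mentions could be computed along these lines; the tolerances, data, and function name are all invented for illustration, not RBC's actual metric:

```python
# Hypothetical accuracy metric for payment forecasts: a prediction counts
# as a hit if the actual payment lands within a day of the predicted
# day-of-month and within 1% of the actual amount.
def forecast_accuracy(predictions, actuals, day_tol=1, amount_tol=0.01):
    hits = 0
    for (pred_day, pred_amt), (act_day, act_amt) in zip(predictions, actuals):
        if (abs(pred_day - act_day) <= day_tol
                and abs(pred_amt - act_amt) <= amount_tol * act_amt):
            hits += 1
    return hits / len(predictions)

# Predicted (day-of-month, amount) vs. what actually posted:
preds   = [(1, 1800.00), (15, 92.50), (28, 60.00)]
actuals = [(1, 1800.00), (16, 92.50), (28, 75.00)]
print(round(forecast_accuracy(preds, actuals), 2))  # → 0.67 (2 of 3 hit)
```

As Alex says, this technical score is only half the story; engagement has to be measured separately.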
Avi Goldfarb: That's great. Okay, thanks for that broad introduction. If people have questions on that broad perspective, on how to think about AI strategy, AI projects, and AI project management, I'll take those questions now before we go into thinking about policy.
If not, we'll come back to them. We'll see how the Q&A works. Okay, we're going to go on to the next set of questions, and then we'll come back to these AI strategy things. So, we're in Canada, and on a snow day you can think about that as a great thing or a bad thing about Canada, for those of you based in Southern Ontario. But one of the great things about Canada in the AI context is that a lot of these technologies were invented here. At the University of Toronto, Professor Geoffrey Hinton, who did maybe more than anybody to develop deep learning technologies, remains on faculty. Rich Sutton is at the University of Alberta; he is one of the core developers of a technology called reinforcement learning, which is key to a lot of what we do. And Yoshua Bengio is in Montreal. I think I caught the big three, because Yann LeCun's no longer here. So we have a lot of expertise in Canada, and that's something to be proud of. It's frankly the reason why we saw AI before others; as I said in the video, we saw these AI companies coming through the Creative Destruction Lab back in 2012. So I want to talk a little bit about Canada, but actually now we have questions coming through, so we're going to go with those. My stalling worked, thank you. We'll go through these questions and then come back to Canada. Question one: what are the implications of the increased prediction accuracy that AI is going to bring to capital markets? I guess the philosophical question is, do institutional investors have a bigger edge compared to retail, and is that something to worry about? I don't know if that's something you've thought about, Alex.
Alex Scott: I have thought about this a little bit, and I'll throw a few things out there. I don't know the answer; this is very much a philosophical question. But I would like to say that AI and ML have the opportunity to democratize the space, because there's not that big an advantage to being a big organization when it comes to using machine learning. Yes, we have data, but in the capital markets space, data is not a competitive advantage: the data is public and broadly free. Anyone with a subscription to Azure or Google Cloud or AWS can have equally powerful computing environments, and with some study, with some friends, can produce an algorithm that is likely to beat what we can produce at RBC. What we do at RBC and at Borealis is less about institutional investing; we do some of that at RBC, but we're a market maker more than anything, and where we make our money in capital markets is on that spread. We would be happy to optimize market-making: make a smaller spread, but make more markets. That's where AI is really going to play a role here, in speeding things up. That increased prediction accuracy means we can price things better midday, before the market closes and we recalculate everything, and make things better for everyone. So, we don't really play all that much in heavy institutional investing the way a hedge fund would, for example.
Avi Goldfarb: Okay, that's great, thanks Alex, and I'll take my own stab at that with my econ background. I'm a card-carrying economist, so, to reinforce Alex's point on efficient markets and market-making: the more markets you make, generally our models say that's better. It would be a worry if one institutional investor had all the best AIs and nobody else did, because then they'd have a pricing advantage over retail and everyone else. But once you have two, those companies can only get a little lead off each other, and the information that any two companies have is going to be reflected in the price of the securities. So, to Alex's point, you might think, well, the retail investor doesn't really have access to those models. But as long as we have dozens, not even hundreds, of investors who do have access to those tools and models, that should end up being reflected in the price. And as long as the market-making is efficient, which I've just argued it is, this is a reason to be optimistic.
Next question. Cost is always a key consideration for companies, so thinking about the cost of not adopting AI: is that starting to exceed the cost of doing so? What do you think?
Alex Scott: I think there's a strong argument for yes, it is starting to become costly not to invest in AI. Now, I'm speaking very much from a financial institution perspective. Let's play out the Canadian market, right? We've got basically five banks, maybe six or seven depending on where you put the thresholds, and Canada is something like 99% banked, which means that 99% of Canadians have at least one bank account somewhere. That means the way a bank grows is, okay, we have a couple of hundred thousand immigrants a year, so we should pick up those; otherwise, we're stealing customers from other banks. How do you steal a customer from another bank? You be a better bank. Better customer service, better applications, better mobile experience. All of these are driven by AI, and I think what we're going to see is that the banks who didn't double down on investing in this five years ago are going to start to feel the hit now that AI has become commonplace.
How can you make the case that you shouldn't be investing in AI in marketing right now? We've got expensive channels. We still mail things, and mail is expensive: it's over a dollar a mailer to get a flyer out to you. So, you can save costs by being more accurate, and if you know much more about your customer than the next bank, you're going to know when to reach out to them about that mortgage. You're going to know how price sensitive they are. You're going to know what experience they need to close that. If a bank is better than you in any of those three ways, they're going to win that business. So, the cost of not investing is extremely high.
Avi Goldfarb: Thanks Alex. This is actually a little bit of the subject of my next book, coming out next fall: thinking through, yes, there are real challenges in adoption. It is hard, but in competitive industries, the companies that win are going to be those that figure out how to do it. The hard part is that there are going to be a whole bunch of companies that try and don't guess right, and so they'll have invested in the technology but won't experience the benefits in the same way as the companies that have figured it out, and I would assert Borealis and RBC are among those that have. Okay, next question. AI developments and progress often rely on large datasets. Is this hard in Canada, implicitly because we're one-tenth the size of the United States or Europe?
Alex Scott: My first reaction to this is no, it's not a problem. Of course, I'm coming at this from a very financial-institution perspective, where most of our banks have millions of customers, and many of them over 10 million. When you think about all the transactions we process and all the information we have about those customers, we're dealing with very large datasets to start. There is a sort of inflection point in how large a dataset needs to be for it to be useful for machine learning tasks, and I think we've hit that inflection point, at least in financial services. Avi, any thoughts on industries where there might be a huge competitive advantage in the US?
Avi Goldfarb: Well, it's not clear there are lots of those, and it's not clear that anything is different with AI from where we were before. If you're a retailer, there are good things about being in the US relative to Canada: there are huge economies of scale, and data is going to be part of those economies of scale. But in the banking context, you said you had 14 million customers. That's a lot of data, and for those of you who have taken a stats class, prediction improves with the square root of n. What does that mean? It means there are decreasing returns to scale in data, and 14 million is, in most contexts, a pretty big number. So yes, 140 million is 10 times 14 million, but we're already pushing the limits of what a large n does, and a lot of the advantages are going to be in whether you can deal with more data points or other things in various ways.
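Avi's square-root-of-n point is easy to demonstrate by simulation. This sketch (synthetic data, illustrative only) estimates a mean from repeated samples of two sizes and compares the spread of the estimates:

```python
import random
import statistics

# Demonstration of decreasing returns to data: the standard error of a
# sample mean shrinks like 1/sqrt(n), so 10x the data buys only about
# sqrt(10) ≈ 3.16x the precision.
random.seed(0)

def estimate_error(n, trials=1000):
    # Spread of the sample mean across repeated samples of size n.
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n))
             for _ in range(trials)]
    return statistics.pstdev(means)

small, large = estimate_error(100), estimate_error(1000)
print(small / large)  # ≈ sqrt(10) ≈ 3.16, not 10
```

Ten times the data, roughly three times the precision: that is the "decreasing returns" in Avi's remark.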
There's a policy question on privacy, which maybe we'll touch on later, in terms of whether that is a constraint in the Canadian environment. But in terms of just the size of Canada, for the most part my experience is that hasn't been an issue either. Okay, next question. The next question is for me, so [Avi laughing] I guess I'll read myself the question. How do you think AI technology will impact the public service as a whole, and public sector financing in particular? Do you think it'll impact policy-making and development, or program management?
So, what I think will happen and what I think should happen are different here. What I think should happen is that different aspects of the public service should take advantage of the opportunities in technology, just like RBC is. There are great things that can happen with data in making the government a more effective and more efficient deliverer of public services. That's what I think should happen, and from my understanding there are areas of the public service where this is already happening.
But it's harder in the public sector, for a variety of reasons. One of them Alex already highlighted: look, if RBC decided not to do AI, they have at least four major competitors that would beat them to it, RBC would lose share, and it would be bad for RBC. Those competitive pressures just naturally don't happen in a government context, and so, as you, the audience, know much better than I do, there's a lot more management of downside risk, of what happens if we do this and something goes dramatically wrong, than thinking through the upside potential. So there's an organizational design issue in figuring out how you can capture those upsides in the public service without taking the risk of an essential service collapsing. Right? No offense to Borealis and RBC, but if you invested billions into AI and it didn't work, it would be bad for you and for the people who work at RBC. From a society point of view, though, we don't want the bank to collapse, but if you lose 10% share, someone else gains it, and that's fine for everybody. That's how markets work.
In the public service we don't have that luxury, and so it becomes more challenging. Now, that question was about the public service as a whole; public sector financing is trickier, because a lot of that is not AI in the classic sense. My understanding of public finance, anyway, is that you're not in a big data world, you're in a small data world. You have a time series, and maybe it's a frequent time series, but it's a time series of how budgets are allocated. So it's not a prediction problem in the machine learning sense; it's a combination of causal inference problems (if I do this, what will happen?) and some prediction based on extrapolating past trends. It's a different statistical toolkit.
Okay, next question. How is AI going to impact various underrepresented groups, and what supports might be needed? I'm going to take that first and then pass it to Alex. So, there is a very pessimistic narrative around this technology: that AIs are biased. And that narrative is not wrong, in the sense that the predictions are biased. Why are they biased? Because they're based on human decisions, and we humans are terribly biased. The solution we often see proposed is that therefore we shouldn't have AIs, and that seems to be exactly the wrong conclusion. What's great about AI, about prediction machines, is that we can use them first to identify where those biases exist and then to proactively try to make them better. Sendhil Mullainathan, an economist at the University of Chicago, has written about this extensively in the New York Times and in a lot of his research; he's a leading thinker on an optimistic version of AI with respect to bias and discrimination. I'm going to tell a story that I think summarizes the concern, and then I'll kick it over to Alex. Amazon, a few years ago (I don't know if you remember this) tried to develop an AI for hiring. They built a resume screener, and what they found, before they deployed the tool, was that this resume screener was predicting that all women would not succeed in the company, to the point where if a man coached a women's soccer team and the word "women" appeared on his resume, he would be penalized and not hired. So Amazon, to their credit, did not deploy this in the wild. They built the machine and then said, you know what, we're not deploying that, it's crazy biased. And what did they do? They went back to their old processes.
So, on the AI side, they did exactly the right thing: they said, hey, this AI is biased, we've discovered a bias, and we're not going to deploy it. But the AI also gave them an opportunity to learn, and ideally to build a better human process that could then feed into a better AI. Yes, the AI was biased, but the reason it was biased is that their hiring process was biased, and they would never have known that otherwise. Oh, and there is another event in this series on AI bias, so stay tuned; that will be from somebody who has more than the anecdote I just gave you from Sendhil. Alex, I don't know if you want to add anything to that.
Alex Scott: I'll say a few things here. Coming from RBC, where we're highly regulated in the public sphere, we're very conscious of this. To add to your point: can AI be biased? Yes, of course it can be biased. Our job as data scientists, as folks who use AI, is to catch that and fix it. One of the great things about AI is that it learns, and it learns based on what we tell it to learn. So, iteration one comes back and says your model is biased against Group X; we can re-weight that. We can say, don't be biased against Group X; here's what we need to do to eliminate that bias. We often get this narrative of AI being an incomprehensible black box that's biased, but these are all solvable problems in some sense. And the question I would put to you is: knowing that AI can be biased, but that with the right tests and the right scrutiny it can be fixed, would you rather have a decision made by an AI that's been tested to be fair, or by a random person sitting in the office at the back of your bank branch? I know what my answer is.
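The re-weighting Alex mentions is a standard preprocessing idea, often called reweighing: weight each training example so that group membership becomes statistically independent of the outcome. A minimal sketch on invented toy data (nothing here is RBC's method or data):

```python
from collections import Counter

# Reweighing sketch: weight = P(group) * P(label) / P(group, label), so in
# the weighted data there is no association between group and outcome.
def reweigh(groups, labels):
    n = len(labels)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [g_count[g] * y_count[y] / (n * gy_count[(g, y)])
            for g, y in zip(groups, labels)]

# Toy data: group "A" is approved 3 times out of 4; group "B" only once.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]
w = reweigh(groups, labels)
# Under-represented (group, outcome) pairs are up-weighted, e.g. (B, 1):
print(round(w[4], 2))  # → 2.0
```

After weighting, the weighted approval rate is 0.5 for both groups, so a model trained on the weighted data no longer sees group membership as predictive of approval.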
Avi Goldfarb, overlapping: Yeah. Yeah. [inaudible]
Alex Scott: But we're a long way from there, right? It's a tough question and it's tough to put your trust in something like that. A big part of what I want to see Borealis do and where we're pushing is publishing our AI fairness standards. Here's everything you need to know about this model that makes this decision about your financial life. Here's why we know it's fair. Here's how fair it is.
Avi Goldfarb: Yeah.
Alex Scott: Here are the ongoing metrics.
Avi Goldfarb: Yeah.
Alex Scott: Let's build trust there.
Avi Goldfarb: And there's been huge progress, from my understanding, in terms of how to do that, and it's great to hear that you are thinking about it deeply.
Alex Scott: Very much.
Avi Goldfarb: And speaking of being excited about the cutting edge on the research side, I'm sure that's a way you keep your researchers excited.
Alex Scott: Exactly.
Avi Goldfarb: Okay, so, changing directions here a little bit: how do you think about a good data science team? And more generally, how do you attract talent?
Alex Scott: So, a good data science team. There is no one good data scientist, right? You need diversity in these teams when you think about everything that has to come together. You need data on one end, and not just data but clean data, data that you can work with, so you need someone to do that. You need someone to understand and build these models. You need someone to put these models into production. You need someone to think about bias and fairness and how safe this AI model is. You need someone who understands the business problem properly. So, I would say the best data science team is a well-rounded team, a team whose members understand each other and what each other needs. In terms of attracting talent, this is a really good question. As I'm sure everyone knows, there's the war for talent and the Great Resignation, and of course we're battling with this as well. We're very fortunate to be in Canada, where we have spectacular AI talent. Unfortunately, firms like the FAANGs, Facebook, Amazon, Google and so on, can pay a fortune for the top talent, and they do. We do see Canadian-trained folks leave to go to the US to make those big dollars.
Avi Goldfarb: Why is that- I'm just going to push back a little bit. Why is that unfortunate? Um, that's--
Alex Scott: I think, selfishly, it's unfortunate.
Avi Goldfarb: Okay, [Avi laughing] I was going to say that seems great for our talent, and we'd like that our talent is in demand. I guess we'd like those offices to be in Canada, so they're paying Canadian taxes. But other than that --
Alex Scott: Selfishly, we would-
Avi Goldfarb: Okay.
Alex Scott: We would like that top talent, but the economics of it just don't make sense compared to what Google's paying; RBC is just not in a position to do that. It is great news, though, that we are producing talent that is desirable across the world. Very, very pleased about that. So how do we attract good talent? We offer flexibility: we want people to be able to work when, where, and how they want. We offer what we think is a pretty compelling dataset to work on, with about 14 million customers and lots of historical data, so that's pretty exciting. And we try to offer really big problems. This has been really attractive to me, for one, and to many of the other people who work at Borealis: come work on big problems with big impact. RBC is not as far along as Google on its AI journey. So yes, you can go work for Google and make Google search better by 0.2 percent, and get paid a lot of money to do that. Or you can come build NOMI and help 14 million Canadians understand their financial future.
Avi Goldfarb: Right. So given that, I want to talk about the Canadian landscape a little bit, because you've brought it up. How do you take advantage of being in Canada? What's good for RBC and Borealis on the AI side because you're in Canada?
Alex Scott: Our partnerships. The first thing that comes to mind is our partnerships with universities. Our partnership with the Vector Institute has been phenomenal for us from a bunch of perspectives. One is the talent pipeline: we get to meet PhD candidates going through their studies, we run internship programs, and we bring people in from across Canada to work with us. That's been phenomenal from a talent perspective. But also, just the quality of research: being able to partner with U of T, McGill, Queen's, UBC, bring in their faculty, and have them work as faculty advisors on some of our research problems has been huge for us. For one, it's very attractive to data scientists to be able to work with the best minds in this space, especially researchers who want to publish and go to conferences. And two, it really does help us push the boundaries. Those two things have been huge for us; our relationship with Vector has been awesome. We've got so much public support for AI here that it's really made our lives easier, even in selling the work internally at RBC: everyone knows it's important.
Avi Goldfarb: That's great, now- okay, so that's the good stuff. Are there aspects of the Canadian environment that are holding you back?
Alex Scott: One thing that we struggle with at times is the regulatory landscape and understanding how everything fits in here. The example I'll give is, we're regulated by OSFI, the Office of the Superintendent of Financial Institutions, and they are a principles-based regulator. So, they will tell us that when we deploy a machine learning model, it needs to be fair. That seems like a great thing to say, but the implications are very difficult to figure out, because there are many different definitions of fairness. At last count, I think we had 23 different machine learning definitions of fairness, and some of them are in conflict with one another.
A very simple example is individual versus group fairness. We talked about bias earlier, and we know RBC is a very old organization: historically, we used to give more credit to men than to women. It's just true that we did this. So, when we look at that historical data, our models tend to want to give more credit to men than to women, but they do that because of credit profiles and credit scores, things that accumulate over time. So, do we try to be individually fair, which means we don't look at sex at all and just look at your credit profile? Or do we try to be group fair, where we do look at your sex and normalize for those things? These are in conflict with one another; you can't have both in this space. And when you have a regulator telling you to be fair, but not what kind of fair, that's a very tricky problem.
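The conflict Alex describes can be shown with a toy example (all numbers invented): a rule that is individually fair, in that it applies one credit-score cutoff to everyone and never looks at sex, still produces unequal approval rates when historical score distributions differ between groups:

```python
# Toy illustration of individual vs. group fairness. The applicants,
# scores, and threshold are invented; nothing here is RBC data.
applicants = [
    # (group, credit_score)
    ("men",   720), ("men",   680), ("men",   710), ("men",   650),
    ("women", 690), ("women", 640), ("women", 610), ("women", 700),
]

THRESHOLD = 670  # one cutoff for everyone: "individually fair"

def approval_rate(group):
    scores = [s for g, s in applicants if g == group]
    return sum(s >= THRESHOLD for s in scores) / len(scores)

print(approval_rate("men"))    # → 0.75
print(approval_rate("women"))  # → 0.5
```

Equalizing the group rates would require group-specific thresholds, which by definition breaks the identical-treatment rule; that is exactly the tension a principles-based "be fair" instruction leaves unresolved.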
Avi Goldfarb: Is there some kind of pathway to talk to the regulator to figure out how this works? How do you square that circle?
Alex Scott: Yeah. So, we spend a lot of time chatting with the regulator, telling them what we think about this, how we're tackling it, and all the different ways we can measure fairness. We actually have a team at RBC whose whole job is to test machine learning safety, and they are our path to the regulator. Before we put a model in production, we run dozens of tests on fairness, on bias, on robustness, and we review it with this team, who say, yeah, this looks pretty good; we've done our research, we know what's out there in the literature and in the tests, this seems really interesting, why don't we go talk to OSFI about it and see what they think, see if they're okay with this. We've got a very collaborative relationship, which is one of the good things, with OSFI and the other financial institutions, where we can start to shape this regulation and make sure we're bringing everyone up along with us. So, when we come up with a new style of test, everyone should be using it. It's better for everyone.
Avi Goldfarb: That's great. Okay, so let's go back to our Q&A. Oh, you know what? I have one more planned question, so we'll do that and then we'll come back. And I think this planned question shows up implicitly in a handful of the other questions, which is that there's a lot of worry that AI is going to substitute for humans, that machines are going to replace us. Whenever I give a talk about AI, that's one of the dominant themes, and in the video everybody just saw, they saw me say: we always think about substitutes, but a lot of the opportunity is in complements, new areas of human activity that occur through AI. So, do you have some sense of complementary jobs that the AI tools you've built help create? There's one obvious answer that I'm not going to let you give, which is, "Hey, there are data science people who need to build the AIs." But more generally, is there a line of business or new opportunities because of the AIs that are being created?
Alex Scott: A couple of thoughts in this space. The first is that when we roll out AI, the best place to start is on the easy predictions. Let's play with credit adjudication here. When you apply for a credit card, you go through this process, and we decide whether we're going to lend to you and how much. There are some really obvious cases. Perhaps you've got a perfect credit score, you've never missed a payment, you don't use very much credit, and you're not asking for very much. Seems pretty obvious, right? And machine learning is going to find that right away. The trickier cases are, say, someone who went bankrupt once but has recovered. It was a long time ago; it's been wiped from the record. There are all these other little pieces of credit, some secured credit cards, a much more complicated financial situation. Perhaps you own your own business as well, so you don't have a traditional salary. That's much harder to work with, and there's much less data to go off of, because you're in a very small subset of Canadians. This is where humans really shine.
So, AI goes in, and what we've found is that you think you've put in this machine-learning tool that's going to save a ton of time on credit adjudication, for example, and it turns out not to save any time at all. We process more applications, but we have to kick out all the really complicated ones. And so, we've created space for very specialized human decision-making, for human judgment to layer on top of that AI. And this is a bit of a different skill set, right? If you pull out the ML completely, then most of your day job is finding these really easy cases where you say, yeah, it's obvious Alex deserves a credit card, stamp, off it goes. And now you've got this very specialized person who spends all this time learning all the intricacies about you as a customer. So, we've actually got this way of supplementing our AI capabilities with human judgment, layering it all together and making much better decisions as a result, and many more decisions as a result.
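The pattern Alex describes, where the model auto-decides the easy cases and escalates the hard ones, is often implemented as confidence-based triage. The sketch below is purely illustrative: the scoring rule and the thresholds are made up, standing in for whatever trained model and policy a real adjudication system would use.

```python
# Sketch of confidence-based triage: the model decides easy
# applications and routes the complicated ones to a human.
# The scoring rule and thresholds are invented for illustration.

def score(application):
    """Toy credit score in [0, 1]; a real system would use a trained model."""
    s = 0.5
    s += 0.3 if application["never_missed_payment"] else -0.2
    s += 0.2 if application["low_utilization"] else -0.1
    return max(0.0, min(1.0, s))

def triage(application, approve_at=0.9, decline_at=0.2):
    """Auto-decide only when the model is confident; otherwise escalate."""
    s = score(application)
    if s >= approve_at:
        return "auto-approve"
    if s <= decline_at:
        return "auto-decline"
    return "human review"  # the complicated cases where humans shine

easy = {"never_missed_payment": True, "low_utilization": True}
hard = {"never_missed_payment": False, "low_utilization": True}

print(triage(easy))  # auto-approve (score 1.0)
print(triage(hard))  # human review (score 0.5)
```

Tightening the thresholds sends more cases to humans; loosening them automates more. That knob is exactly where the "more decisions, but humans keep the hard ones" outcome Alex mentions comes from.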
Avi Goldfarb: That's really interesting. So, there's a Canadian company called Ada; I don't know if you're familiar with them. They're a chat company, and they helped Zoom scale in March 2020. They had signed Zoom as a customer, fortuitously, in about January 2020, and Zoom was a decent customer, and then suddenly the pandemic hit and Zoom needed a lot of customer support. For the audience: if you interacted with Zoom in March and April 2020, it wasn't great, but it was surprisingly functional relative to many other similar apps at the same time. And so, Zoom scaled. One of the reasons Zoom was able to scale is that they had semi-automated chat, very similar to what Alex just described. For the easy problems, like "I need to change my password," they had chatbots that helped you; if you emailed Zoom about changing your password, chances are you would have had Ada's automated chatbot getting back to you. But if you had something more complicated, something Ada hadn't programmed yet, then you dealt with a human, because that human could use their skills to help manage your problem. And of course, if there were enough of those kinds of things, over time they could be automated. But then as Zoom's scale grew, you had even more customers with very specialized needs, who therefore needed human judgment in their chat and email experience. So, customer service. Here we have a very similar story, and it's a Canadian story, which is also why I brought it up: a machine-learning company seemingly automating humans out of work.
But in the process, they allow someone to scale to the point where you end up needing lots and lots of humans, because their judgment on the specialized cases is much better than the machine's.
Sorry, I had one more question for you on this theme; I'm just trying to remember where it went. Okay, there it was. So, say someone's now entering the field. We think about the opportunities around judgment, but this person is interested in doing AI research or implementing AI in their organization. Do you have general advice on what they should do? If you go to the Vector Institute and meet a master's student over lunch or coffee, how do you think about advising them?
Alex Scott: It's actually a really tough question, because there are so many different paths here, and depending on what you're interested in, there's going to be work for you; there's so much opportunity in this space. There are a few things that I'd call base criteria. You have to be comfortable with a programming language, so you've got to go down that path in some way. Even if you're never going to be the best programmer in the world, being able to write a couple of lines of Python goes a long way, even if you're not going to be a data scientist but just data-science adjacent. Being able to read that really helps. And having a bit of background in this space matters; learning to program will help you with it, but at some point someone's going to show you a confusion matrix, and they're confusing the first time you see them. Eventually you'll learn how to read these things, and they become really valuable tools for understanding where a model is going to succeed and where it's not. So, you need that base level of understanding, whether you call it statistics knowledge or programming knowledge; it has to be there.
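For readers who haven't met one yet, a confusion matrix is just four counts comparing a model's predictions to what actually happened. A minimal sketch with invented toy data, showing the kind of reading Alex is talking about:

```python
# Sketch of building and reading a 2x2 confusion matrix by hand.
# Labels: 1 = borrower repaid, 0 = borrower defaulted (toy data).

def confusion_matrix(actual, predicted):
    """Return (true_pos, false_pos, false_neg, true_neg) counts."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
    return tp, fp, fn, tn

actual    = [1, 1, 1, 0, 0, 1, 0, 1]
predicted = [1, 1, 0, 0, 1, 1, 0, 1]

tp, fp, fn, tn = confusion_matrix(actual, predicted)
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")  # TP=4 FP=1 FN=1 TN=2

# Precision: of the loans the model approved, how many were good?
print(f"precision={tp / (tp + fp):.2f}")  # 0.80
# Recall: of the good borrowers, how many did the model approve?
print(f"recall={tp / (tp + fn):.2f}")  # 0.80
```

Reading the off-diagonal cells (FP and FN) is what tells you where the model fails, which is precisely the "where it's going to succeed and where it's not" question.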
The other piece of advice I'd give is, if you truly want to be a data scientist, if you want to be in the data building models every day, get practice, and it's so easy to do. Go out to Kaggle and play some competitions; there's lots of opportunity there. Get yourself up on a leaderboard, start a GitHub page, start posting some of your work, build up a portfolio. We ask people to submit those as part of their applications to Borealis, because it shows us that you're passionate about the space and it shows us what you're capable of. All great things.
And then, on the other side of this completely, if you don't want to be a programmer, there's also nothing wrong with that. AI is going to be everywhere; not everyone's going to be a programmer. There are going to be people who consume AI, who use it as part of their day jobs. So, having some analytical ability, and I don't mean programming ability, I mean logical thinking and good human judgment, being able to incorporate both what you understand and the output of a model, that's going to be really powerful. Those are the people who transform businesses, who are able to say, "Here's where I need AI. I can't build it myself, but I can find the people who will."
Avi Goldfarb: Right, that's great advice, thanks Alex. And another version of that question was whether people should be generalists or specialists, digging into the details, and your answer is yes. Whatever works for you: there are lots of jobs for specialists, lots of jobs for generalists, and exciting opportunities for people who can translate from whatever field they're in into AI. Go for it, this is worth investing in. I fully agree, and that's why we're here. Okay.
Okay. A question for me, though Alex, you might have some follow-up on this: "Have I seen AI used in the public sector? An example of AI use that's not about financial profit goals?" Yes, there are lots of examples, and there's room for many, many more. Whenever you have somebody filling out a form, for example, an AI can make filling out that form better. I don't just mean cheaper. It can make it cheaper, because you may not need a human to translate or to type or whatever else, but I also mean better. Having a form pre-filled for you, so you only have to answer the few questions the AI doesn't know, is a much better experience in interactions with the public service than having to fill out pages, and pages, and pages.
Okay, so we've seen that already happening at the border, where you have to deal with fewer questions because many of them are answered already. That's around the world, some in Canada but more elsewhere. I think there are a lot of opportunities for better, more efficient border services, if not so much immigration services. There hasn't been much of this yet, but I think there's a huge opportunity in taxes. The Canada Revenue Agency is doing a little of this after the fact, making predictions of what a person should owe or does owe. But I think there's a lot more to be done. In the current system, many people have to hire somebody to help them fill in a government form; they fill in that form, and then the government tells them whether they filled it in right or wrong and comes back with its own offer. From the user's point of view, that seems like a very inefficient way to provide that information. So, an alternative vision for taxes, and who knows if this could ever happen anywhere, I understand there are barriers, is that the government makes an offer and says, "Here's our prediction about what you owe, take it or leave it." That would save a lot of time and energy in the whole system, and then the government could focus on the people whose taxes are complicated. For most of us, whose taxes are very, very easy, it could almost all be done by AI. I don't know of any jurisdiction that's done this yet. I know people are talking about it, and I understand there are all sorts of political and logistical barriers to making it happen.
But whenever you have people filling out pages and pages of forms, that's prediction, right? You're filling in missing information, and so AI is going to make it better. We've seen that in a few aspects of citizen-facing public service; this is already happening, and I think there are tons of opportunities of that sort. To Alex's earlier point: there's an obvious prediction, we can put a small team on it, fill in the missing information, and make that aspect of the business more efficient, which is different from a total reorganization. There's another thing, which is, let's totally reorganize how we run this part of the business based on better prediction. I think there are opportunities there too, but that's a heavier lift.
Okay. I'm going to go to this last question, which I really like. Well, maybe the second-last question: is Canada doing enough to invest in AI and prevent a brain drain to the US and elsewhere? I'm going to split that question into two. Point one: are we investing enough in talent development? And the other: you said we can't compete on salaries, and ultimately, I guess, why not? RBC's a pretty big company; you make a lot of money. Why can't Canada compete on salary with the big players for the best AI talent?
Alex Scott: This is a really big question. So, is Canada doing enough to develop talent? I think we're doing some of the best work in the world developing our AI talent, which is why there is this brain drain in the first place. What our universities are pulling together, the talent we see coming out of them, is really spectacular. I mean, I've reflected over the last couple of years that I can't keep up with an undergraduate programmer anymore. There was a time when I was actually good at this, and now you meet someone who graduated last year and wonder why you'd even bother trying to compete. So, we're doing a great job there. As we get a little more senior in industry, I think that's where we could really double down, and part of Borealis's goal is to retain Canadian AI talent. We're very big on this. But we're only 100 people. We're part of a big machine, but we're only 100 people at Borealis, so we need help, right? We can't hire all the AI talent in Canada, and I would love to see more groups like Borealis show up, because there are problems to solve, there's great data to use, and Canada is a phenomenal place to live.
In terms of the salary conversation, I think we can compete on salary; I don't know why we're so afraid of this. People make claims that it's much more expensive to live in San Francisco than in certain cities in Canada, and that's probably true, but I'm not sure the math works out on the deltas between compensation and cost of living; I'm not convinced that's the case. Part of my challenge sometimes is that we've got this great big organization that's not used to hiring this kind of talent and is very much struggling with the right way to do it. So, I'll give you an example. In many big organizations, traditionally, the way you became senior and made more money was by building bigger teams, taking on more responsibility as part of the bank. You go from managing a team of five, to managing five people who each have a team of five, and so on until you manage an entire bank. This doesn't really work in data science, right?
There's no AI team of 10,000 people. So how do you make a role for someone who doesn't want to manage people, and perhaps shouldn't manage people, because they're the best machine-learning programmer in the city? This is something the bank is struggling with, and I think it's part of our problem. We need to make a space for it. Borealis is the start of that, but we're not the end. There's a lot more work to do.
Avi Goldfarb: That's fascinating. And that gets us to this last question. RBC is a big bureaucratic organization; the Canadian government is a big bureaucratic organization. You've made incredible progress in making machine learning and AI happen within a big bureaucratic organization. We have about one minute left. Is there a one-minute lesson on the secret to that success?
Alex Scott: Probably not, but I'll- but I'll try.
There are a couple of things worth saying here. One is that if you have the right support from the right people, you can go a long way. A big part of our success at RBC has been building belief. Start small, start low-risk, show some wins, get something on the board, and grow from that. At RBC we're very hierarchical; we've got lots of executives. Getting a handful of executives, very senior people, bought in, so that they become evangelists who can talk about the success, about how great it was to work with this data science team and everything they learned, has been huge for travelling around the bank. And then you start getting other groups of people who don't want to miss out. That's when you know you've made the change: when people start coming to you and saying, "Hey, I heard about this machine-learning thing, and I think we can really do something here."
Avi Goldfarb: That's great. You did it! That was under a minute. Okay. So, thank you so much, Alex. It's time to turn it over to Erica to wrap up.
Alex Scott: Thanks for having me.
Avi Goldfarb: Thanks.
[Erica's panel joins the chat.]
Erica Vezeau: Thank you so much, Avi and Alex, for delivering such an insightful and thought-provoking conversation over the last hour. Alex, I think you summed this up; you had the best line in the last minute, that concept of building belief. We are a really big machine in the Government of Canada, and sometimes, for those of us who have drunk the Kool-Aid and are trying to make a difference, it feels like an insurmountable mountain. But you're bang on there. We're starting small and spreading the word, and we'll get there; we'll all get there together.
I also think it's really valuable for public servants to be exposed to conversations between academics and industry professionals, on any topic but particularly this one, so that we can better understand the unique perspectives you bring to the critical issues. So again, I really thank both of you for taking the time with us today. On behalf of the Canada School of Public Service: your expertise is invaluable, and what you've brought to us enables us to produce high-quality learning events for public servants across the country. We hope to see you again.
For those of you on the line, I want to remind you, as Avi mentioned earlier, that the next event in the AI Is Here series will take place on February 14th and will cover the topic of bias, fairness, and transparency. Registration details for that event will be available on our website soon. We also want to draw your attention to some of our other upcoming events, not necessarily on AI, but they are pretty interesting. I'm just going to share my screen here, and I hope you can see behind me that there are four events happening through January and February: a couple on innovation, one on quantum computing, and the data conference happening from February 23rd to 24th.
While you're on our website looking at more information on those events, we invite you to discover the latest learning offerings coming our way. Not only do we have events, but we've also got courses, programs, and other learning tools. I will also mention that this learning event is brought to you through a partnership between the Digital Academy and the Transferable Skills Business Lines here at the Canada School of Public Service. We also really want to thank our series partner, the Schwartz Reisman Institute for Technology and Society, for their support in delivering today's event. Finally, to all the learners who registered for this event: thanks for tuning in, and we hope you found it useful. Your feedback is really important to us, and I invite you to complete the electronic evaluation that you will receive by email in the coming days. Again, thank you to our speakers and thank you to our attendees. We look forward to seeing you again soon. Take care. Enjoy the rest of your week.
[The video chat fades to CSPS logo.]
[The Government of Canada logo appears and fades to black.]