CSPS Virtual Café Series: Artificial Intelligence, with Gillian Hadfield, Karen Hao, and Shingai Manjengwa (TRN5-V07)

Description

This event recording features a conversation with Gillian Hadfield, Karen Hao and Shingai Manjengwa on the challenges involved in advancing AI technology, including its role in shaping future prosperity, addressing human bias in design, and what effective AI governance may look like.

Duration: 01:00:29
Published: March 2, 2021
Type: Video

Event: CSPS Virtual Café Series: Artificial Intelligence with Gillian Hadfield, Karen Hao, and Shingai Manjengwa



Transcript

Transcript: CSPS Virtual Café Series: Artificial Intelligence, with Gillian Hadfield, Karen Hao, and Shingai Manjengwa

Neil Bouwer: Hello everybody. Welcome to today's event with the Canada School of Public Service. This event is one of our virtual café series, which aims to introduce public servants to interesting thinkers and leaders, from inside and outside the public service, to explore important topics and ideas for the country and to have thoughtful discussion. Thanks everybody for being here today. We're really pleased to have you.

My name is Neil Bouwer. I'm a vice president at the Canada School of Public Service. It's my pleasure to be the virtual moderator today and thank you for all of your virtual participation. We have over 1200 viewers and participants today. That's fantastic. You can ask questions through Wooclap today. You can go to Wooclap.com on any of your devices. The code for this session is VC0302. VC like Virtual Cafe and 0302 like March 2nd. Please log in to Wooclap. VC0302. That's how you'll be able to ask questions. We also have simultaneous interpretation for this event. Please follow the instructions in the event email if you would like to listen in the official language of your choice.

We're really pleased to have this discussion today on artificial intelligence. Whether you love it or whether you hate it, whether you're curious or whether you're convinced, AI is increasingly ubiquitous in our lives. The news we read on the Internet, all of the prompts that we get on our phones, our cars through lane-assist technology to keep you in the yellow lines—hopefully not while you're checking your phone—the movies and TV shows that you watch and that are recommended to you by Netflix or YouTube or other platforms. These are all increasingly common ways that artificial intelligence and machine learning are coming into our lives.

For the most part, I think we will see that it's making our lives better and it's helping us as new technologies emerge. We're also beginning to see the impacts, and sometimes the alarming societal impacts, that AI could hold in store as it grows and is even more deeply embedded into different aspects of our work and home lives.

At the same time, artificial intelligence is advancing so rapidly that it's hard for regulators to keep up in terms of the types of oversight or the types of guidelines or directives that the private sector and other organizations in society need to follow. This can create gaps and vulnerabilities that can result in, for example, violations of personal privacy or issues around inclusion or equality and equity.

This is particularly important given the reawakening of society around systemic racism, around implicit bias, and around discrimination that can take place around a number of factors. It's deeply ingrained in our culture and our practices and our institutions and also by default in our data. When we rely on those data for machine learning and artificial intelligence, those biases can come to the fore and play an important role in our lives. That's a lot of issues to talk about and a lot of issues to unpack. We are so delighted to have such a stellar panel of leaders to help us talk this through.

First up, we have Karen Hao. She is the Senior Artificial Intelligence Reporter at the MIT Technology Review, where she covers issues around research and societal impacts related to artificial intelligence. She writes a weekly newsletter titled The Algorithm on the subject. In addition to her work at MIT Technology Review, Karen co-produces an AI podcast, In Machines We Trust. Welcome, Karen.

Next is Gillian Hadfield. Gillian is the University of Toronto's inaugural Schwartz Reisman Chair in Technology and Society, as well as a professor of law and strategic management. She's also a Senior Policy Adviser to OpenAI in San Francisco, as well as an advisor to courts, other organizations, and tech companies in the field. In 2017, she published the book Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy. Welcome, Gillian.

Next is Shingai Manjengwa. Shingai is the founder and CEO of Fireside Analytics. It's an ed-tech start-up that helps to teach data literacy and data science to people—all the way from kids to CEOs. I won't ask her which is the more difficult to teach. She is also the Director of Technical Education with Toronto's Vector Institute for Artificial Intelligence. You may also remember her from winning the Public Policy Forum's Emerging Leader Award in 2020. Welcome, Shingai.

To start us off, perhaps just in the order that I introduced you each, give us five minutes on your perspective on artificial intelligence and a few thoughts. We will start with you, Karen.

Karen Hao: Thank you so much, Neil. Hi everyone. Thank you so much for having me. I just had one thought that I wanted to share today to frame the conversation a bit. When I cover AI and society, what I'm really covering is automated human decision-making and society. When I first started, I think AI was just beginning to come into the public consciousness. When I would tell people my job, they would say, "Wow, that's very niche. What do you spend your time doing?" Now it's the exact opposite. Only two and a half years later, people say, "Wow, how do you even wrap your head around something so expansive?"

The way that I see it, even though there are so many different ways that AI or algorithms can manifest, there are essentially just two core pillars through which I do my reporting. I remember that AI systems are always made by people, who therefore embed a certain worldview and a set of values into whatever algorithm or machine learning system they're building. I also remember that the system will inevitably then impact another group of people and have those worldviews or those sets of values imposed on them, for better or for worse.

I think about: How do we actually want AI to impact us? How do we make it a force for good? How do we set up regulatory guardrails to make sure that it continues being a force for good? We really need to not foreground AI, but actually foreground the people and the context within which it operates and understand that there are people who are wielding it to make these automated decisions. Ultimately, algorithmic decision-making is some form of human decision-making in an automated way. That's just my frame and comment for today's discussion. Thank you so much.

Neil: Fascinating, Karen. I can't wait to come back to that in Q&A. Next up, Gillian.

Gillian Hadfield: Thanks, Neil. Thanks for setting us off, Karen. I think that's a really nice way to frame: the people in front and people behind automated decision-making. I think that's really, really quite critical.

For me, I think about the way you talk about creating those regulatory challenges. I work in legal and regulatory design. I've been thinking about this space for a number of years now. What I see are two significant regulatory challenges. Our existing approaches to regulation are not really up to the task of dealing with AI. The technologies move at a speed that is much greater than our historical approaches to regulation. They require greater levels of technical expertise and much of that data and technical expertise is locked inside the tech companies we need to regulate. I think there's a significant regulatory challenge to our existing approaches to regulation. I think that's one of the reasons I'm very focussed on legal and regulatory innovation in this space.

I also want to emphasise the risk of not figuring out how to regulate AI and relatedly, regulating data. Every time I turn around, people are talking to me about data governance. I think that's just part and parcel when talking about AI now. There are two sides to that regulatory challenge. What happens if we don't figure out how to keep up with these complex, fast-moving technologies? One is what Karen has emphasised. I think it's implicit in what she was talking about.

There are technologies in particular—she's focussing on automated decision-making—that can be harmful because they replicate existing biases. They discriminate. They might be unfair and they can disrupt our society. It's automated decision-making about those recommendations you mentioned, Neil, or the news feed. That's not just a problem of discrimination. That's also disrupting our politics, relationships, social stability, and mental health for people who become addicted. I think there's the harmful part of the technology that requires a regulatory response, which we're really struggling to supply. But I would like to also emphasise that there is a lot of good that we could be doing in health care, in climate change, in transportation, and in smart cities. There's a lot of good automated decision-making.

Automated decision-making gets to the heart of things. I've thought about it for a long time. The problems of access to justice: 80 to 90 percent of people don't get access to any kind of structured decision-making or they have to show up in court without lawyers. It takes a long time to get an immigration decision or to get anything that is thoughtful. If there are ways that we can automate that in the right way, that can actually give people greater access to justice. I think about that. That's where the lack of good regulation is actually now serving as a blocker to the development of AI for social good.

What I'm seeing is that we're getting this imbalance in the way AI is developing. It's charging ahead in the private sector because it's roped to these incentives that come from advertising revenues. That brings some good stuff. Our Zoom is working as well as it is because of AI and that incentive. That's a good thing. In the meantime, it's being stymied in our public sectors. Our public sectors are not keeping up, not only in regulating that private sector, but they're not keeping up in the fact that we need to just step back and look at our economy. It's really quiet over here in the public sector. I think that's a real imbalance that we need to be thinking about. This is why I think that the challenge of building good regulation and novel approaches to regulation is such a challenge.

When I look around the world, very, very few places are figuring this out. We are still struggling to do that. I think we could make quite a leap here. That's my framing for how I'm thinking about AI these days.

Neil: I think that's fascinating for all the public servants on the line. I hope some of them are thinking "challenge accepted" in terms of how to bring the public sector's game up on the good use of AI to produce public value. We'll come back to that. Shingai, over to you.

Shingai Manjengwa: Thank you. Hi Neil. Hi everybody. It's a pleasure to be back with you again. I live at the intersection of business, data science, and education. As Gillian pointed out, business is galloping ahead farther than any of the other components.

I'll speak more on the subject of education in the sense that our education systems just have not really caught up to the rate of development in artificial intelligence. What we have are education systems that are very good at empowering, I'm going to say young people, but all of us really, with an understanding of the physical world. However, most of us don't really know how an email gets from one place to another. We also don't have a strong understanding of how our mobile devices work. Often in asking the question, "why do computers need electricity?" you will get some very creative answers from folks that use them every day. If we take that as a premise, then we really do need to focus on how do we fix our education systems at a basic level, and also how do we upskill and empower our policymakers so they can actually participate in this conversation.

To that end, I've pivoted my thinking and really my approach to technical education to start looking at AI as a team sport. You will not get there with just the AI researcher who is working on the complex models or reinforcement learning on how to beat the next AlphaGo game. We're not going to get there with just that individual. However, if we can get that individual to sit in the same space with perhaps somebody like Gillian who brings the legal perspective and some of the social science perspectives, and we can also get them to sit down with policymakers, then perhaps we can get to that place where artificial intelligence regulation can actually happen. That is one of the fundamental challenges that I'm seeing in that space.

For context, at the Vector Institute, my work really is to take artificial intelligence research—the latest and greatest cutting-edge work that's happening—and package it into programming that industry can absorb. That's the work that I'm doing. Right now, we are doing a course on bias in artificial intelligence, and it is designed for small to medium enterprises and funded through the National Research Council of Canada. I mention this because, working with those SMEs, I really see what it means for them to translate really difficult issues in terms of fairness and bias into their business models and the deployment of those models. I have a newfound sensitivity for how we will not get there with just the engineer or the computer scientist with a PhD. That's the point and I'll leave it there. AI is a team sport and we really need to upskill and empower our policymakers so they can actually participate in the conversation.

Neil: Thanks, Shingai. I absolutely love it. It's refreshing to talk about it as a team sport rather than an interdisciplinary or multidisciplinary team, which I think evokes the same sentiment. Thank you all. That's really fascinating. It's hard to know where to begin. Karen, let's start with you. As we see the pervasiveness of AI coming into different facets of the economy, into Google Home devices and other devices, our vehicles, as we've talked about, and privacy, could you share some thoughts on where you think AI is going to create the most disruption? What parts of the economy or what parts of society do you think are subject to the most disruptive change? I'll ask for everyone's reaction on this. Karen.

Karen: That's a really good question. It's funny. I actually think that the biggest disruption will happen with AI being used in government systems, not necessarily in the private sector. For a little more context, I have been thinking a lot recently about government algorithms.

I worked on this story where I had been speaking to a group of poverty lawyers. They work with clients who can't really afford to hire lawyers and who have lost some kind of access to basic services or basic needs. Sometimes it's a loss of access to services from the private sector. For example, they have a severe amount of credit card debt and they aren't able to secure housing or they are unable to secure a car. Sometimes it's loss of public benefits. For example, they can't get childcare for their kids or they can't get unemployment benefits or they can't get Medicaid in the US.

What's interesting is when I started speaking with these lawyers, they said that increasingly these decisions are actually made by algorithms. The lawyers are in this really weird place where they have to defend their client to a nameless entity because when they're trying to fight with the government on figuring out why their client didn't get Medicaid in the US, the witness on the stand is a nurse, but the nurse has no idea what the algorithm is doing. They can't really effectively get to the root of why this client was suddenly denied an essential service.

It's made me think a lot about the use of algorithms to allocate and distribute resources as a very persuasive use case. It is widely applicable across different government agencies when they're thinking about how to more effectively, more fairly, and more justly distribute the resources that they have across the population. It also has the highest consequences because these are the services that provide the net of support that a lot of people rely on when they hit hard times. I don't know if that was the answer that you were expecting, but I think that algorithms could be the most disruptive, either in a bad way or in a very positive way when applied in government use cases.

Neil: It's interesting. Gillian, you touched on this in terms of the application of the law. It turns out a lot of law is just looking at jurisprudence, finding similar cases, looking at outcomes, and correlating them. It sounds like it could potentially be very disruptive to the legal profession. Do you have any comments on that, or on other sectors that you would point out as likely to be disrupted?

Gillian: I want to start off by saying that AI is a general purpose technology. That's actually why we should expect that it's going to be so disruptive because it really can end up anywhere. I think it's very hard for us to predict where we're going to see the greatest amount of disruption or what that might mean.

To think about this point that Karen has raised about the role of AI in decision making, she's talking about benefits decisions. There are immigration decisions. There are housing decisions. We actually have a ton of decision-making that happens. A starting point is that the vast majority of people don't get any kind of insight. They don't get a lawyer to go up and find out why they got benefits or why they didn't get benefits. You're constantly clicking those little boxes online and nobody reads them. I'm a contracts professor. I don't read them. Nobody reads them. You can't read it. There's no point reading them.

I actually think that it is possible that we could build automated decision-making systems that could give people more access to a reliable understanding of the way the rules are being interpreted and how they're being implemented. But, that requires us to build it differently than we currently build it.

I don't know who thought that the way to build a decision-making system in criminal justice, medical care, health care or benefits was to look at how we've done it historically and just train the machine to reproduce what we've done historically. We know that's often not the way we want it to be done. We want it to be done in a way that is consistent with the rules. We need to build those systems in that way.

That goes exactly to something that I'm working on in my research these days with some researchers at the Vector Institute. There's a lot of discussion about explanation of decisions, but that's focussing on the mathematical explanation and which variables mattered. What humans want are justifications. They want to be told that there's a reason that is consistent with the reasons we allow in our societies. I think about that as the real challenge. How do we get our automated decision-making systems to integrate into our systems of reasons and justification?

Karen appealed to the visual of somebody on the stand. The nurse on the stand. How do you integrate machines into that? Because that's what's critical for the stability of our societies. When I worry about what big disruptions might be, it's that people will start to feel like the world is not a fair place or is a less fair place than it is now. You can hit a tipping point where I think you really have a lot of loss of value, benefit, stability, welfare, and good life. I think Karen is focussing on something really important there in terms of potential disruption and what we need to do.

Neil: Absolutely. Let's come back to some of these tensions and trade-offs and some ideas or perspectives on how to address them. But first, Shingai, any additional thoughts on the disruptive potential of artificial intelligence, or particular areas that you think about in that regard?

Shingai: Yes. There's an African saying: when elephants fight, it's the grass that suffers. The shift that I'm seeing is around where power is housed and the borderless nature of these technologies: the fact that if my AI is stronger than your AI, I can reach over your border and influence and make changes to your society and how you govern yourselves. That is where I think we're going to see the biggest disruption. It's not the sexiest one. It's not the one that we really want to talk about. But I think that's where we need to be thinking. Sometimes when we look through the lens of a decision, such as China choosing to invest in their own search engine, there are many arguments for why that could have happened, but I think there's something there around identity and the idea that if the people building these technologies, and they become so pervasive, are not of your culture and your tribe, then there's an identity question to be answered.

As soon as you speak of identity, you then speak to national security. You then speak to how do we view ourselves as a community and a society and how do we govern and rule ourselves. This is not the sort of thing that you can outsource. You can't go to a neighbouring country and say, "hey, we don't have the skills or resources to do this. Can you lend us a few people?" Just because of the nature of the technology. The biggest area of impact, the biggest area of change, and the biggest area of evolution I see is around those identity and national security questions. They're big and lofty questions. I think they're even more significant than "advertising is driving the Internet" or "my next toothpaste is going to come from this location faster than it did yesterday". I think we need to keep an eye on the big picture and take the opportunity to speak to policy-makers. We need to keep an eye on that as a significant area of focus, particularly in Canada. I'm going to call us one of the good guys. We need to keep our foot on the gas on that one.

Neil: Shingai, what does that look like, putting the gas on this perspective? This goes back a bit to what Karen said in her opening remarks about people at the beginning, people at the end, and Gillian, to the comments you made about social justice. Shingai, could you tell us a little bit more about what you think it looks like to address those trade-offs or those big questions that you're putting on the table around power and around influence in society?

Shingai: It goes back to my original question: are policymakers empowered to see those dynamics, to make a call on those dynamics, to speak to those technologies, and to have a full understanding of what's happening?

One of the big ways that Canada has taken broad steps to empower ourselves to be ready is by investing in artificial intelligence, investing in a pan-Canadian AI strategy, investing in institutes like the Vector Institute, making sure that our graduates have a home in Canada where they can develop their skill sets and then ultimately contribute to this community and society. That's step one in making sure that we have the people in place. It really does come back to when a policymaker is looking at a series of options around security and around defence. Do they have the tools in hand, and the skills and knowledge within them, to make a call relating to life-and-death issues when we're dealing with topics of that nature?

Neil: Great. Karen, do you have any views? I know you have to log off in a few minutes. Do you have any parting thoughts on that question?

Karen: Yes. I really love the points that Shingai has been making. As an embarrassed American and as part of the AI community, I do think that Canada has a huge role to play on the global stage in figuring out a way to help guide the global conversation around AI development in a way that's not so competitive. I think the US and China have so grossly dominated this space and this idea that there's an AI arms race happening between these two countries has created a lot of perverse incentives around the ways that countries are now really focussed on trying to develop the technology just for themselves and as quickly as possible, not necessarily always as safely as possible. I think Canada actually has a really unique role in stepping into the fray and trying to bring a more collaborative mindset, a different perspective to this weird superpower tussle.

Neil: Thanks, Karen. Thank you for sharing your time with us.

Karen: Thank you so much.

Neil: Gillian, what about for you? What does success look like in terms of dealing with some of these power dynamics and societal vulnerabilities associated with the growth of AI?

Gillian: Shingai is focussing on a really important point about how AI is redrawing the landscape of power. I think we're talking about eras here. This is the end of the regulated market economy that we've been living with for 120 years or so where we have corporations that can push forward that profit motive, which drives a lot of great innovation and new ideas. Then we have effective regulation that keeps things in balance. We have regulation for environmental reasons, for health reasons, and for fairness reasons. That's what's breaking apart. That's what is sort of driving ahead on the corporate side.
Power is accumulating in those private tech companies in particular because we're not keeping up on the regulatory side. Our politics is rewriting the structure of our societies. I think it's got a geopolitical global dimension, which is partly what I think Shingai is also pointing to. You have large tech companies now deciding things like who's on a platform and what masks will be distributed and just a lot of decision-making that's happening at that location because power is accumulating there. I think a lot of those companies don't really want that power.

I think we're going to have to be rethinking where we draw the boundaries. I worry a lot about the fact that the pace of development inside our large tech companies is going so fast and it's so complex. The models they are building are just massive. They can't be reproduced in the public sector. The language models, for example: OpenAI's latest one has 175 billion parameters in it. I think Google announced one later, Shingai may know the numbers better than I, with 1 trillion parameters in there. That's just not something we could reproduce in the public and academic sector. How are you going to regulate that if you can't actually peer into it? If you can't actually work with it?

I always draw the analogy. It's like you have to regulate the car company, but you can't buy cars and run them through a crash test to figure out how they work. You're entirely dependent on how the car company reports to you that they work and what they can and can't do. I think that's a major challenge and I think that's a massive disruption just at the core level of where our society is being decided. How important are our elections? How important are our communities and decision-making processes? I think that's why we need to figure out the regulatory challenge and fast.

Neil: Gillian, who is doing this right? I think in Canada, we like to think we're at the forefront. Maybe once we were at the forefront of AI. I don't know if it's true today that we're at the forefront. Do you see other countries regulating or taking other measures on AI that you think we should take a look at and apply here in Canada?

Gillian: No. I think we actually have the capacity to be out in front on this. Actually, most governments around the world don't know what they should do. The EU has made some bigger steps in this area. The GDPR, or General Data Protection Regulation, is about data but also includes things like explanations for automated decision-making. They've announced a framework and proposed legislation for regulation of AI, which would designate some AI systems as high-risk and require national supervisory bodies to certify across a whole bunch of dimensions. I see that in Europe. Taiwan is thinking ahead on digital infrastructure. There are a few other countries that are thinking ahead on digital infrastructure, but I don't think anybody has quite figured it out yet.

I don't think there are actually too many models out there to emulate, but I think we know how to build them. There's an element in our new proposed bill, C-11, for certifying organizations that have data management processes that meet the standards required in the bill. I think that actually opens up a possibility for a move. I'm excited about the potential for Canada moving there.

Neil: Shingai, Gillian is painting a picture where the good news is that Canada is not last, but the bad news is that nobody's figured this out. That puts an onus on tech companies, on tech developers, and on teams that are developing tech to operate responsibly. In the absence of this regulation, what do you make of the onus on the tech companies themselves to self-regulate in this area?

Shingai: In a perfect world, yes. They would. I think many of them are. To be fair, I've got a lot of friends and colleagues who are part of these big tech companies, and they're people who I would say share the same values and are moving in similar directions.

At the same time, these are massive companies. A company is not just the sum of its parts, and a big ship turns slowly. Any decision that they make has an incredible impact on the landscape around them and sometimes unintended consequences. I don't think at any point that Twitter ever set out to be a policy platform for Heads of State, but then as soon as that happened, it started to raise questions: if something is said by a Head of State on a platform, is that not policy? How do you relate to it? From that perspective, we would like self-regulation to happen, but I think it's not enough.

It comes back to the conversation of bringing those different parties to the table and having better and more constructive conversations around the details. We can have lofty conversations around "henceforth, no more policies made on social media platforms," but beyond that, what does the technical implementation look like and is that feasible? We've seen the story play out in Australia and there are different sides to that conversation. I think there are some very real technical implications for some of the decisions that we make at a policy level, that even the tech companies may or may not figure out. It's easy to sit here and point a finger and say, "They're not going to do it because they're chasing profits." That's not necessarily the case. I think sometimes it is genuinely difficult to make the types of changes that we're asking for.

I'll speak specifically to data privacy. If we're looking at laws and policies that say that if I, as Shingai, decide I no longer want to be in your training data set, that's my choice. Legally, the law can potentially protect me. What are the next steps in terms of physically removing me from the database? If my face has been used to train a facial recognition algorithm, we cannot undo that. We cannot go back on that, even if my face is no longer in that data set. We would like those organizations to be more responsible. I think certainly there's a moral and ethical push for that, but it will have to come from regulation.

Neil: Thank you. We can take some questions from Wooclap. If you logged in late, if you go to Wooclap.com and enter the code VC0302, you can submit your questions. Let's take a look and see if we have any questions from the audience. It looks like we do. Why don't we start with the one on the top left corner? Is it all there? There we go.

There seems to be a somewhat unexamined assumption that human beings make decisions that are explainable, but it may be another elephant in the room that human beings make decisions with plausible but perhaps unprovable or excuse-based explanations. This might raise questions of accountability. There's also an assertion, I guess, not so much a question, that Canada is in a great position, because of our history and human rights context, to be thinking about AI in this space. I take it you're going to agree, Shingai and Gillian, but let me just see if you have any reactions.

Gillian: Let me take this one on, because I've been thinking a lot about this question of justification. I'm concerned that we're talking now a lot about explainable AI. The way that's being interpreted in the computer science world is really the explanation for what's happening inside the model—what variables are driving the decisions. That's valuable information. It helps you predict the system. It allows you to identify factors that are influencing the decision.

I think it's important to recognise that when we talk about justifications, a lot of people hear that as ex-post rationalisation, as though people can say whatever they want to. But that's actually how our legal systems and our moral systems work. We ask for accounts and reasons from others, and then we don't just fall down dead in front of them. We check them for plausibility. We check them for consistency with other principles. We have a rule, for example, about not being hypocritical: if somebody gives their reason, but last week they said that the opposite reason would support something that they want to do, then we discipline them.

Our systems of legal reasoning and moral reasoning are all about testing those reasons. When a judge decides a case and writes written reasons, we actually don't know if those were the reasons that were firing in the judge's head when he or she made the decision. But you've got to set out reasons, and only those decisions that are justified by a consistent set of reasons, as examined by the public, by the press, by other lawyers, and by the litigants, are the ones that stand. That's our control mechanism.

That's where I think that we have to be thinking about how we build automated decisions that can provide those reasons that are explicable to humans, understandable to humans, and most importantly, that are woven into that fabric of the way in which we control. No, you can't provide that reason for this decision. That's not a good reason for the decision. You can't say you're not going to confirm our judges because the election is coming up close and then say, "but it's okay when we do it." That's how we discipline our reasoning. That's why we're giving reasons all the time and disciplining those. We have to integrate AI into that.
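
As a concrete illustration of the distinction Gillian draws here, the short Python sketch below contrasts the two views. Everything in it is hypothetical: the toy benefits model, its weights, and the list of permitted reasons are made up for illustration. The explanation() function reports which variables drove the score (the computer-science sense of "explainable AI"), while justification() asks whether the decision would still stand if only the reasons the rules permit were counted.

```python
# Hypothetical toy benefits decision, scored by a simple linear model.
WEIGHTS = {"income": -0.6, "dependants": 0.8, "postal_code_risk": 0.5}
PERMITTED_REASONS = {"income", "dependants"}  # reasons our made-up rules allow

def score(applicant):
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explanation(applicant):
    """The 'explainable AI' view: per-variable contributions to the score."""
    return {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}

def justification(applicant):
    """The 'justification' view: does the decision stand on permitted reasons alone?"""
    contribs = explanation(applicant)
    allowed = {k: v for k, v in contribs.items() if k in PERMITTED_REASONS}
    disallowed = [k for k in contribs if k not in PERMITTED_REASONS]
    return {
        "decision": "grant" if score(applicant) >= 0 else "deny",
        "decision_on_permitted_reasons_only": "grant" if sum(allowed.values()) >= 0 else "deny",
        "disallowed_reasons_present": sorted(disallowed),
    }

applicant = {"income": 1.2, "dependants": 0.5, "postal_code_risk": 0.9}
print(explanation(applicant))    # which variables mattered, and by how much
print(justification(applicant))  # whether the outcome survives on permitted reasons
```

In this toy case the model grants the benefit, but only because of a variable the rules do not permit; on permitted reasons alone the decision flips, which is exactly the kind of reason-testing the panel is describing.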

Neil: Yes. It's hard for me to think about what a Court of Appeal looks like for an algorithm, but it's a fascinating question.

Gillian: Yes. We currently have zero licensing regimes around machine learning models. You can't sell drugs that haven't been tested, but you can put a machine learning model out there with nothing. One of the things we've been exploring is whether there is a way to create a procedure where you actually design the system so that there is a human who can be held responsible for the decisions that are made, and where you've gone through a certain kind of process to establish that, so that we can't just say, "The machine did it. I have no idea." We need to design processes that allow us to hold somebody responsible for the decisions that are made. That's the only way we all hold together, I think.

Neil: Shingai, did you want to comment on that?

Shingai: Yes. I always think of automated decision-making as a very complex coin toss that defies the laws of physics. It jumps up, goes sideways, comes back, and then finally lands on a decision and we have to stick with it. We're all comfortable with deciding what type of pizza we want to have with the toss of a coin, because we all understand how that happened and we are able to accept the decision. But if that coin behaved in such unpredictable ways, or we couldn't see how it happened and someone was actually responsible for the ways in which that coin was behaving, we would all want to know how the decision was made. That's really the way to think about this incredibly complex coin toss that, at the end of the day, we're asking to help us make a decision. We do need some sort of explanation for how that came to be. Otherwise, it is part of our nature to question it. To Gillian's point, if we're going to be some sort of cohesive society, we're going to need that final call. We're going to need to be able to have trust and faith in how that decision was made.

Neil: Absolutely. One of the other Wooclap questions that caught my eye, Shingai, is a question about how the portrayal of artificial intelligence in science fiction has impacted people's thinking around artificial intelligence. I don't know if you're a sci-fi fan or if you have any thoughts on that.

Shingai: Yes, I'm a huge Star Trek fan. If this were an actual conference, we could definitely discuss this over the breakouts or after the session. I'll use the example of Star Trek for context here. There's an episode where you have an artificial intelligence medical officer making decisions. At some point, it is questioned whether the medical officer should be classified as a human being or an android. If you look at the life of Gene Roddenberry, the creator of Star Trek, it was very rich with different walks of life and different perspectives.

I think what sci-fi does differently is that it comes at things from a blank piece of paper, whereas a lot of us are restricted by what has already happened and the framework that we already have in place. We're looking at that bill that would never pass, so we don't even go down the path of trying to do it. We believe that advertising is the wrong model for the Internet, but we can't even think about how we could turn that on its head. Would it be subscription? How would we even begin to start? Whereas when you're writing a sci-fi show, you can make things up from a completely blank piece of paper, play out the different scenarios, and then we see it. I think the result is that we all need to be a little bit more creative. To really imagine from scratch: if we did this differently, if we started again, what would we put in our educational curricula? Let's not talk about how difficult that would be to change, or that the next review of the curriculum is in 10 years. Let's just say, if we had to do it today from scratch, what would we put in our education systems? I think we could get closer to how the sci-fi movies and shows represent the future.

Neil: That's great. Gillian, I don't know if you're a sci-fi fan or if you have anything to add.

Gillian: I have not been a sci-fi fan in my life, but since I've started hanging around with a bunch of AI researchers for the last four or five years, it's hard not to be. I finally read I, Robot a couple of years ago. In particular, the one story called "Runaround," which introduces the Three Laws of Robotics that everybody's heard about. It's a great story. It actually completely captured what I think is so critical for thinking about how we're going to make sure that AI is doing what we want it to do.

The idea in I, Robot is that the robots have been built with the laws built into them—mechanically, physically, most deeply embedded in their positronic brains—and the rules say that you must not harm a human, you must follow orders from a human, and you must also try to protect your own continued existence. The challenge of the story happens when the robot can't resolve a problem: something has happened that causes the machine to get stuck because two of the laws are in perfect balance. The trick is to try and figure out how to trick the brain of this robot.

It's actually that misunderstanding about the way our legal and normative systems work. I say normative to mean our systems of social norms and morals as well as laws and regulation. That is, that we can't just go and get a list of all the rules and stick them into the machines. I think this has been a misconception in some of the early days of thinking about how we are going to make sure AI does what we want it to do. It's like, "We'll poll everybody for what they think the right choice is when a car has to choose between killing the driver or killing the pedestrian on the street." It's the trolley problem that is all over the place in this area. We'll just get a list of those things and then we'll just kind of feed them into the machine and the machine will decide, but that's not how we actually work.

We actually are constantly looking around and having to work with each other through that exchange of reasons, through that exchange of ideas, and through our legal systems. We have structured systems for saying: if we can't figure it out, we now have a process that we will put it into, and the courts will ultimately decide whether enough care was taken on the road or not. Should those benefits have been given or not given?

That's why I say, "look, the problem is that we actually need to be not thinking about how we are going to embed values into our machines." Our values are constantly evolving. They're different in different communities. They're going to be different in 15 years than they are now or in five years.  Shingai started us off with identity and different cultures around the world. We want to build machines that can navigate our normative world, our world of what's okay and what's not okay, in the same way that we were building them to navigate physical space. That's what a competent human is able to do. I think that's what a competent machine is going to have to do. That's my little bit of science fiction.

Neil: Nice. One observation I would make is that in sci-fi, we often talk about artificial general intelligence or general superintelligence, whereas for the foreseeable future at least, I think we're thinking about AI as tools that humans can use that are super intelligent only in narrow ways, the way a calculator is super intelligent at arithmetic. That's one difference, I think.

Great. Let's look at some of these questions here. Why don't we take the one on the top right again, which is about AI ethics, diversity, inclusion, bias, and representation. I think you've both spoken to this. I'll just give you each a chance to amplify a little bit, if you like, or to bring a new idea in. How high is the risk that we will be creating bigger societal chasms? What are some practical things that we can do to address this?

Shingai: I can get us started on that. The risk is great and in fact, in many ways the ship has sailed. I'll just give you a quick anecdote, which is, if you are light-skinned, if you consider yourself white, how do you know if your children are sick or if children around you are sick? Somebody might say they are flushed, they have a red nose, et cetera. Well, if my kid was having a playdate with your kid, you might struggle to identify those same markers of illness in my kid. Therefore, you might conclude that both kids should go to school today because maybe one of them is just fussing.

Now, take that micro example and put it into a firehose of technology, and then pump it into apps that we are all using on all our mobile devices, and see the consequences that might take place. No one is racist. No one has set out to create a bad system, but inadvertently, just by virtue of the context that we have and the lack of awareness of the differences and experiences of others, we might end up with technology that discounts whole groups of people. I'm going to use the same example of illness in dark-skinned people. If you cannot see my face flushing, if you cannot see that my nose is red, that my eyes are puffy, then you're going to conclude that Shingai is fine. That's really at the heart of it.

The people that have predominantly been involved in creating these technologies have been of a homogenous group. What we need to be working on is that representation and that inclusion. It's not just about representation in the datasets as we create the technologies. It's about representation in the models themselves because often we start with hypotheses. You don't know what you don't know. The range of the full spectrum of possible hypotheses you can dream up is based on your experience and your context. Without representation in the group or in the creation of the technologies, we have bias in existing technologies. I'll park it there.

The fact that the ship has sailed is not all doom and gloom. I do see hope. I do think it's possible to resolve, but the rate at which we are trying to fix these issues, the rate at which we're trying to get representation in data sets and in the teams and in the people developing the technologies is not nearly fast enough. That's one area that we can absolutely invest more time and resources into making that happen.

Neil: Shingai, you're referring to inherent or systemic, but unintended, biases. There's another question there about China's social credit system, which is more of an intended bias. Gillian, you mentioned that when you were talking about the search algorithms in China and the US. Any comments on the question around bias and discrimination? What can we do to deal with it? Also, any reflections on the Chinese approach in terms of a system of social credit, which is more the intended?

Gillian: Yes. I want to say that the bias that we are seeing in our current algorithmic systems should not be surprising to anybody. It's the result of grabbing cheap data. We're seeing this explosion in AI because of a particular kind of AI, which is machine learning based on neural networks, where you feed in a lot of data. It discovers patterns and then it's able to predict those patterns. I keep looking at that and saying, "Who thought that was a good way to build a decision-making system?" That's just a way to reproduce a set of patterns. And you know what? It's better at seeing hidden patterns.

There are tons of ways in which systemic bias and discrimination are filtered throughout, and lots and lots of kinds of bias and discrimination. That's because we're building these models on cheap data. Here is a set of outcomes and decisions. Here are my hiring decisions from the last 10 years. What you want to do is to automate the human. But we don't learn how to make the right hiring decision just by looking at what the company did in the past. As a human, we learn about what our goals are and we think about what we're trying to accomplish. We understand the legal and regulatory normative environment in which we operate. No human is going to sit down and say, "Go through all the paths. You mostly hired men in the past. Okay, just show me the men."

That's just not what we do in human decision-making. We haven't actually automated human decision-making there. We've just done this weird thing, which is we've created a machine that will reproduce what we've had in the past. That's not how you do decision-making, particularly in an area where we have lots of normative and legal constraints. We say, "Well, you can't base it on that decision, that aspect or this one." I feel like the problem we're trying to solve is built into the decision to use this particular kind of artificial intelligence and machine learning technology to automate decisions. The more expensive way would be to create a dataset where you had judges or humans making decisions in the way that you have verified you want humans to make decisions. To say, "Here are a thousand files. Please make the right decisions about these thousand files." That's an expensive dataset. I think this is a problem of cheap data. Maybe we should be spending a lot more money on just getting the right kind of data instead of trying to fix what seems like a faulty premise in the first place.
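
As a rough sketch of the "cheap data" problem described above (all names and numbers here are made up for illustration, not anything the panel built), the toy Python below "trains" on a decade of skewed hiring outcomes and simply learns to reproduce them. The curated dataset at the end stands in for the more expensive option Gillian mentions: labelling a set of files with the decisions you have verified you actually want, and training on those instead.

```python
from collections import Counter

# Hypothetical historical records: (group, hired?). Past practice mostly hired men.
history = [("man", True)] * 80 + [("man", False)] * 20 + \
          [("woman", True)] * 10 + [("woman", False)] * 40

def train(records):
    """'Learn' the hire rate per group -- a stand-in for any pattern-matching model."""
    hires, totals = Counter(), Counter()
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired          # True counts as 1, False as 0
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Recommend hiring whenever the learned historical hire rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                    # {'man': 0.8, 'woman': 0.2}
print(predict(model, "man"))    # True  -- the model reproduces the past
print(predict(model, "woman"))  # False -- the model reproduces the past

# The "expensive data" alternative: a curated set labelled with verified, wanted decisions.
curated = [("man", True)] * 50 + [("woman", True)] * 50
print(train(curated))           # {'man': 1.0, 'woman': 1.0}
```

The pattern-matcher here is deliberately trivial, but a more sophisticated model trained on the same historical outcomes would be rewarded for finding the same pattern.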

Neil: It's interesting. I guess the public sector should be a target-rich environment for the kind of data sets that you're talking about because we make administrative decisions every day, whether it's to award grants or to deliver services or to make other administrative decisions. Presumably, the public sector has all kinds of training data that it trusts. Now, I'm sure those aren't free of bias either. Certainly systemic bias is hard to get at, but at least it's not a complete free-for-all or the cheap data that you're mentioning. Gillian, that's fascinating.

Why don't we put the Wooclap questions up again and see if we can tackle more? We're on the homestretch now. What I'm going to do is put the Wooclap questions up. Shingai, Gillian, if you want to just turn your mics on and if there's anything there that catches your eye. Let's just open it up.

Shingai: Can I jump in with one on employment? "Replacing people in jobs with AI computers is problematic." I think, again, if we take that sci-fi view of starting with a blank piece of paper, then it opens up more opportunity for changing the scope of the future of work. Whereas if we take it as people are in existing jobs and they will continue to do those existing jobs for the foreseeable future, then any technology that comes in and threatens that system is going to be perceived as replacing people. The idea that AI will automatically replace people is based on the view that the past will continue as it has been. But if we open ourselves up to the idea that the world could radically change, we could all end up doing jobs that we didn't even dream of. My original job title at the Vector Institute was made up. It didn't exist. We didn't have somebody who came from a data science, business, and education background. Having that openness to what else is possible, and really applying ourselves to thinking about what the future could look like, makes it not a zero-sum game of AI versus people.

Neil: Great. Gillian.

Gillian: Yes. Let me address the question: could AI be implemented in the management of a future pandemic based on the actual situation? If yes, what are the challenges? I'll start off by saying, yes. There are questions of how we regulate against harmful effects. I think a very big issue is that our regulatory challenges are stymieing the development of the use of AI for great value. COVID is one. That's actually something that I've been looking at throughout the pandemic. One of the big challenges we're facing is our data governance structures—our rules around how information and data are handled. Our privacy regulations are really not designed for the digital world. They're designed for a paper and records world. They're not designed for an environment where you have so much data.

We're focussed on an idea about de-identification, which is really not possible in a world of massive amounts of data. We've got IP protections. We've got all kinds of things. Most of our data is sitting in silos and is not available to track outbreaks in a pandemic. I've been working with the Rockefeller Foundation on this question. They would like to build a pandemic sensor system. The big challenge is: how are you going to get a data governance structure in place that people are willing to trust? They should be willing to trust it, in a sense. Most of us are happy to share our data to track pandemics. We just don't want it being used for something else. I see that as a governance challenge. How can we regulate that data so that we can all trust it? I gave you the data in order to fight the pandemic. I did not give you the data to sell me stuff or give it to the police or tell my neighbours. How can we get data shared for those purposes much faster, much easier, and in a trusted way? Our existing approach to regulating data is not helping there. We have lost the value of AI in responding to the pandemic to date.

Neil: Wow. We could have another virtual café series just on that subject, but we're out of time and I just have to thank you both for a fascinating conversation. Thank you for taking our questions and thanks for being so generous with your time today.

Thank you as well to everyone who's participated in the session. Thank you for your questions. I'm sorry we didn't get to more of them. I do want to let you know that our next virtual café event is on March 19th. We'll be talking about space exploration. We have two leaders in this area: Lindy Elkins-Tanton, who is from NASA, and Dr. Sarah Gallagher from the Canadian Space Agency. Registration is open and I invite you to access that through the Canada School of Public Service website. Thank you once again for a great discussion on artificial intelligence. We look forward to seeing you at our next event on March 19th. Take care, everybody.
