Transcript: Innovate on Demand, Episode 1: Field of Broken Dreams
Todd: I'm Todd Lyons.
Natalie: I'm Natalie Crandall.
Valeria: I'm Valeria Sosa.
Sean: And I'm Sean Turnbull.
Todd: And this is the Innovate On Demand podcast.
On this episode of the program, we go back to basics. What does innovation even mean? Is it the magic bullet for any problem that some people make it out to be? Is it something we should even be attempting in the public service, given what's at risk? Or is it the latest shiny thing we feel pressured to chase? We were past due for a frank and fearless conversation. Natalie, Valeria and our guest were happy to oblige.
Valeria: Welcome, Sean! So Sean, let's just dive right in. Why don't you tell us what you think about innovation?
Sean: I'm a little exhausted of the term "innovation".
Valeria: I'm shocked! [laughs]
Sean: Yeah. Spoiler alert: it's going to be a little bit dark. So, every job I've had in government so far has had the word innovation in it. And I'm a little bit jaded when it comes to government bureaucracy innovation, based on those experiences. I do want to say from the start that I don't want it to be super negative and say people shouldn't try to innovate and that we should just continue along with the status quo. Obviously, there are many examples where we need to try new things and be more adaptive in government.
Valeria: And you're also not just jaded when it comes to innovation. [laughs]
Sean: No, no, that's just a general sort of approach to life for me.
Sean: Truth. But I think with government innovation, we oftentimes try to import ideas and concepts from other places, mostly the private sector, especially the startup sector. It's like the cargo cults that built airports out of cardboard boxes and then expected airplanes to land: we think if we build an innovation lab, then innovation will happen. It's the "Field of Dreams" innovation strategy. And (A) I don't think we take seriously the lessons that we could be learning from the private sector, and (B) I don't think there are necessarily a lot of lessons to be learned from the private sector. We aren't the private sector. And we need to make a strong case about why we need to innovate in a certain field or with a certain tool before we actually do it. Just because it's there to try, and just because it's a new idea, is not good enough. And so that's one of my main takeaways from being in the government innovation space, both in terms of innovating with new interventions to work with the public, but also in terms of innovating within our own corporate services, and internal innovation in the way we think about and see the world.
Valeria: Can you give us a for instance?
Sean: Sure. Blockchain is really interesting. It's a very cool tool. Distributed ledger systems can help solve real problems, and are solving some real problems. And so we should know about blockchain, especially from a regulatory perspective. We should understand it. We should add it to our toolbox. But we shouldn't try to shoehorn blockchain solutions into problems that don't exist. We shouldn't take a problem that can be solved with just an Excel file and try a blockchain solution, just because it's cool and new. And similarly with design thinking: a very cool way to approach the world, but we can't design-think every single problem. So I do sometimes worry that we fall into the trap of being tool-driven in our innovation, where we see a solution and then we start looking for a problem. I think the best innovations are problem-driven. We see a real problem in the world. This is a problem for Canadians. I'm less interested in problems for government employees, though those are real problems too. But when there's a real problem for Canadians that we can fix with an innovative tool, that's the best kind of innovation. And there are very cool examples of that, and you've probably talked to some people, or will talk to some people, who have great examples of that. But when we see something cool in the world and we just want to try it out, that's when we get into some problems, I think, in terms of not really delivering real value for your organization, not really delivering real value for Canadians. And this cuts to the core of what annoys me about it: you don't get focused on outcomes anymore. The innovation itself becomes the outcome. Getting the money out the door, getting the innovation out the door, becomes the outcome, becomes success. And you are right to think that way, because you will get the ADM Award if you get that innovation done. But did you produce any real, meaningful outcomes? So that's a lot of it.
That's the core of my exhaustion.
Natalie: So if I were to maybe try and paraphrase, and please correct me if I'm not capturing it: not everything necessarily lends itself to an innovative lens right now. And so maybe we should put a bit of time and thought into where we should be innovating? Would you say that that's fair?
Sean: Yeah, definitely. And this gets back to the definitional question of what innovation is, which is both exhausting and important to think about. Innovation is trying something new. And when is the best time to try something new? When the old thing isn't working. And what evidence do we have that the old thing isn't working? In any given scenario, our evidence in government ranges from non-existent to kind of crappy. We rarely have good evidence about what we're doing right now. So how can we even presume to think an innovation would be necessary? We don't even know how fast the car is going. Does it make sense to put a new engine in there when you haven't even built a speedometer? And so I think a lot of our innovations are engine-focused innovations: we've got to get going faster. And I always get frustrated that we don't really know our original speed. We have no baseline to compare this to, and those questions aren't even considered necessary, because it's just, "Oh, cool, it's an electric engine. Let's get it in there."
Natalie: So, I have a question for you. I've been thinking a lot about this, thinking of you coming on as our guest Sean, with your background. How do you think that experimentation can actually contribute to innovation within the public service?
Sean: It makes me very happy that you asked me that question.
Sean: That's a very strong host move, I would say. So yeah. And that's been part of my innovation journey, too: I started in innovation, and then I ended up for the last year at Treasury Board working on the experimentation file. And it was for that reason. I think experimentation is like Innovation 2.0, or it is a necessary precondition to innovation. Because experimentation -- capital-E experimentation, not when people just use the word experimentation to mean innovation, because it doesn't mean that -- the rigorous experimentation of testing something really opens up a ton of space for you to say, "Okay, we think this program is causing this change in the world. But we have to do an experiment to test it: test it against another way of working, or test it against nothing at all." Sometimes, maybe, our programs are worse than doing nothing. And once you start thinking in that way, you start questioning your assumptions. We thought sending this letter would get this reaction from Canadians, or we thought this website would get people to click on this. And then you have to think, "Okay, we need to measure this now. We need to compare it to something." And when you find out it's not working the way you wanted it to, or it is working but you think it could be working better, or it's working but compared to an international benchmark it doesn't look so great, that's when innovation can come in. Because then you've actually got a problem that you can solve with an innovation. You have a hypothesis you can test. You can say, "Look, we're doing it this way. That led to this level of client satisfaction, or that led to these processing times -- whatever your metric is that you want to change in the world. We think doing it with a design thinking approach will shorten that processing time, or will increase client satisfaction."
We think moving to blockchain will help the whole thing run faster, while not sacrificing the timing. So experimentation -- rigorous experimentation, even just thinking it through, not even running a full experiment, but bringing an experimentation mindset -- I think helps better prepare you to have a more meaningful innovation conversation, because it forces you to ask the right questions, to draw your logic model, to interrogate your assumptions. And that opens up space for meaningful innovation, instead of just slapping an innovation down because it's a new, cool thing to do.
Valeria: So let me ask you, what do you think would be the next step? Or how could we go about educating people on this front? Because I think we've had this conversation before. I think after hearing you speak on the subject so passionately, so many times, I started observing in people, the fuzziness and the blur between innovation and experimentation and how even entire departments misinterpret these words and their meaning. So what do you think is the solution in terms of working with a common understanding of everything? Because we've been trying for years, really.
Sean: I think it's really, really hard. I think there's like a depressing answer, and then a more hopeful one.
Valeria: [laughs] Depressing first.
Sean: So, the depressing one… the depressing and insulting one -- but it's insulting to me, so I can say it -- is this. I was doing a roundtable on experimentation, and someone -- I think it was at Global Affairs -- said we have a "liberal arts problem" in policymaking, where we hire people who don't come from a rigorous empirical field. Ninety percent of those people, ECs, have that background. They weren't engineers. They weren't even psychologists, who at least have to understand stats. If you hire that kind of person, then that's the kind of innovation and experimentation you're going to get: liberal-artsy, fuzzy experimentation. So the depressing answer is that there is maybe a systemic challenge in the way that we educate policymakers and design the programs. Literally going back to high school and then universities, people are not empirical enough. They don't understand the stats. That's not to say empiricism is the only important thing in the world. But a weakness is an overused strength. And I think we're really strong on the liberal arts ideas and the concepts of philosophy that we need, and the qualitative understanding of the world. We have that in spades. But what we might need is some more quantitative understanding. And that can be achieved, other than by reforming the education system -- which is kind of a crappy answer to give -- at a departmental level: with Free Agents and the PCO Fellowship Initiative, and people trying to make changes to staff rapidly. Talent Cloud, maybe, bringing in new kinds of expertise much more easily than we could before. So that's one way of doing it. But then there's also the individual level, and maybe that was more the point of your question: how do we convince individuals who might not be so familiar with it?
And that one, I don't know what the answer is, other than to say I think we need to be more thorough, experimentally, with the way we do this outreach. Instead of just sending out an email or organizing a workshop, try to be a bit more rigorous in thinking through what change we want to see in the world, and test different approaches to that change. So it's a meta point, but we have to be experimental with the way we support experimentation as well, or else what kind of hypocrites would we be?
Natalie: I find that really interesting, because I think it'd be fairly easy to extrapolate the same situation around data and data-based decision-making right now. The world we live in, the world we work in, the environment we work in, is changing significantly. So how do we make sure we've got those skill sets? Because I agree: we talk about data all the time, but I don't see anything that lets an individual take that data and say, "How do I know that I've understood this data? How do I know that I'm actually maximizing the use of what's available to make better decisions, to better understand the situation, to better design an experiment, or identify a hypothesis, or whatever it is that you're trying to do?"
Sean: Yeah, I totally agree. I think data is an overlapping Venn diagram with experimentation and innovation. And we are opening up new frontiers in terms of the level and type of data that we have access to, to the point that now we probably have too much. And so again, I think it goes back to the level of expertise we need in government. We need more data scientists. I push Comms teams a lot on why they don't run more experiments, because they have access to so much data. If you're sending an email out to 10,000 Canadians, you can send two versions, randomize which version each person gets, and see which one better leads to the outcome you want. And those kinds of experiments are quite rare in Comms teams, even though they are the easiest to run. If you gave every Comms team a data scientist, you'd see a lot more of these things. But we have no budget for that. And frankly, even if we wanted to, there aren't enough data scientists with some Comms knowledge out there; you couldn't even hire that many. So again, I do think it's an expertise problem. But then there's also an individual question of what any individual can do to get better at using data. I think some people have started to answer that question. We do have that data strategy that is out, and things are happening around it, I guess. I don't really know.
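[Editor's note: the randomized email test Sean describes can be sketched in a few lines of Python. This is a minimal, illustrative sketch only; the recipient names and click counts are hypothetical, not from any real campaign.]

```python
import random

def assign_variants(recipients, seed=42):
    """Randomly split recipients into two email variants, A and B."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    groups = {"A": [], "B": []}
    for recipient in recipients:
        groups[rng.choice(["A", "B"])].append(recipient)
    return groups

def click_rate(clicks, sent):
    """Fraction of a group's recipients who clicked."""
    return clicks / sent if sent else 0.0

# Hypothetical campaign of 10,000 recipients, split at random.
recipients = [f"person_{i}" for i in range(10_000)]
groups = assign_variants(recipients)

# After sending, tally clicks per group (counts here are made up):
rate_a = click_rate(412, len(groups["A"]))
rate_b = click_rate(538, len(groups["B"]))
better = "A" if rate_a > rate_b else "B"
```

Because assignment is random, any sizable gap between the two click rates can be attributed to the email version rather than to who received it; a real analysis would also test whether the gap is statistically significant before declaring a winner.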
Valeria: I think this brings up an important point. I've also seen something [that] doesn't measure what it says it's going to measure, but people are convinced that it does. You know what I mean? I've seen it so many times. We're like, "Really? I don't think this is actually measuring what you think it is." Yet we accept it. And as you're talking, what I'm thinking about is building diverse teams, and how important that is. Consciously constructing a diverse team so that you have those knowledge gaps filled, and giving people that opportunity. And that's part of the experimentation too: figuring out how we can put some rigour behind building those teams. Because I believe -- though I don't have any evidence to support it at all -- that the results would be much better.
Natalie: I feel like you're trying to define a unicorn. Some mythical creature who's both a data scientist and a master storyteller.
Sean: Yeah, I think we know story is important. And sometimes when I go on this rant, people think…
Natalie: So this isn't your first time on this rant?
Sean: No, no…
Natalie: Excellent. [laughs]
Sean: It won't come as a surprise to you.
Valeria: Oh no. Anyone who knows Sean, knows his rants. [laughs]
Sean: I'm not trying to put myself out of a job. I don't think it should only be the nerds with glasses doing the Moneyball advanced stats of government. We definitely need that qualitative storytelling side of things, and it is more about the diverse teams. But diverse teams are one way to do it, and we can look at others. We can learn some lessons about experimentation from other fields as well: think about how they do it on the medical side of things -- not that we want to reproduce that system necessarily. We can take lessons from how they do it in the UK, where they have a trial advice panel: a list of academics who are willing to give their time to government to help design experiments. That's a structure they have built. We can look at the private sector, where they have experimentation teams -- much in the same way we have innovation labs in government -- whose real focus is that: they have the data experts, they have the empiricists, they have the people who can do experimental design. They have the ethicists, though that's a separate question. And they can really build meaningful, useful, ethical experiments within their respective organizations and learn from them to improve -- whatever -- the way you sell pet food, or, in the UK, the way that government works.
Valeria: Random example. [laughs]
Sean: There is. I think it's Petco, or some company like that, that has a real strong experimentation side. It's very impressive. Just to say there's more than one way to do it. And I think at first, before we make the systemic change in the way we educate and hire public servants, it might be more the centralized approach, which has its problems, but I think it is working in some departments. At ESDC you see it working: they do have a team that is innovation- and experimentation-focused and can deliver wraparound services to help people solve a problem, and they're getting some good outcomes from that. And we're seeing other departments trying to replicate that model. The CRA has a great team as well. IRCC. So we do see things happening. It's a question of… It's like any change management. It's got to be from the ground up. It's got to be at the mid-level. You have to incentivize it at your leadership level. There have to be champions for it. There have to be real resources put aside for this stuff. So I think it's like any big problem that we face: if we want to tackle it, we have to tackle it at every level, with multiple vectors.
Natalie: So would you say, then, that it's fair to say we need to mainstream some of the concepts of innovation, experimentation, data, all sorts of things, within our everyday work culture so that we can utilize them better? Would you say that's fair?
Sean: Yes. But… I think sometimes we over-mainstream the innovation piece of it. Not everything needs to be an innovation. Some things work. Government is actually good at some things; there's no need to innovate in them. Now, the Foresight guys might argue you've always got to be one step ahead of the problems on the horizon, and that's a fair comment. But maybe it's the lens I have, because of where I've been working. I've been working in this innovation bubble, so this might not be true if you haven't been in that bubble, or if you're not in the National Capital Region. Maybe there's something about my experience, but I do feel that we've done a pretty good job of convincing people on the innovation side of things. And we still have a lot of work to do on the data and experimentation, even evaluation -- even the rigorous evaluation piece. I think we found a way to hive off evaluation into its own ecosystem that we don't really need to interact with as policy and program people, which defeats the whole purpose of evaluation. And that's its own podcast entirely. But we found a way to take a very useful concept and bureaucratize the usefulness out of it with the way we do evaluation. I hope evaluation people don't get mad at me for saying that. I think a lot of them would agree.
Natalie: I think they'd feel supported. They would like to be much more involved. I think the people who work on evaluations firmly believe that it needs to be an integral part of the entire process, whether it's a program or a project or whatever it is. But if you are not incorporating everything that you learn through your evaluations and tweaking and adjusting, then what's the point of doing it?
Sean: What's the point? And we talk about building diverse teams. One way we could really quickly do that, without having to change very much at all, is break apart the evaluation shops, take those people, and put them in the program sectors. Just say: stop doing these three-to-five-year post-hoc evaluations mandated by Treasury Board, on questions of questionable relevance to the program and the people on the ground, and start helping people design programs from the start that are based on existing evidence and that help generate new evidence. Those are the two important things. Once we have better evidence, that feeds into the experimentation and innovation pieces, because then we can know this program isn't working the way we thought it would. But right now, evaluation is consulted for half a second before a TB sub goes through, or an MC goes through, and then they're brought in three years later to run the large evaluation. That's too late. So many decisions have been made. And this goes for data as well: if you're not capturing the right data, it's pretty hard to run an evaluation five years later.
Valeria: And I think it's not only on a big scale. On a small scale, it's almost like a muscle -- a brain muscle -- that we need to build within teams, because people just put that aside. They feel like there isn't any time, that it's a luxury. Evaluation is a luxury. You put something forth -- let's just say you put an initiative in place within your team. You have to talk about it afterwards. Did it work? Did it not work? And it's not a luxury to have that discussion. It's a necessity. Evaluation of anything that we do is a necessity.
Natalie: Some of these topics come up often. If we want to talk about innovation, if we want to talk about experimentation, if we want to talk about these things, we need to take a step back and take a bit of time to think about things, to be strategic. So we keep coming back to similar points in our conversations. And one question I would ask is this: it's one thing to talk about innovation for people who consider themselves in, and work in, the innovation space. But our public service employment satisfaction survey this year asked every employee what their views are around innovation, and whether they felt they were encouraged to innovate in the workplace. Which really got me thinking about the average public servant -- and I would consider myself to be exactly in that boat. I'm working in something that's called an innovative space for the first time in my career, but I don't feel like it's the first time that I'm applying an innovation lens. So the question I would ask is: what fundamental changes to our environment, to our work culture, could we make to allow those concepts, and that understanding of what it all means, to become more mainstream in the public service? I don't mean mainstream in the sense that an innovation lens needs to be applied to everything, but more that this kind of articulation is something that's within reach of all public servants.
Sean: I think that's a really hard question, and I think we're wrestling with it. That is part of the problem when you approach these things by trying to create a centre of excellence: you undermine the mainstreaming side of things by saying only these people are doing innovation, which, of course, couldn't be further from the truth. And so I think it's really important that we don't marginalize innovation the same way we have sadly done with gender-based analysis, where it's something that's nice if you have time for it -- you rewrite the paragraph after you've written most of the MC, and you worry about it post-hoc. It's not that. It needs to be an integral tool that you use to evaluate ideas from the start. All of these things we're talking about -- innovation, experimentation, data concerns, evaluation, gender, reconciliation -- are not solved by creating another annex to the Treasury Board submission; they're solved by mainstreaming them into everyone's toolkit. I don't know how to do that. I do wish we were more evidence-based when we try to do things like that, because there's really good literature out there. Another issue I have is that we are pretty myopic in the way we think about things. We barely even look at past evaluations of programs that are similar to what we do. We almost never look at the wider landscape of evidence for a given intervention or a given field. There's a lot of work out there on how to do change management. There's a lot of work on how to become a stronger organization when it comes to data. It's being generated by governments and NGOs, but by the private sector as well. And so I think we need to be better at taking a breath and asking: what does the literature say on this? A meaningful literature review of how to negotiate change management in these fields.
And some of it's the same old stuff: you need resources and champions, and you do need some way of tracking it and incentivizing it. But I think we could be a little bit more innovative in the way we support innovation, and a little more experimental as well: saying, look, we know these approaches worked in this situation, let's try them. And let's make sure we're measuring something, to know if it's working or not. And let's make sure we check in a year to see if it's getting better. And if it's not, we'll try something else. Again, that's not a very helpful answer, but that would be the change I would like to see: an evidence-based approach to the way we do innovation. I haven't seen too much of that so far.
Natalie: Thank you very much.
Valeria: Yes. Thank you.
Todd: You've been listening to Innovate On Demand, brought to you by the Canada School of Public Service. Our music is by grapes. I'm Todd Lyons, Producer of this series. Thank you for listening.