Practical Advice About Evaluations for Program Managers (FON1-V51)

Description

This event recording features a panel of evaluation and program leaders sharing valuable insights to help program managers actively engage in and benefit from evaluations that inform decision-making, improvement, innovation and accountability.

Duration: 00:59:57
Published: June 25, 2025
Type: Video


Transcript

Transcript: Practical Advice About Evaluations for Program Managers

[00:00:00 The CSPS logo appears onscreen.]

[00:00:06 The screen fades to Derek Armstrong.]

Derek Armstrong (Canada School of Public Service): Hello, everyone. Welcome to Practical Advice About Evaluations for Program Managers. I'm Derek Armstrong, an executive director here at the Canada School of Public Service, and I'm going to be your moderator today.

We are coming to you today from Ottawa, Ontario, on the traditional territory of the Anishinaabe Algonquin. Some of you are joining us today from different parts of Canada and working on different traditional lands of Indigenous peoples, and I encourage you to take a moment to think about and honour the land and to seek to understand its long history.

Before we begin, let's see who is in the audience today. We invite you to use the Wooclap platform to interact with us. You can join today's session now at www.wooclap.com. Enter code A30 to participate or use the on-screen QR code to answer the first question. Take a moment to answer the question about how your work relates to evaluation.

At the end of the session, you will have the opportunity to ask questions. To send in a question, again, go to Wooclap, enter code A30, and find the bubble chat icon at the bottom right of your screen.

Program evaluation is an important function that affects most aspects of government operations, from policy implementation to service delivery. Many of you have been directly or indirectly involved in the operation of a program and can understand the magnitude of the questions that can be raised when it comes to whether the program has worked as intended. Today's discussion focuses on approaches that will help you prepare for the evaluation of a program and benefit from its findings.

Joining us today are voices of experience from both the program and the evaluation communities.

[00:02:41 Danielle White, David Peckham, Kimberley Accardi, Jérôme Mercier, Gemma Irwin, and Benoit Cadieux are shown sitting next to Derek Armstrong.]

Our speakers bring several important perspectives and leadership lessons from the program manager and evaluator points of view.

First, the Assistant Secretary of the Treasury Board of Canada Secretariat's Social and Cultural Sector, and former Chief Audit and Evaluation Executive in a number of departments, David Peckham.

David joins Danielle White, Assistant Deputy Minister of Strategic Policy from Indigenous Services Canada. Danielle also leads the evaluation function for her department and brings a wealth of experience in policy co-development.

Next, we will welcome the speakers who lead their departments' evaluation teams: Kimberley Accardi, Director General of Audit and Evaluation at ISED, and Jérôme Mercier, Director General of Evaluation at ESDC.

To round out the discussion and share their wisdom from the program perspective, we're joined by two speakers who led their program teams through different types of evaluation. First, we have Gemma Irwin, Director of the Global Innovation Clusters Program from ISED, and Benoit Cadieux, Director of Employment Insurance, Regular and Fishing Benefits Policy from ESDC. Thank you all for joining us today.

Danielle and David, welcome. Let's start with evaluation from a Government of Canada perspective today. David, you've had a long relationship with the audit and evaluation function from a departmental perspective, especially at Indigenous Services Canada. As a leader now in a central agency, how would you characterize the value that evaluations bring to government programs?

David Peckham (Treasury Board of Canada Secretariat): Well, thank you for the warm welcome and for the invitation to participate in this learning event alongside my esteemed panelists here. I mean, I would say I've had the privilege of seeing evaluation from a lot of different perspectives. I've been a student of evaluation at the University of Ottawa. I've led programs that have used evaluations. I've led evaluation functions. And now, I take Treasury Board submissions to the board and present them to ministers, which has (inaudible).

So, it's really a pleasure to participate in an activity whose objective is to help program managers get the most out of evaluations and to become leaders in terms of culture and results in the organization.

I think the first point I'll make is, sometimes evaluations are seen as just an accountability or reporting tool, or a necessary thing to do that's not really related to day-to-day program operations, seen almost as a burden. It's my really firm belief that a strong evaluation function is about learning. It can promote accountability but it's also about improving the programs that we have that serve Canadians so that your clients will feel and see the difference in those programs. So, I'm a big believer in evaluation even though I no longer work in it, and I'm a big believer in its usage, but maybe I'll turn to my good friend and colleague here, Danielle. What do you see as the value of evaluation?

Danielle White (Indigenous Services Canada): So, I think I come at that question from a couple of different perspectives, one as the head of evaluation for ISC, also as the ADM of Strategic Policy. So, I see the value in terms of the longer-term vision and policy planning for the department but I also have a couple of programs under my remit, both of which are going into an evaluation in the coming year. So, I think in terms of the program piece, both of the initiatives that we're evaluating that fall under my responsibility are fairly new. If they're sitting in strategic policy, chances are they're innovation-type initiatives where it was a huge heavy lift to get them up and running and implemented. But inevitably, some things are going to go well and others are going to go less well, and it's possible that some of our initial assumptions on how change was going to occur were off.

So, really, I see the value, from a program manager's perspective, of really being able to step back and ask some fundamental questions and get some objective advice so that we can continue to improve what it is that we're trying to accomplish and how we serve our partners. It's that learning opportunity and that continual improvement opportunity, but I also see, from a strategic perspective, the value of our evaluation work. We just completed a synthesis where we looked at all of our evaluations going back over a period of the last seven, eight years, thousands of key informant interviews, and we were able to pull out some very common trends that will factor into our medium-term planning, into our human resources planning. One of the things we found as a real challenge when it comes to delivering services in community is the high turnover of our staff, and we saw, or started to see, it not just in a particular program or a particular sector but program after program after program, along with challenges in terms of the lack of flexibility of our terms and conditions. So, we're starting to identify some really systemic issues that we can work into our future planning.

Derek Armstrong: Great.

David Peckham: Thanks, Danielle, good to hear what you're up to at ISC, and I think, building on your points, the value of evaluation is really around the use. It's the findings and the recommendations, and making those, I would say, as actionable as they can be is really helpful for the program, and I think also helping you figure out things like how do you reach program clients? How do you run the program in a more efficient way? How do you provide evidence to support requests for additional resources? These are all kinds of things that program managers can get out of evaluations, but to do that, it's really important that program managers explain to evaluators what they want to use the findings for. So, if you imagine the world of an evaluator, you're going into something, you may have no experience in it, you may not have delved into that program, you're as good as the information you get from your colleagues working in that program, and that for me, in my experience on both sides, makes or breaks an evaluation really in terms of its usefulness.

Also, when the evaluators understand your needs, your deadlines and your objectives, they can try to adapt the evaluation methodology depending on the situation. So, the Treasury Board of Canada Secretariat strongly encourages evaluators to consider a wide range of approaches and methods when they work with you.

And I think also it's opening the eyes of program managers as to what evaluation can be. There's a bit of a perception still, it's very quantitative or you're not going to get those stories and those kind of qualitative pieces that you might want to think (inaudible) Danielle, but I think being aware of all of the methodologies and the tools in the toolbox is really important.

Danielle White: Yeah, absolutely.

You're right, David. It's really about collaboration between programs and evaluators, as well as the clients or partners who, in some cases, deliver the programs and services, and that's an opportunity. I see it as really a partnership in a sense, without losing our objectivity, our neutrality as evaluators, but emphasizing our common goal. It's really about improving services to Canadians and ensuring the optimization of financial resources. So, in this context, rallying and working with partners are key principles for our evaluation practice at Indigenous Services Canada. In our evaluations, we always seek to solicit community participation and, in some cases, to co-develop the terms of reference for evaluation projects to ensure that we ask the right questions and respect what our clients have been through. I don't like the word "client"; that's more what they say. In fact, it's the program managers who have their relationships with partners and key stakeholders. So, you really have to work closely with them. And I think that the approach of rallying the groups targeted by our programs is important, especially in the context of Indigenous Services, but I think it's also relevant for any Government of Canada program, whether it's services for new immigrants, Canadians who benefit from Employment Insurance programs, other social benefits, seniors who benefit from certain public health initiatives, and so on. They are the experts and they have the strongest voices in the context of our work.

David Peckham: Thanks, Danielle. That's a great point. A couple of things I wanted to raise, I think the first is around data. Getting good data has been a struggle since I was in evaluation in 2009, and what are we now? 2024? So, I don't think that's changed, but I think the need for good data also hasn't changed, and when I think about when I present to ministers at the board, for example, to have those kind of killer stats at my fingertips as to how many clients have been served, what difference did this make, was this the best way of doing this particular program when you come back for a renewal, those are really the things that decisionmakers want to hear, but I think data's not easy. It's not easy to collect, it's not easy for program managers, but considering that very early on, designing your data requirements early on, integrating those into your program, and working to make sure your performance information profiles, etc. are updated is really important.

And that's a tough task, I've been there, but when it comes to evaluation time, it really makes an absolutely huge difference in terms of the effectiveness of your evaluation, and I think that goes for all sorts of data: performance measurement data and the data that you collect on the front line. The other point I just want to note is something that's not usually realized but that I've come to value: documenting a program. If you look at the suite of programs across the Government of Canada, I often find an evaluation is where it's all written down, the history of it, what it's supposed to do, and even that background documentation is actually extremely useful and not often recognized as such. So, I just wanted to put that on the table too. But Danielle, how do you feel about evidence and evaluations and what we have?

Danielle White: Yeah, so I don't disagree. I think quantitative data is an absolutely critical element in telling a program's performance story. The Chief Data Officer for the department is also part of my sector, and we work very closely with our results and delivery team at the front end to make sure that the performance indicators are right. We provide that input at the front end in Treasury Board submissions so that we have a hope that, five years down the road when we're evaluating a program, we have good solid data, and I think there's huge potential in terms of some of the new technologies to improve our ability to link to data sources and mine more unstructured data, and we have those foundational pieces like the census of population. We have the surveys of Indigenous peoples in our context. And so, you can really do that longitudinal study over time.

But I also want to really highlight the importance of qualitative data and how that's used in our evaluation work. We hold a lot of individual and small group conversations with partners, with key informants, and those really also can generate very rich sets of qualitative data. In our work, we seek to apply evaluation methods that are more centered in Indigenous worldviews and recognize and appreciate the cultural significance of some of our programs, including water, clean water, and making sure that we take that into account in our work, and with that comes different ways of gathering data and different forms of data, and I'll just give you an example. We had a team that went out into the field recently, working with a group of program clients, and they had prepared, as part of the work, sort of an artistic expression of what the program meant to them.

So, there are lots of different ways of gathering data and it is a valuable and meaningful source of evidence, and I think we have to be mindful that… of our own biases when it comes to what constitutes good data, and that's really something we've learned from our work, our work with Indigenous partners on how do we tell the story, and recognizing, yeah, what is credible evidence and that it comes in many forms, and I think part of what's driven some of our work in this area and in the context of ISC is this idea… and I'm of mixed Mi'kmaq and settler ancestry and there's a Mi'kmaw elder who's developed this concept people may have heard of in Mi'kmaq, it's called Etuaptmumk or two-eyed seeing, and the idea is you see, with one eye, the best of Indigenous ways of knowing and doing, and you see, with the other eye, the best of Western thinking and Western practices, and you bring them both together for the benefit of all and that's really something we try to apply in our evaluation practice.

David Peckham: Thanks, Danielle, for those insightful points. There are a couple of final things I'll mention in my remarks. If I look back, I think one of the most exciting things that's happened is the gender-based analysis plus, and I've seen its evolution over several years as an analytical tool to support the development of inclusive government initiatives which are not just about does it work but also who does it work for and how much does it work for various groups of people, and I feel like building that in has made a real difference in what I see coming through now, for example, in Treasury Board submissions, in people considering aspects of programming that they wouldn't have thought about before. So, I just want to kind of underline I see that as a really critical and very positive development that's taken place in dealing with any inequalities and serving diverse needs. The second piece is around the quality of life framework which is a set of national and interconnected economic, social, and environmental data and helps departments reflect on how their programs impact people and their well-being, and I think for me, it's about, what is success? What does people's well-being look like? How does that vary in different contexts? How can we measure that? And sometimes asking yourself those questions is really important as both an evaluator and a program manager.

Danielle White: I absolutely agree, especially in the context of GBA Plus as an evaluation tool. It's a lens. Even in the program, it's a lens that evaluators can use to better understand how various client groups can experience our program and to see the variability in terms of access, participation and impacts. It's also always important to consider these factors to better serve Canadians.

David Peckham: Thank you, Danielle.

Derek Armstrong: Well, thank you. I mean, there are some really important and striking points here, and I think one that I'd like to reflect on is this idea that both quantitative and qualitative data can be very important in informing evaluations, not just in terms of the evaluation itself per se but also for understanding programs down the road. So, I just want to thank you both for those great reflections.

This is really a compelling perspective on the role that evaluation plays not only in accountability, but also in introspection, especially with tighter budgets. Lessons learned from evaluations could play an important role in restructuring Canada's programs to reflect the Hippocratic Oath to do no harm, and also in examining the implications of our decisions for the system as a whole.

So, putting this all into practice could be a tremendous challenge or an equally attractive opportunity and I'm delighted that we have a keen panel with us today of program leaders and their evaluation counterparts to dive into the ways that they've carved out success. I had the pleasure of interviewing our panelists prior to the event and they shared many practical insights into their evaluation experiences. Several common themes emerged and we'd like to share them with you today.

Let's turn to Gemma Irwin of ISED first, and Gemma, tell us about the program that you operate.

Gemma Irwin (Innovation, Science, and Economic Development Canada): Sure.

Derek Armstrong: And what tops your good strategy list for a successful evaluation.

Gemma Irwin: Thanks, Derek. Thanks, it was really great listening to you at the beginning because I think everything that you've said really captures what we do in our program. So, I'm with the Global Innovation Clusters program and we recently underwent an evaluation and actually just found out today that it is… we've closed, we've completed progress on all of our recommendations. So, it's always nice and fulfilling to have that done, but the clusters were created back in 2017 really as a ten-year play to develop world-leading ecosystems in Canada, and there's five independent industry-led clusters. And so, this posed a lot of challenges when we were trying to set our program up for success at the beginning. You had five new bodies that had to be organized and then you had to get the right data, the right metrics. And then, you had this added layer where I work where at ISED, we managed the program and then we have these recipients, but these recipients at the end of the day were also… the point of the program was really to get collaboration projects off the ground and spur that innovation. So, I think the biggest theme that I want to share with everyone today is around communication because what we realized early on in the program is we thought we had things right. And then, when we went to start working with our evaluation colleagues, we started to think through, yeah, it's right, on the right track, but this is a ten-year play and we needed to get ready for renewal, and the timing was really great with our evaluation colleagues because we worked together early on, engaged each other.

And I think what I found super useful was being able to ask questions on both sides and meet and discuss and actually dig into some of the issues and how we were going to unpack the data we were seeing. And to be quite frank, I don't think the data we had envisioned at the beginning of the program is now what we're using going forward. So, we were renewed in Budget 2022 and the evaluation played a key role as one of multiple lines of evidence we used when we developed our Treasury Board submission. It helped bolster the case for renewal but to be… it did actually take several years to develop that case. It took several years for the clusters to get off the ground. And then, it took some time for us to get through the evaluation and actually see where we are now, and we're in a completely different place than where we started. So, it was really great working with the team, and I think the last thing I would say is as program managers, some of the program managers, I'm guessing, are watching today, it's daunting when you set out with the evaluation function to figure out what it is you're going to look at, but I think you really need to have a willingness to learn and work with those colleagues and not just say, okay, you asked me these questions, here's my box of information and data, here you go. It doesn't work that way but it's very easy to take that approach. So, you have to keep those lines of communication open.

Derek Armstrong: Thanks, Gemma. So, pick up the phone and be in constant communication is what we're hearing.

Gemma Irwin: Absolutely.

Derek Armstrong: Great. Thank you so much.

Kimberley Accardi, you lead the evaluation function at ISED. Given what Gemma has shared, what areas would you like to amplify or like us to consider in addition to that?

Kimberley Accardi (Innovation, Science, and Economic Development Canada): Yeah, I think there's a few, and David and Danielle brought up the notion of value for money, and you've talked about the current context financially, and evaluation has this wonderful opportunity of making sure that with the finite resources we have, are we investing our time and our energy and our money in the right things for serving Canadians? And so, I think to be able to do that, it goes to mindsets and Gemma spoke to that a little bit in terms of it can be a little daunting to have evaluation come in, and some people are wide open arms and other people aren't so much, and I kind of like to remind people, you would pay huge sums of money for a consulting firm to come in and do the type of work that an evaluation team will come and do. We're going to dig into your data. We're going to look at case studies. We're going to look at comparisons to other jurisdictions, survey participants. There's a lot of rich information that when you're so busy running a program, once it's going, it's very hard to have the time to do those things. And so, I'd really say it's not just about learning and being open and communication but really lean into the process, like invite us, share.

Part of it, for me, is sharing what's important to you. What are your concerns? Are there any elements in your program that you have questions about? It's not just the traditional elements that can be evaluated. If you tell me, "There is something that bothers me, I have a concern, I'm not sure," and we do the right thing with that, it can point us in an additional direction to really bring added value in the evaluation and really consider the next steps for the program itself, and for the investment that will take place. For me, what's really relevant is not just active communication, but open and frank communication on both sides so that we can talk about what's going well and what isn't.

The other thing for me that I really always talk about, and you spoke to a little bit, is that for some of the data, it's really about starting from the beginning, and you talked about working with your Chief Data Officer and what those performance measurement elements look like in the data strategy. But when you're in one of the big organizations, and I'm speaking for ISED where we have all these contribution agreements, it's also about, what's the clarity? So, if you're running a program, what is the clarity with recipients in terms of what the data expectations are? Do we have the same definitions? Checking in to make sure that it's coming in consistently, because you can't re-create data five years later. And so, it's really about getting that good foundation to get the most out of your evaluation, and if you're three years in and haven't done it yet, now's a good time to check and see, but I also really like to reinforce that a strong end to the evaluation starts with a great beginning where we've had those thoughtful processes, and you're not just communicating with evaluation but you're communicating with your results and data teams, and also the recipients, to make sure that you all have the same definitions of what things are so when data comes in, you can use it.

Derek Armstrong: Wow, that's such a great addition to what Gemma brought us. We're hearing some tones here about how that openness to learning and to being able to adapt and evolve, even as you're going through the evaluation, is so important to meeting the objectives of what you want to learn, and the fact that evaluations are like free research, so take advantage of it. Thank you, Kimberley.

Benoit Cadieux, you had some really interesting experiences at ESDC working on the Employment Insurance program while it was undergoing various revisions. Can you tell us a little bit about this period?

Benoit Cadieux (Employment and Social Development Canada): Thank you very much, Derek. So, yes, I'm Benoit Cadieux. I work in Employment Insurance policy. Over the last five years, with the Employment Insurance program, I had the pleasure of working with Jérôme here to complete three specific evaluations. I would also like to say that for us, the objective of each of these evaluations was very different. The first evaluation we completed was for sickness benefits. To give you a little bit of context, these are benefits that have been around since the 1970s and haven't really changed. So the big question is still, "Why do an evaluation? The benefit has been around for 40 years and hasn't changed. So what do you want to get out of this? Another evaluation? They've been done many times before." So, that's really the question to ask as a program.

What is it that you want to get from the evaluation? What's the objective? And it's about working with your evaluation team to really make it clear what the evaluation is going to bring to you. What was the original objective of the program and how can the evaluation help you determine whether the program continues to meet that objective or not? And in the case of the EI sickness benefits, what's interesting there is that there were a lot of stakeholders, and a lot of people in government, pushing to extend it beyond 15 weeks. 15 weeks was not enough: extend it to 50, to 26, to 35, whatever the number was. And so, for us, the evaluation was not so much to determine whether the program, as it was, worked. We knew it worked. We knew it met its objective. It was really, can we build in some analysis that looks into whether there's a need to extend further? Is there a need to extend to 52 weeks or 26 weeks? What can we get? Not so much evaluating the program itself but evaluating what it could be and determining whether there's a need for it or not, and I would say the timing of that was incredible because shortly around that time, the government had made the commitment to extend to 26 weeks and there was a lot of pressure to extend further, like I said. And so, the results of the evaluation were extremely useful in building the story, building the case for why 26 weeks was the right number.

So, we really had the opportunity to use the results to add more detail to submissions to the Treasury Board, then to our proposed budgets and all that. So, all that to say, when you sit down with your evaluation team, the timing of the evaluation is super important and super critical because it's going to determine how useful the information is going to be for you in telling you what comes next.

I just want to… do I have a bit of time?

Derek Armstrong: You're good, Benoit.

Benoit Cadieux: The next two evaluations we did were of maternity and parental benefits.

And that was really different because maternity and parental benefits went through a lot of changes, very recent changes. And so, the objective there was to look at the impact of those changes: did they make a difference? Are they being used to their full extent? So, the focus was very different. It was really about looking at what had recently been changed and seeing if it works, if it made a difference. So, I just want to say that again.

This is an example where we sat down together and set very clear objectives of what we wanted to get out of this evaluation. In the end, it's because we had this mutual understanding of what the objective should be that we got a lot out of it. For us, it was a very useful evaluation. And the third evaluation we did was on a pilot project, also done very recently, on providing extra weeks of benefits to seasonal workers. And again, that was extremely useful because it informed the future extensions of that pilot project and the development of options for what we could do next, where we could go from there. So, I'll leave it at that, and yeah, that's my experience.

Derek Armstrong: Thank you, Benoit. It's a great example of how evaluations can be brought to bear in a department that has quite a rich source of data, to look at different questions of policy in fact. So, it's a fantastic example.

Thank you. Jérôme, it's always nice to have the last word on a panel. You've spent several years leading evaluations at ESDC. How would you connect what we've heard today to your own tips for getting more out of the evaluation experience?

Jérôme Mercier (Employment and Social Development Canada): Thanks, Derek. Always a pleasure to have the last word, but it's also a challenge. In other words, where is our added value? No; it's a pleasure. Thank you for the invitation and for this wonderful discussion. This is very, very interesting.

A couple of themes stood out for me today: I heard about early involvement, I heard about willingness to learn from each other, open communication, open channels of communication. I think, Benoit, you were alluding to scope. You don't need to scope your evaluations the same way depending on the program. And then, there's this whole… sometimes I call it the elephant in the room, and that's the data. So, we talked about that a little bit. I feel all these points, all these themes, point to evaluation as being kind of a collective endeavour when you think about it, and one of my favorite sayings is that evaluation is a team sport, but for a team sport to work, we need to have a good understanding of what our respective roles are as part of the team. I think Kimberley or Gemma was talking about, let's learn about each other. So, evaluators come in with specific expertise in how to conduct evaluations. We were mentioning quantitative and qualitative lines of evidence, and how do you pull all these findings together and find takeaways from this? On the other hand, program analysts know their programs inside out, they know their trade-offs, and sometimes these trade-offs may not be as apparent to evaluators, which is where they need to work together.

So, how can you formalize this partnership or this teamwork? This brings me to scoping. So, at ESDC, one of our practices in fact is to formally establish an evaluation plan where you lay out the evaluation questions you'll be looking into and how you're going to be looking into them, and we present those at our Performance Measurement and Evaluation Committee for decision. Some view this as a good practice, and I think part of the reason is that it helps get buy-in at the ADM level, that we're going to take time and resources and conduct certain evaluation activities. It may prevent less productive discussion around scope later on, and as part of your evaluation process, it can also help manage expectations with upper management in terms of the art of the possible, given the data available: here's what we can do given where we are. So, maybe a couple of things to keep in mind. It goes back a bit to what Benoit was saying. It's not because we evaluate programs that we evaluate them all the same way. So, scoping, let's be mindful of overly ambitious plans (laughs). No need to look at all the aspects of the programs; programs are at a different juncture in their, I don't know…

Derek Armstrong: Life cycle.

Jérôme Mercier: Life cycle, and evaluation is an iterative process. I think I heard once in a movie, we will be back. So, we don't need… there are specific times and specific moments in a program, so let's scope accordingly. Not all questions need to be addressed by evaluation. Some of them could be done by good old policy analysis or by a research project. So, evaluation has comparative advantages compared to these functions. How can we take advantage of that? Narrower scopes are conducive to more concise and clearer reports. Data, so as the expression goes, evaluation is predicated on good data, and this inevitably points to the program data collection strategy, and that's not… I think that's what we were alluding to at the beginning of this discussion. This is not a walk in the park, and we know that. And often, you get into questions of data collection strategy at a time when the onus is in fact on getting the money out the door, and the next evaluation is so far in the distant future that you think, I'll deal with that later.

I think the key here is not to let the perfect get in the way of the good. So, let's get your foundational… I think I heard the word foundational… piece in place that you can build on. And to that end, keep in mind that, again, we were talking about partnership; evaluation can support program staff to get this done. The last thing that I would like to note: as public servants, one of our values is to give honest and impartial advice to decisionmakers, and the internal evaluation function that's currently set up in the policy on results gives us an opportunity to do just that. So, with that in mind, I would say maybe the following. Let the facts speak for themselves as part of our evaluation documents. Be mindful of the temptation to focus only on what works or to overly contextualize less flattering findings about our programs. That said, this notion of speaking truth to power, it comes with some responsibilities, and I think that's where a constructive, collaborative partnership between programs and evaluation comes (inaudible).

Derek Armstrong: Wow, thank you, Jérôme, it's… that was a great overview of the messages we've heard here, and I think it's very fitting that the person with the last word would quote the Terminator in an evaluation event like this. So, thank you for that. I do want to reflect on one of the points that you made which was about the importance of data in evaluation. I'd like to tie it back to something that Danielle also expressed to us and that was in how evaluations are conducted with partners in the Indigenous communities. I had the most fascinating interview with one of Danielle's staff in the lead up to this, where she expressed to me how some of the greatest learning that they engaged in, in their evaluations with Indigenous communities, was in not collecting quantitative but qualitative data through narratives, and understanding that how narratives change over time was just as informative and in some ways provided a deeper understanding of how programs were impacting communities than even quantitative data. So, I think it's a lesson for us as members of the evaluation community or members who are enacting programs to keep in mind that quantitative, qualitative, it's about context, it's about understanding, and I think it's very important to just keep that in mind. So, thank you. Thank you very much, Jérôme.

What a terrific panel exchange we've had here, a lot of different lessons and ideas and practical tips, but now it's the time you've been waiting for out on the webcast. We've got the question period here. So, we have about ten minutes to send in a question if you haven't already, and I think there's been a prompt online. Go to Wooclap, enter the code A30, and find the bubble chat icon down on the bottom right side of your screen. Please do remember to ask questions in the language of your choice. Now, we've had a couple of questions come in already during the session, so I'm going to look at the screen here, and the first one I think is relevant to basically anybody doing an evaluation. Do you have any tips on how to best communicate with evaluators when you review a draft report and do not agree with one of the findings? So, that's, I think, a very universal question. I'm just wondering if any members of the panel here would like to take a quick stab at that.

David, over to you.

David Peckham: Yeah, I'll take a run because I've been on both sides of that fence. I think the first thing is to be factual. I think if you have an issue with one of the findings, it's… both sides need to look at what are the facts that we're talking about here and agree on the facts, and that's when I think you get into the interpretation and the context of those facts. I think from a program management side, you have to go in with some self-reflection, knowing that you may not like what it said but if the finding is actually valid, maybe it's more about how it's presented, and have that conversation. I think as an evaluator, you also have to be… you need to have your independent judgement but not sort of stand on that to the point where you don't feel you can ever back down or you can show any sort of flexibility when discussing findings. So, that's how I've seen that work best when that's occurred.

Derek Armstrong: Right. Kimberley?

Kimberley Accardi: I think maybe just to build on that, we talked a lot about open communication and early involvement, and I think those are really good points… I think all the tactics that you've indicated, that's really the approach to take, but it's much easier to do if you've already been talking to your evaluator, you've been treating it like a collaborative activity, and the relationships are there. It's like any other conversation you're having with a stakeholder where you might have a divergent view. If the relationship is there, it's much easier to broach, and if you are in the process of doing an eval, it helps to have those early conversations. You talked about the early scoping and transparency on what we're looking at; also, ask to see the preliminary findings before the report's drafted, because a lot of time and energy and effort has been invested by the time the report is drafted. So, having a quick touch-in about the preliminary findings can be really helpful to understand, like did we not understand something? Was there information that's missing that now you realize could amplify or support or provide a counterfactual? So, I think those would be some extra things that, based on the conversation, I'd add in on that.

Derek Armstrong: That's a great point. Dale Carnegie always said in his signature book How to Win Friends and Influence People, it's much harder to get people off of no after they've said it than to influence whether they even say no at the beginning. So, I think those are great strategies for how we approach this question.

The next question we have, what measures can be put in place to protect sensitive information collected during the evaluation and to reassure participants? Would anybody like to take a stab at this question? And it may be more relevant to some of your programs versus others.

Jérôme Mercier: Maybe I can. I will try to start to the best of my abilities, but when we conduct, for instance, qualitative lines of evidence, there is… and we're working with experts on privacy and from a legal standpoint of how this information is used, how do we protect anonymity, and how this information will be aggregated so that what was communicated could not be associated back to the individual in question. So, there are a number of good practices in this regard and I would encourage individuals to work with their privacy and confidentiality areas of expertise in their department.

Derek Armstrong: Good advice, Jérôme. Danielle?

Danielle White: I think it gets a little bit tricky. So, yes, absolutely, I think on the pure data side, there are protections in place. I think it can be uncomfortable when you're starting to work with programs, and folks working in programs may see shortcomings, they may see problems, and there's probably a hesitancy to come forward. So, again, I think it starts from building that collaborative relationship, that trust approach. We don't see anything in our evaluations that is attributable. And so, creating that space for people to speak freely, it's all about speaking truth to power and telling it as they see it, and sometimes an evaluation can be a safer way to bring forward an issue than kind of going up the line to the Director and the DG. It creates that space and I think program managers have to go into it with that mindset, that you may hear some things that you don't want to hear but this is all part of the process. And so, creating that kind of trust between the program staff and the evaluation team is really key.

Derek Armstrong: That's a great reflection. Thanks, Danielle. It almost brings to mind that it's sometimes easier for somebody else to bring up the things that you are maybe facing yourself and then contextualize them for them. So, it's good advice. Thank you very much.

Another question that we've received too is, how can we ensure that the evaluation includes a useful analysis of GBA+ and intersectionality for my program? We heard a bit about this during our opening remarks but what are your thoughts collectively on this question, if anyone would like to venture a start?

Gemma Irwin: Do you want to start or… I'm happy to start. I think I'm going to pause and reflect on the question because I actually think one would hope that there was already that assessment done by the program managers before you get to the evaluation point, but I think at least in our case with the evaluation of our program, it was really about taking a second pair of eyes and stepping back. Are we actually achieving what we set out to do, especially with a GBA+ lens? And is it still relevant in today's context? Because when we started our program and got evaluated, we were a few years in but we were really… it was really setting us up to hopefully renew the program, and we were successful in shaping that but we also needed to think about, well, what are the GBA+ lenses we needed to apply going forward, right? It's not only about looking back. And so, I think that the… it's not easy to unpack. You have to do it at the beginning, but speaking from a GBA+ lens, you also should be doing it fairly regularly. I wouldn't say every year but you should be taking a look at your analysis every couple of years to make sure it's still on track and it's relevant, I think.

Derek Armstrong: Thanks, Gemma. Kimberley?

Kimberley Accardi: No.

Benoit Cadieux: If I could add to that, I think from a program area, we talked a lot about how to make the evaluation useful for you, and I think part of that is… and for us in EI, we have tons of data. We have a lot of administrative data and everything, but we still don't have that good GBA+ data and this is an area, I think, that whenever we have an evaluation coming, we see this as an opportunity. Let's build in some GBA+ analysis as much as we can to try to get some useful information out of it. And so, I think for that, I mean, I don't have really the answer to the question but I think if you plan early and you work with your evaluation team to say, I'd like for the evaluation to put a lot of emphasis in this area, I think you're bound to be successful in at least getting some useful GBA+ analysis out of it.

Danielle White: The only other thing I would add is just the importance too of incorporating GBA+ practices in our own work as evaluators, so yes, what the program sets out at the front end (inaudible) and in your Treasury Board submission, you have to have the right frame but we also have to walk the talk when we're out in the field, and something we really found during the pandemic, not being able to get out into community and meet face-to-face was a huge challenge in our evaluation work, so face-to-face meetings, prioritizing that, creating those opportunities, providing child care if you're doing focus groups or meeting during certain hours and applying all of those good practices, which is perhaps more incumbent on the evaluators than on the program, but making sure that we create every opportunity to hear from that diverse range of folks receiving the services is really important as well.

Derek Armstrong: Great. That's an excellent reflection. Sometimes our own work is impacted directly by GBA+ and intersectionality considerations, and it could bias the outcomes of the very evaluations we're doing and the lessons we're trying to draw. So, these are really powerful reflections you're offering.

Danielle White: Translating into other languages, Indigenous languages, immigrant languages so that you get the access.

Derek Armstrong: That's right. Thank you. Jérôme?

Jérôme Mercier: I was alluding to the fact that we like to do evaluation plans early on and present them to our Performance Measurement and Evaluation Committee. So, we do include specific language around GBA+ but it's also an opportunity to manage expectations, again, given the information available. That's an opportunity to do that, and I think to go back to your point, it's not just quantitative GBA+. You can also look at it using your qualitative lines of evidence.

Derek Armstrong: Thanks for that, Jérôme.

Kimberley Accardi: Now I feel like I should, everyone else has (laughs). No, but I really appreciate the practical, tangible things and solutions that have been offered on the GBA+ point because from an ISED perspective, and a little bit like what you're speaking to, we have GBA in our plan every time we conduct an eval, but in all candor, we still feel like it's a place where we're struggling in terms of having sufficient data to have the types of findings, the types of intelligence we're wanting to have come out, the ability to disaggregate it, to be able to tell more nuanced stories. So, I think we have a lot of work to do in that space, so really happy to hear some of those tangible things that we can be doing because I think to me, one of the values, and I joke about it, is the beauty of evaluation is to share those insights back out to the system, so we can say a common trend or challenge or persistent issue for us is having sufficient GBA, the type of GBA information we're looking for, and we want to be able to have it because otherwise we're going to reinvent the same square wheel. And so, proactively sharing out the different trends that we're seeing in all of our evals to lend information forward is helpful and this is an important one.

Derek Armstrong: Great. Thanks, Kimberley. And without meaning to, I think you've provided me an excellent segue into our next question, just the idea that having your big questions already laid out at the beginning of your evaluation, or even the beginning of your program, helps to determine what it is that you want to measure and achieve, and guides data collection and such. So, to that end, the question we've received here is, what are best practices for the kickoff meeting when evaluators and the programs meet for the first time, for example? And we'll take one response on this and then we'll wrap up. Gemma?

Gemma Irwin: I have a small one. It sounds inconsequential but it really matters. Have everyone who's going to be involved in the evaluation in the room, because I think one of the things we learned in our engagements is the program managers or the analysts aren't always at the table at the very beginning and get roped in later. So, engage them early so they have a clear expectation of what this is going to be like, and also don't underestimate the effort that it will require to work with your evaluation team to get a good product at the end. I think someone mentioned earlier, you're stuck in operations, and that's actually the team that I lead. So, we are in operations so we're stuck in the day-to-day, but you have to get out of that to prepare and then to support your evaluation team in conducting the evaluation.

Derek Armstrong: Great. Thanks a lot, Gemma.

Look, this has been a great exchange today. It's really, truly informative, both questions and advice that we've seen. Danielle, I'd like to give you a moment just to wrap up with some of your concluding thoughts.

Danielle White: Sure.

Derek Armstrong: And reflections on today.

Danielle White: First of all, thank you all. Thanks to the School for organizing today's session. I enjoyed hearing everyone's perspectives.

You know, if I had a closing thought for folks in the program community, who we were really trying to target with this session, it would be to challenge you to think about evaluation and to come into the process with that growth mindset: this is about managing change, this is about understanding the world around us and how we better serve Canadians. Change is a constant. We live it every day. We can't be static and we do need to evolve and adjust and innovate as we go. And so, this is an opportunity, and I really love this idea that evaluation is a team sport, and I think to Kimberley's point, to think of it as an opportunity. This is something that's being done for you, not done to you. We're not the Auditor General when we come knocking, but it's really an opportunity to dig deep and to push aside kind of the day-to-day and really reflect on where we're going, what it is that we're trying to achieve, and how do we get there together? And I think that's where I really see it as well at the senior management level through our performance measurement and evaluation community: I see heads nodding when other ADMs start to see the lessons learned that we're pulling from an evaluation in one particular area and being able to apply them elsewhere. So, it really is, I think, a tremendous learning opportunity.

Derek Armstrong: Great. Thanks so much. David?

David Peckham: It's always hard to follow Danielle but I would say evaluation for me is a public service. It's part of who we are and participating in evaluation is part of being a healthy, strong public service. How can we improve what we do? How can we deliver the best results for the people that we serve? And for me, those are the fundamentals, whether you're an evaluator or a program manager. So, I would like to thank everyone again. I would just like to say as well that the policy and directive on results, the [inaudible] evaluation on performance measurement can be found on the web, and also the TBS results portal on GCpedia does have a lot of really interesting guidance and resource material, but thank you very much for the conversation today. It's a pleasure to be with a panel like you.

Derek Armstrong: Thank you, David, and we'll be sharing those results, those resources with you. Well, they're up onscreen right now for those of you on the webcast, and we'll be sending that out in a message to you shortly after.

So, before you go, we'd love your feedback on any program evaluation topics you'd like to learn more about. Add your suggestions, in a few words, to the team word cloud, which the team is putting on the screen now.

[00:59:20 The Wooclap screen is shown with a range of answers to the question "Which program evaluation topics would you like to learn more about?".]

And finally, let's recognize the contribution of our speakers today. Advice is just theory until you've lived it, and you have distilled the best for us. So, thank you very much for being with us today. That concludes our event today, and thank you all for attending.

[00:59:43 The CSPS logo appears onscreen.]

[00:59:52 The Government of Canada logo appears onscreen.]
