HMP Governance Lab: Introduction to Health Policy
1.12 Policy Evaluation
Professor Jarman discusses the pros and cons of policy evaluation, along with tips and tricks for navigating the process.
- HMP 615 Canvas site
- Find work from the HMP Governance Lab at www.hmpgovernancelab.org and on Twitter @HMPgovlab
- Music: 'Blippy Trance' by Kevin MacLeod
Hello, and welcome to the HMP Governance Lab podcast. I'm Holly Jarman, and I'm a professor of health management and policy at the University of Michigan. Today I'm going to be talking a bit to you about policy evaluation. Policy evaluation is where we take a long, hard look at a policy and try to understand some of its effects. It can be conducted in a quite specific set of ways: there are lots of different frameworks that try to guide policy evaluation, and I will mention some of those today. But they're only a guideline. There are lots of different ways to do this, and lots of different methodologies that can be employed in policy evaluation. Really, though, I want to get you thinking about the bigger picture here, because the act of policy evaluation requires starting with some particularly important assumptions. So if you haven't yet listened to my podcast on the three E's, efficiency, efficacy, and equity, you might want to do that before continuing with this one, because I have a little critique, I guess, of some of these policy evaluation methodologies. Before we dive into performance measurement and that kind of evaluation in a subsequent podcast, I'd like you to dwell a minute on why we do policy evaluation. What is it for? Who does it serve? And what are some things that are right and wrong with it? I'm asking you to be critical here. There are multiple ways to approach evaluation, but lots of similarities between them. I would argue that if you've been learning a little bit about program evaluation and how that's done, policy evaluation can be very similar, but can be a little bit harder, in that policies tend to be less formally defined, sometimes, than programs. And here we're talking about a whole range of different policies, which can be found in the healthcare sector, but also in public health.
So sometimes even just defining what the policy is, and the boundaries of what you're trying to evaluate, can be kind of tricky. To overcome some of the common problems here, remember the general advice from a lot of people who conduct evaluations: evaluate early, evaluate often, and be as clear as you can about what you're going to do, so have a well-defined plan for your evaluation. That should really help you. Sometimes it can be helpful to understand what policy evaluation is not. Here I'm putting on my big-picture hat. Policy evaluation is not necessarily external to the policy process. A lot of policy evaluation is driven by legislators, who make it a requirement for the disbursement of funds that a particular policy is evaluated. So you might actually, as a policy evaluator, be reporting to legislators, because they want to know how you spent the money, whether you spent the money well, and what the outcomes from implementing the policy were. Policy evaluation is not external to the policy process; it can be quite political. Whether or not you're dealing with legislators, you also might be dealing with government agencies and evaluating policies on their behalf. It's also not necessarily academic research. It's designed for use by others, to try to make policy better, hopefully. But it doesn't necessarily pose a question in the way that academic research does. There are some very concrete things that we want to know when we're doing policy evaluation. Did the policy meet its expectations? Did it meet its goals? How costly was it to do that? What were the effects, potentially, on certain populations? But we don't necessarily define the question narrowly in the way that we would in academia. It's also not really a one-shot deal.
Most frameworks recommend that evaluation is a continuous, or somewhat continuous, process that engages with stakeholders in the policy area, that takes a look at the beginning, the midpoint, and the endpoint of policy implementation, and sees how things change over time. Nobody in an ideal world is recommending the kind of model for policy evaluation where you do one small evaluation, drop in, drop out again, and then base all of your conclusions upon just that snapshot. So what kinds of things get evaluated? Well, if you take, for example, the CDC's model, which is very commonly applied, you can evaluate the content of a policy, you can evaluate the implementation of a policy, or you can evaluate the impact of a policy. Let's break that down a minute. If we evaluate the content of a policy, we're essentially evaluating what I, as a political scientist, would call a policy output. What is the law on the books? Maybe you're evaluating something that has a statute, and maybe some related regulation. Maybe there's guidance from a particular agency that's written down and relevant to this. So it's the policy itself: what is in that set of documents? What's in the law, what's in the regulation, what's in the guidance, or any relevant documentation? That's content evaluation. Implementation evaluation addresses a blind spot: in public conversations, we think a lot about policy on the books, but we don't think so much about how the policy actually works in the real world, and that can cause some serious problems. So it's really important to evaluate how the policy is being implemented. How is it being put into practice?
That could involve understanding how an agency has acted upon the policy directives, or understanding how stakeholders are responding to the policy, especially if you're in a regulatory area where the government's trying to regulate the behavior of organizations in the healthcare system. How have they actually behaved? And has it been according to expectations? Implementation is a very broad term that means looking at everything that happened after the policy passed into law, and then what the effects of that have been. The third thing you can evaluate is impact. Here, we're talking a lot about policy outcomes. What have been the effects of the policy on the target group? It's important here to realize that the target group, and how that's defined, is a bit value-laden in itself. So you might want to take a creative approach and think: well, yes, those are the effects on the target group, but if you have any latitude to do so, maybe there are negative effects, or positive effects, on other groups that are related to the target group or external to it, effects that might have been anticipated, and maybe not. You might choose to put that in your impact evaluations. An example would be something like an environmental health policy, where the policy was beneficial to the target group of stakeholders in terms of boosting the economy, but may well have had negative environmental impacts on certain communities. So just be aware that how these things are defined can affect your results. Here are some things you might want to ask for each of these three categories: content, implementation, and impact. When you're thinking about policy content: are the goals clearly articulated? Frankly, a lot of law and policy goes through a political process, so it's not necessarily well constructed, and it's not necessarily constructed to be operable in the real world.
Sometimes stakeholders find that they can't really comply with laws, because the laws don't quite fit their existing practice. So are the goals of the law or regulation really clearly articulated in there? Or are they a bit vague? Health IT is a good example here. We tend to think that health IT programs solve everything. Especially the legislation that comes out of Congress tends to ascribe reduced costs, better care, and a bunch of other potentially dubious results to health technology and health information sharing, without necessarily specifying the causal theory in terms of how that's supposed to come about. So is that underlying logic, or causal theory, clear in the policy? It may well not be, because we, as the public and as elected representatives, are suckers for the next greatest idea, and we don't always drill down to understand causation. Do the policy documents themselves actually describe any of the implementation? Or do they just lay the implementation on a particular agency or actor? And then, how was the policy actually developed? You might want to understand the history behind the law in order to understand why it's weird in certain ways, or why it doesn't necessarily get causation right. In terms of implementation, you might want to ask yourself: was the policy really implemented as the authors of the policy intended? Furthermore, were there barriers to implementation that you can identify? What were they, and how did they potentially get overcome? A barrier to implementation might be not having enough resources, for example. It might be the gap between the way that a section of the healthcare sector works and the way in which the policy was written; the policy might not be a good fit with existing practice. A barrier might mean legal resistance, potentially: somebody objected to the policy and started up a lawsuit.
A barrier might mean individual behavioral problems: in terms of public health policies, for example, that try to prescribe behavior in various ways, maybe people don't behave like we would expect. So it's a very wide category of things, and you really do have to use your good sense and dig into some qualitative data here to understand what these barriers were. You might want to compare some different components of the implementation process, and break it down a little bit in order to make this more manageable. In terms of evaluating impact, the key here is: did the policy produce the intended outcomes? You do have to link this with content evaluation, in the sense that you have to know what the intended outcomes were, have a good sense of that, and furthermore know how they could be measured. What the policy produced needs to be measured, and measured in the best way you can. That requires all of your skills in terms of handling messy data and figuring out what could be a proxy measure for something, and so it requires a little bit of creativity and skill. Comparing the intent of the policy to the outcomes, you might want to break this down further and think about whether there are multiple outcomes here, and whether some of them are shorter-term goals and others are longer-term outcomes. So you can start to see, through these three categories, how policy evaluation can be quite fuzzy in some important ways, and going through a framework like this one from the CDC can be helpful in guiding people who are trying to do their own evaluation.
Holly Jarman: An alternative framework, which gets used quite a lot in different places around the world, is called the Magenta Book. The reason it's called the Magenta Book: it's from the UK, and in the UK civil service they have different colored books for different aspects of good practice in government. The Magenta Book is this beautiful magenta, pink color, and it basically details the steps for policy evaluation that are recommended by the government. A lot of frameworks are quite similar to this, and I picked this one because it's quite popular. There are eight different steps here. One, identify the policy objectives and outcomes. What was the policy trying to do? And also, what were the actual outcomes, if you can detect those from the policy? Two, define an audience for your evaluation. Who are you doing this for? I think that's incredibly important for all different kinds of evaluation: who is your evaluation for? Who are you serving? That's a way in which evaluation is a little more distinct from broader academic research, which is for a scholarly audience. Three, identify evaluation objectives and research questions. What are you trying to do here? What questions are you asking in the evaluation that you need to get answered? Sometimes these objectives and research questions will be set for you, because you're doing an evaluation on behalf of an agency or in response to a requirement in legislation. Four, select an evaluation approach. What is your general approach going to be? Which parts, maybe of content, implementation, and impact, are you going to focus on? What kinds of methods do you think you're going to use, which will be bounded somewhat by the skills that you have? Five, after that, you're going to want to look at data requirements. In order to answer the question that you have posed, what data do you actually need?
And as I said, sometimes that data is hard to come by. How good quality is that data? Does it measure what you think it's going to measure? These are all considerations that you need to bring in at this stage. Six, once you have an approach and some data, you need to identify what resources you need to actually conduct the evaluation. Qualitative research requires people: if you're going to do site visits, or interviews and so on, you need to have those skills. For quantitative research, maybe in some cases you don't have the data available, and you have to figure out how to get it, or how to purchase it. Think also about the resources in terms of your team, the people who are on it, and what skills they have. So figure out what resources you need, and then figure out how you're going to manage this, because you'll probably be doing this while consulting with stakeholders and with whoever's funding the evaluation, if you have a client. You need to figure out how you're going to manage all this, how you're going to govern the team, and how to make sure they deliver everything on time. Step seven in the Magenta Book just says conduct the evaluation, which I find rather amusing, but that's the point at which you're going to put all this planning into practice. And step eight is disseminating the findings. Here, the Magenta Book and the CDC have a quite similar approach, in the sense that it's portrayed as a loop: a sort of continuous evaluation, where findings are disseminated and acted upon, but also inform future evaluation. It's a way of going back to stakeholders and informing them about what you've found. So the CDC and the Magenta Book are pretty similar frameworks in that
regard.
Holly Jarman: Now, some of these steps raise really important questions. I'm not going to go through all eight, but there are a couple of important things here. Engaging the stakeholders, for example: you have to back up and think, well, stakeholders are the organizations, groups, or individuals that have an interest in the policy. They're not just named organizations. They might be a community, or a group within a community. You have to think broadly about who the stakeholders are on all sides, and see if you can figure out how to build relationships with them. The reason this is so important is that stakeholders are sensitive to the outcomes of the policy. They are sometimes the people who are implementing the policy, and they care a lot; they really have something at stake, because the policy is affecting their bottom line. They might well be change agents or advocates, people who are willing to listen to the results of an evaluation and actually make changes in the way that they're working to implement a policy. But your stakeholders are also your funders: whoever is funding this evaluation is a stakeholder too. So managing this process involves reaching out to people who are affected by the policy, people who are implementing the policy, and people who are funding the evaluation, and juggling all these relationships. So what are some common problems with evaluation? These really come down to a small number of things. A lot of the time, an evaluation is not properly funded or resourced. A common set of problems arises from poor resources: you don't have the money to do what you would ideally want to do to produce a good evaluation. So you have to make sacrifices, cut the budget, and figure out how to do the evaluation on a much smaller amount of money. And resources more broadly can include human resources, so maybe your team doesn't quite have the right skills for this.
You might find, going in, that you thought the skills you had were appropriate, but really you need something else, and that affects the quality of the evaluation. Poor leadership. As you can kind of tell, this is really a project management scenario where you're trying to manage relationships and process, look at outcomes, deliver something on time and to budget with concrete findings, and then bring that back to stakeholders in a timely way. So a lack of leadership on your part, or an inability to manage your team, can affect the quality of the evaluation and affect its impact. Sometimes evaluations have poor methods, and the methods used are not really up to snuff compared with what best practice would be in, say, academia, where there might be more time or more adequate resources to complete an evaluation. Methods can also suffer when the team isn't right and the skill sets don't quite match what's required of the evaluation. Sometimes people don't pick the right measures. They're looking at outcomes, but they don't pick the right outcome measures for the job, so the data they have to hand doesn't quite measure the outcome of the policy, or otherwise has some flaws. Not measuring quite what you think you're measuring is a common problem, as is poor data, or just poor access to the data that's out there. Another problem that's specific to policy evaluation is that policy moves on. Sometimes evaluations have to be conducted quite quickly, and sometimes they're required to be conducted multiple times in a multi-year program. So a lot of this work has to be done quite fast and on a limited timeline, whereas academically, if I were just studying a policy, I might want more time to dig into it and really understand the consequences of some big policy change.
As an evaluator, you might not have the scope to do that. Another big problem with evaluation is political scrutiny. The problem with conducting an evaluation that's required by state legislators is that it's part of the political process. People are quite often bidding in a competitive process to conduct these evaluations, or they've received a grant to put a program into place and are required to evaluate it as part of that money. So the political scrutiny of what's being done can cause problems in terms of sticking to what we would consider to be objective best practice in evaluation. Politicians would sometimes rather that it's done in a different way, or done with less money, or in less time than we would really advise to produce a good-quality result. And public scrutiny is part of the package too, so you have to be aware that the results of this evaluation are going to be on the public record, and will be accessible to journalists and others. Ah, but I hear you say: if these are the common problems with evaluation, what can you do about them? Well, luckily for you, there is also advice in these various frameworks about how to tackle problems. For poor resources, for example, make sure you factor the cost of evaluation into any grant proposal that you put forward; think about what it's going to cost to actually evaluate the policy, as well as to put any kind of program implementation into place. For poor leadership, think about who's going to lead the team before you start and what the structures will be, including maybe thinking outside the box a bit in terms of a leader outside of your team that you might want to bring in. For poor methods, don't be overambitious. Think about the smallest amount of research and evaluation that you can do in order to produce the required result.
For poor measures, don't work in a vacuum: look for existing studies, see what other people have done in terms of evaluation, and try to copy what they're doing. For poor data, if you are following the steps, you're going to be identifying pre-existing data sources at an early stage and taking a quick look to see what their quality is and how accessible they are, and that will save you a bunch of headaches later on. If policy moves too quickly, develop an evaluation plan before the policy implementation takes place, if that's possible; sometimes you just have to think about the scope of your project and make sure it fits within the timeline for any results given by your sponsors. For political scrutiny, you might well want to identify short, intermediate, and long-term impacts for the policy in question, and try to present those in a straightforward way that people in the legislature or the government agency can understand, and to some extent be robust in resisting attempts to skew your results one way or the other. We quite often talk in political science about the idea of agency capture, and policy evaluation is one aspect of that: the temptation here is to deliver evaluation results that you know your sponsor or your client wants to hear. The ability to be objective, and not do that, is something you have to work on. For public scrutiny, try to use a logical plan, document what you're doing, and make sure you are able to explain your approach to anybody who inquires, including thinking about contextual factors in your research that might impinge on the evaluation. So these are some ways in which you can try to alleviate some of these problems.
To go back to the big picture: think about policy evaluation as a very useful tool that could potentially tell us which policies are working, which policies may well work, and which ones might produce the required results. But I also want you to be critical of the whole enterprise to some extent. We've created a constellation of research units, individuals, teams, small companies, and other enterprises that specialize in policy evaluation, and so let's be critical about that enterprise. We've created a constituency for policy evaluation. Policy evaluation is a political act, as lots of research actually is, and it is something where we have to cast a critical eye on the results, understand them in context, and consider who the sponsors of the evaluations are to start with. Hopefully, some of these results will impact policies in the future. But we also have to remember that quite often, decision makers do not make policy on the basis of evidence; they make evidence to justify their policies. So please keep that in mind, and bear in mind that the results of policy evaluation may not always be used in the way you would expect. In the next podcast, I'm going to talk a little bit about performance measurement, in a very general way, and try to introduce you to some of the concepts that get used in that field. This has been the HMP Governance Lab podcast. If you're interested in our research, come and find us at hmpgovernancelab.org or follow us on Twitter @HMPgovlab.