Using Public Policy for Social Change - Part 5

Program/Policy Evaluation Research Basics

Program Evaluation Definition and Framework


Program evaluation and retrospective policy analysis are closely related concepts that aim to assess the outcomes and effectiveness of public programs, policies, or interventions after they have been implemented. Retrospective policy analysis is a form of ex-post evaluation that examines the results of a policy or program that has already been put into practice: it looks back at what actually happened and compares it to the intended goals and anticipated impacts. Program evaluation is a specific type of retrospective policy analysis that systematically assesses the results of a public program or intervention, encompassing aspects such as implementation, user experiences, effectiveness, cost, and impact on inequities. Both are crucial in the public sector to ensure accountability, learn from experience, and inform future decision-making: they provide evidence on whether programs achieved their intended objectives and identify areas for improvement. In contrast to prospective analysis, which predicts future outcomes, retrospective policy analysis and program evaluation examine the actual results that occurred after a policy or program was implemented; they look back to determine what worked, what didn't, and why. As such, program evaluation is a key tool for evidence-based policymaking and accountability in the public sector.

Program evaluation plays a vital role in ensuring accountability across the four foundational pillars of the public sector: economy, efficiency, effectiveness, and equity.

Economy
Accountability for Economy:

Program evaluation holds programs accountable for their economic performance by analyzing financial management and resource allocation. Key components include:

  • Cost-Benefit Analysis: This involves comparing the costs incurred during program implementation with the benefits achieved, assessing the economic soundness of the program (a small illustrative calculation follows this list).
  • Budgetary Compliance: Evaluators determine whether the program adhered to its budget and if any deviations were justified.
  • Value for Money: The evaluation assesses if the program delivered expected outcomes at a reasonable cost, ensuring efficient use of public funds.
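To make the Cost-Benefit Analysis component concrete, here is a minimal sketch of the underlying arithmetic, assuming hypothetical cost and benefit figures (none of the numbers come from a real program):

```python
# Minimal sketch of a cost-benefit comparison using hypothetical figures.
# All numbers are illustrative assumptions, not drawn from any real program.
program_cost = 2_000_000.0        # total cost of delivering the program
monetized_benefits = 3_100_000.0  # e.g., averted health-care costs and productivity gains

net_benefit = monetized_benefits - program_cost
benefit_cost_ratio = monetized_benefits / program_cost

print(f"Net benefit: ${net_benefit:,.0f}")              # positive value means benefits exceed costs
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")  # a ratio above 1 suggests economic soundness
```

A positive net benefit, or a ratio above 1, is the usual shorthand for value for money, although real evaluations must also grapple with discounting and with how benefits are monetized.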

Efficiency
Accountability for Efficiency:

Program evaluation ensures accountability for efficiency by scrutinizing program management and execution. This includes:

  • Process Efficiency: Evaluators analyze internal processes to identify inefficiencies or bottlenecks that could be improved.
  • Resource Utilization: Assessment of how well resources like personnel and materials were managed and utilized during implementation.
  • Time Management: Evaluating whether the program was completed on schedule and identifying any delays due to inefficiencies.

Effectiveness
Accountability for Effectiveness:

Program evaluation ensures accountability for effectiveness by measuring whether the program achieved its intended goals. This involves:

  • Outcome Metrics: Using data to measure program outcomes, such as behavioral changes or health improvements.
  • Impact Assessment: Evaluating the long-term effects of the program to determine if it has sustained benefits.
  • Goal Achievement: Checking if the program met its specific goals and objectives as outlined in its design.

Equity
Accountability for Equity:

Program evaluation ensures accountability for equity by examining the program's impact on various societal segments, particularly marginalized groups. This includes:

  • Distributional Equity: Evaluators assess whether program benefits were fairly distributed among different groups.
  • Access and Participation: Assessing whether all eligible individuals had equal access to the program and were able to participate.
  • Reduction of Inequities: Considering whether the program helped reduce existing inequities or exacerbated them.

By comprehensively addressing these pillars through program evaluation, policymakers can optimize resource allocation, improve service delivery, achieve desired outcomes, and promote fairness and justice within public programs.

Clearly, program evaluation is linked to the effectiveness pillar. The public sector must determine whether its policies and programs are functioning as intended and whether they are causing harm. Program evaluation also addresses the economy and efficiency pillars by examining whether the allocation of limited resources is effective and what benefits are derived from the resources invested. Furthermore, it is crucial for identifying how government programs and policies mitigate, exacerbate, or leave inequities untouched, both in their implementation and in their outcomes.

Tobacco excise taxes serve as a widely implemented policy strategy aimed at addressing the public health challenges associated with smoking. By increasing the cost of tobacco products, this formal policy leverages taxation to influence consumer behavior. The fundamental principle behind this approach is the economic concept of negative price elasticity of demand: an increase in the price of tobacco is expected to produce a corresponding decrease in the quantity demanded. In essence, as the price of tobacco rises due to increased taxes, consumption is expected to fall, reflecting the inverse relationship between price and demand. This mechanism not only aims to reduce smoking rates but also generates revenue that can be reinvested in public health initiatives.

To understand the impact of tobacco excise taxes on public health, it is essential to consider the relationship between the price elasticity of demand for tobacco products and the level of taxation required to induce meaningful behavioral changes in consumption. Price elasticity of demand measures how sensitive the quantity demanded of a product is to changes in its price. For tobacco products, this elasticity typically falls within the inelastic range (an absolute value below 1), meaning that consumption falls proportionally less than the price rises.

  • High-Income Countries: Estimates generally range from -0.2 to -0.5, indicating that a 10% increase in price results in a 2% to 5% decrease in consumption.
  • Low- and Middle-Income Countries: The elasticity is often larger in magnitude, ranging from -0.5 to -1.0, suggesting that these populations may respond more strongly to price changes. For instance, a 10% price increase might lead to a 5% to 10% reduction in tobacco consumption.
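To make this arithmetic concrete, the short sketch below applies the constant-elasticity approximation (percentage change in consumption ≈ elasticity × percentage change in price). The elasticity values of -0.4 and -0.8 and the 10% price increase are illustrative assumptions drawn from the ranges above, not estimates from any particular study:

```python
# Minimal sketch: predicted change in tobacco consumption for a given price increase,
# using the constant-elasticity approximation:
#   % change in quantity ~= elasticity * % change in price
# The elasticity values below are illustrative assumptions based on the ranges above.

def predicted_consumption_change(elasticity: float, price_increase_pct: float) -> float:
    """Return the approximate percentage change in consumption."""
    return elasticity * price_increase_pct

price_increase_pct = 10.0  # a 10% price increase, e.g., following a tax hike

for label, elasticity in [("High-income country (elasticity -0.4)", -0.4),
                          ("Low/middle-income country (elasticity -0.8)", -0.8)]:
    change = predicted_consumption_change(elasticity, price_increase_pct)
    print(f"{label}: ~{change:.1f}% change in consumption")
```

The approximation only holds for modest price changes; large tax increases, substitution to cheaper products, and illicit trade all complicate the picture.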

The effectiveness of tobacco excise taxes in reducing consumption and improving public health depends on setting the tax at a level that significantly impacts prices. In the European Union, the excise duty on a pack of cigarettes varies widely, from approximately €2 in Poland to nearly €9 in Ireland. This disparity illustrates how different tax levels can influence consumption patterns across countries. Research indicates that substantial tax increases are necessary to achieve significant reductions in smoking rates. For example, a tax increase that raises prices by 10% could lead to a corresponding decrease in consumption of about 4% in high-income countries and up to 8% in developing countries.

  • Effectiveness of Tobacco Taxes: Studies show that higher tobacco taxes lead to increased prices, which discourage smoking and subsequently reduce smoking-related health issues and economic costs. This correlation highlights the importance of setting appropriate tax levels to maximize public health benefits.
  • Research Methodologies: Most studies evaluating the impact of tobacco excise taxes employ time-series analysis, which examines data over time to identify trends and patterns and is particularly useful for assessing the effects of policy changes like tax increases. Longitudinal studies using this approach have shown significant declines in smoking prevalence following tax hikes, and time-series estimates of the price elasticity of demand consistently reveal a negative elasticity for tobacco, indicating that as prices increase, often due to taxation, consumption decreases. A minimal sketch of this kind of analysis follows this list.
  • Elasticity Estimates: Research indicates that price elasticity for tobacco products varies, with estimates generally ranging from -0.2 to -1.0, depending on the population and methodology. For example, a 10% increase in price may lead to a 4% to 10% decrease in consumption, particularly among lower-income groups who are more responsive to price changes.
  • Impact on Vulnerable Populations: The evidence suggests that excise taxes are particularly effective in reducing consumption among young smokers, since younger individuals tend to exhibit greater price sensitivity than older adults; higher taxes appear especially effective at deterring youth from initiating smoking. Low-income individuals also tend to be more price-sensitive. Research in low- and middle-income countries (LMICs) remains limited and inconclusive: some studies suggest that poorer individuals may be more responsive to price changes, but the empirical support is weak and requires further investigation. This demographic responsiveness underscores the potential of tobacco taxes to improve public health outcomes by reducing smoking rates in these groups.
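As a rough illustration of the time-series approach mentioned above, the sketch below fits a log-log regression of consumption on price to simulated annual data; the slope on log price is the elasticity estimate. The simulated data, the assumed true elasticity of -0.4, and the bare least-squares specification are all assumptions made for illustration; real studies control for income, trends, other tobacco-control policies, and addiction dynamics:

```python
import numpy as np

# Minimal sketch of a log-log time-series estimate of price elasticity.
# Simulated annual data; the "true" elasticity of -0.4 is an illustrative assumption.
rng = np.random.default_rng(0)
years = 30
price = 3.0 * (1.03 ** np.arange(years)) * rng.lognormal(0, 0.02, years)  # steadily rising real price
true_elasticity = -0.4
consumption = 100.0 * price ** true_elasticity * rng.lognormal(0, 0.03, years)

# Regress log(consumption) on log(price); the fitted slope is the elasticity estimate.
slope, intercept = np.polyfit(np.log(price), np.log(consumption), 1)
print(f"Estimated price elasticity of demand: {slope:.2f}")  # should land close to -0.4
```

Published studies replace this bare specification with models that handle serial correlation and confounding, but the core idea of recovering elasticity from the co-movement of prices and consumption over time is the same.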

Program evaluation typically follows a structured framework that ensures a comprehensive and systematic assessment. The major steps in this process can be grouped into distinct phases, each contributing to the overall effectiveness of the evaluation. Here are the key steps involved:

1. Engage Stakeholders:

Involve individuals or organizations that have a vested interest in the program. This includes those who are invested in the program, those affected by it, those involved in its implementation, and those who will use the evaluation results. Part of this step is determining what each of these groups wishes to learn from the evaluation. Engaging stakeholders helps ensure that their needs and perspectives are considered throughout the evaluation process.

2. Describe the Program:

Clearly outline the program's purpose, activities, and expected outcomes. This step often involves developing a logic model that visually represents the program's components and how they relate to the intended goals. The aim is to ensure that everyone understands the purpose and objectives of the policy or program intervention and what it seeks to achieve.

3. Focus the Evaluation Design:

Define the evaluation's purpose, identify key questions, and select appropriate indicators. This phase ensures that the evaluation is aligned with stakeholder interests and that resources are used efficiently. It means concentrating on a manageable set of questions that the evaluation aims to address: stakeholders commonly identify many things they would like to learn, but there are rarely enough resources or time to address them all, so the evaluation objectives and the questions it seeks to answer must be prioritized.

4. Collect Data:

Gather credible evidence using various methods such as surveys, interviews, or focus groups. This step involves determining the data sources, selecting data collection methods, and ensuring that the data collected is relevant and reliable. A research plan must be developed, and credible evidence that addresses the priority research questions must be generated. This is the research-intensive part of program evaluation, where we will be investing significant time.

5. Analyze and Interpret Data:

Process the collected data to draw meaningful conclusions. This includes analyzing the data against the established indicators and interpreting the results in the context of the evaluation questions, then formulating and justifying conclusions based on the findings: What was revealed or uncovered through the evaluation? What can we confirm? What questions still need to be addressed?

6. Use and Share Findings:

Communicate the results to stakeholders and use the findings to inform decision-making. The results need to be shared, with the hope that they will inform the development of new policies and programs. This step is crucial for ensuring that the evaluation results lead to actionable insights and improvements in the program. 

Types of Program Evaluation 

Summative evaluation is a critical approach for assessing the effectiveness of interventions. There are two main types of summative evaluation:

1. Impact Evaluation:

Impact evaluation focuses on the immediate or short-term effects of a policy intervention. It examines specific mediators that contribute to long-term outcomes. For example, in the context of tobacco excise taxes, an impact evaluation would analyze how a tax increase affects tobacco purchases and smoking behavior in the short term. This type of evaluation is essential for understanding the direct consequences of tax hikes on consumer behavior, particularly among vulnerable populations such as youth and low-income individuals.

2. Outcome Evaluation:

Outcome evaluation, on the other hand, concentrates on the ultimate results that an intervention aims to achieve. It assesses the long-term effects of the intervention and examines whether it successfully impacts public health outcomes. For instance, an outcome evaluation of tobacco taxation would investigate how tax increases influence rates of smoking-related illnesses and mortality over time. This could include examining whether higher tobacco taxes lead to reductions in conditions such as heart disease, lung cancer, and low birth weight.

As another example, consider mass media campaigns in Africa aimed at educating the public about HIV transmission, self-protection methods, and testing locations. All interventions have a logic model or explicit theory that outlines how they are intended to operate to achieve both short-term and long-term outcomes. For an HIV educational campaign, the underlying logic model suggests that the campaign must first be implemented effectively and that people need to be exposed to it. The next step in the logical chain is that exposure should lead to an increase in awareness or knowledge: a health education campaign must first educate. Following an increase in knowledge and awareness, does this influence people's beliefs and attitudes regarding their risk? Does it alter their behaviors and their belief that they can change their behaviors? This must occur before people might experience a reduction in their risk of exposure. The next step in the logic model involves questions such as: Does the mass media campaign actually result in a decrease in risky behaviors and an increase in safer behaviors related to HIV transmission within the population? Is it having a population-level impact? If all of this occurs, then over time we would expect to observe a decrease in HIV exposure and transmission, a reduction in HIV incidence, and a decrease in HIV-related deaths.

In this example, an impact evaluation would focus on the first parts of the logic model, measuring changes in the targeted mediators of knowledge, attitudes, and self-reported behaviors related to HIV prevention. An outcome evaluation, in turn, would focus on measuring changes in the rates of HIV exposure, incidence, and mortality: the longer-term public health outcomes that the intervention aims to impact.
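To make the chain above explicit, here is a minimal sketch that encodes the hypothetical HIV-campaign logic model as an ordered list of stages, each tagged with the kind of evaluation that would typically measure it. The stage names and the impact/outcome split are illustrative and simply follow the description above:

```python
# Illustrative encoding of the HIV mass media campaign logic model.
# Each stage is tagged with the evaluation type that would typically measure it.
logic_model = [
    ("Campaign implemented and population exposed to messages", "process evaluation"),
    ("Increased knowledge and awareness of HIV transmission",   "impact evaluation"),
    ("Changed beliefs, attitudes, and perceived risk",          "impact evaluation"),
    ("Self-reported adoption of safer behaviors",               "impact evaluation"),
    ("Population-level reduction in risky behaviors",           "impact/outcome boundary"),
    ("Reduced HIV exposure, incidence, and mortality",          "outcome evaluation"),
]

for step, (stage, evaluation_type) in enumerate(logic_model, start=1):
    print(f"{step}. {stage} -> measured by: {evaluation_type}")
```

Reading the stages in order mirrors the causal chain the campaign assumes; an impact evaluation samples the early and middle links, while an outcome evaluation concentrates on the final one.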

In the case of tobacco taxation, the impact evaluation would measure short-term changes in tobacco purchasing behavior immediately following a tax increase. In contrast, the outcome evaluation would look at longer-term public health outcomes, such as changes in smoking prevalence and related health issues over several years.

To conclude, summative evaluation aims to collect data on the impacts and outcomes of an intervention; it is the type of evaluation that seeks to determine whether the intervention was effective. Summative evaluation focuses on both the immediate impacts and the long-term outcomes that an intervention aims to affect and change, addressing whether an intervention was successful in both the short term and the long term.


Process or implementation evaluation focuses on assessing whether and how an intervention was delivered, ensuring it met the intended content, quality, and reach. This type of evaluation is crucial for understanding the implementation of a policy, program, or intervention and involves several key components:

  1. Delivery Methods:
    • This involves examining the specific methods or steps used to deliver the intervention to the intended recipients. Evaluators assess whether the intervention was executed according to its design and whether all components were implemented as planned.
  2. Perception and Acceptance:
    • Evaluators investigate how the intervention was perceived by the target population. This includes gathering feedback on participants' experiences and their acceptance of the intervention.
  3. Data Gathering and Analysis:
    • The evaluation involves the collection and analysis of data regarding the implementation, ongoing operations, and acceptability of the intervention. This data helps determine the extent to which the intervention is being implemented as intended.

Example: HIV Mass Media Campaign

In the context of an HIV mass media campaign, a process evaluation might include:
  • Assessment of Execution: Evaluating how well the campaign components were executed according to their intended design.
  • Demographic Exposure: Analyzing which demographics report exposure to different campaign components, such as specific media platforms (radio, print, social media).
  • Campaign Recognition: Determining whether people know the name of the campaign and their level of awareness.
  • Exposure Metrics: Collecting data on exposure metrics across various media platforms, including how memorable, understandable, and relevant the campaign messages were to the audience.

Importance of Process or Implementation Evaluation

Process evaluation is vital as it provides insights into the implementation fidelity of an intervention. It helps identify:

  • Strengths and Weaknesses: Understanding what aspects of the intervention were successful and which areas need improvement.
  • Engagement Levels: Evaluating participant engagement and retention rates, which are critical for the intervention's overall success.
  • Cost-Effectiveness: Assessing the cost of delivering the intervention relative to its reach and impact.

Effective implementation is crucial for the success of any intervention, and understanding the nuances of how a program is delivered can provide valuable insights into its overall effectiveness. By combining process, impact, and outcome evaluations, program evaluations can offer a comprehensive assessment that not only identifies what works but also why it works, leading to more informed decision-making.

