Section 6.5: Program Evaluation & Applied Research

Applied research refers to research that occurs outside of an academic setting with the goal of producing positive change. In other words, it is the application of social scientific theory and research methods to solve (or at least mitigate) social problems. A major category of applied research is program evaluation. We will dedicate a considerable proportion of our time to that important topic.

Understanding Case Studies

What’s a Case Study?

Let’s talk about something called a case study. Imagine you want to understand why a certain skateboarder is so good at tricks. You’d watch them closely, ask them questions, and maybe look at videos of their performances. That’s like a case study but for science! It’s when researchers take a super detailed look at one person or group to learn something special about them.

Why Case Studies Matter

Case studies are not just about asking questions; they use other detective work, too, like careful observation (watching closely, not spying!), checking out old records and documents, and drawing on facts that are already known. They are like puzzles, fitting pieces together to understand the full picture, especially when it’s about how a person’s surroundings influence their actions and feelings.

The Real Deal on Case Studies

Some people think case studies are simple because they’re not filled with numbers and experiments. But that’s not true. Case studies can actually tell us a lot because they’re flexible. They let scientists change their focus as they learn new things, which is super handy when they’re starting to explore and don’t have all the answers yet.

🔍 Reflect: Have you ever tried to figure out something by looking closely at one example? How did that help you understand the bigger picture?

Digging Deeper into Case Studies

Setting Up Your Research

Before you start, you need to have a clear question in mind, just like when you start any project. You think about what you want to find out and maybe guess what the answer might be. That’s your starting point.

Choosing Your Subject

Next, you choose what or who you want to study. It’s like picking the main character for a story. You don’t pick them just because they’re easy to write about; they need to be perfect for the story you want to tell.

Collecting the Pieces

Interviews are a big part of case studies. Imagine you’re a journalist, and you’ve got a list of questions for an interview. You’d stick to those questions but also follow up on interesting things the person might say. And you’d look at other sources, too, like reports or social media posts, to make sure you’re getting the full picture rather than relying on a single account.

The Big Picture

After gathering all this info, you start to look for patterns and clues to answer your question. It’s a bit like being a detective. You compare what you’ve learned with other cases to see if there’s a pattern. This helps you build up a theory, which is a fancy word for a carefully reasoned explanation of how things work.

Sharing Your Findings

Finally, you write everything up clearly, so others can follow what you did and decide if they agree with your conclusions. It’s important to be super clear so that other people can understand and trust your work.

🔍 Reflect: When you share a story or explain how to play a game, how do you make sure you’re being clear and others understand you?

Program Evaluation

The Essence of Program Evaluation

Imagine your community initiates a policing program aimed at fostering safety and trust. To ascertain its effectiveness, a program evaluation is conducted. This process is akin to methodically reviewing a plan’s performance to determine its actual impact on the community. We will continue with our community policing example, but note that this strategy can be applied to any program in any context.

The Rationale Behind Evaluation

Authorities who implement community policing must substantiate its success. Program evaluation is the lens through which they can demonstrate, with tangible evidence, that the initiative is contributing positively and justifying the investment.

Steps to a Thorough Evaluation

To evaluate a community policing initiative, these structured steps are followed:

  1. Identify the Objectives: Clearly comprehend the intended outcomes of the program. What specific changes or improvements does it aim to bring about in community safety?
  2. Formulate Pertinent Questions: Develop precise inquiries that probe into the program’s performance. Questions might include: “Has there been a reduction in local crime rates?” or “Do residents report feeling more secure?”
  3. Select an Evaluation Strategy: Choose a systematic approach to gather insights. This might involve community surveys, analyzing crime statistics, or observing interactions between police officers and residents.
  4. Collect Data: Determine the sources of information and gather data systematically. This could involve compiling incident reports, conducting resident interviews, or collating feedback from community meetings.
  5. Analyze the Findings: Plan an analysis that supports valid conclusions. This step involves interpreting the data to judge the program’s effectiveness; a simple sketch of this step appears below.
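
To make the final step less abstract, here is a minimal sketch in Python of how an evaluator might compare incident levels before and after a program launches. The monthly counts are entirely hypothetical, invented for illustration; a real evaluation would draw on actual incident reports.

```python
# Hypothetical monthly reported-incident counts (illustrative only).
before = [52, 48, 55, 50, 47, 53]   # six months before the program
after = [45, 43, 41, 44, 40, 42]    # six months after the program

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

# Percent change in the monthly average; a negative value means
# incidents fell after the program began.
pct_change = (mean_after - mean_before) / mean_before * 100

print(f"Average incidents/month before: {mean_before:.1f}")
print(f"Average incidents/month after:  {mean_after:.1f}")
print(f"Change: {pct_change:+.1f}%")
```

Keep in mind that a simple before-and-after comparison cannot rule out other explanations, such as seasonal patterns or citywide trends, which is why evaluators often add comparison neighborhoods or longer baselines where they can.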

🔍 Reflect: Consider a community initiative you are familiar with. How would you assess its success? What specific evidence would be necessary to evaluate its impact effectively?

Needs Assessments

What is a Needs Assessment?

Think about a community garden. Before you start planting, you need to know what will grow best, right? A needs assessment is similar; it’s a way to figure out if a community program is really necessary. Whether it’s an old initiative that might not be useful anymore, or a new idea that could be beneficial, you need facts to decide.

The Aim of Conducting a Needs Assessment

The goal is simple: use solid evidence to see if a program should be launched or continued. It’s like confirming whether your garden actually needs more tomatoes before you start seeding.

The Starting Step: Collecting Data

Just like testing soil before planting, you gather information to understand the problem that the program aims to solve.

By starting with a clear picture of the community’s needs, you can make informed decisions about which programs will truly make a difference.
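As a concrete illustration, here is a small Python sketch of one way to summarize responses to a single needs-assessment survey question. The question wording and the responses below are hypothetical, made up purely for demonstration.

```python
from collections import Counter

# Hypothetical responses to one needs-assessment survey item:
# "What issue should a new community program address first?"
responses = [
    "after-school activities", "traffic safety", "after-school activities",
    "job training", "after-school activities", "traffic safety",
    "job training", "after-school activities",
]

# Tally and rank the reported needs so the most common ones surface first.
tally = Counter(responses)
for need, count in tally.most_common():
    share = count / len(responses) * 100
    print(f"{need}: {count} responses ({share:.0f}%)")
```

Ranking reported needs this way gives decision-makers a quick, evidence-based starting point, though a real needs assessment would draw on multiple sources, not a single survey item.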

🔍 Reflect: If you wanted to start a new youth activity at a local school, how would you figure out if it’s really needed? What kind of information would you look for?

Understanding Process Evaluation

Exploring Process Evaluation

Imagine a new program designed to improve relations between law enforcement and the community. To assess the program’s progress, a process evaluation is conducted. This type of evaluation is like the behind-the-scenes work in a theater production—it checks if the script is being followed, the scenes are set up right, and the actors know their cues before the show can go on.

The Significance of Process Evaluation

When a new initiative is launched, stakeholders anticipate that it will yield beneficial results, enhancing safety or reducing crime. However, before these outcomes can be measured, it’s crucial to verify that the program is unfolding as it was conceived. This is where process evaluation is pivotal.

Applying Process Evaluation

This evaluation is particularly focused on the nascent phases of a program. It meticulously scrutinizes aspects such as:

  • Are the program’s activities being implemented as strategized?
  • Is the collaboration between police and community leaders developing as intended?
  • Are training sessions for officers being conducted as per the guidelines?

By meticulously evaluating these early operational details, process evaluation helps to discover and address potential discrepancies that could later affect the program’s effectiveness.
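One common process-evaluation tool is an implementation fidelity checklist. The sketch below, with entirely hypothetical checklist items, shows how such a checklist might be tallied in Python to flag where a program is drifting from its plan.

```python
# Hypothetical implementation checklist for a community policing program.
# Each item records whether an activity happened as the plan specified.
checklist = {
    "foot patrols scheduled in target neighborhoods": True,
    "monthly meetings held with community leaders": True,
    "all officers completed the training curriculum": False,
    "resident feedback line established": True,
}

completed = sum(checklist.values())
fidelity = completed / len(checklist) * 100
print(f"Implementation fidelity: {completed}/{len(checklist)} items ({fidelity:.0f}%)")

# Flag the gaps a process evaluation would report back to program staff.
for item, done in checklist.items():
    if not done:
        print(f"Not yet implemented: {item}")
```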

🔍 Reflect: Consider a community safety program or a school safety drill you’re familiar with. How would you determine if the initial steps are being executed correctly? What measures would you take to ensure the program’s foundational practices are solid?

 

Delving into Outcome Evaluation

The Concept of Outcome Evaluation

Imagine a program that aims to improve reading skills across schools in a district. To see if it’s really working, educators don’t just rely on general feedback; they look at test scores, grades, and reading levels. This is outcome evaluation in action—a thorough check-up to ensure a program achieves its goals, often with a keen eye on numbers and statistics.

The Essence of Outcome Evaluation

Outcome evaluation zeroes in on the effects of a program. It’s an empirical approach, which means it’s based on observable evidence to validate whether the program’s goals have been met. The process begins with setting clear, measurable targets.

The Outcome Evaluation Process

Here’s how social scientists approach outcome evaluation:

  1. Set Clear Goals: Like a roadmap, clear goals guide the journey. What exactly is the program trying to achieve?
  2. Develop Evaluation Questions: These questions are like the checkpoints along the way. They help to focus on what needs to be measured to determine success.
  3. Choose the Right Methods: Different questions may require different tools. Just like a doctor has many instruments to diagnose a patient, evaluators use various methods to answer their questions.
  4. Gather Reliable Data: Data is the fuel for evaluation. It must be collected carefully and methodically to ensure it’s trustworthy.
  5. Analyze the Data: Finally, the data is put under a microscope to draw meaningful conclusions. This step is about making sense of the numbers to understand the program’s true impact (a brief sketch follows this list).
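
As a hedged illustration of steps 4 and 5, the Python sketch below compares hypothetical reading scores for the same ten students before and after a program, using a paired t-test from SciPy (this assumes SciPy is installed). The scores are invented for demonstration only.

```python
from scipy import stats  # assumes SciPy is available

# Hypothetical reading scores for the same ten students,
# measured before and after the program (illustrative only).
pre = [61, 58, 70, 65, 59, 73, 62, 68, 60, 64]
post = [66, 63, 74, 66, 65, 78, 63, 72, 67, 69]

gains = [b - a for a, b in zip(pre, post)]
mean_gain = sum(gains) / len(gains)

# A paired t-test asks whether the average gain is larger than
# what chance fluctuation alone would plausibly produce.
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"Mean gain: {mean_gain:.1f} points")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Even a statistically significant gain does not by itself prove the program caused the improvement; without a comparison group, other explanations (such as normal growth over the school year) remain possible.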

🔍 Reflect: Think about a change you’ve seen or been part of, like a new policy at school or a community project. How would you measure its success? What specific outcomes would you look for?

Crafting an Evaluation Design

The Process of Choosing a Design

Think about planning a big event, like a school prom. You’d need to decide on the theme, the music, and the food, right? Similarly, after social scientists come up with their questions, they need to create a plan—or an evaluation design—to find the answers. This design is the blueprint for gathering and analyzing information to reach valid conclusions.

Key Elements of an Evaluation Design

An evaluation design combines several important parts:

  1. Evaluation Questions: The big “whats” – what are we trying to find out?
  2. Information Sources: The “wheres” – where will we get our data from?
  3. Data Collection Methods: The “hows” – how will we collect our information?
  4. Analysis Plan: The “so whats” – so what does the data tell us about the program’s performance?
  5. Study Limitations: The “buts” – but what are the potential weaknesses in our study?

By defining these elements, evaluators set the stage for meaningful discussions with stakeholders and lay a foundation for interpreting the results.
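For readers who think in code, here is one hypothetical way to record those five elements as a simple Python data structure. Nothing about this structure is standard evaluation practice; it is just a sketch showing how the pieces of a design fit together.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationDesign:
    """A hypothetical record of the five elements of an evaluation design."""
    questions: list[str]            # the "whats": what are we trying to find out?
    information_sources: list[str]  # the "wheres": where will the data come from?
    collection_methods: list[str]   # the "hows": how will we gather it?
    analysis_plan: str              # the "so whats": how will we interpret it?
    limitations: list[str] = field(default_factory=list)  # the "buts"

design = EvaluationDesign(
    questions=["Has the program reduced reported incidents?"],
    information_sources=["police incident reports", "resident surveys"],
    collection_methods=["records review", "mailed questionnaire"],
    analysis_plan="Compare incident rates and survey responses before and after launch.",
    limitations=["no comparison neighborhood available"],
)
print(design)
```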

The Iterative Nature of Designing an Evaluation

Creating an evaluation design isn’t a one-shot deal. It’s a cycle where objectives, methods, and scope are refined over time as new information comes to light or as constraints change. It’s like planning that prom again—you might need to adjust your plans as you learn more about what’s possible or what people want.

The Role of Literature Reviews

Before jumping into collecting new data, it’s crucial to dig into existing research. This helps understand the program’s context and past evaluations, shaping the current evaluation questions and methods. A literature review is like checking out previous proms to see what worked and what didn’t.

Synthesizing Prior Evaluations

When previous studies have tackled similar questions, it might be smarter to synthesize their results first. Think of it as gathering the best ideas from past events to inform your current plans. This synthesis can either confirm what’s effective or reveal why results varied, providing valuable insights for the current evaluation.

🔍 Reflect: When you’re trying to improve something at school, like a club or a process, how do you decide what to change? What information would you look for to guide your decisions?

Assessing Data Quality

Understanding Data Quality

When researchers tackle questions about a program’s effectiveness, they’re like chefs selecting the best ingredients for a recipe. The quality of the data—its relevance, accuracy, and reliability—is just as crucial as the quality of ingredients in a dish. High-quality data can make or break a study’s findings.

Evaluating Data Sources

Researchers have a variety of places to look for data, such as records, reports, or surveys. They might even observe or interview people involved with the program. Choosing where to get data is a bit like choosing which store to buy ingredients from—you want the source that offers the best quality for what you need.

Sufficient and Appropriate Evidence

Data must be sufficient—meaning there’s enough to confidently support the study’s conclusions. It also must be appropriate—relevant and trustworthy enough to back up the objectives. For instance, if a program claims to improve education, you’d want enough test-score data to support a conclusion (sufficient), and those scores would need to accurately reflect students’ learning (appropriate).

The Significance of Reliable Measures

Measures are the specific, observable things researchers look at to judge a program’s performance. They could be numbers, like graduation rates, or more descriptive, like feedback on student engagement. Choosing the right measures is essential; they need to be clearly linked to what’s being evaluated. If you want to improve tax processing accuracy, looking only at how quickly returns are processed doesn’t give you the full picture—you’d be missing the measure of accuracy.
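The tax-processing example can be sketched in a few lines of Python. The records below are hypothetical, but they show why a speed measure alone can mislead: the office looks fast, yet pairing speed with an accuracy measure reveals a problem.

```python
# Hypothetical processing records: (days_to_process, error_found)
returns = [
    (5, False), (4, False), (6, True), (3, False),
    (5, True), (4, False), (7, False), (5, True),
]

avg_days = sum(days for days, _ in returns) / len(returns)
error_rate = sum(err for _, err in returns) / len(returns) * 100

# Speed alone looks fine; pairing it with accuracy tells the full story.
print(f"Average processing time: {avg_days:.1f} days")
print(f"Error rate: {error_rate:.0f}%")
```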

The Harmony of Measures and Criteria

Just as a meal is judged by how it tastes, a program is judged by how well it meets its goals. The measures used in the study should align with the criteria for success. This alignment ensures that when the study’s over, everyone can agree the data truly reflects the program’s performance, just as diners agree a meal was delicious.

🔍 Reflect: If you were trying to prove a new school policy was effective, what kind of data would you look for? How would you decide if the data was good enough to support your argument?

Summary

Case Studies and Program Evaluation

This section began with a discussion on case studies, emphasizing their role in providing comprehensive insights into specific instances within social programs. We then transitioned to the broader scope of program evaluation, which systematically assesses the effectiveness of a program against its objectives.

Needs Assessment and Process Evaluation

The importance of needs assessments was highlighted as a crucial step in determining the necessity of a program prior to its launch. Following this, process evaluation was introduced, focusing on the initial stages of program implementation to ensure it adheres to the planned processes.

Outcome Evaluation

With outcome evaluation, we assessed the end results of programs, scrutinizing whether they achieved their intended goals through systematic data collection and analysis.

Choosing an Evaluation Design

In choosing an evaluation design, we outlined the necessity of crafting a detailed plan that allows for the gathering of credible evidence while also being adaptable to the evolving nature of the evaluation process.

Data Quality

The section also addressed the critical aspect of data quality, underscoring the need for data to be both sufficient in quantity and appropriate in terms of relevance, validity, and reliability.

Iterative Nature of Evaluation and Synthesis of Research

We emphasized the iterative nature of evaluation, the significance of conducting thorough literature reviews, and the value of synthesizing existing research to build a solid foundation for current evaluations.

Reflecting on Evaluation Principles

Finally, the section invited readers to reflect on the application of these evaluation methods in various real-world contexts, thereby bridging the gap between theoretical concepts and practical application.

Modification History

File Created:  07/25/2018

Last Modified:  11/07/2023




This work is licensed under an Open Educational Resource-Quality Master Source (OER-QMS) License.

