Applied research refers to research that occurs outside of an academic setting with the goal of producing positive change. In other words, it is the application of social scientific theory and research methods to solve (or at least mitigate) social problems. A major category of applied research is program evaluation. We will dedicate a considerable proportion of our time to that important topic.
Case Studies
A case study involves studying a particular individual in a very intense, detailed, and often personal way. Most case studies are conducted because there is some interesting attribute associated with the person (or institution) being studied. In case study research (also called case research), then, the researcher intensively studies a phenomenon over time, usually within its natural setting.
Case studies can use several of the methods that we’ve already discussed to collect data: Interviews, observations, archival records, and secondary data may all be used to make inferences about the phenomenon of interest. Such studies tend to be very detailed and contextualized. Contextualized means that the researcher takes the environment where the phenomenon takes place into account. This is extremely important in therapeutic evaluations where a client’s environment can be a critical element in the creation of negative behaviors and psychological states. The method is very versatile; it can be used in theory building, theory testing, and program evaluation.
In the recent history of social research, heavily quantitative methods based on experimental and quasi-experimental designs have come to dominate the professional journals. Because of this, there is a perception among some researchers that case studies are an unsophisticated method. In reality, the levels of objectivity and rigor in a study have more to do with the design choices of the particular researcher than with the case study method itself. In some research situations, the case study has advantages over other methods. A major advantage of the case study method is that it is not as rigid as traditional theory-testing methods such as true experiments.
Scholars and practitioners conducting case studies can have research questions that evolve over time as more information is gathered. Case studies are especially valuable when the researcher does not have enough information about the phenomenon to develop formal hypotheses. Case studies also have the advantage of capturing more contextual data, thus providing for a more authentic interpretation of the phenomenon under study. Less rigidity also means that the researcher can shift the level of analysis and examine both individual and organizational characteristics.
The case study method is versatile and should not be ignored, but it does have some weaknesses that you should be aware of. The “authenticity” of the method means that there are no experimental controls, so the internal validity of inferences can be very weak. This problem with internal validity is not unique to case studies: Recall that true experiments remain the best method of making valid causal inferences, and any design that lacks experimental controls shares this limitation. Because case studies rely on the insightfulness of the researcher to identify themes and trends rather than testing formal hypotheses, the method is often regarded as too subjective by its detractors. Because inferences drawn from case studies are highly contextualized, generalizations to other individuals and organizations must be made with extreme caution.
Students often see case studies as an “easy” alternative to more complex experimental methods. In truth, high-quality case study research is very difficult and requires advanced skills on the part of the researcher. Consumers of case study research must always ask, “Will these inferences hold in my particular situation?” Another potential problem with poorly done case studies is that the researcher fails to specify any meaningful research questions. This can result in a study that offers no meaningful inferences or specific answers to social problems.
As we learned in our discussion of sampling techniques, samples of convenience often lead to terribly inaccurate results. The same caveat applies to the cases that a researcher chooses in a case study. Individuals (or organizations) should not be chosen because they are convenient; they should be chosen because they are highly relevant to understanding the problem under study. Case studies can also be difficult for the consumer to evaluate. Published case studies often do not provide adequate information as to how the data were gathered, how the data were analyzed, and how themes and trends were identified.
Conducting Case Research
Most case research studies tend to be interpretive in nature. Interpretive case research is an inductive technique in which evidence collected from one or more case sites is systematically analyzed and synthesized to allow concepts and patterns to emerge for the purpose of building new theories or expanding existing ones. Eisenhardt (1989) proposes a “roadmap” for building theories from case research, a slightly modified version of which is described below. For positivist case research, some of the following stages may need to be rearranged or modified; however, sampling, data collection, and data analytic techniques should generally remain the same.
Define research questions. Like any other scientific research, case research must also start with defining research questions that are theoretically and practically interesting, and identifying some intuitive expectations about possible answers to those research questions or preliminary constructs to guide initial case design. In positivist case research, the preliminary constructs are based on theory, while no such theory or hypotheses should be considered ex ante in interpretive research. These research questions and constructs may be changed in interpretive case research later on, if needed, but not in positivist case research.
Select case sites. The researcher should use a process of “theoretical sampling” (not random sampling) to identify case sites. In this approach, case sites are chosen based on theoretical, rather than statistical, considerations, for instance, to replicate previous cases, to extend preliminary theories, or to fill theoretical categories or polar types. Care should be taken to ensure that the selected sites fit the nature of research questions, minimize extraneous variance or noise due to firm size, industry effects, and so forth, and maximize variance in the dependent variables of interest.
For instance, if the goal of the research is to examine how some firms innovate better than others, the researcher should select firms of similar size within the same industry to reduce industry or size effects, and select some more innovative and some less innovative firms to increase variation in firm innovation. Instead of cold-calling or writing to a potential site, it is better to contact someone at the executive level inside each firm who has the authority to approve the project, or someone who can identify a person of authority. During initial conversations, the researcher should describe the nature and purpose of the project, any potential benefits to the case site, how the collected data will be used, the people involved in data collection (other researchers, research assistants, etc.), desired interviewees, and the amount of time, effort, and expense required of the sponsoring organization. The researcher must also assure the confidentiality, privacy, and anonymity of both the firm and the individual respondents.
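To make this selection logic concrete, the sketch below mimics it with hypothetical data in Python. Theoretical sampling is ultimately a judgment call rather than an algorithm, and the firm names, fields, and cutoffs here are invented solely for illustration.

```python
# Illustrative sketch only: theoretical sampling is judgment-driven, but the
# selection logic described above can be mimicked with hypothetical data.
# All firm names, fields, and thresholds below are invented for illustration.

firms = [
    {"name": "Firm A", "industry": "software", "employees": 480,  "innovation_score": 8.2},
    {"name": "Firm B", "industry": "software", "employees": 510,  "innovation_score": 3.1},
    {"name": "Firm C", "industry": "software", "employees": 455,  "innovation_score": 7.9},
    {"name": "Firm D", "industry": "retail",   "employees": 500,  "innovation_score": 6.5},
    {"name": "Firm E", "industry": "software", "employees": 2050, "innovation_score": 2.8},
]

# Hold industry and size roughly constant to reduce extraneous variance.
candidates = [f for f in firms
              if f["industry"] == "software" and 400 <= f["employees"] <= 600]

# Maximize variance on the dependent variable of interest (innovation):
# pick "polar types" -- the most and least innovative of the comparable firms.
candidates.sort(key=lambda f: f["innovation_score"])
least_innovative, most_innovative = candidates[0], candidates[-1]

print("Selected sites:", least_innovative["name"], "and", most_innovative["name"])
```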
Create instruments and protocols. Since the primary mode of data collection in case research is interviews, an interview protocol should be designed to guide the interview process. This is essentially a list of questions to be asked. Questions may be open-ended (unstructured), closed-ended (structured), or a combination of both. The interview protocol must be strictly followed, and the interviewer must not change the order of questions or skip any question during the interview process, although some deviations are allowed to probe further into a respondent’s comments that are ambiguous or interesting. The interviewer must maintain a neutral tone and not lead respondents in any specific direction, say by agreeing or disagreeing with any response. More detailed interviewing techniques are discussed in the chapter on surveys. In addition, other sources of data, such as internal documents and memorandums, annual reports, financial statements, newspaper articles, and direct observations, should be sought to supplement and validate the interview data.
Select respondents. Select interview respondents at different organizational levels, departments, and positions to obtain divergent perspectives on the phenomenon of interest. A random sample of interviewees is preferable; however, a snowball sample is acceptable as long as a diversity of perspectives is represented in the sample. Interviewees must be selected based on their personal involvement with the phenomenon under investigation and their ability and willingness to answer the researcher’s questions accurately and adequately, not based on convenience or access.
Start data collection. It is usually a good idea to electronically record interviews for future reference. However, such recording must only be done with the interviewee’s consent. Even when interviews are being recorded, the interviewer should take notes to capture important comments or critical observations, behavioral responses (e.g., respondent’s body language), and the researcher’s personal impressions about the respondent and his/her comments. After each interview is completed, the entire interview should be transcribed verbatim into a text document for analysis.
Conduct within-case data analysis. Data analysis may follow or overlap with data collection. Overlapping data collection and analysis has the advantage of allowing the researcher to adjust the data collection process based on themes emerging from data analysis, or to probe further into those themes. Data analysis is done in two stages. In the first stage (within-case analysis), the researcher should examine emergent concepts separately at each case site, along with the patterns between these concepts, to generate an initial theory of the problem of interest. The researcher can interpret the data subjectively to “make sense” of the research problem in conjunction with her personal observations or experience at the case site. Alternatively, a coding strategy such as Glaser and Strauss’s (1967) grounded theory approach, using techniques such as open coding, axial coding, and selective coding, may be used to derive a chain of evidence and inferences. These techniques are discussed in detail in a later chapter. Homegrown techniques, such as graphical representation of data (e.g., network diagrams) or sequence analysis (for longitudinal data), may also be used. Note that there is no predefined way of analyzing the various types of case data, and the data analytic techniques can be modified to fit the nature of the research project.
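The coding step can be illustrated with a minimal sketch. The snippet below is not grounded theory itself (open, axial, and selective coding are interpretive acts); it simply shows how hand-assigned open codes from interview segments might be tallied and rolled up into broader categories. All segments, codes, and categories are hypothetical.

```python
from collections import Counter

# Illustrative sketch only: this merely tallies hand-assigned open codes and
# groups them into broader (axial) categories. Codes and categories are hypothetical.

# Open coding: each interview segment has been hand-labeled with one or more codes.
coded_segments = [
    ("We never hear back after submitting ideas.",   ["feedback_gap"]),
    ("Management rewards people who experiment.",    ["risk_tolerance", "incentives"]),
    ("There is no budget for pilot projects.",       ["resource_constraint"]),
    ("My supervisor encourages trying new tools.",   ["risk_tolerance"]),
]

# Tally code frequencies within the case site.
code_counts = Counter(code for _, codes in coded_segments for code in codes)

# Axial coding: group related open codes under broader categories.
categories = {
    "organizational climate": ["risk_tolerance", "incentives"],
    "structural barriers": ["feedback_gap", "resource_constraint"],
}

for category, codes in categories.items():
    total = sum(code_counts[c] for c in codes)
    print(f"{category}: {total} coded segments ({', '.join(codes)})")
```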
Conduct cross-case analysis. Multi-site case research requires cross-case analysis as the second stage of data analysis. In such analysis, the researcher should look for similar concepts and patterns between different case sites, ignoring contextual differences that may lead to idiosyncratic conclusions. Such patterns may be used for validating the initial theory, or for refining it (by adding or dropping concepts and relationships) to develop a more inclusive and generalizable theory. This analysis may take several forms. For instance, the researcher may select categories (e.g., firm size, industry, etc.) and look for within-group similarities and between-group differences (e.g., high versus low performers, innovators versus laggards). Alternatively, she can compare firms in a pair-wise manner, listing similarities and differences across pairs of firms.
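A minimal sketch of the “within-group similarities, between-group differences” comparison appears below. The case sites, groupings, and theme counts are hypothetical, and in practice this comparison is interpretive rather than purely numerical.

```python
# Illustrative sketch only: a simple way to compare coded themes across case sites
# once each site's data have been analyzed. Sites, groups, and counts are hypothetical.

site_themes = {
    "Site 1": {"group": "innovator", "themes": {"risk_tolerance": 9, "resource_constraint": 2}},
    "Site 2": {"group": "innovator", "themes": {"risk_tolerance": 7, "resource_constraint": 3}},
    "Site 3": {"group": "laggard",   "themes": {"risk_tolerance": 1, "resource_constraint": 8}},
    "Site 4": {"group": "laggard",   "themes": {"risk_tolerance": 2, "resource_constraint": 7}},
}

def group_average(group, theme):
    """Average number of coded segments for a theme across sites in one group."""
    values = [s["themes"].get(theme, 0) for s in site_themes.values() if s["group"] == group]
    return sum(values) / len(values)

for theme in ("risk_tolerance", "resource_constraint"):
    print(theme,
          "- innovators:", group_average("innovator", theme),
          "laggards:", group_average("laggard", theme))
```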
Build and test hypotheses. Based on emergent concepts and themes that are generalizable across case sites, tentative hypotheses are constructed. These hypotheses should be compared iteratively with the observed evidence to see if they fit the data; if not, the constructs or relationships should be refined. The researcher should also compare the emergent constructs and hypotheses with those reported in the prior literature to make a case for their internal validity and generalizability. Conflicting findings must not be rejected, but rather reconciled using creative thinking to generate greater insight into the emergent theory. When further iterations between theory and data yield no new insights or changes in the existing theory, “theoretical saturation” is reached and the theory building process is complete.
Write case research report. In writing the report, the researcher should describe very clearly the detailed process used for sampling, data collection, data analysis, and hypotheses development, so that readers can independently assess the reasonableness, strength, and consistency of the reported inferences. A high level of clarity in research methods is needed to ensure that the findings are not biased by the researcher’s preconceptions.
Adapted from Bhattacherjee, A. (2012). Social Science Research: Principles, Methods, and Practices. Tampa: USF Tampa Library Open Access Collections. (p. 104).
Program Evaluation
“A program evaluation is a systematic study using research methods to collect and analyze data to assess how well a program is working and why” (GAO, 2012, p. 3). In addition, program evaluation methods can be used to determine whether a program is needed in the first place. A key component of program evaluation is that the program (however “program” is defined) must have a clear set of articulated objectives. This fact suggests that good programs should include program evaluation from the very start. When federal grant funds are at stake, program developers must clearly articulate the need for the program, a rationale for why the proposed program will work, and a detailed plan for evaluating the outcomes of the program. In an age of government accountability, federal agencies are not likely to fund programs that cannot demonstrate positive outcomes relative to the money spent. For these reasons, it is important to understand the methods of program evaluation. The first stage in conducting a program evaluation is to develop a research design.
By this point in the text, the importance of designing any research project should be apparent. Good designs will enhance the quality, usefulness, and credibility of the study. This general statement about social science research includes program evaluations. Most of the methods used in program evaluation are those already discussed under the various research topics considered in previous sections. When it comes to grant funding, program evaluators must clearly articulate the research plan in writing.
Five Key Steps to an Evaluation Design
Evaluations are studies tailored to answer specific questions about how well (or whether) a program is working. To ensure that the resulting information and analyses meet decision makers’ needs, it is particularly useful to isolate the tasks and choices involved in putting together a good evaluation design. We propose that the following five steps be completed before significant data are collected; they give structure to the discussion that follows:
1. Clarify understanding of the program’s goals and strategy.
2. Develop relevant and useful evaluation questions.
3. Select an appropriate evaluation approach or design for each evaluation question.
4. Identify data sources and collection procedures to obtain relevant, credible information.
5. Develop plans to analyze the data in ways that allow valid conclusions to be drawn on the evaluation questions.
Adapted from the U.S. Government Accountability Office (2012). Designing Evaluations: 2012 Revision. Washington, D.C.: GAO. Retrieved from http://www.gao.gov/assets/590/588146.pdf
Needs Assessments
The basic purpose of a needs assessment is to determine whether a program is needed. This can include determining whether an existing program has become unnecessary or whether a new program would be beneficial. From the social scientific standpoint, the need for the program should be established using empirical evidence. The first step in this process is to gather data that establish an accurate picture of the nature and scope of the problem that the program is designed to alleviate.
Process Evaluation
When governments and non-government organizations spend money on programs, they expect those programs to have an impact on society. In other words, programs by definition have outcomes. In the early stages of program development, however, the impact of the immature program may not be meaningful. At this formative stage, the more appropriate questions concern whether or not the program is being implemented as planned. Evaluation studies designed to address the quality or efficiency of program operations or their fidelity to program design are frequently called process evaluations or implementation evaluations. Process evaluations are extremely useful in pinpointing causes of unexpected outcomes later in the life of a program.
Outcome Evaluation
Most program evaluations focus on a simple question: Did the program have the intended outcomes? In other words, outcome evaluations are designed to demonstrate empirically that the program worked as intended. The first step is to clearly identify the goals and objectives of the program. Because quantitative evidence is considered more objective, program outcomes must be articulated with sufficient specificity to measure those outcomes numerically. Once the objectives have been clearly established, then the evaluator can begin to develop evaluation questions. Once these questions have been identified, an evaluation approach can be identified for each question. Note that different questions will be answered best by different methods, so different research design elements may be necessary within the scope of an evaluation project.
Next, the evaluator will consider the best method to collect valid and reliable data consistent with the chosen design. Both the sources of data and the collection procedures must be identified. Once the nature of the data has been determined, the evaluator can move on to specify how the data are to be analyzed. The ultimate goal of this data analysis plan is to yield valid conclusions about the evaluation questions.
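As a simple illustration, suppose the chosen design compares a numerically measured outcome between program participants and a comparison group. The sketch below uses hypothetical scores and a basic difference-in-means test; a real evaluation would follow whatever analysis plan the design actually specifies.

```python
# Illustrative sketch only: one simple analysis consistent with a comparison-group
# design is a difference-in-means test on a numerically measured outcome.
# The outcome scores below are hypothetical.

from statistics import mean
from scipy import stats  # widely used scientific computing library

program_group = [72, 68, 75, 80, 71, 77, 69, 74]      # outcome scores, participants
comparison_group = [65, 70, 62, 68, 64, 66, 71, 63]   # outcome scores, non-participants

difference = mean(program_group) - mean(comparison_group)
t_statistic, p_value = stats.ttest_ind(program_group, comparison_group)

print(f"Mean difference: {difference:.1f} points")
print(f"t = {t_statistic:.2f}, p = {p_value:.3f}")
```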
Choosing a Design
Once evaluation questions have been formulated, the next step is to develop an evaluation design—to select appropriate measures and comparisons that will permit drawing valid conclusions on those questions. In the design process, the evaluator explores the variety of options available for collecting and analyzing information and chooses alternatives that will best address the evaluation objectives within available resources. Selecting an appropriate and feasible design, however, is an iterative process and may result in the need to revise the evaluation questions.
Key Components of an Evaluation Design
An evaluation design documents the activities best able to provide credible evidence on the evaluation questions within the time and resources available, as well as the logical basis for drawing strong conclusions on those questions. The basic components of an evaluation design include the following:
the evaluation questions, objectives, and scope;
information sources and measures, or what information is needed;
data collection methods, including any sampling procedures, or how information or evidence will be obtained;
an analysis plan, including evaluative criteria or comparisons, or how or on what basis program performance will be judged or evaluated;
an assessment of study limitations.
Clearly articulating the evaluation design and its rationale in advance aids in discussing these choices with the requester and other stakeholders. Documenting the study’s decisions and assumptions helps manage the study and assists in writing the report and interpreting the results; a simple way to record these components is sketched below.
Adapted from the U.S. Government Accountability Office (2012). Designing Evaluations: 2012 Revision. Washington, D.C.: GAO. Retrieved from http://www.gao.gov/assets/590/588146.pdf
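The design components listed above can be recorded in a simple structured form to support this kind of documentation. The sketch below is illustrative only and is not a GAO template; the field names and example content are hypothetical.

```python
# Illustrative sketch only: recording the basic components of an evaluation design
# in a structured form. Field names and example content are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationDesign:
    questions: List[str]                 # evaluation questions, objectives, and scope
    information_sources: List[str]       # what information is needed, and from where
    data_collection_methods: List[str]   # how evidence will be obtained, incl. sampling
    analysis_plan: str                   # criteria or comparisons for judging performance
    limitations: List[str] = field(default_factory=list)  # known study limitations

design = EvaluationDesign(
    questions=["Did participants' employment rates improve relative to non-participants?"],
    information_sources=["program administrative records", "state employment data"],
    data_collection_methods=["extract of participant records", "matched comparison sample"],
    analysis_plan="Compare one-year employment rates between matched groups.",
    limitations=["administrative records may undercount informal employment"],
)
print(design.questions[0])
```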
Recall from Chapter 1 our discussion of the cyclical nature of the research endeavor. This is no less true for program evaluation. Designing an evaluation plan is iterative: evaluation objectives, scope, and methodology are defined together because what determines them often overlaps. Data limitations or new information about the program may arise as work is conducted and have implications for the adequacy of the original plans or the feasibility of answering the original questions. For example, a review of existing studies of alternative program approaches may uncover too few credible evaluations to support conclusions about which approach is most effective. Thus, evaluators should consider the need to make adjustments to the evaluation objectives, scope, and methodology throughout the project.
Nevertheless, the design phase of an evaluation is a period for examining options for answering the evaluation questions and for considering which options offer the strongest approach, given the time and resources available. After reviewing materials about the program, evaluators should develop and compare alternative designs and assess their strengths and weaknesses. For example, in choosing between using program administrative data or conducting a new survey of program officials, the evaluator might consider whether 1) the new information collected through a survey would justify the extra effort required, or 2) a high-quality survey can be conducted in the time available.
Very early on in this text, we discussed the importance of conducting quality literature reviews. Now that we approach the end, we once again encounter the same set of skills. A key first step in designing an evaluation is to conduct a literature review in order to understand the program’s history, related policies, and knowledge base. A review of the relevant policy literature can help focus evaluation questions on knowledge gaps, identify design and data collection options used in the past, and provide important context for the requester’s questions. An agency’s strategic plan and annual performance reports can also provide useful information on available data sources and measures and the efforts made to verify and validate those data.
Recall that synthesis is a key element of writing a good literature review. When a literature review reveals that several previous studies have addressed the evaluation question, then the evaluator should consider conducting a synthesis of their results before collecting new data. An evaluation synthesis can answer questions about overall program effectiveness or whether specific features of the program are working.
Findings supported by a number of soundly designed and executed studies add strength to the knowledge base exceeding that of any single study, especially when the findings are consistent across studies that used different methods. If, however, the studies produced inconsistent findings, systematic analysis of the circumstances and methods used across a number of soundly designed and executed studies may provide clues to explain variations in program performance. For example, differences between communities in how they staff or execute a program or in their client populations may explain differences in their effectiveness.
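When the prior studies report comparable quantitative effect estimates, one common way to synthesize them is a fixed-effect, inverse-variance weighted average. The sketch below uses hypothetical effect estimates and standard errors; a real synthesis would also weigh study quality, design, and consistency, as discussed above.

```python
# Illustrative sketch only: fixed-effect, inverse-variance weighted pooling of
# effect estimates from prior studies. The estimates and standard errors are hypothetical.

# (effect estimate, standard error) from each prior study
studies = [(0.30, 0.10), (0.25, 0.08), (0.40, 0.15), (0.28, 0.12)]

weights = [1 / se**2 for _, se in studies]                 # inverse-variance weights
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f})")
```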
Data Quality
Depending on the program and study question, potential sources for evidence on the evaluation question include program administrative records, grantee reports, performance monitoring data, surveys of program participants, and existing surveys of the national population or private or public facilities. In addition, the evaluator may choose to conduct independent observations or interviews with public officials, program participants, or persons or organizations doing business with public agencies. In selecting sources of evidence to answer the evaluation question, the evaluator must assess whether these sources will provide evidence that is both sufficient and appropriate to support findings and conclusions on the evaluation question.
Sufficiency refers to the quantity of evidence—whether it is enough to persuade a knowledgeable person that the findings are reasonable. Appropriateness refers to the relevance, validity, and reliability of the evidence in supporting the evaluation objectives. The level of effort required to ensure that computer-processed data (such as agency records) are sufficiently reliable for use will depend on the extent to which the data will be used to support findings and conclusions and the level of risk or sensitivity associated with the study.
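A basic reliability review of computer-processed records can be illustrated with a short sketch. The records, field names, and validity rules below are hypothetical; the point is simply that assessing reliability involves systematically testing the data against expectations before relying on them.

```python
# Illustrative sketch only: simple completeness and validity checks of the kind an
# evaluator might run on agency records. Records, fields, and rules are hypothetical.

records = [
    {"case_id": "A-101", "benefit_amount": 450.0, "months_enrolled": 6},
    {"case_id": "A-102", "benefit_amount": None,  "months_enrolled": 14},
    {"case_id": "A-103", "benefit_amount": -25.0, "months_enrolled": 3},
]

missing = [r["case_id"] for r in records if r["benefit_amount"] is None]
out_of_range = [r["case_id"] for r in records
                if r["benefit_amount"] is not None and not (0 <= r["benefit_amount"] <= 2000)]
implausible = [r["case_id"] for r in records if not (0 <= r["months_enrolled"] <= 12)]

print("Missing benefit amounts:", missing)
print("Out-of-range benefit amounts:", out_of_range)
print("Implausible enrollment durations:", implausible)
```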
Measures are the concrete, observable events or conditions (or units of evidence) that represent the aspects of program performance of interest. Some evaluation questions may specify objective, quantifiable measures, such as the number of families receiving program benefits, or qualitative measures, such as the reasons for noncompliance. But often the evaluator will need to select measures to represent a broader characteristic, such as “service quality.” It is important to select measures that clearly represent, or are related to, the performance the evaluator is trying to assess. For example, a measure of the average processing time for tax returns does not represent and is not clearly related to the goal of increasing the accuracy of tax return processing. Measures are most usefully selected in concert with the criteria against which program performance will be assessed, so that agreement can be reached on the sufficiency and appropriateness of the evidence for drawing conclusions on those criteria.
APA Citation: McKee, A. J. (2019). Fundamentals of Social Research. Forma Pauperis Press. https://www.docmckee.com/WP/oer/research-contents/
This work is licensed under an Open Educational Resource-Quality Master Source (OER-QMS) License.