Not all research articles and books are of equal value. Some are more accurate and informative than others. In large part, this value stems from the methods the researcher used to gather and analyze data. If bad data are analyzed, the results will be bad. Below, a few important methodological considerations are introduced. We will examine these more closely in later chapters.
A Legal Analogy
When considering your role as a research writer, it is often helpful to think of yourself as a juror in a criminal trial. It is your job to decide the truth of a matter, just like a juror. You are a finder of fact. An opinion will not do: you must make an objective evaluation based on the evidence presented. As a research writer, your evidence is the literature. As a juror, it is up to you to decide how much weight to give each piece of evidence.
Some evidence will be inadmissible because it is highly questionable—the researcher’s equivalent of hearsay. Other evidence may be extremely compelling. To extend this analogy even further, proof in science is much like proof in a trial. We never hold science to a standard of absolute certainty. Most of the time we are dealing with a preponderance of the evidence standard. If we are lucky, we will find enough evidence to decide the matter beyond a reasonable doubt. What follows are some issues dealing with the nature and quality of research evidence.
Quantitative v. Qualitative
Because quantitative research studies rely heavily on statistics to simplify and interpret data, they are usually easy to spot. If the results section has many numbers in it, then it is a safe bet that you are dealing with a quantitative article. Quantitative articles present findings as numbers.
Knowing in advance that an article is quantitative can be helpful in understanding the content because quantitative articles tend to have some common points of emphasis. First, quantitative articles will usually have a specific hypothesis (or hypotheses) that remains unchanged throughout the study. Only after the data are analyzed will the researcher reevaluate the hypothesis. A second common element is the use of a random sample drawn from a population, or the closest approximation to random that the author could achieve. These samples tend to be very large relative to those used in qualitative research. The measuring and data collection tools tend to be very objective.
In a qualitative article, the results section will generally be presented in terms of a description of themes and trends. Qualitative articles present findings as words. Many will include quotations from participants. Often authors will note the fact that their research is qualitative in the title and the introduction. Qualitative researchers take a different approach from quantitative researchers. Whereas quantitative researchers will generally have a rigid, specific hypothesis that they are testing, the qualitative researcher will have a general problem that is being explored.
Most qualitative researchers tend to use small samples that are selected for the characteristics of the participants; random samples will rarely be encountered in this type of research. Another characteristic of qualitative research is that measurements are not nearly as structured and objective as with quantitative research. Unstructured observations and interviews are the norm.
Experimental v. Nonexperimental Research
An experimental study is one in which treatments are given to participants for the purpose of assessing the effects of the treatment.
A nonexperimental study is one in which the participants’ characteristics are measured without any attempt by the researcher to change or manipulate them.
This brings up a bad habit that you should make sure you don’t start: Do not refer to all studies as experiments. If you are including nonexperimental studies in your paper, refer to them as “studies” and not “experiments.” It is only appropriate to call a study an experiment if a treatment was administered and the effect of the treatment was assessed by the researcher.
Another important aspect of an experiment is whether or not participants were assigned to treatment conditions randomly. When there is a random assignment to groups, we can refer to the study as a true experiment. Other things being equal, true experiments have more evidentiary value than studies that use other methods of forming groups.
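Random assignment itself is mechanically simple. The sketch below (a hypothetical illustration in Python, with invented participant IDs and a fixed seed so the result is repeatable) shuffles a participant pool and splits it into treatment and control groups; nothing here is drawn from the chapter beyond the idea of random assignment.

```python
import random

# Hypothetical participant pool (illustrative IDs only)
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.seed(42)         # fixed seed so the example is repeatable
pool = participants[:]  # copy so the original list is untouched
random.shuffle(pool)    # a random ordering makes the split random

half = len(pool) // 2
treatment = pool[:half]  # these participants receive the treatment
control = pool[half:]    # these participants do not

print("Treatment:", treatment)
print("Control:  ", control)
```

Because every ordering of the pool is equally likely, any pre-existing differences among participants are spread across the two groups by chance rather than by anyone's choice, which is what gives a true experiment its evidentiary weight.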
A final important consideration of experiments is the idea of cause-and-effect relationships. The only research method suited to making cause and effect determinations is the experiment. Any causal inference drawn from nonexperimental studies is questionable because the researcher may have overlooked many other possible causes.
When a researcher attempts to draw causal inferences from a current condition and searches in the past for possible causes, the study is called causal-comparative or ex post facto (a Latin phrase meaning “after the fact”). Many studies must be causal-comparative because of ethical, legal, or financial reasons. Take the effects of drug use by mothers on a developing fetus. It would be horrific if a researcher asked a group of randomly selected and randomly assigned pregnant women to smoke crack. Thus, the researcher must look for infants that were born to drug users (a condition of the past) and try to determine the effect of that drug use.
Validity and Reliability
Most modern social scientific researchers directly address the issue of whether or not their measures are valid and reliable. In the context of measurement, validity refers to whether a measurement actually measures what it is supposed to measure. Reliability refers to how consistent or precise a measurement is. Measuring distance with a thermometer would produce invalid results because thermometers do not measure distance. Measuring distance with a rubber yardstick would produce unreliable results because the results would not be consistent: a rubber yardstick stretches, so every measurement would come out a little differently.
As a general rule, if different researchers use different methods of measuring a construct and still reach the same results, then the evidence is stronger than it would be if every study had used the same method.
Many researchers will provide various reliability coefficients as an objective way to gauge how reliable their instruments are. Reliability coefficients are interpreted like correlations—they range from 0.0 to 1.0. The closer the coefficient is to 1.0, the more reliable the measurement. The closer it is to zero, the less reliable it is.
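To make the idea concrete, the sketch below computes one common kind of reliability coefficient, a test-retest estimate, as a Pearson correlation between two administrations of the same test. All of the scores are invented for the example; this is an illustration, not a description of any particular study's method.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores from the same five participants, tested twice
time1 = [10, 12, 9, 15, 11]
time2 = [11, 12, 10, 14, 12]

reliability = pearson_r(time1, time2)
print(round(reliability, 2))  # 0.98: close to 1.0, suggesting a reliable measure
```

Because the two sets of scores rise and fall together, the coefficient lands near 1.0; if the retest scores had been scattered with no relation to the first administration, it would fall toward zero.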
Remember that most research studies use samples to find out information about a larger group—the population. For a research study to be valid, the sample must adequately reflect the characteristics of the population. If the sample does not mirror the population, then we cannot safely generalize our research findings to the population. This defeats our purpose! When considering the relative weight of a particular study, you should consider whether the sample is likely to reflect (be representative of) the population that the author was interested in. For quantitative research, random samples are considered to be the absolute best. A common method of considering the adequacy of a sample is to examine the demographic information about the participants. Such characteristics as gender, race, age, and income are commonly reported in the literature.
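One simple way to apply this advice is to compare a study's reported sample demographics against known population figures. The sketch below does exactly that check; the population proportions, the sample, and the five-percentage-point cutoff are all assumptions invented for illustration.

```python
from collections import Counter

# Hypothetical population proportions (e.g., from census figures)
population = {"female": 0.51, "male": 0.49}

# Hypothetical sample: 30 women and 70 men
sample = ["female"] * 30 + ["male"] * 70

counts = Counter(sample)
n = len(sample)

# Flag any category whose sample proportion strays more than
# five percentage points from the population proportion
flagged = []
for group, pop_prop in population.items():
    sample_prop = counts[group] / n
    if abs(sample_prop - pop_prop) > 0.05:
        flagged.append(group)

print(flagged)  # ['female', 'male']: the sample badly misrepresents both groups
```

A mismatch like this one does not automatically invalidate a study, but it should lower the weight you give its findings when generalizing to the population.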
Statistical Significance v. Effect Size
Empirical research reports will usually use inferential statistics. Inferential statistics are statistical techniques that allow us to determine how likely it is that our sample reflects the characteristics of the population. If the results of a statistical test indicate that a sample is indeed highly likely to reflect the characteristics of the population, we call the relationship statistically significant. Do not confuse statistical significance with importance: significant means something entirely different to the researcher than it does in our everyday language. It is entirely possible for a relationship between two variables to be statistically significant and yet be of no practical importance.
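The distinction can be made concrete with a little arithmetic. In the sketch below, all summary statistics are invented: two large groups differ by half a point on a scale with a 10-point standard deviation. A z-test on the difference is statistically significant, yet Cohen's d, a common effect-size measure, shows the difference is trivially small.

```python
import math

# Hypothetical summary statistics for two large groups
mean_a, mean_b = 100.0, 100.5  # group means
sd = 10.0                      # shared standard deviation
n = 10_000                     # participants per group

# Two-sample z statistic for the difference in means
z = (mean_b - mean_a) / (sd * math.sqrt(2 / n))

# Two-tailed p-value from the standard normal distribution
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Cohen's d: the difference in means in standard-deviation units
d = (mean_b - mean_a) / sd

print(f"p = {p:.4f}")  # well below .05: statistically significant
print(f"d = {d:.2f}")  # 0.05: a negligible effect in practical terms
```

With only 100 participants per group, the same half-point difference would fall far short of significance, yet the effect size would remain 0.05 either way: significance depends heavily on sample size, while the effect size describes the magnitude of the difference itself.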
Characteristic Flaws of Empirical Research Reports
Empirical research reports vary greatly in their content and quality. Still, there are certain aspects of the research writing process that are very common. You may find some of these things problematic when writing your literature reviews.
Characteristic 1: The researcher only deals with part of the problem. Researchers are usually interested in a broad problem area, but time, money, and methodological constraints usually force researchers to focus on a very specific aspect of the problem. The primary methodological limitation is that empirical researchers must reduce everything to numbers for description and analysis. This means that you must consider many different reports on subtopics and synthesize a view of the larger problem for yourself.
Characteristic 2: Researchers use flawed methods of observation. No measure is perfectly reliable, no matter how well conceived and executed it is. You must pay careful attention to how valid and reliable measurements are for every study that you review. The more abstract the concept, the larger measurement errors are likely to be. Researchers can measure characteristics of people like age and gender with a high degree of accuracy. In research terms, the data are reliable and valid. Abstract concepts like “racial attitudes” and “liberalism” are much more difficult to measure accurately. This is a critical issue in empirical research. Remember the mantra of computer scientists: Garbage in, garbage out. Invalid and unreliable data will produce invalid and unreliable results.
Characteristic 3: Researchers use flawed samples. Ideally, researchers want samples that are created by drawing individuals from the population at random. This is seldom feasible. Many times, researchers are forced to use samples of convenience. That is, they use samples that are readily available. A sample of college sophomores in a research class is obviously flawed: how can we generalize from this sample of college students to all American voters? When evaluating the quality of a researcher’s sample, two issues must be considered.
First, we must ask how well the sample reflects the attributes of the population of interest (those individuals we want the results to generalize to). Second, we must ask whether the researcher could have reasonably obtained a better sample. In an investigation of unique populations, such as the homeless or crack cocaine users, it is impossible to find a “list” of all the members of the sample population. In cases like this, random samples are impossible to obtain. Other, less desirable sampling techniques must be used. Even though these techniques may limit our ability to generalize our results, it seems counterproductive to science not to conduct studies using these types of samples when they are the only reasonable option.
Characteristic 4: Researchers make errors in data analysis. Summarizing large quantities of information accurately is a challenge. Researchers can make errors in entering data into a computer. They can violate assumptions and make other errors in their statistical analyses. Different statistical methods can yield different results. Even qualitative analysis can be prone to error because different researchers may interpret the same observations differently. If the results of one study seem to conflict with the findings of several other studies, errors such as these are a possible explanation.
Characteristic 5: Researchers often do not include important information. Even the best and most detailed research reports may not contain all the information that you want. There are several reasons for this. First, professional journals must conserve space. Journal publishing is an expensive business and editors are ethically obligated to get the most “bang for their buck.” Another issue is that the researcher may have had a slightly different focus than you do. Facts and information that you consider critical to the issue may not have been important at all to a particular researcher. Often you will find that the research evidence is insufficient to make a judgment in a given problem area.
Characteristic 6: Researchers often publish studies that are methodologically weak. Sometimes, weak articles slip past careless editors and reviewers. At other times, methodologically weak studies are published because they consider a new or interesting problem. Sometimes such weak studies are called pilot studies. Pilot studies are studies done so that the researcher can get his or her “feet wet” before committing to a larger, more time-consuming, and more expensive study. Pilot studies are often conducted when a new research method is being tried.
Characteristic 7: Researchers never prove anything. After reading the points made before this one, it should not come as a surprise to find this characteristic listed. With all the pitfalls, it is impossible to find a single research report conclusive on an issue. Be very skeptical if a researcher claims to have “proven” something, and never use any form of the word “proof” in your own writing. Always hedge your bets. Use terms that vary in intensity to reflect the degree of confidence that you have in the evidence.
Plagiarism
Plagiarism is the use of another author’s words, ideas, arguments, or any other fruits of intellectual labor without giving due credit. Plagiarism generally arises in two ways. The first is inadvertent plagiarism, which happens when an author gets careless or lazy. The second is the more culpable circumstance in which an author intentionally steals another author’s work. Both are equally damaging to the author and should be avoided at all costs.
Plagiarism v. Common Knowledge
Facts, opinions, and beliefs that are known to many people are considered “common knowledge” and do not have to be cited. Different people have a different take on what constitutes common knowledge; students are advised to check with each professor individually about what is or is not considered common knowledge. A general rule is that if a fact can be found without any accompanying documentation in at least five sources, then there is no need to cite it. Even so, it is often helpful to cite common knowledge when it may be of interest to your reader. Historical facts, general observations, and information that routinely appears without attribution are usually considered common knowledge.
The earth is round. George Washington was the first president of the United States. Winter is very cold in Alaska. Southern conservatives generally oppose abortions. These are examples of things that do not usually require citations. When in doubt, cite.
An author’s distinctive “style” (word choice, organization of material, sentence patterns), original facts (such as research findings), and new, original ideas all require citation.
Modification History
File Created: 07/25/2018
Last Modified: 07/25/2018
This work is licensed under an Open Educational Resource-Quality Master Source (OER-QMS) License.