
Estimations and Models

Our main objective in this class is to use the tools of mathematics to make better decisions. We’ll explore where math is and isn’t useful, how to fit a real-world situation into a mathematical model, and how to perform an accurate and legible computation that others can follow.

This section of the text introduces the idea of defining a decision, the estimation needed to support the decision, and a mathematical model to make the estimation. After we have made the estimation, we can evaluate its impact on our decision.

An important part of the estimation process is letting go of the idea that your answer will be “correct”. We often must make important decisions without complete information. Your answer will be an imperfect but useful estimate that gives insight into the decision you are making. As you begin an estimation, ask yourself: how accurate do I need to be? This is a very different practice from many math problems, which have exact answers.

Our scientific progress and the strength of our democracy are based on a shared understanding of how the world is and what we should do to improve it. Many of these claims about how we should change our world are based on quantitative evidence. Often the most challenging part of creating estimations and models is determining what question you are trying to answer and what decisions you should make based on the evidence.

  • Are plant-based diets a useful tool against climate change?
  • Are we spending too much money on health care?
  • What are the best ways to promote human health?

We also see people posting deliberately false but plausible news on the web because it earns them money. How will you use your knowledge of mathematics to sharpen your critical thinking and separate falsehoods from legitimate reporting? Will you be able to look at quantitative data and see any problems of justice that the data show? You are frequently presented with arguments persuading you to think, act, spend, or vote in a certain way. How will you evaluate the validity of these arguments, especially when mathematics is involved?

Critical Thinking and Arguments

There are several important concepts from critical thinking that we will use in this class:

  • What makes a strong argument?
  • What are logical fallacies?

Many arguments rely on mathematical claims. When evaluating such an argument, ask:

  • Is the argument sound?
  • Are the mathematical claims made to support the argument valid?
  • What assumptions do these claims make? Would you agree with them?
  • What alternate explanations are possible?

Evidence vs Allegations

It is important to distinguish between a quantitative allegation and evidence. Here are some questions to consider when you hear an allegation:

  • Is the allegation believable?
  • Is evidence being presented that supports the allegation?
  • Is there any evidence that could refute the allegation?

The Principle of Charity

In this class we will always use the principle of charity to evaluate arguments, even when we do not agree with them. The principle of charity says we should:

  • Treat an argument as rational and worthy of exploration
  • Seek to understand a quantitative analysis we disagree with, since only by understanding it can we find its flaws

Limits of Quantitative Argument and Reasoning

  • Mathematics can describe the world as it is, but it is not a good tool for claims about how the world should be
  • Positive statements are about how the world is
  • Normative statements are about how we think the world should be

Writing w/Calculator

Characteristics of Quantitative Writing Assignments:

  • Unlike conventional (non-quantitative) writing assignments, QW assignments require students to analyze and interpret quantitative data. Writers must use numbers in a variety of ways to help them define a problem, to see alternative points of view, to speculate about causes and effects, and to create evidence-based arguments. Often they must learn to construct and reference their own tables or graphs.
  • Quantitative writing generally presents students with an "ill-structured problem," requiring the analysis of quantitative data in an ambiguous context without a clear right answer. Unlike a math "story problem," which is usually a well-structured problem with a single right answer, a QW assignment requires students to formulate a claim for a best solution and support it with reasons and evidence.
  • Quantitative writing forces students to contemplate the meaning of numbers, to understand where the numbers come from and how they are presented. Students must consider, for example, the different effects of using ordinal numbers versus percentages, means versus medians, raw numbers versus adjusted numbers, exact numbers versus approximated or rounded numbers, and so forth (a short illustrative sketch of the mean-versus-median distinction follows this list). At more advanced levels, students must understand the interpretive meaning of a standard deviation, the function of a chi-square test, or the purpose of specific kinds of algorithms in their disciplines. In all cases, they must consider their communicative goals and their audience's interests, needs, and background, and use numbers effectively within that rhetorical context.
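
As a small illustration of the mean-versus-median contrast mentioned above (our sketch; the income figures are invented placeholders, not data from any assignment), a single extreme value can pull the mean far from the median:

    # Hypothetical household incomes in dollars; the last value is an outlier.
    import statistics

    incomes = [28_000, 31_000, 34_000, 36_000, 39_000, 41_000, 45_000, 52_000, 400_000]

    print(statistics.mean(incomes))    # about 78,444: pulled up by the single outlier
    print(statistics.median(incomes))  # 39,000: closer to the typical household

A writer who reports only the mean tells a very different story from one who reports the median, which is exactly the kind of interpretive choice quantitative writing asks students to notice and justify.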

Types of Quantitative Writing Assignments

Quantitative Writing doesn't have to mean writing a research paper. In fact, the majority of QW assignments are less ambitious than that. QW assignments can be designed in a variety of forms as indicated below.

  • Genre, audience and purpose - Good writing assignments include a rhetorical context for authors: What form should the writing take, to whom is it addressed and for what rhetorical purpose?
  • Length, stakes and complexity - QW assignments can range from very short to very long; they can be weighted little or much towards a student's grade; and they can employ simple or complex quantitative reasoning.
  • Informal writing - Quantitative writing need not be formal writing.
  • QW in formats other than essays - QW assignments need not be papers, per se.

Example of a Quantitative Writing Assignment

The following contains the core sentences from a representative QW assignment.

"Over the last century, the number of salmon that return to California rivers has been decreasing. Is this a serious problem? Should anything be done in response to this situation? You will investigate questions like this in your essay. The table below gives data for the number of Chinook salmon (in thousands) from 1986 to 2000."

This challenging assignment asks students to create an argument about salmon based on tabular data that students must analyze and interpret. To do the assignment, students must make inferences from the table, do calculations, convert tabular data to bar or line graphs, and then use the data meaningfully in their own arguments. The quantitative methods required are only moderately complex, but the questions posed ("Is this a serious problem? Should anything be done?") make clear that this is an ill-structured problem. In the complete assignment, note how the instructors (Michael Burke and Jean Mach of the College of San Mateo) include intermediate steps that help guide students through their analysis of the data.

The salmon problem is just one example of the dozens of ways that instructors can create engaging quantitative writing assignments.

Revisiting the Quantitative-Qualitative Debate: Implications for Mixed-Methods Research

JOANNA E. M. SALE

Institute for Work & Health; Health Research Methodology Program, Department of Clinical Epidemiology & Biostatistics, McMaster University

LYNNE H. LOHFELD

St. Joseph’s Hospital and Home; Department of Clinical Epidemiology & Biostatistics, McMaster University

KEVIN BRAZIL

St. Joseph’s Health Care System Research Network, St. Joseph’s Community Health Centre; Department of Clinical Epidemiology & Biostatistics, McMaster University

Health care research includes many studies that combine quantitative and qualitative methods. In this paper, we revisit the quantitative-qualitative debate and review the arguments for and against using mixed-methods. In addition, we discuss the implications stemming from our view, that the paradigms upon which the methods are based have a different view of reality and therefore a different view of the phenomenon under study. Because the two paradigms do not study the same phenomena, quantitative and qualitative methods cannot be combined for cross-validation or triangulation purposes. However, they can be combined for complementary purposes. Future standards for mixed-methods research should clearly reflect this recommendation.

1. Introduction

Health care research includes many studies that combine quantitative and qualitative methods, as seen in numerous articles and books published in the last decade ( Caracelli and Greene, 1993 ; Caracelli and Riggin, 1994 ; Casebeer and Verhoef, 1997 ; Datta, 1997 ; Droitcour, 1997 ; Greene and Caracelli, 1997 ; House, 1994 ; Morgan, 1998 ; Morse, 1991 ; Tashakkori and Teddlie, 1998 ). As many critics have noted, this is not without its problems. In this paper, we revisit the quantitative-qualitative debate which flourished in the 1970s and 1980s and review the arguments for and against using mixed-methods. In addition, we present what we believe to be a fundamental point in this debate.

Some people would say that we are beyond the debate and can now freely use mixed-method designs to carry out relevant and valuable research. According to Carey (1993), quantitative and qualitative techniques are merely tools; integrating them allows us to answer questions of substantial importance. However, just because they are often combined does not mean that it is always appropriate to do so.

We believe that mixed-methods research is now being adopted uncritically by a new generation of researchers who have overlooked the underlying assumptions behind the qualitative-quantitative debate. In short, the philosophical distinctions between them have become so blurred that researchers are left with the impression that the differences between the two are merely technical ( Smith and Heshusius, 1986 ).

Combining qualitative and quantitative methods in a single study is widely practiced and accepted in many areas of health care research. Despite the arguments presented for integrating methods, we will demonstrate that each of these methods is based on a particular paradigm, a patterned set of assumptions concerning reality (ontology), knowledge of that reality (epistemology), and the particular ways of knowing that reality (methodology) ( Guba, 1990 ). In fact, based on their paradigmatic assumptions, the two methods do not study the same phenomena. Evidence of this is reflected by the notion that quantitative methods cannot access some of the phenomena that health researchers are interested in, such as lived experiences as a patient, social interactions, and the patients’ perspective of doctor-patient interactions. The information presented in this paper is not new in the sense that we are making a “new” case for or against the debate. Rather, based on the paradigmatic differences concerning the phenomenon under study, we propose a “new” solution for using mixed-methods in research that we believe is both methodologically and philosophically sound.

2. The Two Paradigms

The quantitative paradigm is based on positivism. Science is characterized by empirical research; all phenomena can be reduced to empirical indicators which represent the truth. The ontological position of the quantitative paradigm is that there is only one truth, an objective reality that exists independent of human perception.

Epistemologically, the investigator and investigated are independent entities. Therefore, the investigator is capable of studying a phenomenon without influencing it or being influenced by it; “inquiry takes place as through a one way mirror” ( Guba and Lincoln, 1994 : 110). The goal is to measure and analyze causal relationships between variables within a value-free framework ( Denzin and Lincoln, 1994 ). Techniques to ensure this include randomization, blinding, highly structured protocols, and written or orally administered questionnaires with a limited range of predetermined responses. Sample sizes are much larger than those used in qualitative research so that statistical methods to ensure that samples are representative can be used ( Carey, 1993 ).

In contrast, the qualitative paradigm is based on interpretivism ( Altheide and Johnson, 1994 ; Kuzel and Like, 1991 ; Secker et al., 1995 ) and constructivism ( Guba and Lincoln, 1994 ). Ontologically speaking, there are multiple realities or multiple truths based on one’s construction of reality. Reality is socially constructed ( Berger and Luckmann, 1966 ) and so is constantly changing. On an epistemological level, there is no access to reality independent of our minds, no external referent by which to compare claims of truth ( Smith, 1983 ). The investigator and the object of study are interactively linked so that findings are mutually created within the context of the situation which shapes the inquiry ( Guba and Lincoln, 1994 ; Denzin and Lincoln, 1994 ). This suggests that reality has no existence prior to the activity of investigation, and reality ceases to exist when we no longer focus on it ( Smith, 1983 ). The emphasis of qualitative research is on process and meanings. Techniques used in qualitative studies include in-depth and focus group interviews and participant observation. Samples are not meant to represent large populations. Rather, small, purposeful samples of articulate respondents are used because they can provide important information, not because they are representative of a larger group ( Reid, 1996 ).

The underlying assumptions of the quantitative and qualitative paradigms result in differences which extend beyond philosophical and methodological debates. The two paradigms have given rise to different journals, different sources of funding, different expertise, and different methods. There are even differences in scientific language used to describe them. For example, the term “observational work” may refer to case control studies for a quantitative researcher, but to a qualitative researcher it would refer to ethnographic immersion in a culture. “Validity” to a quantitative researcher would mean that results correspond to how things really are out there in the world, whereas to a qualitative researcher “valid” is a label applied to an interpretation or description with which one agrees ( Smith and Heshusius, 1986 ). Similarly, the phrase “research has shown …” or “the results of research indicate …” refers to an accurate reflection of reality to the quantitative researcher, but to a qualitative researcher it announces an interpretation that itself becomes reality ( Smith and Heshusius, 1986 ).

The different assumptions of the quantitative and qualitative paradigms originated in the positivism-idealism debate of the late 19th century ( Smith, 1983 ). These inherent differences are rarely discussed or acknowledged by those using mixed-method designs. One reason may be that the positivist paradigm has become the predominant frame of reference in the physical and social sciences. In addition, research methods are often presented as if they did not belong to or reflect particular paradigms. Caracelli and Greene (1993) refer to mixed-method designs as those where neither type of method is inherently linked to a particular inquiry paradigm or philosophy. Guba and Lincoln (1989) claim that questions of method are secondary to questions of paradigms. We argue that methods are shaped by and represent paradigms that reflect a particular belief about reality. We also maintain that the assumptions of the qualitative paradigm are based on a worldview not represented by the quantitative paradigm.

3. Arguments Presented for Mixed-Method Research

Having discussed some of the basic philosophical assumptions of the two paradigms, we are better able to address the arguments given for combining quantitative and qualitative methods in a single study. There are several viewpoints as to why qualitative and quantitative methods can be combined. First, the two approaches can be combined because they share the goal of understanding the world in which we live ( Haase and Myers, 1988 ). King et al. (1994) claim that both qualitative and quantitative research share a unified logic, and that the same rules of inference apply to both.

Second, the two paradigms are thought to be compatible because they share the tenets of theory-ladenness of facts, fallibility of knowledge, indetermination of theory by fact, and a value-laden inquiry process. They are also united by a shared commitment to understanding and improving the human condition, a common goal of disseminating knowledge for practical use, and a shared commitment for rigor, conscientiousness, and critique in the research process ( Reichardt and Rallis, 1994 ). In fact, Casebeer and Verhoef (1997) argue we should view qualitative and quantitative methods as part of a continuum of research with specific techniques selected based on the research objective.

Third, as noted by Clarke and Yaros (1988) , combining research methods is useful in some areas of research, such as nursing, because the complexity of phenomena requires data from a large number of perspectives. Similarly, some researchers have argued that the complexities of most public health problems ( Baum, 1995 ) or social interventions, such as health education and health promotion programs ( Steckler et al., 1992 ), require the use of a broad spectrum of qualitative and quantitative methods.

Fourth, others claim that researchers should not be preoccupied with the quantitative-qualitative debate because it will not be resolved in the near future, and that epistemological purity does not get research done ( Miles and Huberman, 1984 ).

None of these arguments adequately addresses the underlying assumptions behind the paradigmatic differences between qualitative and quantitative research. However, Reichardt and Rallis (1994) acknowledge the possibility of contention between the two paradigms concerning the nature of reality by conceding that the two paradigms are incompatible if the qualitative paradigm assumes that there are no external referents for understanding reality. We have argued that the qualitative paradigm does assume that there are no external referents for understanding reality. Therefore, we propose that in addressing this fundamental assumption, Reichardt and Rallis dismiss their own claim of compatibility between methodological camps.

An interesting argument has been made by Howe (1988) who suggests that researchers should forge ahead with what works. Truth, he states, is a normative concept, like good. Truth is what works. This appears to be the prevalent attitude in mixed-methods research. Howe’s argument seems to suggest that only pragmatists, or those not wedded to either paradigm, would attempt to combine research methods across paradigms. But this does not address the issue of differing ontological assumptions of the two paradigms.

A more interesting and complicated issue is how to explain results from studies using qualitative and quantitative methods which appear to agree. How can the results be similar if the two paradigms are supposedly looking at different phenomena? Achieving similar results may be merely a matter of perception. In order to synthesize results obtained via multiple method research, people often simplify the situation under study, highlighting and packaging results to reflect what they think is happening. The truth is we rarely know the extent of disagreement between qualitative and quantitative results because that is often not reported. Another possibility which may account for seemingly concordant results could be that both are, in fact, quantitative. Conducting a frequency count on responses to open-ended questions is not qualitative research. Given the overwhelming predominance of the positivist worldview in health care research, this is not surprising. This often translates to the misapplication of the canons of good “science” (quantitative research) to qualitative studies (see Sandelowski, 1986 ).

Perhaps the only convincing argument for mixing qualitative and quantitative research methods in a single study would be to challenge the underlying assumptions of the two paradigms themselves. A sound argument would be that both qualitative and quantitative paradigms are based on the tenets of positivism, not constructivism or interpretivism. Howe (1992) gives the impression of making this argument by denying there is an “either-or” choice to be made. Rather, he claims, both quantitative and qualitative researchers should embrace positivism coloured by a certain degree of interpretivism, an adjustment which he proposes is made possible by the critical social research model (or the critical educational research model) which eschews the positivist-interpretivist split in favour of compatibility.

A legitimate argument would have been for Howe and others who appear to be leaning toward this position (e.g. Reichardt and Rallis, 1994 ) to claim that the paradigmatic debate was oversimplified by a positivism-interpretivism split, and that the qualitative paradigm actually espoused positivism. If we take the position that qualitative researchers operate within a positivist world, we could argue that such a position actually negates or undermines the quantitative-qualitative debate in the first place because it does away with the beliefs about reality from which qualitative research arose. We believe, however, that one cannot be both a positivist and an interpretivist or constructivist.

Closely tied to the arguments for integrating qualitative and quantitative approaches are the reasons given for legitimately combining them. Two reasons for this are prevalent in the literature. The first is to achieve cross-validation or triangulation – combining two or more theories or sources of data to study the same phenomenon in order to gain a more complete understanding of it ( Denzin, 1970 ). The second is to achieve complementary results by using the strengths of one method to enhance the other ( Morgan, 1998 ). The former position maintains that research methods are interdependent (combinant); the latter, that they are independent (additive). Although these two reasons are often used interchangeably in the literature, it is important to make a distinction between them.

4. The Phenomenon of Study

It is probably safe to say that certain phenomena lend themselves to quantitative as opposed to qualitative inquiry and vice versa in other instances. Both quantitative and qualitative researchers often appear to study the same phenomena. However, these researchers’ definition of what the phenomena are and how they can best be described or known differ. Both paradigms may label phenomena identically, but in keeping with their paradigmatic assumptions, these labels refer to different things.

For the quantitative researcher, a label refers to an external referent; to a qualitative researcher, a label refers to a personal interpretation or meaning attached to phenomena. For example, a quantitative researcher might use a factory record as if it were representative of what actually happens in the workplace, whereas a qualitative researcher might interpret it as one of the ways that people in a factory view their work environment ( Needleman and Needleman, 1996 ). Because there is no external referent with which to gauge what the truth is, there is no interest in assessing the record as representative of the one and only reality in the workplace. Rather, the ways people use and describe it are expected to vary due to people’s differing realities based on such characteristics as gender, age, or role (e.g., employer, manager, worker). Another example is surgical waiting lists. To a quantitative researcher, the list is like a bus queue; patients are taken off the list based on the urgency of need for surgery or some other factors. To a qualitative researcher, the key to understanding the meaning of the list rests with determining how it is organized, managed and used by the people who actively create and maintain it ( Pope and Mays, 1993 ).

These two examples demonstrate that although qualitative and quantitative paradigms may use common labels to refer to phenomena, what the labels refer to is not the same. There are differences of phenomena within each paradigm as well. However, the differences in phenomena between the two paradigms are philosophical differences, whereas the difference in phenomena within each paradigm are not. Within the quantitative paradigm, we may compare the results of a magnetic resonance imaging (MRI) scan to those of a computed tomography (CT) scan. Although they may appear to reveal different realities, the use of the scans assumes that there is something to measure that exists independent of our minds. Both scans are trying to approximate or capture the one reality which correlates with the phenomenon of interest. Within the qualitative paradigm, one may compare the results of a phenomenological study to those of a grounded theory study on how nurses cope with the deaths of their patients. These two types of qualitative studies do not assume that external referents for coping skills exist independent of our minds.

Having taken the position that the quantitative and qualitative paradigms do not study the same phenomena, it follows that combining the two methods for cross-validation/triangulation purposes is not a viable option. (Cross-validation refers to combining the two approaches to study the same phenomenon.) Ironically, in a comprehensive review of mixed-method evaluation studies, Greene, Caracelli, and Graham (1989) found that methodological triangulation was actually quite rare in mixed-method research, used by only 3 of 57 studies. Combining the two approaches in a complementary fashion is also not advisable if the ultimate goal is to study different aspects of the same phenomenon because, as we argue, mixed-methods research cannot claim to enrich the same phenomenon under study. The phenomenon under study is not the same across methods. Not only do cross-validation and complementarity in the above context violate paradigmatic assumptions, they also misrepresent data. Loss of information is a particular risk when attempts are made to unite results from the two paradigms because it often promotes the selective search for similarities in data.

5. Further Considerations in Mixed-Method Research Designs

The most frequently used mixed-method designs start with a qualitative pilot study followed by quantitative research ( Morgan, 1998 ). This promotes the mis-perception that qualitative research is only exploratory, cannot stand on its own, and must be validated by quantitative work because the latter is “scientific” and studies truth. In response, qualitative researchers have increasingly tried to defend their work using quantitative criteria, such as validity and reliability, as defined in quantitative studies. They also increasingly use computer programs specifically designed for analysing qualitative data, such as NUD.IST or Ethnograph, in quantitative (counting) ways. These practices seriously violate the assumptions of the qualitative paradigm(s). For research to be valid or reliable in the narrow (quantitative) sense requires that what is studied be independent of the inquirer and be described without distortion by her interests, values, or purposes ( Smith and Heshusius, 1986 ). This is not how qualitative studies unfold. They are based on the minimum distance between the investigator and the investigated, and seek multiple definitions of reality embedded in various respondents’ experiences. Therefore, it is more appropriate for qualitative researchers to apply parallel but distinct canons of rigor appropriate to qualitative studies ( Strauss and Corbin, 1990 ).

It is difficult to say whether the growing trend of quantifying qualitative research is a direct result of mixing quantitative and qualitative approaches. It does seem to be a result of researchers from the two paradigms attempting to work together, or the desire for qualitative research to be “taken seriously” in the world of positivist research, such as is commonly found in medicine. In our opinion, mixing research methods across paradigms, as is currently practiced, often diminishes the value of both methods. Pressure is being exerted from the quantitative camp for qualitative research to “measure up” to its standards without understanding the basic premises of qualitative investigations. Proponents of the qualitative paradigm need to address this pressure, but “without slipping on the mantle of quantitative inquiry” ( Smith and Heshusius, 1986 : 10). This pressure will no doubt continue to escalate as combined methods research becomes more common.

6. Our Solution

The key issues in the quantitative-qualitative debate are ontological and epistemological. Quantitative researchers perceive truth as something which describes an objective reality, separate from the observer and waiting to be discovered. Qualitative researchers are concerned with the changing nature of reality created through people’s experiences – an evolving reality in which the researcher and researched are mutually interactive and inseparable ( Phillips, 1988b ). Because quantitative and qualitative methods represent two different paradigms, they are incommensurate. As Guba states, “the one [paradigm] precludes the other just as surely as belief in a round world precludes belief in a flat one” ( 1987 : 31). Fundamental to this viewpoint is that qualitative and quantitative researchers do not, in fact, study the same phenomena.

We propose a solution to mixed-method research and the quantitative-qualitative debate. Qualitative and quantitative research methods have grown out of, and still represent, different paradigms. However, the fact that the approaches are incommensurate does not mean that multiple methods cannot be combined in a single study if it is done for complementary purposes. Each method studies different phenomena. The distinction of phenomena in mixed-methods research is crucial and can be clarified by labelling the phenomenon examined by each method. For example, a mixed-methods study to develop a measure of burnout experienced by nurses could be described as a qualitative study of the lived experience of burnout to inform a quantitative measure of burnout. Although the phenomenon ‘burnout’ may appear the same across methods, the distinction between “lived experience” and “measure” reconciles the phenomenon to its respective method and paradigm.

This solution differs from that of merely using the strengths of each method to bolster the weaknesses of the other(s), or capturing various aspects of the same phenomena. This implies an additive outcome for mutual research partners. Based on this assertion, qualitative and quantitative work can be carried out simultaneously or sequentially in a single study or series of investigations.

7. Implications

Given that we have returned to debate in a no-debate world, what is the outlook for mixed-paradigm research? As Phillips (1988a) points out, it may be that quantitative and qualitative approaches are inadequate to the task of understanding the emerging science of wholeness because they give an incomplete view of people in their environments. Perhaps in a “Kuhnian” sense, a new paradigm is in order, one with a new ontology, epistemology, and methodology. Alternatively, we have proposed seeking complementarity which we believe is both philosophically and practically sound. This solution lends itself to new standards for mixed-paradigm research. We hope that future guidelines which assess the quality of such research consider this recommendation.

References

  • Altheide DL, Johnson JM. Criteria for assessing interpretive validity in qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage Publications; 1994. pp. 485–499.
  • Baum F. Researching public health: Behind the qualitative-quantitative methodological debate. Social Science and Medicine. 1995;40:459–468.
  • Berger PL, Luckmann T. The Social Construction of Reality: A Treatise in the Sociology of Knowledge. Garden City, NY: Doubleday; 1966.
  • Caracelli VJ, Greene JC. Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 1993;15:195–207.
  • Caracelli VJ, Riggin LJC. Mixed-method evaluation: Developing quality criteria through concept mapping. Evaluation Practice. 1994;15:139–152.
  • Carey JW. Linking qualitative and quantitative methods: Integrating cultural factors into public health. Qualitative Health Research. 1993;3:298–318.
  • Casebeer AL, Verhoef MJ. Combining qualitative and quantitative research methods: Considering the possibilities for enhancing the study of chronic diseases. Chronic Diseases in Canada. 1997;18:130–135.
  • Clarke PN, Yaros PS. Research blenders: Commentary and response. Nursing Science Quarterly. 1988;1:147–149.
  • Creswell JW. Qualitative Inquiry and Research Design: Choosing Among Five Traditions. Thousand Oaks, CA: Sage Publications; 1998.
  • Datta L. Multimethod evaluations: Using case studies together with other methods. In: Chelimsky E, Shadish WR, editors. Evaluation for the 21st Century: A Handbook. Thousand Oaks, CA: Sage Publications; 1997. pp. 344–359.
  • Denzin NK. The Research Act in Sociology. London: Butterworth; 1970.
  • Denzin NK, Lincoln YS. Introduction: Entering the field of qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage; 1994. pp. 1–17.
  • Droitcour JA. Cross design synthesis: Concept and application. In: Chelimsky E, Shadish WR, editors. Evaluation for the 21st Century: A Handbook. Thousand Oaks, CA: Sage Publications; 1997. pp. 360–372.
  • Greene JC, Caracelli VJ, editors. Advances in Mixed-Method Evaluation: The Challenges and Benefits of Integrating Diverse Paradigms. San Francisco: Jossey-Bass Publishers; 1997.
  • Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 1989;11:255–274.
  • Guba E. What have we learned about naturalistic evaluation? Evaluation Practice. 1987;8:23–43.
  • Guba EG. The alternative paradigm dialog. In: Guba EG, editor. The Paradigm Dialog. Newbury Park, CA: Sage; 1990. pp. 17–30.
  • Guba EG, Lincoln YS. Fourth Generation Evaluation. Newbury Park, CA: Sage Publications; 1989.
  • Guba EG, Lincoln YS. Competing paradigms in qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of Qualitative Research. Thousand Oaks, CA: Sage; 1994. pp. 105–117.
  • Haase JE, Myers ST. Reconciling paradigm assumptions of qualitative and quantitative research. Western Journal of Nursing Research. 1988;10:128–137.
  • House ER. Integrating the quantitative and qualitative. In: Reichardt CS, Rallis SF, editors. The Qualitative-Quantitative Debate: New Perspectives. San Francisco: Jossey-Bass; 1994. pp. 13–22.
  • Howe KR. Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher. 1988;17:10–16.
  • Howe KR. Getting over the quantitative-qualitative debate. American Journal of Education. 1992;100:236–257.
  • King G, Keohane RO, Verba S. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press; 1994.
  • Kuzel AJ, Like RC. Standards of trustworthiness for qualitative studies in primary care. In: Norton PG, Steward M, Tudiver F, Bass MJ, Dunn EV, editors. Primary Care Research. Newbury Park, CA: Sage Publications; 1991. pp. 138–158.
  • Miles M, Huberman A. Drawing valid meaning from qualitative data: Toward a shared craft. Educational Researcher. 1984;13:20–30.
  • Morgan DL. Practical strategies for combining qualitative and quantitative methods: Applications to health research. Qualitative Health Research. 1998;8:362–376.
  • Morse JM. Approaches to qualitative-quantitative methodological triangulation. Nursing Research. 1991;40:120–123.
  • Needleman C, Needleman ML. Qualitative methods for intervention research. American Journal of Industrial Medicine. 1996;29:329–337.
  • Phillips JR. Diggers of deeper holes. Nursing Science Quarterly. 1988a;1:149–151.
  • Phillips JR. Research blenders. Nursing Science Quarterly. 1988b;1:4–5.
  • Pope C, Mays N. Opening the black box: An encounter in the corridors of health sciences research. British Medical Journal. 1993;306:315–318.
  • Reichardt CS, Rallis SF. Qualitative and quantitative inquiries are not incompatible: A call for a new partnership. New Directions for Program Evaluation. 1994;61:85–91.
  • Reid AJ. What we want: Qualitative research. Canadian Family Physician. 1996;42:387–389.
  • Sandelowski M. The problem of rigour in qualitative research. Advances in Nursing Science. 1986;8:27–37.
  • Secker J, Wimbush E, Watson J, Milburn K. Qualitative methods in health promotion research: Some criteria for quality. Health Education Journal. 1995;54:74–87.
  • Smith JK. Quantitative versus qualitative research: An attempt to clarify the issue. Educational Researcher. 1983;12:6–13.
  • Smith JK, Heshusius L. Closing down the conversation: The end of the quantitative-qualitative debate among educational inquirers. Educational Researcher. 1986;15:4–12.
  • Steckler A, McLeroy KR, Goodman RM, Bird ST, McCormick L. Toward integrating qualitative and quantitative methods: An introduction. Health Education Quarterly. 1992;19:1–8.
  • Strauss A, Corbin J. Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage Publications; 1990. Chapter 1: Introduction; pp. 23–32.
  • Tashakkori A, Teddlie C. Mixed Methodology: Combining Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage Publications; 1998.

Quantitative Reasoning

Logic is the formal study of the processes and principles of thinking and reasoning. The reasoning process involves semantics (meanings) and inference (one thing leads to another) in the construction of arguments.  The following video claims that "an argument is a collective series of statements to establish a definite proposition."

Deductive Arguments

In this course, a deductive argument will use premises to prove a conclusion based on logic. Premises are statements that are assumed to be true. Conclusions are statements that follow from the premises.  Logic is the glue that connects the conclusion to the premises.

It is important to note that a deductive argument assumes that the premises are true.  If a premise is actually false, then the conclusion might be false, but that does not mean that the argument is illogical.  In this course, we are concerned more with whether the logic of the argument is valid or invalid than whether the conclusions are true or false.

All snakes are reptiles. No reptiles have fur. Therefore, no snakes have fur.

To see that the validity of an argument is separate from the actual truth of its premises, consider variants of the same logical argument in which one or more premises are false. The argument remains logically valid in each case: the conclusion would be true if the premises were true. A false premise may lead to a false conclusion, but it does not have to; a valid argument with a false premise can still reach a true conclusion.
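
To make the form of the snake syllogism explicit, it can be written in set notation (this formalization is our own sketch; S, R, and F stand for snakes, reptiles, and fur-bearing things, labels not used in the original text):

    % Valid categorical form: if both premises hold, the conclusion must hold.
    \[
      S \subseteq R \;\wedge\; R \cap F = \emptyset \;\Rightarrow\; S \cap F = \emptyset
    \]

The implication holds for any sets S, R, and F, which is why replacing a premise with a false one (for example, "all snakes are mammals") leaves the argument valid even though its premises, and possibly its conclusion, are now false.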

Inductive Arguments

Unlike deductive arguments, which prove that a conclusion must be true (if the premises are true) using logic, an inductive argument  suggests that a conclusion is  likely  based on evidence.  Inductive arguments are used in most areas of our lives.  Cars nearly always remain stopped at red lights, so we can cross most intersections with confidence when our light is green.  We know that there is a small chance that something unexpected may happen, but the chance is small.  Rather than categorizing inductive arguments as valid or invalid, like deductive arguments, we categorize them as strong or weak.  The strength of an inductive argument depends on the amount of evidence supporting it.  If we have only a handful of cases, our argument is weak, but if our argument is based on hundreds or thousands of examples, our argument is very strong.  For example, all life forms that have been observed so far have required liquid water to survive and reproduce.  It is highly likely that any new form of life will also require liquid water, since our argument is based on millions of observations.  Highly likely, however, does not mean certain.  No inductive argument can prove that the conclusion must be true.
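
One way to put a rough number on how evidence strengthens an inductive conclusion is Laplace's rule of succession, which estimates the probability P that the next case will agree with all previous ones. This formula is our illustration and is not part of the original text:

    \[
      P \approx \frac{s + 1}{n + 2}
    \]

Here n is the number of cases observed so far and s is the number that support the conclusion. With s = n = 5 the estimate is 6/7, or about 0.86; with s = n = 10,000 it is 10,001/10,002, or about 0.9999. The argument becomes very strong as evidence accumulates, yet it never reaches certainty.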

A fallacy occurs when a deductive argument is invalid or when an inductive argument rests on very thin evidence.  Fallacies have been used for thousands of years to convince people without a reasonable argument.  Many fallacies have names in Latin or Greek that date back to when the early philosophers first used and abused them.  Fallacies are still abundant today, particularly in the media and in advertising.

Good movies are popular. That movie is popular. Therefore, that movie is good.

The missing-middle argument above is also known as an "appeal to popularity" fallacy.  By changing the category labels, we can also create the "appeal to authority" and "appeal to emotion" fallacies, each of which is an example of a missing-middle fallacy.  Several other common fallacies have standard names as well.
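
Using the same set notation as the syllogism sketch above (again our own addition; G is the set of good movies, P the set of popular movies, and m the movie in question), the invalid form is:

    % The premises place m in P, but that does not place m in the subset G.
    \[
      G \subseteq P \;\wedge\; m \in P \;\not\Rightarrow\; m \in G
    \]

Both premises place m in the larger set P, but membership in P says nothing about membership in the subset G, so the conclusion does not follow even when both premises are true.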

Truth Tables and Boolean Logic Combinations

A truth table lists all of the possible states for a set of premises.  A simple example is shown below for the premise "X is a shape" and the premise "Y is a color".  The table lists all of the possibilities as true (T) or false (F).
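
A minimal version of the table described above, reconstructed from that description, would look like this:

    Premise 1: "X is a shape"    Premise 2: "Y is a color"
    T                            T
    T                            F
    F                            T
    F                            F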

Truth tables and Venn diagrams describe the same things in slightly different ways.

Logic is the study of reasoning using arguments. Deductive arguments use premises to prove a conclusion based on logic. Premises are statements that are assumed to be true. Conclusions are statements that follow from the premises.  Logic is the glue that connects the conclusion to the premises through a chain of inference. Inductive arguments use evidence to make a compelling case; they may be strong or weak, but their conclusions cannot be proven true. Fallacies are arguments that use neither sound logic nor adequate evidence to support their conclusions.

Teaching Quantitative Reasoning

How can psychology contribute to the public good? The Human Capital Initiative (HCI) report, prepared with the assistance of APS, cites an important means of doing so: helping people to improve their statistical reasoning. “The goal of learning statistical reasoning,” it notes, “should be to develop better statistical ‘instincts,’ not just knowledge of particular statistical procedures” (Human Capital Initiative, 1998, p. 24).

Those instincts are crucial to contemporary life, for, as the National Council on Education and the Disciplines (Steen, 2001, p. 1) observed, “The world of the twenty-first century is a world awash in numbers.” Although data are not always used well (e.g., Best, 2004), data-based claims are nonetheless a staple of policy debates, advertisements, medical news, educational assessments, financial decision-making, and everyday conversation, as well as of pure and applied research in psychological science. In sum, our students need sharp statistical instincts to navigate psychology and to contribute to life beyond it.

Are we as faculty in higher education doing enough to help students develop quantitative values and skills? According to colleagues in mathematics, the answer is no. In a 1998 report, “Quantitative Reasoning for College Graduates,” the Mathematical Association of America suggested, “too many educated people… are quantitatively illiterate.” Some mathematicians hold their own discipline partially responsible. Lynn Steen (2004) of St. Olaf College has cogently argued that the postsecondary mathematics curriculum channels college students away from quantitative study. Psychology, then, has the opportunity to help undergraduates develop needed statistical instincts.

Psychology’s Special Role in Promoting Quantitative Reasoning

There are at least four reasons why psychology as a discipline is well-suited to contribute to undergraduate education in quantitative reasoning (QR).

Psychology has Wide Exposure to Undergraduates. Approximately 1.2 million students take introductory psychology courses annually (M. Sugarman, McGraw-Hill Publishers, personal communication, June 1, 2005) and nearly 75,000 graduate each year with a degree in psychology (American Psychological Association, 2005).

Psychology has a Natural Affinity for QR. As the historian of statistics Stephen Stigler has shown, statistics and psychology are “inextricably bound together” (1999, p. 189). (Unfortunately, Stigler rejects the hypothesis that psychologists were so much quicker to adopt statistics because they were smarter than other social scientists; he might be wrong, of course!)

Psychology has Rich Incentives to Hone Students’ QR Instincts. Not only is quantitative reasoning an essential component of training in a psychology major (see Task Force on Undergraduate Psychology Major Competencies, 2002), it is important to public understanding of contemporary psychological research and practice.

Psychologists can Appreciate the Educational Rationale for QR Across the Curriculum. We recognize that students need to encounter a broad array of stimulus conditions calling for QR if they are to develop and strengthen generalized QR cognitive tendencies. Psychology represents one of a number of content areas besides mathematics in which QR might naturally come into play for students.

What is QR?

In literatures addressing quantitative reasoning and literacy, many authors attempt to specify lists of skills or outcomes constituting QR (e.g., Steen, 2001). Although there is variation among lists, most lists include the following: descriptive and inferential statistics, chance and probability, graphical presentations of data, modeling, and research design and methods. At my own institution, these are embedded in a broader goal: helping students learn to use and evaluate quantitative information in a principled way in accounts of phenomena and in the construction of arguments. This intertwining of QR with argument in learned and public discourses builds upon a theme articulated by psychologist Robert Abelson in 1995, “the purpose of statistics is to organize a useful argument from quantitative evidence, using a form of principled rhetoric” (p. xiii). In our approach, we do not only view statistics as a form of argument. We also view argument in general as potentially involving a form of statistics. Within this framework, QR involves (a) appreciating the value of quantitative approaches to understanding, (b) being willing to use QR electively in constructing an argument, (c) knowing or knowing how to find or generate relevant quantitative information, (d) evaluating implicit and explicit quantitative claims in light of relevant standards and critical issues, and (e) representing and communicating quantitative information or evaluations in a clear, informative, and responsible manner.

QR in the Classroom

How can we help move our students toward quantitative literacy? In what follows, I will concentrate on suggestions for general or service psychology courses where teachers of psychology encounter the largest number of students. Students majoring in psychology are repeatedly called upon to use QR through statistics and methods courses, laboratories, readings, and research in psychology (Messer, Griggs, & Jackson, 1999). Whether psychology majors develop generalized statistical instincts, however, remains an open question. The suggestions below, then, might be used profitably across the psychology curriculum.

Focus Student Attention on Quantitative Information. Quantitative information is a content staple of basic psychology texts and class presentations on psychology. However, students may not attend carefully to numbers, figures, and tables they encounter in these sources. An instructor can elicit such attention by (a) highlighting key quantitative findings, (b) walking students through the interpretation of tables and figures, and (c) discussing when and why a particular degree of quantitative precision is warranted in psychology. An instructor can reinforce these points by telling students that examinations will assess their knowledge and use of meaningful quantitative information in psychology.

Invite Students to Interpret Quantitative Findings. A key QR goal is that students learn to interpret research results and recognize critical questions they ought to raise about quantitative claims. An instructor can facilitate this by presenting a quantitative stimulus in class, such as a graphic or table of results on a slide, and asking students to make sense of quantitative findings in a discussion or brief in-class writing assignment. The simple question, “What is this quantitative stimulus telling us?” will get students thinking about quantitative information and relating that information to key arguments in a psychological literature. The natural follow-up question, “What additional information would be useful to evaluate this quantitative presentation (e.g., graphic)?” can encourage critical thinking about quantitative claims as well as generate new research ideas.

Teach Students to Seek Quantitative Information. Students need to learn how to find and evaluate quantitative information relevant to psychology, for example, when they consider the cross-cultural or ecological validity of research results or learn about the epidemiology of mental disorders. Even basic quantitative facts about world population and literacy may help set psychology in context. It may be useful to collaborate with a local college librarian to develop instruction for students about finding sources of relevant quantitative information. Likewise, instructors can provide students a description of the standards employed in psychology to evaluate the adequacy of an informational source (e.g., peer review). An instructor could then expect students to use these skills to frame any oral or written presentations they are assigned.

Involve Students in Data Analysis. One method of getting students to learn and think about statistics is to give them a reason to use statistics. This is common in statistics and research methods courses in psychology, where students write research proposals and complete canned and novel empirical projects. It is also possible to involve general psychology students in data analysis through course laboratories or data set projects.

In my introductory course, for example, students complete two web-based research modules. In one, they take personality and happiness measures and then pose an empirical question answerable from the course data set to which they have contributed. Students then conduct a simple statistical analysis to answer their questions and submit short research reports. A benefit of projects such as these is that they give the instructor a meaningful context in which to assign the statistics appendix of an introductory text and to provide guidance on using and interpreting quantitative findings.
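
As a sketch of what such a "simple statistical analysis" might look like (the variable names and scores below are hypothetical and are not the author's course data), a student could compute the correlation between two self-report measures:

    # Hypothetical self-report scores for ten students, each on a 1-10 scale.
    extraversion = [3, 7, 5, 8, 2, 6, 9, 4, 7, 5]
    happiness    = [4, 8, 6, 7, 3, 6, 9, 5, 8, 5]

    def pearson_r(xs, ys):
        # Pearson correlation coefficient computed from deviations about the means.
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov   = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    print(round(pearson_r(extraversion, happiness), 2))  # about 0.95 for these made-up numbers

In the short research report that follows such an analysis, a student would then need to say in plain language what a correlation of this size does and does not show.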

Require Students to Write About Data. Commonly, when students are asked to find quantitative information or to analyze data, they are also called upon to write about what they have discovered. Translating numerical information into words can be an effective means of strengthening statistics students’ computational and interpretive skills (Beins, 1993). Beins suggests asking students to write in jargon-free terms about quantitative information found in almanacs, psychology journal articles, and other sources. Because written work in psychology, even at the introductory level, commonly addresses quantitative information, students need to be taught when and how to present and use quantitative arguments. Writing about quantitative information should stimulate students to think about both the meaning of technical concepts (e.g., confidence intervals, statistical significance) and principles that apply to the effective communication of technical information and the construction of arguments. Miller (2004) and Tufte (2001) provide two excellent sources for instructors wanting to address these issues.

At my own institution, we are in the midst of a Department of Education Fund for the Improvement of Postsecondary Education (FIPSE) project (QUIRK, 2005) in which teams of faculty are reading course papers from across the curriculum to study how students incorporate or neglect relevant quantitative information in written arguments. For example, we have found that students are needlessly ambiguous, using words like “many” or “often” to represent a quantitative claim without providing supporting specifics (how many?). One goal of our project is to use what we learn to help faculty develop course work, writing and speaking assignments, and instruction that will teach students to use and present quantitative information more effectively.

Relate QR to Topics in Psychology. Instructors can relate the psychological science of everyday judgment and perception to issues in quantitative reasoning. For example, a presentation can contrast the cognitive tendencies to overgeneralize from single cases and to notice illusory correlations to concerns in formal statistical reasoning such as a reliance on incomplete data (see Lawson, Schwiers, Doellman, Grady, & Kelnhofer, 2003, for an elaboration). In this way, a discussion of a psychological phenomenon may help students appreciate the value of systematic quantitative reasoning.

Model QR and Make the Case for QR. We as faculty need to model quantitative reasoning for students. For example, when we present, assign, or encounter case studies or anecdotes we need to remind our students to ask how representative of some category a particular instance is. We need to draw student attention to the questions we would ask when we discuss quantities (e.g., questions about outliers, subgroups, and variability when we think about averages). We need to demonstrate how we use quantitative information to illuminate phenomena, construct responsible arguments, and express caution about what we believe we know.

The larger social significance of quantitative reasoning may be obvious to most psychologists; it probably is not so to students. I find it useful to remind students that quantitative reasoning is not only fundamental to psychology as a discipline but is also pertinent to a wide variety of professional and public discourses students will encounter and even rely upon in their lives (e.g., Poundstone, 2003). When I teach general psychology courses, for example, I encourage all students, whether they intend to major in psychology or not, to take a statistics course during their undergraduate careers, and I tell them why I believe such a course would be worthwhile.

QR for Psychology Faculty. Although psychology faculty tend to be well-trained in statistics, we may be less familiar with the broader conversations occurring about quantitative reasoning as a fundamental goal of undergraduate training and how colleges and universities are attempting to address that goal. Because we may have an important role to play, we should consider becoming more involved in those discussions. Here are suggestions for doing so.

Read the Literature. There is a growing popular and educational literature on quantitative literacy and reasoning. I highly recommend recent books by Lynn Steen (2001, 2004) and Joel Best (2004), as well as the classic series on graphics by Edward Tufte (e.g., 2001).

Join the Networks. Link up with faculty across disciplines who are interested in quantitative reasoning. Good places to start are the National Numeracy Network, which has its web home at www.math.dartmouth.edu/~nnn, the Mathematical Association of America’s portal for quantitative literacy at www.maa.org/ql/index.html, and the Statistical Literacy web site, www.statlit.org. Lynn Steen also maintains a web list of higher education programs that address quantitative literacy and reasoning at www.stolaf.edu/people/steen/papers/qlprogs.pdf.

Get Involved in a Campus QR Initiative. Psychologists have the potential to play an important role in curriculum and faculty development and assessment efforts to address quantitative reasoning, and such participation may, in turn, enhance attitudes toward psychology as a scientific discipline. Teachers of psychology have served as consultants for other faculty at their institution on quantitative reasoning, helped draft curricular definitions and standards for QR, developed methods for assessing students’ quantitative reasoning, and led campus workshops on integrating QR into the curriculum. Teachers have also configured their statistics and methods courses in psychology to meet QR standards at institutions that have a QR requirement.

Conclusion. There is a movement afoot in higher education to raise both appreciation for and the quality of training in quantitative reasoning. Psychology and psychologists have important roles to play in this initiative, both as teachers and as participants in local and national educational communities.

References and Recommended Readings

  • Abelson, R. P. (1995). Statistics as principled argument. Hillsdale, NJ: Erlbaum.
  • American Psychological Association. (2005). Number of psychology degrees conferred by level of degree: 1970-2000. Retrieved June 2, 2005 from http://research.apa.org/general01.html .
  • Beins, B. C. (1993). Writing assignments in statistics classes encourage students to learn interpretation. Teaching of Psychology, 20 , 161-164.
  • Best, J. (2004). More damned lies and statistics. Berkeley: University of California Press.
  • Human Capital Initiative (1998). Decision making and statistical reasoning. APS Observer, 11 (2), 23-25.
  • Lawson, T. J., Schwiers, M., Doellman, M., Grady, G., & Kelnhofer, R. (2003). Enhancing students’ ability to use statistical reasoning with everyday problems. Teaching of Psychology, 30 , 107-110.
  • Lutsky, N. (2002). Come, putative ends of psychology’s digital future. In S. F. Davis & W. Buskist (Eds.), The teaching of psychology: Essays in honor of Wilbert J. McKeachie and Charles L. Brewer (pp. 335-345). Mahwah, NJ: Erlbaum.
  • Mathematical Association of America. (1998). Quantitative Reasoning for College Graduates. www.maa.org/past/ql/gl_preface.html .
  • Messer, W. S., Griggs, R. A., & Jackson, S. L. (1999). A national survey of undergraduate psychology degree options and major requirements. Teaching of Psychology, 26 , 164-171.
  • Miller, J. E. (2004). The Chicago guide to writing about numbers. Chicago: University of Chicago Press.
  • Poundstone, W. (2003). How would you move Mount Fuji? Microsoft’s cult of the puzzle. Boston: Little, Brown and Company.
  • QUIRK (2005). Carleton College quantitative inquiry, reasoning, and knowledge project. www.go.carleton.edu/quirk .
  • Steen, L. A. (Ed.), (2001). Mathematics and democracy: The case for quantitative literacy. Princeton, NJ: National Council on Education and the Disciplines.
  • Steen, L. A. (2004). Achieving quantitative literacy. Washington, DC: The Mathematical Association of America.
  • Stigler, S. M. (1999). Statistics on the table: The history of statistical concepts and methods. Cambridge, MA: Harvard University Press.
  • Task Force on Undergraduate Psychology Major Competencies. (2002). Undergraduate psychology major learning goals and outcomes. Retrieved June 2, 2005 from www.apa.org/ed/resources.html .
  • Tufte, E. R. (2001). The visual display of quantitative information. Cheshire, CT: Graphics Press.


About the Author

NEIL LUTSKY is professor of psychology at Carleton College in Northfield, Minnesota. Lutsky heads Carleton's Quantitative Inquiry, Reasoning, and Knowledge initiative. For additional information contact [email protected].


What is Quantitative Data?

Data professionals work with two types of data: quantitative and qualitative. What is quantitative data? What is qualitative data? In simple terms, quantitative data is measurable while qualitative data is descriptive—think numbers versus words.

If you plan on working as a data analyst or a data scientist (or in any field that involves conducting research, like psychology), you’ll need to get to grips with both. In this post, we’ll focus on quantitative data. We’ll explain exactly what quantitative data is, including plenty of useful examples. We’ll also show you what methods you can use to collect and analyze quantitative data.

By the end of this post, you’ll have a clear understanding of quantitative data and how it’s used.

We’ll cover:

  • What is quantitative data? (Definition)
  • What are some examples of quantitative data?
  • What’s the difference between quantitative and qualitative data?
  • What are the different types of quantitative data?
  • How is quantitative data collected?
  • What methods are used to analyze quantitative data?
  • What are the advantages and disadvantages of quantitative data?
  • Should I use quantitative or qualitative data in my research?
  • What are some common quantitative data analysis tools?
  • What is quantitative data? FAQs
  • Key takeaways

So: what is quantitative data? Let’s find out.

1. What is quantitative data? (Definition)

Quantitative data is, quite simply, information that can be quantified. It can be counted or measured, and given a numerical value—such as length in centimeters or revenue in dollars. Quantitative data tends to be structured in nature and is suitable for statistical analysis. If you have questions such as “How many?”, “How often?” or “How much?”, you’ll find the answers in quantitative data.

2. What are some examples of quantitative data?

Some examples of quantitative data include:

  • Revenue in dollars
  • Weight in kilograms
  • Age in months or years
  • Length in centimeters
  • Distance in kilometers
  • Height in feet or inches
  • Number of weeks in a year

3. What is the difference between quantitative and qualitative data?

It’s hard to define quantitative data without comparing it to qualitative data—so what’s the difference between the two?

While quantitative data can be counted and measured, qualitative data is descriptive and, typically, unstructured. It usually takes the form of words and text—for example, a status posted on Facebook and an interview transcript are both forms of qualitative data. You can also think of qualitative data in terms of the “descriptors” you would use to describe certain attributes. For example, if you were to describe someone’s hair color as auburn, or an ice cream flavor as vanilla, these labels count as qualitative data.

Qualitative data cannot be used for statistical analysis; to make sense of such data, researchers and analysts will instead try to identify meaningful groups and themes.

You’ll find a detailed exploration of the differences between qualitative and quantitative data in this post . But, to summarize:

  • Quantitative data is countable or measurable, relating to numbers; qualitative data is descriptive, relating to words.
  • Quantitative data lends itself to statistical analysis; qualitative data is grouped and categorized according to themes.
  • Examples of quantitative data include numerical values such as measurements, cost, and weight; examples of qualitative data include descriptions (or labels) of certain attributes, such as “brown eyes” or “vanilla flavored ice cream”.

Now we know the difference between the two, let’s get back to quantitative data.

4. What are the different types of quantitative data?

There are two main types of quantitative data: discrete and continuous .

Discrete data

Discrete data is quantitative data that can only take on certain numerical values. These values are fixed and cannot be broken down. When you count something, you get discrete data. For example, if a person has three children, this is an example of discrete data. The number of children is fixed—it’s not possible for them to have, say, 3.2 children.

Another example of discrete quantitative data could be the number of visits to your website; you could have 150 visits in one day, but not 150.6 visits. Discrete data is usually visualized using tally charts, bar charts, and pie charts.

Continuous data

Continuous data, on the other hand, can be infinitely broken down into smaller parts. This type of quantitative data can be placed on a measurement scale; for example, the length of a piece of string in centimeters, or the temperature in degrees Celsius. Essentially, continuous data can take any value; it’s not limited to fixed values. What’s more, continuous data can also fluctuate over time—the room temperature will vary throughout the day, for example. Continuous data is usually represented using a line graph.
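To make the distinction concrete, here is a minimal Python sketch (the sample values and the whole-number heuristic are assumptions for illustration only) that labels a column of numbers as discrete or continuous.

```python
# Minimal sketch: a crude heuristic for labelling a column of quantitative
# data as discrete or continuous. The example values below are invented.

def classify(values):
    """Label a sequence of numbers as 'discrete' or 'continuous'."""
    # Counts are whole numbers; measurements usually carry fractional parts.
    if all(float(v).is_integer() for v in values):
        return "discrete"
    return "continuous"

website_visits = [150, 97, 203, 188]           # counted, so discrete
room_temperature_c = [19.5, 21.2, 22.8, 20.1]  # measured, so continuous

print(classify(website_visits))       # discrete
print(classify(room_temperature_c))   # continuous
```

In practice a rule this simple is only a starting point: a count recorded in whole units (such as age in years) can still represent an underlying continuous quantity, so domain knowledge matters more than the stored data type.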

Continuous data can be further classified depending on whether it’s interval data or ratio data . Let’s take a look at those now.

Interval vs. ratio data

Interval data can be measured along a continuum, where there is an equal distance between each point on the scale. For example: The difference between 30 and 31 degrees C is equal to the difference between 99 and 100 degrees. Another thing to bear in mind is that interval data has no true or meaningful zero value . Temperature is a good example; a temperature of zero degrees does not mean that there is “no temperature”—it just means that it’s extremely cold!

Ratio data is the same as interval data in terms of equally spaced points on a scale, but unlike interval data, ratio data does have a true zero . Weight in grams would be classified as ratio data; the difference between 20 grams and 21 grams is equal to the difference between 8 and 9 grams, and if something weighs zero grams, it truly weighs nothing.
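One practical consequence is that statements like “twice as much” are only meaningful on a ratio scale. The short Python sketch below (with invented values) shows why doubling a Celsius reading does not double the underlying quantity, while doubling a weight in grams does.

```python
# Interval vs. ratio data: why "twice as much" needs a true zero.
# Celsius is an interval scale; converting to Kelvin (which does have a
# true zero) shows that 20 degrees C is not "twice as hot" as 10 degrees C.

def celsius_to_kelvin(c):
    return c + 273.15

print(20 / 10)                                        # 2.0 -- looks like "twice as hot"
print(celsius_to_kelvin(20) / celsius_to_kelvin(10))  # ~1.035 -- it is not

# Weight in grams is a ratio scale: 20 g really is twice 10 g,
# because 0 g genuinely means "no weight".
print(20 / 10)  # 2.0, and this time the ratio is meaningful
```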

Beyond the distinction between discrete and continuous data, quantitative data can also be broken down into several different types:

  • Measurements: This type of data refers to the measurement of physical objects. For example, you might measure the length and width of your living room before ordering new sofas.
  • Sensors: A sensor is a device or system which detects changes in the surrounding environment and sends this information to another electronic device, usually a computer. This information is then converted into numbers—that’s your quantitative data. For example, a smart temperature sensor will provide you with a stream of data about the temperature of the room throughout the day.
  • Counts: As the name suggests, this is the quantitative data you get when you count things. You might count the number of people who attended an event, or the number of visits to your website in one week.
  • Quantification of qualitative data: This is when qualitative data is converted into numbers. Take the example of customer satisfaction. If a customer said “I’m really happy with this product”, that would count as qualitative data. You could turn this into quantitative data by asking them to rate their satisfaction on a scale of 1-10 (see the sketch just after this list).
  • Calculations: This is any quantitative data that results from mathematical calculations, such as calculating your final profit at the end of the month.
  • Projections: Analysts may estimate or predict quantities using algorithms, artificial intelligence, or “manual” analysis. For example, you might predict how many sales you expect to make in the next quarter. The figure you come up with is a projection of quantitative data.
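
As a small illustration of the “quantification of qualitative data” item above, the following Python sketch maps invented satisfaction labels onto a hypothetical five-point scale so that they can be averaged; the mapping and the responses are assumptions, not data from any real survey.

```python
# Minimal sketch: turning qualitative survey answers into quantitative data.
# The scale mapping and the example responses are invented for illustration.

SATISFACTION_SCALE = {
    "very unhappy": 1,
    "unhappy": 2,
    "neutral": 3,
    "happy": 4,
    "very happy": 5,
}

responses = ["happy", "very happy", "neutral", "happy", "unhappy"]

scores = [SATISFACTION_SCALE[r] for r in responses]
average_satisfaction = sum(scores) / len(scores)

print(scores)                # [4, 5, 3, 4, 2]
print(average_satisfaction)  # 3.6
```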

Knowing what type of quantitative data you’re working with helps you to apply the correct type of statistical analysis. We’ll look at how quantitative data is analyzed in section six.

5. How is quantitative data collected?

Now we know what quantitative data is, we can start to think about how analysts actually work with it in the real world. Before the data can be analyzed, it first needs to be generated or collected. So how is this done?

Researchers (for example, psychologists or scientists) will often conduct experiments and studies in order to gather quantitative data and test certain hypotheses. A psychologist investigating the relationship between social media usage and self-esteem might devise a questionnaire with various scales—for example, asking participants to rate, on a scale of one to five, the extent to which they agree with certain statements.

If the survey reaches enough people, the psychologist ends up with a large sample of quantitative data (for example, an overall self-esteem score for each participant) which they can then analyze.

Data analysts and data scientists are less likely to conduct experiments, but they may send out questionnaires and surveys—it all depends on the sector they’re working in. Usually, data professionals will work with “naturally occurring” quantitative data, such as the number of sales per quarter, or how often a customer uses a particular service.

Some common methods of data collection include:

  • Analytics tools, such as Google Analytics
  • Probability sampling
  • Questionnaires and surveys
  • Open-source datasets on the web

Analytics tools

Data analysts and data scientists rely on specialist tools to gather quantitative data from various sources. Google Analytics, for example, will gather data pertaining to your website; at a glance, you can see metrics such as how much traffic you got in one week, how many page views per minute, and average session length—all useful insights if you want to optimize the performance of your site.
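Under the hood, metrics like these are simple aggregations of quantitative data. The following Python sketch (with invented session records; a real analytics tool computes this for you at scale) shows the idea.

```python
# Minimal sketch: computing basic web metrics from raw, invented session logs,
# the kind of quantitative data an analytics tool aggregates automatically.

sessions = [
    {"pages_viewed": 3, "duration_seconds": 95},
    {"pages_viewed": 7, "duration_seconds": 310},
    {"pages_viewed": 1, "duration_seconds": 20},
    {"pages_viewed": 5, "duration_seconds": 180},
]

total_page_views = sum(s["pages_viewed"] for s in sessions)
avg_session_length = sum(s["duration_seconds"] for s in sessions) / len(sessions)

print(total_page_views)    # 16
print(avg_session_length)  # 151.25 seconds
```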

Aside from Google Analytics, which tends to be used within the marketing sector, there are loads of tools out there which can be connected to multiple data sources at once. Tools like RapidMiner, Knime, Qlik, and Splunk can be integrated with internal databases, data lakes, cloud storage, business apps, social media, and IoT devices, allowing you to access data from multiple sources all in one place.

You can learn more about the top tools used by data analysts in this guide.

Probability sampling

Sampling is when, instead of analyzing an entire dataset, you select a sample or “section” of the data. Sampling may be used to save time and money, and in cases where it’s simply not possible to study an entire population. For example, if you wanted to analyze data pertaining to the residents of New York, it’s unlikely that you’d be able to get hold of data for every single person in the state. Instead, you’d analyze a representative sample.

There are two types of sampling: Random probability sampling, where each unit within the overall dataset has the same chance of being selected (i.e. included in the sample), and non-probability sampling, where the sample is actively selected by the researcher or analyst—not at random. Data analysts and scientists may use Python (the popular programming language) and various algorithms to extract samples from large datasets.
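As a minimal sketch of probability sampling, the snippet below uses Python’s standard random module to draw a simple random sample from a made-up list of customer IDs; in practice you would sample rows from a database table or data frame.

```python
# Minimal sketch: drawing a simple random sample in Python.
# The "population" is a made-up list of customer IDs.

import random

population = list(range(1, 10_001))  # 10,000 hypothetical customer IDs

random.seed(42)                            # fixed seed so the example is reproducible
sample = random.sample(population, k=500)  # every ID has the same chance of selection

print(len(sample))  # 500
print(sample[:5])   # a few of the sampled IDs
```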

Questionnaires and surveys

Another way to collect quantitative data is through questionnaires and surveys. Nowadays, it’s easy to create a survey and distribute it online—with tools like Typeform, SurveyMonkey, and Qualtrics, practically anyone can collect quantitative data. Surveys are a useful tool for gathering customer or user feedback, and generally finding out how people feel about certain products or services.

To make sure you gather quantitative data from your surveys, it’s important that you ask respondents to quantify their feelings—for example, asking them to rate their satisfaction on a scale of one to ten.

Open-source datasets online

In addition to analyzing data from internal databases, data analysts might also collect quantitative data from external sources. Again, it all depends on the field you’re working in and what kind of data you need. The internet is full of free and open datasets spanning a range of sectors, from government, business and finance, to science, transport, film, and entertainment—pretty much anything you can think of! We’ve put together a list of places where you can find free datasets here .

6. How is quantitative data analyzed?

A defining characteristic of quantitative data is that it’s suitable for statistical analysis. There are many different methods and techniques used for quantitative data analysis, and how you analyze your data depends on what you hope to find out.

Before we go into some specific methods of analysis, it’s important to distinguish between descriptive and inferential analysis .

What’s the difference between descriptive and inferential analysis of quantitative data?

Descriptive analysis does exactly what it says on the tin; it describes the data. This is useful as it allows you to see, at a glance, what the basic qualities of your data are and what you’re working with. Some commonly used descriptive statistics include the range (the difference between the highest and lowest scores), the minimum and maximum (the lowest and highest scores in a dataset), and frequency (how often a certain value appears in the dataset).

You might also calculate various measures of central tendency in order to gauge the general trend of your data. Measures of central tendency include the mean (the sum of all values divided by the number of values, otherwise known as the average), the median (the middle score when all scores are ordered numerically), and the mode (the most frequently occurring score). Another useful calculation is standard deviation. This measures how spread out the values are around the mean, and so tells you how representative of the entire dataset the mean value actually is.
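Python’s standard library can compute all of these descriptive statistics directly. The following minimal sketch uses the built-in statistics module on an invented set of scores.

```python
# Minimal sketch: descriptive statistics with Python's standard library.
# The scores below are invented purely for illustration.

import statistics

scores = [4, 7, 7, 8, 9, 10, 12, 15]

print(max(scores) - min(scores))  # range: 11
print(statistics.mean(scores))    # mean: 9
print(statistics.median(scores))  # median: 8.5
print(statistics.mode(scores))    # mode: 7
print(statistics.stdev(scores))   # sample standard deviation, ~3.38
```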

While descriptive statistics give you an initial read on your quantitative data, they don’t allow you to draw definitive conclusions. That’s where inferential analysis comes in. With inferential statistics, you can make inferences and predictions. This allows you to test various hypotheses and to predict future outcomes based on probability theory.

Quantitative data analysis methods

When it comes to deriving insights from your quantitative data, there’s a whole host of techniques at your disposal. Some of the most common (and useful) methods of quantitative data analysis include:

  • Regression analysis: This is used to estimate the relationship between a dependent variable and one or more independent variables, and to see whether there is any correlation between them. Regression is especially useful for making predictions and forecasting future trends (a minimal sketch follows this list).
  • Monte Carlo simulation : The Monte Carlo method is a computerized technique used to generate models of possible outcomes and their probability distributions based on your dataset. It essentially considers a range of possible outcomes and then calculates how likely it is that each particular outcome will occur. It’s used by data analysts to conduct advanced risk analysis, allowing them to accurately predict what might happen in the future.
  • Cohort analysis: A cohort is a group of people who share a common attribute or behavior during a given time period—for example, a cohort of students who all started university in 2020, or a cohort of customers who purchased via your app in the month of February. Cohort analysis essentially divides your dataset into cohorts and analyzes how these cohorts behave over time. This is especially useful for identifying patterns in customer behavior and tailoring your products and services accordingly.
  • Cluster analysis : This is an exploratory technique used to identify structures within a dataset. The aim of cluster analysis is to sort different data points into groups that are internally homogenous and externally heterogeneous—in other words, data points within a cluster are similar to each other, but dissimilar to data points in other clusters. Clustering is used to see how data is distributed in a given dataset, or as a preprocessing step for other algorithms.
  • Time series analysis : This is used to identify trends and cycles over time. Time series data is a sequence of data points which measure the same variable at different points in time, such as weekly sales figures or monthly email sign-ups. By looking at time-related trends, analysts can forecast how the variable of interest may fluctuate in the future. Extremely handy when it comes to making business decisions!
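
As a minimal sketch of the first technique in this list, regression analysis, the snippet below fits a simple linear regression with SciPy (assumed to be installed); the advertising-spend and sales figures are invented for illustration.

```python
# Minimal sketch: simple linear regression with SciPy.
# The advertising-spend and sales figures are invented.

from scipy import stats

ad_spend = [10, 20, 30, 40, 50, 60]  # e.g. thousands of dollars
sales = [25, 41, 48, 70, 76, 95]     # e.g. thousands of units

result = stats.linregress(ad_spend, sales)

print(result.slope)        # estimated change in sales per unit of ad spend
print(result.intercept)    # estimated sales at zero ad spend
print(result.rvalue ** 2)  # R-squared: share of variance explained

# A simple forecast for a new level of spending:
forecast = result.intercept + result.slope * 70
print(forecast)
```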

Above is just a very brief introduction to how you might analyze your quantitative data. For a more in-depth look, check out this comprehensive guide to some of the most useful data analysis techniques .

7. What are the advantages and disadvantages of quantitative data?

As with anything, there are both advantages and disadvantages of using quantitative data. So what are they? Let’s take a look.

Advantages of quantitative data

The main advantages of working with quantitative data are as follows:

  • Quantitative data is relatively quick and easy to collect , allowing you to gather a large sample size. And, the larger your sample size, the more accurate your conclusions are likely to be.
  • Quantitative data is less susceptible to bias. The use of random sampling helps to ensure that a given dataset is as representative as possible, and protects the sample from bias. This is crucial for drawing reliable conclusions.
  • Quantitative data is analyzed objectively. Because quantitative data is suitable for statistical analysis, it can be analyzed according to mathematical rules and principles. This greatly reduces the impact of analyst or researcher bias on how the results are interpreted.

Disadvantages of quantitative data

There are two main drawbacks to be aware of when working with quantitative data, especially within a research context:

  • Quantitative data can lack context. In some cases, context is key; for example, if you’re conducting a questionnaire to find out how customers feel about a new product. The quantitative data may tell you that 60% of customers are unhappy with the product, but that figure alone will not tell you why. Sometimes, you’ll need to delve deeper to gain valuable insights beyond the numbers.
  • There is a risk of bias when using surveys and questionnaires. Again, this point relates more to a research context, but it’s important to bear in mind when creating surveys and questionnaires. The way in which questions are worded can allow researcher bias to seep in, so it’s important to make sure that surveys are devised carefully. You can learn all about how to reduce survey bias in this post .

8. Should I use quantitative or qualitative data in my research?

Okay—so now we know what the difference between quantitative and qualitative data is, as well as other aspects of quantitative data. But when should you make use of quantitative or qualitative research? The answer to this question will depend on the type of project you’re working on, or the client you’re working for. But use these simple criteria as a guide:

  • When to use quantitative research: when you want to confirm or test something, like a theory or hypothesis. When the data can be shown clearly in numbers. Think of a city census that shows the whole number of people living there, as well as their ages, incomes, and other useful information that makes up a city’s demographic.
  • When to use qualitative research: when you want to understand something—for example, a concept, experience, or opinions. Maybe you’re testing out a run of experiences for your company, and need to gather reviews for a specific time period. This would be an example of qualitative research.
  • When to use both quantitative and qualitative research: when you’re taking on a research project that demands both numerical and non-numerical data.

9. What are some common quantitative analysis tools?

The tools used for quantitative data collection and analysis should come as no surprise to the budding data analyst. You may end up using one tool per project, or a combination of tools:

  • Microsoft Power BI

10. What is quantitative data? FAQs

Who uses quantitative data?

Quantitative data is used in many fields—not just data analytics (though, you could argue that all of these fields are at least data-analytics-adjacent)! Those working in the fields of economics, epidemiology, psychology, sociology, and health—to name a few—would make great use of quantitative data in their work. You would be less likely to see quantitative data being used in fields such as anthropology and history.

Is quantitative data better than qualitative data?

It would be hard to make a solid argument as to which form of data collection is “better”, as it really depends on the type of project you’re working on. However, quantitative research provides more “hard and fast” information that can be used to make informed, objective decisions.

Where is quantitative data used?

Quantitative data is used when a problem needs to be quantified. That is, to answer the questions that start with “how many…” or “how often…”, for example.

What is quantitative data in statistics?

Statistics is the discipline concerned with the collection, organization, and analysis of data, so quantitative data naturally falls under that umbrella: it is the data you get by counting and measuring according to a research question or set of research needs.

Can quantitative data be ordinal?

Ordinal data is a type of statistical data where the values are sorted into ordered categories, and the distances between those categories are not known. Think of the pain scale they sometimes use in the hospital, where you judge the level of pain you have on a scale of 1-10, with 1 being low and 10 being the highest. However, you can’t really quantify the difference between adjacent points on that scale—it’s a matter of how you feel!

By that logic, ordinal data falls under qualitative data, not quantitative. You can learn more about the data levels of measurement in this post .

Is quantitative data objective?

Due to the nature of how quantitative data is produced—that is, using methods that are verifiable and replicable—it is objective.

11. Key takeaways and further reading

In this post, we answered the question: what is quantitative data? We looked at how it differs from qualitative data, and how it’s collected and analyzed. To recap what we’ve learned:

  • Quantitative data is data that can be quantified. It can be counted or measured, and given a numerical value.
  • Quantitative data lends itself to statistical analysis, while qualitative data is grouped according to themes.
  • Quantitative data can be discrete or continuous. Discrete data takes on fixed values (e.g. a person has three children), while continuous data can be infinitely broken down into smaller parts.
  • Quantitative data has several advantages: It is relatively quick and easy to collect, and it is analyzed objectively.

Collecting and analyzing quantitative data is just one aspect of the data analyst’s work. To learn more about what it’s like to work as a data analyst, check out the following guides. And, if you’d like to dabble in some analytics yourself, why not try our free five-day introductory short course ?

  • What is data analytics? A beginner’s guide
  • A step-by-step guide to the data analysis process
  • Where could a career in data analytics take you?


10 Visual Arguments, Media and Advertising

Andrew Gurevich


Visual Arguments

In this chapter, we will be exploring the use of visuals (images, charts, graphs, etc.) in the presentation of arguments. Like any other piece of support, images and other visuals are compelling when used correctly. They also can be used in ways that contribute to all of the flaws, fallacies, and faulty reasoning we have been exploring all along. Images can support written or spoken arguments or become the arguments themselves . They hold great power in advertising, journalism, politics, academia, and many other areas of our media-managed perceptions of the world around us. As such they deserve our attention here as we continue our discussion of the analysis and construction of valid arguments.

When we say “argument,” we usually think of either spoken or written arguments. However, arguments can be made in all forms, including visual arguments. Visual arguments rely on images to persuade a viewer to believe or do something. Advertisements in magazines are often types of visual arguments. But there are many other examples to consider, each with their own particular set of parameters to evaluate in pursuit of analyzing and constructing valid arguments.

Basically, a visual argument is a supporting (or rebuttal) statement. It utilizes various images to intensify the effect on the audience. It is undoubtedly true that pictures or other visual art pieces help engage a wider range of people. In addition, images may also reflect the values and beliefs of a culture. Thus, visual arguments are often more appealing to the public than verbal ones.

Exploring how images convey a message requires substantial study, which is why visual rhetoric deserves examination. The desire to watch a movie, streaming series, or cartoon is probably familiar to everyone, though not everyone notices when that desire arises after seeing a poster. Most of us are unaware of how bombarded we are with visual rhetoric and the extent to which it actually does influence our thoughts and behaviors. But it’s not all nefarious. A bright advertising picture can also lead to taking part in a charity event, or lead people to donate money or blood to victims of a natural disaster or war. Such experiences may be deeply personal and at the same time shared by the majority of people within a society, culture, or subculture. These are just a few examples of the vast impact of visual rhetoric on the public mind. By employing visual rhetoric, the author can lead the reader or viewer to different outcomes. For instance, they can induce compassion, anger, fear, curiosity, and so on.

Marketing companies often use visual rhetoric to their advantage. It can become an effective way to promote a product or service successfully. Visual argument advertisements are often the most effective in persuading consumers to make a purchase, because they can communicate a lot of information, and more importantly emotional impact, very quickly. The “father” of this science, first called “public relations,” was a man by the name of Edward Bernays, who was none other than the nephew of the famous Austrian psychoanalyst Sigmund Freud. In fact, Bernays used many of his uncle’s theories about the human mind to craft the basic models of the advertising industry that are still very much employed today. We will watch a film about the history of the advertising industry, and Ed Bernays in particular, below. But for now, it is important to understand how visual argument works and what the best practices are for using it effectively, ethically, and creatively to support the arguments you make in academic contexts.

Say you are at the doctor’s office in the waiting room, and you see an advertisement that has a beautiful model sitting in a Lexus driving down a long, open road. The image may evoke some feelings of inadequacy (“I’ll never be as pretty as her”), freedom (the long, winding road), and envy. All of these work together as an “argument” to convince you that a Lexus will change your life, and you will be as beautiful and as free as the model if you only had one. On a rational level, we know none of this is true. But the ad does not speak to our rational minds. It speaks to a more irrational place, the subconscious, where our desires and thoughts often mix with memories, projections, fears, and other phobias to encourage an irrational response to the stimulus. As we can already see, like with other forms of arguments, visual arguments may contain logical fallacies or use (and misuse) rhetorical appeals to persuade the viewer. Our job is to learn to spot the misuse of them, and to also use them ethically, accurately, and responsibly in our own argumentative contexts.

Learning to decode visual arguments can be challenging. We are bombarded with images every day and are often unaware of how they affect us. For instance, did you know that red, yellow, orange, and green make us hungry? Think about fast food chains. How many of them use one, or more, of those colors in their logo or design? In movies, we associate black with bad and white with good. In Star Wars, Darth Vader wears a black cloak, while Luke Skywalker often has light clothing. If a political cartoon showed a politician speaking in Times New Roman font and another politician speaking in Comic Sans, then it could be implying that one politician is serious while the other is childish. We tend to think of “visual” to mean only pictures, but learning to recognize how not just images, but color, layout, perspective, and even font choices, can affect people and influence their thoughts and choices can help you to hone your visual literacy and learn how to identify and evaluate visual arguments.

Adding visual elements to a persuasive argument can often strengthen its persuasive effect. There are two main types of visual elements: quantitative visuals and qualitative visuals .

Quantitative visuals present data graphically. They allow the audience to see statistics spatially. The purpose of using quantitative visuals is to make logical appeals to the audience. For example, sometimes it is easier to understand the disparity in certain statistics if you can see how the disparity looks graphically. Bar graphs, pie charts, Venn diagrams, histograms, and line graphs are all ways of presenting quantitative data in spatial dimensions.
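For instance, a handful of survey counts becomes much easier to compare once it is drawn as a bar chart. The following Python sketch uses matplotlib (assumed to be installed) with invented response counts.

```python
# Minimal sketch: presenting quantitative data graphically with matplotlib.
# The survey figures are invented for illustration.

import matplotlib.pyplot as plt

categories = ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]
responses = [4, 11, 23, 38, 24]  # number of respondents per category

plt.bar(categories, responses)
plt.title("Responses to: 'The proposal would improve campus sustainability'")
plt.xlabel("Response")
plt.ylabel("Number of respondents")
plt.xticks(rotation=30)
plt.tight_layout()
plt.show()
```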

Qualitative visuals present images that appeal to the audience’s emotions. Photographs and pictorial images are examples of qualitative visuals. Such images often try to convey a story, and seeing an actual example can carry more power than hearing or reading about the example. For example, one image of a child suffering from malnutrition will likely have more of an emotional impact than pages dedicated to describing that same condition in writing.

[Figure: Venn diagram showing sustainability at the intersection of environmental, economic, and social concerns.]

The Venn diagram above is a great example of how an image can be used effectively to communicate a complicated idea rather quickly and efficiently. Here, we can see that “sustainability” is defined as the intersection of environmental, economic, and social concerns, for instance. Proper use of visuals can help us connect with an audience’s emotions and values, build credibility, and share data and logical information in memorable and engaging ways.

  • Review  the handout: Ideographs
  • Review the document: Conducting Visual Arguments

Visual Argument Example: Gatorade Ad

Among the diversity of visual arguments,  advertisers provide some of the most powerful examples. Let’s examine a visual argument for Gatorade—a drink for sportspeople. It illustrates the supposed superiority of the Gatorade drink, among other beverages. A bright picture of the bottle and a memorable slogan are a marketing specialist’s craft. It combines three main aspects of a successful visual ad: use of colors, “supernatural” power, and shock appeal.

Gatorade advertisement as a visual argument.

The developers of this visual ad achieved an effective mix of colors. The dominant colors of the poster are blue and green, which are generally read as “natural” colors. Nothing can be more powerful than “nature.” These are also the colors of “sport”: the colors of the grass and the sky. This association serves as the hidden message of the color combination. As a result of this color technique, the ad’s creator reaches the primary goal: the assurance of success in the race!

In addition to an effective color combination, the advertisement reflects a concept in advertising often referred to as “supernatural power.” The image shows the bright container of Gatorade pulling away from the other bottles and dramatically winning the race. Moreover, it seems that the bottle with the advertised drink is “reaching for the sky.” This detail makes the ad even more eye-appealing and further suggests that whoever has the drink will have the same power.

Rhetorical analysis helps us understand why the trick of placing the bottle ahead of the other beverages is so effective. It persuades the audience that Gatorade gives those who drink it supernatural power, and so motivates the target audience to purchase the beverage. The advertisement implicitly compares athletes to the Gatorade bottle, convincing them that they will perform as well in competition as Gatorade does in the visual ad.

Apart from the use of colors and supernatural power, this visual argument employs other methods as well. For example, it uses a shock appeal technique. The ad depicts a real-life race, but with a metaphorical contestant: the Gatorade bottle. Consider the effect of the container “reaching for the sky.” It creates a vision of the beverage’s incredible strength. As a result, the audience is “shocked” by Gatorade’s supernatural power and encouraged to buy it. Shock appeal thus makes visual argument images more effective. We will return to the ways advertisers and politicians use visuals to persuade us later, but for now let us look at the academic ways to both analyze and use visuals in argument.

  • View the vidcast: Purdue OWL – Visual Rhetoric
  • View the video: Visual Arguments Essay
  • View  the video:  Visual Arguments

Visuals in Advertising and Social Media

The following video content explores how visual stimuli affect the ways we think, believe, and behave in the world. We begin by returning to Edward Bernays, the “father” of modern advertising and the nephew of Sigmund Freud. After that, we look at the more recent impact of visuals and social media on young people with an informative Frontline episode featuring the media analyst Douglas Rushkoff:

  • View the film: The Century of the Self – Happiness Machines
  • View  the film:  Generation Like

Critical Thinking, Second Edition Copyright © 2023 by Andrew Gurevich is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.



What Is Deductive Reasoning? | Explanation & Examples

Published on January 20, 2022 by Pritha Bhandari . Revised on June 22, 2023.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic or top-down reasoning.


Table of contents

  • What is deductive reasoning?
  • Validity and soundness
  • Deductive reasoning in research
  • Deductive vs. inductive reasoning
  • Frequently asked questions about deductive reasoning

In deductive reasoning, you’ll often make an argument for a certain idea. You make an inference, or come to a conclusion, by applying different premises.

A premise is a generally accepted idea, fact, or rule, and it’s a statement that lays the groundwork for a theory or general idea. Conclusions are statements supported by premises.

Deductive logic arguments

In a simple deductive logic argument, you’ll often begin with a premise, and add another premise. Then, you form a conclusion based on these two premises. This format is called “premise-premise-conclusion.”
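The premise-premise-conclusion pattern can even be written out in code. The following loose Python sketch (the rule and the fact are just example strings) applies the classic modus ponens form: from “if P then Q” and “P”, conclude “Q”.

```python
# Loose sketch of the premise-premise-conclusion pattern (modus ponens):
# premise 1: if P then Q; premise 2: P; conclusion: Q.
# The rule and fact below are invented for illustration.

def conclude(rule, fact):
    """Apply an 'if antecedent then consequent' rule to a known fact."""
    antecedent, consequent = rule
    if fact == antecedent:
        return consequent
    return None  # the rule tells us nothing about this fact

rule = ("there is a rainbow", "flights get canceled")  # premise 1
fact = "there is a rainbow"                            # premise 2

print(conclude(rule, fact))  # "flights get canceled" -- the conclusion
```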


Validity and soundness are two criteria for assessing deductive reasoning arguments.

In this context, validity is about the way the premises relate to each other and the conclusion. This is a different concept from research validity .

An argument is valid if the premises logically support and relate to the conclusion. But the premises don’t need to be true for an argument to be valid.

  • If there’s a rainbow, flights get canceled.
  • There is a rainbow now.
  • Therefore, flights are canceled.
  • All chili peppers are spicy.
  • Tomatoes are a chili pepper.
  • Therefore, tomatoes are spicy.

In an invalid argument, your premises can be true but that doesn’t guarantee a true conclusion. Your conclusion may inadvertently be true, but your argument can still be invalid because your conclusion doesn’t logically follow from the relationship between the statements.

  • All leopards have spots.
  • My pet gecko has spots.
  • Therefore, my pet gecko is a leopard.
  • All US presidents live in the White House.
  • Barack Obama lived in the White House.
  • Therefore, Barack Obama was a US president.

An argument is sound only if it’s valid and the premises are true. All invalid arguments are unsound.

If you begin with true premises and a valid argument, you’re bound to come to a true conclusion.

  • Flights get canceled when there are extreme weather conditions.
  • There are extreme weather conditions right now.
  • Therefore, flights will get canceled.
  • All fruits are grown from flowers and contain seeds.
  • Tomatoes are grown from flowers and contain seeds.
  • Therefore, tomatoes are fruits.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

This method is used for academic as well as non-academic research.

Here are the general steps for deductive research:

  • Select a research problem and create a problem statement.
  • Develop falsifiable hypotheses .
  • Collect your data with appropriate measures.
  • Analyze and test your data.
  • Decide whether to reject your null hypothesis .

Importantly, your hypotheses should be falsifiable. If they aren’t, you won’t be able to determine whether your results support them or not.

For example, suppose you want to test whether switching to a four-day work week improves employee well-being. You formulate your main hypothesis: Switching to a four-day work week will improve employee well-being. Your null hypothesis states that there’ll be no difference in employee well-being before and after the change.

You collect data on employee well-being through quantitative surveys on a monthly basis before and after the change. When analyzing the data, you note a 25% increase in employee well-being after the change in work week.
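The final step, deciding whether to reject the null hypothesis, is typically carried out with an inferential test. The sketch below uses a paired t-test from SciPy on invented before-and-after well-being scores; the choice of test, the scores, and the 0.05 threshold are all assumptions made for illustration.

```python
# Minimal sketch of the "decide whether to reject the null hypothesis" step.
# Well-being scores (0-100) for the same ten employees before and after the
# change are invented; a paired t-test is one reasonable choice when the
# same people are measured twice.

from scipy import stats

before = [62, 55, 70, 58, 64, 61, 66, 59, 72, 57]
after = [75, 68, 80, 66, 79, 73, 77, 70, 85, 69]

t_stat, p_value = stats.ttest_rel(after, before)

alpha = 0.05
print(p_value)
if p_value < alpha:
    print("Reject the null hypothesis: well-being differs after the switch.")
else:
    print("Fail to reject the null hypothesis.")
```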

Deductive reasoning is a top-down approach, while inductive reasoning is a bottom-up approach.

In deductive reasoning, you start with general ideas and work toward specific conclusions through inferences. Based on theories, you form a hypothesis. Using empirical observations, you test that hypothesis using inferential statistics and form a conclusion.

Inductive reasoning is also called a hypothesis-generating approach, because you start with specific observations and build toward a theory. It’s an exploratory method that’s often applied before deductive research.

In practice, most research projects involve both inductive and deductive methods.



Frequently asked questions about deductive reasoning

Deductive reasoning is also called deductive logic.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

Cite this Scribbr article

Bhandari, P. (2023, June 22). What Is Deductive Reasoning? | Explanation & Examples. Scribbr. Retrieved February 20, 2024, from https://www.scribbr.com/methodology/deductive-reasoning/


  • Published: 16 February 2024

Graduate students need more quantitative methods support

  • Andrea L. Howard (ORCID: orcid.org/0000-0002-9843-9577), Department of Psychology, Carleton University, Ottawa, Ontario, Canada

Nature Reviews Psychology (2024)


Graduate students in psychology need hands-on support to conduct research using quantitative techniques that exceed their curricular training. If supervisors are not willing or able to provide this support, student-led projects must be redesigned to leverage basic statistical skills learned in the classroom.




Howard, A.L. Graduate students need more quantitative methods support. Nat Rev Psychol (2024). https://doi.org/10.1038/s44159-024-00288-y


DOD Official Restates Why Supporting Ukraine Is in U.S. Interest

As Congress once again addresses U.S. military aid to Ukraine, a DOD official said helping Ukraine defeat Russian aggression is in the United States' interest.  


Celeste Wallander, assistant secretary of defense for international security affairs, told Clifford May, president of the Foundation for the Defense of Democracies, that U.S. aid to Ukraine has global impacts. 

Russia invaded neighboring Ukraine on Feb. 24, 2022. Russian forces were larger and better equipped, but Ukrainian forces stopped them from capturing the capital of Kyiv, decapitating the government and installing a puppet regime that answered to Russian President Vladimir Putin.

The Ukrainians also held Kharkiv, the country's second largest city, and fought Russian forces to a standstill in the south and east.  

The United States has provided $43 billion in support to Ukraine, covering everything from Javelin missiles to tanks to ambulances to long-range strike missiles to air defense capabilities and much, much more. U.S. service members are training Ukrainian forces in Europe and the United States. Secretary of Defense Lloyd J. Austin III formed and still leads the Ukraine Defense Contact Group, which now has 50 nations that contribute to Ukraine's defense.

This aid is key to helping Ukrainian forces take on and, in many areas, push back the Russians. U.S. government officials said in January that more than 300,000 Russians have been killed or wounded in Ukraine. 

Spotlight: Support for Ukraine

Wallander said the United States wants a Ukraine that is sovereign, independent and secure, adding that the Ukrainian people do not want Russian overlords and are fighting for their freedom. "We want the Ukrainian people to be able to live the European life they have chosen," she said during the discussion. 

While supporting Ukraine is the right thing to do, U.S. support is about more than just Ukraine, Wallander said. "[Our support] is about the international order that keeps all countries and all populations safe, including Russia," she said. 

Putin is seeking to "shred" the international order, the assistant secretary said. Putin wants the ability for large countries to intimidate and dominate smaller neighbors.  

And Russian actions have implications around the world, she said. "It's not just a European security issue, it is a global security issue," Wallander said. 

Built into the fabric of the Nuclear Non-Proliferation Treaty is the agreement that nuclear powers will respect the territorial integrity and sovereignty of other countries and agree to support the peaceful use of nuclear energy for their prosperity. "All of that is at stake in Russia's invasion and occupation of Ukraine," she said. 

Further afield, Chinese leaders are watching the war in Ukraine closely and have "a huge stake in Russia['s] success," Wallander said.  


If Putin is successful in shredding the United Nations Charter and benefiting from the use of force in Europe, "What's to stop China from following that path when it is ready?" she asked.  

China has supported Russia in its illegal invasion, and the Asian nation has benefited from Russia's increasing isolation. "The Chinese leadership doesn't want Putin to lose, because of what that would mean about the strength of the international community in pushing back against a bully," she said.  

Beyond the geopolitical reasons for supporting Ukraine, there are very human reasons, as well. The Russian invasion has been incredibly brutal, with indiscriminate attacks on civilians throughout the country. Wallander noted that Russian brutality has not been limited to Ukraine. The Russian military has used the same tactics in Chechnya and Georgia.  

But in Ukraine, Russia has gone beyond merely targeting civilian infrastructure. Russia has been taking Ukrainian children from their families or taking orphans and sending them to Russia. It is an almost "Nazi-like idea of ethnic purity that they need to be educated as Russians and that they are somehow going to be re-educated and brought back to benefit the Russian Federation," Wallander said. "It is just astonishing to think that a Europe, which faced the horror of such a leadership doing that to populations in the 1940s, is now confronted with another leadership that is doing that … in the 2020s."
