Rigor In Qualitative Analysis

The element that most people struggle with in qualitative analysis is how to achieve a rigorous analysis of data. Terms like "validity" mean very different things in qualitative data analysis than in quantitative data analysis. This page presents a few of the issues and discusses how to deal with them.

First, let's compare the positivist research paradigm to the interpretive paradigm. Table 1 compares how researchers operating under each paradigm view four core aspects of research. The meaning of a "positivist" vs. "interpretive" approach to research is defined by the combination of these four beliefs.

Table 1: The Positivist Research Paradigm Compared to the Interpretive Research Paradigm (Gasson, 2003)

| | Positivist / Functionalist | Interpretive / Constructivist |
| --- | --- | --- |
| Ontology (beliefs about the nature of reality) | Real-world phenomena & relationships exist independently of the individual's perceptions. | Phenomena & relationships are viewed as social constructs by which an individual makes sense of the external world/reality. |
| Epistemology (beliefs about knowledge & how we know reality) | Natural laws govern all aspects of existence. These laws may be observed from outside the situation and abstracted to provide generally-applicable models and theories. | Rules governing behavior in various situations are dependent on context. Inferred relationships between contextual factors and observed behaviors may be transferred to similar situations. |
| Human nature (how we account for human behavior) | The behavior of individuals en masse (with exceptions that can be explained by a lack of rationality or variance from the mean) can be viewed as determined by the situation. | Human beings have autonomy: their actions are dictated by free will (which may be constrained by external forces), so they do not act according to any universal laws of rational behavior. |
| Methodology (beliefs about how we apply inquiry methods) | Researchers derive generalizable models or theories of behavior by systematically analyzing narrow-scope findings from large samples, to construct scientific theories regarding the "real world". | Researchers infer transferable, in-depth subjective accounts of situations, analyzing observations from small samples in great detail. The presence of the observer is accounted for. |
There are, of course, zillions of positions between these two extremes, and people rarely hold one or the other consistently. Few positivist researchers would claim that everything exists independently of the researcher, just as few interpretive researchers would claim that nothing exists independently of the constructs imposed by individuals. But most people tend toward one position or the other, particularly in their beliefs concerning universal laws and generalizability. This affects how they interpret the constructs that lead to "rigor" in research, as can be seen from Table 2.

Table 2: Rigor in Positivist vs. Interpretive Research (Gasson, 2003)

| Issue of Concern | Positivist Worldview | Interpretive Worldview |
| --- | --- | --- |
| Representativeness of findings | Objectivity: findings are free from researcher bias. | Confirmability: conclusions depend on subjects and conditions of the study, rather than the researcher. |
| Reproducibility of findings | Reliability: the study findings can be replicated, independently of context, time or researcher. | Dependability/Auditability: the study process is consistent and reasonably stable over time and between researchers. |
| Rigor of method | Internal validity: a statistically significant relationship is established, to demonstrate that certain conditions are associated with other conditions, often by "triangulation" of findings. | Internal consistency: the research findings are credible and consistent to the people we study and to our readers. For authenticity, our findings should be related to significant elements in the research context/situation. |
| Generalizability of findings | External validity: the researcher establishes a domain in which findings are generalizable. | Transferability: how far can the findings/conclusions be transferred to other contexts, and how do they help to derive useful theories? |

A belief in "generalizability" (one law fits all circumstances) leads to a search for objectivity. Think about that word. Talking about "objectivity" implies that there is an objective reality to be discovered, independently of the interpretations and constructs that people place on what they see. If you are working in the physical sciences, this might be a realistic assumption. But in the social sciences, where "data" are obtained through reports and observations, it is far less defensible. Positivist researchers tend to respond by using a sampling strategy that assumes large numbers of subjects or data samples, to "even out" the variability between individuals. If you can assume that your total population follows a Gaussian distribution, then a large enough data sample will also follow the same distribution. These are big "ifs" that can be proven incorrect later, invalidating the whole theory on which findings are constructed. A good example of this is provided by The Long Tail (Anderson, 2006). But if we can assume a Gaussian distribution, this approach allows us two luxuries not afforded to interpretive research:
(i) We can construct theories by testing and extending existing theories (i.e. we do not need to start at first principles in theory construction);
(ii) We can statistically determine a sample size that can be expected to provide a "valid" determination of the relationships between research variables.
Generalizability, from a positivist perspective, therefore depends on sampling a sufficiently large (statistically valid) population.
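Point (ii) can be made concrete. The classical frequentist formula for the sample size needed to estimate a population mean to within a margin of error E, assuming a Gaussian population with known standard deviation sigma, is n = (z * sigma / E)^2, where z is the standard-normal critical value for the chosen confidence level. A minimal sketch in Python (the figures are invented for illustration):

```python
import math

def sample_size(sigma, margin, z=1.96):
    """Minimum n to estimate a population mean to within +/- margin.

    Assumes a Gaussian population with known standard deviation sigma;
    z is the standard-normal critical value (1.96 for 95% confidence).
    """
    return math.ceil((z * sigma / margin) ** 2)

# Illustrative figures: sigma = 15 (an IQ-like scale), margin of +/- 3
# points at 95% confidence.
print(sample_size(15, 3))           # -> 97 respondents
print(sample_size(15, 3, z=2.576))  # 99% confidence -> 166 respondents
```

Note how halving the margin of error quadruples the required sample, which is one reason positivist designs lean on large samples.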

If we are more interested in the details of individual situations and the depth of insight that can be obtained by studying them, then the search for "objectivity" is less about discovering an external reality and more about interpreting the internal reality of the situation for others, so that they can apply the lessons learned from it. For example, if we understand why IS developers consistently fail to apply objective analysis methods for the analysis and design of information systems, we can manage IS development more effectively than if we just assume that they will use formal analysis methods in full. Interpretive researchers tend to respond to this need by adopting a sampling strategy that analyzes which aspects of individual situations are transferable to similar situations. This is very different from assuming that there is some generalizable law governing human behavior.

But it is still important for the researcher to remove their own biases and prejudices from the analysis of data, in order for the findings to be useful to others. We often find what we are looking for, unless we constantly question the basis for our interpretation of the data. To ensure confirmability, we must record the basis for our findings and constantly question why and how we found specific patterns in the data. We must conduct a "conversation with the self" that asks whether our findings came from something that we read (it is very easy to be sensitized to a model or relationships suggested by a recently-read paper), from a pre-conceived theory, or from an "objective" analysis of the data. Having someone else code part of your data and then discussing differences and similarities in the coding is a very good way to question your analysis. Or you can present your analysis to others, with detailed coding instances, to allow them to "audit" your work. These methods ensure the dependability of your findings.
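The second-coder check described above can also be quantified. One standard measure is Cohen's kappa, which corrects raw agreement between two coders for the agreement expected by chance. The source does not prescribe any particular statistic, and the thematic codes below are invented, so treat this as an illustrative sketch:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders' category labels."""
    if len(codes_a) != len(codes_b) or not codes_a:
        raise ValueError("Coders must label the same non-empty set of segments")
    n = len(codes_a)
    # Observed agreement: fraction of segments given the same code.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Chance agreement, from each coder's marginal code frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Two coders label the same ten interview segments with (invented) codes.
coder1 = ["trust", "power", "trust", "norms", "power",
          "trust", "norms", "trust", "power", "norms"]
coder2 = ["trust", "power", "norms", "norms", "power",
          "trust", "norms", "trust", "trust", "norms"]
print(round(cohens_kappa(coder1, coder2), 3))  # -> 0.697
```

A kappa near 1 indicates agreement well beyond chance; low values flag codes whose definitions the two coders should discuss and refine, which is exactly the conversation the dependability check is meant to provoke.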

Once you have established a reasonable basis for data coding and synthesis, it is important to ask whether the people in the situation would themselves interpret what happened in the way that you interpreted it. I have read some very funny analyses of IS development, which revealed that the (social science) researchers conducting the study did not have a clue about what was happening between their subjects. Discussing your findings with subjects (or other people in a similar situation), presenting your findings to managers, or simply discussing your interpretations of what is happening with your subjects as you proceed, are all good ways of "validating" your interpretation of what is happening. I have heard the argument that this affects the outcome of the research. This is nonsense - everything we do, as researchers, affects what happens at our research site. The trick is to account for this in your analysis of the situation and to time your interactions with subjects so that they do not affect the sequences of behavior that you wish to observe au naturel. For example, if I observe IS developers in their analysis of a design problem and then present my findings to the group, I would expect their behavior to be affected the next time that they analyze a design problem. They may reject my findings, which might cause them to act in a way that excludes the behaviors that I observed previously. They may build on my findings, by structuring their investigation differently. Either way, observing their behavior before and after I present my findings would lead to additional findings from the study. This is what is meant by "interpretive" research - we interpret and account for all the aspects of our study, including our own presence. In doing so, we make our research internally consistent (we have accounted for all the influences on the outcome, from a perspective internal to the study).

I realize that this has been a lightning-fast discussion of how we ensure rigor in qualitative (interpretive) research. Feel free to contact me if you have any questions!


Anderson, C. (2006) The Long Tail: Why the Future of Business Is Selling Less of More, Hyperion Books, New York NY.

Gasson, S. (2003) 'Rigor in Grounded Theory Research - An Interpretive Perspective on Generating Theory From Qualitative Field Studies', in Whitman, M. and Woszczynski, A. (Eds.), Handbook for Information Systems Research, Idea Group Publishing, Hershey PA, pp. 79-102.