
Chapter 2: Principles of Research

2.1  Basic Concepts

Before we address where research questions in psychology come from—and what makes them more or less interesting—it is important to understand the kinds of questions that researchers in psychology typically ask. This requires a quick introduction to several basic concepts, many of which we will return to in more detail later in the book.

Research questions in psychology are about variables. A variable is a quantity or quality that varies across people or situations. For example, the height of the students in a psychology class is a variable because it varies from student to student. The sex of the students is also a variable as long as there are both male and female students in the class. A quantitative variable is a quantity, such as height, that is typically measured by assigning a number to each individual. Other examples of quantitative variables include people’s level of talkativeness, how depressed they are, and the number of siblings they have. A categorical variable is a quality, such as sex, and is typically measured by assigning a category label to each individual. Other examples include people’s nationality, their occupation, and whether they are receiving psychotherapy.
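The distinction above can be sketched in a few lines of code. This is a minimal illustration with invented roster data: quantitative variables support arithmetic summaries such as a mean, while categorical variables are summarized by counting category labels.

```python
# Hypothetical class roster: height (cm) is quantitative, sex is categorical.
from collections import Counter

heights_cm = [158, 172, 165, 180, 169]
sexes = ["female", "male", "female", "male", "female"]

mean_height = sum(heights_cm) / len(heights_cm)  # a mean makes sense for a quantity
sex_counts = Counter(sexes)                      # counts make sense for a category

print(f"Mean height: {mean_height:.1f} cm")      # Mean height: 168.8 cm
print(f"Counts: {dict(sex_counts)}")
```

Computing a mean of category labels (or counting unique heights) would be possible but uninformative, which is the practical force of the distinction.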

“Lots of Candy Could Lead to Violence”

Although researchers in psychology know that correlation does not imply causation, many journalists do not. Many headlines suggest that a causal relationship has been demonstrated, when a careful reading of the articles shows that it has not, because of the directionality and third-variable problems.

One article is about a study showing that children who ate candy every day were more likely than other children to be arrested for a violent offense later in life. But could candy really “lead to” violence, as the headline suggests? What alternative explanations can you think of for this statistical relationship? How could the headline be rewritten so that it is not misleading?
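The third-variable problem can be made concrete with a small simulation. The numbers and the confound here (say, low parental supervision) are entirely hypothetical: the confound raises both daily candy eating and later aggression, so the two correlate even though neither causes the other.

```python
# Hypothetical simulation of the third-variable problem: a confound drives
# both measured variables, producing a correlation with no direct causal link.
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Confound, e.g. low parental supervision, scored 0..1 per child (invented).
confound = [random.random() for _ in range(2000)]
# Candy eating and later aggression each depend on the confound plus noise,
# but not on each other.
candy = [c + random.gauss(0, 0.3) for c in confound]
aggression = [c + random.gauss(0, 0.3) for c in confound]

r = pearson_r(candy, aggression)
print(f"r = {r:.2f}")  # substantially positive despite no causal link
```

A study that only measures candy eating and later violence cannot distinguish this scenario from a genuinely causal one.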

As we will see later in the book, there are various ways that researchers address the directionality and third-variable problems. The most effective, however, is to conduct an experiment. An experiment is a study in which the researcher manipulates the independent variable. For example, instead of simply measuring how much people exercise, a researcher could bring people into a laboratory and randomly assign half of them to run on a treadmill for 15 minutes and the rest to sit on a couch for 15 minutes. Although this seems like a minor addition to the research design, it is extremely important. Now if the exercisers end up in more positive moods than those who did not exercise, it cannot be because their moods affected how much they exercised (because it was the researcher who determined how much they exercised). Likewise, it cannot be because some third variable (e.g., physical health) affected both how much they exercised and what mood they were in (because, again, it was the researcher who determined how much they exercised). Thus experiments eliminate the directionality and third-variable problems and allow researchers to draw firm conclusions about causal relationships.
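The manipulation described above, random assignment, is procedurally simple. A minimal sketch with hypothetical participant IDs: shuffling the pool and splitting it in half guarantees that neither participants' moods nor any third variable determined who exercised.

```python
# Random assignment to conditions: the researcher, not the participant,
# determines who exercises.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical IDs
random.seed(42)                                     # seeded only so the example is reproducible
random.shuffle(participants)

half = len(participants) // 2
treadmill_group = participants[:half]  # run on a treadmill for 15 minutes
couch_group = participants[half:]      # sit on a couch for 15 minutes

assert len(treadmill_group) == len(couch_group) == 10
assert not set(treadmill_group) & set(couch_group)  # no one is in both groups
```

Because group membership is determined by the shuffle alone, any later difference in mood cannot be explained by pre-existing differences between the groups, which is exactly why experiments license causal conclusions.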

2.2  Generating Good Research Questions

Good research must begin with a good research question. Yet coming up with good research questions is something that novice researchers often find difficult and stressful. One reason is that this is a creative process that can appear mysterious—even magical—with experienced researchers seeming to pull interesting research questions out of thin air. However, psychological research on creativity has shown that it is neither as mysterious nor as magical as it appears. It is largely the product of ordinary thinking strategies and persistence (Weisberg, 1993). This section covers some fairly simple strategies for finding general research ideas, turning those ideas into empirically testable research questions, and finally evaluating those questions in terms of how interesting they are and how feasible they would be to answer.

Finding Inspiration

Research questions often begin as more general research ideas—usually focusing on some behaviour or psychological characteristic: talkativeness, memory for touches, depression, bungee jumping, and so on. Before looking at how to turn such ideas into empirically testable research questions, it is worth looking at where such ideas come from in the first place. Three of the most common sources of inspiration are informal observations, practical problems, and previous research.

Informal observations include direct observations of our own and others’ behaviour as well as secondhand observations from nonscientific sources such as newspapers, books, and so on. For example, you might notice that you always seem to be in the slowest moving line at the grocery store. Could it be that most people think the same thing? Or you might read in the local newspaper about people donating money and food to a local family whose house has burned down and begin to wonder about who makes such donations and why. Some of the most famous research in psychology has been inspired by informal observations. Stanley Milgram’s famous research on obedience, for example, was inspired in part by journalistic reports of the trials of accused Nazi war criminals—many of whom claimed that they were only obeying orders. This led him to wonder about the extent to which ordinary people will commit immoral acts simply because they are ordered to do so by an authority figure (Milgram, 1963).

Practical problems can also inspire research ideas, leading directly to applied research in such domains as law, health, education, and sports. Can human figure drawings help children remember details about being physically or sexually abused? How effective is psychotherapy for depression compared to drug therapy? To what extent do cell phones impair people’s driving ability? How can we teach children to read more efficiently? What is the best mental preparation for running a marathon?

Probably the most common inspiration for new research ideas, however, is previous research. Recall that science is a kind of large-scale collaboration in which many different researchers read and evaluate each other’s work and conduct new studies to build on it. Of course, experienced researchers are familiar with previous research in their area of expertise and probably have a long list of ideas. This suggests that novice researchers can find inspiration by consulting with a more experienced researcher (e.g., students can consult a faculty member). But they can also find inspiration by picking up a copy of almost any professional journal and reading the titles and abstracts. In one typical issue of Psychological Science, for example, you can find articles on the perception of shapes, anti-Semitism, police lineups, the meaning of death, second-language learning, people who seek negative emotional experiences, and many other topics. If you can narrow your interests down to a particular topic (e.g., memory) or domain (e.g., health care), you can also look through more specific journals, such as Memory & Cognition or Health Psychology.

Generating Empirically Testable Research Questions

Once you have a research idea, you need to use it to generate one or more empirically testable research questions, that is, questions expressed in terms of a single variable or relationship between variables. One way to do this is to look closely at the discussion section in a recent research article on the topic. This is the last major section of the article, in which the researchers summarize their results, interpret them in the context of past research, and suggest directions for future research. These suggestions often take the form of specific research questions, which you can then try to answer with additional research. This can be a good strategy because it is likely that the suggested questions have already been identified as interesting and important by experienced researchers.

But you may also want to generate your own research questions. How can you do this? First, if you have a particular behaviour or psychological characteristic in mind, you can simply conceptualize it as a variable and ask how frequent or intense it is. How many words on average do people speak per day? How accurate are children’s memories of being touched? What percentage of people have sought professional help for depression? If the question has never been studied scientifically—which is something that you will learn in your literature review—then it might be interesting and worth pursuing.

If scientific research has already answered the question of how frequent or intense the behaviour or characteristic is, then you should consider turning it into a question about a statistical relationship between that behaviour or characteristic and some other variable. One way to do this is to ask yourself the following series of more general questions and write down all the answers you can think of.

·         What are some possible causes of the behaviour or characteristic?

·         What are some possible effects of the behaviour or characteristic?

·         What types of people might exhibit more or less of the behaviour or characteristic?

·         What types of situations might elicit more or less of the behaviour or characteristic?

In general, each answer you write down can be conceptualized as a second variable, suggesting a question about a statistical relationship. If you were interested in talkativeness, for example, it might occur to you that a possible cause of this psychological characteristic is family size. Is there a statistical relationship between family size and talkativeness? Or it might occur to you that people seem to be more talkative in same-sex groups than mixed-sex groups. Is there a difference in the average level of talkativeness of people in same-sex groups and people in mixed-sex groups? This approach should allow you to generate many different empirically testable questions about almost any behaviour or psychological characteristic.
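The second question above, whether average talkativeness differs between same-sex and mixed-sex groups, reduces to a comparison of group means. A sketch with invented words-per-hour numbers:

```python
# Hypothetical talkativeness scores (words per hour) for two group types.
same_sex = [420, 380, 450, 400, 430]
mixed_sex = [360, 340, 390, 355, 370]

mean_same = sum(same_sex) / len(same_sex)
mean_mixed = sum(mixed_sex) / len(mixed_sex)

print(f"Same-sex mean:  {mean_same:.0f}")   # 416
print(f"Mixed-sex mean: {mean_mixed:.0f}")  # 363
print(f"Difference:     {mean_same - mean_mixed:.0f}")
```

Whether an observed difference like this reflects a real relationship rather than sampling error is a question for the inferential statistics covered later in the book.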

If through this process you generate a question that has never been studied scientifically—which again is something that you will learn in your literature review—then it might be interesting and worth pursuing. But what if you find that it has been studied scientifically? Although novice researchers often want to give up and move on to a new question at this point, this is not necessarily a good strategy. For one thing, the fact that the question has been studied scientifically and the research published suggests that it is of interest to the scientific community. For another, the question can almost certainly be refined so that its answer will still contribute something new to the research literature. Again, asking yourself a series of more general questions about the statistical relationship is a good strategy.

·         Are there other ways to operationally define the variables?

·         Are there types of people for whom the statistical relationship might be stronger or weaker?

·         Are there situations in which the statistical relationship might be stronger or weaker—including situations with practical importance?

For example, research has shown that women and men speak about the same number of words per day—but this was when talkativeness was measured in terms of the number of words spoken per day among college students in the United States and Mexico. We can still ask whether other ways of measuring talkativeness—perhaps the number of different people spoken to each day—produce the same result. Or we can ask whether studying elderly people or people from other cultures produces the same result. Again, this approach should help you generate many different research questions about almost any statistical relationship.

2.3  Evaluating Research Questions

Researchers usually generate many more research questions than they ever attempt to answer. This means they must have some way of evaluating the research questions they generate so that they can choose which ones to pursue. In this section, we consider two criteria for evaluating research questions: the interestingness of the question and the feasibility of answering it.


Interestingness

How often do people tie their shoes? Do people feel pain when you punch them in the jaw? Are women more likely to wear makeup than men? Do people prefer vanilla or chocolate ice cream? Although it would be a fairly simple matter to design a study and collect data to answer these questions, you probably would not want to because they are not interesting. We are not talking here about whether a research question is interesting to us personally but whether it is interesting to people more generally and, especially, to the scientific community. But what makes a research question interesting in this sense? Here we look at three factors that affect the interestingness of a research question: the answer is in doubt, the answer fills a gap in the research literature, and the answer has important practical implications.

First, a research question is interesting to the extent that its answer is in doubt. Obviously, questions that have been answered by scientific research are no longer interesting as the subject of new empirical research. But the fact that a question has not been answered by scientific research does not necessarily make it interesting. There has to be some reasonable chance that the answer to the question will be something that we did not already know. But how can you assess this before actually collecting data? One approach is to try to think of reasons to expect different answers to the question—especially ones that seem to conflict with common sense. If you can think of reasons to expect at least two different answers, then the question might be interesting. If you can think of reasons to expect only one answer, then it probably is not. The question of whether women are more talkative than men is interesting because there are reasons to expect both answers. The existence of the stereotype itself suggests the answer could be yes, but the fact that women’s and men’s verbal abilities are fairly similar suggests the answer could be no. The question of whether people feel pain when you punch them in the jaw is not interesting because there is absolutely no reason to think that the answer could be anything other than a resounding yes.

A second important factor to consider when deciding if a research question is interesting is whether answering it will fill a gap in the research literature. Again, this means in part that the question has not already been answered by scientific research. But it also means that the question is in some sense a natural one for people who are familiar with the research literature. For example, the question of whether human figure drawings can help children recall touch information would be likely to occur to anyone who was familiar with research on the unreliability of eyewitness memory (especially in children) and the ineffectiveness of some alternative interviewing techniques.

A final factor to consider when deciding whether a research question is interesting is whether its answer has important practical implications. Again, the question of whether human figure drawings help children recall information about being touched has important implications for how children are interviewed in physical and sexual abuse cases. The question of whether cell phone use impairs driving is interesting because it is relevant to the personal safety of everyone who travels by car and to the debate over whether cell phone use should be restricted by law.


Feasibility

A second important criterion for evaluating research questions is the feasibility of successfully answering them. There are many factors that affect feasibility, including time, money, equipment and materials, technical knowledge and skill, and access to research participants. Clearly, researchers need to take these factors into account so that they do not waste time and effort pursuing research that they cannot complete successfully.

Looking through a sample of professional journals in psychology will reveal many studies that are complicated and difficult to carry out. These include longitudinal designs in which participants are tracked over many years, neuroimaging studies in which participants’ brain activity is measured while they carry out various mental tasks, and complex non-experimental studies involving several variables and complicated statistical analyses. Keep in mind, though, that such research tends to be carried out by teams of highly trained researchers whose work is often supported in part by government and private grants. Keep in mind also that research does not have to be complicated or difficult to produce interesting and important results. Looking through a sample of professional journals will also reveal studies that are relatively simple and easy to carry out—perhaps involving a convenience sample of college students and a paper-and-pencil task.

A final point here is that it is generally good practice to use methods that have already been used successfully by other researchers. For example, if you want to manipulate people’s moods to make some of them happy, it would be a good idea to use one of the many approaches that have been used successfully by other researchers (e.g., paying them a compliment). This is good not only for the sake of feasibility—the approach is “tried and true”—but also because it provides greater continuity with previous research. This makes it easier to compare your results with those of other researchers and to understand the implications of their research for yours, and vice versa.

Key Takeaways

·         Research ideas can come from a variety of sources, including informal observations, practical problems, and previous research.

·         Research questions expressed in terms of variables and relationships between variables can be suggested by other researchers or generated by asking a series of more general questions about the behaviour or psychological characteristic of interest.

·         It is important to evaluate how interesting a research question is before designing a study and collecting data to answer it. Factors that affect interestingness are the extent to which the answer is in doubt, whether it fills a gap in the research literature, and whether it has important practical implications.

·         It is also important to evaluate how feasible a research question will be to answer. Factors that affect feasibility include time, money, technical knowledge and skill, and access to special equipment and research participants.

References from Chapter 2

Milgram, S. (1963). Behavioral study of obedience. Journal of Abnormal and Social Psychology, 67, 371–378.

Stanovich, K. E. (2010). How to think straight about psychology (9th ed.). Boston, MA: Allyn Bacon.

Weisberg, R. W. (1993). Creativity: Beyond the myth of genius. New York, NY: Freeman.

Research Methods in Psychology & Neuroscience Copyright © by Dalhousie University Introduction to Psychology and Neuroscience Team. All Rights Reserved.

NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

National Academy of Sciences (US), National Academy of Engineering (US) and Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press (US); 1992.

2 Scientific Principles and Research Practices

Until the past decade, scientists, research institutions, and government agencies relied solely on a system of self-regulation based on shared ethical principles and generally accepted research practices to ensure integrity in the research process. Among the very basic principles that guide scientists, as well as many other scholars, are those expressed as respect for the integrity of knowledge, collegiality, honesty, objectivity, and openness. These principles are at work in the fundamental elements of the scientific method, such as formulating a hypothesis, designing an experiment to test the hypothesis, and collecting and interpreting data. In addition, more particular principles characteristic of specific scientific disciplines influence the methods of observation; the acquisition, storage, management, and sharing of data; the communication of scientific knowledge and information; and the training of younger scientists. 1 How these principles are applied varies considerably among the several scientific disciplines, different research organizations, and individual investigators.

The basic and particular principles that guide scientific research practices exist primarily in an unwritten code of ethics. Although some have proposed that these principles should be written down and formalized, 2 the principles and traditions of science are, for the most part, conveyed to successive generations of scientists through example, discussion, and informal education. As was pointed out in an early Academy report on responsible conduct of research in the health sciences, “a variety of informal and formal practices and procedures currently exist in the academic research environment to assure and maintain the high quality of research conduct” (IOM, 1989a, p. 18).

Physicist Richard Feynman invoked the informal approach to communicating the basic principles of science in his 1974 commencement address at the California Institute of Technology (Feynman, 1985):

[There is an] idea that we all hope you have learned in studying science in school—we never explicitly say what this is, but just hope that you catch on by all the examples of scientific investigation. . . . It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it; other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. In summary, the idea is to try to give all the information to help others to judge the value of your contribution, not just the information that leads to judgment in one particular direction or another. (pp. 311-312)

Many scholars have noted the implicit nature and informal character of the processes that often guide scientific research practices and inference. 3 Research in well-established fields of scientific knowledge, guided by commonly accepted theoretical paradigms and experimental methods, involves few disagreements about what is recognized as sound scientific evidence. Even in a revolutionary scientific field like molecular biology, students and trainees have learned the basic principles governing judgments made in such standardized procedures as cloning a new gene and determining its sequence.

In evaluating practices that guide research endeavors, it is important to consider the individual character of scientific fields. Research fields that yield highly replicable results, such as ordinary organic chemical structures, are quite different from fields such as cellular immunology, which are in a much earlier stage of development and accumulate much erroneous or uninterpretable material before the pieces fit together coherently. When a research field is too new or too fragmented to support consensual paradigms or established methods, different scientific practices can emerge.


In broadest terms, scientists seek a systematic organization of knowledge about the universe and its parts. This knowledge is based on explanatory principles whose verifiable consequences can be tested by independent observers. Science encompasses a large body of evidence collected by repeated observations and experiments. Although its goal is to approach true explanations as closely as possible, its investigators claim no final or permanent explanatory truths. Science changes. It evolves. Verifiable facts always take precedence. . . .

Scientists operate within a system designed for continuous testing, where corrections and new findings are announced in refereed scientific publications. The task of systematizing and extending the understanding of the universe is advanced by eliminating disproved ideas and by formulating new tests of others until one emerges as the most probable explanation for any given observed phenomenon. This is called the scientific method.

An idea that has not yet been sufficiently tested is called a hypothesis. Different hypotheses are sometimes advanced to explain the same factual evidence. Rigor in the testing of hypotheses is the heart of science. If no verifiable tests can be formulated, the idea is called an ad hoc hypothesis—one that is not fruitful; such hypotheses fail to stimulate research and are unlikely to advance scientific knowledge.

A fruitful hypothesis may develop into a theory after substantial observational or experimental support has accumulated. When a hypothesis has survived repeated opportunities for disproof and when competing hypotheses have been eliminated as a result of failure to produce the predicted consequences, that hypothesis may become the accepted theory explaining the original facts.

Scientific theories are also predictive. They allow us to anticipate yet unknown phenomena and thus to focus research on more narrowly defined areas. If the results of testing agree with predictions from a theory, the theory is provisionally corroborated. If not, it is proved false and must be either abandoned or modified to account for the inconsistency.

Scientific theories, therefore, are accepted only provisionally. It is always possible that a theory that has withstood previous testing may eventually be disproved. But as theories survive more tests, they are regarded with higher levels of confidence. . . .

In science, then, facts are determined by observation or measurement of natural or experimental phenomena. A hypothesis is a proposed explanation of those facts. A theory is a hypothesis that has gained wide acceptance because it has survived rigorous investigation of its predictions. . . .

. . . science accommodates, indeed welcomes, new discoveries: its theories change and its activities broaden as new facts come to light or new potentials are recognized. Examples of events changing scientific thought are legion. . . . Truly scientific understanding cannot be attained or even pursued effectively when explanations not derived from or tested by the scientific method are accepted.

SOURCE: National Academy of Sciences and National Research Council (1984), pp. 8-11.

A well-established discipline can also experience profound changes during periods of new conceptual insights. In these moments, when scientists must cope with shifting concepts, the matter of what counts as scientific evidence can be subject to dispute. Historian Jan Sapp has described the complex interplay between theory and observation that characterizes the operation of scientific judgment in the selection of research data during revolutionary periods of paradigmatic shift (Sapp, 1990, p. 113):

What “liberties” scientists are allowed in selecting positive data and omitting conflicting or “messy” data from their reports is not defined by any timeless method. It is a matter of negotiation. It is learned, acquired socially; scientists make judgments about what fellow scientists might expect in order to be convincing. What counts as good evidence may be more or less well-defined after a new discipline or specialty is formed; however, at revolutionary stages in science, when new theories and techniques are being put forward, when standards have yet to be negotiated, scientists are less certain as to what others may require of them to be deemed competent and convincing.

Explicit statements of the values and traditions that guide research practice have evolved through the disciplines and have been given in textbooks on scientific methodologies. 4 In the past few decades, many scientific and engineering societies representing individual disciplines have also adopted codes of ethics (see Volume II of this report for examples), 5 and more recently, a few research institutions have developed guidelines for the conduct of research (see Chapter 6 ).

But the responsibilities of the research community and research institutions in assuring individual compliance with scientific principles, traditions, and codes of ethics are not well defined. In recent years, the absence of formal statements by research institutions of the principles that should guide research conducted by their members has prompted criticism that scientists and their institutions lack a clearly identifiable means to ensure the integrity of the research process.


In all of science, but with unequal emphasis in the several disciplines, inquiry proceeds based on observation and experimentation, the exercising of informed judgment, and the development of theory. Research practices are influenced by a variety of factors, including:

  • The general norms of science;
  • The nature of particular scientific disciplines and the traditions of organizing a specific body of scientific knowledge;
  • The example of individual scientists, particularly those who hold positions of authority or respect based on scientific achievements;
  • The policies and procedures of research institutions and funding agencies; and
  • Socially determined expectations.

The first three factors have been important in the evolution of modern science. The latter two have acquired more importance in recent times.

Norms of Science

As members of a professional group, scientists share a set of common values, aspirations, training, and work experiences. 6 Scientists are distinguished from other groups by their beliefs about the kinds of relationships that should exist among them, about the obligations incurred by members of their profession, and about their role in society. A set of general norms is embedded in the methods and disciplines of science; these norms guide individual scientists in the organization and performance of their research efforts and also provide a basis for nonscientists to understand and evaluate the performance of scientists.

But there is uncertainty about the extent to which individual scientists adhere to such norms. Most social scientists conclude that all behavior is influenced to some degree by norms that reflect socially or morally supported patterns of preference when alternative courses of action are possible. However, perfect conformity with any relevant set of norms is always lacking for a variety of reasons: the existence of competing norms, constraints, and obstacles in organizational or group settings, and personality factors. The strength of these influences, and the circumstances that may affect them, are not well understood.

In a classic statement of the importance of scientific norms, Robert Merton specified four norms as essential for the effective functioning of science: communism (by which Merton meant the communal sharing of ideas and findings), universalism, disinterestedness, and organized skepticism (Merton, 1973). Neither Merton nor other sociologists of science have provided solid empirical evidence for the degree of influence of these norms in a representative sample of scientists. In opposition to Merton, a British sociologist of science, Michael Mulkay, has argued that these norms are “ideological” covers for self-interested behavior that reflects status and politics (Mulkay, 1975). And the British physicist and sociologist of science John Ziman, in an article synthesizing critiques of Merton's formulation, has specified a set of structural factors in the bureaucratic and corporate research environment that impede the realization of that particular set of norms: the proprietary nature of research, the local importance and funding of research, the authoritarian role of the research manager, commissioned research, and the required expertise in understanding how to use modern instruments (Ziman, 1990).

It is clear that the specific influence of norms on the development of scientific research practices is simply not known and that further study of key determinants is required, both theoretically and empirically. Commonsense views, ideologies, and anecdotes will not support a conclusive appraisal.

Individual Scientific Disciplines

Science comprises individual disciplines that reflect historical developments and the organization of natural and social phenomena for study. Social scientists may have methods for recording research data that differ from the methods of biologists, and scientists who depend on complex instrumentation may have authorship practices different from those of scientists who work in small groups or carry out field studies. Even within a discipline, experimentalists engage in research practices that differ from the procedures followed by theorists.

Disciplines are the “building blocks of science,” and they “designate the theories, problems, procedures, and solutions that are prescribed, proscribed, permitted, and preferred” (Zuckerman, 1988a, p. 520). The disciplines have traditionally provided the vital connections between scientific knowledge and its social organization. Scientific societies and scientific journals, some of which have tens of thousands of members and readers, and the peer review processes used by journals and research sponsors are visible forms of the social organization of the disciplines.

The power of the disciplines to shape research practices and standards is derived from their ability to provide a common frame of reference in evaluating the significance of new discoveries and theories in science. It is the members of a discipline, for example, who determine what is “good biology” or “good physics” by examining the implications of new research results. The disciplines' abilities to influence research standards are affected by the subjective quality of peer review and the extent to which factors other than disciplinary quality may affect judgments about scientific achievements. Disciplinary departments rely primarily on informal social and professional controls to promote responsible behavior and to penalize deviant behavior. These controls, such as social ostracism, the denial of letters of support for future employment, and the withholding of research resources, can deter and penalize unprofessional behavior within research institutions. 7

Many scientific societies representing individual disciplines have adopted explicit standards in the form of codes of ethics or guidelines governing, for example, the editorial practices of their journals and other publications. 8 Many societies have also established procedures for enforcing their standards. In the past decade, the societies' codes of ethics—which historically have been exhortations to uphold high standards of professional behavior—have incorporated specific guidelines relevant to authorship practices, data management, training and mentoring, conflict of interest, reporting research findings, treatment of confidential or proprietary information, and addressing error or misconduct.

The Role of Individual Scientists and Research Teams

The methods by which individual scientists and students are socialized in the principles and traditions of science are poorly understood. The principles of science and the practices of the disciplines are transmitted by scientists in classroom settings and, perhaps more importantly, in research groups and teams. The social setting of the research group is a strong and valuable characteristic of American science and education. The dynamics of research groups can foster—or inhibit—innovation, creativity, education, and collaboration.

One author of a historical study of research groups in the chemical and biochemical sciences has observed that the laboratory director or group leader is the primary determinant of a group's practices (Fruton, 1990). Individuals in positions of authority are visible and are also influential in determining funding and other support for the career paths of their associates and students. Research directors and department chairs, by virtue of personal example, thus can reinforce, or weaken, the power of disciplinary standards and scientific norms to affect research practices.

To the extent that the behavior of senior scientists conforms with general expectations for appropriate scientific and disciplinary practice, the research system is coherent and mutually reinforcing. When the behavior of research directors or department chairs diverges from expectations for good practice, however, the expected norms of science become ambiguous, and their effects are thus weakened. Thus personal example and the perceived behavior of role models and leaders in the research community can be powerful stimuli in shaping the research practices of colleagues, associates, and students.

The role of individuals in influencing research practices can vary by research field, institution, or time. The standards and expectations for behavior exemplified by scientists who are highly regarded for their technical competence or creative insight may have greater influence than the standards of others. Individual and group behaviors may also be more influential in times of uncertainty and change in science, especially when new scientific theories, paradigms, or institutional relationships are being established.

Institutional Policies

Universities, independent institutes, and government and industrial research organizations create the environment in which research is done. As the recipients of federal funds and the institutional sponsors of research activities, administrative officers must comply with regulatory and legal requirements that accompany public support. They are required, for example, “to foster a research environment that discourages misconduct in all research and that deals forthrightly with possible misconduct” (DHHS, 1989a, p. 32451).

Academic institutions traditionally have relied on their faculty to ensure that appropriate scientific and disciplinary standards are maintained. A few universities and other research institutions have also adopted policies or guidelines to clarify the principles that their members are expected to observe in the conduct of scientific research. 9 In addition, as a result of several highly publicized incidents of misconduct in science and the subsequent enactment of governmental regulations, most major research institutions have now adopted policies and procedures for handling allegations of misconduct in science.

Institutional policies governing research practices can have a powerful effect on research practices if they are commensurate with the norms that apply to a wide spectrum of research investigators. In particular, the process of adopting and implementing strong institutional policies can sensitize the members of those institutions to the potential for ethical problems in their work. Institutional policies can establish explicit standards that institutional officers then have the power to enforce with sanctions and penalties.

Institutional policies are limited, however, in their ability to specify the details of every problematic situation, and they can weaken or displace individual professional judgment in such situations. Currently, academic institutions have very few formal policies and programs in specific areas such as authorship, communication and publication, and training and supervision.

Government Regulations and Policies

Government agencies have developed specific rules and procedures that directly affect research practices in areas such as laboratory safety, the treatment of human and animal research subjects, and the use of toxic or potentially hazardous substances in research.

But policies and procedures adopted by some government research agencies to address misconduct in science (see Chapter 5) represent a significant new regulatory development in the relationships between research institutions and government sponsors. The standards and criteria used to monitor institutional compliance with an increasing number of government regulations and policies affecting research practices have been a source of significant disagreement and tension within the research community.

In recent years, some government research agencies have also adopted policies and procedures for the treatment of research data and materials in their extramural research programs. For example, the National Science Foundation (NSF) has implemented a data-sharing policy through program management actions, including proposal review and award negotiations and conditions. The NSF policy acknowledges that grantee institutions will “keep principal rights to intellectual property conceived under NSF sponsorship” to encourage appropriate commercialization of the results of research (NSF, 1989b, p. 1). However, the NSF policy emphasizes “that retention of such rights does not reduce the responsibility of researchers and institutions to make results and supporting materials openly accessible” (p. 1).

In seeking to foster data sharing under federal grant awards, the government relies extensively on the scientific traditions of openness and sharing. Research agency officials have observed candidly that if the vast majority of scientists were not so committed to openness and dissemination, government policy might require more aggressive action. But the principles that have traditionally characterized scientific inquiry can be difficult to maintain. For example, NSF staff have commented, "Unless we can arrange real returns or incentives for the original investigator, either in financial support or in professional recognition, another researcher's request for sharing is likely to present itself as 'hassle'—an unwelcome nuisance and diversion. Therefore, we should hardly be surprised if researchers display some reluctance to share in practice, however much they may declare and genuinely feel devotion to the ideal of open scientific communication" (NSF, 1989a, p. 4).

Social Attitudes and Expectations

Research scientists are part of a larger human society that has recently experienced profound changes in attitudes about ethics, morality, and accountability in business, the professions, and government. These attitudes have included greater skepticism of the authority of experts and broader expectations about the need for visible mechanisms to assure proper research practices, especially in areas that affect the public welfare. Social attitudes are also having a more direct influence on research practices as science achieves a more prominent and public role in society. In particular, concern about waste, fraud, and abuse involving government funds has emerged as a factor that now directly influences the practices of the research community.

Varying historical and conceptual perspectives also can affect expectations about standards of research practice. For example, some journalists have criticized several prominent scientists, such as Mendel, Newton, and Millikan, because they “cut corners in order to make their theories prevail” (Broad and Wade, 1982, p. 35). The criticism suggests that all scientists at all times, in all phases of their work, should be bound by identical standards.

Yet historical studies of the social context in which scientific knowledge has been attained suggest that modern criticism of early scientific work often imposes contemporary standards of objectivity and empiricism that have in fact been developed in an evolutionary manner. 10 Holton has argued, for example, that in selecting data for publication, Millikan exercised creative insight in excluding unreliable data resulting from experimental error. But such practices, by today's standards, would not be acceptable without reporting the justification for omission of recorded data.

In the early stages of pioneering studies, particularly when fundamental hypotheses are subject to change, scientists must be free to use creative judgment in deciding which data are truly significant. In such moments, the standards of proof may be quite different from those that apply at stages when confirmation and consensus are sought from peers. Scientists must consistently guard against self-deception, however, particularly when theoretical prejudices tend to overwhelm the skepticism and objectivity basic to experimental practices.

In discussing “the theory-ladenness of observations,” Sapp (1990) observed the fundamental paradox that can exist in determining the “appropriateness” of data selection in certain experiments done in the past: scientists often craft their experiments so that the scientific problems and research subjects conform closely with the theory that they expect to verify or refute. Thus, in some cases, their observations may come closer to theoretical expectations than what might be statistically proper.

This source of bias may be acceptable when it is influenced by scientific insight and judgment. But political, financial, or other sources of bias can corrupt the process of data selection. In situations where both kinds of influence exist, it is particularly important for scientists to be forthcoming about possible sources of bias in the interpretation of research results. The coupling of science to other social purposes in fostering economic growth and commercial technology requires renewed vigilance to maintain acceptable standards for disclosure and control of financial or competitive conflicts of interest and bias in the research environment. The failure to distinguish between appropriate and inappropriate sources of bias in research practices can lead to erosion of public trust in the autonomy of the research enterprise.


In reviewing modern research practices for a range of disciplines, and analyzing factors that could affect the integrity of the research process, the panel focused on the following four areas:

Data handling—acquisition, management, and storage;

Communication and publication;

Correction of errors; and

Research training and mentorship.

Commonly understood practices operate in each area to promote responsible research conduct; nevertheless, some questionable research practices also occur. Some research institutions, scientific societies, and journals have established policies to discourage questionable practices, but there is not yet a consensus on how to treat violations of these policies. 11 Furthermore, there is concern that some questionable practices may be encouraged or stimulated by other institutional factors. For example, promotion or appointment policies that stress quantity rather than the quality of publications as a measure of productivity could contribute to questionable practices.

Data Handling

Acquisition and management.

Scientific experiments and measurements are transformed into research data. The term “research data” applies to many different forms of scientific information, including raw numbers and field notes, machine tapes and notebooks, edited and categorized observations, interpretations and analyses, derived reagents and vectors, and tables, charts, slides, and photographs.

Research data are the basis for reporting discoveries and experimental results. Scientists traditionally describe the methods used for an experiment, along with appropriate calibrations, instrument types, the number of repeated measurements, and particular conditions that may have led to the omission of some data in the reported version. Standard procedures, innovations for particular purposes, and judgments concerning the data are also reported. The general standard of practice is to provide information that is sufficiently complete so that another scientist can repeat or extend the experiment.

When a scientist communicates a set of results and a related piece of theory or interpretation in any form (at a meeting, in a journal article, or in a book), it is assumed that the research has been conducted as reported. It is a violation of the most fundamental aspect of the scientific research process to set forth measurements that have not, in fact, been performed (fabrication) or to ignore or change relevant data that contradict the reported findings (falsification).

On occasion what is actually proper research practice may be confused with misconduct in science. Thus, for example, applying scientific judgment to refine data and to remove spurious results places special responsibility on the researcher to avoid misrepresentation of findings. Responsible practice requires that scientists disclose the basis for omitting or modifying data in their analyses of research results, especially when such omissions or modifications could alter the interpretation or significance of their work.

In the last decade, the methods by which research scientists handle, store, and provide access to research data have received increased scrutiny, owing to conflicts over ownership, such as those described by Nelkin (1984); advances in the methods and technologies that are used to collect, retain, and share data; and the costs of data storage. More specific concerns have involved the profitability associated with the patenting of science-based results in some fields and the need to verify independently the accuracy of research results used in public or private decision making. In resolving competing claims, the interests of individual scientists and research institutions may not always coincide: researchers may be willing to exchange scientific data of possible economic significance without regard for financial or institutional implications, whereas their institutions may wish to establish intellectual property rights and obligations prior to any disclosure.

The general norms of science emphasize the principle of openness. Scientists are generally expected to exchange research data as well as unique research materials that are essential to the replication or extension of reported findings. The 1985 report Sharing Research Data concluded that the general principle of data sharing is widely accepted, especially in the behavioral and social sciences (NRC, 1985). The report catalogued the benefits of data sharing, including maintaining the integrity of the research process by providing independent opportunities for verification, refutation, or refinement of original results and data; promoting new research and the development and testing of new theories; and encouraging appropriate use of empirical data in policy formulation and evaluation. The same report examined obstacles to data sharing, which include the criticism or competition that might be stimulated by data sharing; technical barriers that may impede the exchange of computer-readable data; lack of documentation of data sets; and the considerable costs of documentation, duplication, and transfer of data.

The exchange of research data and reagents is ideally governed by principles of collegiality and reciprocity: scientists often distribute reagents with the hope that the recipient will reciprocate in the future, and some give materials out freely with no stipulations attached. 12 Scientists who repeatedly or flagrantly deviate from the tradition of sharing become known to their peers and may suffer subtle forms of professional isolation. Such cases may be well known to senior research investigators, but they are not well documented.

Some scientists may share materials as part of a collaborative agreement in exchange for co-authorship on resulting publications. Some donors stipulate that the shared materials are not to be used for applications already being pursued by the donor's laboratory. Other stipulations include that the material not be passed on to third parties without prior authorization, that the material not be used for proprietary research, or that the donor receive prepublication copies of research publications derived from the material. In some instances, so-called materials transfer agreements are executed to specify the responsibilities of donor and recipient. As more academic research is being supported under proprietary agreements, researchers and institutions are experiencing the effects of these arrangements on research practices.

Governmental support for research studies may raise fundamental questions of ownership and rights of control, particularly when data are subsequently used in proprietary efforts, public policy decisions, or litigation. Some federal research agencies have adopted policies for data sharing to mitigate conflicts over issues of ownership and access (NIH, 1987; NSF, 1989b).

Many research investigators store primary data in the laboratories in which the data were initially derived, generally as electronic records or data sheets in laboratory notebooks. For most academic laboratories, local customary practice governs the storage (or discarding) of research data. Formal rules or guidelines concerning their disposition are rare.

Many laboratories customarily store primary data for a set period (often 3 to 5 years) after they are initially collected. Data that support publications are usually retained for a longer period than are those tangential to reported results. Some research laboratories serve as the proprietor of data and data books that are under the stewardship of the principal investigator. Others maintain that it is the responsibility of the individuals who collected the data to retain proprietorship, even if they leave the laboratory.

Concerns about misconduct in science have raised questions about the roles of research investigators and of institutions in maintaining and providing access to primary data. In some cases of alleged misconduct, the inability or unwillingness of an investigator to provide primary data or witnesses to support published reports sometimes has constituted a presumption that the experiments were not conducted as reported. 13 Furthermore, there is disagreement about the responsibilities of investigators to provide access to raw data, particularly when the reported results have been challenged by others. Many scientists believe that access should be restricted to peers and colleagues, usually following publication of research results, to reduce external demands on the time of the investigator. Others have suggested that raw data supporting research reports should be accessible to any critic or competitor, at any time, especially if the research is conducted with public funds. This topic, in particular, could benefit from further research and systematic discussion to clarify the rights and responsibilities of research investigators, institutions, and sponsors.

Institutional policies have been developed to guide data storage practices in some fields, often stimulated by desires to support the patenting of scientific results and to provide documentation for resolving disputes over patent claims. Laboratories concerned with patents usually have very strict rules concerning data storage and note keeping, often requiring that notes be recorded in an indelible form and be countersigned by an authorized person each day. A few universities have also considered the creation of central storage repositories for all primary data collected by their research investigators. Some government research institutions and industrial research centers maintain such repositories to safeguard the record of research developments for scientific, historical, proprietary, and national security interests.

In the academic environment, however, centralized research records raise complex problems of ownership, control, and access. Centralized data storage is costly in terms of money and space, and it presents logistical problems of cataloguing and retrieving data. There have been suggestions that some types of scientific data should be incorporated into centralized computerized data banks, a portion of which could be subject to periodic auditing or certification. 14 But much investigator-initiated research is not suitable for random data audits because of the exploratory nature of basic or discovery research. 15

Some scientific journals now require that full data for research papers be deposited in a centralized data bank before final publication. Policies and practices differ, but in some fields support is growing for compulsory deposit to enhance researchers' access to supporting data.

Issues Related to Advances in Information Technology

Advances in electronic and other information technologies have raised new questions about the customs and practices that influence the storage, ownership, and exchange of electronic data and software. A number of special issues, not addressed by the panel, are associated with computer modeling, simulation, and other approaches that are becoming more prevalent in the research environment. Computer technology can enhance research collaboration; it can also create new impediments to data sharing resulting from increased costs, the need for specialized equipment, or liabilities or uncertainties about responsibilities for faulty data, software, or computer-generated models.

Advances in computer technology may assist in maintaining and preserving accurate records of research data. Such records could help resolve questions about the timing or accuracy of specific research findings, especially when a principal investigator is not available or is uncooperative in responding to such questions. In principle, properly managed information technologies, utilizing advances in nonerasable optical disk systems, might reinforce openness in scientific research and make primary data more transparent to collaborators and research managers. For example, the so-called WORM (write once, read many) systems provide a high-density digital storage medium that supplies an ineradicable audit trail and historical record for all entered information (Haas, 1991).

Advances in information technologies could thus provide an important benefit to research institutions that wish to emphasize greater access to and storage of primary research data. But the development of centralized information systems in the academic research environment raises difficult issues of ownership, control, and principle that reflect the decentralized character of university governance. Such systems are also a source of additional research expense, often borne by individual investigators. Moreover, if centralized systems are perceived by scientists as an inappropriate or ineffective form of management or oversight of individual research groups, they simply may not work in an academic environment.

Communication and Publication

Scientists communicate research results by a variety of formal and informal means. In earlier times, new findings and interpretations were communicated by letter, personal meeting, and publication. Today, computer networks and facsimile machines have supplemented letters and telephones in facilitating rapid exchange of results. Scientific meetings routinely include poster sessions and press conferences as well as formal presentations. Although research publications continue to document research findings, the appearance of electronic publications and other information technologies heralds change. In addition, incidents of plagiarism, the increasing number of authors per article in selected fields, and the methods by which publications are assessed in determining appointments and promotions have all increased concerns about the traditions and practices that have guided communication and publication.

Journal publication, traditionally an important means of sharing information and perspectives among scientists, is also a principal means of establishing a record of achievement in science. Evaluation of the accomplishments of individual scientists often involves not only the numbers of articles that have resulted from a selected research effort, but also the particular journals in which the articles have appeared. Journal submission dates are often important in establishing priority and intellectual property claims.

Authorship of original research reports is an important indicator of accomplishment, priority, and prestige within the scientific community. Questions of authorship in science are intimately connected with issues of credit and responsibility. Authorship practices are guided by disciplinary traditions, customary practices within research groups, and professional and journal standards and policies. 16 There is general acceptance of the principle that each named author has made a significant intellectual contribution to the paper, even though there remains substantial disagreement over the types of contributions that are judged to be significant.

A general rule is that an author must have participated sufficiently in the work to take responsibility for its content and vouch for its validity. Some journals have adopted more specific guidelines, suggesting that credit for authorship be contingent on substantial participation in one or more of the following categories: (1) conception and design of the experiment, (2) execution of the experiment and collection and storage of the supporting data, (3) analysis and interpretation of the primary data, and (4) preparation and revision of the manuscript. The extent of participation in these four activities required for authorship varies across journals, disciplines, and research groups. 17

“Honorary,” “gift,” or other forms of noncontributing authorship are problems with several dimensions. 18 Honorary authors reap an inflated list of publications incommensurate with their scientific contributions (Zen, 1988). Some scientists have requested or been given authorship as a form of recognition of their status or influence rather than their intellectual contribution. Some research leaders have a custom of including their own names in any paper issuing from their laboratory, although this practice is increasingly discouraged. Some students or junior staff encourage such “gift authorship” because they feel that the inclusion of prestigious names on their papers increases the chance of publication in well-known journals. In some cases, noncontributing authors have been listed without their consent, or even without their being told. In response to these practices, some journals now require all named authors to sign the letter that accompanies submission of the original article, to ensure that no author is named without consent.

“Specialized” authorship is another issue that has received increasing attention. In these cases, a co-author may claim responsibility for a specialized portion of the paper and may not even see or be able to defend the paper as a whole. 19 “Specialized” authorship may also result from demands that co-authorship be given as a condition of sharing a unique research reagent or selected data that do not constitute a major contribution—demands that many scientists believe are inappropriate. “Specialized” authorship may be appropriate in cross-disciplinary collaborations, in which each participant has made an important contribution that deserves recognition. However, the risks associated with the inabilities of co-authors to vouch for the integrity of an entire paper are great; scientists may unwittingly become associated with a discredited publication.

Another problem of lesser importance, except to the scientists involved, is the order of authors listed on a paper. The meaning of author order varies among and within disciplines. For example, in physics the ordering of authors is frequently alphabetical, whereas in the social sciences and other fields, the ordering reflects a descending order of contribution to the described research. Another practice, common in biology, is to list the senior author last.

Appropriate recognition for the contributions of junior investigators, postdoctoral fellows, and graduate students is sometimes a source of discontent and unease in the contemporary research environment. Junior researchers have raised concerns about treatment of their contributions when research papers are prepared and submitted, particularly if they are attempting to secure promotions or independent research funding or if they have left the original project. In some cases, well-meaning senior scientists may grant junior colleagues undeserved authorship or placement as a means of enhancing the junior colleague's reputation. In others, significant contributions may not receive appropriate recognition.

Authorship practices are further complicated by large-scale projects, especially those that involve specialized contributions. Mission teams for space probes, oceanographic expeditions, and projects in high-energy physics, for example, all involve large numbers of senior scientists who depend on the long-term functioning of complex equipment. Some questions about communication and publication that arise from large science projects such as the Superconducting Super Collider include: Who decides when an experiment is ready to be published? How is the spokesperson for the experiment determined? Who determines who can give talks on the experiment? How should credit for technical or hardware contributions be acknowledged?

Apart from plagiarism, problems of authorship and credit allocation usually do not involve misconduct in science. Although some forms of “gift authorship,” in which a designated author made no identifiable contribution to a paper, may be viewed as instances of falsification, authorship disputes more commonly involve unresolved differences of judgment and style. Many research groups have found that the best method of resolving authorship questions is to agree on a designation of authors at the outset of the project. The negotiation and decision process provides initial recognition of each member's effort, and it may prevent misunderstandings that can arise during the course of the project when individuals may be in transition to new efforts or may become preoccupied with other matters.

Plagiarism. Plagiarism is using the ideas or words of another person without giving appropriate credit. Plagiarism includes the unacknowledged use of text and ideas from published work, as well as the misuse of privileged information obtained through confidential review of research proposals and manuscripts.

As described in Honor in Science, plagiarism can take many forms: at one extreme is the exact replication of another's writing without appropriate attribution (Sigma Xi, 1986). At the other is the more subtle “borrowing” of ideas, terms, or paraphrases, as described by Martin et al., “so that the result is a mosaic of other people's ideas and words, the writer's sole contribution being the cement to hold the pieces together.” 20 The importance of recognition for one's intellectual abilities in science demands high standards of accuracy and diligence in ensuring appropriate recognition for the work of others.

The misuse of privileged information may be less clear-cut because it does not involve published work. But the general principles of the importance of giving credit to the accomplishments of others are the same. The use of ideas or information obtained from peer review is not acceptable because the reviewer is in a privileged position. Some organizations, such as the American Chemical Society, have adopted policies to address these concerns (ACS, 1986).

Additional Concerns. Other problems related to authorship include overspecialization, overemphasis on short-term projects, and the organization of research communication around the “least publishable unit.” In a research system that rewards quantity at the expense of quality and favors speed over attention to detail (the effects of “publish or perish”), scientists who wait until their research data are complete before releasing them for publication may be at a disadvantage. Some institutions, such as Harvard Medical School, have responded to these problems by limiting the number of publications reviewed for promotion. Others have placed greater emphasis on major contributions as the basis for evaluating research productivity.

As gatekeepers of scientific journals, editors are expected to use good judgment and fairness in selecting papers for publication. Although editors cannot be held responsible for the errors or inaccuracies of papers that may appear in their journals, editors have obligations to consider criticism and evidence that might contradict the claims of an author and to facilitate publication of critical letters, errata, or retractions. 21 Some institutions, including the National Library of Medicine and professional societies that represent editors of scientific journals, are exploring the development of standards relevant to these obligations (Bailar et al., 1990).

Should questions be raised about the integrity of a published work, the editor may request an author's institution to address the matter. Editors often request written assurances that research reported conforms to all appropriate guidelines involving human or animal subjects, materials of human origin, or recombinant DNA.

In theory, editors set standards of authorship for their journals. In practice, scientists in the specialty do. Editors may specify the terms of acknowledgment of contributors who fall short of authorship status, and make decisions regarding appropriate forms of disclosure of sources of bias or other potential conflicts of interest related to published articles. For example, the New England Journal of Medicine has established a category of prohibited contributions from authors engaged in for-profit ventures: the journal will not allow such persons to prepare review articles or editorial commentaries for publication. Editors can clarify and insist on the confidentiality of review and take appropriate actions against reviewers who violate it. Journals also may require or encourage their authors to deposit reagents and sequence and crystallographic data into appropriate databases or storage facilities. 22

Peer Review

Peer review is the process by which editors and journals seek to be advised by knowledgeable colleagues about the quality and suitability of a manuscript for publication in a journal. Peer review is also used by funding agencies to seek advice concerning the quality and promise of proposals for research support. The proliferation of research journals and the rewards associated with publication and with obtaining research grants have put substantial stress on the peer review system. Reviewers for journals or research agencies receive privileged information and must exert great care to avoid sharing such information with colleagues or allowing it to enter their own work prematurely.

Although the system of peer review is generally effective, it has been suggested that the quality of refereeing has declined, that self-interest has crept into the review process, and that some journal editors and reviewers exert inappropriate influence on the type of work they deem publishable. 23

Correction of Errors

At some level, all scientific reports, even those that mark profound advances, contain errors of fact or interpretation. In part, such errors reflect uncertainties intrinsic to the research process itself—a hypothesis is formulated, an experimental test is devised, and based on the interpretation of the results, the hypothesis is refined, revised, or discarded. Each step in this cycle is subject to error. For any given report, “correctness” is limited by the following:

The precision and accuracy of the measurements. These in turn depend on available technology, the use of proper statistical and analytical methods, and the skills of the investigator.

Generality of the experimental system and approach. Studies must often be carried out using “model systems.” In biology, for example, a given phenomenon is examined in only one or a few among millions of organismal species.

Experimental design—a product of the background and expertise of the investigator.

Interpretation and speculation regarding the significance of the findings—judgments that depend on expert knowledge, experience, and the insightfulness and boldness of the investigator.

Viewed in this context, errors are an integral aspect of progress in attaining scientific knowledge. They are consequences of the fact that scientists seek fundamental truths about natural processes of vast complexity. In the best experimental systems, it is common that relatively few variables have been identified and that even fewer can be controlled experimentally. Even when important variables are accounted for, the interpretation of the experimental results may be incorrect and may lead to an erroneous conclusion. Such conclusions are sometimes overturned by the original investigator or by others when new insights from another study prompt a reexamination of older reported data. In addition, however, erroneous information can also reach the scientific literature as a consequence of misconduct in science.

What becomes of these errors or incorrect interpretations? Much has been made of the concept that science is “self-correcting”—that errors, whether honest or products of misconduct, will be exposed in future experiments because scientific truth is founded on the principle that results must be verifiable and reproducible. This implies that errors will generally not long confound the direction of thinking or experimentation in actively pursued areas of research. Clearly, published experiments are not routinely replicated precisely by independent investigators. However, each experiment is based on conclusions from prior studies; repeated failure of the experiment eventually calls into question those conclusions and leads to reevaluation of the measurements, generality, design, and interpretation of the earlier work.

Thus publication of a scientific report provides an opportunity for the community at large to critique and build on the substance of the report, and serves as one stage at which errors and misinterpretations can be detected and corrected. Each new finding is considered by the community in light of what is already known about the system investigated, and disagreements with established measurements and interpretations must be justified. For example, a particular interpretation of an electrical measurement of a material may implicitly predict the results of an optical experiment. If the reported optical results are in disagreement with the electrical interpretation, then the latter is unlikely to be correct—even though the measurements themselves were carefully and correctly performed. It is also possible, however, that the contradictory results are themselves incorrect, and this possibility will also be evaluated by the scientists working in the field. It is by this process of examination and reexamination that science advances.

The research endeavor can therefore be viewed as a two-tiered process: first, hypotheses are formulated, tested, and modified; second, results and conclusions are reevaluated in the course of additional study. In fact, the two tiers are interrelated, and the goals and traditions of science mandate major responsibilities in both areas for individual investigators. Importantly, the principle of self-correction does not diminish the responsibilities of the investigator in either area. The investigator has a fundamental responsibility to ensure that the reported results can be replicated in his or her laboratory. The scientific community in general adheres strongly to this principle, but practical constraints exist as a result of the availability of specialized instrumentation, research materials, and expert personnel. Other forces, such as competition, commercial interest, funding trends and availability, or pressure to publish may also erode the role of replication as a mechanism for fostering integrity in the research process. The panel is unaware of any quantitative studies of this issue.

The process of reevaluating prior findings is closely related to the formulation and testing of hypotheses. 24 Indeed, within an individual laboratory, the formulation/testing phase and the reevaluation phase are ideally ongoing interactive processes. In that setting, the precise replication of a prior result commonly serves as a crucial control in attempts to extend the original findings. It is not unusual that experimental flaws or errors of interpretation are revealed as the scope of an investigation deepens and broadens.

If new findings or significant questions emerge in the course of a reevaluation that affect the claims of a published report, the investigator is obliged to make public a correction of the erroneous result or to indicate the nature of the questions. Occasionally, this takes the form of a formal published retraction, especially in situations in which a central claim is found to be fundamentally incorrect or irreproducible. More commonly, a somewhat different version of the original experiment, or a revised interpretation of the original result, is published as part of a subsequent report that extends in other ways the initial work. Some concerns have been raised that such “revisions” can sometimes be so subtle and obscure as to be unrecognizable. Such behavior is, at best, a questionable research practice. Clearly, each scientist has a responsibility to foster an environment that encourages and demands rigorous evaluation and reevaluation of every key finding.

Much greater complexity is encountered when an investigator in one research group is unable to confirm the published findings of another. In such situations, precise replication of the original result is commonly not attempted because of the lack of identical reagents, differences in experimental protocols, diverse experimental goals, or differences in personnel. Under these circumstances, attempts to obtain the published result may simply be dropped if the central claim of the original study is not the major focus of the new study. Alternatively, the inability to obtain the original finding may be documented in a paper by the second investigator as part of a challenge to the original claim. In any case, such questions about a published finding usually provoke the initial investigator to attempt to reconfirm the original result, or to pursue additional studies that support and extend the original findings.

In accordance with established principles of science, scientists have the responsibility to replicate and reconfirm their results as a normal part of the research process. The cycles of theoretical and methodological formulation, testing, and reevaluation, both within and between laboratories, produce an ongoing process of revision and refinement that corrects errors and strengthens the fabric of research.

Research Training and Mentorship

The panel defined a mentor as that person directly responsible for the professional development of a research trainee. 25 Professional development includes both technical training, such as instruction in the methods of scientific research (e.g., research design, instrument use, and selection of research questions and data), and socialization in basic research practices (e.g., authorship practices and sharing of research data).

Positive Aspects of Mentorship

The relationship of the mentor and research trainee is usually characterized by extraordinary mutual commitment and personal involvement. A mentor, as a research advisor, is generally expected to supervise the work of the trainee and ensure that the trainee's research is completed in a sound, honest, and timely manner. The ideal mentor challenges the trainee, spurs the trainee to higher scientific achievement, and helps socialize the trainee into the community of scientists by demonstrating and discussing methods and practices that are not well understood.

Research mentors thus have complex and diverse roles. Many individuals excel in providing guidance and instruction as well as personal support, and some mentors are resourceful in providing funds and securing professional opportunities for their trainees. The mentoring relationship may also combine elements of other relationships, such as parenting, coaching, and guildmastering. One mentor has written that his “research group is like an extended family or small tribe, dependent on one another, but led by the mentor, who acts as their consultant, critic, judge, advisor, and scientific father” (Cram, 1989, p. 1). Another mentor described as “orphaned graduate students” trainees who had lost their mentors to death, job changes, or in other ways (Sindermann, 1987). Many students come to respect and admire their mentors, who act as role models for their younger colleagues.

Difficulties Associated with Mentorship

However, the mentoring relationship does not always function properly or even satisfactorily. Almost no literature exists that evaluates which problems are idiosyncratic and which are systemic. However, it is clear that traditional practices in the area of mentorship and training are under stress. In some research fields, for example, concerns are being raised about how the increasing size and diverse composition of research groups affect the quality of the relationship between trainee and mentor. As the size of research laboratories expands, the quality of the training environment is at risk (CGS, 1990a).

Large laboratories may provide valuable instrumentation and access to unique research skills and resources as well as an opportunity to work in pioneering fields of science. But as only one contribution to the efforts of a large research team, a graduate student's work may become highly specialized, leading to a narrowing of experience and greater dependency on senior personnel; in a period when the availability of funding may limit research opportunities, laboratory heads may find it necessary to balance research decisions for the good of the team against the individual educational interests of each trainee. Moreover, the demands of obtaining sufficient resources to maintain a laboratory in the contemporary research environment often separate faculty from their trainees. When laboratory heads fail to participate in the everyday workings of the laboratory—even for the most beneficent of reasons, such as finding funds to support young investigators—their inattention may harm their trainees' education.

Although the size of a research group can influence the quality of mentorship, the more important issues are the level of supervision received by trainees, the degree of independence that is appropriate for the trainees' experience and interests, and the allocation of credit for achievements that are accomplished by groups composed of individuals with different status. Certain studies involving large groups of 40 to 100 or more are commonly carried out by collaborative or hierarchical arrangements under a single investigator. These factors may affect the ability of research mentors to transmit the methods and ethical principles according to which research should be conducted.

Problems also arise when faculty members are not directly rewarded for their graduate teaching or training skills. Although faculty may receive indirect rewards from the contributions of well-trained graduate students to their own research as well as the satisfaction of seeing their students excelling elsewhere, these rewards may not be sufficiently significant in tenure or promotion decisions. When institutional policies fail to recognize and reward the value of good teaching and mentorship, the pressures to maintain stable funding for research teams in a competitive environment can overwhelm the time allocated to teaching and mentorship by a single investigator.

The increasing duration of the training period in many research fields is another source of concern, particularly when it prolongs the dependent status of the junior investigator. The formal period of graduate and postdoctoral training varies considerably among fields of study. In 1988, the median time to the doctorate from the baccalaureate degree was 6.5 years (NRC, 1989). The disciplinary median varied: 5.5 years in chemistry; 5.9 years in engineering; 7.1 years in health sciences and in earth, atmospheric, and marine sciences; and 9.0 years in anthropology and sociology. 26

Students, research associates, and faculty are currently raising various questions about the rights and obligations of trainees. Sexist behavior by some research directors and other senior scientists is a particular source of concern. Another significant concern is that research trainees may be subject to exploitation because of their subordinate status in the research laboratory, particularly when their income, access to research resources, and future recommendations are dependent on the goodwill of the mentor. Foreign students and postdoctoral fellows may be especially vulnerable, since their immigration status often depends on continuation of a research relationship with the selected mentor.

Inequalities between mentor and trainee can exacerbate ordinary conflicts such as the distribution of credit or blame for research error (NAS, 1989). When conflicts arise, the expectations and assumptions that govern authorship practices, ownership of intellectual property, and the giving of references and recommendations are exposed for professional—and even legal—scrutiny (Nelkin, 1984; Weil and Snapper, 1989).

Making Mentorship Better

Ideally, mentors and trainees should select each other with an eye toward scientific merit, intellectual and personal compatibility, and other relevant factors. But this situation operates only under conditions of freely available information and unconstrained choice—conditions that usually do not exist in academic research groups. The trainee may choose to work with a faculty member based solely on criteria of patronage, perceived influence, or ability to provide financial support.

Good mentors may be well known and highly regarded within their research communities and institutions. Unfortunately, individuals who exploit the mentorship relationship may be less visible. Poor mentorship practices may be self-correcting over time, if students can detect and avoid research groups characterized by disturbing practices. However, individual trainees who experience abusive relationships with a mentor may discover only too late that the practices that constitute the abuse were well known but were not disclosed to new initiates.

It is common practice for a graduate student to be supervised not only by an individual mentor but also by a committee that represents the graduate department or research field of the student. However, departmental oversight is rare for the postdoctoral research fellow. In order to foster good mentorship practices for all research trainees, many groups and institutions have taken steps to clarify the nature of individual and institutional responsibilities in the mentor–trainee relationship. 27

Conclusions
The self-regulatory system that characterizes the research process has evolved from a diverse set of principles, traditions, standards, and customs transmitted from senior scientists, research directors, and department chairs to younger scientists by example, discussion, and informal education. The principles of honesty, collegiality, respect for others, and commitment to dissemination, critical evaluation, and rigorous training are characteristic of all the sciences. Methods and techniques of experimentation, styles of communicating findings, the relationship between theory and experimentation, and laboratory groupings for research and for training vary with the particular scientific disciplines. Within those disciplines, practices combine the general with the specific. Ideally, research practices reflect the values of the wider research community and also embody the practical skills needed to conduct scientific research.

Practicing scientists are guided by the principles of science and the standard practices of their particular scientific discipline as well as their personal moral principles. But conflicts are inherent among these principles. For example, loyalty to one's group of colleagues can be in conflict with the need to correct or report an abuse of scientific practice on the part of a member of that group.

Because scientists and the achievements of science have earned the respect of society at large, the behavior of scientists must accord not only with the expectations of scientific colleagues, but also with those of a larger community. As science becomes more closely linked to economic and political objectives, the processes by which scientists formulate and adhere to responsible research practices will be subject to increasing public scrutiny. This is one reason for scientists and research institutions to clarify and strengthen the methods by which they foster responsible research practices.

Accordingly, the panel emphasizes the following conclusions:

  • The panel believes that the existing self-regulatory system in science is sound. But modifications are necessary to foster integrity in a changing research environment, to handle cases of misconduct in science, and to discourage questionable research practices.
  • Individual scientists have a fundamental responsibility to ensure that their results are reproducible, that their research is reported thoroughly enough that others can reproduce the results, and that significant errors are corrected when they are recognized. Editors of scientific journals share these last two responsibilities.
  • Research mentors, laboratory directors, department heads, and senior faculty are responsible for defining, explaining, exemplifying, and requiring adherence to the value systems of their institutions. The neglect of sound training in a mentor's laboratory will over time compromise the integrity of the research process.
  • Administrative officials within the research institution also bear responsibility for ensuring that good scientific practices are observed in units of appropriate jurisdiction and that balanced reward systems appropriately recognize research quality, integrity, teaching, and mentorship. Adherence to scientific principles and disciplinary standards is at the root of a vital and productive research environment.
  • At present, scientific principles are passed on to trainees primarily by example and discussion, including training in customary practices. Most research institutions do not have explicit programs of instruction and discussion to foster responsible research practices, but the communication of values and traditions is critical to fostering responsible research practices and deterring misconduct in science.
  • Efforts to foster responsible research practices in areas such as data handling, communication and publication, and research training and mentorship deserve encouragement by the entire research community. Problems have also developed in these areas that require explicit attention and correction by scientists and their institutions. If not properly resolved, these problems may weaken the integrity of the research process.

1. See, for example, Kuyper (1991).

2. See, for example, the proposal by Pigman and Carmichael (1950).

3. See, for example, Holton (1988) and Ravetz (1971).

4. Several excellent books on experimental design and statistical methods are available. See, for example, Wilson (1952) and Beveridge (1957).

5. For a somewhat dated review of codes of ethics adopted by the scientific and engineering societies, see Chalk et al. (1981).

6. The discussion in this section is derived from Mark Frankel's background paper, “Professional Societies and Responsible Research Conduct,” included in Volume II of this report.

7. For a broader discussion on this point, see Zuckerman (1977).

8. For a full discussion of the roles of scientific societies in fostering responsible research practices, see the background paper prepared by Mark Frankel, “Professional Societies and Responsible Research Conduct,” in Volume II of this report.

9. Selected examples of academic research conduct policies and guidelines are included in Volume II of this report.

10. See, for example, Holton's response to the criticisms of Millikan in Chapter 12 of Thematic Origins of Scientific Thought (Holton, 1988). See also Holton (1978).

11. See, for example, responses to the Proceedings of the National Academy of Sciences action against Friedman: Hamilton (1990) and Abelson et al. (1990). See also the discussion in Bailar et al. (1990).

12. Much of the discussion in this section is derived from a background paper, “Reflections on the Current State of Data and Reagent Exchange Among Biomedical Researchers,” prepared by Robert Weinberg and included in Volume II of this report.

13. See, for example, Culliton (1990) and Bradshaw et al. (1990). For the impact of the inability to provide corroborating data or witnesses, also see Ross et al. (1989).

14. See, for example, Rennie (1989) and Cassidy and Shamoo (1989).

15. See, for example, the discussion on random data audits in Institute of Medicine (1989a), pp. 26-27.

16. For a full discussion of the practices and policies that govern authorship in the biological sciences, see Bailar et al. (1990).

17. Note that these general guidelines exclude the provision of reagents or facilities or the supervision of research as a criterion of authorship.

18. A full discussion of problematic practices in authorship is included in Bailar et al. (1990). A controversial review of the responsibilities of co-authors is presented by Stewart and Feder (1987).

19. In the past, scientific papers often included a special note by a named researcher, not a co-author of the paper, who described, for example, a particular substance or procedure in a footnote or appendix. This practice seems to have been abandoned for reasons that are not well understood.

20. Martin et al. (1969), as cited in Sigma Xi (1986), p. 41.

21. Huth (1988) suggests a “notice of fraud or notice of suspected fraud” issued by the journal editor to call attention to the controversy (p. 38). Angell (1983) advocates closer coordination between institutions and editors when institutions have ascertained misconduct.

22. Such facilities include Cambridge Crystallographic Data Base, GenBank at Los Alamos National Laboratory, the American Type Culture Collection, and the Protein Data Bank at Brookhaven National Laboratory. Deposition is important for data that cannot be directly printed because of large volume.

23. For more complete discussions of peer review in the wider context, see, for example, Cole et al. (1977) and Chubin and Hackett (1990).

24. The strength of theories as sources of the formulation of scientific laws and predictive power varies among different fields of science. For example, theories derived from observations in the field of evolutionary biology lack a great deal of predictive power. The role of chance in mutation and natural selection is great, and the future directions that evolution may take are essentially impossible to predict. Theory has enormous power for clarifying understanding of how evolution has occurred and for making sense of detailed data, but its predictive power in this field is very limited. See, for example, Mayr (1982, 1988).

25. Much of the discussion on mentorship is derived from a background paper prepared for the panel by David Guston. A copy of the full paper, “Mentorship and the Research Training Experience,” is included in Volume II of this report.

26. Although the time to the doctorate is increasing, there is some evidence that the magnitude of the increase may be affected by the organization of the cohort chosen for study. In the humanities, the increased time to the doctorate is not as large if one chooses as an organizational base the year in which the baccalaureate was received by Ph.D. recipients, rather than the year in which the Ph.D. was completed; see Bowen et al. (1991).

27. Some universities have written guidelines for the supervision or mentorship of trainees as part of their institutional research policy guidelines (see, for example, the guidelines adopted by Harvard University and the University of Michigan that are included in Volume II of this report). Other groups or institutions have written “guidelines” (IOM, 1989a; NIH, 1990), “checklists” (CGS, 1990a), and statements of “areas of concern” and suggested “devices” (CGS, 1990c).

The guidelines often affirm the need for regular, personal interaction between the mentor and the trainee. They indicate that mentors may need to limit the size of their laboratories so that they are able to interact directly and frequently with all of their trainees. Although there are many ways to ensure responsible mentorship, methods that provide continuous feedback, whether through formal or informal mechanisms, are apt to be the most successful (CGS, 1990a). Departmental mentorship awards (comparable to teaching or research prizes) can recognize, encourage, and enhance the mentoring relationship. For other discussions on mentorship, see the paper by David Guston in Volume II of this report.

One group convened by the Institute of Medicine has suggested “that the university has a responsibility to ensure that the size of a research unit does not outstrip the mentor's ability to maintain adequate supervision” (IOM, 1989a, p. 85). Others have noted that although it may be desirable to limit the number of trainees assigned to a senior investigator, there is insufficient information at this time to suggest that numbers alone significantly affect the quality of research supervision (IOM, 1989a, p. 33).

Cite this page: National Academy of Sciences (US), National Academy of Engineering (US), and Institute of Medicine (US) Panel on Scientific Responsibility and the Conduct of Research. Responsible Science: Ensuring the Integrity of the Research Process: Volume I. Washington (DC): National Academies Press (US); 1992. Chapter 2, Scientific Principles and Research Practices.



Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs. quantitative : Will your data take the form of words or numbers?
  • Primary vs. secondary : Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analyzing data
  • Examples of data analysis methods
  • Other interesting articles
  • Frequently asked questions about research methods

Data is the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

You can also take a mixed methods approach , where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data . But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Descriptive vs. experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
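As a minimal sketch of this logic, with invented group names and numbers: in a two-group experiment, the independent variable is the group assignment and the dependent variable is the measured outcome. A real analysis would add a significance test (e.g. a t-test) and checks for confounds.

```python
# Hypothetical two-group experiment: group assignment is the
# independent variable, the measured score is the dependent variable.
# All numbers here are invented for illustration.
from statistics import mean

control = [4.1, 3.8, 4.0, 4.3, 3.9]    # no intervention
treatment = [5.0, 4.7, 5.2, 4.9, 5.1]  # intervention applied

# The simplest quantitative summary: the difference between group means.
effect = mean(treatment) - mean(control)
print(round(effect, 2))
```

On its own, a mean difference says nothing about chance variation; that is what experimental design and statistical testing add.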


Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
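For the quantitative side of that survey example, a frequency count is the simplest analysis. A hedged sketch with invented responses, using only the standard library:

```python
# Hypothetical survey responses, summarized quantitatively as
# response frequencies. Categories and data are invented.
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
freq = Counter(responses)
print(freq.most_common())  # frequency table, most common first
```

A qualitative analysis of the same data would instead examine what respondents meant by each answer, which no frequency table captures.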

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews , literature reviews , case studies , ethnographies , and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias .

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment .
  • Using probability sampling methods .

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.
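As a minimal sketch of the kind of correlation analysis mentioned above, computed by hand from invented data (a real analysis would typically use a package such as scipy or pandas, and would also report a significance test):

```python
# Hypothetical data: does study time relate to exam score?
# Both variables and values are invented for illustration.
from statistics import mean

hours = [2, 4, 6, 8, 10]
scores = [55, 60, 70, 78, 90]

# Pearson correlation coefficient, computed from its definition.
mx, my = mean(hours), mean(scores)
cov = sum((x - mx) * (y - my) for x, y in zip(hours, scores))
var_x = sum((x - mx) ** 2 for x in hours)
var_y = sum((y - my) ** 2 for y in scores)
r = cov / (var_x * var_y) ** 0.5
print(round(r, 3))  # close to 1 for this made-up, strongly linear data
```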


If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
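The student-survey example above can be sketched as a simple random sample. The population list and seed here are invented; the point is only that each individual has an equal chance of selection:

```python
# Hypothetical sketch: drawing a simple random sample of 100 students
# from an invented population list, using the standard library.
import random

population = [f"student_{i}" for i in range(5000)]  # invented roster
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, k=100)  # sampling without replacement
print(len(sample), len(set(sample)))  # 100 distinct students
```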

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Grad Coach

Research Philosophy & Paradigms

Positivism, Interpretivism & Pragmatism, Explained Simply

By: Derek Jansen (MBA) | Reviewer: Eunice Rautenbach (DTech) | June 2023

Research philosophy is one of those things that students tend to either gloss over or become utterly confused by when undertaking formal academic research for the first time. And understandably so – it’s all rather fluffy and conceptual. However, understanding the philosophical underpinnings of your research is genuinely important as it directly impacts how you develop your research methodology.

In this post, we’ll explain what research philosophy is , what the main research paradigms  are and how these play out in the real world, using loads of practical examples . To keep this all as digestible as possible, we are admittedly going to simplify things somewhat and we’re not going to dive into the finer details such as ontology, epistemology and axiology (we’ll save those brain benders for another post!). Nevertheless, this post should set you up with a solid foundational understanding of what research philosophy and research paradigms are, and what they mean for your project.

Overview: Research Philosophy

  • What is a research philosophy or paradigm ?
  • Positivism 101
  • Interpretivism 101
  • Pragmatism 101
  • Choosing your research philosophy

What is a research philosophy or paradigm?

Research philosophy and research paradigm are terms that tend to be used pretty loosely, even interchangeably. Broadly speaking, they both refer to the set of beliefs, assumptions, and principles that underlie the way you approach your study (whether that’s a dissertation, thesis or any other sort of academic research project).

For example, one philosophical assumption could be that there is an external reality that exists independent of our perceptions (i.e., an objective reality), whereas an alternative assumption could be that reality is constructed by the observer (i.e., a subjective reality). Naturally, these assumptions have quite an impact on how you approach your study (more on this later…).

The research philosophy and research paradigm also encapsulate the nature of the knowledge that you seek to obtain by undertaking your study. In other words, your philosophy reflects what sort of knowledge and insight you believe you can realistically gain by undertaking your research project. For example, you might expect to find a concrete, absolute type of answer to your research question , or you might anticipate that things will turn out to be more nuanced and less directly calculable and measurable . Put another way, it’s about whether you expect “hard”, clean answers or softer, more opaque ones.

So, what’s the difference between research philosophy and paradigm?

Well, it depends on who you ask. Different textbooks will present slightly different definitions, with some saying that philosophy is about the researcher themselves while the paradigm is about the approach to the study . Others will use the two terms interchangeably. And others will say that the research philosophy is the top-level category and paradigms are the pre-packaged combinations of philosophical assumptions and expectations.

To keep things simple in this post, we’ll avoid getting tangled up in the terminology and rather focus on the shared focus of both these terms – that is that they both describe (or at least involve) the set of beliefs, assumptions, and principles that underlie the way you approach your study .

Importantly, your research philosophy and/or paradigm form the foundation of your study . More specifically, they will have a direct influence on your research methodology , including your research design , the data collection and analysis techniques you adopt, and of course, how you interpret your results. So, it’s important to understand the philosophy that underlies your research to ensure that the rest of your methodological decisions are well-aligned .

Research philosophy describes the set of beliefs, assumptions, and principles that underlie the way you approach your study.

So, what are the options?

We’ll be straight with you – research philosophy is a rabbit hole (as with anything philosophy-related) and, as a result, there are many different approaches (or paradigms) you can take, each with its own perspective on the nature of reality and knowledge . To keep things simple though, we’ll focus on the “big three”, namely positivism , interpretivism and pragmatism . Understanding these three is a solid starting point and, in many cases, will be all you need.

Paradigm 1: Positivism

When you think positivism, think hard sciences – physics, biology, astronomy, etc. Simply put, positivism is rooted in the belief that knowledge can be obtained through objective observations and measurements . In other words, the positivist philosophy assumes that answers can be found by carefully measuring and analysing data, particularly numerical data .

As a research paradigm, positivism typically manifests in methodologies that make use of quantitative data , and oftentimes (but not always) adopt experimental or quasi-experimental research designs. Quite often, the focus is on causal relationships – in other words, understanding which variables affect other variables, in what way and to what extent. As a result, studies with a positivist research philosophy typically aim for objectivity, generalisability and replicability of findings.

Let’s look at an example of positivism to make things a little more tangible.

Assume you wanted to investigate the relationship between a particular dietary supplement and weight loss. In this case, you could design a randomised controlled trial (RCT) where you assign participants to either a control group (who do not receive the supplement) or an intervention group (who do receive the supplement). With this design in place, you could measure each participant’s weight before and after the study and then use various quantitative analysis methods to assess whether there’s a statistically significant difference in weight loss between the two groups. By doing so, you could infer a causal relationship between the dietary supplement and weight loss, based on objective measurements and rigorous experimental design.

As you can see in this example, the underlying assumptions and beliefs revolve around the viewpoint that knowledge and insight can be obtained through carefully controlling the environment, manipulating variables and analysing the resulting numerical data . Therefore, this sort of study would adopt a positivistic research philosophy. This is quite common for studies within the hard sciences – so much so that research philosophy is often just assumed to be positivistic and there’s no discussion of it within the methodology section of a dissertation or thesis.

Positivism is rooted in the belief that knowledge can be obtained through objective observations and measurements of an external reality.

Paradigm 2: Interpretivism

 If you can imagine a spectrum of research paradigms, interpretivism would sit more or less on the opposite side of the spectrum from positivism. Essentially, interpretivism takes the position that reality is socially constructed . In other words, that reality is subjective , and is constructed by the observer through their experience of it , rather than being independent of the observer (which, if you recall, is what positivism assumes).

The interpretivist paradigm typically underlies studies where the research aims involve attempting to understand the meanings and interpretations that people assign to their experiences. An interpretivistic philosophy also typically manifests in the adoption of a qualitative methodology , relying on data collection methods such as interviews , observations , and textual analysis . These types of studies commonly explore complex social phenomena and individual perspectives, which are naturally more subjective and nuanced.

Let’s look at an example of the interpretivist approach in action:

Assume that you’re interested in understanding the experiences of individuals suffering from chronic pain. In this case, you might conduct in-depth interviews with a group of participants and ask open-ended questions about their pain, its impact on their lives, coping strategies, and their overall experience and perceptions of living with pain. You would then transcribe those interviews and analyse the transcripts, using thematic analysis to identify recurring themes and patterns. Based on that analysis, you’d be able to better understand the experiences of these individuals, thereby satisfying your original research aim.

As you can see in this example, the underlying assumptions and beliefs revolve around the viewpoint that insight can be obtained through engaging in conversation with and exploring the subjective experiences of people (as opposed to collecting numerical data and trying to measure and calculate it). Therefore, this sort of study would adopt an interpretivistic research philosophy. Ultimately, if you’re looking to understand people’s lived experiences , you have to operate on the assumption that knowledge can be generated by exploring people’s viewpoints, as subjective as they may be.
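Even in an interpretivist study, some bookkeeping is mechanical. A deliberately crude sketch, with invented themes and transcript snippets, of a first pass at thematic coding (real thematic analysis is interpretive work done by human coders; this only illustrates the tallying):

```python
# Hypothetical first-pass theme tally over interview transcripts.
# Theme names, keywords, and transcripts are all invented.
themes = {
    "coping": ["cope", "coping", "manage"],
    "impact": ["work", "sleep", "family"],
}
transcripts = [
    "I manage the pain but it affects my sleep and my work.",
    "Coping is hard; my family helps me cope day to day.",
]

counts = {theme: 0 for theme in themes}
for text in transcripts:
    # Strip basic punctuation and split into lowercase words.
    words = text.lower().replace(".", "").replace(";", "").replace(",", "").split()
    for theme, keywords in themes.items():
        counts[theme] += sum(words.count(k) for k in keywords)
print(counts)
```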

Interpretivism takes the position that reality is constructed by the observer through their experience of it, rather than being independent.

Paradigm 3: Pragmatism

Now that we’ve looked at the two opposing ends of the research philosophy spectrum – positivism and interpretivism, you can probably see that both of the positions have their merits , and that they both function as tools for different jobs . More specifically, they lend themselves to different types of research aims, objectives and research questions . But what happens when your study doesn’t fall into a clear-cut category and involves exploring both “hard” and “soft” phenomena? Enter pragmatism…

As the name suggests, pragmatism takes a more practical and flexible approach, focusing on the usefulness and applicability of research findings , rather than an all-or-nothing, mutually exclusive philosophical position. This allows you, as the researcher, to explore research aims that cross philosophical boundaries, using different perspectives for different aspects of the study .

With a pragmatic research paradigm, both quantitative and qualitative methods can play a part, depending on the research questions and the context of the study. This often manifests in studies that adopt a mixed-method approach , utilising a combination of different data types and analysis methods. Ultimately, the pragmatist adopts a problem-solving mindset , seeking practical ways to achieve diverse research aims.

Let’s look at an example of pragmatism in action:

Imagine that you want to investigate the effectiveness of a new teaching method in improving student learning outcomes. In this case, you might adopt a mixed-methods approach, which makes use of both quantitative and qualitative data collection and analysis techniques. One part of your project could involve comparing standardised test results from an intervention group (students that received the new teaching method) and a control group (students that received the traditional teaching method). Additionally, you might conduct in-person interviews with a smaller group of students from both groups, to gather qualitative data on their perceptions and preferences regarding the respective teaching methods.

As you can see in this example, the pragmatist’s approach can incorporate both quantitative and qualitative data . This allows the researcher to develop a more holistic, comprehensive understanding of the teaching method’s efficacy and practical implications, with a synthesis of both types of data . Naturally, this type of insight is incredibly valuable in this case, as it’s essential to understand not just the impact of the teaching method on test results, but also on the students themselves!
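The two strands of that mixed-methods study can be sketched side by side. Scores and interview snippets below are invented; a real study would pair a proper statistical test with human thematic coding:

```python
# Hypothetical mixed-methods sketch: quantitative strand (test scores)
# alongside a qualitative strand (interview snippets). Invented data.
from statistics import mean

new_method = [72, 78, 75, 80]     # intervention group scores
traditional = [68, 70, 69, 73]    # control group scores
score_gap = mean(new_method) - mean(traditional)

interviews = [
    "I enjoyed the group work",
    "the pace felt rushed",
    "I enjoyed the discussions",
]
# Crude stand-in for qualitative coding: count positive mentions.
positive = sum("enjoyed" in s for s in interviews)

print(round(score_gap, 2), positive)
```

The synthesis step (relating the score gap to what students actually said) is where the pragmatist's "different tools for different jobs" stance pays off.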

Pragmatism takes a more flexible approach, focusing on the potential usefulness and applicability of the research findings.

Wrapping Up: Philosophies & Paradigms

Now that we’ve unpacked the “big three” research philosophies or paradigms – positivism, interpretivism and pragmatism, hopefully, you can see that research philosophy underlies all of the methodological decisions you’ll make in your study. In many ways, it’s less a case of you choosing your research philosophy and more a case of it choosing you (or at least, being revealed to you), based on the nature of your research aims and research questions .

  • Research philosophies and paradigms encapsulate the set of beliefs, assumptions, and principles that guide the way you, as the researcher, approach your study and develop your methodology.
  • Positivism is rooted in the belief that reality is independent of the observer, and consequently, that knowledge can be obtained through objective observations and measurements.
  • Interpretivism takes the (opposing) position that reality is subjectively constructed by the observer through their experience of it, rather than being an independent thing.
  • Pragmatism attempts to find a middle ground, focusing on the usefulness and applicability of research findings, rather than an all-or-nothing, mutually exclusive philosophical position.

If you’d like to learn more about research philosophy, research paradigms and research methodology more generally, be sure to check out the rest of the Grad Coach blog . Alternatively, if you’d like hands-on help with your research, consider our private coaching service , where we guide you through each stage of the research journey, step by step.

