• USC Libraries
  • Research Guides

Organizing Your Social Sciences Research Paper

6. The Methodology

The methods section describes the actions taken to investigate a research problem and the rationale for applying the specific procedures or techniques used to identify, select, process, and analyze information relevant to understanding the problem, thereby allowing the reader to critically evaluate a study's overall validity and reliability. The methodology section of a research paper answers two main questions: How was the data collected or generated? And how was it analyzed? The writing should be direct and precise and always written in the past tense.

Kallet, Richard H. "How to Write the Methods Section of a Research Paper." Respiratory Care 49 (October 2004): 1229-1232.

Importance of a Good Methodology Section

You must explain how you obtained and analyzed your results for the following reasons:

  • Readers need to know how the data was obtained because the method you chose affects the results and, by extension, how you interpreted their significance in the discussion section of your paper.
  • Methodology is crucial for any branch of scholarship because an unreliable method produces unreliable results and, as a consequence, undermines the value of your analysis of the findings.
  • In most cases, there are a variety of different methods you can choose to investigate a research problem. The methodology section of your paper should clearly articulate the reasons why you have chosen a particular procedure or technique.
  • The reader wants to know that the data was collected or generated in a way that is consistent with accepted practice in the field of study. For example, if you are using a multiple choice questionnaire, readers need to know that it offered your respondents a reasonable range of answers to choose from.
  • The method must be appropriate to fulfilling the overall aims of the study. For example, you need to ensure that you have a large enough sample size to be able to generalize and make recommendations based upon the findings.
  • The methodology should discuss the problems that were anticipated and the steps you took to prevent them from occurring. For any problems that do arise, you must describe the ways in which they were minimized or why these problems do not meaningfully affect your interpretation of the findings.
  • In the social and behavioral sciences, it is important to always provide sufficient information to allow other researchers to adopt or replicate your methodology. This information is particularly important when a new method has been developed or an innovative use of an existing method is utilized.

Bem, Daryl J. Writing the Empirical Journal Article. Psychology Writing Center. University of Washington; Denscombe, Martyn. The Good Research Guide: For Small-Scale Social Research Projects. 5th edition. Buckingham, UK: Open University Press, 2014; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences. Thousand Oaks, CA: Corwin Press, 2008.

Structure and Writing Style

I.  Groups of Research Methods

There are two main groups of research methods in the social sciences:

  • The empirical-analytical group approaches the study of the social sciences in a manner similar to how researchers study the natural sciences. This type of research focuses on objective knowledge, research questions that can be answered yes or no, and operational definitions of variables to be measured. The empirical-analytical group employs deductive reasoning that uses existing theory as a foundation for formulating hypotheses to be tested. This approach is focused on explanation.
  • The interpretative group of methods is focused on understanding phenomena in a comprehensive, holistic way. Interpretive methods focus on analytically disclosing the meaning-making practices of human subjects [the why, how, or by what means people do what they do], while showing how those practices are arranged so that they can be used to generate observable outcomes. Interpretive methods allow you to recognize your connection to the phenomena under investigation. However, the interpretative group requires careful examination of variables because it focuses more on subjective knowledge.

II.  Content

The introduction to your methodology section should begin by restating the research problem and the assumptions underpinning your study. This is followed by situating the methods you used to gather, analyze, and process information within the overall “tradition” of your field of study and within the particular research design you have chosen to study the problem. If the method you choose lies outside of the tradition of your field [i.e., your review of the literature demonstrates that the method is not commonly used], provide a justification for how your choice of methods specifically addresses the research problem in ways that have not been utilized in prior studies.

The remainder of your methodology section should describe the following:

  • Decisions made in selecting the data you have analyzed or, in the case of qualitative research, the subjects and research setting you have examined,
  • Tools and methods used to identify and collect information, and how you identified relevant variables,
  • The ways in which you processed the data and the procedures you used to analyze that data, and
  • The specific research tools or strategies that you utilized to study the underlying hypothesis and research questions.

In addition, an effectively written methodology section should:

  • Introduce the overall methodological approach for investigating your research problem. Is your study qualitative or quantitative or a combination of both (mixed method)? Are you going to take a special approach, such as action research, or a more neutral stance?
  • Indicate how the approach fits the overall research design. Your methods for gathering data should have a clear connection to your research problem. In other words, make sure that your methods will actually address the problem. One of the most common deficiencies found in research papers is that the proposed methodology is not suited to achieving the stated objective of the paper.
  • Describe the specific methods of data collection you are going to use, such as surveys, interviews, questionnaires, observation, or archival research. If you are analyzing existing data, such as a data set or archival documents, describe how it was originally created or gathered and by whom. Also be sure to explain how older data is still relevant to investigating the current research problem.
  • Explain how you intend to analyze your results. Will you use statistical analysis? Will you use specific theoretical perspectives to help you analyze a text or explain observed behaviors? Describe how you plan to obtain an accurate assessment of relationships, patterns, trends, distributions, and possible contradictions found in the data.
  • Provide background and a rationale for methodologies that are unfamiliar to your readers. Very often in the social sciences, research problems and the methods for investigating them require more explanation and rationale than the widely accepted methods governing the natural and physical sciences. Be clear and concise in your explanation.
  • Provide a justification for subject selection and sampling procedure. For instance, if you propose to conduct interviews, how do you intend to select the sample population? If you are analyzing texts, which texts have you chosen, and why? If you are using statistics, why is this set of data being used? If other data sources exist, explain why the data you chose is most appropriate to addressing the research problem.
  • Provide a justification for case study selection. A common method of analyzing research problems in the social sciences is to analyze specific cases. These can be a person, place, event, phenomenon, or other type of subject of analysis, examined either as a singular topic of in-depth investigation or as multiple topics studied for the purpose of comparing or contrasting findings. In either approach, you should explain why a case or cases were chosen and how they specifically relate to the research problem.
  • Describe potential limitations. Are there any practical limitations that could affect your data collection? How will you attempt to control for potential confounding variables and errors? If your methodology may lead to problems you can anticipate, state this openly and show why pursuing this methodology outweighs the risk of these problems cropping up.
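
Sampling decisions like those described above can also be made concrete and reproducible. The sketch below is purely illustrative: the population of respondent IDs, the seed value, and the sample size are all hypothetical, but it shows how a simple random sample might be drawn and documented so that another researcher could reproduce the exact selection.

```python
import random

# Hypothetical sampling frame: 500 anonymized respondent IDs.
population = [f"respondent_{i:03d}" for i in range(500)]

# Fix the seed so the selection can be reproduced and reported.
random.seed(42)

# Simple random sample of n = 50, drawn without replacement.
sample = random.sample(population, k=50)

print(len(sample))       # 50
print(len(set(sample)))  # 50 (no duplicates: sampling without replacement)
```

Fixing and reporting the seed means the same 50 respondents are selected on every run, which supports the replication goal discussed earlier in this guide.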

NOTE: Once you have written all of the elements of the methods section, subsequent revisions should focus on how to present those elements as clearly and as logically as possible. The description of how you prepared to study the research problem, how you gathered the data, and the protocol for analyzing the data should be organized chronologically. For clarity, when a large amount of detail must be presented, information should be presented in sub-sections according to topic. If necessary, consider using appendices for raw data.

ANOTHER NOTE: If you are conducting a qualitative analysis of a research problem, the methodology section generally requires a more elaborate description of the methods used, as well as an explanation of the processes applied to gathering and analyzing data, than is required for studies using quantitative methods. Because you are the primary instrument for generating the data [e.g., through interviews or observations], the process for collecting that data has a significantly greater impact on producing the findings.

YET ANOTHER NOTE: If your study involves interviews, observations, or other qualitative techniques involving human subjects, you may be required to obtain approval from the university's Office for the Protection of Research Subjects before beginning your research. This is not a common procedure for most undergraduate level student research assignments. However, if your professor states you need approval, you must include a statement in your methods section that you received official endorsement from the office, that adequate informed consent was obtained, and that there was a clear assessment and minimization of risks to participants and to the university. This statement informs the reader that your study was conducted in an ethical and responsible manner. In some cases, the approval notice is included as an appendix to your paper.

III.  Problems to Avoid

Irrelevant Detail

The methodology section of your paper should be thorough but concise. Do not provide any background information that does not directly help the reader understand why a particular method was chosen, how the data was gathered or obtained, and how the data was analyzed in relation to the research problem [note: analyzed, not interpreted! Save how you interpreted the findings for the discussion section]. With this in mind, the page length of your methods section will generally be less than any other section of your paper except the conclusion.

Unnecessary Explanation of Basic Procedures

Remember that you are not writing a how-to guide about a particular method. You should make the assumption that readers possess a basic understanding of how to investigate the research problem on their own and, therefore, you do not have to go into great detail about specific methodological procedures. The focus should be on how you applied a method, not on the mechanics of doing a method. An exception to this rule is if you select an unconventional methodological approach; if this is the case, be sure to explain why this approach was chosen and how it enhances the overall process of discovery.

Problem Blindness

It is almost a given that you will encounter problems when collecting or generating your data, or that gaps will exist in existing data or archival materials. Do not ignore these problems or pretend they did not occur. Often, documenting how you overcame obstacles can form an interesting part of the methodology. It demonstrates to the reader that you can provide a cogent rationale for the decisions you made to minimize the impact of any problems that arose.

Literature Review

Just as the literature review section of your paper provides an overview of sources you have examined while researching a particular topic, the methodology section should cite any sources that informed your choice and application of a particular method [i.e., the choice of a survey should include any citations to the works you used to help construct the survey].

It’s More than Sources of Information!

A description of a research study's method should not be confused with a description of the sources of information. Such a list of sources is useful in and of itself, especially if it is accompanied by an explanation about the selection and use of the sources. The description of the project's methodology complements a list of sources in that it sets forth the organization and interpretation of information emanating from those sources.

Azevedo, L.F. et al. "How to Write a Scientific Paper: Writing the Methods Section." Revista Portuguesa de Pneumologia 17 (2011): 232-238; Blair, Lorrie. “Choosing a Methodology.” In Writing a Graduate Thesis or Dissertation, Teaching Writing Series. (Rotterdam: Sense Publishers, 2016), pp. 49-72; Butin, Dan W. The Education Dissertation: A Guide for Practitioner Scholars. Thousand Oaks, CA: Corwin, 2010; Carter, Susan. Structuring Your Research Thesis. New York: Palgrave Macmillan, 2012; Kallet, Richard H. “How to Write the Methods Section of a Research Paper.” Respiratory Care 49 (October 2004): 1229-1232; Lunenburg, Frederick C. Writing a Successful Thesis or Dissertation: Tips and Strategies for Students in the Social and Behavioral Sciences. Thousand Oaks, CA: Corwin Press, 2008; Methods Section. The Writer’s Handbook. Writing Center. University of Wisconsin, Madison; Rudestam, Kjell Erik and Rae R. Newton. “The Method Chapter: Describing Your Research Plan.” In Surviving Your Dissertation: A Comprehensive Guide to Content and Process. (Thousand Oaks, CA: Sage Publications, 2015), pp. 87-115; What is Interpretive Research. Institute of Public and International Affairs, University of Utah; Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University; Methods and Materials. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College.

Writing Tip

Statistical Designs and Tests? Do Not Fear Them!

Don't avoid using a quantitative approach to analyzing your research problem just because you fear the idea of applying statistical designs and tests. A qualitative approach, such as conducting interviews or content analysis of archival texts, can yield exciting new insights about a research problem, but it should not be undertaken simply because you have a disdain for running a simple regression. A well designed quantitative research study can often be accomplished in very clear and direct ways, whereas a similar study of a qualitative nature usually requires considerable time to analyze large volumes of data and imposes a tremendous burden in creating new paths for analysis where previously no path associated with your research problem existed.

Another Writing Tip

Knowing the Relationship Between Theories and Methods

There can be multiple meanings associated with the term "theories" and the term "methods" in social sciences research. A helpful way to delineate between them is to understand "theories" as representing different ways of characterizing the social world when you research it and "methods" as representing different ways of generating and analyzing data about that social world. Framed in this way, all empirical social sciences research involves theories and methods, whether they are stated explicitly or not. However, while theories and methods are often related, it is important that, as a researcher, you deliberately separate them in order to avoid your theories playing a disproportionate role in shaping what outcomes your chosen methods produce.

Engage in an ongoing dialectic between the application of theories and methods so that you can use the outcomes of your methods to interrogate and develop new theories, or new ways of conceptually framing the research problem. This is how scholarship grows and branches out into new intellectual territory.

Reynolds, R. Larry. Ways of Knowing. Alternative Microeconomics . Part 1, Chapter 3. Boise State University; The Theory-Method Relationship. S-Cool Revision. United Kingdom.

Yet Another Writing Tip

Methods and the Methodology

Do not confuse the terms "methods" and "methodology." As Schneider notes, a method refers to the technical steps taken to do research. Descriptions of methods usually include defining and stating why you have chosen specific techniques to investigate a research problem, followed by an outline of the procedures you used to systematically select, gather, and process the data [remember to always save the interpretation of data for the discussion section of your paper].

The methodology refers to a discussion of the underlying reasoning why particular methods were used . This discussion includes describing the theoretical concepts that inform the choice of methods to be applied, placing the choice of methods within the more general nature of academic work, and reviewing its relevance to examining the research problem. The methodology section also includes a thorough review of the methods other scholars have used to study the topic.

Bryman, Alan. "Of Methods and Methodology." Qualitative Research in Organizations and Management: An International Journal 3 (2008): 159-168; Schneider, Florian. “What's in a Methodology: The Difference between Method, Methodology, and Theory…and How to Get the Balance Right?” PoliticsEastAsia.com. Chinese Department, University of Leiden, Netherlands.

  • Last Updated: Aug 30, 2024 10:02 AM
  • URL: https://libguides.usc.edu/writingguide

What is research methodology?

The basics of research methodology

Why do you need a research methodology, what needs to be included, why do you need to document your research method, what are the different types of research instruments, qualitative / quantitative / mixed research methodologies, how do you choose the best research methodology for you, frequently asked questions about research methodology, related articles.

When you’re working on your first piece of academic research, there are many different things to focus on, and it can be overwhelming to stay on top of everything. This is especially true of budding or inexperienced researchers.

If you’ve never put together a research proposal before or find yourself in a position where you need to explain your research methodology decisions, there are a few things you need to be aware of.

Once you understand the ins and outs, handling academic research in the future will be less intimidating. We break down the basics below:

A research methodology encompasses the way in which you intend to carry out your research. This includes how you plan to tackle things like collection methods, statistical analysis, participant observations, and more.

You can think of your research methodology as a formula: one part is how you plan to put your research into practice, and another is why you feel this is the best way to approach it. Your research methodology is ultimately a systematic plan for resolving your research problem.

In short, you are explaining how you will take your idea and turn it into a study, which in turn will produce valid and reliable results that are in accordance with the aims and objectives of your research. This is true whether your paper plans to make use of qualitative methods or quantitative methods.

The purpose of a research methodology is to explain the reasoning behind your approach to your research: you'll need to support your collection methods, methods of analysis, and other key points of your work.

Think of it like writing a plan or an outline for what you intend to do.

When carrying out research, it can be easy to go off-track or depart from your standard methodology.

Tip: Having a methodology keeps you accountable and on track with your original aims and objectives, and gives you a suitable and sound plan to keep your project manageable, smooth, and effective.

With all that said, how do you write out your standard approach to a research methodology?

As a general plan, your methodology should include the following information:

  • Your research method.  You need to state whether you plan to use quantitative analysis, qualitative analysis, or mixed-method research methods. This will often be determined by what you hope to achieve with your research.
  • Explain your reasoning. Why are you taking this methodological approach? Why is this particular methodology the best way to answer your research problem and achieve your objectives?
  • Explain your instruments.  This will mainly be about your collection methods. There are various instruments you can use, such as interviews, physical surveys, and questionnaires. Your methodology will need to detail your reasoning in choosing a particular instrument for your research.
  • What will you do with your results?  How are you going to analyze the data once you have gathered it?
  • Advise your reader.  If there is anything in your research methodology that your reader might be unfamiliar with, you should explain it in more detail. For example, you should give any background information to your methods that might be relevant or provide your reasoning if you are conducting your research in a non-standard way.
  • How will your sampling process go?  What will your sampling procedure be and why? For example, if you will collect data by carrying out semi-structured or unstructured interviews, how will you choose your interviewees and how will you conduct the interviews themselves?
  • Any practical limitations?  You should discuss any limitations you foresee being an issue when you’re carrying out your research.

In any dissertation, thesis, or academic journal, you will always find a chapter dedicated to explaining the research methodology of the person who carried out the study, also referred to as the methodology section of the work.

A good research methodology will explain what you are going to do and why, while a poor methodology will lead to a messy or disorganized approach.

You should also be able to justify in this section your reasoning for why you intend to carry out your research in a particular way, especially if it might be a particularly unique method.

Having a sound methodology in place can also help you with the following:

  • When another researcher at a later date wishes to try and replicate your research, they will need your explanations and guidelines.
  • In the event that you receive any criticism or questions about the research you carried out at a later point, you will be able to refer back to your methodology and succinctly explain the how and why of your approach.
  • It provides you with a plan to follow throughout your research. When you are drafting your methodology approach, you need to be sure that the method you are using is the right one for your goal. This will help you with both explaining and understanding your method.
  • It affords you the opportunity to document from the outset what you intend to achieve with your research, from start to finish.

A research instrument is a tool you will use to help you collect, measure and analyze the data you use as part of your research.

The choice of research instrument will usually be yours to make as the researcher and will be whichever best suits your methodology.

There are many different research instruments you can use in collecting data for your research.

Generally, they can be grouped as follows:

  • Interviews (either as a group or one-on-one). You can carry out interviews in many different ways. For example, your interview can be structured, semi-structured, or unstructured. The difference between them is how formal the set of questions is that is asked of the interviewee. In a group interview, you may choose to ask the interviewees to give you their opinions or perceptions on certain topics.
  • Surveys (online or in-person). In survey research, you are posing questions in which you ask for a response from the person taking the survey. You may wish to have either free-answer questions such as essay-style questions, or you may wish to use closed questions such as multiple choice. You may even wish to make the survey a mixture of both.
  • Focus Groups.  Similar to the group interview above, you may wish to ask a focus group to discuss a particular topic or opinion while you make a note of the answers given.
  • Observations.  This is a good research instrument to use if you are looking into human behaviors. Different ways of researching this include studying the spontaneous behavior of participants in their everyday life, or something more structured. A structured observation is research conducted at a set time and place where researchers observe behavior as planned and agreed upon with participants.
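
Closed-question instruments such as the multiple-choice surveys mentioned above yield data that is straightforward to tabulate. As a purely illustrative sketch (the responses below are invented), tallying a single survey question with Python's standard library might look like this:

```python
from collections import Counter

# Hypothetical responses to one closed (multiple-choice) survey question.
responses = ["agree", "agree", "neutral", "disagree", "agree",
             "neutral", "agree", "disagree", "agree", "neutral"]

counts = Counter(responses)
total = len(responses)

# Report each answer with its count and share, most frequent first.
for answer, n in counts.most_common():
    print(f"{answer}: {n} ({n / total:.0%})")
# agree: 5 (50%)
# neutral: 3 (30%)
# disagree: 2 (20%)
```

In a real study the same tabulation would feed into whatever statistical analysis your methodology specifies; the point here is only that closed questions produce directly countable data, unlike free-answer responses, which require coding first.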

These are the most common ways of carrying out research, but it is really dependent on your needs as a researcher and what approach you think is best to take.

It is also possible to combine a number of research instruments if this is necessary and appropriate in answering your research problem.

There are three different types of methodologies, and they are distinguished by whether they focus on words, numbers, or both.

  • Quantitative. This methodology focuses on measuring and testing numerical data. When using this form of research, your objective will usually be to confirm something, for example, testing a set of hypotheses. Typical instruments: surveys, tests, existing databases.
  • Qualitative. Qualitative research is a process of collecting and analyzing words and textual data. This form of research methodology is often used where the aims of the research are exploratory, for example, when trying to understand human actions in fields such as sociology or psychology. Typical instruments: observations, interviews, focus groups.
  • Mixed-method. A mixed-method approach combines both of the above. The quantitative component provides definitive facts and figures, while the qualitative component adds an interesting human aspect. Where a mixed method can be used, it can produce incredibly interesting results, because it yields data that is both exact and exploratory at the same time.

➡️ Want to learn more about the differences between qualitative and quantitative research, and how to use both methods? Check out our guide for that!

If you've done your due diligence, you'll have an idea of which methodology approach is best suited to your research.

It’s likely that you will have carried out considerable reading and homework before you reach this point and you may have taken inspiration from other similar studies that have yielded good results.

Still, it is important to consider different options before setting your research in stone. Exploring different options available will help you to explain why the choice you ultimately make is preferable to other methods.

If addressing your research problem requires you to gather large volumes of numerical data to test hypotheses, a quantitative research method is likely to provide you with the most usable results.

If instead you’re looking to try and learn more about people, and their perception of events, your methodology is more exploratory in nature and would therefore probably be better served using a qualitative research methodology.

It helps to always bring things back to the question: what do I want to achieve with my research?

Once you have conducted your research, you need to analyze it. Here are some helpful guides for qualitative data analysis:

➡️  How to do a content analysis

➡️  How to do a thematic analysis

➡️  How to do a rhetorical analysis

Research methodology refers to the techniques used to find and analyze information for a study, ensuring that the results are valid, reliable and that they address the research objective.

Data can typically be organized into four different categories or methods: observational, experimental, simulation, and derived.

Writing a methodology section is a process of introducing your methods and instruments, discussing your analysis, providing more background information, addressing your research limitations, and more.

Your research methodology section will need a clear research question and a proposed research approach. You'll need to provide background, introduce your research question, describe your methodology, and cite the works you consulted during your data collection phase.

The research methodology section of your study will indicate how valid your findings are and how well-informed your paper is. It also assists future researchers planning to use the same methodology, who want to cite your study or replicate it.

Research Methodology – Types, Examples and Writing Guide

Research Methodology

Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. It also encompasses the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

The main types of research methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An example of a research methodology section could read as follows:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 years old who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.
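The random assignment step described above can be sketched in a few lines of code. This is a minimal illustration with hypothetical participant IDs, not part of the study protocol:

```python
import random

# Hypothetical IDs for the 100 recruited adults
participants = [f"P{i:03d}" for i in range(1, 101)]

# Shuffle with a fixed seed so the allocation is reproducible
rng = random.Random(42)
rng.shuffle(participants)

# Split 1:1 into the CBT (experimental) and control groups
experimental_group = participants[:50]
control_group = participants[50:]
```

Seeding the random number generator is optional, but it makes the allocation auditable, which reviewers often appreciate.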

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
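As a rough illustration of the t-test component of this analysis, the snippet below computes Welch's t statistic for two independent groups using only the standard library. The scores are hypothetical placeholders, not study data; a real analysis would use a statistics package and also report degrees of freedom and p-values:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical post-intervention BDI-II scores (lower = fewer symptoms)
cbt_scores = [12, 9, 15, 8, 11, 10, 14, 7, 13, 9]
control_scores = [22, 18, 25, 20, 19, 23, 21, 24, 17, 22]

def independent_t(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    se = sqrt(stdev(a) ** 2 / na + stdev(b) ** 2 / nb)
    return (mean(a) - mean(b)) / se

t_stat = independent_t(cbt_scores, control_scores)
```

A large negative t here would indicate lower (improved) depression scores in the CBT group relative to control.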

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology Vs Research Methods

| Research Methodology | Research Methods |
| --- | --- |
| Refers to the philosophical and theoretical frameworks that guide the research process. | Refer to the techniques and procedures used to collect and analyze data. |
| Concerned with the underlying principles and assumptions of research. | Concerned with the practical aspects of research. |
| Provides a rationale for why certain research methods are used. | Determine the specific steps that will be taken to conduct research. |
| Broader in scope; involves understanding the overall approach to research. | Narrower in scope; focus on specific techniques and tools used in research. |
| Concerned with identifying research questions, defining the research problem, and formulating hypotheses. | Concerned with collecting data, analyzing data, and interpreting results. |
| Concerned with the validity and reliability of research. | Concerned with the accuracy and precision of data. |
| Concerned with the ethical considerations of research. | Concerned with the practical considerations of research. |



What Is a Research Methodology? | Steps & Tips

Published on 25 February 2019 by Shona McCombes. Revised on 10 October 2022.

Your research methodology discusses and explains the data collection and analysis methods you used in your research. A key part of your thesis, dissertation, or research paper, the methodology chapter explains what you did and how you did it, allowing readers to evaluate the reliability and validity of your research.

It should include:

  • The type of research you conducted
  • How you collected and analysed your data
  • Any tools or materials you used in the research
  • Why you chose these methods

Note that your methodology section should generally be written in the past tense. Academic style guides in your field may provide detailed guidelines on what to include for different types of studies, and your citation style might provide guidelines for your methodology section (e.g., an APA Style methods section).


Why is a methods section important?

Your methods section is your opportunity to share how you conducted your research and why you chose the methods you chose. It’s also the place to show that your research was rigorously conducted and can be replicated .

It gives your research legitimacy and situates it within your field, and also gives your readers a place to refer to if they have any questions or critiques in other sections.

Step 1: Explain your methodological approach

You can start by introducing your overall approach to your research. You have two options here.

Option 1: Start with your “what”

What research problem or question did you investigate?

  • Aim to describe the characteristics of something?
  • Explore an under-researched topic?
  • Establish a causal relationship?

And what type of data did you need to achieve this aim?

  • Quantitative data , qualitative data , or a mix of both?
  • Primary data collected yourself, or secondary data collected by someone else?
  • Experimental data gathered by controlling and manipulating variables, or descriptive data gathered via observations?

Option 2: Start with your “why”

Depending on your discipline, you can also start with a discussion of the rationale and assumptions underpinning your methodology. In other words, why did you choose these methods for your study?

  • Why is this the best way to answer your research question?
  • Is this a standard methodology in your field, or does it require justification?
  • Were there any ethical considerations involved in your choices?
  • What are the criteria for validity and reliability in this type of research ?

Step 2: Describe your data collection methods

Once you have introduced your reader to your methodological approach, you should share full details about your data collection methods.

Quantitative methods

In order to be considered generalisable, you should describe quantitative research methods in enough detail for another researcher to replicate your study.

Here, explain how you operationalised your concepts and measured your variables. Discuss your sampling method or inclusion/exclusion criteria, as well as any tools, procedures, and materials you used to gather your data.

Surveys

Describe where, when, and how the survey was conducted.

  • How did you design the questionnaire?
  • What form did your questions take (e.g., multiple choice, Likert scale )?
  • Were your surveys conducted in-person or virtually?
  • What sampling method did you use to select participants?
  • What was your sample size and response rate?

Experiments

Share full details of the tools, techniques, and procedures you used to conduct your experiment.

  • How did you design the experiment ?
  • How did you recruit participants?
  • How did you manipulate and measure the variables ?
  • What tools did you use?

Existing data

Explain how you gathered and selected the material (such as datasets or archival data) that you used in your analysis.

  • Where did you source the material?
  • How was the data originally produced?
  • What criteria did you use to select material (e.g., date range)?

For example:

The survey consisted of 5 multiple-choice questions and 10 questions measured on a 7-point Likert scale.

The goal was to collect survey responses from 350 customers visiting the fitness apparel company’s brick-and-mortar location in Boston on 4–8 July 2022, between 11:00 and 15:00.

Here, a customer was defined as a person who had purchased a product from the company on the day they took the survey. Participants were given 5 minutes to fill in the survey anonymously. In total, 408 customers responded, but not all surveys were fully completed. Due to this, 371 survey results were included in the analysis.
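The exclusion step described above (408 responses collected, incomplete surveys dropped before analysis) can be expressed as a simple filter. The records below are hypothetical:

```python
# Hypothetical raw survey responses: None marks an unanswered question
raw_responses = [
    {"id": 1, "answers": [4, 5, 3, 2, 4]},
    {"id": 2, "answers": [5, None, 3, 2, 4]},  # incomplete survey
    {"id": 3, "answers": [3, 3, 4, 4, 5]},
]

# Keep only fully completed surveys, mirroring the 408 -> 371 exclusion
complete = [r for r in raw_responses if None not in r["answers"]]
```

Reporting the rule you used to exclude responses, as this example does in a comment, is exactly the kind of detail a methods section should make explicit.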

Qualitative methods

In qualitative research , methods are often more flexible and subjective. For this reason, it’s crucial to robustly explain the methodology choices you made.

Be sure to discuss the criteria you used to select your data, the context in which your research was conducted, and the role you played in collecting your data (e.g., were you an active participant, or a passive observer?)

Interviews or focus groups

Describe where, when, and how the interviews were conducted.

  • How did you find and select participants?
  • How many participants took part?
  • What form did the interviews take ( structured , semi-structured , or unstructured )?
  • How long were the interviews?
  • How were they recorded?

Participant observation

Describe where, when, and how you conducted the observation or ethnography .

  • What group or community did you observe? How long did you spend there?
  • How did you gain access to this group? What role did you play in the community?
  • How long did you spend conducting the research? Where was it located?
  • How did you record your data (e.g., audiovisual recordings, note-taking)?

Existing data

Explain how you selected case study materials for your analysis.

  • What type of materials did you analyse?
  • How did you select them?

For example:

In order to gain better insight into possibilities for future improvement of the fitness shop’s product range, semi-structured interviews were conducted with 8 returning customers.

Here, a returning customer was defined as someone who usually bought products at least twice a week from the store.

Surveys were used to select participants. Interviews were conducted in a small office next to the cash register and lasted approximately 20 minutes each. Answers were recorded by note-taking, and seven interviews were also filmed with consent. One interviewee preferred not to be filmed.

Mixed methods

Mixed methods research combines quantitative and qualitative approaches. If a standalone quantitative or qualitative study is insufficient to answer your research question, mixed methods may be a good fit for you.

Mixed methods are less common than standalone analyses, largely because they require a great deal of effort to pull off successfully. If you choose to pursue mixed methods, it’s especially important to robustly justify your methods here.


Step 3: Describe your analysis method

Next, you should indicate how you processed and analysed your data. Avoid going into too much detail: you should not start introducing or discussing any of your results at this stage.

In quantitative research , your analysis will be based on numbers. In your methods section, you can include:

  • How you prepared the data before analysing it (e.g., checking for missing data , removing outliers , transforming variables)
  • Which software you used (e.g., SPSS, Stata or R)
  • Which statistical tests you used (e.g., two-tailed t test , simple linear regression )
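As a worked example of one such test, the sketch below fits a simple linear regression by ordinary least squares using only the standard library; in practice you would use SPSS, Stata, or R as noted above. The data points are invented for illustration:

```python
from statistics import mean

# Hypothetical data: hours of weekly exercise vs. satisfaction score
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]

def simple_linear_regression(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    x_bar, y_bar = mean(xs), mean(ys)
    sxy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(xs, ys))
    sxx = sum((xi - x_bar) ** 2 for xi in xs)
    slope = sxy / sxx
    intercept = y_bar - slope * x_bar
    return slope, intercept

slope, intercept = simple_linear_regression(x, y)
```

Your methods section would name the test (here, simple linear regression) and the software, while the fitted coefficients themselves belong in the results section.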

In qualitative research, your analysis will be based on language, images, and observations (often involving some form of textual analysis ).

Specific methods might include:

  • Content analysis : Categorising and discussing the meaning of words, phrases and sentences
  • Thematic analysis : Coding and closely examining the data to identify broad themes and patterns
  • Discourse analysis : Studying communication and meaning in relation to their social context
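A minimal sketch of the tallying step that often follows thematic coding: once excerpts have been tagged with themes, counting theme frequencies across the dataset is straightforward. The excerpts and theme labels here are hypothetical:

```python
from collections import Counter

# Hypothetical coded interview excerpts, each tagged with themes
coded_excerpts = [
    {"participant": "P01", "themes": ["access", "cost"]},
    {"participant": "P02", "themes": ["cost"]},
    {"participant": "P03", "themes": ["access", "trust"]},
    {"participant": "P04", "themes": ["trust", "cost"]},
]

# Tally how often each theme appears across all excerpts
theme_counts = Counter(t for e in coded_excerpts for t in e["themes"])
```

Frequency counts like these support, but do not replace, the close interpretive reading that thematic analysis requires.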

Mixed methods combine the above two research methods, integrating both qualitative and quantitative approaches into one coherent analytical process.

Step 4: Evaluate and justify the methodological choices you made

Above all, your methodology section should clearly make the case for why you chose the methods you did. This is especially true if you did not take the most standard approach to your topic. In this case, discuss why other methods were not suitable for your objectives, and show how this approach contributes new knowledge or understanding.

In any case, it should be overwhelmingly clear to your reader that you set yourself up for success in terms of your methodology’s design. Show how your methods should lead to results that are valid and reliable, while leaving the analysis of the meaning, importance, and relevance of your results for your discussion section .

  • Quantitative: Lab-based experiments cannot always accurately simulate real-life situations and behaviours, but they are effective for testing causal relationships between variables .
  • Qualitative: Unstructured interviews usually produce results that cannot be generalised beyond the sample group , but they provide a more in-depth understanding of participants’ perceptions, motivations, and emotions.
  • Mixed methods: Despite issues systematically comparing differing types of data, a solely quantitative study would not sufficiently incorporate the lived experience of each participant, while a solely qualitative study would be insufficiently generalisable.

Remember that your aim is not just to describe your methods, but to show how and why you applied them. Again, it’s critical to demonstrate that your research was rigorously conducted and can be replicated.

Tips for writing a strong methodology chapter

1. Focus on your objectives and research questions

The methodology section should clearly show why your methods suit your objectives  and convince the reader that you chose the best possible approach to answering your problem statement and research questions .

2. Cite relevant sources

Your methodology can be strengthened by referencing existing research in your field. This can help you to:

  • Show that you followed established practice for your type of research
  • Discuss how you decided on your approach by evaluating existing research
  • Present a novel methodological approach to address a gap in the literature

3. Write for your audience

Consider how much information you need to give, and avoid getting too lengthy. If you are using methods that are standard for your discipline, you probably don’t need to give a lot of background or justification.

Regardless, your methodology should be a clear, well-structured text that makes an argument for your approach, not just a list of technical details and procedures.

Frequently asked questions about methodology

Methodology refers to the overarching strategy and rationale of your research. Developing your methodology involves studying the research methods used in your field and the theories or principles that underpin them, in order to choose the approach that best matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. interviews, experiments , surveys , statistical tests ).

In a dissertation or scientific paper, the methodology chapter or methods section comes after the introduction and before the results , discussion and conclusion .

Depending on the length and type of document, you might also include a literature review or theoretical framework before the methodology.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
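A simple random sample like the one described can be drawn in one line; the sketch below assumes a hypothetical sampling frame of 2,000 enrolled students:

```python
import random

# Hypothetical sampling frame: 2,000 enrolled students, numbered 1-2000
population = list(range(1, 2001))

# Simple random sample of 100 students, seeded for reproducibility
rng = random.Random(7)
sample = rng.sample(population, 100)
```

Other sampling methods (stratified, cluster, systematic) would replace the `sample` call with their own selection logic, but the principle of documenting the frame and seed is the same.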



Research Methodology Guide: Writing Tips, Types, & Examples


No dissertation or research paper is complete without the research methodology section. Since this is the chapter where you explain how you carried out your research, this is where all the meat is! Here’s where you clearly lay out the steps you have taken to test your hypothesis or research problem.

Through this blog, we’ll unravel the complexities and meaning of research methodology in academic writing, from its fundamental principles and ethics to the diverse types of research methodology in use today. Alongside offering research methodology examples, we aim to guide you on how to write research methodology, ensuring your research endeavors are both impactful and impeccably grounded!


Let’s first take a closer look at a simple research methodology definition:

Defining research methodology

Research methodology is the set of procedures and techniques used to collect, analyze, and interpret data to understand and solve a research problem. Methodology in research not only includes the design and methods but also the basic principles that guide the choice of specific methods.

Grasping the concept of methodology in research is essential for students and scholars, as it demonstrates the thorough and structured approach used to explore a hypothesis or research question. Understanding the definition of methodology in research also helps you identify the methods used to collect data. Whichever research approach you take, adherence to the proper research paper format is crucial.

Now let’s explore some research methodology types:

Types of research methodology

1. Qualitative research methodology

Qualitative research methodology is aimed at understanding concepts, thoughts, or experiences. This approach is descriptive and is often utilized to gather in-depth insights into people’s attitudes, behaviors, or cultures. Qualitative research methodology involves methods like interviews, focus groups, and observation. The strength of this methodology lies in its ability to provide contextual richness.

2. Quantitative research methodology

Quantitative research methodology, on the other hand, is focused on quantifying the problem by generating numerical data or data that can be transformed into usable statistics. It uses measurable data to formulate facts and uncover patterns in research. Quantitative research methodology typically involves surveys, experiments, or statistical analysis. This methodology is appreciated for its ability to produce objective results that are generalizable to a larger population.

3. Mixed-Methods research methodology

Mixed-methods research combines both qualitative and quantitative research methodologies to provide a more comprehensive understanding of the research problem. This approach leverages the strengths of both methodologies to provide deeper insight into the research question of a research paper.

Research methodology vs. research methods

The research methodology, or research design, is the overall strategy and rationale you use to carry out the research. Research methods, by contrast, are the specific tools and processes you use to gather and analyze the data you need to test your hypothesis.

Research methodology examples and application

To further understand research methodology, let’s explore some examples of research methodology:

a. Qualitative research methodology example: A study exploring the impact of author branding on author popularity might utilize in-depth interviews to gather personal experiences and perspectives.

b. Quantitative research methodology example: A research project investigating the effects of a book promotion technique on book sales could employ a statistical analysis of profit margins and sales before and after the implementation of the method.

c. Mixed-Methods research methodology example: A study examining the relationship between social media use and academic performance might combine both qualitative and quantitative approaches. It could include surveys to quantitatively assess the frequency of social media usage and its correlation with grades, alongside focus groups or interviews to qualitatively explore students’ perceptions and experiences regarding how social media affects their study habits and academic engagement.
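The quantitative example (b) above can be sketched with Python's standard library. The sales figures below are invented purely for illustration:

```python
import statistics

# Invented weekly book-sales figures before and after a promotion
# technique was introduced (illustrative numbers only).
sales_before = [120, 135, 110, 128, 140, 125]
sales_after = [150, 162, 148, 171, 158, 166]

mean_before = statistics.mean(sales_before)
mean_after = statistics.mean(sales_after)

# A first, descriptive pass: how much did average weekly sales change?
change = mean_after - mean_before
print(f"Average change: {change:+.2f} units per week")
```

A full analysis would follow this descriptive pass with a significance test, to rule out the possibility that the change reflects chance variation.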

These examples highlight the meaning of methodology in research and how it guides the research process, from data collection to analysis, ensuring the study’s objectives are met efficiently.

Importance of methodology in research papers

When it comes to writing your study, the methodology in research papers or a dissertation plays a pivotal role. A well-crafted methodology section of a research paper or thesis not only enhances the credibility of your research but also provides a roadmap for others to replicate or build upon your work.

How to structure the research methods chapter

Wondering how to write the research methodology section? Follow these steps to create a strong methods chapter:

Step 1: Explain your research methodology

At the start of a research paper, you will have provided the background of your research and stated your hypothesis or research problem. In this section, you elaborate on your research strategy.

Begin by restating your research question and proceed to explain what type of research you opted for to test it. Depending on your research, here are some questions you can consider: 

a. Did you use qualitative or quantitative data to test the hypothesis? 

b. Did you perform an experiment where you collected data or are you writing a dissertation that is descriptive/theoretical without data collection? 

c. Did you collect primary data yourself, or did you analyze secondary or existing data as part of your study?

These questions will help you establish the rationale for your study on a broader level, which you will follow by elaborating on the specific methods you used to collect and understand your data. 

Step 2: Explain the methods you used to test your hypothesis 

Now that you have told your reader what type of research you’ve undertaken for the dissertation, it’s time to dig into specifics. State what specific methods you used and explain the conditions and variables involved. Explain what the theoretical framework behind the method was, what samples you used for testing it, and what tools and materials you used to collect the data. 

Step 3: Explain how you analyzed the results

Once you have explained the data collection process, explain how you analyzed and studied the data. Here, your focus is simply to explain the methods of analysis rather than the results of the study. 

Here are some questions you can answer at this stage: 

a. What tools or software did you use to analyze your results? 

b. What parameters or variables did you consider while understanding and studying the data you’ve collected? 

c. Was your analysis based on a theoretical framework? 

Your mode of analysis will change depending on whether you used a quantitative or qualitative research methodology in your study. If you’re working within the hard sciences or physical sciences, you are likely to use a quantitative research methodology (relying on numbers and hard data). If you’re doing a qualitative study in the social sciences or humanities, your analysis may rely on understanding language and socio-political contexts around your topic. This is why it’s important to establish what kind of study you’re undertaking at the outset.

Step 4: Defend your choice of methodology 

Now that you have gone through your research process in detail, you’ll also have to make a case for it. Justify your choice of methodology and methods, explaining why it is the best choice for your research question. This is especially important if you have chosen an unconventional approach or have simply chosen to study an existing research problem from a different perspective. Compare it with other methodologies, especially ones attempted by previous researchers, and discuss what contribution your methodology makes.

Step 5: Discuss the obstacles you encountered and how you overcame them

No matter how thorough a methodology is, it doesn’t come without its hurdles. This is a natural part of scientific research that is important to document so that your peers and future researchers are aware of it. Writing in a research paper about this aspect of your research process also tells your evaluator that you have actively worked to overcome the pitfalls that came your way and you have refined the research process. 

Tips to write an effective methodology chapter

1. Remember who you are writing for. Keeping sight of the reader/evaluator will help you know what to elaborate on and what information they are already likely to have. You’re condensing months’ worth of research into just a few pages, so you should omit basic definitions and information about general phenomena people already know.

2. Do not give an overly elaborate explanation of every single condition in your study. 

3. Skip details and findings irrelevant to the results.

4. Cite references that back your claim and choice of methodology. 

5. Consistently emphasize the relationship between your research question and the methodology you adopted to study it. 

To sum it up, what is methodology in research? It’s the blueprint of your research, essential for ensuring that your study is systematic, rigorous, and credible. Whether your focus is on qualitative research methodology, quantitative research methodology, or a combination of both, understanding and clearly defining your methodology is key to the success of your research.

Once you write the research methodology and complete writing the entire research paper, the next step is to edit your paper. As experts in research paper editing and proofreading services, we’d love to help you perfect your paper!

Here are some other articles that you might find useful: 

  • Essential Research Tips for Essay Writing
  • How to Write a Lab Report: Examples from Academic Editors
  • The Essential Types of Editing Every Writer Needs to Know
  • Editing and Proofreading Academic Papers: A Short Guide
  • The Top 10 Editing and Proofreading Services of 2023

Frequently Asked Questions

  • What does research methodology mean?
  • What types of research methodologies are there?
  • What is qualitative research methodology?
  • How do you determine sample size in research methodology?
  • What is action research methodology?




Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.

First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:

  • Qualitative vs. quantitative: Will your data take the form of words or numbers?
  • Primary vs. secondary: Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental: Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data.

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analyzing data
  • Examples of data analysis methods
  • Other interesting articles
  • Frequently asked questions about research methods

Data is the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data.

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing, collect quantitative data.


You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs. experimental data

In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.

In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.

To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.



Research methods for collecting data

  • Experiment (primary, quantitative): to test cause-and-effect relationships.
  • Survey (primary, quantitative): to understand general characteristics of a population.
  • Interview/focus group (primary, qualitative): to gain a more in-depth understanding of a topic.
  • Observation (primary, either): to understand how something occurs in its natural setting.
  • Literature review (secondary, either): to situate your research in an existing body of work, or to evaluate trends within a research topic.
  • Case study (either, either): to gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.

Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
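The survey example can be made concrete with a short Python sketch; the coded response labels below are invented for illustration:

```python
from collections import Counter

# Invented open-ended survey responses, already coded into short
# category labels during a first qualitative reading.
responses = [
    "positive", "negative", "positive", "neutral",
    "positive", "negative", "positive", "neutral",
]

# The quantitative angle: frequency of each coded category.
frequencies = Counter(responses)
for category, count in frequencies.most_common():
    print(category, count)
```

The qualitative angle would instead examine what respondents meant within each category, something a frequency table cannot capture.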

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews, literature reviews, case studies, ethnographies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods.

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment.
  • Using probability sampling methods.

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.
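As a minimal sketch of such a comparison, the following computes Welch's t-statistic for two independent groups using only Python's standard library. The scores are invented, and a real analysis would also compute degrees of freedom and a p-value:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (statistics.mean(b) - statistics.mean(a)) / standard_error

# Invented outcome scores for a control group and a treatment group.
control = [3.1, 2.8, 3.4, 3.0, 2.9, 3.2]
treatment = [3.6, 3.9, 3.5, 4.0, 3.7, 3.8]

t = welch_t(control, treatment)
print(f"t = {t:.2f}")
```

Because the procedure is fully specified, any researcher running it on the same data gets the same statistic, which is what makes quantitative results easy to standardize and share.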

Research methods for analyzing data

  • Statistical analysis (quantitative): to analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
  • Meta-analysis (quantitative): to statistically analyze the results of a large collection of studies. It can only be applied to studies that collected data in a statistically valid manner.
  • Thematic analysis (qualitative): to analyze data collected from interviews, focus groups, or textual sources, and to understand general themes in the data and how they are communicated.
  • Content analysis (either): to analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. It can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts and meanings, use qualitative methods.
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.

In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.




  • Perspective
  • Published: 29 November 2022

The fundamental importance of method to theory

  • Rick Dale (ORCID: 0000-0001-7865-474X),
  • Anne S. Warlaumont (ORCID: 0000-0001-9450-1372) &
  • Kerri L. Johnson (ORCID: 0000-0002-1458-2019)

Nature Reviews Psychology volume 2, pages 55–66 (2023)


  • Communication
  • Human behaviour

Many domains of inquiry in psychology are concerned with rich and complex phenomena. At the same time, the field of psychology is grappling with how to improve research practices to address concerns with the scientific enterprise. In this Perspective, we argue that both of these challenges can be addressed by adopting a principle of methodological variety. According to this principle, developing a variety of methodological tools should be regarded as a scientific goal in itself, one that is critical for advancing scientific theory. To illustrate, we show how the study of language and communication requires varied methodologies, and that theory development proceeds, in part, by integrating disparate tools and designs. We argue that the importance of methodological variation and innovation runs deep, travelling alongside theory development to the core of the scientific enterprise. Finally, we highlight ongoing research agendas that might help to specify, quantify and model methodological variety and its implications.




Lick, D. J. & Johnson, K. L. Straight until proven gay: a systematic bias toward straight categorizations in sexual orientation judgments. J. Pers. Soc. Psychol. 110 , 801 (2016).

Alt, N. P., Lick, D. J. & Johnson, K. L. The straight categorization bias: a motivated and altruistic reasoning account. J. Pers. Soc. Psychol. https://doi.org/10.1037/pspi0000232 (2020).

Popper, K. Conjectures and Refutations: The Growth of Scientific Knowledge (Routledge, 2002).

Heit, E. & Hahn, U. Diversity-based reasoning in children. Cogn. Psychol. 43 , 243–273 (2001).

Heit, E., Hahn, U. & Feeney, A. in Categorization Inside and Outside the Laboratory: Essays in Honor of Douglas L. Medin 87–99 (American Psychological Association, 2005).

MacWhinney, B. in The Handbook of Linguistics 2nd edn (eds. Aronoff, M. & Rees-Miller, J.) 397–413 (Wiley, 2017).

Stivers, T. et al. Universals and cultural variation in turn-taking in conversation. Proc. Natl Acad. Sci. USA 106 , 10587–10592 (2009).

Louwerse, M. M., Dale, R., Bard, E. G. & Jeuniaux, P. Behavior matching in multimodal communication is synchronized. Cogn. Sci. 36 , 1404–1426 (2012).

Fusaroli, R., Bjørndahl, J. S., Roepstorff, A. & Tylén, K. A heart for interaction: shared physiological dynamics and behavioral coordination in a collective, creative construction task. J. Exp. Psychol. Hum. Percept. Perform. 42 , 1297 (2016).

Rasenberg, M., Özyürek, A. & Dingemanse, M. Alignment in multimodal interaction: an integrative framework. Cogn. Sci. 44 , e12911 (2020).

Dunn, M., Greenhill, S. J., Levinson, S. C. & Gray, R. D. Evolved structure of language shows lineage-specific trends in word-order universals. Nature 473 , 79–82 (2011).

Hua, X., Greenhill, S. J., Cardillo, M., Schneemann, H. & Bromham, L. The ecological drivers of variation in global language diversity. Nat. Commun. 10 , 1–10 (2019).

Christiansen, M. H. & Chater, N. The now-or-never bottleneck: a fundamental constraint on language. Behav. Brain Sci. 39 , e62 (2016).

Fitch, W. T., De Boer, B., Mathur, N. & Ghazanfar, A. A. Monkey vocal tracts are speech-ready. Sci. Adv. 2 , e1600723 (2016).

Hauser, M. D., Chomsky, N. & Fitch, W. T. The faculty of language: what is it, who has it, and how did it evolve? Science 298 , 1569–1579 (2002).

Cowley, S. J. Distributed Language (John Benjamins, 2011).

Samuels, R. Nativism in cognitive science. Mind Lang. 17 , 233–265 (2002).

Behme, C. & Deacon, S. H. Language learning in infancy: does the empirical evidence support a domain specific language acquisition device? Phil. Psychol. 21 , 641–671 (2008).

Chomsky, N. 4. A Review Of BF Skinner’s Verbal Behavior (Harvard Univ. Press, 2013).

Vihman, M. M. Phonological Development: The Origins of Language in the Child (Blackwell, 1996).

Oller, D. K. The Emergence of the Speech Capacity (Psychology Press, 2000).

Clark, E. V. & Casillas, M. First Language Acquisition (Routledge, 2015).

Goldstein, M. H., King, A. P. & West, M. J. Social interaction shapes babbling: testing parallels between birdsong and speech. Proc. Natl Acad. Sci. USA 100 , 8030–8035 (2003).

Warlaumont, A. S. Modeling the emergence of syllabic structure. J. Phonet. 53 , 61–65 (2015).

VanDam, M. et al. HomeBank: An online repository of daylong child-centered audio recordings. Semin. Speech Lang. 37 , 128–142 (2016).

Elmlinger, S. L., Schwade, J. A. & Goldstein, M. H. The ecology of prelinguistic vocal learning: parents simplify the structure of their speech in response to babbling. J. Child Lang. 46 , 998–1011 (2019).

Roy, B. C., Frank, M. C., DeCamp, P., Miller, M. & Roy, D. Predicting the birth of a spoken word. Proc. Natl Acad. Sci. USA 112 , 12663–12668 (2015).

McClelland, J. L. The place of modeling in cognitive science. Top. Cogn. Sci. 1 , 11–38 (2009).

Smaldino, P. E. in Computational Social Psychology (eds Vallacher’, R., Read, S. J. & Nowak, A.) 311–331 (Routledge, 2017).

Guest, O. & Martin, A. E. How computational modeling can force theory building in psychological science. Perspect. Psychol. Sci. 16 , 789–802 (2021).

Elman, J. L., Bates, E. A. & Johnson, M. H. Rethinking Innateness: A Connectionist Perspective on Development Vol. 10 (MIT Press, 1996).

Warlaumont, A. S., Westermann, G., Buder, E. H. & Oller, D. K. Prespeech motor learning in a neural network using reinforcement. Neural Netw. 38 , 64–75 (2013).

Warlaumont, A. S. & Finnegan, M. K. Learning to produce syllabic speech sounds via reward-modulated neural plasticity. PLoS One 11 , e0145096 (2016).

MacWhinney, B. & Snow, C. The child language data exchange system: an update. J. Child. Lang. 17 , 457–472 (1990).

MacWhinney, B. The CHILDES Project: Tools for Analyzing Talk 3rd edn (Psychology Press, 2014).

Kachergis, G., Marchman, V. A. & Frank, M. C. Toward a “standard model” of early language learning. Curr. Dir. Psychol. Sci. 31 , 20–27 (2022).

Lewis, J. D. & Elman, J. L. A connectionist investigation of linguistic arguments from the poverty of the stimulus: learning the unlearnable. In Proc. Annual Meeting of the Cognitive Science Society Vol. 23, 552–557 (eds. Moore, J. D. & Stenning, K.) (2001).

Regier, T. & Gahl, S. Learning the unlearnable: the role of missing evidence. Cognition 93 , 147–155 (2004).

Reali, F. & Christiansen, M. H. Uncovering the richness of the stimulus: structure dependence and indirect statistical evidence. Cogn. Sci. 29 , 1007–1028 (2005).

Foraker, S., Regier, T., Khetarpal, N., Perfors, A. & Tenenbaum, J. Indirect evidence and the poverty of the stimulus: the case of anaphoric one . Cognit. Sci. 33 , 287–300 (2009).

Saffran, J. R., Aslin, R. N. & Newport, E. L. Statistical learning by 8-month-old infants. Science 274 , 1926–1928 (1996).

McMurray, B. & Hollich, G. Core computational principles of language acquisition: can statistical learning do the job? Dev. Sci. 12 , 365–368 (2009).

Frost, R., Armstrong, B. C., Siegelman, N. & Christiansen, M. H. Domain generality versus modality specificity: the paradox of statistical learning. Trends Cogn. Sci. 19 , 117–125 (2015).

Isbilen, E. S., Frost, R. L. A., Monaghan, P. & Christiansen, M. H. Statistically based chunking of nonadjacent dependencies. J. Exp. Psychol. Gen. https://doi.org/10.1037/xge0001207 (2022).

Ruba, A. L., Pollak, S. D. & Saffran, J. R. Acquiring complex communicative systems: Statistical learning of language and emotion. Top. Cogn. Sci. https://doi.org/10.1111/tops.12612 (2022).

Abney, D. H., Warlaumont, A. S., Oller, D. K., Wallot, S. & Kello, C. T. Multiple coordination patterns in infant and adult vocalizations. Infancy 22 , 514–539 (2017).

Mendoza, J. K. & Fausey, C. M. Everyday music in infancy. Dev. Sci. 24 , e13122 (2019).

Ritwika, V. et al. Exploratory dynamics of vocal foraging during infant-caregiver communication. Sci. Rep. 10 , 10469 (2020).

Mendoza, J. K. & Fausey, C. M. Quantifying everyday ecologies: principles for manual annotation of many hours of infants’ lives. Front. Psychol. 12 , 710636 (2021).

Fernald, A., Zangl, R., Portillo, A. L. & Marchman, V. A. in Developmental Psycholinguistics: On-line Methods in Children’s Language Processing (eds. Sekerina, I. A., Fernández, E. M. & Clahsen, H.) Vol. 44, 97 (John Benjamins, 2008).

Weisleder, A. & Fernald, A. Talking to children matters: early language experience strengthens processing and builds vocabulary. Psychol. Sci. 24 , 2143–2152 (2013).

Bergelson, E. & Aslin, R. N. Nature and origins of the lexicon in 6-mo-olds. Proc. Natl Acad. Sci. USA 114 , 12916–12921 (2017).

Brennan, S. E., Galati, A. & Kuhlen, A. K. in Psychology of Learning and Motivation (ed. Ross, B. H.) Vol. 53, 301–344 (Elsevier, 2010).

Streeck, J., Goodwin, C. & LeBaron, C. Embodied Interaction: Language and Body in the Material World (Cambridge Univ. Press, 2011).

Goodwin, C. Co-operative Action (Cambridge Univ. Press, 2017).

Dale, R., Spivey, M. J. in Eye-Tracking In Interaction. Studies On The Role Of Eye Gaze In Dialogue (eds. Oben, B. and Brône, G.) 67–90 (John Benjamins, 2018).

Richardson, D. C. & Spivey, M. J. in Encyclopedia of Biomaterials and Biomedical Engineering 573–582 (CRC Press, 2004).

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. Integration of visual and linguistic information in spoken language comprehension. Science 268 , 1632–1634 (1995).

Spivey, M. J., Tanenhaus, M. K., Eberhard, K. M. & Sedivy, J. C. Eye movements and spoken language comprehension: effects of visual context on syntactic ambiguity resolution. Cogn. Psychol. 45 , 447–481 (2002).

Richardson, D. C., Dale, R. & Spivey, M. J. Eye movements in language and cognition. Methods Cogn. Linguist. 18 , 323–344 (2007).

Ferreira, F. & Clifton, C. Jr The independence of syntactic processing. J. Mem. Lang. 25 , 348–368 (1986).

Kamide, Y., Altmann, G. T. & Haywood, S. L. The time-course of prediction in incremental sentence processing: evidence from anticipatory eye movements. J. Mem. Lang. 49 , 133–156 (2003).

Coco, M. I., Keller, F. & Malcolm, G. L. Anticipation in real‐world scenes: the role of visual context and visual memory. Cogn. Sci. 40 , 1995–2024 (2016).

Coco, M. I. & Keller, F. Scan patterns predict sentence production in the cross‐modal processing of visual scenes. Cogn. Sci. 36 , 1204–1223 (2012).

Kieslich, P. J., Henninger, F., Wulff, D. U., Haslbeck, J. M. & Schulte-Mecklenbeck, M. in A Handbook of Process Tracing Methods 111–130 (Routledge, 2019).

Spivey, M. J., Grosjean, M. & Knoblich, G. Continuous attraction toward phonological competitors. Proc. Natl Acad. Sci. USA 102 , 10393–10398 (2005).

Freeman, J., Dale, R. & Farmer, T. Hand in motion reveals mind in motion. Front. Psychol. 2 , 59 (2011).

Freeman, J. B. & Johnson, K. L. More than meets the eye: split-second social perception. Trends Cogn. Sci. 20 , 362–374 (2016).

Goodale, B. M., Alt, N. P., Lick, D. J. & Johnson, K. L. Groups at a glance: perceivers infer social belonging in a group based on perceptual summaries of sex ratio. J. Exp. Psychol. Gen. 147 , 1660–1676 (2018).

Sneller, B. & Roberts, G. Why some behaviors spread while others don’t: a laboratory simulation of dialect contact. Cognition 170 , 298–311 (2018).

Atkinson, M., Mills, G. J. & Smith, K. Social group effects on the emergence of communicative conventions and language complexity. J. Lang. Evol. 4 , 1–18 (2019).

Raviv, L., Meyer, A. & Lev‐Ari, S. The role of social network structure in the emergence of linguistic structure. Cogn. Sci. 44 , e12876 (2020).

Lupyan, G. & Dale, R. Language structure is partly determined by social structure. PLoS One 5 , e8559 (2010).

Lupyan, G. & Dale, R. Why are there different languages? The role of adaptation in linguistic diversity. Trends Cogn. Sci. 20 , 649–660 (2016).

Wu, L., Waber, B. N., Aral, S., Brynjolfsson, E. & Pentland, A. Mining face-to-face interaction networks using sociometric badges: predicting productivity in an IT configuration task. Inf. Syst. Behav. Soc. Methods https://doi.org/10.2139/ssrn.1130251 (2008).

Paxton, A. & Dale, R. Argument disrupts interpersonal synchrony. Q. J. Exp. Psychol. 66 , 2092–2102 (2013).

Alviar, C., Dale, R. & Galati, A. Complex communication dynamics: exploring the structure of an academic talk. Cogn. Sci. 43 , e12718 (2019).

Joo, J., Bucy, E. P. & Seidel, C. Automated coding of televised leader displays: detecting nonverbal political behavior with computer vision and deep learning. Int. J. Commun. 13 , 4044–4066 (2019).

Metallinou, A. et al. The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations. Lang. Res. Eval.   50 , 497–521 (2016).

Pouw, W., Paxton, A., Harrison, S. J. & Dixon, J. A. Acoustic information about upper limb movement in voicing. Proc. Natl Acad. Sci. USA 117 , 11364–11367 (2020).

Enfield, N., Levinson, S. C., De Ruiter, J. P. & Stivers, T. in Field Manual Vol. 10, 96–99 (ed. Majid, A.) (Max Planck Institute for Psycholinguistics, 2007).

Enfield, N. & Sidnell, J. On the concept of action in the study of interaction. Discourse Stud. 19 , 515–535 (2017).

Duran, N. D., Paxton, A. & Fusaroli, R. ALIGN: analyzing linguistic interactions with generalizable techNiques — a Python library. Psychol. Methods 24 , 419 (2019).

Brennan, S. E. & Clark, H. H. Conceptual pacts and lexical choice in conversation. J. Exp. Psychol. Learn. Mem. Cogn. 22 , 1482–1493 (1996).

Hasson, U., Nir, Y., Levy, I., Fuhrmann, G. & Malach, R. Intersubject synchronization of cortical activity during natural vision. Science 303 , 1634–1640 (2004).

Huth, A. G., De Heer, W. A., Griffiths, T. L., Theunissen, F. E. & Gallant, J. L. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532 , 453–458 (2016).

Shain, C. et al. Robust effects of working memory demand during naturalistic language comprehension in language-selective cortex. J. Neurosci.   42 , 7412–7430 (2022).

Fedorenko, E., Blank, I. A., Siegelman, M. & Mineroff, Z. Lack of selectivity for syntax relative to word meanings throughout the language network. Cognition 203 , 104348 (2020).

Stephens, G. J., Silbert, L. J. & Hasson, U. Speaker–listener neural coupling underlies successful communication. Proc. Natl Acad. Sci. USA 107 , 14425–14430 (2010).

Schilbach, L. et al. Toward a second-person neuroscience. Behav. Brain Sci. 36 , 393–414 (2013).

Redcay, E. & Schilbach, L. Using second-person neuroscience to elucidate the mechanisms of social interaction. Nat. Rev. Neurosci. 20 , 495–505 (2019).

Riley, M. A., Richardson, M., Shockley, K. & Ramenzoni, V. C. Interpersonal synergies. Front. Psychol. 2 , 38 (2011).

Dale, R., Fusaroli, R., Duran, N. D. & Richardson, D. C. in Psychology of Learning and Motivation (ed. Ross, B. H.) Vol. 59, 43–95 (Elsevier, 2013).

Fusaroli, R., Rączaszek-Leonardi, J. & Tylén, K. Dialog as interpersonal synergy. N. Ideas Psychol. 32 , 147–157 (2014).

Hadley, L. V., Naylor, G. & Hamilton, A. F. D. C. A review of theories and methods in the science of face-to-face social interaction. Nat. Rev. Psychol. 1 , 42–54 (2022).

Cornejo, C., Cuadros, Z., Morales, R. & Paredes, J. Interpersonal coordination: methods achievements and challenges. Front. Psychol. https://doi.org/10.3389/fpsyg.2017.01685  (2017).

Smaldino, P. E. in Computational Social Psychology 311–331 (Routledge, 2017).

Devezer, B., Nardin, L. G., Baumgaertner, B. & Buzbas, E. O. Scientific discovery in a model-centric framework: reproducibility, innovation, and epistemic diversity. PLoS One 14 , e0216125 (2019).

Sulik, J., Bahrami, B. & Deroy, O. The diversity gap: when diversity matters for knowledge. Perspect. Psychol. Sci. 17 , 752–767 (2022).

O’Connor, C. & Bruner, J. Dynamics and diversity in epistemic communities. Erkenntnis 84 , 101–119 (2019).

Longino, H. in The Stanford Encyclopedia of Philosophy (ed. Zalta, E. N.) https://plato.stanford.edu/archives/sum2019/entries/scientific-knowledge-social/ (Stanford Univ., 2019).

Van Rooij, I. The tractable cognition thesis. Cogn. Sci. 32 , 939–984 (2008).

Kwisthout, J., Wareham, T. & Van Rooij, I. Bayesian intractability is not an ailment that approximation can cure. Cogn. Sci. 35 , 779–784 (2011).

Contreras Kallens, P. & Dale, R. Exploratory mapping of theoretical landscapes through word use in abstracts. Scientometrics 116 , 1641–1674 (2018).

Methods for methods’ sake. Nat. Methods https://doi.org/10.1038/nmeth1004-1 (2004).

Oberauer, K. & Lewandowsky, S. Addressing the theory crisis in psychology. Psychon. Bull. Rev. 26 , 1596–1618 (2019).

Meehl, P. E. Theory-testing in psychology and physics: a methodological paradox. Phil. Sci. 34 , 103–115 (1967).

Klein, O. et al. A practical guide for transparency in psychological science. Collabra Psychol. 4 , 20 (2018).

Muthukrishna, M. & Henrich, J. A problem in theory. Nat. Hum. Behav. 3 , 221–229 (2019).

Eronen, M. I. & Bringmann, L. F. The theory crisis in psychology: how to move forward. Perspect. Psychol. Sci. https://doi.org/10.1177/1745691620970586 (2021).

Borsboom, D., van der Maas, H. L. J., Dalege, J., Kievit, R. A. & Haig, B. D. Theory construction methodology: a practical framework for building theories in psychology. Perspect. Psychol. Sci. https://doi.org/10.1177/1745691620969647 (2021).

Kyvik, S. & Reymert, I. Research collaboration in groups and networks: differences across academic fields. Scientometrics 113 , 951–967 (2017).

Tebes, J. K. & Thai, N. D. Interdisciplinary team science and the public: steps toward a participatory team science. Am. Psychol. 73 , 549 (2018).

Falk-Krzesinski, H. J. et al. Mapping a research agenda for the science of team science. Res. Eval. 20 , 145–158 (2011).

da Silva, J. A. T. The Matthew effect impacts science and academic publishing by preferentially amplifying citations, metrics and status. Scientometrics 126 , 5373–5377 (2021).

Scheel, A. M., Tiokhin, L., Isager, P. M. & Lakens, D. Why hypothesis testers should spend less time testing hypotheses. Perspect. Psychol. Sci. 16 , 744–755 (2020).

Jones, M. N. Big Data in Cognitive Science (Psychology Press, 2016).

Paxton, A. & Griffiths, T. L. Finding the traces of behavioral and cognitive processes in big data and naturally occurring datasets. Behav. Res. Methods 49 , 1630–1638 (2017).

Lupyan, G. & Goldstone, R. L. Beyond the lab: using big data to discover principles of cognition. Behav Res. Methods , 51 , 1554–3528 (2019).

Haspelmath, M., Dryer, M. S., Gil, D. & Comrie, B. (eds) The World Atlas of Language Structures (Max Planck Digital Library, 2013).

Eberhard, D. M., Simons, G. F. & Fennig, C. D. Ethnologue: Languages of the World (SIL International, 2021).

MacCorquodale, K. & Meehl, P. E. On a distinction between hypothetical constructs and intervening variables. Psychol. Rev. 55 , 95–107 (1948).

van Rooij, I. & Baggio, G. Theory before the test: how to build high-verisimilitude explanatory theories in psychological science. Perspect. Psychol. Sci . https://doi.org/10.1177/1745691620970604 (2021).

Christiansen, M. H. & Chater, N. Creating Language: Integrating Evolution, Acquisition, and Processing (MIT Press, 2016).

Berwick, R. C. & Chomsky, N. Why Only Us: Language and Evolution (MIT Press, 2016).

Gopnik, A. Scientific thinking in young children: theoretical advances, empirical research, and policy implications. Science 337 , 1623–1627 (2012).

Pereira, A. F., James, K. H., Jones, S. S. & Smith, L. B. Early biases and developmental changes in self-generated object views. J. Vis. 10 , 22–22 (2010).

Fagan, M. K. & Iverson, J. M. The influence of mouthing on infant vocalization. Infancy 11 , 191–202 (2007).

Martin, J., Ruthven, M., Boubertakh, R. & Miquel, M. E. Realistic dynamic numerical phantom for MRI of the upper vocal tract. J. Imaging 6 , 86 (2020).

Spivey, M. J. & Dale, R. Continuous dynamics in real-time cognition. Curr. Dir. Psychol. Sci. 15 , 207–211 (2006).

Publication Manual of the American Psychological Association 3rd edn (American Psychological Association, 1983).

Publication Manual of the American Psychological Association 6th edn (American Psychological Association, 2010).

Ashby, W. R. An Introduction to Cybernetics (Martino, 1956).

de Raadt, J. D. R. Ashby’s law of requisite variety: an empirical study. Cybern. Syst. 18 , 517–536 (1987).

Ward, L. M. Dynamical Cognitive Science (MIT Press, 2002).

Regier, T., Carstensen, A. & Kemp, C. Languages support efficient communication about the environment: words for snow revisited. PLoS One 11 , e0151138 (2016).

Newell, A. Unified Theories of Cognition (Harvard Univ. Press, 1990).

Rich, P., de Haan, R., Wareham, T. & van Rooij, I. in Proc. Annual Meeting of the Cognitive Science Society Vol. 43, 3034–3040 (eds. Fitch, T., Lamm, C., Leder, H., and Teßmar-Raible, K.) (2021).

Potochnik, A. & Sanches de Oliveira, G. Patterns in cognitive phenomena and pluralism of explanatory styles. Top. Cogn. Sci. 12 , 1306–1320 (2020).

Leydesdorff, L. & Schank, T. Dynamic animations of journal maps: indicators of structural changes and interdisciplinary developments. J. Am. Soc. Inf. Sci. Technol. 59 , 1810–1818 (2008).

Leydesdorff, L. & Goldstone, R. L. Interdisciplinarity at the journal and specialty level: the changing knowledge bases of the journal Cognitive Science . J. Assoc. Inf. Sci. Technol. 65 , 164–177 (2014).

DeStefano, I., Oey, L. A., Brockbank, E. & Vul, E. Integration by parts: collaboration and topic structure in the CogSci community. Top. Cogn. Sci. 13 , 399–413 (2021).

Cummins, R. in Explanation And Cognition (eds Keil, F. C. & Wilson, R.) 117–144 (MIT Press, 2000).

Boyd, N. M. & Bogen, J. in Stanford Encyclopedia of Philosophy (ed. Zalta, E. N.) https://plato.stanford.edu/archives/win2021/entries/science-theory-observation/ (Stanford Univ., 2021).

Smaldino, P. E. How to build a strong theoretical foundation. Psychol. Inq. 31 , 297–301 (2020).

Chang, H. Inventing Temperature: Measurement and Scientific Progress (Oxford Univ. Press, 2004).

Download references


Choosing the Right Research Methodology: A Guide for Researchers


Choosing an optimal research methodology is crucial to the success of any research project. The methodology you select determines the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available, along with their strengths and weaknesses, is thus imperative to making an informed decision.

Understanding different research methods:

There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey-based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be selected and adapted depending on the research hypotheses and objectives.

Qualitative vs quantitative research:

When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people's experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.

Qualitative research methodology:

Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. 
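As a concrete illustration of how thematic analysis can be partly operationalized, the sketch below counts theme occurrences across open-ended survey responses. The responses, keywords, and codebook are all invented for illustration; real qualitative coding is interpretive and iterative rather than purely keyword-driven.

```python
from collections import Counter
import re

# Hypothetical open-ended survey responses (invented for illustration).
responses = [
    "I felt anxious before the procedure but the staff reassured me.",
    "The waiting time was long and I felt anxious.",
    "Staff were kind; the information leaflet helped.",
]

# A hypothetical codebook mapping observed keywords to analytic themes.
codebook = {
    "anxious": "anxiety",
    "reassured": "staff support",
    "kind": "staff support",
    "waiting": "access",
    "leaflet": "information",
    "information": "information",
}

theme_counts = Counter()
for response in responses:
    words = set(re.findall(r"[a-z]+", response.lower()))
    # Count each theme at most once per response, however many keywords hit it.
    themes = {codebook[w] for w in words if w in codebook}
    theme_counts.update(themes)

print(theme_counts.most_common())
```

A keyword count like this is only a starting point; the analyst still has to read the responses, refine the codebook, and interpret the themes in context.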

Quantitative research methodology:

The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, close-ended survey questions, and countable and numbered observations. Data analysis for quantitative research relies heavily on statistical methods.
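The hypothesis testing described above can be sketched with a simple two-sample comparison. This minimal example computes Welch's t statistic from first principles using only the standard library; the group scores are hypothetical.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical outcome scores for two groups (invented for illustration).
control = [72, 75, 70, 74, 71, 73]
treatment = [78, 80, 77, 82, 79, 81]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    var_a, var_b = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(treatment, control)
print(f"t = {t:.2f}")  # compare against a critical value, or compute a p value
```

In practice a statistical package would also supply degrees of freedom and a p value; the point here is only that quantitative analysis starts from a hypothesis and tests it against numerical data.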

Analysing qualitative vs quantitative data:

The methods used for data analysis also differ for qualitative and quantitative research. As mentioned earlier, quantitative data is generally analysed using statistical methods and leaves little room for speculation: the analysis is more structured and follows a predetermined plan, with the researcher starting from a hypothesis and using statistical methods to test it. In contrast, qualitative data analysis identifies patterns and themes within the data rather than providing statistical measures of it. It is an iterative process in which the researcher moves back and forth, gauging the larger implications of the data from different perspectives and revising the analysis as required.

When to use qualitative vs quantitative research:

The choice between qualitative and quantitative research will depend on the gap that the research project aims to address and the specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people's experiences or perspectives, qualitative research may be more suitable.

Conclusion:

In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.


Indian J Anaesth. 2016 Sep; 60(9)

Methodology for research I

Rakesh Garg

Department of Onco-anaesthesiology and Palliative Medicine, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India

The conduct of research requires a systematic approach involving diligent planning and execution as planned. It comprises various essential predefined components such as aims, population, conduct/technique, outcome and statistical considerations. These need to be objective, reliable and repeatable. Hence, an understanding of the basic aspects of methodology is essential for any researcher. This is a narrative review focusing on various aspects of the methodology for the conduct of clinical research. The relevant keywords were used for a literature search of various databases and of the bibliographies of the retrieved articles.

INTRODUCTION

Research is a process for acquiring new knowledge through a systematic approach involving diligent planning and interventions for the discovery or interpretation of newly gained information.[ 1 , 2 ] The reliability and validity of a study's outcome depend on a well-designed study with an objective, reliable and repeatable methodology, along with appropriate conduct, data collection and analysis with logical interpretation. An inappropriate or faulty methodology makes a study unacceptable and may even provide clinicians with faulty information. Hence, understanding the basic aspects of methodology is essential.

This is a narrative review based on an existing literature search. It focuses on specific aspects of the methodology for the conduct of a research study/clinical trial. The relevant keywords for the literature search included ‘research’, ‘study design’, ‘study controls’, ‘study population’, ‘inclusion/exclusion criteria’, ‘variables’, ‘sampling’, ‘randomisation’, ‘blinding’, ‘masking’, ‘allocation concealment’, ‘sample size’, ‘bias’ and ‘confounders’, alone and in combinations. The search engines included PubMed/MEDLINE, Google Scholar and Cochrane. The bibliographies of the retrieved articles were searched for manuscripts missed by the search engines, and print journals in the library were searched manually.

The following text describes the basic essentials of methodology that need to be adopted for conducting good research.

Aims and objectives of study

The aims and objectives of research need to be known thoroughly and should be specified before the start of the study, based on a thorough literature search and inputs from professional experience. Aims and objectives state whether the nature of the problem (formulated as a research question or research problem) is to be investigated or whether its solution is to be found by a different, more appropriate method. Lacunae in existing knowledge help formulate a research question. These statements have to be objective and specific, with all required details such as population, intervention, control and outcome variables, along with the timing of interventions.[ 3 , 4 , 5 ] This helps formulate a hypothesis, which is a scientifically derived statement about a particular problem in the defined population. Hypothesis generation also depends on the type of study. A researcher's observations related to any aspect initiate hypothesis generation. A cross-sectional survey generates a hypothesis. An observational study establishes associations and supports/rejects the hypothesis. An experiment finally tests the hypothesis.[ 5 , 6 , 7 ]

STUDY POPULATION AND PATIENT SELECTION, STUDY AREA, STUDY PERIOD

The flow of a study in an experimental design has various sequential steps [ Figure 1 ].[ 1 , 2 , 6 ] Population refers to an aggregate of individuals, things, cases, etc., i.e., observation units that are of interest and remain the focus of investigation. This reference or target population is the group to which the study outcome will be extrapolated.[ 6 ] Once this target population is identified, the researcher needs to assess whether it is possible to study all individuals for an outcome. Usually, all cannot be included, so a study population is sampled. The important attribute of a sample is that every individual should have an equal and non-zero chance of being included in the study. Sampling should be done independently, i.e., the selection of one individual does not influence the inclusion or exclusion of another. In clinical practice, sampling is restricted to a particular place (patients attending clinics or posted for surgery) or includes multiple centres, rather than sampling the universe. Hence, the researcher should be cautious in generalising the outcomes. For example, patients referred to a tertiary care hospital may have more risk factors than those at primary centres, where patients of lesser severity are managed. Hence, researchers must disclose details of the study area. The study period also needs to be disclosed, as it helps readers understand the population characteristics and indicates the relevance of the study to the present period.

Figure 1. Flow of an experimental study

The size of the sample has to be pre-determined, analytically approached and sufficiently large to represent the population.[ 7 , 8 , 9 ] Including an unnecessarily large sample wastes resources, risks missing the true treatment effect due to the heterogeneity of a large population, and is time-consuming.[ 6 ] If a study is too small, it will not provide a suitable answer to the research question. The main determinants of the sample size include the clinical hypothesis, primary end-point, study design, probability of Type I and II errors, power, and the minimum treatment difference of clinical importance.[ 7 ] Attrition of patients should be accounted for during the sample size calculation.[ 6 , 9 ]
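To illustrate how these determinants interact, the following is a minimal sketch (not part of the original article; all numbers are hypothetical) of a per-group sample size calculation for comparing two means, using the standard normal approximation n = 2((z₁₋α/₂ + z_power)·σ/δ)²:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two means
    (two-sided test), via n = 2 * ((z_{1-alpha/2} + z_power) * sigma / delta)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical example: detect a 10-unit difference in the primary
# end-point, assuming SD = 15, alpha = 0.05 and 80% power.
n = n_per_group(sigma=15, delta=10)  # 36 per group
```

Note how the minimum clinically important difference (delta), the variability (sigma), and the Type I/II error probabilities each drive the result; expected attrition would then inflate `n` further (e.g. dividing by one minus the anticipated dropout rate).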

SELECTION OF STUDY DESIGN

An appropriate study design is essential for obtaining the best possible and most reliable estimate of the intervention outcome. Study design selection is based on parameters such as objectives, therapeutic area, treatment comparison, outcome and phase of the trial.[ 6 ] Study designs may be broadly classified as:[ 5 , 6 , 7 ]

  • Descriptive: Case report, case series, survey
  • Analytical: Case-control, cohort, cross-sectional
  • Experimental: Randomised controlled trial (RCT), quasi-experiment
  • Qualitative.

For studying causality, analytical observational studies are prudent to avoid posing risk to subjects. For clinical drugs or techniques, an experimental study is more appropriate.[ 6 ] In an RCT, the treatments are concurrent, i.e., the active and control interventions happen in the same period. It may be a parallel-group design, wherein treatment and control groups are allocated to different individuals; this requires comparing a placebo group or a gold-standard intervention (control) with the newer agent or technique.[ 6 ] In a matched-design RCT, randomisation is between matched pairs. In a cross-over study design, two or more treatments are administered sequentially to the same subject, and thus each subject acts as its own control. However, researchers should be aware of the ‘carryover effect’ of the previous intervention, and a suitable washout period needs to be ensured. In a cohort study design, subjects with a disease/symptom, or free of the study variable, are followed for a particular period. The cross-sectional study design is used to examine the prevalence of disease and for surveys and the validation of instruments, tools and questionnaires. Qualitative research is a study design wherein a health-related issue in the population is explored with regard to its description, exploration and explanation.[ 6 ]

Selection of controls

A control is required because of factors such as self-remitting disease, the Hawthorne effect (change in the response or behaviour of subjects when included in a study), the placebo effect (patients feel improvement even with a placebo), the effects of confounders and co-interventions, and the regression-to-the-mean phenomenon (for example, white-coat hypertension, where patients may show a higher value of the study parameter at recruitment but subsequently return to normal).[ 2 , 6 , 7 ] The control could be a placebo, no treatment, a different dose or regimen or intervention, or the standard/gold treatment. Withholding routine care in favour of a placebo is undesirable and unethical. For instance, when studying an analgesic regimen, it would be unethical not to administer analgesics in the control group; it is advisable to continue the standard of care, i.e., providing routine analgesics, even in the control group. The use of a placebo or no treatment may be considered where no current proven intervention exists, or where a placebo is required to evaluate the efficacy or safety of an intervention without serious or irreversible harm.

The comparisons to be made among the study groups also need to be specified.[ 6 , 7 , 9 ] These comparisons may prove superiority, non-inferiority or equivalence among groups. Superiority trials demonstrate superiority either to a placebo in a placebo-controlled trial or to an active control treatment. Non-inferiority trials prove that the efficacy of an intervention is no worse than that of the active comparator. Equivalence trials demonstrate that the outcomes of two or more interventions differ by a clinically unimportant margin, such that either technique or drug is clinically acceptable.

STUDY TOOLS

Study tools such as measurement scales, questionnaires and scoring systems need to be specified with objective definitions. These tools should be validated before use, and their appropriate use by the research staff is mandatory to avoid bias. They should be simple and easily understandable to everyone involved in the study.

Inclusion/exclusion criteria

In clinical research, a specific, relatively homogeneous patient population needs to be selected.[ 6 ] Inclusion and exclusion criteria define who can be included in or excluded from the study sample. The inclusion criteria identify the study population in a consistent, reliable, uniform and objective manner. The exclusion criteria comprise factors or characteristics that make the recruited population ineligible for the study; these factors may be confounders for the outcome parameter. For example, patients with liver disease would be excluded if coagulation parameters would impact the outcome. The exclusion criteria are applied over and above the inclusion criteria.

VARIABLES: PRIMARY AND SECONDARY

Variables are the definite characteristics/parameters being studied. A clear, precise and objective definition for the measurement of these characteristics is needed.[ 2 ] They should be measurable and interpretable, sensitive to the objective of the study, and clinically relevant. The most common end-points relate to efficacy, safety and quality of life. Study variables may be primary or secondary.[ 6 ] The primary end-point, usually one, provides the most relevant, reliable and convincing evidence related to the aim and objective. It is the characteristic on the basis of which the research question/hypothesis has been formulated, it reflects clinically relevant and important treatment benefits, and it determines the sample size. Secondary end-points are other objectives indirectly related to the primary objective through close association, or they may be associated effects/adverse effects of the intervention. The measurement timing of the variables must be defined a priori; measurements are usually made at screening, at baseline and at completion of the trial.

The study end-point parameter may be clinical or surrogate in nature. A clinical end-point relates directly to the clinically beneficial outcome of the intervention. A surrogate end-point is indirectly related to patient clinical benefit and is usually a laboratory measurement or physical sign used as a substitute for a clinically meaningful end-point. Surrogate end-points are more convenient, easily measurable, repeatable and faster.

SAMPLING TECHNIQUES: RANDOMISATION, BLINDING/MASKING AND ALLOCATION CONCEALMENT

Randomisation

Randomisation, or random allocation, is a method of allocating individuals into one of the groups (arms) of a study.[ 1 , 2 ] It is a basic assumption required for the statistical analysis of data. Randomisation maximises statistical power, especially in subgroup analyses, and minimises selection bias and allocation bias (or confounding). It leads to an equal distribution of all characteristics, measured or non-measured, visible or invisible, known or unknown, across the groups. Randomisation uses various strategies depending on the study design and outcome.

Probability sampling/randomisation

  • Simple/unrestricted: Each individual of the population has the same chance of being included in the sample. This is used when the population is small and homogeneous and a sampling frame is available; for example, the lottery method, a table of random numbers or computer-generated numbers
  • Stratified: Used in a non-homogeneous population. The population is divided into homogeneous groups (strata), and a sample is drawn from each stratum at random. It keeps the ‘characteristics’ of the participants (for example, age, weight or physical status) as similar as possible across the study groups. Allocation to strata can be equal or proportional
  • Systematic: Used when a complete and up-to-date sampling frame is available. The first unit is selected at random, and the rest are selected automatically according to a pre-designed pattern
  • Cluster: Applies to large geographical areas. The population is divided into a finite number of distinct and identifiable units (sampling units/elements). A group of such elements is a cluster, and sampling is done over these clusters; all units of the selected clusters are included in the study
  • Multistage: Applies to large nationwide surveys. Sampling is done in stages using random sampling, with sub-sampling within the selected clusters. If the procedure is repeated over more stages, it is termed multistage sampling
  • Multiphase: Some data are collected from all units of a sample, and other data are collected from a sub-sample of the units constituting the original sample (two-phase sampling). If three or more phases are used, it is termed multiphase sampling.
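The first three strategies above can be sketched with Python's standard library; this is an illustrative toy example (the sampling frame of 100 subjects and the two strata are hypothetical, not from the article):

```python
import random

population = list(range(1, 101))  # hypothetical sampling frame of 100 subjects
random.seed(42)                   # fixed seed so the sketch is reproducible

# Simple random sampling: every unit has the same, non-zero chance of selection
simple = random.sample(population, k=10)

# Systematic sampling: a random starting unit, then every k-th unit thereafter
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: draw at random from each homogeneous stratum
strata = {"stratum A": population[:60], "stratum B": population[60:]}
stratified = {name: random.sample(units, k=5) for name, units in strata.items()}
```

In practice the "population" would be the actual sampling frame (e.g. consecutive eligible patients), and stratum sizes and sample fractions would follow the equal or proportional allocation chosen in the protocol.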

Non-probability sampling/randomisation

This technique does not give all individuals in the population an equal and non-zero chance of being selected in the sample.

  • Convenience: Sampling is done as per the convenience of the investigator, i.e., easily available
  • Purposive/judgemental/selective/subjective: The sample is selected as per judgement of investigator
  • Quota: It is done as per judgement of the interviewer based on some specified characteristics such as sex and physical status.

ALLOCATION CONCEALMENT

Allocation concealment refers to the process of ensuring that the person recruiting or enrolling a participant remains unaware of the arm to which that participant will be allotted.[ 8 , 9 , 10 ] It is a strategy to avoid ascertainment or selection bias. For example, based on the expected outcome, a researcher might recruit a specific category, such as less sick patients, to a particular group and sicker patients to the other. Such selective recruitment would underestimate (if the treatment group is sicker) or overestimate (if the control group is sicker) the intervention effect.[ 9 ] The allocation should be concealed from the investigator until the initiation of the intervention. Hence, randomisation should be performed by an independent person who is not involved in the conduct of the study or its monitoring, and the randomisation list should be kept secret. The methods of allocation concealment include:[ 9 , 10 ]

  • Central randomisation: A central, independent authority performs the randomisation and informs the investigators via telephone, e-mail or fax
  • Pharmacy controlled: The pharmacy provides coded drugs for use
  • Sequentially numbered containers: Identical containers, equal in weight, similar in appearance and tamper-proof, are used
  • Sequentially numbered, opaque, sealed envelopes: The random assignments are concealed in opaque envelopes to be opened just before the intervention; this is the most common and easiest method.
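As a sketch of how the independent person might prepare the concealed list (for example, for central randomisation), the following generates a permuted-block randomisation sequence. The block size, arm labels and seed are illustrative assumptions, not part of the article:

```python
import random

def block_randomisation_list(n_participants, block_size=4, arms=("A", "B"), seed=2024):
    """Generate a randomisation list in permuted blocks, as an independent
    person (not the investigator) might do for central randomisation.
    Each block contains equal numbers of each arm, shuffled at random."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    allocations = []
    while len(allocations) < n_participants:
        block = list(arms) * per_arm   # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)             # random order within the block
        allocations.extend(block)
    return allocations[:n_participants]

# The finished list is kept secret; the investigator learns each assignment
# only when a participant is actually enrolled.
sequence = block_randomisation_list(20)
```

Blocking keeps the arms balanced throughout recruitment, while concealment of the list prevents the investigator from anticipating the next assignment.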

BLINDING/MASKING

Blinding ensures that the group to which study subjects are assigned is not known or easily ascertained by those who are ‘masked’, i.e., participants, investigators, evaluators or statisticians, to limit the occurrence of bias.[ 1 , 2 ] It requires that the intervention and the standard or placebo treatment appear the same. Blinding is different from allocation concealment: allocation concealment is done before, whereas blinding is done at and after, the initiation of treatment. In situations such as study drugs with different formulations or medical versus surgical interventions, blinding may not be feasible,[ 8 ] and sham blocks or needling in subjects may not be ethical. In such situations, the outcome measurement should be made as objective as possible to avoid bias, and whoever can be masked should be blinded. The research manuscript must give details about blinding, including who was blinded after assignment to interventions and the process or technique used. Blinding could be:[ 8 , 9 ]

  • Unblinded: The process cannot conceal the randomisation
  • Single blind: One of the participants, investigators or evaluators remains masked
  • Double blind: Both the investigators and the participants remain masked
  • Triple blind: The participants, the investigators and the data analysts all remain masked.

BIAS AND CONFOUNDERS

Bias is a systematic deviation from the real, true effect (towards a better or worse outcome) resulting from a faulty study design.[ 1 , 2 ] The various steps of a study, such as randomisation, concealment, blinding, objective measurement and strict protocol adherence, reduce bias.

The various possible and potential biases in a trial can be:[ 7 ]

  • Investigator bias: An investigator, consciously or subconsciously, favours one group over the other
  • Evaluator bias: The investigator taking the end-point variable measurement intentionally or unintentionally favours one group over the other; this is more common with subjective or quality-of-life end-points
  • Performance bias: Occurs when a participant knows of the exposure to the intervention or its response, be it inactive or active
  • Selection bias: Occurs due to the sampling method, such as admission bias (selective factors for admission), non-response bias (refusals to participate, where those who refused may differ from those who participated), or a sample that is not representative of the population
  • Ascertainment or information bias: Occurs due to measurement error or misclassification of patients; for example, diagnostic bias (more diagnostic procedures performed in cases than in controls) or recall bias (errors of categorisation, or the investigator aggressively searching for exposure variables in cases)
  • Allocation bias: Occurs when the measured treatment effect differs from the true treatment effect
  • Detection bias: Occurs when observations in one group are not as vigilantly sought as in the other
  • Attrition bias/loss-to-follow-up bias: Occurs when patients are lost to follow-up preferentially in a particular group.

Confounding occurs when outcome parameters are affected by other factors not directly relevant to the research question.[ 1 , 7 ] For example, if the impact of a drug on haemodynamics is studied in hypertensive patients, diabetes mellitus would be a confounder, as it also affects the haemodynamic response through autonomic disturbances. Hence, all potential confounders should be carefully considered at the design stage of a study. If the confounders are known, they can be adjusted for statistically, but with some loss of precision (statistical power). Confounding can thus be controlled either by preventing it or by adjusting for it in the statistical analysis. It can be controlled by restriction in the study design (for example, a restricted age range of 2-6 years), matching (constraints in the selection of the comparison group so that the study and comparison groups have a similar distribution of the potential confounder), stratification in the analysis without matching (restricting the analysis to narrow ranges of the extraneous variable), and mathematical modelling in the analysis (advanced statistical methods such as multiple linear regression and logistic regression). Strategies during data analysis include stratified analysis using the Mantel-Haenszel method to adjust for confounders, a matched-design approach, data restriction and model fitting using regression techniques.
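The Mantel-Haenszel stratified analysis pools a 2×2 table from each stratum of the confounder into a single adjusted odds ratio, OR_MH = Σ(aᵢdᵢ/nᵢ) / Σ(bᵢcᵢ/nᵢ). A minimal sketch, with entirely hypothetical counts stratified by a confounder such as diabetes status:

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel pooled odds ratio across strata of a confounder.
    Each stratum is a 2x2 table (a, b, c, d):
        a = exposed cases,     b = exposed non-cases,
        c = non-exposed cases, d = non-exposed non-cases.
    OR_MH = sum(a*d/n) / sum(b*c/n), with n = a + b + c + d per stratum."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Hypothetical data, one 2x2 table per level of the confounder:
strata = [
    (20, 80, 10, 90),   # stratum 1 (e.g. diabetic patients)
    (15, 85, 5, 95),    # stratum 2 (e.g. non-diabetic patients)
]
or_mh = mantel_haenszel_or(strata)  # ~2.63 for these numbers
```

Because the odds ratio is computed within each stratum before pooling, the confounder cannot distort the exposure-outcome comparison, which is exactly the adjustment the text describes.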

A basic understanding of methodology is essential for obtaining reliable, repeatable and clinically acceptable outcomes. The study plan, including all its components, needs to be designed before the start of the study, and the study protocol should be strictly adhered to during its conduct.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

What is Research Methodology? Definition, Types, and Examples

Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.

The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.

What is research methodology?

A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what research methodology is, you also need to know why it is important to pick the right methodology.

Why is research methodology important?

Having a good research methodology in place has the following advantages: 3

  • Helps other researchers who may want to replicate your research; the explanations will be of benefit to them.
  • You can easily answer any questions about your research if they arise at a later stage.
  • A research methodology provides a framework and guidelines for researchers to clearly define research questions, hypotheses, and objectives.
  • It helps researchers identify the most appropriate research design, sampling technique, and data collection and analysis methods.
  • A sound research methodology helps researchers ensure that their findings are valid and reliable and free from biases and errors.
  • It also helps ensure that ethical guidelines are followed while conducting research.
  • A good research methodology helps researchers in planning their research efficiently, by ensuring optimum usage of their time and resources.


Types of research methodology

There are three types of research methodology based on the type of research and the data required. 1

  • Quantitative research methodology focuses on measuring and testing numerical data. This approach is good for reaching a large number of people in a short amount of time. This type of research helps in testing the causal relationships between variables, making predictions, and generalizing results to wider populations.
  • Qualitative research methodology examines the opinions, behaviors, and experiences of people. It collects and analyzes words and textual data. This research methodology requires fewer participants but is more time consuming because the time spent per participant is large. This method is used in exploratory research where the research problem being investigated is not clearly defined.
  • Mixed-method research methodology uses the characteristics of both quantitative and qualitative research methodologies in the same study. This method allows researchers to validate their findings, verify if the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other method.

What are the types of sampling designs in research methodology?

Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.

  • Probability sampling

In this type of sampling design, a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are:

  • Systematic —sample members are chosen at regular intervals. It requires selecting a random starting point and a fixed sampling interval that is repeated through the list. Because the pattern is predefined, it is the least time consuming method.
  • Stratified —researchers divide the population into smaller groups that don’t overlap but represent the entire population. While sampling, these groups can be organized, and then a sample can be drawn from each group separately.
  • Cluster —the population is divided into clusters based on demographic parameters like age, sex, location, etc.
  • Convenience —selects participants who are most easily accessible to researchers due to geographical proximity, availability at a particular time, etc.
  • Purposive —participants are selected at the researcher’s discretion. Researchers consider the purpose of the study and the understanding of the target audience.
  • Snowball —already selected participants use their social networks to refer the researcher to other potential participants.
  • Quota —while designing the study, the researchers decide how many people with which characteristics to include as participants. The characteristics help in choosing people most likely to provide insights into the subject.

What are data collection methods?

During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.

Qualitative research 5

  • One-on-one interviews: Helps the interviewers understand a respondent’s subjective opinion and experience pertaining to a specific topic or event
  • Document study/literature review/record keeping: Researchers’ review of already existing written materials such as archives, annual reports, research articles, guidelines, policy documents, etc.
  • Focus groups: Constructive discussions that usually include a small sample of about 6-10 people and a moderator, to understand the participants’ opinion on a given topic.
  • Qualitative observation : Researchers collect data using their five senses (sight, smell, touch, taste, and hearing).

Quantitative research 6

  • Sampling: The most common type is probability sampling.
  • Interviews: Commonly telephonic or done in-person.
  • Observations: Structured observations are most commonly used in quantitative research. In this method, researchers make observations about specific behaviors of individuals in a structured setting.
  • Document review: Reviewing existing research or documents to collect evidence for supporting the research.
  • Surveys and questionnaires: Surveys can be administered both online and offline depending on the requirement and sample size.


What are data analysis methods?

The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.

Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.

Descriptive analysis is used to describe the basic features of different types of data to present it in a way that ensures the patterns become meaningful. The different types of descriptive analysis methods are:

  • Measures of frequency (count, percent, frequency)
  • Measures of central tendency (mean, median, mode)
  • Measures of dispersion or variation (range, variance, standard deviation)
  • Measure of position (percentile ranks, quartile ranks)
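The descriptive measures listed above can be computed directly with Python's standard statistics module; this sketch uses a small hypothetical data set (not from the article):

```python
import statistics

# Hypothetical sample of measurements from a small study
data = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]

# Measure of frequency
count = len(data)                      # 10 observations

# Measures of central tendency
mean = statistics.mean(data)           # 20.1
median = statistics.median(data)       # 21.0
mode = statistics.mode(data)           # 22 (most frequent value)

# Measures of dispersion or variation
value_range = max(data) - min(data)    # 18
variance = statistics.variance(data)   # sample variance
std_dev = statistics.stdev(data)       # sample standard deviation

# Measure of position: quartiles, from which percentile ranks follow
q1, q2, q3 = statistics.quantiles(data, n=4)
```

Reporting these side by side (e.g. mean with standard deviation, or median with quartiles) is what makes the patterns in the raw data meaningful to a reader.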

Inferential analysis is used to make predictions about a larger population based on the analysis of the data collected from a smaller population. This analysis is used to study the relationships between different variables. Some commonly used inferential data analysis methods are:

  • Correlation: To understand the relationship between two or more variables.
  • Cross-tabulation: Analyze the relationship between multiple variables.
  • Regression analysis: Study the impact of independent variables on the dependent variable.
  • Frequency tables: To understand the frequency of data.
  • Analysis of variance: To test the degree to which two or more variables differ in an experiment.

Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include:

  • Content analysis: For analyzing documented information from text and images by determining the presence of certain words or concepts in texts.
  • Narrative analysis: For analyzing content obtained from sources such as interviews, field observations, and surveys. The stories and opinions shared by people are used to answer research questions.
  • Discourse analysis: For analyzing interactions with people considering the social context, that is, the lifestyle and environment, under which the interaction occurs.
  • Grounded theory: Involves hypothesis creation by data collection and analysis to explain why a phenomenon occurred.
  • Thematic analysis: To identify important themes or patterns in data and use these to address an issue.

How to choose a research methodology?

Here are some important factors to consider when choosing a research methodology: 8

  • Research objectives, aims, and questions —these would help structure the research design.
  • Review existing literature to identify any gaps in knowledge.
  • Check the statistical requirements —if data-driven or statistical results are needed then quantitative research is the best. If the research questions can be answered based on people’s opinions and perceptions, then qualitative research is most suitable.
  • Sample size —sample size can often determine the feasibility of a research methodology. For a large sample, less effort- and time-intensive methods are appropriate.
  • Constraints —constraints of time, geography, and resources can help define the appropriate methodology.


How to write a research methodology?

A research methodology should include the following components: 3,9

  • Research design —should be selected based on the research question and the data required. Common research designs include experimental, quasi-experimental, correlational, descriptive, and exploratory.
  • Research method —this can be quantitative, qualitative, or mixed-method.
  • Reason for selecting a specific methodology —explain why this methodology is the most suitable to answer your research problem.
  • Research instruments —explain the research instruments you plan to use, mainly referring to the data collection methods such as interviews, surveys, etc. Here as well, a reason should be mentioned for selecting the particular instrument.
  • Sampling —this involves selecting a representative subset of the population being studied.
  • Data collection —involves gathering data using several data collection methods, such as surveys, interviews, etc.
  • Data analysis —describe the data analysis methods you will use once you’ve collected the data.
  • Research limitations —mention any limitations you foresee while conducting your research.
  • Validity and reliability —validity helps identify the accuracy and truthfulness of the findings; reliability refers to the consistency and stability of the results over time and across different conditions.
  • Ethical considerations —research should be conducted ethically. The considerations include obtaining consent from participants, maintaining confidentiality, and addressing conflicts of interest.

Streamline Your Research Paper Writing Process with Paperpal

The methods section is a critical part of a research paper; other researchers use it to understand your findings and to replicate your work in their own research. However, it is usually also the most difficult section to write. This is where Paperpal can help you overcome writer’s block and create the first draft in minutes with Paperpal Copilot, its secure generative AI feature suite.

With Paperpal you can get research advice, write and refine your work, rephrase and verify the writing, and ensure submission readiness, all in one place. Here’s how you can use Paperpal to develop the first draft of your methods section.  

  • Generate an outline: Input some details about your research to instantly generate an outline for your methods section.
  • Develop the section: Use the outline and suggested sentence templates to expand your ideas and develop the first draft.
  • Paraphrase and trim: Get clear, concise academic text with paraphrasing that conveys your work effectively and word reduction to fix redundancies.
  • Choose the right words: Enhance text by choosing contextual synonyms based on how the words have been used in previously published work.
  • Check and verify text: Make sure the generated text showcases your methods correctly, has all the right citations, and is original and authentic.

You can repeat this process to develop each section of your research manuscript, including the title, abstract and keywords. Ready to write your research papers faster, better, and without the stress? Sign up for Paperpal and start writing today!

Frequently Asked Questions

Q1. What are the key components of research methodology?

A1. A good research methodology has the following key components:

  • Research design
  • Data collection procedures
  • Data analysis methods
  • Ethical considerations

Q2. Why is ethical consideration important in research methodology?

A2. Ethical considerations are important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly state the ethical norms and standards followed during the research and mention whether the study was approved by an institutional review board. The following ten points are important principles related to ethical considerations [10]:

  • Participants should not be subjected to harm.
  • Respect for the dignity of participants should be prioritized.
  • Full consent should be obtained from participants before the study.
  • Participants’ privacy should be ensured.
  • Confidentiality of the research data should be ensured.
  • Anonymity of individuals and organizations participating in the research should be maintained.
  • The aims and objectives of the research should not be exaggerated.
  • Affiliations, sources of funding, and any possible conflicts of interest should be declared.
  • Communication in relation to the research should be honest and transparent.
  • Misleading information and biased representation of primary data findings should be avoided.

Q3. What is the difference between methodology and method?

A3. Research methodology is different from a research method, although the two terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance, surveys, questionnaires, and interviews.

Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.


  1. Research methodologies. Pfeiffer Library website. Accessed August 15, 2023. https://library.tiffin.edu/researchmethodologies/whatareresearchmethodologies
  2. Types of research methodology. Eduvoice website. Accessed August 16, 2023. https://eduvoice.in/types-research-methodology/
  3. The basics of research methodology: A key to quality research. Voxco website. Accessed August 16, 2023. https://www.voxco.com/blog/what-is-research-methodology/
  4. Sampling methods: Types with examples. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/types-of-sampling-for-social-research/
  5. What is qualitative research? Methods, types, approaches, examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-qualitative-research-methods-types-examples/
  6. What is quantitative research? Definition, methods, types, and examples. Researcher.Life blog. Accessed August 15, 2023. https://researcher.life/blog/article/what-is-quantitative-research-types-and-examples/
  7. Data analysis in research: Types & methods. QuestionPro website. Accessed August 16, 2023. https://www.questionpro.com/blog/data-analysis-in-research/#Data_analysis_in_qualitative_research
  8. Factors to consider while choosing the right research methodology. PhD Monster website. Accessed August 17, 2023. https://www.phdmonster.com/factors-to-consider-while-choosing-the-right-research-methodology/
  9. What is research methodology? Research and writing guides. Accessed August 14, 2023. https://paperpile.com/g/what-is-research-methodology/
  10. Ethical considerations. Business research methodology website. Accessed August 17, 2023. https://research-methodology.net/research-methodology/ethical-considerations/



How To Choose Your Research Methodology

Qualitative vs Quantitative vs Mixed Methods

By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021

Without a doubt, one of the most common questions we receive at Grad Coach is “How do I choose the right methodology for my research?”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!

In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.

Overview: Choosing Your Methodology

  • Understanding the options: qualitative research, quantitative research, and mixed methods-based research
  • Choosing a research methodology: nature of the research, research area norms, and practicalities


1. Understanding the options

Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative, quantitative and mixed methods-based research. Each of these options takes a different methodological approach.

Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words, descriptions, concepts or ideas – while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them.

Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture. In contrast, quantitative methods are usually used to confirm or test hypotheses. In other words, they have distinctly different purposes. The comparison below highlights a few of the key differences between qualitative and quantitative research.

Qualitative research:

  • Uses an inductive approach
  • Is used to build theories
  • Takes a subjective approach
  • Adopts an open and flexible approach
  • The researcher is close to the respondents
  • Interviews and focus groups are often used to collect word-based data
  • Generally draws on small sample sizes
  • Uses qualitative data analysis techniques (e.g. content analysis, thematic analysis)

Quantitative research:

  • Uses a deductive approach
  • Is used to test theories
  • Takes an objective approach
  • Adopts a closed, highly planned approach
  • The researcher is disconnected from respondents
  • Surveys or laboratory equipment are often used to collect number-based data
  • Generally requires large sample sizes
  • Uses statistical analysis techniques to make sense of the data

Mixed methods-based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.

In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach , unless the nature of their study truly warrants a mixed-methods approach.

The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job. 

Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.


2. How to choose a research methodology

To choose the right research methodology for your dissertation or thesis, you need to consider three important factors. Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling, data collection methods and analysis techniques (we discuss these separately in other posts).

The three factors you need to consider are:

  • The nature of your research aims, objectives and research questions
  • The methodological approaches taken in the existing literature
  • Practicalities and constraints

Let’s take a look at each of these.

Factor #1: The nature of your research

As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions.

But what types of research exist?

Broadly speaking, research can fall into one of three categories:

  • Exploratory – getting a better understanding of an issue and potentially developing a theory regarding it
  • Confirmatory – confirming a potential theory or hypothesis by testing it empirically
  • A mix of both – building a potential theory or hypothesis and then testing it

As a rule of thumb, exploratory research tends to adopt a qualitative approach, whereas confirmatory research tends to use quantitative methods. This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilize a mixed-methods approach.


Let’s look at an example in action.

If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.

If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs.

So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims, objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.

The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.

If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.


Factor #2: The disciplinary norms

Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.

A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments.
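
As a side note, Cronbach’s alpha itself follows a standard formula, alpha = (k/(k−1)) · (1 − sum of item variances / variance of total scores), and can be computed directly. The sketch below is illustrative (the function name and the toy Likert scores are invented for the example); in practice you would typically rely on an established statistics package.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: one list of scores per survey item, all over the same respondents.
    Implements: alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
    """
    def variance(xs):  # sample variance (n - 1 denominator)
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 5-point Likert responses: 3 items, 4 respondents.
scores = [[3, 4, 3, 5], [2, 4, 3, 4], [3, 5, 4, 5]]
print(round(cronbach_alpha(scores), 2))  # → 0.96
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency, which is why published scales reporting high alphas are attractive to reuse.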

Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.

Factor #3: Practicalities

When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design) and doing what’s practical, given your constraints. This is the nature of doing research and there are always trade-offs, as with anything else.

But what constraints, you ask?

When you’re evaluating your methodological options, you need to consider the following constraints:

  • Data access
  • Time
  • Money
  • Equipment and software
  • Your knowledge and skills

Let’s look at each of these.

Constraint #1: Data access

The first practical constraint you need to consider is your access to data. If you’re going to be undertaking primary research, you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews, you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.

If you want to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require to achieve statistically significant results. For many statistical tests, a sample of a few hundred respondents is typically needed to develop convincing conclusions.
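
As a rough sense-check on “a few hundred respondents”, the standard formula for the sample size needed to estimate a population proportion, n = z²·p(1−p)/e², can be computed directly. The sketch below is illustrative (the function name and defaults are ours, not from the original post); 95% confidence (z ≈ 1.96), a ±5% margin of error, and the conservative p = 0.5 yield the familiar figure of 385.

```python
import math

def sample_size_for_proportion(z=1.96, margin_of_error=0.05, p=0.5):
    """Minimum sample size to estimate a population proportion.

    z: z-score for the desired confidence level (1.96 for 95%).
    margin_of_error: acceptable error, e.g. 0.05 for +/- 5 percentage points.
    p: expected proportion; 0.5 maximizes p*(1-p), the conservative choice.
    """
    n = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

print(sample_size_for_proportion())                      # → 385
print(sample_size_for_proportion(margin_of_error=0.03))  # → 1068
```

Note how quickly the requirement grows as the margin of error tightens, which is exactly the kind of feasibility check worth doing before committing to a survey-based design.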

So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.

Constraint #2: Time

The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements. Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.

Another practical implication of time limits is that it will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional. For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon.

Constraint #3: Money

As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost near zero to execute, others may require a substantial budget.

Some of the costs that may arise include:

  • Software costs – e.g. survey hosting services, analysis software, etc.
  • Promotion costs – e.g. advertising a survey to attract respondents
  • Incentive costs – e.g. providing a prize or cash payment incentive to attract respondents
  • Equipment rental costs – e.g. recording equipment, lab equipment, etc.
  • Travel costs
  • Food & beverages

These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.


Constraint #4: Equipment & software

Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.

Constraint #5: Your knowledge and skillset

The final practical constraint is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.

Some of the questions you should ask yourself are:

  • Am I more of a “numbers person” or a “words person”?
  • How much do I know about the analysis methods I’ll potentially use (e.g. statistical analysis)?
  • How much do I know about the software and/or hardware that I’ll potentially use?
  • How excited am I to learn new research skills and gain new knowledge?
  • How much time do I have to learn the things I need to learn?

Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.

So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.

Recap: Choosing a methodology

In this post, we’ve discussed how to go about choosing a research methodology. The three major deciding factors we looked at were:

  • The nature of your research (exploratory, confirmatory, or a combination of both)
  • Research area norms
  • Practicalities (data access, time, money, equipment and software, and your knowledge and skillset)

If you have any questions, feel free to leave a comment below. If you’d like a helping hand with your research methodology, check out our 1-on-1 research coaching service , or book a free consultation with a friendly Grad Coach.

  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw   ORCID: orcid.org/0000-0001-5855-5461 1 , 2 , 3 ,
  • Daeria O. Lawson 1 ,
  • Livia Puljak 4 ,
  • David B. Allison 5 &
  • Lehana Thabane 1 , 2 , 6 , 7 , 8  

BMC Medical Research Methodology volume  20 , Article number:  226 ( 2020 ) Cite this article


Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.


The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1: Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. described adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. described the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
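The simple random and stratified draws described above can be sketched in a few lines of Python. The sampling frame below is a purely hypothetical set of research reports (the record structure, group labels and counts are illustrative assumptions, not taken from any cited study):

```python
import random

def simple_random_sample(reports, k, seed=0):
    """Draw a simple random sample of k research reports from the frame."""
    rng = random.Random(seed)
    return rng.sample(reports, k)

def stratified_sample(reports, key, per_stratum, seed=0):
    """Draw an equal-sized random sample from each stratum (e.g. review type),
    which guards against under-representing the smaller group."""
    rng = random.Random(seed)
    strata = {}
    for report in reports:
        strata.setdefault(key(report), []).append(report)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, per_stratum))
    return sample

# Hypothetical sampling frame: 90 Cochrane and 10 non-Cochrane reviews.
frame = [{"id": i, "source": "Cochrane" if i < 90 else "non-Cochrane"}
         for i in range(100)]

srs = simple_random_sample(frame, 10)          # may under-represent non-Cochrane
strat = stratified_sample(frame, lambda r: r["source"], 5)  # 5 from each group
```

On average, a simple random sample of 10 from this frame would contain only one non-Cochrane review; the stratified draw guarantees five from each group, enabling a between-group comparison.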

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting and the use of post hoc methodologies to embellish results, and help to avoid duplication of effort [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages of trying to publish protocols include delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in a scholarly journal could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How should I appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]: biases related to selection, to the comparability of groups, and to the ascertainment of exposures or outcomes. In practice, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

Comparing two groups

Determining a proportion, mean or another quantifier

Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
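A precision-based justification of this kind can be illustrated with the standard sample-size formula for estimating a proportion. This is a generic confidence-interval approach, not the exact calculation used by El Dib et al., and the expected proportion and margin below are illustrative assumptions:

```python
import math

def n_for_proportion(p, margin, z=1.96):
    """Number of articles needed to estimate an expected proportion p
    with a given margin of error at ~95% confidence (z = 1.96)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Expecting ~30% of trials to report the item of interest, and aiming
# for a margin of error of +/- 5 percentage points:
n_for_proportion(0.30, 0.05)  # → 323 articles
```

If the number of eligible articles in the sampling frame exceeds this figure, a random sample of that size can be drawn; otherwise the entire frame should be used.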

Q: What should I call my study?

A: Other terms which have been used to describe or label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review”, as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
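The cost of ignoring clustering can also be quantified informally with the usual design-effect approximation, deff = 1 + (m - 1) × ICC, where m is the cluster size and ICC the intraclass correlation. The cluster sizes and ICC below are illustrative assumptions, not estimates from any cited study:

```python
def design_effect(cluster_size, icc):
    """Variance inflation incurred by treating clustered articles
    (e.g. articles within journals) as if they were independent."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """Approximate number of 'independent' articles the clustered
    sample is actually worth."""
    return n / design_effect(cluster_size, icc)

# 200 articles sampled as 20 per journal from 10 journals, ICC = 0.10:
deff = design_effect(20, 0.10)              # ≈ 2.9
ess = effective_sample_size(200, 20, 0.10)  # ≈ 69 independent articles
```

Standard errors computed as if all 200 articles were independent would be too small by a factor of roughly the square root of the design effect; the marginal, fixed or mixed effects models mentioned above address this formally.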

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. Much like systematic reviews, this area will likely see rapid advances in machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others have found no such association [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry-funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, restricting the sample by JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research, including the Cumulative Index to Nursing & Allied Health Literature (CINAHL), have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies on trials published in high impact cardiology journals cannot be assumed to be applicable to trials in other fields. However, investigators must ensure that their sample truly represents the target population, either by a) conducting a comprehensive and exhaustive search, or b) using an appropriate and justified, randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine (n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM (n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM (n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.

Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

To inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Richie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croituro et al. report on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe or compare methods, and to examine the factors associated with the choice of methods. For example, Muller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].

Methodological studies that are analytical

Some methodological studies are analytical, wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies, all of these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a certain topic. Systematic sampling can also be used when random sampling is challenging to implement.
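The two sampling strategies above can be sketched in a few lines. This is a minimal illustration with a hypothetical sampling frame; the identifiers, frame size, sample size, and seed are all invented for the example.

```python
import random

# Hypothetical sampling frame of eligible research reports.
frame = [f"report_{i:03d}" for i in range(1, 201)]  # 200 eligible reports
sample_size = 20
rng = random.Random(2020)  # fixed seed for reproducibility

# Simple random sample: every report has the same chance of selection.
random_sample = rng.sample(frame, k=sample_size)

# Systematic sample: every k-th report after a random start, a practical
# alternative when random sampling is hard to implement.
k = len(frame) // sample_size
start = rng.randrange(k)
systematic_sample = frame[start::k]
```

Either sample would then be screened and data-extracted in the usual way; the choice affects how far the findings generalize to the target population.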

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. the full manuscript or the abstract of a study) as the unit of analysis, and inferences can be made at the study level. Both published and unpublished research-related reports can be studied; these may include articles, conference abstracts, registry entries, etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
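When the unit of analysis is an item that can occur several times per article, extraction naturally produces one row per item, grouped by report, so results can be summarized at either level. A minimal sketch with invented identifiers and data:

```python
from collections import defaultdict

# Hypothetical extraction: one row per planned subgroup analysis,
# keyed by the review it appears in (all identifiers invented).
items = [
    ("review_01", "age"), ("review_01", "sex"),
    ("review_02", "baseline risk"),
    ("review_03", "age"), ("review_03", "dose"), ("review_03", "region"),
]

# Group item-level rows by report.
per_review = defaultdict(list)
for review_id, subgroup in items:
    per_review[review_id].append(subgroup)

n_reviews = len(per_review)   # unit of analysis: the research report
n_analyses = len(items)       # unit of analysis: the design/analysis item
```

Keeping both counts explicit avoids conflating the two units when reporting, e.g. "3 reviews planned 6 subgroup analyses."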

This framework is outlined in Fig.  2 .

Figure 2. A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M, editor. A dictionary of epidemiology. 5th ed. Oxford: Oxford University Press; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20, 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7

Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research

BMC Medical Research Methodology

ISSN: 1471-2288

Library & Information Management

Research methodology | Importance & Types of Research Methodology in Research

Posted by Md. Harun Ar Rashid | Mar 28, 2022 | Research Methodology

Research methodology

Research methodology is a collective term for the structured process of conducting research. Many different methodologies are used in various types of research, and the term usually covers research design, data gathering, and data analysis. Research methodology seeks to explain why a research study was undertaken, how the research problem was defined, in what way and why the hypothesis was formulated, what data were collected and by what particular method, and why a particular technique of data analysis was used. These and a host of similar questions are usually answered when we talk of the research methodology of a research problem or study.

In simple terms, research methodology gives a clear-cut idea of how the researcher is carrying out his or her research. It provides the platform for mapping out the work at the right point in time and advancing it according to a solid plan. Moreover, research methodology guides the researcher to engage actively in his or her particular field of inquiry. The aim of the research and the research topic will not always coincide, since the aim varies with the objectives and flow of the research, but adopting a suitable methodology keeps them aligned.

From selecting the topic to carrying out the research, the methodology keeps the researcher on the right track, and the entire research plan is built on it. Through the methodology, the external environment also shapes the research: it informs the setting of the right research objective, then the literature review, then the chosen method of analysis (for example, interviews or questionnaires) from which findings are obtained, and finally the conclusions the research draws.

Figure: The Research Methodology Framework

Source: Hentschel (1999)

The research methodology also constitutes the internal environment: understanding and identifying the right type of research, strategy, philosophy, time horizon, and approach, followed by the right procedures and techniques for the research at hand. The methodology acts as the nerve center because the entire research is bounded by it; to perform good research work, both the internal and external environment must follow the right research methodology process.

The system of collecting data for research projects is also known as research methodology. The data may be collected for either theoretical or practical research; for example, management research may be strategically conceptualized along with operational planning methods and change management. Important factors in research methodology include the validity, reliability, and ethics of the research data; most of the work is finished by the time the analysis of the data is complete. Data collection is followed by a research design, which may be either experimental or quasi-experimental. The last two stages are data analysis and writing the research paper, in which results are organized carefully into graphs and tables so that only important, relevant data are shown.

Importance of Research Methodology in Research

It is necessary for a researcher to design a research methodology for the problem chosen; note that even when the research method considered for two problems is the same, the research methodology may differ. The researcher needs to know not only the research methods necessary for the research undertaken but also the methodology. For example, a researcher needs to know how to calculate the mean, variance, and distribution function for a set of data, how to find a solution to a physical system described by a mathematical model, how to determine the roots of algebraic equations, and how to apply a particular method, but also needs to know (i) which method is suitable for the chosen problem, (ii) what the order of accuracy of a method's result is, and (iii) what the efficiency of the method is. Consideration of these aspects constitutes a research methodology. More precisely, research methods help us find a solution to a problem, whereas research methodology is concerned with explaining the following:

(1) Why is a particular research study undertaken?

(2) How did one formulate a research problem?

(3) What types of data were collected?

(4) What particular method has been used?

(5) Why was a particular technique of analysis of data used?

The study of research methods provides the training needed to apply them to a problem, while the study of research methodology provides the necessary training in choosing the research methods, materials, scientific tools, and techniques relevant to the problem chosen.

Types of Research Methodologies

Research methodologies can be quantitative or qualitative. Ideally, comprehensive research should incorporate both qualitative and quantitative methodologies, but this is not always possible, usually due to time and financial constraints. Research methodologies are generally used in academic research to test hypotheses or theories. A good design should ensure the research is valid, i.e. that it clearly tests the hypothesis and not extraneous variables, and reliable, i.e. that it yields consistent results every time.

Qualitative Research Methodology: This is a highly subjective research discipline, designed to look beyond the percentages to gain an understanding of feelings, impressions, and viewpoints.

Key Characteristics of Qualitative Research

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the setting.
  • The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • Qualitative researchers attend to the experience as a whole, not as separate variables. The aim of qualitative research is to understand experience as unified.
  • Qualitative methods are appropriate to the above statements. There is no one general method.
  • For many qualitative researchers, the process entails appraisal of what was studied.
  • Qualitative research implies a direct concern with experience as it is ‘lived’, ‘felt’, or ‘undergone’, and therefore aims to understand experience as nearly as possible as its participants feel or live it.

Quantitative Research Methodology: As the term suggests, this methodology is concerned with the collection and analysis of data in numeric form. It tends to emphasize relatively large-scale and representative sets of data, and is often, falsely in our view, presented or perceived as being about the gathering of ‘facts’.

Key Characteristics of Quantitative Research

  • Control: This is the most important element because it enables the scientist to identify the causes of his or her observations. Experiments are conducted in an attempt to answer certain questions: why something happens, what causes some event, or under what conditions an event occurs. Control is necessary to provide unambiguous answers to such questions. To answer questions in education and social science, we must eliminate the simultaneous influence of many variables to isolate the cause of an effect; controlled inquiry is essential here, because without it the cause of an effect cannot be isolated.
  • Operational Definition: This means that terms must be defined by the steps or operations used to measure them. Such a procedure is necessary to eliminate confusion in meaning and communication. Consider the statement ‘Anxiety causes students to score poorly on tests.’ One might ask, ‘What is meant by anxiety?’ Stating that anxiety refers to being tense, or some other such term, only adds to the confusion. However, stating that anxiety refers to a score over a criterion level on an anxiety scale enables others to know exactly what you mean by anxiety. Stating an operational definition forces one to identify the empirical referents of terms; in this manner, ambiguity is minimized. Similarly, introversion may be defined as a score on a particular personality scale, hunger as so many hours since the last feeding, and social class by occupation.
  • Replication: To be replicable, the data obtained in an experiment must be reliable; that is, the same result must be found if the study is repeated. If observations are not repeatable, our descriptions and explanations are considered unreliable.
  • Hypothesis Testing: The systematic formulation of a hypothesis and its subjection to an empirical test.
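The idea of an operational definition can be sketched in a few lines of code. The criterion value below is invented for illustration and is not drawn from any real anxiety instrument:

```python
# Hypothetical operational definition: "anxiety" is defined operationally
# as a score above a criterion level on an anxiety scale.
# The criterion value is invented for illustration only.
ANXIETY_CRITERION = 30

def is_anxious(scale_score: int) -> bool:
    """Operational definition: anxious iff the score exceeds the criterion."""
    return scale_score > ANXIETY_CRITERION

print(is_anxious(42))  # a score of 42 meets the operational definition
print(is_anxious(12))  # a score of 12 does not
```

The point is that the definition is reduced to a measurable operation (a score compared against a criterion), so two researchers applying it to the same data will reach the same classification.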

About The Author

Md. Harun Ar Rashid, former student at Rajshahi University.


What is a Cash Flow Statement?

Hamish Prince, Solutions Manager

While you might have heard the term before, or have a basic grasp of its importance within your organization, you might still be asking yourself: What is a cash flow statement? If you’ve been tasked with working on this important financial document, it’s important that you fully understand both its purpose and what is reported in the statement.

By understanding a cash flow statement, you will be able to better assess whether your company has enough cash to meet its short-term obligations, evaluate the efficiency of core operations in generating cash, and evaluate the potential returns and risks associated with investment opportunities. 

This article should help you understand the essentials of cash flow reporting, and how leveraging modern financial management platforms like Workiva can simplify cash flow analysis in financial management.

What is a cash flow statement?

Cash flow statement: definition.

The cash flow statement provides information about the cash inflows and outflows of a business during a specific period, typically monthly, quarterly, or annually. 

It’s split up into three main sections: operating activities, investing activities, and financing activities, presenting a summary of how cash has been generated and spent by a company.

The statement provides a comprehensive view of a company’s financial performance and financial positioning, complementing an organization’s income statement and balance sheet. It provides insight into a company’s liquidity, operational efficiency, investment potential, and overall financial health during financial reporting and financial analysis. 

You can read How to Prepare Statements of Cash Flows for some helpful tips on how to prepare a cash flow statement correctly. 

The components of a cash flow statement

It’s important to understand all three elements of a cash flow statement to grasp how the cash reporting process works.

Operating activities

These appear at the top of a cash flow statement and detail ongoing business activities such as sales and manufacturing. This section is designed to show where a company gets its cash from and how it uses that money during any given period of time.

Investing activities

While the operating activities section of a cash flow statement is concerned with a company’s day-to-day income, the investing activities section looks at long-term cash usage, such as buying or selling property or essential equipment. It also looks at sales of a division, or a cash out, from a merger or acquisition.

Financing activities

The final section included within the statement details the flow of cash between a business, its owners, and its creditors. It shows how a business raises capital and pays back its investors, including activities like issuing and selling stock, paying cash dividends, and adding loans.

The methods for preparing a cash flow statement

There are two methods for preparing a cash flow statement: the direct method and the indirect method.

Direct method

While less common than the indirect method, the direct method has its advantages. In fact, even though both methods are accepted under Generally Accepted Accounting Principles (GAAP) and International Financial Reporting Standards (IFRS), both sets of guidelines actually encourage the use of the direct method.

Instead of modifying the operating section from accrual accounting to a cash basis, the direct method uses actual cash inflows and outflows—such as cash received from customers, cash paid to suppliers and employees, and cash paid for other operating expenses—from the company’s operations.

Indirect method

The more commonly used indirect method starts with net income and then adjusts for non-cash transactions and changes in working capital to arrive at the cash flow from operating activities. Indirect cash flow statements help stakeholders understand how company operations will contribute to the company’s current cash flow.
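As a rough illustration of the indirect method described above, the sketch below starts from net income, adds back a non-cash expense, and adjusts for changes in working capital; all figures are invented:

```python
# Indirect-method sketch (figures invented for illustration):
# start from net income, add back non-cash items, adjust for working capital.
net_income = 100_000
depreciation = 15_000            # non-cash expense: added back
increase_in_receivables = 8_000  # sales not yet collected in cash: subtracted
increase_in_payables = 5_000     # expenses not yet paid in cash: added back

operating_cash_flow = (
    net_income
    + depreciation
    - increase_in_receivables
    + increase_in_payables
)
print(operating_cash_flow)  # 112000
```

The same operating cash flow figure would result from the direct method; the two methods differ only in how the operating section is derived and presented.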

Understanding cash flow: positive vs. negative

After putting together a cash flow statement, you’ll have a better idea of what is going on financially throughout your business, but what does a cash flow statement show? Put simply, it is a report of how much cash is flowing into or out of a business over a specified period.

If you have a positive cash flow, you have more cash flowing into the business than is going out of it. On the other hand, if you have a negative cash flow, more money is leaving your organization than is flowing in. 

While a positive or negative cash flow does not necessarily translate directly into profits gained or lost, it’s generally better to have a positive cash flow: an excess of cash allows companies to reinvest and find new ways to grow their businesses.

Interpreting cash flow statements

To properly read and analyze a cash flow statement, an organization needs to compare statements over multiple periods to see if there are any noticeable trends or warning signs.

When analyzing your cash flow statement, here are some of the key indicators to look for:  

  • Operating cash flow
  • Free cash flow
  • Cash flow from investing activities
  • Cash flow from financing activities
  • Changes in cash and cash equivalents
  • Cash flow ratios like operating cash flow ratio (the number of times a company can pay off current debts with cash generated within the same period), free cash flow yield (a comparison of the free cash flow per share a company is expected to earn against its market value per share) and cash flow to debt ratio (a company’s cash flow from operations to its total debt)
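The cash flow ratios listed above reduce to simple divisions. The sketch below uses invented figures purely for illustration:

```python
# Hypothetical figures to illustrate the cash flow ratios described above.
operating_cash_flow = 250_000
current_liabilities = 125_000
free_cash_flow_per_share = 4.0
market_value_per_share = 80.0
total_debt = 500_000

# Times current debts could be covered by cash generated in the same period
operating_cash_flow_ratio = operating_cash_flow / current_liabilities

# Expected free cash flow per share relative to market value per share
free_cash_flow_yield = free_cash_flow_per_share / market_value_per_share

# Cash flow from operations relative to total debt
cash_flow_to_debt = operating_cash_flow / total_debt

print(operating_cash_flow_ratio, free_cash_flow_yield, cash_flow_to_debt)
```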

Cash flow analysis techniques

There are various techniques for analyzing cash flow statements, and the most basic is comparing outlays to inflows to determine whether cash flow is positive or negative.

Another technique is to analyze the operating cash flow to net sales ratio—also known as revenue—which tells a business how much cash has been generated per sale.

One of the most important measures is free cash flow: capital expenditures subtracted from net operating cash flow. It matters because it shows how efficient a company is at generating cash. Investors use free cash flow to gauge whether a company is likely to have enough money to pay investors through dividends and share buybacks after funding operations and capital expenditures.
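A minimal sketch of the free cash flow calculation, using invented figures:

```python
# Free cash flow: capital expenditures subtracted from net operating
# cash flow. Figures are invented for illustration.
net_operating_cash_flow = 300_000
capital_expenditures = 120_000

free_cash_flow = net_operating_cash_flow - capital_expenditures
print(free_cash_flow)  # 180000
```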

The importance of cash flow planning and management

With a general understanding of what a cash flow statement is and why financial reporting is important , it’s imperative to also learn strategies for improving cash flow management. This can help you to maintain financial stability and support business growth. 

The best way to manage cash flow better is to look for ways to optimize operations, investing, and financing activities that will enhance cash flows. However, it can be complicated, as you need to closely monitor capital expenditures, optimize working capital through careful assessment, and implement capital management strategies. It’s also important to track key cash flow metrics and ratios to monitor liquidity and identify potential issues early. 

Fortunately, SaaS solutions like the Workiva platform make it easier than ever to improve cash flow planning and management. They allow for real-time data access, automating financial processes like invoicing, expense tracking, and accounts receivable management.

Cash flow statements vs. other financial statements

But what is the purpose of a cash flow statement compared to other financial statements such as income statements or balance sheets? 

Whereas a cash flow statement shows cash movements from operating, investing, and financing activities, income statements illustrate company profitability under accrual accounting rules. Balance sheets show company assets, liabilities, and shareholder equity. 

All three of these financial reporting methods complement each other and play an integrated role in assessing a business’s financial performance during a specific time frame.

However, cash flow statements provide a unique perspective on financial health. Unlike balance sheets and income statements, which primarily capture financial positions at a single point in time or over a period, cash flow statements track the actual cash inflows and outflows during the entire reporting period.

Real-world applications and case studies

In recent history, proper cash flow management has led to some overwhelmingly positive results. For example, Apple’s success story is often attributed to both its innovative products and its meticulous cash flow management.

By maintaining a strong focus on cash flow optimisation and liquidity management, Apple accumulated a significant cash reserve, allowing the company to survive economic downturns, invest in research and development and pursue strategic acquisitions like Beats Electronics and Shazam.

Microsoft has also focused its attention on accelerating cash flow generation from its core software products while expanding into high-growth areas like cloud computing and artificial intelligence. This approach to capital allocation and cash flow optimisation has enabled it to invest in innovation and make strategic acquisitions such as LinkedIn and GitHub.

Unfortunately, companies like Toys “R” Us, Blockbuster, Kodak and RadioShack have also faced significant challenges from poor cash flow management, which has led them to financial distress, disruptions to operations and even bankruptcy. 

Cash flow statement takeaways

Cash flow statements provide valuable insights, and it’s fundamental that organisations prioritise understanding and managing their cash flow to ensure long-term business health. Financial reporting tools like the Workiva platform help to make this task simpler, helping businesses to improve financial decision-making and drive their sustainable growth.


About the author: Hamish Prince, Solutions Manager

Hamish has helped clients improve their financial reporting processes for over 20 years. This has involved helping clients implement collaborative workflows, SEC compliance with HTML for their annual reports and Form 20-Fs, iXBRL for HMRC, and now iXBRL for the ESEF mandate. While these changes are often driven by external events, helping clients successfully implement change within their organisations is often the greatest challenge.


Identifying students with dyslexia: exploration of current assessment methods

  • Open access
  • Published: 29 August 2024


  • Johny Daniel, ORCID: orcid.org/0000-0002-5057-9933
  • Lauryn Clucas, ORCID: orcid.org/0009-0009-4439-9619
  • Hsuan-Hui Wang, ORCID: orcid.org/0000-0002-1877-910X


Early identification plays a crucial role in providing timely support to students with learning disabilities, such as dyslexia, in order to overcome their reading difficulties. However, there is significant variability in the methods used for identifying dyslexia. This study aimed to explore and understand the practices of dyslexia identification in the UK. A survey was conducted among 274 dyslexia professionals, including educational psychologists and dyslexia specialists, to investigate the types of assessments they employ, their approach to utilizing assessment data, their decision-making processes, and their conceptualization of dyslexia. Additionally, the study examined whether these professionals held any misconceptions or myths associated with dyslexia. Analysis of the survey data revealed substantial variability in how professionals conceptualize dyslexia, as well as variations in assessment methods. Furthermore, a significant proportion of the survey respondents subscribed to one or more misconceptions regarding dyslexia; the most common misconception identified among professionals was the belief that children with dyslexia read letters in reverse order. The findings highlight the need for standardized approaches to dyslexia identification and debunking prevailing misconceptions. The implications of these findings are discussed, emphasizing the importance of informed policy and practice in supporting students with dyslexia. Recommendations are provided to enhance consistency and accuracy in dyslexia identification, with the aim of facilitating early intervention and support for affected students.


Students identified with learning disabilities such as dyslexia are defined as those who demonstrate difficulties in reading skills compared to peers, despite opportunities to learn to read. Intervention efforts to help students overcome their reading challenges generally show greater effects of intervention in early primary grades compared to intervention efforts for students identified in secondary grades (Scammacca et al., 2013 , 2016 ). Indeed, a wealth of data supports early identification as one of the key factors in helping students overcome their reading challenges (see Fletcher et al., 2019 ).

However, the identification process and the criteria used to identify students with dyslexia have been a subject of ongoing debate (see Elliott & Grigorenko, 2014 ). While there is consensus in the field regarding what does not constitute dyslexia, there are debates over its specific definition and identification procedures (e.g., Elliott, 2020 ). Despite the critical importance of accurately identifying dyslexia, there remains a notable gap in the literature regarding the assessment processes used in the UK. Thus, the focus of this study is to investigate what assessments, benchmarks, and procedures assessors such as educational psychologists, dyslexia specialists, and school personnel use to identify school-age children with dyslexia in the UK.

Dyslexia identification

According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), dyslexia is defined as “…learning difficulties characterized by problems with accurate or fluent word recognition, poor decoding, and poor spelling abilities…” in the absence of other sensory, emotional, or cognitive disabilities (American Psychiatric Association, 2013, p. 67). Thus, the core observable deficits individuals with dyslexia present are difficulties in decoding and encoding (i.e., spelling) words. In this section, we provide a brief history of dyslexia identification procedures, outline the components that are directly and indirectly associated with dyslexia identification, and highlight some misconceptions that are controversial and may influence diagnostic guidelines and assessment procedures.

Dyslexia identification has a long and complex history. One of the first observations of an individual with dyslexia was made in the late 1800s. In this report, it was noted that a 14-year-old boy who was “bright” and observed to have normal intelligence demonstrated a remarkable inability to read and spell words in isolation (Morgan, 1896 ). In an attempt to identify the cause of dyslexia, early researchers alluded to theories that this inability to read was associated with some form of “congenital deficits” or “word blindness” or “derangements of visual memory” (Hinshelwood, 1896 ; Morgan, 1896 ). It is important to note that these early researchers were vital in raising awareness of conditions associated with the inability to read; however, their inferences were based on observational data and lacked sophisticated methods to support theories associated with cognitive or visual deficits as a cause for dyslexia.

Models of dyslexia identification

Over the years, researchers have explored different methods to identify students with learning disabilities such as dyslexia. Some of the earlier identification methods relied on hypotheses that visual deficits were a source of dyslexia. For instance, the visual-perceptual deficit model hypothesis (Willows et al., 1993) proposed that reading difficulties are caused by a dysfunction in the magnocellular pathway, which is responsible for processing fast-moving, low-contrast visual information. Based on correlational studies, this pathway was thought to play a crucial role in visual perception, including the ability to perceive letter shapes accurately. However, the causal nature of this pathway has not been established, and there is little empirical data to support the visual deficit hypothesis as an explanation for dyslexia (Fletcher et al., 2019; Iovino et al., 1998).

One assessment model which was predominantly used in the last century for identifying students with dyslexia and other learning disabilities, but has been refuted, was the IQ-reading ability discrepancy model. In this identification method, an individual’s assessment scores needed to demonstrate a discrepancy in their IQ test scores and their reading scores. This method aligned with the earliest observations where children were observed to have been “bright” with “normal intelligence” but demonstrated an inability to read. Overwhelming evidence has demonstrated the issues related to the validity of the process and poor reliability in identification (e.g., Fletcher et al., 1998 ; Francis et al., 2005 ; Meyer, 2000 ; Stanovich, 1991 ; Stuebing et al., 2002 ). Thus, current evidence does not support the use of this model in the identification process.

More recently, another model of discrepancy known as the patterns of cognitive strengths and weaknesses has been proposed for dyslexia identification (Hale et al., 2014). In this assessment model, individuals’ assessment scores need to demonstrate strengths in certain cognitive domains and weaknesses in other cognitive domains that are associated with low reading scores (Fenwick et al., 2015). However, multiple studies demonstrate a lack of reliability in identifying students with learning disabilities using this assessment method (Fletcher & Miciak, 2017; Kranzler et al., 2016; Maki et al., 2022; Miciak et al., 2015; Stuebing et al., 2012; Taylor et al., 2017). For instance, Maki et al. (2022), observing school psychologists’ dyslexia identification process under the patterns of cognitive strengths and weaknesses model, found that they spent a considerable amount of time and resources administering cognitive assessments associated with a low probability of accurate identification.

In addition to the unreliability of this assessment method, another reported challenge is that these assessment procedures are not very informative for educators who have to plan interventions to support students diagnosed with dyslexia (Taylor et al., 2017). For instance, one past meta-analysis reported that interventions targeting improvements in students’ cognitive abilities, such as working memory, have negligible effects on academic outcomes such as reading (Kearns & Fuchs, 2013).

Another discrepancy model concerns the learning opportunity and poor reading performance (de Jong, 2020 ), in which learning opportunity is viewed as adequate instruction received by students, and poor reading performance is considered as the unexpected underachievement. In other words, dyslexia is viewed as a discrepancy between reading growth and instructional quality. Based on this perspective, the response to intervention (RTI) model was proposed (Fletcher et al., 2019 ; D. Fuchs et al., 2012 ). In the RTI model, all students are screened for reading difficulties, their reading progress is then monitored, and increasingly intense interventions are provided according to their response to progress monitoring assessments (Fletcher & Vaughn, 2009 ). With this approach, a dyslexia diagnosis can only be fulfilled with severe reading lag and two additional conditions: (a) inadequate growth in reading in general instructional settings and (b) inadequate response to small group or one-on-one evidence-based reading interventions (de Jong, 2020 ; Fuchs et al., 2012 ).

The RTI model is credited with substantial advantages, including early intervention and academic prevention, reduction of over-identification, collaboration between general and special education, encouragement of evidence-based instruction, provision of educational services to students without labeling, and reduction of the costs associated with the identification process (Fletcher & Vaughn, 2009; D. Fuchs et al., 2012; L. S. Fuchs & Vaughn, 2012). However, the RTI model is not a panacea for dyslexia identification. Issues related to reliability and validity remain, including problems in identifying adequate instruction and response (Denton, 2012; Kauffman et al., 2011; O’Connor & Sanchez, 2011).

To address the problems of the above-mentioned discrepancy models, one possible solution is to integrate multiple criteria for dyslexia identification. Therefore, hybrid models have been proposed (Fletcher & Vaughn, 2009 ; Fletcher et al., 2012 ; Miciak & Fletcher, 2020 ; Rice & Gilson, 2023 ). The hybrid models may differ in the assessment implementation (Fletcher et al., 2012 ) and vary with or without the unexpectedness component (Rice & Gilson, 2023 ). Current recommendations suggest that a dyslexia diagnosis should be made based on (a) low achievement in reading, (b) inadequate response to evidence-based instructions, and (c) exclusion factors to ensure that low achievement is not due to another disability or contextual factors (Fletcher & Vaughn, 2009 ; Rice & Gilson, 2023 ).
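The three-part recommendation above amounts to a conjunction of criteria, which can be sketched as a simple boolean rule. This illustrates the logic only; the function and its inputs are hypothetical, not a clinical tool:

```python
# Hedged sketch of the hybrid-model decision rule described above:
# a dyslexia identification requires (a) low reading achievement,
# (b) inadequate response to evidence-based instruction, and
# (c) that exclusion factors do not explain the low achievement.
def meets_hybrid_criteria(low_achievement: bool,
                          inadequate_response: bool,
                          excluded_by_other_factors: bool) -> bool:
    return (low_achievement
            and inadequate_response
            and not excluded_by_other_factors)

print(meets_hybrid_criteria(True, True, False))   # all criteria met
print(meets_hybrid_criteria(True, False, False))  # responded to intervention
```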

Furthermore, assessments are always involved in identifying dyslexia, regardless of which model is applied. It is thus reasonable to consider issues related to the assessments themselves. For example, Miciak et al. (2016) suggested that it is more reliable to incorporate multiple reading assessments and to employ confidence intervals instead of rigid cut-off points during the process of dyslexia identification. In addition, culture and language factors should be taken into consideration whenever necessary when administering assessments (American Educational Research Association et al., 2014; Fletcher et al., 2019).
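The suggestion to use confidence intervals rather than rigid cut-off points can be sketched as follows; the standard error of measurement and the scores below are invented for illustration:

```python
# Illustrative sketch: compare a score's confidence interval to a cut-off
# rather than relying on the point score alone. SEM and scores are invented.
def ci_overlaps_cutoff(score: float, sem: float, cutoff: float,
                       z: float = 1.96) -> bool:
    """True if the 95% confidence interval around `score` contains `cutoff`."""
    lower, upper = score - z * sem, score + z * sem
    return lower <= cutoff <= upper

# A standard score of 87 with SEM 3 has a 95% CI of roughly [81.1, 92.9],
# which straddles a cut-off of 85 -- so a rigid cut-off decision is fragile.
print(ci_overlaps_cutoff(87, 3, 85))  # True: decision uncertain
print(ci_overlaps_cutoff(95, 3, 85))  # False: clearly above the cut-off
```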

Distal associations and proximal causes

In this section, we delve into the proximal causes and distal associations of dyslexia, drawing insights from Hulme and Snowling’s ( 2009 ) analogy of lung cancer. Emphasizing the significance of reliability and validity in the identification process and its relevance in instructional decision-making within the RTI or hybrid model framework, we aim to explore the key factors that contribute to a reliable identification of students with dyslexia.

Proximal causes. Proximal causes refer to factors that directly and immediately impact the outcome. Taking Hulme and Snowling’s ( 2009 ) lung cancer as the exemplar, the gene mutation in the lung tissue would be a direct and proximal cause of lung cancer. Based on this analogy, proximal causes of dyslexia refer to components that directly and immediately produce poor word reading or spelling. Several theoretical models of reading have posited that successful word reading/spelling can be achieved only when multiple proximal causes function together (e.g., Gough & Tunmer, 1986 ), such as the ability to manipulate sounds or phonological awareness, knowledge of letter-sound relationships or decoding skills, and reading fluency (Gough & Tunmer, 1986 ; McArthur & Castles, 2017 ). Failure in any of the above factors could be directly linked to failure in reading or spelling words accurately.

Distal associations. Distal associations refer to factors that have indirect impact on the result. In Hulme and Snowling’s ( 2009 ) example, cigarette smoke would be a distal link to lung cancer as it increases the risk of cancer. Regarding dyslexia, distal associations refer to cognitive components that are associated with individuals’ word reading or spelling but are not intrinsic components of reading. In the literature, examples of distal factors associated with reading are working memory, verbal memory, and attention (Burns et al., 2016 ; Feifer, 2008 ; McArthur & Castles, 2017 ).

Although some studies have argued that a comprehensive array of cognitive assessment data, including proximal and distal measures, would contribute to the development of suitable treatment for dyslexia (e.g., Feifer, 2008 ), other studies have shown that cognitive assessment data is not necessarily helpful for identification and intervention (Burns et al., 2016 ; Galuschka et al., 2014 ; McArthur & Castles, 2017 ). Previous studies have consistently supported the significance of proximal measures for identification and treatment compared to distal measures (Burns et al., 2016 ; Galuschka et al., 2014 ). In a meta-analysis that examined the effects of using cognitive data screening and designing interventions among 37 studies, although a small effect was found for distal cognitive measures (i.e., intelligence tests and memory assessments), larger effects were found for proximal measures (i.e., phonological awareness and reading fluency) (Burns et al., 2016 ). Another meta-analysis has also observed that cognitively focused interventions did not generalize to improvements in reading performance (Kearns & Fuchs, 2013 ). On the contrary, a proximal intervention, which focuses on the proximal causes of reading, such as phonics instruction and reading fluency training, has shown to be more effective (e.g., Daniel et al., 2021 ; Scammacca et al., 2016 ) than a distal intervention that centers on distal associations of reading, such as colored overlays and sensorimotor training (Galuschka et al., 2014 ).

Dyslexia misconceptions

The different identification models, and the evidence supporting or refuting them, have given rise to a series of misconceptions that have been reported in mainstream media and academic literature (Elliott & Grigorenko, 2014). Most of these misconceptions stem from procedures that have historical precedence but lack empirical data supporting their use in the identification process. Below we highlight some misconceptions that share the notion that dyslexia involves more than deficits in reading and spelling words.

Some portrayals of children with dyslexia claim that these children see letters and words reversed and that this is an indicator of dyslexia. Studies that have explored letter reversals have compared dyslexic and non-dyslexic individuals and demonstrated that letter reversals are characteristic of being at a certain stage of reading development rather than a core aspect of dyslexia; these studies have also reported no significant differences in letter reversals between dyslexic and non-dyslexic children and adults (Cassar et al., 2005; Peter et al., 2020). It is important to note that there is some empirical data to support the hypothesis that individuals with dyslexia misread words due to letter positioning. Some researchers have observed that individuals with dyslexia, when reading anagram words (e.g., smile and slime; tried and tired), make migration errors more frequently than control group peers, which impacts their word reading accuracy and their comprehension (Brunsdon et al., 2006; Friedmann & Rahamim, 2007; Kohnen et al., 2012). In these experiments, individuals with dyslexia might make migration errors wherein they read the word “bowls” as “blows,” and this decoding error also impacts their comprehension. However, it is important to highlight that migration errors are different from letter reversals, and we could not locate any studies that observe letter reversals solely in individuals with dyslexia.

Other common misconceptions that are not empirically supported are that dyslexic individuals demonstrate high levels of creativity (Erbeli et al., 2021) and sensory-motor difficulties (Kaltner & Jansen, 2014; Savage, 2004). For instance, Erbeli et al. (2021) reviewed 20 studies in their meta-analysis and reported that there was a lack of evidence to support the notion of creative benefits for individuals with dyslexia; there were no significant differences in levels of creativity between individuals with and without dyslexia.

There are also misguided recommendations for improving the reading skills of students with dyslexia that align with unsupported theories of visual-perceptual deficit. For instance, there is little evidence to recommend using colored overlays (Henderson et al., 2012; Suttle et al., 2018) or specific dyslexic fonts (Galliussi et al., 2020; Kuster et al., 2017; Joseph & Powell, 2022; Wery & Dilberto, 2016) to improve reading skills in students with dyslexia. For example, Galliussi et al. (2020) evaluated the impact of letter form, or different fonts, on typical and dyslexic individuals’ reading speed and accuracy. The authors reported no additional benefit of reading text in dyslexia-friendly fonts compared to common fonts for children with and without dyslexia.

Of concern is that assessors who adhere to these misconceptions when evaluating students for dyslexia may make erroneous judgments. Thus, in our study, we explore UK dyslexia assessors’ conceptualization of dyslexia and whether they consider these misconceptions to be indicators of dyslexia.

Literature on dyslexia identification assessment procedures from different countries

In the United States (US), a recent study on identifying school-age students with learning disabilities showed variability in identification criteria, assessments, and diagnostic labels across a wide range of surveyed educational professionals (Al Dahhan et al., 2021). In a survey of close to 1000 assessors, the authors reported that assessors used a variety of different criteria when evaluating assessment data and that individuals faced lengthy wait times to receive assessment and diagnostic results (Al Dahhan et al., 2021). Similarly, Benson et al. (2020) reported that school psychologists in the US used various identification frameworks, including outdated ones like intelligence-achievement discrepancy. These different frameworks resulted in varied identification decisions, impacting students’ access to support. In Norway, Andresen and Monsrud (2021) found that assessors reported consensus in the types of assessments used to identify students with dyslexia. However, their study also reported that assessors place heavy emphasis on students’ performance on intelligence tests and use reading assessments that lack reliable psychometric properties (Andresen & Monsrud, 2021). A recent systematic review of assessment practices to identify students with dyslexia reported that various dyslexia assessment practices were employed, encompassing cognitive discrepancy and response-to-intervention methods (Sadusky et al., 2021). The authors also note that most of the studies reviewed were conducted in the US, with very few studies exploring dyslexia assessment procedures in other countries (Sadusky et al., 2021). In the United Kingdom (UK), Russell et al. (2012) conducted a case study with one 6-year-old child who was assessed on multiple measures by four different professionals.
The authors reported a general lack of agreement among professionals on the assessment methodology, which led to different diagnoses of the child’s areas of need. However, given that this study included only one child, it is hard to generalize these findings to assessment practices in the UK.

These past studies on diagnostic procedures in dyslexia identification highlight discrepancies in the diagnostic process among assessors, leading to inconsistent identification approaches that can impact the services students receive to overcome their learning challenges. To ensure that students with additional needs gain timely access to services, it is essential that all such students are identified reliably. More importantly, it is vital to ensure that the procedures professionals undertake to identify students with dyslexia are not only reliable but also valid and aligned with current recommendations in the field. Furthermore, none of the past studies, to our knowledge, have explored methods of assessment for students who are English language learners in English-speaking countries, indicating a crucial area for future research to ensure equitable and effective diagnostic practices for this significant student population.

The UK context: dyslexia identification policy and practice

In the UK, the Equality Act ( 2010 ) legally protects individuals with disabilities from discrimination in society, including in educational settings. The Equality Act ( 2010 ) provides clarity that it is against the law to discriminate against someone because of “protected characteristics,” one of which is having a disability. “Disabled” is defined as having a physical or mental impairment that has substantial, long-term adverse effects on an individual’s ability to conduct day-to-day activities (Equality Act, 2010 ). However, neither dyslexia nor specific learning disabilities/difficulties are explicitly mentioned in the Equality Act.

More recently, the Children and Families Act 2014 provides regulations for the Special Educational Needs and Disability Code of Practice (Department of Education, 2014 ). This regulatory document mentions dyslexia as a condition associated with specific learning difficulties (SpLD). However, it does not provide a definition of what constitutes dyslexia and refers the reader to the Dyslexia-SpLD Trust for guidance. Thus, in the UK, there is no official guidance from policymakers on defining and identifying students with dyslexia or other learning difficulties.

It is also important to note that a variety of credentials relating to dyslexia assessment can be obtained in the UK. For example, the British Dyslexia Association (BDA) offers Associate Membership of the British Dyslexia Association (AMBDA), which is used as an indicator of professional competence in diagnostic assessment. To apply for AMBDA, individuals must have completed an AMBDA-accredited Level 7 postgraduate course. These courses are run by various dyslexia organizations, such as Dyslexia Action and Dyslexia Matters, and example courses include a Postgraduate Certificate in Specialist Assessment for Literacy-Related Difficulties and a Level 7 Diploma in Teaching and Assessing Learners with Dyslexia, Specific Learning Differences, and Barriers to Literacy. Completion of one of these courses can then lead to an Assessment Practising Certificate (APC). An APC indicates that an assessor has completed an AMBDA-accredited course and recognizes the knowledge and skills gained from it. This credential is especially important in the UK, as the Department of Education states that a diagnosis of dyslexia will only be accepted as part of a Disabled Students’ Allowance application if it is completed by an assessor holding an APC or by a registered psychologist. Because of this, the BDA recommends that all assessors hold an APC.

Study purpose

There is currently no clear guidance from policymakers in the UK on the definition and diagnostic procedures of dyslexia. The onus of developing diagnostic procedures and standards falls heavily on various independent professional organizations that develop their own criteria for assessments, conduct assessment procedures, and provide diagnostic information to individuals, their caregivers, and school personnel. Apart from one case study with a single participant (Russell et al., 2012), no previous study to our knowledge has explored how independent assessors identify school-age children with dyslexia in the UK. By providing a detailed exploration of current assessment methods in the UK, this research contributes significantly to the broader understanding of dyslexia identification. We explored the following research questions:

How do professional assessors identify students for dyslexia in the UK?

What is the common referral process for dyslexia assessment?

What types of assessments are used to identify dyslexia?

How are standardized measures and cut-off scores utilized in dyslexia diagnosis?

How many assessments are conducted, and how long does the assessment process take?

How do assessors make decisions regarding a dyslexia diagnosis?

What assessments are used to assess English language learners for dyslexia?

How do professionals conceptualize dyslexia?

What is dyslexia assessors’ level of confidence in the validity and reliability of their assessment procedures and their diagnostic judgment?

The study received ethical approval from the Ethics Committee at the first author’s university. All responses were anonymous, and no identifiable information was collected. Participants were able to exit the survey at any time if they no longer wished to participate.

Recruitment

A recruitment email was sent to various UK-based dyslexia and psychological associations. Four dyslexia associations based in the UK, together with two psychological associations, distributed the survey email and its accompanying link to their members, with the email being sent on one occasion. In addition to sharing the survey with dyslexia and psychological associations, we conducted online searches to identify potential participants. This involved searching for the terms “dyslexia assessor” and “dyslexia specialist” and specifying the region. The regions included in the search were the UK, England, Scotland, Wales, Northern Ireland, and the North East, North West, South East, and South West of England. These searches allowed us to identify personal websites for individuals offering dyslexia assessment, such as specialist teachers. These individuals were then contacted via the email listed on their website with an invitation to take part in the study and a link to the survey. These professionals were contacted once via email. All survey responses were collected over a 4-week period between January and February 2023.

Participants

To take part in the survey, participants had to work in a role that involved assessing students for dyslexia, such as a dyslexia specialist, specialist assessor, or educational psychologist. Participants were asked to indicate their current role and qualifications in identifying school-aged students suspected of having dyslexia. See Table  1 for participant demographic information.

Development of survey instrument

Based on past studies (e.g., Al Dahhan et al., 2021 ; Andresen & Monsrud, 2021 ; Benson et al., 2020 ), we developed a survey to explore how various professionals identify school-age students with dyslexia. The online survey (see Appendix A ) included four sections, which were “Demographic Information,” “Assessing and Identifying Students with Dyslexia,” “Conceptualising Dyslexia,” and “Thoughts on the Process of Assessment and Identification.” Before distributing the survey, feedback was obtained from professionals in the field, which resulted in slight changes to the wording of some questions. All survey questions were optional, and participants could choose to skip any of the survey items.

The “Demographic Information” section included nine questions about participants’ background, such as their highest degree and relevant qualifications, their role in identifying students with dyslexia and how long they have worked in this role, and the age groups of students they assess.

The “Assessing and Identifying Students with Dyslexia” section included 25 questions on participants’ assessment and identification process. It included questions about the different types of assessments (e.g., phonological awareness, vocabulary, working memory) they used to identify pupils with dyslexia, the standardized assessments they typically use, their use of benchmarks or cut-off points on these assessments, and their reasons for selecting these assessments. Participants were also asked about the referral process, such as reasons for referral, who generally begins the process, and the average time from referral to diagnosis. The survey also asked participants to report whether they assessed individuals who are English language learners and the language of the assessments used for this subgroup.

The “Conceptualising Dyslexia” section had 27 questions that addressed how respondents conceptualize and define dyslexia. The questions focused on the models that participants use to define dyslexia and the criteria they use to identify it. In this section, participants were shown a list of criteria and asked to indicate whether they would use these to identify dyslexia. These indicators fell under three subcategories: proximal causes of dyslexia, such as poor knowledge of letter names; distal associations of dyslexia, such as poor performance on working memory tasks; and myths or misconceptions, such as reading letters in reverse order or high levels of creativity.

The “Thoughts on the Process of Assessment and Identification” section had two questions that asked participants about their confidence in their assessment of a student having or not having dyslexia and their perceptions on the reliability of the process in helping them make decisions.

The survey included various types of question items. Many questions allowed respondents to select one or more multiple choice options from a list of choices, for example, questions about the types of assessments used to identify dyslexia or the reasons for referrals (e.g., “What types of assessments do you use to identify students with dyslexia? Choose all that apply.”). Some items used a Likert scale for responses, where participants rate their agreement or frequency of a particular behavior or belief, for example, questions about confidence in assessments (e.g., “How confident do you feel in your assessment of the child as having or not having a reading disability post your assessment? [0 = not confident at all; 10 = certain]”). Participants were also asked open-ended questions to elaborate on their choices such as how they used the assessment data in their diagnostic process.

Data analysis

We utilized an online polling website for the data collection phase. Upon completion of the data collection process, we downloaded all the collected data onto a spreadsheet. We used the dplyr package (Wickham et al., 2017 ) in R (R Core Team, 2021 ) for data cleaning and descriptive analyses.
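The cleaning-and-descriptives step described above used the dplyr package in R. As a purely illustrative sketch (not the authors’ actual code, and with made-up response values), the same kind of frequency summary can be expressed with the Python standard library, where skipped optional items are dropped before computing percentages:

```python
from collections import Counter

# Hypothetical survey responses (illustrative values, not the authors' data):
# one entry per respondent to a question such as "Do you use cut-off scores?".
# None marks a skipped item, since all survey questions were optional.
responses = ["no", "yes", "no", "no", None, "yes", "no", "no"]

# Data cleaning: drop skipped items.
answered = [r for r in responses if r is not None]

# Descriptive analysis: frequency and percentage of each response option.
counts = Counter(answered)
percents = {k: round(100 * v / len(answered), 2) for k, v in counts.items()}
print(percents)  # {'no': 71.43, 'yes': 28.57}
```

Percentages are computed over answered items only, which mirrors how survey reports typically handle optional questions.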

RQ1: how do professional assessors identify students for dyslexia in the UK?

Survey participants reported that the most common reason a parent or school refers a child for assessment is that the child’s reading proficiency is below average (62.50% and 59.00%, respectively). Many respondents also reported that parents and schools refer a child because the child is unresponsive to classroom reading instruction (65.50% and 35.00%, respectively). However, many children are also referred by their parents or school because their cognitive, motor, or visual skills are below average (34.00% and 24.50%, respectively), indicating that more distal indicators also inform referrals. Further reasons for referral provided by participants include students struggling with their studies despite showing good general ability, issues with writing and spelling, disparities between verbal and written work, struggling with the curriculum (e.g., working slowly, misreading questions), and running out of time on assessments. Table 2 also shows participants’ responses on the average amount of time from receiving a referral to an individual receiving a diagnosis. The majority (59%) of pupils received a diagnosis within 1 month of referral, while 30% received a diagnosis between 1 and 6 months after referral.

As shown in Table  3 , participants were asked to indicate the types of assessments that they use to identify students with dyslexia. Almost all respondents reported assessing reading-related constructs and phonological processing. A vast majority also reported assessing students on various distal measures such as working memory, verbal processing speed, cognitive ability, verbal memory, and reasoning skills. Additionally, Table  4 shows the frequency and types of reading assessments assessors use when conducting assessments with word reading and reading fluency assessments administered most frequently.

To understand participants’ use of standardized measures and cut-off scores, they were asked to report which assessments they use and how they use standardized assessment scores. Across our sample, 80 different standardized assessments were reported as being used during assessments. See Appendix B for a list of the most frequently used standardized assessments. Post assessment administration, a substantial majority (63%) of the participants reported not using cut-off scores on standardized assessments to diagnose dyslexia. In contrast, 36% reported utilizing cut-off scores on multiple assessments before completing their diagnostic report. Only one individual in our sample reported using a cut-off score on a single assessment prior to diagnosis.

When asked to explain how they use assessment scores, many reported using the assessments to get an overall picture of a student’s underlying cognitive ability and to look for patterns of strengths and weaknesses that are indicative of dyslexia. It was also often reported that assessors did not use these assessments in isolation, but considered them alongside background information, observations, and reports from parents and teachers. For example, many responses indicated that if a score was low but did not meet a cut-off point, they would consider the assessment scores in relation to background information to determine if, taken together, they indicate dyslexia. Some participants also reported using assessment scores to get a holistic view of strengths and weaknesses and to identify a “spiky” profile in order to build a picture of a student’s areas of need.

Participants were asked to report the minimum and maximum number of assessments they use during the identification process and the time the assessment takes. The minimum number of assessments ranged from 1 to 31, with a median of 6, and the maximum ranged from 1 to 50, with a median of 8 assessments.

The minimum assessment time ranged from 45 to 240 min, with a median of 150 min, and the maximum time ranged from 90 to 600 min, with a median of 220 min. These results indicate that there is large variation in the number of assessments used and assessment time, with some professionals, on the extreme end, assessing a child for up to 10 h on up to 50 assessments.

More than four in five respondents make their decisions on a diagnosis independently (85.00%). Of the remaining respondents who work with a team to make decisions, team members included educational psychologists, special education needs coordinators, teachers, other specialists, and families. These results suggest that the vast majority of professionals rely solely on their judgment to make decisions on a child’s diagnosis.

Among the 274 survey participants, a subset of 61 respondents indicated that they conduct assessments for individuals who are English language learners. Within the group of 61 assessors who assess English language learners, only a small number, specifically 5, stated that they conduct assessments in the individual’s first language; the remaining 56 reported using the same assessments that are administered to monolingual English-speaking students.

RQ2: how do professionals conceptualize dyslexia?

Familiarity

When presented with the DSM-V definition, which states that dyslexia is characterized by difficulties with reading, spelling, and writing, over two-thirds indicated that the definition was missing elements of cognitive, visual, or motor skills (68.16%). Also, almost a fifth of respondents indicated that the DSM-V definition was inaccurate (19.55%).

Dyslexia indicators and myths

Results indicated that almost two-thirds of participants use 5 or more of the proximal indicators (e.g., poor knowledge of letters or letter names, labored or error-prone reading fluency) to identify dyslexia (62.15%). Results also demonstrate that 7.91% agree with 5 or more misconceptions as indicators of dyslexia, and close to half of the survey participants endorse at least one misconception as an indicator of dyslexia (43.50%) (e.g., high levels of creativity, use of dyslexia fonts or colored overlays, seeing letters in reverse order).

Models of dyslexia

To understand how participants conceptualize dyslexia, they were asked what constitutes dyslexia. As shown in Table  5 , findings indicate that there is large variation in the way that professionals are conceptualizing dyslexia. A large majority reported dyslexia to be a phonological deficit while many also conceptualize dyslexia as a discrepancy between an individual’s reading skills and their cognitive ability (i.e., patterns of strengths and weakness model).

RQ3: what is dyslexia assessors’ level of confidence in the validity and reliability of their assessment procedures and their diagnostic judgment?

In assessing the confidence levels of dyslexia assessors, the study found that professionals generally felt confident in their diagnostic judgment following an assessment for a child’s potential dyslexia. On a scale from 0 (not confident at all) to 10 (certain), the confidence level was reported with a mean of 8.5, a standard deviation of 1.1, and a median of 9. Similarly, when evaluating the validity and reliability of the assessments they employed in making eligibility decisions, assessors reported high confidence levels, with a mean of 8.3, a standard deviation of 1.3, and a median of 9, on the same confidence scale.

In this study, we explored existing assessment methodologies for identifying school-age children with dyslexia in the UK. We aimed to solicit responses from assessors on their background, their assessment procedures, the types of assessments used, their decision-making process, the types of indicators they use during identification, and their conceptualization of dyslexia. Similar to past studies, there was a lack of consensus in assessors’ responses on various metrics.

Validity and reliability of current assessment methods for dyslexia identification

An important takeaway from this study is that most of the survey participants reported using reading assessments such as word reading, pseudoword reading, reading fluency, reading comprehension, and spelling in their dyslexia assessment process. These assessment methods align with current recommendations in the field to use academic measures to assess individuals for SpLDs such as dyslexia (e.g., Fletcher et al., 2019). A high percentage of respondents also used some form of writing assessment and/or oral language assessment when evaluating for dyslexia.

Similarly, a high percentage of survey respondents reported using a variety of different cognitive assessments when assessing for dyslexia. Respondents reported administering measures of working memory, general cognitive ability, verbal processing speed, verbal memory, reasoning skills, and visual temporal processing. Given that different assessors used a variety of cognitive assessments, it is important to highlight that this diversity may lead to the identification of varying patterns of strengths and weaknesses in individuals with dyslexia. As a consequence, this lack of consensus in the choice of cognitive assessments raises concerns about the reliability and consistency of the dyslexia identification process.

While past research has demonstrated correlations between cognitive measures and reading assessments, these methods have remained controversial. Little empirical data supports the benefits of cognitive assessments in informing intervention efforts. For instance, Stuebing et al. (2002), in their meta-analysis, demonstrated that after controlling for pretest reading scores, cognitive measures accounted for 1–2% of explained variance in students’ reading growth. More recently, a pilot study that explored the additional benefits of cognitive training reported no significant benefits of cognitive training on students’ reading outcomes. In this study (Goodrich et al., 2023), the authors assigned preschool children at risk of reading difficulties to an early literacy program, an early literacy program plus cognitive training, or a control condition. Both early literacy program groups outperformed controls on literacy measures. However, there were no significant differences in literacy outcomes between the literacy-only group and the literacy plus executive function training group. This study and past reviews consistently highlight the limited effects of cognitive training interventions on academic outcomes (Kearns & Fuchs, 2013). Given this evidence, it is important to question the rationale for administering cognitive assessments, as they do little to guide intervention efforts to support students’ reading growth.

Another area of discussion is the number of assessments assessors use to identify students with dyslexia. A general recommendation in the field is to use more than one assessment for identification, as a single measure may underrepresent a construct (Fletcher et al., 2019). The median minimum number of assessments reported by assessors was six, and the median maximum was eight. While this indicates a multi-faceted approach, the fact that almost two-thirds of the sample reported not using cut-off scores raises questions about how diagnostic decisions are made. While the avoidance of strict cut-off scores aligns with the understanding that word reading abilities exist on a continuum, the lack of their use raises questions about how assessors synthesize the results of multiple assessments to determine a diagnosis. Confidence intervals, which account for measurement error and provide a range of plausible values, offer a more accurate and inclusive approach to identifying reading difficulties (Miciak et al., 2016) and could potentially address this ambiguity. Thus, it was perplexing to see that most assessors were not making normative comparisons to guide their decision-making. Another challenge is that almost all assessors use a blend of academic (e.g., reading) and cognitive assessments (e.g., working memory) to identify strengths and weaknesses or to identify a “spiky” profile. Past research on evaluating patterns of strengths and weaknesses has demonstrated this process to be unreliable and lacking validity (Fletcher & Miciak, 2017; Maki et al., 2022).
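The confidence-interval approach cited above can be made concrete. A score’s standard error of measurement (SEM) is SD × √(1 − reliability), and a 95% interval is the observed score ± 1.96 × SEM. The sketch below uses illustrative numbers only (a reliability of .90 for a standard-score test with mean 100 and SD 15), not values from any specific assessment discussed in this paper:

```python
import math

def score_confidence_interval(observed, sd=15.0, reliability=0.90, z=1.96):
    """95% confidence interval around an observed standard score.

    SEM = SD * sqrt(1 - reliability); interval = observed +/- z * SEM.
    """
    sem = sd * math.sqrt(1 - reliability)
    return (observed - z * sem, observed + z * sem)

# Illustrative case: a student scores 84 against a hypothetical cut-off of 85.
low, high = score_confidence_interval(84)
print(round(low, 1), round(high, 1))  # 74.7 93.3
# The interval straddles the cut-off, so a rigid pass/fail decision on the
# point score alone is less defensible than it appears.
```

This is the sense in which an interval is more inclusive than a cut-off: two students scoring 84 and 86 have heavily overlapping intervals, so treating them as categorically different is hard to justify.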

There are no guidelines from policymakers in the UK on the holistic process of evaluating students’ assessment scores, raising concerns about the reliability of this process. This concern is supported by one past case study in the UK, which found that different professionals came to very different conclusions about a child’s areas of academic need based on their evaluation of the assessment data (Russell et al., 2012). Thus, the question remains: would different assessors come to different conclusions based on their own holistic evaluation of assessment data?

Our findings related to the variability in diagnostic procedures and conceptualization of dyslexia suggest a need for government policy to guide the assessment procedures for students with dyslexia. For example, in the United States, the Individuals with Disabilities Education Act (IDEA, US Department of Education, 2006) clearly states that “The Department does not believe that an assessment of psychological or cognitive processing should be required in determining whether a child has an SpLD. There is no current evidence that such assessments are necessary or sufficient for identifying SpLD. Further, in many cases, these assessments have not been used to make appropriate intervention decisions” (p. 46,651). Similar guidance is needed for more reliable identification processes in the UK.

Another important area to highlight is that one past study in the UK reported parental income to be a significant predictor of a child being diagnosed with dyslexia; the likelihood of being identified as dyslexic increases with higher income (Knight & Crick, 2021). For parents in the UK, assessing their child for dyslexia can cost anywhere between £500 and £700. This raises questions of equity and of who can afford these assessments, as 60% of households in the UK earn less than £799 per week (Office for National Statistics, 2023). Given the high costs of assessments and the post-pandemic cost-of-living crisis in the UK, we wonder how many households have the disposable income to pay for dyslexia assessments. We also wonder whether cognitive assessments are needed at all and, if not, whether reducing the number of assessments would help assessment institutions reduce costs and make assessments more equitable and accessible to the general public. It is important to note that the National Health Service in the UK does not cover the cost of dyslexia assessments; this cost has to be incurred by caregivers.

Assessor conceptualization of dyslexia

All survey participants (100%) reported that they are “very familiar” with dyslexia. However, it was perplexing to observe that only a small proportion of our sample reported agreeing with the DSM-V definition of dyslexia, which defines dyslexia as issues with word reading, reading fluency, and spelling words. When probed further on how they conceptualize dyslexia, the majority of assessors described it as a phonological deficit, inadequate decoding skills, and a lack of response to evidence-based reading instruction. However, a substantial proportion of the sample also conceptualized dyslexia as patterns of strengths and weaknesses or as a discrepancy between IQ and achievement. Our data suggest that although many study participants’ conceptualizations align with the DSM-V definition of dyslexia, they also have a strong commitment to cognitive assessments as an integral aspect of identification. This lack of consensus is consistent with past research on the lack of agreement about what constitutes dyslexia (e.g., Al Dahhan et al., 2021; Ryder & Norwich, 2019; Sadusky et al., 2021).

Additionally, we wanted to explore whether dyslexia assessors subscribe to myths or misconceptions about dyslexia. The common misconceptions that assessors reported as being an “indicator of dyslexia” were that individuals with dyslexia read letters in reverse order (61%), see letters jumping around (33%), have high levels of creativity (17%), have motor-skill issues or clumsiness (17%), and struggle to read words only when text is displayed in certain colors (15%) or fonts (12%). This suggests that many assessors draw on misconceptions when making decisions about dyslexia diagnosis, even though empirical data do not support these as indicators of dyslexia (e.g., Henderson et al., 2012; Kuster et al., 2017). Thus, dyslexia and psychological associations in the UK need to ensure that these misconceptions are directly addressed in their certification modules. This is especially important because a majority of respondents reported using assessment data holistically in their diagnostic procedure, so these misconceptions could influence assessors’ judgments and could potentially be associated with identification errors.

Assessor confidence

We observed that assessors generally reported high levels of confidence in the validity and reliability of the diagnostic process and of their diagnoses. This is consistent with previous findings in both educational (Maki et al., 2022) and clinical settings (Al Dahhan et al., 2021), where practitioners generally reported high confidence in their ability to identify students with specific learning disabilities/difficulties, especially assessors who had received more training. However, this reported confidence contrasts with the concerns raised in the present study: the questionable reliability and validity of the methods employed (such as patterns of strengths and weaknesses), the pervasive use of a variety of cognitive assessments, the lack of a framework for how assessment data are to be used for diagnosis, and the dyslexia misconceptions that a large proportion of the sample subscribes to. This discrepancy, echoing Maki et al.’s (2022) finding of a potential disconnect between accuracy and confidence, suggests that decision-making confidence may be misplaced if it is not underpinned by standardized and widely accepted identification methods. Hence, while assessors are confident in their diagnostic capabilities, this confidence may be problematic if the identification methods themselves are flawed or inconsistently applied. Further research exploring the relationship between training, experience, and diagnostic accuracy in this context is warranted.

The English language learner dilemma

There is little data in the research literature on dyslexia assessment practices for English language learners. In our survey, we asked UK dyslexia assessors whether they assessed individuals who were English language learners. Approximately 30% of our sample reported assessing English language learners for dyslexia. Within this subsample, a majority (92%) reported that they did not assess English language learners in their first language and generally used the same assessments they used for monolingual English speakers. This is a concern because assessing individuals with instruments administered in their second language may compromise the validity of assessors’ interpretation of assessment data.

While past researchers (Fletcher et al., 2019) recommend selecting assessments that are linguistically and culturally sensitive in order to make accurate inferences, there are practical challenges. For instance, some respondents reported that they had been unable to access assessments in students’ first language, despite asking their local authority for support. This indicates a willingness among assessors to use culturally and linguistically sensitive assessments, with the lack of available resources acting as a potential barrier. Thus, improving assessors’ knowledge of, and access to, assessments in students’ first language may be one step towards administering culturally and linguistically fair assessments and improving identification decisions for this subpopulation.

Limitations

A notable limitation of this study is that the survey response rate is unknown. Although postcode data show that our sample was recruited from across the UK, it is not certain that the sample’s assessment practices are representative of all UK dyslexia assessors. Another limitation is that survey questions were limited to dyslexia identification and did not elicit responses on the identification of other learning disabilities/difficulties, such as reading comprehension difficulties, math difficulties, and/or writing difficulties.

Future recommendations and conclusion

Our study demonstrates a general lack of consensus among assessors on the process of dyslexia identification. While many subscribe to the notion of dyslexia as a deficit in core areas of reading, others view it as a discrepancy between an individual’s reading and cognitive profiles. There is a clear need for UK policymakers to define dyslexia and provide assessment guidelines. Nationally defined identification pathways would provide guidance to assessment institutions, and this alignment could lead to a cohesive model for the reliable identification of learning difficulties such as dyslexia.

Data Availability

The data that support the findings of this study are available in the UK Data Service ReShare repository. The data have been stored in accordance with institutional guidelines and are accessible for replication purposes. For further inquiries, please contact the corresponding author at [email protected].

Al Dahhan, N. Z., Mesite, L., Feller, M. J., & Christodoulou, J. (2021). Identifying reading disabilities: A survey of practitioners. Learning Disability Quarterly, 44 (4), 235–247. https://doi.org/10.1177/0731948721998707

American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders. 5th ed. American Psychiatric Association.

Andresen, A., & Monsrud, M.-B. (2021). Assessment of dyslexia – why, when, and with what? Scandinavian Journal of Educational Research, 66 (6), 1063–1075. https://doi.org/10.1080/00313831.2021.1958373

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (Eds.). (2014). Standards for educational and psychological testing. American Educational Research Association.

Benson, N. F., Maki, K. E., Floyd, R. G., Eckert, T. L., Kranzler, J. H., & Fefer, S. A. (2020). A national survey of school psychologists’ practices in identifying specific learning disabilities. School Psychology, 35 (2), 146–157. https://doi.org/10.1037/spq0000344

Brunsdon, R., Coltheart, M., & Nickels, L. (2006). Severe developmental letter-processing impairment: A treatment case study. Cognitive Neuropsychology, 23 (6), 795–821. https://doi.org/10.1080/02643290500310863

Burns, M. K., Petersen-Brown, S., Haegele, K., Rodriguez, M., Schmitt, B., Cooper, M., Clayton, K., Hutcheson, S., Conner, C., Hosp, J., & VanDerHeyden, A. M. (2016). Meta-analysis of academic interventions derived from neuropsychological data. School Psychology Quarterly, 31 (1), 28–42. https://doi.org/10.1037/spq0000117

Cassar, M., Treiman, R., Moats, L., Pollo, T. C., & Kessler, B. (2005). How do the spellings of children with dyslexia compare with those of nondyslexic children? Reading and Writing, 18 (1), 27–49. https://doi.org/10.1007/s11145-004-2345-x

Daniel, J., Capin, P., & Steinle, P. (2021). A synthesis of the sustainability of remedial reading intervention effects for struggling adolescent readers. Journal of Learning Disabilities, 54 (3), 170–186. https://doi.org/10.1177/0022219421991249

de Jong, P. F. (2020). Diagnosing dyslexia: How deep should we dig? In J. A. Washington, D. L. Compton, & P. McCardle (Eds.), Dyslexia: Revisiting etiology, diagnosis, treatment, and policy (pp. 31–43). Paul H. Brookes Publishing Co.

Denton, C. A. (2012). Response to intervention for reading difficulties in the primary grades: Some answers and lingering questions. Journal of Learning Disabilities, 45 (3), 232–243. https://doi.org/10.1177/0022219412442155

Department for Education. (2014). Children and Families Act . DfE.

Elliott, J. G. (2020). It’s time to be scientific about dyslexia.  Reading Research Quarterly ,  55 (S1). https://doi.org/10.1002/rrq.333

Elliott, J. G., & Grigorenko, E. L. (2014). The dyslexia debate (No. 14). Cambridge University Press.

Equality Act (2010). HMSO.

Erbeli, F., Peng, P., & Rice, M. (2021). No evidence of creative benefit accompanying dyslexia: A meta-analysis. Journal of Learning Disabilities, 55 (3), 242–253. https://doi.org/10.1177/00222194211010350

Feifer, S. G. (2008). Integrating Response to Intervention (RTI) with neuropsychology: A scientific approach to reading. Psychology in the Schools, 45 (9), 812–825. https://doi.org/10.1002/pits.20328

Fenwick, M. E., Kubas, H. A., Witzke, J. W., Fitzer, K. R., Miller, D. C., Maricle, D. E., Harrison, G. L., Macoun, S. J., & Hale, J. B. (2015). Neuropsychological profiles of written expression learning disabilities determined by concordance-discordance model criteria. Applied Neuropsychology: Child, 5 (2), 83–96. https://doi.org/10.1080/21622965.2014.993396

Fletcher, J. M., Francis, D. J., Shaywitz, S. E., Lyon, G. R., Foorman, B. R., Stuebing, K. K., & Shaywitz, B. A. (1998). Intelligent testing and the discrepancy model for children with learning disabilities. Learning Disabilities Research & Practice, 13 (4), 186–203.

Fletcher, J., Lyon, G. R., Fuchs, L., & Barnes, M. A. (2019).  Learning disabilities: From identification to intervention . The Guilford Press.

Fletcher, J. M., & Miciak, J. (2017). Comprehensive cognitive assessments are not necessary for the identification and treatment of learning disabilities. Archives of Clinical Neuropsychology, 32 (1), 2–7. https://doi.org/10.1093/arclin/acw103

Fletcher, J. M., Stuebing, K. K., Morris, R. D., & Lyon, G. R. (2012). Classification and definition of learning disabilities: A hybrid model . In H. L. Swanson, K. R. Harris, & S. Graham (Eds.), Handbook of learning disabilities (2nd ed., pp. 33–50). Guilford Press.

Fletcher, J. M., & Vaughn, S. (2009). Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives, 3 (1), 30–37. https://doi.org/10.1111/j.1750-8606.2008.00072.x

Francis, D. J., Fletcher, J. M., Stuebing, K. K., Lyon, G. R., Shaywitz, B. A., & Shaywitz, S. E. (2005). Psychometric approaches to the identification of LD. Journal of Learning Disabilities, 38 (2), 98–108. https://doi.org/10.1177/00222194050380020101

Friedmann, N., & Rahamim, E. (2007). Developmental letter position dyslexia. Journal of Neuropsychology, 1 (2), 201–236. https://doi.org/10.1348/174866407x204227

Fuchs, D., Fuchs, L. S., & Compton, D. L. (2012). Smart RTI: A next-generation approach to multilevel prevention. Exceptional Children, 78 (3), 263–279. https://doi.org/10.1177/001440291207800301

Fuchs, L. S., & Vaughn, S. (2012). Responsiveness-to-intervention: A decade later. Journal of Learning Disabilities, 45 (3), 195–203. https://doi.org/10.1177/0022219412442150

Galliussi, J., Perondi, L., Chia, G., Gerbino, W., & Bernardis, P. (2020). Inter-letter spacing, inter-word spacing, and font with dyslexia-friendly features: Testing text readability in people with and without dyslexia. Annals of Dyslexia, 70 (1), 141–152. https://doi.org/10.1007/s11881-020-00194-x

Galuschka, K., Ise, E., Krick, K., & Schulte-Körne, G. (2014). Effectiveness of treatment approaches for children and adolescents with reading disabilities: A meta-analysis of randomized controlled Trials. PLoS ONE, 9 (2), e89900. https://doi.org/10.1371/journal.pone.0089900

Goodrich, J. M., Peng, P., Bohaty, J., Leiva, S., & Thayer, L. (2023). Embedding executive function training into early literacy instruction for dual language learners: A pilot study. Journal of Speech, Language, and Hearing Research, 66 (2), 573–588. https://doi.org/10.31234/osf.io/xkymz

Gough, P. B., & Tunmer, W. E. (1986). Decoding, reading, and reading disability. Remedial and Special Education, 7 (1), 6–10. https://doi.org/10.1177/074193258600700104

Hale, J., Alfonso, V., Berninger, V., Bracken, B., Christo, C., Clark, E., Cohen, M., Davis, A., Decker, S., Denckla, M., Dumont, R., Elliott, C., Feifer, S., Fiorello, C., Flanagan, D., Fletcher-Janzen, E., Geary, D., Gerber, M., Gerner, M., … Yalof, J. (2014). Critical issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert White Paper Consensus. Learning Disabilities: A Multidisciplinary Journal, 20 (2). https://doi.org/10.18666/ldmj-2014-v20-i2-5276

Henderson, L. M., Tsogka, N., & Snowling, M. J. (2012). Questioning the benefits that coloured overlays can have for reading in students with and without dyslexia. Journal of Research in Special Educational Needs, 13 (1), 57–65. https://doi.org/10.1111/j.1471-3802.2012.01237.x

Hinshelwood, J. (1896). A case of dyslexia: A peculiar form of word-blindness. 1. The Lancet, 148 (3821), 1451–1454.

Hulme, C., & Snowling, M. J. (2009). Developmental disorders of language learning and cognition. Wiley Blackwell.

Iovino, I., Fletcher, J. M., Breitmeyer, B. G., & Foorman, B. R. (1998). Colored overlays for visual perceptual deficits in children with reading disability and attention deficit/hyperactivity disorder: Are they differentially effective? Journal of Clinical and Experimental Neuropsychology, 20 (6), 791–806. https://doi.org/10.1076/jcen.20.6.791.1113

Joseph, H., & Powell, D. (2022). Does a specialist typeface affect how fluently children with and without dyslexia process letters, words, and passages? Dyslexia, 28 (4), 448–470. https://doi.org/10.1002/dys.1727

Kaltner, S., & Jansen, P. (2014). Mental rotation and motor performance in children with developmental dyslexia. Research in Developmental Disabilities, 35 (3), 741–754. https://doi.org/10.1016/j.ridd.2013.10.003

Kauffman, J. M., Nelson, C. M., Simpson, R. L., & Mock, D. R. (2011). Contemporary issues. In J. M. Kauffman & D. P. Hallahan (Eds.), Handbook of special education (pp. 15–26). Routledge.

Kearns, D. M., & Fuchs, D. (2013). Does cognitively focused instruction improve the academic performance of low-achieving students? Exceptional Children, 79 (3), 263–290. https://doi.org/10.1177/001440291307900200

Knight, C., & Crick, T. (2021). The assignment and distribution of the dyslexia label: Using the UK Millennium Cohort Study to investigate the sociodemographic predictors of the dyslexia label in England and Wales. PLoS ONE, 16 (8), e0256114. https://doi.org/10.1371/journal.pone.0256114

Kohnen, S., Nickels, L., Castles, A., Friedmann, N., & McArthur, G. (2012). When ‘slime’ becomes ‘smile’: Developmental letter position dyslexia in English. Neuropsychologia, 50 (14), 3681–3692. https://doi.org/10.1016/j.neuropsychologia.2012.07.016

Kranzler, J. H., Floyd, R. G., Benson, N., Zaboski, B., & Thibodaux, L. (2016). Cross-battery assessment pattern of strengths and weaknesses approach to the identification of specific learning disorders: Evidence-based practice or pseudoscience? International Journal of School & Educational Psychology, 4 (3), 146–157. https://doi.org/10.1080/21683603.2016.1192855

Kuster, S. M., van Weerdenburg, M., Gompel, M., & Bosman, A. M. (2017). Dyslexie font does not benefit reading in children with or without dyslexia. Annals of Dyslexia, 68 (1), 25–42. https://doi.org/10.1007/s11881-017-0154-6

Maki, K. E., Kranzler, J. H., & Moody, M. E. (2022). Dual discrepancy/consistency pattern of strengths and weaknesses method of specific learning disability identification: Classification accuracy when combining clinical judgment with assessment data. Journal of School Psychology, 92 , 33–48. https://doi.org/10.1016/j.jsp.2022.02.003

McArthur, G., & Castles, A. (2017). Helping children with reading difficulties: Some things we have learned so far. Npj Science of Learning, 2 (1), 7. https://doi.org/10.1038/s41539-017-0008-3

Meyer, M. S. (2000). The ability–achievement discrepancy: Does it contribute to an understanding of learning disabilities? Educational Psychology Review, 12 , 315–337.

Miciak, J., & Fletcher, J. M. (2020). The critical role of instructional response for identifying dyslexia and other learning disabilities. Journal of Learning Disabilities, 53 (5), 343–353. https://doi.org/10.1177/0022219420906801

Miciak, J., Fletcher, J. M., & Stuebing, K. K. (2016). Accuracy and validity of methods for identifying learning disabilities in a response-to-intervention service delivery framework. In S. R. Jimerson, M. K. Burns, & A. M. Van Der Heyden (Eds.), Handbook of response to intervention (pp. 421–440). Springer US. https://doi.org/10.1007/978-1-4899-7568-3_25

Miciak, J., Taylor, W. P., Denton, C. A., & Fletcher, J. M. (2015). The effect of achievement test selection on identification of learning disabilities within a patterns of strengths and weaknesses framework. School Psychology Quarterly, 30 , 321–334.

Morgan, W. P. (1896). A case of congenital word blindness. British Medical Journal, 2 (1871), 1378.

O’Connor, R. E., & Sanchez, V. M. (2011). Responsiveness to intervention models for reducing reading difficulties and identifying learning disability. In J. M. Kauffman & D. P. Hallahan (Eds.), Handbook of special education (pp. 123–133). Routledge.

Office for National Statistics (ONS) (2023). Average household income, UK: Financial year ending 2022. Retrieved from: https://www.ons.gov.uk/peoplepopulationandcommunity/personalandhouseholdfinances/incomeandwealth/bulletins/householddisposableincomeandinequality/financialyearending2022#:~:text=Main%20points,(ONS)%20Household%20Finances%20Survey

Peter, B., Albert, A., Panagiotides, H., & Gray, S. (2020). Sequential and spatial letter reversals in adults with dyslexia during a word comparison task: Demystifying the “was saw” and “DB” myths. Clinical Linguistics & Phonetics, 35 (4), 340–367. https://doi.org/10.1080/02699206.2019.1705916

R Core Team. (2021). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.R-project.org/ .

Rice, M., & Gilson, C. B. (2023). Dyslexia identification: Tackling current issues in schools. Intervention in School and Clinic, 58 (3), 205–209. https://doi.org/10.1177/10534512221081278

Russell, G., Norwich, B., & Gwernan-Jones, R. (2012). When diagnosis is uncertain: Variation in conclusions after psychological assessment of a six-year-old child. Early Child Development and Care, 182 (12), 1575–1592. https://doi.org/10.1080/03004430.2011.641541

Ryder, D., & Norwich, B. (2019). UK higher education lecturers’ perspectives of dyslexia, dyslexic students and related disability provision. Journal of Research in Special Educational Needs, 19 (3), 161–172.

Sadusky, A., Berger, E. P., Reupert, A. E., & Freeman, N. C. (2021). Methods used by psychologists for identifying dyslexia: A systematic review. Dyslexia, 28 (2), 132–148. https://doi.org/10.1002/dys.1706

Savage, R. (2004). Motor skills, automaticity and developmental dyslexia: A review of the research literature. Reading and Writing, 17 (3), 301–324. https://doi.org/10.1023/b:read.0000017688.67137.80

Scammacca, N. K., Roberts, G. J., Cho, E., Williams, K. J., Roberts, G., Vaughn, S. R., & Carroll, M. (2016). A century of progress. Review of Educational Research, 86 (3), 756–800. https://doi.org/10.3102/0034654316652942

Scammacca, N. K., Roberts, G., Vaughn, S., & Stuebing, K. K. (2013). A meta-analysis of interventions for struggling readers in grades 4–12. Journal of Learning Disabilities, 48 (4), 369–390. https://doi.org/10.1177/0022219413504995

Stanovich, K. E. (1991). Discrepancy definitions of reading disability: Has intelligence led us astray?. Reading Research Quarterly , 26 , 7–29.

Stuebing, K. K., Fletcher, J. M., Branum-Martin, L., Francis, D. J., & Van Der Heyden, A. (2012). Evaluation of the technical adequacy of three methods for identifying specific learning disabilities based on cognitive discrepancies. School Psychology Review, 41 (1), 3–22. https://doi.org/10.1080/02796015.2012.12087373

Stuebing, K. K., Fletcher, J. M., LeDoux, J. M., Lyon, G. R., Shaywitz, S. E., & Shaywitz, B. A. (2002). Validity of IQ-discrepancy classifications of reading disabilities: A meta-analysis. American Educational Research Journal, 39 (2), 469–518.

Suttle, C. M., Lawrenson, J. G., & Conway, M. L. (2018). Efficacy of coloured overlays and lenses for treating reading difficulty: An overview of systematic reviews. Clinical and Experimental Optometry, 101 (4), 514–520. https://doi.org/10.1111/cxo.12676

Taylor, W. P., Miciak, J., Fletcher, J. M., & Francis, D. J. (2017). Cognitive discrepancy models for specific learning disabilities identification: Simulations of psychometric limitations. Psychological Assessment, 29 (4), 446–457. https://doi.org/10.1037/pas0000356

U.S. Department of Education. (2006). Individuals with Disabilities Act (IDEA). 20 U.S.C. § 1400.

Wery, J. J., & Diliberto, J. A. (2016). The effect of a specialized dyslexia font, OpenDyslexic, on reading rate and accuracy. Annals of Dyslexia, 67 (2), 114–127. https://doi.org/10.1007/s11881-016-0127-1

Wickham, H., François, R., Henry, L., & Müller, K. (2017). dplyr: A Grammar of Data Manipulation (R package version 0.7.4) .

Willows, D. M., Kruk, R. S., & Corcos, E. (1993). Visual processes in reading and reading disabilities . Lawrence Erlbaum.

Support for this research was provided by Award Number BERADANIELJ2022 from the British Educational Research Association.

Author information

Authors and affiliations

Durham University, Durham, UK

Johny Daniel & Lauryn Clucas

National Taiwan Normal University, Taipei, Taiwan

Hsuan-Hui Wang

Corresponding author

Correspondence to Johny Daniel.

Ethics declarations

Ethics approval

The current study was approved by Durham University’s Ethics committee prior to data collection.

Consent to participate

Prior to starting the survey, all participants were informed that their participation was entirely voluntary, that their responses would be used anonymously in our research, and that they could withdraw at any time. Completing the questionnaire constituted consent to take part in the study.

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

(DOCX 26.9 KB)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Daniel, J., Clucas, L. & Wang, HH. Identifying students with dyslexia: exploration of current assessment methods. Ann. of Dyslexia (2024). https://doi.org/10.1007/s11881-024-00313-y

Download citation

Received : 28 July 2023

Accepted : 30 July 2024

Published : 29 August 2024

DOI : https://doi.org/10.1007/s11881-024-00313-y

  • Identification
  • Reading disabilities


S-NPP 375-m eVIIRS Remote Sensing Phenology Metrics - across the conterminous U.S. (Ver. 2.0, August 2024)

Phenological dynamics of terrestrial ecosystems reflect the response of the Earth's vegetation canopy to changes in climate and hydrology and are thus important to monitor operationally. Researchers at the U.S. Geological Survey (USGS), Earth Resources Observation and Science (EROS) Center have developed methods for documenting the seasonal dynamics of vegetation in an operational fashion from satellite time-series data.

The USGS decided to develop the 2023 CONUS phenology metrics using the S-NPP Visible Infrared Imaging Radiometer Suite (VIIRS) because the Aqua C6 MODIS sensor will be decommissioned in the near future. The readily available and consistently processed smoothed EROS VIIRS (eVIIRS) maximum Normalized Difference Vegetation Index (NDVI) weekly composites are the key input for the phenological metrics data. A weighted least-squares approach to temporal smoothing (Swets et al., 1999) was applied to the NDVI time series to eliminate anomalously low vegetation index values and to reduce time shifts caused by overgeneralization of the NDVI signal. This approach uses a moving temporal window to calculate a family of regression lines associated with each observation; the family of lines is then averaged at each point and interpolated between points to provide a continuous temporal NDVI signal. While interpolating values between points, a weighting factor is applied that favors peak (high-value) points over valley points. Smoothed NDVI data were stacked in an ascending three-year file of 156 NDVI composites (52 composites per year), covering the previous year, the processed year, and the following year (e.g., the 2023 phenology metrics included 2022, 2023, and 2024 smoothed NDVI). Where the full 52 composites were not yet available for the latter year, each remaining weekly composite was filled with an average of the corresponding composites from earlier years (to fill 2024, composites from 2021, 2022, and 2023 were averaged).
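The moving-window smoothing described above can be sketched as follows. This is an illustrative reconstruction, not the operational EROS code: the window size and the peak-weighting factor are assumed values, and the peak-favoring step is a simplified stand-in for the published weighting.

```python
import numpy as np

def wls_smooth(ndvi, window=5, peak_weight=1.5):
    """Illustrative weighted least-squares temporal smoother (after Swets et al., 1999).

    For each observation, a regression line is fit over a moving window;
    the family of fitted lines covering each point is averaged, and a
    final step favors peak (high) values over valleys. The window size
    and peak_weight are assumptions, not the published parameters.
    """
    ndvi = np.asarray(ndvi, dtype=float)
    n = len(ndvi)
    half = window // 2
    fitted_sum = np.zeros(n)
    fitted_cnt = np.zeros(n)
    for i in range(n):
        # Fit one regression line over the window centered on observation i.
        lo, hi = max(0, i - half), min(n, i + half + 1)
        x = np.arange(lo, hi)
        slope, intercept = np.polyfit(x, ndvi[lo:hi], 1)
        # Accumulate this line's fitted values for every point it covers.
        fitted_sum[lo:hi] += slope * x + intercept
        fitted_cnt[lo:hi] += 1
    smoothed = fitted_sum / fitted_cnt  # average of the family of lines
    # Favor peaks: where the raw value exceeds the averaged fit, pull the
    # smoothed value part-way back up toward the observed peak.
    above = ndvi > smoothed
    smoothed[above] += (1.0 - 1.0 / peak_weight) * (ndvi[above] - smoothed[above])
    return smoothed
```

On a noisy seasonal curve this suppresses isolated low values (e.g., cloud-contaminated composites) while leaving linear or constant segments essentially unchanged.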

The smoothed NDVI data were subsequently ingested into a model developed in the Interactive Data Language (IDL) to quantify the following phenological metrics: Start of Season Time (SOST), Start of Season NDVI (SOSN), End of Season Time (EOST), End of Season NDVI (EOSN), Maximum Time (MAXT), Maximum NDVI (MAXN), Duration (DUR), Amplitude (AMP), and Time-Integrated NDVI (TIN).
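As a rough illustration of how such metrics can be derived from one smoothed series: the amplitude-fraction onset threshold below is a hypothetical choice for this sketch, and the operational algorithms (documented in the data creation process section of the metadata) differ in detail.

```python
import numpy as np

def phenology_metrics(smoothed_ndvi, threshold_frac=0.2):
    """Derive illustrative seasonal metrics from one smoothed NDVI year.

    threshold_frac (the fraction of the seasonal amplitude above the
    minimum used to mark season start/end) is an assumed value.
    """
    s = np.asarray(smoothed_ndvi, dtype=float)
    base, maxn = float(s.min()), float(s.max())
    maxt = int(np.argmax(s))          # MAXT: composite index of peak NDVI
    amp = maxn - base                 # AMP: seasonal amplitude
    onset = base + threshold_frac * amp
    above = np.flatnonzero(s >= onset)
    sost, eost = int(above[0]), int(above[-1])
    return {
        "SOST": sost, "SOSN": float(s[sost]),
        "EOST": eost, "EOSN": float(s[eost]),
        "MAXT": maxt, "MAXN": maxn,
        "DUR": eost - sost, "AMP": amp,
        # TIN: accumulated NDVI above the seasonal baseline.
        "TIN": float(np.sum(s[sost:eost + 1] - base)),
    }
```

Feeding in a single bell-shaped growing-season curve yields a start-of-season before the peak, an end-of-season after it, and a duration equal to their difference.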

Note:
  • S-NPP 375m eVIIRS Phenology Metrics CONUS 2021, Publication Date: 2022-08-25
  • S-NPP 375m eVIIRS Phenology Metrics CONUS 2022, Publication Date: 2023-08-03
  • S-NPP 375m eVIIRS Phenology Metrics CONUS 2023, Publication Date: 2024-08-30

For details about the algorithms and the data scaling for each of these seasonal phenological metrics, refer to the data creation process section of this metadata.

References: Swets, D. L., Reed, B. C., Rowland, J. R., and S. E. Marko, 1999, "A Weighted Least-squares Approach to Temporal Smoothing of NDVI," in Proceedings of the 1999 ASPRS Annual Conference, From Image to Information, Portland, Oregon, May 17-21, 1999, Bethesda, Maryland, American Society for Photogrammetry and Remote Sensing, CD-ROM, 1 disc.

First release: 2023
Revised: August 2024 (ver. 2.0)

Citation Information

Publication Year: 2022
Title: S-NPP 375-m eVIIRS Remote Sensing Phenology Metrics - across the conterminous U.S. (Ver. 2.0, August 2024)
Authors: Trenton D. Benedict (Contractor), Dinesh Shrestha (Contractor), Stephen Boyte
Product Type: Data Release
USGS Organization: Earth Resources Observation and Science (EROS) Center


Key things to know about U.S. election polling in 2024

Confidence in U.S. public opinion polling was shaken by errors in 2016 and 2020. In both years’ general elections, many polls underestimated the strength of Republican candidates, including Donald Trump. These errors laid bare some real limitations of polling.

In the midterms that followed those elections, polling performed better. But many Americans remain skeptical that it can paint an accurate portrait of the public’s political preferences.

Restoring people’s confidence in polling is an important goal, because robust and independent public polling has a critical role to play in a democratic society. It gathers and publishes information about the well-being of the public and about citizens’ views on major issues. And it provides an important counterweight to people in power, or those seeking power, when they make claims about “what the people want.”

The challenges facing polling are undeniable. In addition to the longstanding issues of rising nonresponse and cost, summer 2024 brought extraordinary events that transformed the presidential race. The good news is that people with deep knowledge of polling are working hard to fix the problems exposed in 2016 and 2020, experimenting with more data sources and interview approaches than ever before. Still, polls are more useful to the public if people have realistic expectations about what surveys can do well – and what they cannot.

With that in mind, here are some key points to know about polling heading into this year’s presidential election.

Probability sampling (or “random sampling”). This refers to a polling method in which survey participants are recruited using random sampling from a database or list that includes nearly everyone in the population. The pollster selects the sample. The survey is not open for anyone who wants to sign up.

Online opt-in polling (or “nonprobability sampling”). These polls are recruited using a variety of methods that are sometimes referred to as “convenience sampling.” Respondents come from a variety of online sources such as ads on social media or search engines, websites offering rewards in exchange for survey participation, or self-enrollment. Unlike surveys with probability samples, people can volunteer to participate in opt-in surveys.

Nonresponse and nonresponse bias. Nonresponse is when someone sampled for a survey does not participate. Nonresponse bias occurs when the pattern of nonresponse leads to error in a poll estimate. For example, college graduates are more likely than those without a degree to participate in surveys, leading to the potential that the share of college graduates in the resulting sample will be too high.

Mode of interview. This refers to the format in which respondents are presented with and respond to survey questions. The most common modes are online, live telephone, text message and paper. Some polls use more than one mode.

Weighting. This is a statistical procedure pollsters perform to align their survey sample with the broader population on key characteristics such as age and race. For example, if a survey has too many college graduates compared with their share in the population, people without a college degree are “weighted up” to match the proper share.
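In the simplest case, a group’s weight is just its population share divided by its sample share. A minimal sketch, using hypothetical shares for a single education variable:

```python
# Hypothetical population targets vs. an overrepresented sample
population = {"college_grad": 0.38, "no_degree": 0.62}  # assumed true shares
sample     = {"college_grad": 0.50, "no_degree": 0.50}  # too many graduates

# Weight = population share / sample share for each group
weights = {group: population[group] / sample[group] for group in population}
print(weights)  # graduates weighted down (<1), non-graduates weighted up (>1)
```

Applying these weights to each respondent’s answers restores the groups to their proper proportions before any estimates are computed.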

How are election polls being conducted?

Pollsters are making changes in response to the problems in previous elections. As a result, polling is different today than in 2016. Most U.S. polling organizations that conducted and publicly released national surveys in both 2016 and 2022 (61%) used methods in 2022 that differed from what they used in 2016. And change has continued since 2022.

A sand chart showing that, as the number of public pollsters in the U.S. has grown, survey methods have become more diverse.

One change is that the number of active polling organizations has grown significantly, indicating that there are fewer barriers to entry into the polling field. The number of organizations that conduct national election polls more than doubled between 2000 and 2022.

This growth has been driven largely by pollsters using inexpensive opt-in sampling methods. But previous Pew Research Center analyses have demonstrated how surveys that use nonprobability sampling may have errors twice as large, on average, as those that use probability sampling.

The second change is that many of the more prominent polling organizations that use probability sampling – including Pew Research Center – have shifted from conducting polls primarily by telephone to using online methods, or some combination of online, mail and telephone. The result is that polling methodologies are far more diverse now than in the past.

(For more about how public opinion polling works, including a chapter on election polls, read our short online course on public opinion polling basics.)

All good polling relies on statistical adjustment called “weighting,” which makes sure that the survey sample aligns with the broader population on key characteristics. Historically, public opinion researchers have adjusted their data using a core set of demographic variables to correct imbalances between the survey sample and the population.

But there is a growing realization among survey researchers that weighting a poll on just a few variables like age, race and gender is insufficient for getting accurate results. Some groups of people – such as older adults and college graduates – are more likely to take surveys, which can lead to errors that are too sizable for a simple three- or four-variable adjustment to work well. Adjusting on more variables produces more accurate results, according to Center studies in 2016 and 2018.

A number of pollsters have taken this lesson to heart. For example, recent high-quality polls by Gallup and The New York Times/Siena College adjusted on eight and 12 variables, respectively. Our own polls typically adjust on 12 variables. In a perfect world, it wouldn’t be necessary to have that much intervention by the pollster. But the real world of survey research is not perfect.
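Adjusting on many variables at once is commonly done with “raking” (iterative proportional fitting), which cycles through the weighting variables, rescaling the weights to match one set of population targets at a time until all the margins line up. A toy two-variable sketch with entirely hypothetical cell counts and targets:

```python
# Cells: (education, age) -> respondent count in a hypothetical sample
cells = {("grad", "older"): 300, ("grad", "younger"): 200,
         ("none", "older"): 300, ("none", "younger"): 200}

# Assumed population margins to match: education (key index 0), age (key index 1)
targets = [
    (0, {"grad": 0.35, "none": 0.65}),
    (1, {"older": 0.45, "younger": 0.55}),
]

weights = {cell: 1.0 for cell in cells}
for _ in range(50):  # iterate until both margins match
    for var, margin in targets:
        # Current weighted total for each category of this variable
        wsum = {}
        for cell, n in cells.items():
            wsum[cell[var]] = wsum.get(cell[var], 0.0) + weights[cell] * n
        grand = sum(wsum.values())
        # Rescale each cell's weight toward this variable's target margin
        for cell in cells:
            weights[cell] *= margin[cell[var]] * grand / wsum[cell[var]]

grand = sum(weights[c] * n for c, n in cells.items())
grad_share = sum(weights[c] * n for c, n in cells.items() if c[0] == "grad") / grand
print(round(grad_share, 3))  # matches the 0.35 education target
```

Production raking involves more variables, weight trimming and convergence checks, but the core loop is this simple: each pass fixes one margin, and repetition reconciles them all.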


Predicting who will vote is critical – and difficult. Preelection polls face one crucial challenge that routine opinion polls do not: determining which of the people surveyed will actually cast a ballot.

Roughly a third of eligible Americans do not vote in presidential elections, despite the enormous attention paid to these contests. Determining who will abstain is difficult because people can’t perfectly predict their future behavior – and because many people feel social pressure to say they’ll vote even if it’s unlikely.

No one knows the profile of voters ahead of Election Day. We can’t know for sure whether young people will turn out in greater numbers than usual, or whether key racial or ethnic groups will do so. This means pollsters are left to make educated guesses about turnout, often using a mix of historical data and current measures of voting enthusiasm. This is very different from routine opinion polls, which mostly do not ask about people’s future intentions.

When major news breaks, a poll’s timing can matter. Public opinion on most issues is remarkably stable, so you don’t necessarily need a recent poll about an issue to get a sense of what people think about it. But dramatic events can and do change public opinion, especially when people are first learning about a new topic. For example, polls this summer saw notable changes in voter attitudes following Joe Biden’s withdrawal from the presidential race. Polls taken immediately after a major event may pick up a shift in public opinion, but those shifts are sometimes short-lived. Polls fielded weeks or months later are what allow us to see whether an event has had a long-term impact on the public’s psyche.

How accurate are polls?

The answer to this question depends on what you want polls to do. Polls are used for all kinds of purposes in addition to showing who’s ahead and who’s behind in a campaign. Fair or not, however, the accuracy of election polling is usually judged by how closely the polls matched the outcome of the election.

A diverging bar chart showing polling errors in U.S. presidential elections.

By this standard, polling in 2016 and 2020 performed poorly. In both years, state polling was characterized by serious errors. National polling did reasonably well in 2016 but faltered in 2020.

In 2020, a post-election review of polling by the American Association for Public Opinion Research (AAPOR) found that “the 2020 polls featured polling error of an unusual magnitude: It was the highest in 40 years for the national popular vote and the highest in at least 20 years for state-level estimates of the vote in presidential, senatorial, and gubernatorial contests.”

How big were the errors? Polls conducted in the last two weeks before the election suggested that Biden’s margin over Trump was nearly twice as large as it ended up being in the final national vote tally.

Errors of this size make it difficult to be confident about who is leading if the election is closely contested, as many U.S. elections are.

Pollsters are rightly working to improve the accuracy of their polls. But even an error of 4 or 5 percentage points isn’t too concerning if the purpose of the poll is to describe whether the public has favorable or unfavorable opinions about candidates, or to show which issues matter to which voters. And on questions that gauge where people stand on issues, we usually want to know broadly where the public stands. We don’t necessarily need to know the precise share of Americans who say, for example, that climate change is mostly caused by human activity. Even judged by its performance in recent elections, polling can still provide a faithful picture of public sentiment on the important issues of the day.

The 2022 midterms saw generally accurate polling, despite a wave of partisan polls predicting a broad Republican victory. In fact, FiveThirtyEight found that “polls were more accurate in 2022 than in any cycle since at least 1998, with almost no bias toward either party.” Moreover, the “red wave” predicted by a handful of contrarian polls largely failed to materialize when the votes were tallied. In sum, if we focus on polling in the most recent national election, there’s plenty of reason to be encouraged.

Compared with other elections in the past 20 years, polls have been less accurate when Donald Trump is on the ballot. Preelection surveys suffered from large errors – especially at the state level – in 2016 and 2020, when Trump was standing for election. But they performed reasonably well in the 2018 and 2022 midterms, when he was not.


During the 2016 campaign, observers speculated about the possibility that Trump supporters might be less willing to express their support to a pollster – a phenomenon sometimes described as the “shy Trump effect.” But a committee of polling experts evaluated five different tests of the “shy Trump” theory and turned up little to no evidence for each one. Later, Pew Research Center and, in a separate test, a researcher from Yale also found little to no evidence in support of the claim.

Instead, two other explanations are more likely. One is about the difficulty of estimating who will turn out to vote. Research has found that Trump is popular among people who tend to sit out midterms but turn out for him in presidential election years. Since pollsters often use past turnout to predict who will vote, it can be difficult to anticipate when irregular voters will actually show up.

The other explanation is that Republicans in the Trump era have become a little less likely than Democrats to participate in polls. Pollsters call this “partisan nonresponse bias.” Surprisingly, polls historically have not shown any particular pattern of favoring one side or the other. The errors that favored Democratic candidates in the past eight years may be a result of the growth of political polarization, along with declining trust among conservatives in news organizations and other institutions that conduct polls.

Whatever the cause, the fact that Trump is again the nominee of the Republican Party means that pollsters must be especially careful to make sure all segments of the population are properly represented in surveys.

The real margin of error is often about double the one reported. A typical election poll sample of about 1,000 people has a margin of sampling error that’s about plus or minus 3 percentage points. That number expresses the uncertainty that results from taking a sample of the population rather than interviewing everyone. Random samples are likely to differ a little from the population just by chance, in the same way that the quality of your hand in a card game varies from one deal to the next.
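The “plus or minus 3 points” figure comes from the standard textbook formula for a proportion near 50%, which can be checked directly (this sketch ignores design effects from weighting, which in practice make the effective margin somewhat larger):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of sampling error, in percentage points, for a proportion p."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(f"n=1,000: +/- {margin_of_error(1000):.1f} points")  # -> +/- 3.1 points
```

Note that this quantifies sampling error only; the noncoverage, nonresponse and measurement errors discussed below are not reflected in it.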

A table showing that sampling error is not the only kind of polling error.

The problem is that sampling error is not the only kind of error that affects a poll. Those other kinds of error, in fact, can be as large as or larger than sampling error. Consequently, the reported margin of error can lead people to think that polls are more accurate than they really are.

There are three other, equally important sources of error in polling: noncoverage error, where not all of the target population has a chance of being sampled; nonresponse error, where certain groups of people may be less likely to participate; and measurement error, where people may not properly understand the questions or may misreport their opinions. Not only does the margin of error fail to account for those other sources of potential error; putting a number only on sampling error also implies to the public that other kinds of error do not exist.

Several recent studies show that the average total error in a poll estimate may be closer to twice as large as that implied by a typical margin of sampling error. This hidden error underscores the fact that polls may not be precise enough to call the winner in a close election.

Other important things to remember

Transparency in how a poll was conducted is associated with better accuracy. The polling industry has several platforms and initiatives aimed at promoting transparency in survey methodology, including AAPOR’s transparency initiative and the Roper Center archive. Polling organizations that participate in these initiatives have less error, on average, than those that don’t, an analysis by FiveThirtyEight found.

Participation in these transparency efforts does not guarantee that a poll is rigorous, but it is undoubtedly a positive signal. Transparency in polling means disclosing essential information, including the poll’s sponsor, the data collection firm, where and how participants were selected, modes of interview, field dates, sample size, question wording, and weighting procedures.

There is evidence that when the public is told that a candidate is extremely likely to win, some people may be less likely to vote. Following the 2016 election, many people wondered whether the pervasive forecasts that seemed to all but guarantee a Hillary Clinton victory – two modelers put her chances at 99% – led some would-be voters to conclude that the race was effectively over and that their vote would not make a difference. There is scientific research to back up that claim: A team of researchers found experimental evidence that when people have high confidence that one candidate will win, they are less likely to vote. This helps explain why some polling analysts say elections should be covered using traditional polling estimates and margins of error rather than speculative win probabilities (also known as “probabilistic forecasts”).

National polls tell us what the entire public thinks about the presidential candidates, but the outcome of the election is determined state by state in the Electoral College. The 2000 and 2016 presidential elections demonstrated a difficult truth: The candidate with the largest share of support among all voters in the United States sometimes loses the election. In those two elections, the national popular vote winners (Al Gore and Hillary Clinton) lost the election in the Electoral College (to George W. Bush and Donald Trump). In recent years, analysts have shown that Republican candidates do somewhat better in the Electoral College than in the popular vote because every state gets three electoral votes regardless of population – and many less-populated states are rural and more Republican.

For some, this raises the question: What is the use of national polls if they don’t tell us who is likely to win the presidency? In fact, national polls try to gauge the opinions of all Americans, regardless of whether they live in a battleground state like Pennsylvania, a reliably red state like Idaho or a reliably blue state like Rhode Island. In short, national polls tell us what the entire citizenry is thinking. Polls that focus only on the competitive states run the risk of giving too little attention to the needs and views of the vast majority of Americans who live in uncompetitive states – about 80%.

Fortunately, this is not how most pollsters view the world. As the noted political scientist Sidney Verba explained, “Surveys produce just what democracy is supposed to produce – equal representation of all citizens.”


Scott Keeter is a senior survey advisor at Pew Research Center.


Courtney Kennedy is Vice President of Methods and Innovation at Pew Research Center.

© 2024 Pew Research Center
