Research methodology vs. research methods
The research methodology, or research design, is the overall strategy and rationale you use to carry out the research, whereas research methods are the specific tools and processes you use to gather and understand the data you need to test your hypothesis.
To further understand research methodology, let’s explore some examples:
a. Qualitative research methodology example: A study exploring the impact of author branding on author popularity might utilize in-depth interviews to gather personal experiences and perspectives.
b. Quantitative research methodology example: A research project investigating the effects of a book promotion technique on book sales could employ a statistical analysis of profit margins and sales before and after the implementation of the method.
c. Mixed-Methods research methodology example: A study examining the relationship between social media use and academic performance might combine both qualitative and quantitative approaches. It could include surveys to quantitatively assess the frequency of social media usage and its correlation with grades, alongside focus groups or interviews to qualitatively explore students’ perceptions and experiences regarding how social media affects their study habits and academic engagement.
These examples highlight the meaning of methodology in research and how it guides the research process, from data collection to analysis, ensuring the study’s objectives are met efficiently.
When it comes to writing your study, the methodology in research papers or a dissertation plays a pivotal role. A well-crafted methodology section of a research paper or thesis not only enhances the credibility of your research but also provides a roadmap for others to replicate or build upon your work.
Wondering how to write the research methodology section? Follow these steps to create a strong methods chapter:
At the start of a research paper, you would have provided the background of your research and stated your hypothesis or research problem. In this section, you will elaborate on your research strategy.
Begin by restating your research question and proceed to explain what type of research you opted for to test it. Depending on your research, here are some questions you can consider:
a. Did you use qualitative or quantitative data to test the hypothesis?
b. Did you perform an experiment where you collected data or are you writing a dissertation that is descriptive/theoretical without data collection?
c. Did you use primary data that you collected yourself, or did you analyze secondary research data or existing data as part of your study?
These questions will help you establish the rationale for your study on a broader level, which you will follow by elaborating on the specific methods you used to collect and understand your data.
Now that you have told your reader what type of research you’ve undertaken for the dissertation, it’s time to dig into specifics. State what specific methods you used and explain the conditions and variables involved. Explain what the theoretical framework behind the method was, what samples you used for testing it, and what tools and materials you used to collect the data.
Once you have explained the data collection process, explain how you analyzed and studied the data. Here, your focus is simply to explain the methods of analysis rather than the results of the study.
Here are some questions you can answer at this stage:
a. What tools or software did you use to analyze your results?
b. What parameters or variables did you consider while understanding and studying the data you’ve collected?
c. Was your analysis based on a theoretical framework?
Your mode of analysis will change depending on whether you used a quantitative or qualitative research methodology in your study. If you’re working within the hard sciences or physical sciences, you are likely to use a quantitative research methodology (relying on numbers and hard data). If you’re doing a qualitative study in the social sciences or humanities, your analysis may rely on understanding the language and socio-political contexts around your topic. This is why it’s important to establish what kind of study you’re undertaking at the outset.
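If you took the quantitative route, a minimal analysis script might look like the sketch below. The numbers and variable names are invented for illustration (echoing the earlier book-promotion example), and the paired t-test via SciPy stands in for whatever statistical test your own design actually calls for.

```python
# Hypothetical sketch: a simple quantitative analysis comparing monthly book
# sales before and after a promotion technique was introduced (illustrative
# numbers only). A paired t-test asks whether the mean change differs from zero.
from scipy import stats

sales_before = [120, 95, 130, 110, 105, 98, 115, 125]   # units sold per month
sales_after  = [140, 118, 150, 123, 134, 120, 138, 141]

t_stat, p_value = stats.ttest_rel(sales_after, sales_before)
mean_change = sum(a - b for a, b in zip(sales_after, sales_before)) / len(sales_before)

print(f"Mean change in monthly sales: {mean_change:.1f}")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
# The methodology section should name the test, the software used, and the
# significance threshold, all decided before the data are analyzed.
```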
Now that you have gone through your research process in detail, you’ll also have to make a case for it. Justify your choice of methodology and methods, explaining why it is the best choice for your research question. This is especially important if you have chosen an unconventional approach or have simply chosen to study an existing research problem from a different perspective. Compare it with other methodologies, especially ones attempted by previous researchers, and discuss what contribution your methodology makes.
No matter how thorough a methodology is, it doesn’t come without its hurdles. This is a natural part of scientific research that is important to document so that your peers and future researchers are aware of it. Writing about this aspect of your research process in the paper also tells your evaluator that you have actively worked to overcome the pitfalls that came your way and that you have refined the research process.
1. Remember who you are writing for. Keeping sight of the reader/evaluator will help you know what to elaborate on and what information they are already likely to have. You’re condensing months’ worth of research into just a few pages, so you should omit basic definitions and information about general phenomena people already know.
2. Do not give an overly elaborate explanation of every single condition in your study.
3. Skip details and findings irrelevant to the results.
4. Cite references that back your claim and choice of methodology.
5. Consistently emphasize the relationship between your research question and the methodology you adopted to study it.
To sum it up, what is methodology in research? It’s the blueprint of your research, essential for ensuring that your study is systematic, rigorous, and credible. Whether your focus is on qualitative research methodology, quantitative research methodology, or a combination of both, understanding and clearly defining your methodology is key to the success of your research.
Once you write the research methodology and complete writing the entire research paper, the next step is to edit your paper. As experts in research paper editing and proofreading services, we’d love to help you perfect your paper!
Methodology
Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design. When planning your methods, there are two key decisions you will make.
First, decide how you will collect data. Your methods depend on what type of data you need to answer your research question:
Second, decide how you will analyze the data .
Data is the information that you collect for the purposes of answering your research question. The type of data you need depends on the aims of your research.
Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.
For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data.
If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing, collect quantitative data.
You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.
Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).
If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.
In descriptive research, you collect data about your study subject without intervening. The validity of your research will depend on your sampling method.
In experimental research, you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design.
To conduct an experiment, you need to be able to vary your independent variable, precisely measure your dependent variable, and control for confounding variables. If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
Research method | Primary or secondary? | Qualitative or quantitative? | When to use
---|---|---|---
Experiment | Primary | Quantitative | To test cause-and-effect relationships.
Survey | Primary | Quantitative | To understand general characteristics of a population.
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic.
Observation | Primary | Either | To understand how something occurs in its natural setting.
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic.
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.
Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.
Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data collected in open-ended formats, such as interview transcripts, open-ended survey responses, or other textual sources.
Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias.
Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).
You can use quantitative analysis to interpret data that was collected in a statistically valid way, for example during an experiment or through probability sampling.
Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.
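As a rough illustration of this kind of quantitative analysis, the sketch below computes response frequencies, an average, and a correlation for a small set of invented survey responses. It assumes Python with SciPy available and is not tied to any particular study.

```python
# Minimal sketch of quantitative survey analysis: response frequencies,
# an average, and a correlation. All data are invented for illustration.
from collections import Counter
from statistics import mean
from scipy.stats import pearsonr

# Closed-ended item: "How often do you use social media?" (coded responses)
usage_responses = ["daily", "daily", "weekly", "daily", "rarely", "weekly", "daily"]
print(Counter(usage_responses))          # frequency of each response category

# Two numeric variables per respondent: hours of use and grade average
hours  = [1.0, 3.5, 2.0, 4.0, 0.5, 2.5, 3.0]
grades = [82, 70, 78, 65, 88, 75, 72]
r, p = pearsonr(hours, grades)
print(f"Mean daily hours: {mean(hours):.1f}")
print(f"Correlation between hours and grades: r = {r:.2f} (p = {p:.3f})")
```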
Research method | Qualitative or quantitative? | When to use
---|---|---
Statistical analysis | Quantitative | To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis | Quantitative | To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
Thematic analysis | Qualitative | To analyze data collected from interviews, focus groups, or textual sources. To understand general themes in the data and how they are communicated.
Content analysis | Either | To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).
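To make the quantitative side of content analysis concrete, here is a minimal word-frequency sketch over a few invented open-ended responses; the responses and stopword list are placeholders, and a qualitative content analysis would instead examine the meanings and contexts of these words.

```python
# Hypothetical sketch of quantitative content analysis: counting word
# frequencies across open-ended survey responses (invented examples).
import re
from collections import Counter

responses = [
    "I use social media to stay in touch with friends",
    "Social media distracts me from coursework",
    "I mostly use it for news and entertainment",
]

stopwords = {"i", "to", "the", "and", "it", "for", "me", "from", "in", "with", "use"}
words = [
    w
    for text in responses
    for w in re.findall(r"[a-z']+", text.lower())
    if w not in stopwords
]
print(Counter(words).most_common(5))  # most frequent content words
```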
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.
In mixed methods research, you use both qualitative and quantitative data collection and analysis methods to answer your research question.
A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.
In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
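For instance, the example of surveying 100 students could be implemented as simple random sampling, as in the sketch below; the student IDs are placeholders and the code uses only Python’s standard library.

```python
# Sketch of simple random sampling: drawing 100 students from a larger
# population so that every student has an equal chance of selection.
import random

population = [f"student_{i:05d}" for i in range(20_000)]  # placeholder sampling frame
random.seed(42)                                           # fixed seed for reproducibility
sample = random.sample(population, k=100)                 # sampling without replacement

print(len(sample), sample[:5])
```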
The research methods you use depend on the type of data you need to answer your research question.
Methodology refers to the overarching strategy and rationale of your research project. It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.
Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys, and statistical tests).
In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section.
In a longer or more complex research project, such as a thesis or dissertation, you will probably include a methodology section, where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.
The fundamental importance of method to theory. Nature Reviews Psychology 2, 55–66 (2023).
Many domains of inquiry in psychology are concerned with rich and complex phenomena. At the same time, the field of psychology is grappling with how to improve research practices to address concerns with the scientific enterprise. In this Perspective, we argue that both of these challenges can be addressed by adopting a principle of methodological variety. According to this principle, developing a variety of methodological tools should be regarded as a scientific goal in itself, one that is critical for advancing scientific theory. To illustrate, we show how the study of language and communication requires varied methodologies, and that theory development proceeds, in part, by integrating disparate tools and designs. We argue that the importance of methodological variation and innovation runs deep, travelling alongside theory development to the core of the scientific enterprise. Finally, we highlight ongoing research agendas that might help to specify, quantify and model methodological variety and its implications.
A.S.W. was supported by the National Science Foundation (grants 1529127 and 1539129/1827744) and by the James S. McDonnell Foundation (https://doi.org/10.37717/220020507). K.L.J. was supported by the National Science Foundation (grant 2017245).
Authors and affiliations
Department of Communication, University of California, Los Angeles, Los Angeles, CA, USA
Rick Dale, Anne S. Warlaumont & Kerri L. Johnson
Department of Psychology, University of California, Los Angeles, Los Angeles, CA, USA
Kerri L. Johnson
R.D. discussed the submission theme with the editor and wrote the first draft. A.S.W. and K.L.J. refined and added to this plan and contributed major new sections of writing and revision. All authors contributed to developing the figures.
Correspondence to Rick Dale.
Competing interests
The authors declare no competing interests.
Peer review information
Nature Reviews Psychology thanks Berna Devezer; Michael Frank, who co-reviewed with Anjie Cao; and Justin Sulik for their contribution to the peer review of this work.
Dale, R., Warlaumont, A.S. & Johnson, K.L. The fundamental importance of method to theory. Nat Rev Psychol 2, 55–66 (2023). https://doi.org/10.1038/s44159-022-00120-5
Accepted: 22 September 2022
Published: 29 November 2022
Issue Date: January 2023
Choosing an optimal research methodology is crucial for the success of any research project. The methodology you select will determine the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available, along with their strengths and weaknesses, is thus imperative for making an informed decision.
There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey-based. Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be chosen and modified depending on the research hypotheses and objectives.
When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people’s experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.
Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis.
The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, close-ended survey questions, and countable and numbered observations. Data analysis for quantitative research relies heavily on statistical methods.
The methods used for data analysis also differ for qualitative and quantitative research. As mentioned earlier, quantitative data is generally analysed using statistical methods and does not leave much room for speculation; it is more structured and follows a predetermined plan. In quantitative research, the researcher starts with a hypothesis and uses statistical methods to test it. In contrast, methods used for qualitative data analysis identify patterns and themes within the data rather than providing statistical measures of it. Qualitative analysis is an iterative process, in which the researcher goes back and forth, trying to gauge the larger implications of the data through different perspectives and revising the analysis if required.
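As a small illustration of such a predetermined statistical plan, the sketch below runs a chi-square test of independence on invented counts from a close-ended survey item, using SciPy; in practice the hypothesis, test and significance level would all be fixed before the data are analysed.

```python
# Illustrative sketch: a chi-square test of independence on counts from a
# close-ended survey item (invented counts), as one example of a
# predetermined quantitative analysis.
from scipy.stats import chi2_contingency

# Rows: exposed / not exposed to the intervention; columns: satisfied / not satisfied
table = [[45, 15],
         [30, 30]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# This contrasts with the iterative, interpretive cycle used in qualitative analysis.
```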
The choice between qualitative and quantitative research will depend on the gap that the research project aims to address and the specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people’s experiences or perspectives, qualitative research may be more suitable.
In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.
Rakesh Garg
Department of Onco-anaesthesiology and Palliative Medicine, Dr. BRAIRCH, All India Institute of Medical Sciences, New Delhi, India
The conduct of research requires a systematic approach involving diligent planning and execution as planned. It comprises various essential predefined components such as aims, population, conduct/technique, outcome and statistical considerations. These need to be objective, reliable and repeatable. Hence, an understanding of the basic aspects of methodology is essential for any researcher. This is a narrative review focusing on various aspects of the methodology for the conduct of clinical research. Relevant keywords were used for the literature search across various databases and in the bibliographies of retrieved articles.
Research is a process for acquiring new knowledge through a systematic approach involving diligent planning and interventions for the discovery or interpretation of the newly gained information.[1, 2] The reliability and validity of a study’s outcome depend on a well-designed study with an objective, reliable and repeatable methodology, together with appropriate conduct, data collection and analysis with logical interpretation. Inappropriate or faulty methodology makes a study unacceptable and may even provide clinicians with faulty information. Hence, understanding the basic aspects of methodology is essential.
This is a narrative review based on a search of the existing literature. It focuses on specific aspects of the methodology for the conduct of a research study/clinical trial. The relevant keywords for the literature search included ‘research’, ‘study design’, ‘study controls’, ‘study population’, ‘inclusion/exclusion criteria’, ‘variables’, ‘sampling’, ‘randomisation’, ‘blinding’, ‘masking’, ‘allocation concealment’, ‘sample size’, ‘bias’ and ‘confounders’, alone and in combinations. The search engines included PubMed/MEDLINE, Google Scholar and Cochrane. The bibliographies of the retrieved articles were searched for manuscripts missed by the search engines, and print journals in the library were searched manually.
The following text describes the basic essentials of methodology that need to be adopted for conducting good research.
The aims and objectives of the research need to be known thoroughly and should be specified before the start of the study, based on a thorough literature search and inputs from professional experience. Aims and objectives state whether the nature of the problem (formulated as a research question or research problem) has to be investigated or whether its solution has to be found by a different, more appropriate method. The lacunae in existing knowledge help formulate a research question. These statements have to be objective and specific, with all required details such as population, intervention, control and outcome variables, along with the timing of interventions.[3, 4, 5] This helps formulate a hypothesis, which is a scientifically derived statement about a particular problem in the defined population. Hypothesis generation also depends on the type of study: a researcher’s observations initiate hypothesis generation, a cross-sectional survey generates a hypothesis, an observational study establishes associations and supports or rejects the hypothesis, and an experiment finally tests the hypothesis.[5, 6, 7]
The flow of a study with an experimental design has various sequential steps [Figure 1].[1, 2, 6] Population refers to an aggregate of individuals, things, cases, etc., i.e., the observation units that are of interest and remain the focus of investigation. This reference or target population is the group to which the study outcome will be extrapolated.[6] Once this target population is identified, the researcher needs to assess whether it is possible to study all of its individuals for an outcome. Usually all cannot be included, so a study population is sampled. An important attribute of a sample is that every individual should have an equal and non-zero chance of being included in the study. The sampling should also be independent, i.e., the selection of one individual does not influence the inclusion or exclusion of another. In clinical practice, sampling is restricted to a particular place (patients attending clinics or posted for surgery) or includes multiple centres, rather than sampling the universe. Hence, the researcher should be cautious in generalising the outcomes. For example, in a tertiary care hospital patients are referred and may have more risk factors compared with primary centres, where patients with lesser severity are managed. Researchers must therefore disclose details of the study area. The study period also needs to be disclosed, as it helps readers understand the population characteristics and indicates the relevance of the study with respect to the present period.
Figure 1: Flow of an experimental study
The sample size has to be pre-determined, approached analytically and sufficiently large to represent the population.[7, 8, 9] Including too large a sample leads to wastage of resources, is time-consuming and risks missing the true treatment effect because of the heterogeneity of a large population.[6] If a study is too small, it will not provide a suitable answer to the research question. The main determinants of the sample size include the clinical hypothesis, primary endpoint, study design, probability of Type I and Type II errors, power and the minimum treatment difference of clinical importance.[7] Attrition of patients should also be accounted for during the sample size calculation.[6, 9]
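As a hedged illustration of how such a calculation is often performed (not a prescription from this review), the sketch below computes the per-group sample size for comparing two means using the common normal-approximation formula; the effect size, standard deviation, alpha, power and attrition figures are invented.

```python
# Illustrative sketch: sample size per group for comparing two means with a
# two-sided test, using n = 2 * ((z_alpha/2 + z_beta) * sigma / delta)^2.
from math import ceil
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.80):
    """sigma: expected SD; delta: minimum clinically important difference."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Example: detect a 10-unit difference with SD 20, alpha 0.05, power 80%
print(n_per_group(sigma=20, delta=10))   # about 63 per group, before attrition
# Inflate for expected attrition, e.g. 10% dropout: n / (1 - 0.10)
```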
An appropriate study design is essential for obtaining the best possible and most reliable estimate of the intervention’s outcome. The selection of a study design is based on parameters such as the objectives, therapeutic area, treatment comparison, outcome and phase of the trial.[6] Study designs may be broadly classified as observational (descriptive or analytical) or experimental (interventional).[5, 6, 7]
For studying causality, analytical observational studies are prudent, to avoid posing risk to subjects. For clinical drugs or techniques, an experimental study is more appropriate.[6] In a randomised controlled trial (RCT), the treatments are concurrent, i.e., the active and control interventions take place over the same period. An RCT may use a parallel-group design, wherein the treatment and control are allocated to different individuals; this requires comparing a placebo group or a gold-standard intervention (control) with the newer agent or technique.[6] In a matched-design RCT, randomisation is between matched pairs. In a cross-over design, two or more treatments are administered sequentially to the same subject, so each subject acts as its own control; however, researchers should be aware of the ‘carryover effect’ of the previous intervention, and a suitable washout period needs to be ensured. In a cohort study design, subjects with a disease/symptom, or free of the study variable, are followed for a particular period. Cross-sectional studies examine the prevalence of disease and are used for surveys and for validating instruments, tools and questionnaires. Qualitative research is a study design wherein a health-related issue in the population is explored with regard to its description, exploration and explanation.[6]
A control is required because the disease may be self-remitting, and because of the Hawthorne effect (a change in the response or behaviour of subjects when included in a study), the placebo effect (patients feel improvement even with a placebo), the effect of confounders, co-intervention and the regression-to-the-mean phenomenon (for example, white-coat hypertension: patients may have a higher value of the study parameter at recruitment that subsequently normalises).[2, 6, 7] The control could be a placebo, no treatment, a different dose, regimen or intervention, or the standard/gold-standard treatment. Withholding routine care in order to use a placebo is undesirable and unethical. For instance, when studying an analgesic regimen, it would be unethical not to administer analgesics to the control group; it is advisable to continue the standard of care, i.e., providing routine analgesics, even in the control group. The use of a placebo or no treatment may be considered where no current proven intervention exists, or where a placebo is required to evaluate the efficacy or safety of an intervention without serious or irreversible harm.
The comparisons to be made among the study groups also need to be specified.[ 6 , 7 , 9 ] These comparisons may aim to demonstrate superiority, non-inferiority or equivalence among groups. Superiority trials demonstrate superiority either to a placebo in a placebo-controlled trial or to an active control treatment. Non-inferiority trials aim to show that the efficacy of an intervention is not worse than that of the active comparator by more than a pre-specified margin. Equivalence trials demonstrate that the outcomes of two or more interventions differ only by a clinically unimportant margin, so that either technique or drug may be clinically acceptable.
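As a rough illustration of how a non-inferiority comparison is typically analysed, the sketch below (Python with NumPy and SciPy; the simulated data and the margin are invented for the example) declares non-inferiority only if the lower confidence bound of the treatment difference lies above the negative of the pre-specified margin.

```python
# Illustrative non-inferiority check: the new treatment is considered non-inferior
# if the lower bound of the CI for (new - standard) lies above -margin.
# All numbers here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
new = rng.normal(10.0, 2.0, 60)        # outcome under the new intervention
standard = rng.normal(10.2, 2.0, 60)   # outcome under the active comparator
margin = 1.0                            # pre-specified non-inferiority margin

diff = new.mean() - standard.mean()
se = np.sqrt(new.var(ddof=1) / len(new) + standard.var(ddof=1) / len(standard))
ci_low = diff - stats.norm.ppf(0.975) * se

print(f"Difference {diff:.2f}, lower 95% CI bound {ci_low:.2f}")
print("Non-inferior" if ci_low > -margin else "Non-inferiority not shown")
```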
The study tools, such as measurement scales, questionnaires and scoring systems, need to be specified with an objective definition. These tools should be validated before use, and their appropriate use by the research staff is mandatory to avoid bias. They should be simple and easily understandable to everyone involved in the study.
In clinical research, a specific, relatively homogeneous patient population needs to be selected.[ 6 ] Inclusion and exclusion criteria define who can be included in or excluded from the study sample. The inclusion criteria identify the study population in a consistent, reliable, uniform and objective manner. The exclusion criteria comprise factors or characteristics that make the recruited population ineligible for the study; these factors may be confounders for the outcome parameter. For example, patients with liver disease would be excluded if coagulation parameters could affect the outcome. Exclusion criteria are applied to subjects who already meet the inclusion criteria; they are not simply the reverse of the inclusion criteria.
Variables are the defined characteristics/parameters being studied. A clear, precise and objective definition of how each characteristic is measured needs to be provided.[ 2 ] Variables should be measurable and interpretable, sensitive to the objective of the study and clinically relevant. The most common end-points relate to efficacy, safety and quality of life. The study variables could be primary or secondary.[ 6 ] The primary end-point, of which there is usually only one, provides the most relevant, reliable and convincing evidence related to the aim and objective. It is the characteristic on the basis of which the research question/hypothesis has been formulated, it reflects clinically relevant and important treatment benefits, and it determines the sample size. Secondary end-points are other outcomes indirectly related to, or closely associated with, the primary objective, or they may be associated effects/adverse effects of the intervention. The timing of measurement of the variables must be defined a priori; measurements are usually made at screening, at baseline and at completion of the trial.
The study end-point may be clinical or surrogate in nature. A clinical end-point relates directly to clinical implications, i.e. a beneficial outcome of the intervention. A surrogate end-point is indirectly related to patient benefit and is usually a laboratory measurement or physical sign used as a substitute for a clinically meaningful end-point. Surrogate end-points are more convenient, more easily measured, repeatable and faster to obtain.
Randomisation.
Randomisation or random allocation is a method of allocating individuals to one of the groups (arms) of a study.[ 1 , 2 ] It underpins the basic assumptions required for statistical analysis of the data. Randomisation maximises statistical power, especially in subgroup analyses, and minimises selection bias and allocation bias (or confounding). It leads to all characteristics, measured or unmeasured, visible or invisible, known or unknown, being distributed equally between the groups. Randomisation uses various strategies according to the study design and outcome.
This technique does not give all individuals in the population an equal and non-zero chance of being selected for the sample.
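For random allocation itself, a minimal sketch using only the Python standard library contrasts simple randomisation with permuted-block randomisation; the arm labels and block size are arbitrary choices for the example.

```python
# Illustrative allocation schedules: simple vs permuted-block randomisation.
# Arm labels and block size are arbitrary choices for the example.
import random

random.seed(42)
arms = ["A", "B"]

# Simple randomisation: each subject allocated independently (groups may end up unequal)
simple = [random.choice(arms) for _ in range(12)]

# Permuted-block randomisation: balance is enforced within each block of 4
def block_randomise(n_subjects, block_size=4):
    schedule = []
    while len(schedule) < n_subjects:
        block = arms * (block_size // len(arms))
        random.shuffle(block)
        schedule.extend(block)
    return schedule[:n_subjects]

print("Simple: ", simple)
print("Blocked:", block_randomise(12))
```

Block randomisation keeps group sizes balanced throughout recruitment, which is why it is often preferred in smaller trials.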
Allocation concealment refers to the process of ensuring that those who enrol participants and assign them to study arms cannot foresee the upcoming allocation.[ 8 , 9 , 10 ] It is a strategy to avoid ascertainment or selection bias. For example, a researcher anticipating the outcome might preferentially recruit less sick patients to one group and sicker patients to the other; this selective recruitment would underestimate (if the treatment group is sicker) or overestimate (if the control group is sicker) the intervention effect.[ 9 ] The allocation should be concealed from the investigator until the initiation of the intervention. Hence, randomisation should be performed by an independent person who is not involved in the conduct of the study or its monitoring, and the randomisation list should be kept secret. The methods of allocation concealment include:[ 9 , 10 ]
Blinding ensures that the group to which the study subjects are assigned is not known, or easily ascertained, by those who are 'masked', i.e., participants, investigators, evaluators or the statistician, in order to limit the occurrence of bias.[ 1 , 2 ] It requires that the intervention and the standard or placebo treatment appear the same. Blinding is different from allocation concealment: allocation concealment operates before treatment is initiated, whereas blinding operates at and after the initiation of treatment. In situations such as study drugs with different formulations, or medical versus surgical interventions, blinding may not be feasible,[ 8 ] and sham blocks or needling in subjects may not be ethical. In such situations, the outcome measurement should be made as objective as possible to avoid bias, and whoever can be masked should be blinded. The research manuscript must give details about blinding, including who was blinded after assignment to interventions and the process or technique used. Blinding could be:[ 8 , 9 ]
Bias is a systematic deviation from the real, true effect (towards a better or worse outcome) resulting from a faulty study design.[ 1 , 2 ] The various steps of the study, such as randomisation, concealment, blinding, objective measurement and strict protocol adherence, reduce bias.
The potential biases in a trial include:[ 7 ]
Confounding occurs when the outcome parameters are affected by factors other than those directly relevant to the research question.[ 1 , 7 ] For example, if the impact of a drug on haemodynamics is studied in hypertensive patients, diabetes mellitus would be a confounder, as it also affects the haemodynamic response through autonomic disturbances. Hence, it is prudent at the design stage to consider all potential confounders carefully. If the confounders are known, they can be adjusted for statistically, but at the cost of precision (statistical power). Confounding can therefore be controlled either by preventing it or by adjusting for it in the statistical analysis. It can be controlled by restriction through the study design (for example, restricting the age range to 2-6 years), matching (use of constraints in the selection of the comparison group so that the study and comparison groups have a similar distribution of the potential confounder), stratification in the analysis without matching (restricting the analysis to narrow ranges of the extraneous variable) and mathematical modelling in the analysis (use of advanced statistical methods such as multiple linear regression and logistic regression). Strategies during data analysis include stratified analysis using the Mantel-Haenszel method to adjust for confounders, a matched-design approach, data restriction and model fitting using regression techniques.
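As a hedged illustration of the statistical-adjustment strategy mentioned above, the sketch below (assuming Python with pandas and statsmodels; the data and variable names are invented) compares a crude and a confounder-adjusted logistic regression estimate of a treatment effect.

```python
# Illustrative adjustment for a confounder using logistic regression.
# The dataset and variable names are hypothetical (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"diabetes": rng.integers(0, 2, n)})      # potential confounder
# In a non-randomised setting the confounder may also influence who receives treatment
df["treatment"] = rng.binomial(1, 0.3 + 0.3 * df["diabetes"])
# Simulated binary outcome influenced by both treatment and the confounder
logit = -0.5 + 0.4 * df["treatment"] + 0.8 * df["diabetes"]
df["poor_outcome"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

crude = smf.logit("poor_outcome ~ treatment", data=df).fit(disp=False)
adjusted = smf.logit("poor_outcome ~ treatment + diabetes", data=df).fit(disp=False)
print("Crude odds ratio:   ", round(float(np.exp(crude.params["treatment"])), 2))
print("Adjusted odds ratio:", round(float(np.exp(adjusted.params["treatment"])), 2))
```

The same idea extends to multiple linear regression for continuous outcomes and to the Mantel-Haenszel method for stratified 2x2 tables.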
A basic understanding of methodology is essential for obtaining reliable, repeatable and clinically acceptable outcomes. The study plan, including all its components, needs to be designed before the start of the study, and the study protocol should be strictly adhered to during its conduct.
Research methodology 1,2 is a structured and scientific approach used to collect, analyze, and interpret quantitative or qualitative data to answer research questions or test hypotheses. A research methodology is like a plan for carrying out research and helps keep researchers on track by limiting the scope of the research. Several aspects must be considered before selecting an appropriate research methodology, such as research limitations and ethical concerns that may affect your research.
The research methodology section in a scientific paper describes the different methodological choices made, such as the data collection and analysis methods, and why these choices were selected. The reasons should explain why the methods chosen are the most appropriate to answer the research question. A good research methodology also helps ensure the reliability and validity of the research findings. There are three types of research methodology—quantitative, qualitative, and mixed-method, which can be chosen based on the research objectives.
A research methodology describes the techniques and procedures used to identify and analyze information regarding a specific research topic. It is a process by which researchers design their study so that they can achieve their objectives using the selected research instruments. It includes all the important aspects of research, including research design, data collection methods, data analysis methods, and the overall framework within which the research is conducted. While these points can help you understand what is research methodology, you also need to know why it is important to pick the right methodology.
Having a good research methodology in place has the following advantages: 3
Types of research methodology.
There are three types of research methodology based on the type of research and the data required. 1
Sampling 4 is an important part of a research methodology and involves selecting a representative sample of the population to conduct the study, making statistical inferences about them, and estimating the characteristics of the whole population based on these inferences. There are two types of sampling designs in research methodology—probability and nonprobability.
In probability sampling, a sample is chosen from a larger population using some form of random selection, such that every member of the population has a known, non-zero chance of being selected. Common probability sampling designs include simple random, systematic, stratified, and cluster sampling.
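For illustration, here is a minimal sketch (assuming Python with pandas; the sampling frame and strata are invented) contrasting simple random sampling with stratified sampling, where units are drawn independently within each stratum.

```python
# Illustrative probability sampling from a hypothetical sampling frame.
import pandas as pd

frame = pd.DataFrame({
    "id": range(1, 1001),
    "region": ["urban"] * 700 + ["rural"] * 300,   # stratification variable
})

# Simple random sampling: every unit has the same chance of selection
srs = frame.sample(n=100, random_state=42)

# Stratified sampling: 10% drawn independently within each stratum
stratified = frame.groupby("region", group_keys=False).sample(frac=0.10, random_state=42)

print(srs["region"].value_counts())
print(stratified["region"].value_counts())
```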
During research, data are collected using various methods depending on the research methodology being followed and the research methods being undertaken. Both qualitative and quantitative research have different data collection methods, as listed below.
Qualitative research 5
Quantitative research 6
What are data analysis methods?
The data collected using the various methods for qualitative and quantitative research need to be analyzed to generate meaningful conclusions. These data analysis methods 7 also differ between quantitative and qualitative research.
Quantitative research involves a deductive method for data analysis where hypotheses are developed at the beginning of the research and precise measurement is required. The methods include statistical analysis applications to analyze numerical data and are grouped into two categories—descriptive and inferential.
Descriptive analysis is used to summarize the basic features of the data and to present them in a way that makes patterns meaningful. Common descriptive methods include measures of frequency, of central tendency (mean, median, mode), and of dispersion (range, variance, standard deviation).
Inferential analysis is used to draw conclusions about a larger population from the data collected on a sample and to study the relationships between different variables. Commonly used inferential methods include hypothesis tests (such as the t-test and ANOVA), correlation, and regression analysis.
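To make the distinction concrete, here is a minimal sketch (assuming Python with NumPy and SciPy; the data are simulated) pairing a descriptive summary with a common inferential test, the independent-samples t-test.

```python
# Illustrative descriptive summary followed by a simple inferential test.
# The two groups of scores are simulated purely for the example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(70, 10, 50)   # e.g. scores under condition A
group_b = rng.normal(65, 10, 50)   # e.g. scores under condition B

# Descriptive analysis: central tendency and dispersion
for name, g in [("A", group_a), ("B", group_b)]:
    print(f"Group {name}: mean = {g.mean():.1f}, sd = {g.std(ddof=1):.1f}")

# Inferential analysis: is the difference between the group means statistically significant?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```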
Qualitative research involves an inductive method for data analysis where hypotheses are developed after data collection. The methods include:
Here are some important factors to consider when choosing a research methodology: 8
How to write a research methodology.
A research methodology should include the following components: 3,9
The methods section is a critical part of a research paper, allowing other researchers to understand your findings and replicate your work in their own research. However, it is usually also the most difficult section to write. This is where Paperpal can help you overcome writer’s block and create the first draft in minutes with Paperpal Copilot, its secure generative AI feature suite.
With Paperpal you can get research advice, write and refine your work, rephrase and verify the writing, and ensure submission readiness, all in one place. Here’s how you can use Paperpal to develop the first draft of your methods section.
You can repeat this process to develop each section of your research manuscript, including the title, abstract and keywords. Ready to write your research papers faster, better, and without the stress? Sign up for Paperpal and start writing today!
Q1. What are the key components of research methodology?
A1. A good research methodology has the following key components:
Q2. Why is ethical consideration important in research methodology?
A2. Ethical consideration is important in research methodology to assure readers of the reliability and validity of the study. Researchers must clearly mention the ethical norms and standards followed during the conduct of the research and also mention whether the research has been cleared by an institutional board. The following 10 points are the important principles related to ethical considerations: 10
Q3. What is the difference between methodology and method?
A3. Research methodology is different from a research method, although both terms are often confused. Research methods are the tools used to gather data, while the research methodology provides a framework for how research is planned, conducted, and analyzed. The latter guides researchers in making decisions about the most appropriate methods for their research. Research methods refer to the specific techniques, procedures, and tools used by researchers to collect, analyze, and interpret data, for instance surveys, questionnaires, interviews, etc.
Research methodology is, thus, an integral part of a research study. It helps ensure that you stay on track to meet your research objectives and answer your research questions using the most appropriate data collection and analysis tools based on your research design.
Paperpal is a comprehensive AI writing toolkit that helps students and researchers achieve 2x the writing in half the time. It leverages 21+ years of STM experience and insights from millions of research articles to provide in-depth academic writing, language editing, and submission readiness support to help you write better, faster.
Get accurate academic translations, rewriting support, grammar checks, vocabulary suggestions, and generative AI assistance that delivers human precision at machine speed. Try for free or upgrade to Paperpal Prime starting at US$19 a month to access premium features, including consistency, plagiarism, and 30+ submission readiness checks to help you succeed.
Experience the future of academic writing – Sign up to Paperpal and start writing for free!
Qualitative vs quantitative vs mixed methods.
By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021
Without a doubt, one of the most common questions we receive at Grad Coach is “ How do I choose the right methodology for my research? ”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!
In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.
Overview: understanding the options (qualitative, quantitative and mixed methods research) and choosing a research methodology (the nature of the research, research area norms and practicalities).
Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative , quantitative and mixed methods -based research. Each of these options takes a different methodological approach.
Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words , descriptions , concepts or ideas – while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them.
Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture . In contrast to this, quantitative methods are usually used to confirm or test hypotheses . In other words, they have distinctly different purposes.
Mixed methods -based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.
In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach , unless the nature of their study truly warrants a mixed-methods approach.
The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job.
Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.
To choose the right research methodology for your dissertation or thesis, you need to consider three important factors . Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling , data collection methods and analysis techniques (we discuss these separately in other posts ).
The three factors you need to consider are the nature of your research, the norms of your research area, and practical constraints.
Let’s take a look at each of these.
As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions .
But, what types of research exist?
Broadly speaking, research can fall into one of three categories: exploratory, confirmatory, or a combination of the two.
As a rule of thumb, exploratory research tends to adopt a qualitative approach , whereas confirmatory research tends to use quantitative methods . This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilize a mixed-methods approach.
Let’s look at an example in action.
If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.
If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs .
So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims , objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.
The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.
If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.
Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.
A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments .
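If you do reuse an existing scale, it is still worth re-checking its internal consistency in your own sample. Here is a minimal sketch of computing Cronbach's alpha, assuming Python with pandas and NumPy and using simulated item responses rather than real survey data.

```python
# Illustrative Cronbach's alpha for a multi-item survey scale.
# The response matrix (respondents x items) is simulated; real data would be used in practice.
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)"""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(3)
latent = rng.normal(0, 1, 200)                        # underlying trait for 200 respondents
responses = pd.DataFrame({
    f"item_{i}": latent + rng.normal(0, 0.8, 200)     # five correlated items
    for i in range(1, 6)
})
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```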
Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.
When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design ) and doing what’s practical , given your constraints . This is the nature of doing research and there are always trade-offs, as with anything else.
But what constraints, you ask?
When you’re evaluating your methodological options, you need to consider the following constraints: your access to data, time, money, equipment and software, and your own skills and experience.
Let’s look at each of these.
The first practical constraint you need to consider is your access to data . If you’re going to be undertaking primary research , you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews , you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.
If you’re wanting to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require to achieve statistically significant results. For many statistical tests, a sample of a few hundred respondents is typically needed to develop convincing conclusions.
So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.
The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements . Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.
Another practical implication of time limits is that it will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional . For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon.
As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost near zero to execute, others may require a substantial budget .
Some of the costs that may arise include:
These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.
Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.
The final practical constraint, your own skills and experience, is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.
Some of the questions you should ask yourself are:
Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.
So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.
In this post, we’ve discussed how to go about choosing a research methodology. The three major deciding factors we looked at were the nature of your research, the norms of your research area, and practical constraints.
If you have any questions, feel free to leave a comment below. If you’d like a helping hand with your research methodology, check out our 1-on-1 research coaching service , or book a free consultation with a friendly Grad Coach.
This post was based on one of our popular Research Bootcamps . If you're working on a research project, you'll definitely want to check this out ...
BMC Medical Research Methodology, volume 20, Article number: 226 (2020)
Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.
We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?
Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.
The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).
In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig. 1 .
Fig. 1: Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.
The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.
The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.
Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.
Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.
Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items of Systematic reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.
These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].
There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.
Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p -values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese Journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].
Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.
In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.
Q: How should I select research reports for my methodological study?
A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].
The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
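As a small illustration of the stratified approach described above, the sketch below (assuming Python with pandas; the record list and group labels are invented) draws an equal-sized random sample of reports from each group.

```python
# Illustrative stratified sampling of research reports to obtain equal-sized groups.
# The list of records and the group labels are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "review_id": range(1, 501),
    "source": ["Cochrane"] * 120 + ["non-Cochrane"] * 380,
})

# Draw the same number of reviews at random from each group
sample = records.groupby("source", group_keys=False).sample(n=50, random_state=2024)
print(sample["source"].value_counts())
```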
Q: How many databases should I search?
A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.
Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.
Q: Should I publish a protocol for my methodological study?
A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting, the use of post hoc methodologies to embellish results, and to help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.
Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages in trying to publish protocols includes delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals, could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).
Q: How to appraise the quality of a methodological study?
A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.
Q: Should I justify a sample size?
A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:
Comparing two groups
Determining a proportion, mean or another quantifier
Determining factors associated with an outcome using regression-based analyses
For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
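A hedged sketch of that kind of precision-based calculation, using plain Python and illustrative inputs rather than the values used by El Dib et al., estimates how many research reports are needed to determine a proportion with a given confidence interval half-width.

```python
# Illustrative precision-based sample size for estimating a proportion.
# The anticipated proportion and margin of error are arbitrary example values.
from math import ceil

z = 1.96          # z-value for a 95% confidence interval
p = 0.30          # anticipated proportion of reports with the feature of interest
margin = 0.05     # desired half-width of the confidence interval

n = ceil(z ** 2 * p * (1 - p) / margin ** 2)
print(f"Approximately {n} research reports required")   # about 323
```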
Q: What should I call my study?
A: Other terms which have been used to describe/label methodological studies include “ methodological review ”, “methodological survey” , “meta-epidemiological study” , “systematic review” , “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “ systematic review” – as this will likely be confused with a systematic review of a clinical question. “ Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “ systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the words “ systematic” may be true for methodological studies and could be potentially misleading. “ Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “ review ” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “ survey ” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “ methodological study ” is broad enough to capture most of the scenarios of such studies.
Q: Should I account for clustering in my methodological study?
A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “ What variables are relevant to methodological studies?”
A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
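As a minimal sketch of this kind of analysis (assuming Python with statsmodels; the data and variable names are invented, and this is not the exact model used by Kosa et al.), a GEE with an exchangeable correlation structure treats articles as clustered within journals.

```python
# Illustrative GEE accounting for clustering of articles within journals.
# The dataset is simulated and the variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n_journals, per_journal = 20, 30
df = pd.DataFrame({
    "journal": np.repeat(np.arange(n_journals), per_journal),
    "endorses_guideline": np.repeat(rng.integers(0, 2, n_journals), per_journal),
})
journal_effect = np.repeat(rng.normal(0, 0.5, n_journals), per_journal)
logit = -0.2 + 0.6 * df["endorses_guideline"] + journal_effect
df["adequately_reported"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.gee("adequately_reported ~ endorses_guideline",
                groups="journal", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```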
Q: Should I extract data in duplicate?
A: Yes. Duplicate data extraction takes more time but results in less errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. However, much like systematic reviews, this area will likely see rapid new advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. However, experience plays an important role in the quality of extracted data and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].
Q: Should I assess the risk of bias of research reports included in my methodological study?
A : Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al., investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].
Q: What variables are relevant to methodological studies?
A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:
Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.
Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].
Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others do not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]
Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].
Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].
Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].
Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].
Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.
Q: Should I focus only on high impact journals?
A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.
Q: Can I conduct a methodological study of qualitative research?
A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.
Q: What reporting guidelines should I use for my methodological study?
A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.
Q: What are the potential threats to validity and how can I avoid them?
A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing only high-impact journals would be misleading.
Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for whether the journal endorses the relevant reporting guideline. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators, who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p-values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Other methodological studies use statistical adjustment. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
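To make the idea of statistical adjustment concrete, the sketch below shows how such an analysis might be set up in Python. It is a minimal illustration only: the dataset is simulated, the variable names (industry_funded, endorses_guideline, complete_reporting) are hypothetical, and the model is not taken from any of the studies cited above.

```python
# Minimal sketch: adjusting for a potential confounder (journal endorsement of a
# reporting guideline) when estimating the association between industry funding
# and complete reporting. All data below are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)
n = 200
endorses_guideline = rng.integers(0, 2, n)   # 1 = journal endorses the guideline
industry_funded = rng.integers(0, 2, n)      # 1 = industry-funded study

# Simulated probability of complete reporting depends on both factors.
log_odds = -0.5 + 0.4 * industry_funded + 0.8 * endorses_guideline
complete_reporting = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

articles = pd.DataFrame({
    "complete_reporting": complete_reporting,
    "industry_funded": industry_funded,
    "endorses_guideline": endorses_guideline,
})

# Unadjusted and confounder-adjusted logistic regression models.
unadjusted = smf.logit("complete_reporting ~ industry_funded", data=articles).fit(disp=False)
adjusted = smf.logit(
    "complete_reporting ~ industry_funded + endorses_guideline", data=articles
).fit(disp=False)

print(unadjusted.params["industry_funded"], adjusted.params["industry_funded"])
```

Comparing the funding coefficient before and after adjustment indicates how much of the crude association is explained by the confounder.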
With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be explicit. For example, findings from methodological studies of trials published in high impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) using an appropriate, justified, randomly selected sample of research reports.
Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially since in many instances it is possible to include the entire target population in the sample studied.
Finally, methodological studies are not exempt from information bias which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.
In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:
What is the aim?
Methodological studies that investigate bias
A methodological study may be focused on exploring sources of bias in primary or secondary studies (meta-bias), or how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction would be important is in the case of a randomized trial with no blinding. This study (depending on the nature of the intervention) would be at risk of performance bias. However, if the authors report that their study was not blinded, they would have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].
Methodological studies that investigate quality (or completeness) of reporting
Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].
Methodological studies that investigate the consistency of reporting
Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].
Methodological studies that investigate factors associated with reporting
In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].
Methodological studies that investigate methods
Methodological studies may also be used to describe or compare methods, and to examine the factors associated with the choice of methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].
Methodological studies that summarize other methodological studies
Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].
Methodological studies that investigate nomenclature and terminology
Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].
Other types of methodological studies
In addition to the previously mentioned types of methodological studies, other types may exist that are not captured here.
What is the design?
Methodological studies that are descriptive
Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
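As a purely illustrative sketch of how such descriptive summaries are typically produced, the following Python snippet computes a count (percent) for a binary reporting item and a mean (SD) and median (IQR) for a continuous item; the extracted values are hypothetical.

```python
# Hypothetical data extracted from a small set of research reports.
import statistics

reported_sample_size_calculation = [True, False, True, True, False, True, False, True, True, False]
authors_per_article = [4, 7, 3, 12, 5, 6, 9, 4, 8, 5]

n = len(reported_sample_size_calculation)
k = sum(reported_sample_size_calculation)
print(f"Reported a sample size calculation: {k}/{n} ({100 * k / n:.0f}%)")

mean = statistics.mean(authors_per_article)
sd = statistics.stdev(authors_per_article)
median = statistics.median(authors_per_article)
q1, _, q3 = statistics.quantiles(authors_per_article, n=4)
print(f"Authors per article: mean {mean:.1f} (SD {sd:.1f}), median {median} (IQR {q1}-{q3})")
```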
Methodological studies that are analytical
Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
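For illustration, an analytical comparison of two proportions can be set up as below. This is a generic sketch with made-up counts, not the data from the cited review.

```python
# Hypothetical sketch: do two groups of reviews differ in the proportion
# reporting positive conclusions? Counts are illustrative only.
from statsmodels.stats.proportion import proportions_ztest

positive_conclusions = [58, 34]   # reviews with positive conclusions in each group
reviews_examined = [100, 100]     # reviews examined in each group

z_stat, p_value = proportions_ztest(count=positive_conclusions, nobs=reviews_examined)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```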
What is the sampling strategy?
Methodological studies that include the target population
Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n = 103) [ 30 ].
Methodological studies that include a sample of the target population
Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a specific topic. Systematic sampling can also be used when random sampling is challenging to implement.
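A minimal sketch of how random and systematic samples of research reports might be drawn is shown below; the record identifiers and sample sizes are arbitrary.

```python
# Hypothetical sampling frame of 1,000 eligible research reports.
import random

records = [f"record-{i:04d}" for i in range(1, 1001)]
random.seed(42)  # seeded so the selection is reproducible

# Simple random sample of 100 reports.
random_sample = random.sample(records, k=100)

# Systematic sample: every k-th report after a random start.
k = len(records) // 100
start = random.randrange(k)
systematic_sample = records[start::k][:100]

print(len(random_sample), len(systematic_sample))
```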
What is the unit of analysis?
Methodological studies with a research report as the unit of analysis
Many methodological studies use a research report (e.g. the full manuscript or the abstract of a study) as the unit of analysis, in which case inferences are made at the study level. Note that both published and unpublished research-related reports can be studied; these may include articles, conference abstracts, registry entries, etc.
Methodological studies with a design, analysis or reporting item as the unit of analysis
Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
This framework is outlined in Fig. 2.
Fig. 2 A proposed framework for methodological studies
Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.
In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.
Data sharing is not applicable to this article as no new data were created or analyzed in this study.
CONSORT: Consolidated Standards of Reporting Trials
EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe
GRADE: Grading of Recommendations, Assessment, Development and Evaluations
PICOT: Participants, Intervention, Comparison, Outcome, Timeframe
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
SWAR: Studies Within a Review
SWAT: Studies Within a Trial
Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.
Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.
Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.
Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.
Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.
Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.
Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.
Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.
Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.
Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.
Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.
Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.
Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.
Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–95.
Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.
Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.
Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.
Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.
Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.
Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.
The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.
Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.
Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.
Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.
Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.
Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.
Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.
Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.
The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.
Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.
Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.
Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.
Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.
Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.
De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.
Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.
Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.
Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.
Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.
El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.
Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.
Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.
Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.
Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.
Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.
Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.
Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.
Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.
Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.
Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.
Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.
Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.
Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.
Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.
Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.
Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.
de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.
Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.
Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.
Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.
Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.
Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.
Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.
Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.
Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.
Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.
Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.
Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.
Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.
Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.
Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.
METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.
Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.
Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.
Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.
Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.
Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.
Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.
Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.
Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.
Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.
Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.
Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.
Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.
Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.
Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.
Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.
Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.
Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.
Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.
Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.
Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.
Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.
This work did not receive any dedicated funding.
Authors and affiliations.
Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada
Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane
Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada
Lawrence Mbuagbaw & Lehana Thabane
Centre for the Development of Best Practices in Health, Yaoundé, Cameroon
Lawrence Mbuagbaw
Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia
Livia Puljak
Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA
David B. Allison
Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada
Lehana Thabane
Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada
Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada
LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.
Correspondence to Lawrence Mbuagbaw .
Ethics approval and consent to participate.
Not applicable.
Competing interests.
DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7
Received : 27 May 2020
Accepted : 27 August 2020
Published : 07 September 2020
Posted by Md. Harun Ar Rashid | Mar 28, 2022 | Research Methodology
Research methodology is a collective term for the structured process of conducting research. There are many different methodologies used in various types of research, and the term is usually taken to include research design, data gathering, and data analysis. Research methodology seeks to answer questions such as: why the research study was undertaken, how the research problem was defined, in what way and why the hypothesis was formulated, what data were collected, which method of data collection was adopted, and why a particular technique of data analysis was used. These and similar questions are usually answered when we talk about the research methodology behind a research problem or study.
In simple terms, research methodology gives the researcher a clear idea of what he or she is carrying out and how. It provides the platform for planning the work at the right point in time and for mapping out the research so that it advances on a solid footing. Moreover, research methodology encourages the researcher to be engaged and active in his or her particular field of inquiry. The aim of the research and the research topic will not always coincide, since they vary with the objectives and flow of the research, but by adopting a suitable methodology the two can be brought into alignment.
Right from selecting the topic to carrying out the research, the research methodology keeps the researcher on the right track; the entire research plan is built on the concept of the right research methodology. The external environment also shapes the research through the methodology: it informs the setting of the right research objective, followed by the literature review, the chosen analysis (for example, interviews or questionnaires), the findings obtained, and finally the concluding message of the research.
Source: Hentschel (1999)
The research methodology also constitutes the internal environment: understanding and identifying the right type of research, strategy, philosophy, time horizon, and approaches, followed by the right procedures and techniques for the research work. In this sense, the research methodology acts as the nerve centre of the study, because the entire research is bounded by it; to perform good research work, both the internal and external environment must follow the right research methodology process.
The system of collecting data for a research project is known as the research methodology. The data may be collected for either theoretical or practical research; for example, management research may be strategically conceptualized along with operational planning methods and change management. Important considerations in research methodology include the validity and reliability of the research data and research ethics, and much of the work is finished by the time the analysis of the data is complete. Data collection is followed by the research design, which may be either experimental or quasi-experimental. The last two stages are data analysis and writing up the research paper, in which results are organized carefully into graphs and tables so that only the important, relevant data are shown.
Importance of Research Methodology in Research
It is necessary for a researcher to design a research methodology for the problem chosen. Note that even if the research methods considered for two problems are the same, the research methodology may differ. It is important for the researcher to know not only the research methods necessary for the research undertaken but also the methodology. For example, a researcher needs to know how to calculate the mean, variance, and distribution function for a set of data, how to find a solution to a physical system described by a mathematical model, how to determine the roots of algebraic equations, and how to apply a particular method; but the researcher also needs to know (i) which method is suitable for the chosen problem, (ii) what the order of accuracy of the result of a method is, and (iii) how efficient the method is. Considerations of these aspects constitute a research methodology. More precisely, research methods help us find a solution to a problem, whereas research methodology is concerned with the explanation of the following:
(1) Why is a particular research study undertaken?
(2) How did one formulate a research problem?
(3) What types of data were collected?
(4) What particular method has been used?
(5) Why was a particular technique of analysis of data used?
The study of research methods provides training in applying them to a problem. The study of research methodology provides the necessary training in choosing research methods, materials, scientific tools, and techniques relevant to the problem chosen.
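To make the distinction concrete: computing a summary statistic is a research method, while the justification for choosing that summary over alternatives belongs to methodology. The snippet below is a trivial, hypothetical illustration of the former.

```python
# A research method in the narrow sense: computing the mean and variance of a
# set of measurements. The data are invented for illustration.
import statistics

measurements = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
print("mean:", round(statistics.mean(measurements), 2))
print("variance:", round(statistics.variance(measurements), 3))
```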
Types of Research Methodologies
Research methodologies can be quantitative or qualitative. Ideally, comprehensive research should try to incorporate both qualitative and quantitative methodologies but this is not always possible, usually due to time and financial constraints. Research methodologies are generally used in academic research to test hypotheses or theories. A good design should ensure the research is valid, i.e. it clearly tests the hypothesis and not extraneous variables, and that the research is reliable, i.e. it yields consistent results every time.
Qualitative Research Methodology: a highly subjective research discipline, designed to look beyond percentages to gain an understanding of feelings, impressions, and viewpoints.
Key Characteristics of Qualitative Research
Quantitative Research Methodology: as the term suggests, this is concerned with the collection and analysis of data in numeric form. It tends to emphasize relatively large-scale and representative sets of data, and is often, falsely in our view, presented or perceived as being about the gathering of “facts”.
Key Characteristics of Quantitative Research
Hamish Prince, Solutions Manager
While you might have heard the term before, or have a basic grasp of its importance within your organization, you might still be asking yourself: What is a cash flow statement? If you’ve been tasked with working on this important financial document, it’s important that you fully understand both its purpose and what is reported in the statement.
By understanding a cash flow statement, you will be able to better assess whether your company has enough cash to meet its short-term obligations, evaluate the efficiency of core operations in generating cash, and evaluate the potential returns and risks associated with investment opportunities.
This article should help you understand the essentials of cash flow reporting, and how leveraging a modern financial management platform such as Workiva can simplify cash flow analysis.
Cash flow statement: definition.
The cash flow statement provides information about the cash inflows and outflows of a business during a specific period, typically monthly, quarterly, or annually.
It’s split up into three main sections: operating activities, investing activities, and financing activities, presenting a summary of how cash has been generated and spent by a company.
The statement provides a comprehensive view of a company’s financial performance and financial positioning, complementing an organization’s income statement and balance sheet. It provides insight into a company’s liquidity, operational efficiency, investment potential, and overall financial health during financial reporting and financial analysis.
You can read How to Prepare Statements of Cash Flows for some helpful tips on how to prepare a cash flow statement correctly.
It’s important to understand all three elements of a cash flow statement to grasp how the cash reporting process works.
Operating activities appear at the top of a cash flow statement and detail ongoing business activities such as sales and manufacturing. This section is designed to show where a company gets its cash from and how it uses that money during any given period of time.
While the operating activities section of a cash flow statement is concerned with a company’s day-to-day income, the investing activities section looks at long-term cash usage, such as buying or selling property or essential equipment. It also covers the sale of a division or the cash paid out in a merger or acquisition.
The final section, financing activities, details the flow of cash between a business, its owners, and its creditors. It shows how a business raises capital and pays back its investors, including activities such as issuing and selling stock, paying cash dividends, and taking on loans.
There are two methods for preparing a cash flow statement: the direct method and the indirect method.
While less common than the indirect method, the direct method has its advantages. In fact, even though each method is accepted under both Generally Accepted Accounting Principles (GAAP) and International Financial Reporting Standards (IFRS), both sets of guidelines actually encourage the use of the direct method.
Instead of modifying the operating section from accrual accounting to a cash basis, the direct method uses actual cash inflows and outflows—such as cash received from customers, cash paid to suppliers and employees, and cash paid for other operating expenses—from the company’s operations.
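As a simple illustration of the direct method, the sketch below uses invented figures; the point is that operating cash flow is assembled directly from cash receipts and payments.

```python
# Hypothetical direct-method calculation of net cash from operating activities.
cash_received_from_customers = 480_000
cash_paid_to_suppliers = -210_000
cash_paid_to_employees = -150_000
cash_paid_for_other_operating_expenses = -45_000

operating_cash_flow_direct = (
    cash_received_from_customers
    + cash_paid_to_suppliers
    + cash_paid_to_employees
    + cash_paid_for_other_operating_expenses
)
print(f"Net cash from operating activities (direct): {operating_cash_flow_direct:,}")  # 75,000
```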
The more commonly used indirect method starts with net income and then adjusts for non-cash transactions and changes in working capital to arrive at the cash flow from operating activities. Indirect cash flow statements help stakeholders understand how company operations will contribute to the company’s current cash flow.
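The indirect method can be illustrated with the same hypothetical company: starting from net income, non-cash expenses are added back and changes in working capital are applied. All figures are invented for illustration.

```python
# Hypothetical indirect-method calculation of net cash from operating activities.
net_income = 60_000
depreciation_and_amortization = 25_000      # non-cash expense, added back
increase_in_accounts_receivable = -12_000   # sales booked but cash not yet collected
decrease_in_inventory = 8_000               # cash freed up as inventory is sold down
decrease_in_accounts_payable = -6_000       # cash used to pay down suppliers

operating_cash_flow_indirect = (
    net_income
    + depreciation_and_amortization
    + increase_in_accounts_receivable
    + decrease_in_inventory
    + decrease_in_accounts_payable
)
print(f"Net cash from operating activities (indirect): {operating_cash_flow_indirect:,}")  # 75,000
```

Both routes arrive at the same operating cash flow in this example; they differ only in how the figure is built up.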
After putting together a cash flow statement, you’ll have a better idea of what is going on financially throughout your business, but what does a cash flow statement show? Put simply, it is a report of how much cash is flowing into or out of a business over a specified period.
If you have a positive cash flow, you have more cash flowing into the business than is going out of it. On the other hand, if you have a negative cash flow, more money is leaving your organization than is flowing in.
While a positive or negative cash flow does not necessarily directly translate to profits gained or lost, it’s generally better to have a positive cash flow. An excess of cash allows companies to reinvest and find new ways to grow their businesses, whereas a negative cash flow means that there’s more cash leaving a business than entering it.
To properly read and analyze a cash flow statement, an organization needs to compare statements over multiple periods to see if there are any noticeable trends or warning signs.
When analyzing your cash flow statement, there are several key indicators to look for.
There are various techniques for analyzing cash flow statements, and the most basic is comparing outlays to inflows to determine whether cash flow is positive or negative.
Another technique is to analyze the ratio of operating cash flow to net sales (revenue), which tells a business how much cash is generated per unit of sales revenue.
One of the most important techniques is free cash flow: capital expenditures subtracted from net operating cash flow. It matters because it shows how efficient a company is at generating cash. Investors use free cash flow to gauge whether a company has enough money to pay investors through dividends and share buybacks after it has funded operations and capital expenditures.
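The two measures just described reduce to simple arithmetic, sketched below with invented inputs.

```python
# Hypothetical inputs for basic cash flow analysis figures.
operating_cash_flow = 75_000
net_sales = 500_000
capital_expenditures = 30_000

ocf_to_sales_ratio = operating_cash_flow / net_sales          # cash generated per unit of revenue
free_cash_flow = operating_cash_flow - capital_expenditures   # cash left after capital spending

print(f"Operating cash flow to net sales: {ocf_to_sales_ratio:.0%}")  # 15%
print(f"Free cash flow: {free_cash_flow:,}")                          # 45,000
```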
With a general understanding of what a cash flow statement is and why financial reporting is important, it’s imperative to also learn strategies for improving cash flow management. This can help you to maintain financial stability and support business growth.
The best way to manage cash flow better is to look for ways to optimize operations, investing, and financing activities that will enhance cash flows. However, it can be complicated, as you need to closely monitor capital expenditures, optimize working capital through careful assessment, and implement capital management strategies. It’s also important to track key cash flow metrics and ratios to monitor liquidity and identify potential issues early.
Fortunately, SaaS solutions like the Workiva platform make it easier than ever to improve cash flow planning and management. They allow for real-time data access, automating financial processes like invoicing, expense tracking, and accounts receivable management.
But what is the purpose of a cash flow statement compared to other financial statements such as income statements or balance sheets?
Whereas a cash flow statement shows cash movements from operating, investing, and financing activities, income statements illustrate company profitability under accrual accounting rules. Balance sheets show company assets, liabilities, and shareholder equity.
All three of these financial reporting methods complement each other and play an integrated role in assessing a business’s financial performance during a specific time frame.
However, cash flow statements provide a unique perspective on financial health. Unlike the balance sheet, which captures financial position at a single point in time, and the income statement, which reports accrual-based profitability over a period, the cash flow statement tracks the actual cash inflows and outflows during the reporting period.
In recent history, proper cash flow management has led to some overwhelmingly positive results. For example, Apple’s success story is often attributed to both its innovative products and its meticulous cash flow management .
By maintaining a strong focus on cash flow optimisation and liquidity management, Apple accumulated a significant cash reserve, allowing the company to survive economic downturns, invest in research and development and pursue strategic acquisitions like Beats Electronics and Shazam.
Microsoft has also focused its attention on accelerating cash flow generation from its core software products, while expanding into high-growth areas like cloud computing and artificial intelligence. This approach to capital allocation and cash flow optimisation has enabled the company to invest in innovation and make strategic acquisitions such as LinkedIn and GitHub.
Unfortunately, companies like Toys “R” Us, Blockbuster, Kodak and RadioShack have also faced significant challenges from poor cash flow management, which has led them to financial distress, disruptions to operations and even bankruptcy.
Cash flow statements provide valuable insights, and it’s fundamental that organisations prioritise understanding and managing their cash flow to ensure long-term business health. Financial reporting tools like the Workiva platform help to make this task simpler, helping businesses to improve financial decision-making and drive their sustainable growth.
Try Workiva today! Request a demo to see how the Workiva platform brings clarity and confidence as you pull data and insights from your financial statements.
Hamish has helped clients improve their financial reporting process for over 20 years. This has involved helping clients implement collaborative workflows, SEC compliance with HTML for their annual reports and Form 20-Fs, iXBRL for HMRC, and now iXBRL for the ESEF mandate. While these changes are often driven by external events, helping clients successfully implement change within their organisations is often the greatest challenge.
Early identification plays a crucial role in providing timely support to students with learning disabilities, such as dyslexia, in order to overcome their reading difficulties. However, there is significant variability in the methods used for identifying dyslexia. This study aimed to explore and understand the practices of dyslexia identification in the UK. A survey was conducted among 274 dyslexia professionals, including educational psychologists and dyslexia specialists, to investigate the types of assessments they employ, their approach to utilizing assessment data, their decision-making processes, and their conceptualization of dyslexia. Additionally, the study examined whether these professionals held any misconceptions or myths associated with dyslexia. Analysis of the survey data revealed substantial variability in how professionals conceptualize dyslexia, as well as variations in assessment methods. Furthermore, a significant proportion of the survey respondents subscribed to one or more misconceptions regarding dyslexia; the most common misconception identified among professionals was the belief that children with dyslexia read letters in reverse order. The findings highlight the need for standardized approaches to dyslexia identification and debunking prevailing misconceptions. The implications of these findings are discussed, emphasizing the importance of informed policy and practice in supporting students with dyslexia. Recommendations are provided to enhance consistency and accuracy in dyslexia identification, with the aim of facilitating early intervention and support for affected students.
Students identified with learning disabilities such as dyslexia are defined as those who demonstrate difficulties in reading skills compared to peers, despite opportunities to learn to read. Intervention efforts to help students overcome their reading challenges generally show greater effects when delivered in the early primary grades than when students are identified in the secondary grades (Scammacca et al., 2013 , 2016 ). Indeed, a wealth of data supports early identification as one of the key factors in helping students overcome their reading challenges (see Fletcher et al., 2019 ).
However, the identification process and the criteria used to identify students with dyslexia have been a subject of ongoing debate (see Elliott & Grigorenko, 2014 ). While there is consensus in the field regarding what does not constitute dyslexia, there are debates over its specific definition and identification procedures (e.g., Elliott, 2020 ). Despite the critical importance of accurately identifying dyslexia, there remains a notable gap in the literature regarding the assessment processes used in the UK. Thus, the focus of this study is to investigate what assessments, benchmarks, and procedures assessors such as educational psychologists, dyslexia specialists, and school personnel use to identify school-age children with dyslexia in the UK.
According to the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), dyslexia is defined as “…learning difficulties characterized by problems with accurate or fluent word recognition, poor decoding, and poor spelling abilities…” in the absence of other sensory, emotional, or cognitive disabilities (American Psychiatric Association, 2013 , p. 67). Thus, the core observable deficits individuals with dyslexia present are difficulties in decoding and encoding (i.e., spelling) words. In this section, we provide a brief history of dyslexia identification procedures, outline the components that are directly and indirectly associated with dyslexia identification, and highlight some misconceptions that are controversial and may influence diagnostic guidelines and assessment procedures.
Dyslexia identification has a long and complex history. One of the first observations of an individual with dyslexia was made in the late 1800s. In this report, it was noted that a 14-year-old boy who was “bright” and observed to have normal intelligence demonstrated a remarkable inability to read and spell words in isolation (Morgan, 1896 ). In an attempt to identify the cause of dyslexia, early researchers alluded to theories that this inability to read was associated with some form of “congenital deficits” or “word blindness” or “derangements of visual memory” (Hinshelwood, 1896 ; Morgan, 1896 ). It is important to note that these early researchers were vital in raising awareness of conditions associated with the inability to read; however, their inferences were based on observational data and lacked sophisticated methods to support theories associated with cognitive or visual deficits as a cause for dyslexia.
Over the years, researchers have explored different methods to identify students with learning disabilities such as dyslexia. Some of the earlier identification methods relied on hypotheses that visual deficits were a source of dyslexia. For instance, the visual-perceptual deficit hypothesis (Willows et al., 1993) proposed that reading difficulties are caused by a dysfunction in the magnocellular pathway, which is responsible for processing fast-moving, low-contrast visual information. Based on correlational studies, this pathway was thought to play a crucial role in visual perception, including the ability to perceive letter shapes accurately. However, the causal role of this pathway has not been established, and there is little empirical data to support the visual deficit hypothesis as an explanation for dyslexia (Fletcher et al., 2019; Iovino et al., 1998).
One assessment model that was predominantly used in the last century for identifying students with dyslexia and other learning disabilities, but has since been refuted, was the IQ-reading ability discrepancy model. In this identification method, an individual’s assessment scores needed to demonstrate a discrepancy between their IQ test scores and their reading scores. This method aligned with the earliest observations in which children were described as “bright” with “normal intelligence” but demonstrated an inability to read. Overwhelming evidence has since demonstrated issues with the validity of this process and its poor reliability in identification (e.g., Fletcher et al., 1998; Francis et al., 2005; Meyer, 2000; Stanovich, 1991; Stuebing et al., 2002). Thus, current evidence does not support the use of this model in the identification process.
More recently, another discrepancy model, known as the patterns of cognitive strengths and weaknesses, has been proposed for dyslexia identification (Hale et al., 2014). In this assessment model, individuals’ assessment scores need to demonstrate strengths in certain cognitive domains and weaknesses in other cognitive domains that are associated with low reading scores (Fenwick et al., 2015). However, multiple studies demonstrate a lack of reliability in identifying students with learning disabilities using this assessment method (Fletcher & Miciak, 2017; Kranzler et al., 2016; Maki et al., 2022; Miciak et al., 2015; Stuebing et al., 2012; Taylor et al., 2017). For instance, Maki et al. (2022), who examined school psychologists’ identification decisions under the patterns of cognitive strengths and weaknesses model, observed that they spent a considerable amount of time and resources administering cognitive assessments that were associated with a low probability of accurate identification.
In addition to the unreliability of this assessment method, another reported challenge is that these assessment procedures are not very informative for educators who have to plan interventions to support students diagnosed with dyslexia (Taylor et al., 2017). For instance, one past meta-analysis reported that interventions targeting improvements in students’ cognitive abilities, such as working memory, have negligible effects on academic outcomes such as reading (Kearns & Fuchs, 2013).
Another discrepancy model concerns learning opportunity and poor reading performance (de Jong, 2020), in which learning opportunity is viewed as adequate instruction received by students, and poor reading performance is considered the unexpected underachievement. In other words, dyslexia is viewed as a discrepancy between reading growth and instructional quality. Based on this perspective, the response to intervention (RTI) model was proposed (Fletcher et al., 2019; D. Fuchs et al., 2012). In the RTI model, all students are screened for reading difficulties, their reading progress is then monitored, and increasingly intense interventions are provided according to their response on progress monitoring assessments (Fletcher & Vaughn, 2009). With this approach, a dyslexia diagnosis requires a severe reading lag and two additional conditions: (a) inadequate growth in reading in general instructional settings and (b) inadequate response to small group or one-on-one evidence-based reading interventions (de Jong, 2020; Fuchs et al., 2012).
The RTI model is favored for its substantial advantages, including early intervention and academic prevention, reduction of over-identification, collaboration between general and special education, encouragement of evidence-based instruction, provision of educational services to students without labeling, and reduction of the costs associated with the identification process (Fletcher & Vaughn, 2009; D. Fuchs et al., 2012; L. S. Fuchs & Vaughn, 2012). However, the RTI model is not a panacea for dyslexia identification. Issues related to reliability and validity remain, including problems in determining what counts as adequate instruction and adequate response (Denton, 2012; Kauffman et al., 2011; O’Connor & Sanchez, 2011).
To address the problems of the above-mentioned discrepancy models, one possible solution is to integrate multiple criteria for dyslexia identification. Therefore, hybrid models have been proposed (Fletcher & Vaughn, 2009 ; Fletcher et al., 2012 ; Miciak & Fletcher, 2020 ; Rice & Gilson, 2023 ). The hybrid models may differ in the assessment implementation (Fletcher et al., 2012 ) and vary with or without the unexpectedness component (Rice & Gilson, 2023 ). Current recommendations suggest that a dyslexia diagnosis should be made based on (a) low achievement in reading, (b) inadequate response to evidence-based instructions, and (c) exclusion factors to ensure that low achievement is not due to another disability or contextual factors (Fletcher & Vaughn, 2009 ; Rice & Gilson, 2023 ).
Furthermore, assessments are always involved when identifying dyslexia, regardless of which model is applied. It is thus reasonable to consider issues related to the assessments themselves. For example, Miciak et al. (2016) suggested that it is more reliable to incorporate multiple reading assessments and to employ confidence intervals instead of rigid cut-off points during the process of dyslexia identification. In addition, cultural and language factors should be taken into consideration whenever necessary when administering assessments (American Educational Research Association et al., 2014; Fletcher et al., 2019).
In this section, we delve into the proximal causes and distal associations of dyslexia, drawing insights from Hulme and Snowling’s ( 2009 ) analogy of lung cancer. Emphasizing the significance of reliability and validity in the identification process and its relevance in instructional decision-making within the RTI or hybrid model framework, we aim to explore the key factors that contribute to a reliable identification of students with dyslexia.
Proximal causes. Proximal causes refer to factors that directly and immediately impact the outcome. Taking Hulme and Snowling’s (2009) lung cancer analogy as the exemplar, a gene mutation in the lung tissue would be a direct and proximal cause of lung cancer. Based on this analogy, proximal causes of dyslexia refer to components that directly and immediately produce poor word reading or spelling. Several theoretical models of reading have posited that successful word reading/spelling can be achieved only when multiple proximal causes function together (e.g., Gough & Tunmer, 1986), such as the ability to manipulate sounds or phonological awareness, knowledge of letter-sound relationships or decoding skills, and reading fluency (Gough & Tunmer, 1986; McArthur & Castles, 2017). Failure in any of the above factors could be directly linked to failure in reading or spelling words accurately.
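One widely cited formalization of this interdependence is Gough and Tunmer’s (1986) Simple View of Reading, commonly summarized as

$$R = D \times C$$

where $R$ is reading comprehension, $D$ is decoding (word recognition), and $C$ is linguistic comprehension. Because the relation is multiplicative rather than additive, a near-zero value for $D$ yields poor reading comprehension regardless of how strong $C$ is, which is why deficits in the proximal, word-level components listed above are so consequential.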
Distal associations. Distal associations refer to factors that have an indirect impact on the outcome. In Hulme and Snowling’s (2009) example, cigarette smoke would be a distal link to lung cancer, as it increases the risk of cancer. Regarding dyslexia, distal associations refer to cognitive components that are associated with individuals’ word reading or spelling but are not intrinsic components of reading. In the literature, examples of distal factors associated with reading are working memory, verbal memory, and attention (Burns et al., 2016; Feifer, 2008; McArthur & Castles, 2017).
Although some studies have argued that a comprehensive array of cognitive assessment data, including proximal and distal measures, would contribute to the development of suitable treatment for dyslexia (e.g., Feifer, 2008), other studies have shown that cognitive assessment data are not necessarily helpful for identification and intervention (Burns et al., 2016; Galuschka et al., 2014; McArthur & Castles, 2017). Previous studies have consistently supported the greater value of proximal measures for identification and treatment compared to distal measures (Burns et al., 2016; Galuschka et al., 2014). In a meta-analysis of 37 studies that examined the effects of using cognitive data for screening and designing interventions, a small effect was found for distal cognitive measures (i.e., intelligence tests and memory assessments), whereas larger effects were found for proximal measures (i.e., phonological awareness and reading fluency) (Burns et al., 2016). Another meta-analysis also observed that cognitively focused interventions did not generalize to improvements in reading performance (Kearns & Fuchs, 2013). In contrast, proximal interventions, which focus on the proximal causes of reading, such as phonics instruction and reading fluency training, have been shown to be more effective (e.g., Daniel et al., 2021; Scammacca et al., 2016) than distal interventions that center on distal associations of reading, such as colored overlays and sensorimotor training (Galuschka et al., 2014).
The different identification models and the evidence supporting or refuting them have given rise to a series of misconceptions that have been reported in mainstream media and the academic literature (Elliott & Grigorenko, 2014). Most of these misconceptions stem from procedures that have historical precedence but lack empirical data supporting their use in the identification process. Below we highlight some of these misconceptions, all of which share the assumption that dyslexia involves more than deficits in reading and spelling words.
Some portrayals of children with dyslexia claim that these children see letters and words reversed and that this is an indicator of dyslexia. Studies exploring letter reversals have compared dyslexic and non-dyslexic individuals and demonstrated that letter reversals are characteristic of being at a certain stage of reading development rather than a core aspect of dyslexia; these studies have reported no significant differences in letter reversals between dyslexic and non-dyslexic children and adults (Cassar et al., 2005; Peter et al., 2020). It is important to note that there is some empirical data to support the hypothesis that individuals with dyslexia misread words due to letter positioning. Some researchers have observed that individuals with dyslexia, when reading anagram words (e.g., smile and slime; tried and tired), make migration errors more frequently than control-group peers, which impacts their word reading accuracy and their comprehension (Brunsdon et al., 2006; Friedmann & Rahamim, 2007; Kohnen et al., 2012). In these experiments, individuals with dyslexia might make migration errors wherein they read the word “bowls” as “blows,” and this decoding error also impacts their comprehension. However, it is important to highlight that migration errors are different from letter reversals, and we could not locate any studies that observe letter reversals solely in individuals with dyslexia.
Other common misconceptions that are not empirically supported are that dyslexic individuals demonstrate high levels of creativity (Erbeli et al., 2021) and sensory-motor difficulties (Kaltner & Jansen, 2014; Savage, 2004). For instance, Erbeli et al. (2021) reviewed 20 studies in their meta-analysis and reported a lack of evidence to support the notion of creative benefits for individuals with dyslexia; there were no significant differences in levels of creativity between individuals with and without dyslexia.
There are also misguided recommendations for improving the reading skills of students with dyslexia that align with unsupported theories of visual-perceptual deficit. For instance, there is little evidence to recommend using colored overlays (Henderson et al., 2012; Suttle et al., 2018) or specific dyslexia fonts (Galliussi et al., 2020; Kuster et al., 2017; Joseph & Powell, 2022; Wery & Dilberto, 2016) to improve reading skills in students with dyslexia. For example, Galliussi et al. (2020) evaluated the impact of letter form, or different fonts, on typical and dyslexic individuals’ reading speed and accuracy. The authors reported no additional benefit of reading text in dyslexia-friendly fonts compared to common fonts for children with or without dyslexia.
Of concern is that if individuals assessing students for dyslexia adhere to these misconceptions, then this could lead assessors to make erroneous judgments. Thus, in our study, we explore UK dyslexia assessors’ conceptualization of dyslexia and whether they consider these misconceptions as an indicator of dyslexia.
In the United States (US), a recent study on identifying school-age students with learning disabilities showed variability in identification criteria, assessments, and diagnostic labels across a wide range of surveyed educational professionals (Al Dahhan et al., 2021). In a survey of close to 1000 assessors, the authors reported that assessors used a variety of different criteria when evaluating assessment data and that individuals faced lengthy wait times to receive assessment and diagnostic results (Al Dahhan et al., 2021). Similarly, Benson et al. (2020) reported that school psychologists in the US used various identification frameworks, including outdated ones such as intelligence-achievement discrepancy. These different frameworks resulted in varied identification decisions, impacting students’ access to support. In Norway, Andresen and Monsrud (2021) found that assessors reported consensus in the types of assessments used to identify students with dyslexia. However, their study also reported that assessors place heavy emphasis on students’ performance on intelligence tests and use reading assessments that lack reliable psychometric properties (Andresen & Monsrud, 2021). A recent systematic review of assessment practices for identifying students with dyslexia reported that a variety of practices were employed, encompassing cognitive discrepancy and response-to-intervention methods (Sadusky et al., 2021). The authors also note that most of the studies reviewed were conducted in the US, with very few studies exploring dyslexia assessment procedures in other countries (Sadusky et al., 2021). In the United Kingdom (UK), Russell et al. (2012) conducted a case study with one 6-year-old child who was assessed on multiple measures by four different professionals. The authors reported a general lack of agreement among the professionals on assessment methodology, which led to different diagnoses of the child’s areas of need. However, given that this study included only one child, it is hard to generalize these findings to assessment practices in the UK.
These past studies on diagnostic procedures in dyslexia identification highlight discrepancies in the diagnostic process among assessors, leading to inconsistent identification approaches that can affect the services students receive to overcome their learning challenges. To ensure that students with additional needs gain timely access to services, it is essential that all such students are identified reliably for support services. More importantly, it is vital to ensure that the procedures professionals undertake to identify students with dyslexia are not only reliable but also valid and aligned with current recommendations in the field. Furthermore, none of the past studies, to our knowledge, have explored methods of assessment for students who are English language learners in English-speaking countries, indicating a crucial area for future research to ensure equitable and effective diagnostic practices for this significant student population.
In the UK, the Equality Act ( 2010 ) legally protects individuals with disabilities from discrimination in society, including in educational settings. The Equality Act ( 2010 ) provides clarity that it is against the law to discriminate against someone because of “protected characteristics,” one of which is having a disability. “Disabled” is defined as having a physical or mental impairment that has substantial, long-term adverse effects on an individual’s ability to conduct day-to-day activities (Equality Act, 2010 ). However, neither dyslexia nor specific learning disabilities/difficulties are explicitly mentioned in the Equality Act.
More recently, the Children and Families Act 2014 provides regulations for the Special Educational Needs and Disability Code of Practice (Department for Education, 2014). This regulatory document mentions dyslexia as a condition associated with specific learning difficulties (SpLD). However, it does not provide a definition of what constitutes dyslexia and refers the reader to the Dyslexia-SpLD Trust for guidance. Thus, in the UK, there is no official guidance from policymakers on defining and identifying students with dyslexia or other learning difficulties.
It is also important to state that there are a variety of credentials relating to dyslexia assessment that can be obtained in the UK. For example, the British Dyslexia Association (BDA) offers Associate Membership of the British Dyslexia Association (AMBDA), which is used as an indicator of professional competence in diagnostic assessment. To apply for AMBDA, individuals must have completed an AMBDA-accredited Level 7 postgraduate course. These courses are run by various dyslexia organizations, such as Dyslexia Action and Dyslexia Matters, and example courses include a Postgraduate Certificate in Specialist Assessment for Literacy-Related Difficulties and a Level 7 Diploma in Teaching and Assessing Learners with Dyslexia, Specific Learning Differences, and Barriers to Literacy. Completion of one of these courses can then lead to an Assessment Practising Certificate (APC). An APC indicates that an assessor has completed an AMBDA-accredited course and recognizes the knowledge and skills gained from it. This credential is especially important in the UK, as the Department for Education states that a diagnosis of dyslexia will only be accepted as part of a Disabled Students’ Allowance application if it is completed by an assessor holding an APC or by a registered psychologist. Because of this, the BDA recommends that all assessors hold an APC.
There is currently no clear guidance from policymakers in the UK on the definition of dyslexia or on diagnostic procedures. The onus of developing diagnostic procedures and standards falls heavily on various independent professional organizations, which develop their own assessment criteria, conduct assessment procedures, and provide diagnostic information to individuals, their caregivers, and school personnel. Apart from one case study with a single participant (Russell et al., 2012), no previous study to our knowledge has explored how independent assessors identify school-age children with dyslexia in the UK. By providing a detailed exploration of current assessment methods in the UK, this research contributes significantly to the broader understanding of dyslexia identification. We explored the following research questions:
How do professional assessors identify students for dyslexia in the UK?
a. What types of assessments are used to identify dyslexia?
b. How are standardized measures and cut-off scores utilized in dyslexia diagnosis?
c. How many assessments are conducted, and how long does the assessment process take?
d. How do assessors make decisions regarding a dyslexia diagnosis?
e. What assessments are used to assess English language learners for dyslexia?
How do professionals conceptualize dyslexia?
What is dyslexia assessors’ level of confidence in the validity and reliability of their assessment procedures and their diagnostic judgment?
The study received ethical approval from the Ethics Committee at the first author’s university. All responses were anonymous, and no identifiable information was collected. Participants were able to exit the survey at any time if they no longer wished to participate.
A recruitment email was sent to various UK-based dyslexia and psychological associations. Four dyslexia associations based in the UK, together with two psychological associations, distributed the survey email and its accompanying link to their members, with the email being sent on one occasion. In addition to sharing the survey with dyslexia and psychological associations, we conducted online searches to identify potential participants. This involved searching for the terms “dyslexia assessor” and “dyslexia specialist” and specifying the region. The regions included in the search were the UK, England, Scotland, Wales, Northern Ireland, and the North East, North West, South East, and South West of England. These searches allowed us to identify personal websites of individuals offering dyslexia assessment, such as specialist teachers. These individuals were then contacted via the email listed on their website with an invitation to take part in the study and a link to the survey. These professionals were contacted once via email. All survey responses were collected over a 4-week period between January and February 2023.
To take part in the survey, participants had to work in a role that involved assessing students for dyslexia, such as a dyslexia specialist, specialist assessor, or educational psychologist. Participants were asked to indicate their current role and qualifications in identifying school-aged students suspected of having dyslexia. See Table 1 for participant demographic information.
Based on past studies (e.g., Al Dahhan et al., 2021 ; Andresen & Monsrud, 2021 ; Benson et al., 2020 ), we developed a survey to explore how various professionals identify school-age students with dyslexia. The online survey (see Appendix A ) included four sections, which were “Demographic Information,” “Assessing and Identifying Students with Dyslexia,” “Conceptualising Dyslexia,” and “Thoughts on the Process of Assessment and Identification.” Before distributing the survey, feedback was obtained from professionals in the field, which resulted in slight changes to the wording of some questions. All survey questions were optional, and participants could choose to skip any of the survey items.
The “Demographic Information” section included nine questions about participants’ background, such as their highest degree and relevant qualifications, their role in identifying students with dyslexia and how long they have worked in this role, and the age groups of students they assess.
The “Assessing and Identifying Students with Dyslexia” section included 25 questions on participants’ assessment and identification process. It included questions about the different types of assessments (e.g., phonological awareness, vocabulary, working memory) they used to identify pupils with dyslexia, the standardized assessments they typically use, their use of benchmarks or cut-off points on these assessments, and their reasons for selecting these assessments. Participants were also asked about the referral process, such as reasons for referral, who generally begins the process, and the average time from referral to diagnosis. The survey also asked participants to report whether they assessed individuals who are English language learners and the language of the assessments used for this subgroup of individuals.
The “Conceptualising Dyslexia” section had 27 questions that addressed how respondents conceptualize and define dyslexia. The questions focused on the models that participants use to define dyslexia and the criteria they use to identify it. In this section, participants were shown a list of criteria and asked to indicate whether they would use these to identify dyslexia. These indicators fell under three subcategories: proximal causes of dyslexia, such as poor knowledge of letter names; distal associations of dyslexia, such as poor performance on working memory tasks; and myths or misconceptions, such as reading letters in reverse order or high levels of creativity.
The “Thoughts on the Process of Assessment and Identification” section had two questions that asked participants about their confidence in their assessment of a student having or not having dyslexia and their perceptions on the reliability of the process in helping them make decisions.
The survey included various types of question items. Many questions allowed respondents to select one or more options from a list of choices, for example, questions about the types of assessments used to identify dyslexia or the reasons for referrals (e.g., “What types of assessments do you use to identify students with dyslexia? Choose all that apply.”). Some items used a Likert scale for responses, where participants rated their agreement with, or the frequency of, a particular behavior or belief, for example, questions about confidence in assessments (e.g., “How confident do you feel in your assessment of the child as having or not having a reading disability post your assessment? [0 = not confident at all; 10 = certain]”). Participants were also asked open-ended questions to elaborate on their choices, such as how they used the assessment data in their diagnostic process.
We utilized an online polling website for the data collection phase. Upon completion of the data collection process, we downloaded all the collected data onto a spreadsheet. We used the dplyr package (Wickham et al., 2017 ) in R (R Core Team, 2021 ) for data cleaning and descriptive analyses.
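For illustration only, the kind of data cleaning and descriptive analysis described above might look like the following minimal sketch in R using dplyr; the file name and column names (survey_responses.csv, role, confidence_rating) are hypothetical placeholders rather than our actual variable names.

```r
library(dplyr)

# Load the exported survey spreadsheet (hypothetical file name)
responses <- read.csv("survey_responses.csv", stringsAsFactors = FALSE)

# Basic cleaning: drop rows with no reported role and trim stray whitespace
cleaned <- responses %>%
  filter(!is.na(role), role != "") %>%
  mutate(role = trimws(role))

# Descriptive counts: number and percentage of respondents per professional role
role_counts <- cleaned %>%
  count(role, sort = TRUE) %>%
  mutate(percent = round(100 * n / sum(n), 2))

# Descriptive statistics for a 0-10 confidence rating item
confidence_summary <- cleaned %>%
  summarise(
    mean_conf   = mean(confidence_rating, na.rm = TRUE),
    sd_conf     = sd(confidence_rating, na.rm = TRUE),
    median_conf = median(confidence_rating, na.rm = TRUE)
  )
```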
Survey participants reported that the most common reason a parent or school refers a child for assessment is that the child’s reading proficiency is below average (62.50% and 59.00%, respectively). Many respondents also reported that parents and schools refer a child because the child has been unresponsive to classroom reading instruction (65.50% and 35.00%, respectively). However, many children are also referred by their parents or school because their cognitive, motor, or visual skills are below average (34.00% and 24.50%, respectively), indicating that more distal indicators are also used to inform referrals. Further reasons for referral provided by participants include students struggling with their studies despite showing good general ability, issues with writing and spelling, disparities between verbal and written work, struggling with the curriculum (e.g., working slowly, misreading questions), and running out of time on assessments. Table 2 also shows participants’ responses regarding the average amount of time it takes from receiving a referral to the individual receiving a diagnosis. The majority (59%) of pupils received a diagnosis within 1 month of referral, while 30% received a diagnosis between 1 and 6 months after referral.
As shown in Table 3 , participants were asked to indicate the types of assessments that they use to identify students with dyslexia. Almost all respondents reported assessing reading-related constructs and phonological processing. A vast majority also reported assessing students on various distal measures such as working memory, verbal processing speed, cognitive ability, verbal memory, and reasoning skills. Additionally, Table 4 shows the frequency and types of reading assessments assessors use when conducting assessments with word reading and reading fluency assessments administered most frequently.
To understand participants’ use of standardized measures and cut-off scores, they were asked to report which assessments they use and how they use standardized assessment scores. Across our sample, 80 different standardized assessments were reported as being used during assessments. See Appendix B for a list of the most frequently used standardized assessments. After administering assessments, a substantial majority (63%) of the participants reported not using cut-off scores on standardized assessments to diagnose dyslexia. In contrast, 36% reported utilizing cut-off scores on multiple assessments before completing their diagnostic report. Only one individual in our sample reported using a cut-off score on a single assessment prior to diagnosis.
When asked to explain how they use assessment scores, many reported using the assessments to get an overall picture of a student’s underlying cognitive ability and to look for patterns of strengths and weaknesses that are indicative of dyslexia. It was also often reported that assessors did not use these assessments in isolation, but considered them alongside background information, observations, and reports from parents and teachers. For example, many responses indicated that if a score was low but did not meet a cut-off point, they would consider the assessment scores in relation to background information to determine if, taken together, they indicate dyslexia. Some participants also reported using assessment scores to get a holistic view of strengths and weaknesses and to identify a “spiky” profile in order to build a picture of a student’s areas of need.
Participants were asked to report the minimum and maximum number of assessments they use during the identification process and the time the assessment takes. The minimum number of assessments ranged from 1 to 31, with a median of 6, and the maximum ranged from 1 to 50, with a median of 8 assessments.
The minimum assessment time ranged from 45 to 240 min, with a median of 150 min, and the maximum time ranged from 90 to 600 min, with a median of 220 min. These results indicate that there is large variation in the number of assessments used and assessment time, with some professionals, on the extreme end, assessing a child for up to 10 h on up to 50 assessments.
More than four in five respondents make their decisions on a diagnosis independently (85.00%). Of the remaining respondents who work with a team to make decisions, team members included educational psychologists, special education needs coordinators, teachers, other specialists, and families. These results suggest that the vast majority of professionals rely solely on their judgment to make decisions on a child’s diagnosis.
Among the 274 survey participants, a subset of 61 respondents indicated that they conduct assessments for individuals who are English language learners. Within the group of 61 assessors who assess English language learners, only a small number, specifically 5, stated that they conduct assessments in the individual’s first language; the remaining 56 reported using the same assessments that are administered to monolingual English-speaking students.
When presented with the DSM-V definition, which states that dyslexia is characterized by difficulties with reading, spelling, and writing, over two-thirds indicated that the definition was missing elements of cognitive, visual, or motor skills (68.16%). Also, almost a fifth of respondents indicated that the DSM-V definition was inaccurate (19.55%).
Results indicated that almost two-thirds of participants use 5 or more of the proximal indicators (e.g., poor knowledge of letters or letter names, labored or error-prone reading fluency) to identify dyslexia (62.15%). Results also demonstrate that 7.91% endorse 5 or more misconceptions as indicators of dyslexia, and close to half of the survey participants endorse at least one misconception as an indicator of dyslexia (43.50%) (e.g., high levels of creativity, benefit from dyslexia fonts or colored overlays, seeing letters in reverse order).
To understand how participants conceptualize dyslexia, they were asked what constitutes dyslexia. As shown in Table 5 , findings indicate that there is large variation in the way that professionals are conceptualizing dyslexia. A large majority reported dyslexia to be a phonological deficit while many also conceptualize dyslexia as a discrepancy between an individual’s reading skills and their cognitive ability (i.e., patterns of strengths and weakness model).
In assessing the confidence levels of dyslexia assessors, the study found that professionals generally felt confident in their diagnostic judgment following an assessment for a child’s potential dyslexia. On a scale from 0 (not confident at all) to 10 (certain), the confidence level was reported with a mean of 8.5, a standard deviation of 1.1, and a median of 9. Similarly, when evaluating the validity and reliability of the assessments they employed in making eligibility decisions, assessors reported high confidence levels, with a mean of 8.3, a standard deviation of 1.3, and a median of 9, on the same confidence scale.
In this study, we explored existing assessment methodologies for identifying school-age children with dyslexia in the UK. We aimed to solicit responses from assessors on their background, their assessment procedures, the types of assessments used, their decision-making process, the types of indicators they use during identification, and their conceptualization of dyslexia. As in past studies, there was a lack of consensus among assessors’ responses on various metrics.
An important takeaway from this study was that most of the survey participants reported using reading assessments such as word reading, pseudoword reading, reading fluency, reading comprehension, and spelling in their dyslexia assessment process. These assessment methods align with current recommendations in the field to use academic measures when assessing individuals for SpLDs such as dyslexia (e.g., Fletcher et al., 2019). A high percentage of respondents also used some form of writing assessment and/or oral language assessment when evaluating for dyslexia.
Similarly, a high percentage of survey respondents reported using a variety of different cognitive assessments when assessing for dyslexia. Respondents reported administering measures of working memory, general cognitive ability, verbal processing speed, verbal memory, reasoning skills, and visual temporal processing. Given that different assessors used a variety of cognitive assessments, it is important to highlight that this diversity may lead to the identification of varying patterns of strengths and weaknesses in individuals with dyslexia. As a consequence, this lack of consensus in the cognitive assessments employed by assessors raises concerns about the reliability and consistency of the dyslexia identification process.
While past research has demonstrated correlations between cognitive measures and reading assessments, these methods remain controversial, and little empirical data supports the benefit of cognitive assessments in informing intervention efforts. For instance, Stuebing et al. (2002), in their meta-analysis, demonstrated that after controlling for pretest reading scores, cognitive measures accounted for only 1–2% of the explained variance in students’ reading growth. More recently, a pilot study that explored the additional benefits of cognitive training reported no significant benefits of such training on students’ reading outcomes. In this study (Goodrich et al., 2023), the authors assigned preschool children at risk of reading difficulties to an early literacy program, an early literacy program plus cognitive training, or a control condition. Both early literacy program groups outperformed controls on literacy measures; however, there were no significant differences in literacy outcomes between the literacy-only group and the literacy plus executive function training group. This study and past reviews consistently highlight the limited effects of cognitive training interventions on academic outcomes (Kearns & Fuchs, 2013). Given this evidence, it is important to question the rationale for administering cognitive assessments, as they do little to guide intervention efforts to support students’ reading growth.
Another area of discussion is the number of assessments assessors use to identify students with dyslexia. A general recommendation in the field is to use more than one assessment for identification, as a single measure may underrepresent a construct (Fletcher et al., 2019). The median minimum number of assessments reported by assessors was six, and the median maximum number was eight. While this indicates a multi-faceted approach, the fact that almost two-thirds of the sample reported not using cut-off scores raises questions about how diagnostic decisions are made. While the avoidance of strict cut-off scores aligns with the understanding that word reading abilities exist on a continuum, the lack of their use raises questions about how assessors synthesize the results of multiple assessments to determine a diagnosis. Confidence intervals, which account for measurement error and provide a range of plausible values, offer a more accurate and inclusive approach to identifying reading difficulties (Miciak et al., 2016) and could potentially address this ambiguity. Thus, it was perplexing to see that most assessors were not making normative comparisons to guide their decision-making. Another challenge is that almost all assessors use a blend of academic (e.g., reading) and cognitive assessments (e.g., working memory) to identify strengths and weaknesses or to identify a “spiky” profile. Past research on evaluating patterns of strengths and weaknesses has demonstrated this process to be unreliable and lacking validity (Fletcher & Miciak, 2017; Maki et al., 2022).
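To make the confidence-interval alternative concrete, one common classical test theory construction (offered here as a generic illustration rather than a procedure reported by our respondents) derives the interval from a test’s reliability:

$$\mathrm{SEM} = SD\sqrt{1 - r_{xx}}, \qquad 95\%\ \mathrm{CI} = X \pm 1.96 \times \mathrm{SEM}$$

where $X$ is the observed standard score, $SD$ is the normative standard deviation, and $r_{xx}$ is the test’s reliability. For example, for a standard score of 85 on a test with $SD = 15$ and $r_{xx} = .90$, the SEM is roughly 4.7 and the 95% confidence interval spans approximately 76 to 94; a score that close to a commonly used cut-off is therefore ambiguous on a single administration, which is precisely why converging evidence across multiple measures is recommended (Miciak et al., 2016).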
There are, moreover, no guidelines from policymakers in the UK on the holistic, profile-based process of evaluating students’ assessment scores described above, raising concerns about the reliability of this process. This concern is supported by one past case study in the UK, which found that different professionals came to very different conclusions about a child’s areas of academic need based on their evaluation of the assessment data (Russell et al., 2012). Thus, the question remains: would different assessors come to different conclusions based on their own holistic evaluations of the same assessment data?
Our findings related to the variability in diagnostic procedures and conceptualization of dyslexia suggest a need for government policy to guide assessment procedures for students with dyslexia. For example, in the United States, guidance under the Individuals with Disabilities Education Act (IDEA; US Department of Education, 2006) clearly states that “The Department does not believe that an assessment of psychological or cognitive processing should be required in determining whether a child has an SpLD. There is no current evidence that such assessments are necessary or sufficient for identifying SpLD. Further, in many cases, these assessments have not been used to make appropriate intervention decisions” (p. 46,651). Similar guidance is needed for more reliable identification processes in the UK.
Another important area to highlight is that one past study in the UK reported parental income to be a significant predictor of a child being diagnosed with dyslexia; the likelihood of being identified as dyslexic increases with higher income (Knight & Crick, 2021). For parents in the UK, assessing their child for dyslexia can cost anywhere between £500 and £700. This raises questions of equity and of who can afford these assessments, as 60% of households in the UK earn less than £799 per week (Office for National Statistics, 2023). Given the high costs of assessments and the post-pandemic cost-of-living crisis in the UK, we wonder how many households have the disposable income to pay for dyslexia assessments. We also question whether cognitive assessments are needed and, if not, whether reducing the number of assessments would help assessment institutions lower their costs and make assessment more equitable and accessible to the general public. It is important to note that the National Health Service in the UK does not cover the cost of dyslexia assessments, and this cost has to be incurred by caregivers.
All survey participants (100%) reported that they are “very familiar” with dyslexia. However, it was perplexing to observe that only a small proportion of our sample reported agreeing with the DSM-V definition of dyslexia, which defines dyslexia as issues with word reading, reading fluency, and spelling words. When probed further on how they conceptualize dyslexia, the majority of assessors described it as a phonological deficit, inadequate decoding skills, and a lack of response to evidence-based reading instruction. However, a substantial proportion of the sample also conceptualized dyslexia as a pattern of strengths and weaknesses or a discrepancy between IQ and achievement. Our data suggest that although a resounding number of study participants conceptualize dyslexia in terms consistent with the DSM-V definition, they also have a strong commitment to cognitive assessments as an integral aspect of identification. This lack of consensus is consistent with past research on the lack of agreement about what constitutes dyslexia (e.g., Al Dahhan et al., 2021; Ryder & Norwich, 2019; Sadusky et al., 2021).
Additionally, we wanted to explore whether dyslexia assessors subscribe to myths or misconceptions about dyslexia. The common misconceptions that dyslexia assessors reported as being an “indicator of dyslexia” were that individuals with dyslexia read letters in reverse order (61%), see letters jumping around (33%), have high levels of creativity (17%), report motor skills issues or clumsiness (17%), and struggle to read words only when text is displayed in certain colors (15%) or fonts (12%). This suggests that many assessors draw on misconceptions to inform their decisions surrounding a dyslexia diagnosis, even though empirical data do not support these as indicators of dyslexia (e.g., Henderson et al., 2012; Kuster et al., 2017). Thus, there is a need for dyslexia and psychological associations in the UK to ensure that these misconceptions are directly addressed in their certification modules. This is especially important because a majority of respondents reported using assessment data holistically in their diagnostic procedure, and these misconceptions could influence assessors’ judgments and potentially be associated with identification errors.
We observed that assessors generally reported high levels of confidence in the validity and reliability of the diagnostic process and in their diagnoses. This is consistent with previous findings in both educational (Maki et al., 2022) and clinical settings (Al Dahhan et al., 2021), where practitioners generally reported high confidence in their ability to identify students with specific learning disabilities/difficulties, especially those assessors who had received more training. However, this reported confidence contrasts with the concerns raised in the present study about the reliability and validity of the methods employed (such as the patterns of strengths and weaknesses), the pervasive use of a variety of cognitive assessments, the lack of a framework for how assessment data are to be used for diagnosis, and the dyslexia misconceptions that a large proportion of the sample subscribes to. This discrepancy, echoing Maki et al.’s (2022) findings of a potential disconnect between accuracy and confidence, suggests that decision-making confidence might be misplaced if it is not underpinned by standardized and widely accepted identification methods. Hence, while assessors are confident in their diagnostic capabilities, this confidence may be problematic if the identification methods themselves are flawed or inconsistently applied. Further research exploring the relationship between training, experience, and diagnostic accuracy in this context is warranted.
There is little data in the research literature to shed light on dyslexia assessment practices for English language learners. In our survey, we asked UK dyslexia assessors if they assessed individuals who were English language learners. Approximately 30% of our sample reported assessing English language learners for dyslexia. Within this subsample, a majority (92%) reported that they did not assess English language learners in their first language and generally used the same assessments they used for monolingual English speakers. This is an area of concern as assessing individuals on assessments that are in their second language may impact the validity of assessors’ interpretation of assessment data.
While past researchers (Fletcher et al., 2019) recommend selecting assessments that are linguistically and culturally sensitive to support accurate inferences, there may be practical challenges. For instance, some respondents reported that they have been unable to access assessments in students’ first language, despite asking their local authority for support in doing so. This indicates assessors’ willingness to use culturally and linguistically sensitive assessments, but a lack of available resources may be a barrier. Thus, improving assessors’ knowledge of, and access to, assessments in students’ first language may be one step towards administering culturally and linguistically fair assessments that can lead to improved identification decisions for this subpopulation of individuals.
A notable limitation of this study is that we do not know the survey response rate. Although postcode data show that our sample was recruited from across the UK, it is not certain that this sample’s assessment practices are representative of all UK dyslexia assessors. Another limitation is that the survey questions were limited to dyslexia identification and did not elicit responses on the identification of other learning disabilities/difficulties, such as reading comprehension difficulties, math difficulties, and/or writing difficulties.
Our study demonstrates that there is a general lack of consensus among assessors on the process of dyslexia identification. While many subscribe to the notion of dyslexia being a deficit in core areas of reading, several others subscribe to dyslexia being a discrepancy between individuals’ reading and cognitive profiles. There is a clear need in the UK for policymakers to clearly define dyslexia and provide assessment guidelines. Nationally defined identification pathways would be useful in providing guidance to various assessment institutions and this alignment could lead to a cohesive model for reliable identification of learning difficulties such as dyslexia.
The data that support the findings of this study are available in the UK Data Service ReShare repository. The data have been stored in accordance with institutional guidelines and are accessible for replication purposes. For further inquiries, please contact the corresponding author at [email protected].
Al Dahhan, N. Z., Mesite, L., Feller, M. J., & Christodoulou, J. (2021). Identifying reading disabilities: A survey of practitioners. Learning Disability Quarterly, 44 (4), 235–247. https://doi.org/10.1177/0731948721998707
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (Eds.). (2014). Standards for educational and psychological testing. American Educational Research Association.
American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Association.
Andresen, A., & Monsrud, M.-B. (2021). Assessment of dyslexia – why, when, and with what? Scandinavian Journal of Educational Research, 66 (6), 1063–1075. https://doi.org/10.1080/00313831.2021.1958373
Benson, N. F., Maki, K. E., Floyd, R. G., Eckert, T. L., Kranzler, J. H., & Fefer, S. A. (2020). A national survey of school psychologists’ practices in identifying specific learning disabilities. School Psychology, 35 (2), 146–157. https://doi.org/10.1037/spq0000344
Brunsdon, R., Coltheart, M., & Nickels, L. (2006). Severe developmental letter-processing impairment: A treatment case study. Cognitive Neuropsychology, 23 (6), 795–821. https://doi.org/10.1080/02643290500310863
Burns, M. K., Petersen-Brown, S., Haegele, K., Rodriguez, M., Schmitt, B., Cooper, M., Clayton, K., Hutcheson, S., Conner, C., Hosp, J., & VanDerHeyden, A. M. (2016). Meta-analysis of academic interventions derived from neuropsychological data. School Psychology Quarterly, 31 (1), 28–42. https://doi.org/10.1037/spq0000117
Cassar, M., Treiman, R., Moats, L., Pollo, T. C., & Kessler, B. (2005). How do the spellings of children with dyslexia compare with those of nondyslexic children? Reading and Writing, 18 (1), 27–49. https://doi.org/10.1007/s11145-004-2345-x
Daniel, J., Capin, P., & Steinle, P. (2021). A synthesis of the sustainability of remedial reading intervention effects for struggling adolescent readers. Journal of Learning Disabilities, 54 (3), 170–186. https://doi.org/10.1177/0022219421991249
de Jong, P. F. (2020). Diagnosing dyslexia: How deep should we dig? In J. A. Washington, D. L. Compton, & P. McCardle (Eds.), Dyslexia: Revisiting etiology, diagnosis, treatment, and policy (pp. 31–43). Paul H. Brookes Publishing Co.
Denton, C. A. (2012). Response to intervention for reading difficulties in the primary grades: Some answers and lingering questions. Journal of Learning Disabilities, 45 (3), 232–243. https://doi.org/10.1177/0022219412442155
Department for Education. (2014). Children and Families Act . DfE.
Elliott, J. G. (2020). It’s time to be scientific about dyslexia. Reading Research Quarterly , 55 (S1). https://doi.org/10.1002/rrq.333
Elliott, J. G., & Grigorenko, E. L. (2014). The dyslexia debate (No. 14). Cambridge University Press.
Equality Act (2010). HMSO.
Erbeli, F., Peng, P., & Rice, M. (2021). No evidence of creative benefit accompanying dyslexia: A meta-analysis. Journal of Learning Disabilities, 55 (3), 242–253. https://doi.org/10.1177/00222194211010350
Feifer, S. G. (2008). Integrating Response to Intervention (RTI) with neuropsychology: A scientific approach to reading. Psychology in the Schools, 45 (9), 812–825. https://doi.org/10.1002/pits.20328
Fenwick, M. E., Kubas, H. A., Witzke, J. W., Fitzer, K. R., Miller, D. C., Maricle, D. E., Harrison, G. L., Macoun, S. J., & Hale, J. B. (2015). Neuropsychological profiles of written expression learning disabilities determined by concordance-discordance model criteria. Applied Neuropsychology: Child, 5 (2), 83–96. https://doi.org/10.1080/21622965.2014.993396
Fletcher, J. M., Francis, D. J., Shaywitz, S. E., Lyon, G. R., Foorman, B. R., Stuebing, K. K., & Shaywitz, B. A. (1998). Intelligent testing and the discrepancy model for children with learning disabilities. Learning Disabilities Research & Practice, 13 (4), 186–203.
Fletcher, J., Lyon, G. R., Fuchs, L., & Barnes, M. A. (2019). Learning disabilities: From identification to intervention . The Guilford Press.
Fletcher, J. M., & Miciak, J. (2017). Comprehensive cognitive assessments are not necessary for the identification and treatment of learning disabilities. Archives of Clinical Neuropsychology, 32 (1), 2–7. https://doi.org/10.1093/arclin/acw103
Fletcher, J. M., Stuebing, K. K., Morris, R. D., & Lyon, G. R. (2012). Classification and definition of learning disabilities: A hybrid model . In H. L. Swanson, K. R. Harris, & S. Graham (Eds.), Handbook of learning disabilities (2nd ed., pp. 33–50). Guilford Press.
Fletcher, J. M., & Vaughn, S. (2009). Response to intervention: Preventing and remediating academic difficulties. Child Development Perspectives, 3 (1), 30–37. https://doi.org/10.1111/j.1750-8606.2008.00072.x
Phenological dynamics of terrestrial ecosystems reflect the response of the Earth's vegetation canopy to changes in climate and hydrology and are thus important to monitor operationally. Researchers at the U.S. Geological Survey (USGS), Earth Resources Observation and Science (EROS) Center have developed methods for documenting the seasonal dynamics of vegetation in an operational fashion from satellite time-series data.
The USGS decided to develop the 2023 CONUS phenology metrics using the S-NPP Visible Infrared Imaging Radiometer Suite (VIIRS) because the Aqua Collection 6 MODIS sensor will be decommissioned in the near future. The readily available and consistently processed smoothed EROS VIIRS (eVIIRS) maximum Normalized Difference Vegetation Index (NDVI) weekly composites are the key input for the phenological metrics data. A weighted least-squares approach to temporal smoothing (Swets et al., 1999) was adopted for the NDVI time series to eliminate anomalously low vegetation index values and to reduce time shifts caused by overgeneralization of the NDVI signal. This approach uses a moving temporal window to calculate a family of regression lines associated with each observation; the family of lines is then averaged at each point and interpolated between points to provide a continuous temporal NDVI signal. While interpolating values between points, a weighting factor is applied that favors peak (high-value) points over valley points. Smoothed NDVI data were stacked, in ascending order, into a three-year file of 156 NDVI composites (52 composites per year). The three years comprise the target year plus the previous and following years (e.g., the 2023 phenology metrics used 2022, 2023, and 2024 smoothed NDVI). Where the full 52 composites are not available, an average of the corresponding weekly composites from the processed year and the three previous years is used to fill those composites in the latter year to reach 156 composites (to fill 2024, composites from 2021, 2022, and 2023 were averaged).
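To make the smoothing idea concrete, here is a minimal Python sketch of a moving-window, weighted least-squares smoother in the spirit of Swets et al. (1999). It is an illustration only, not the USGS production code; the window size, the peak-favoring weight, and the function name are assumptions chosen for readability.

```python
import numpy as np

def wls_smooth_ndvi(ndvi, window=5, peak_weight=2.0):
    """Smooth a 1-D weekly NDVI series with a moving-window family of
    regression lines, averaging the fitted values at each point and
    giving extra weight to fits at or above the raw observation (peaks)."""
    ndvi = np.asarray(ndvi, dtype=float)
    n = len(ndvi)
    fits_per_point = [[] for _ in range(n)]

    # Fit a straight line inside every window position; every observation
    # covered by a window collects that line's fitted value.
    for start in range(n - window + 1):
        x = np.arange(start, start + window)
        slope, intercept = np.polyfit(x, ndvi[start:start + window], 1)
        for i in x:
            fits_per_point[i].append(slope * i + intercept)

    smoothed = np.empty(n)
    for i, fits in enumerate(fits_per_point):
        fits = np.asarray(fits)
        # Favor high (peak) fitted values over low (valley) ones.
        weights = np.where(fits >= ndvi[i], peak_weight, 1.0)
        smoothed[i] = np.average(fits, weights=weights)
    return smoothed
```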
The smoothed NDVI data were subsequently ingested into a model developed in the Interactive Data Language (IDL) to quantify the following phenological metrics: Start of Season Time (SOST); Start of Season NDVI (SOSN); End of Season Time (EOST); End of Season NDVI (EOSN); Maximum Time (MAXT); Maximum NDVI (MAXN); Duration (DUR); Amplitude (AMP); and Time Integrated NDVI (TIN).
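As a rough illustration of how such metrics can be read off a smoothed annual curve, the sketch below uses a simple half-amplitude threshold to mark the start and end of season. The actual USGS algorithms and data scaling are documented in the data-creation section of the metadata and differ in detail; this example only shows what each metric measures.

```python
import numpy as np

def simple_phenology_metrics(smoothed_ndvi):
    """Derive illustrative phenology metrics from one year (52 weeks) of
    smoothed NDVI, using a half-amplitude threshold for season start/end."""
    v = np.asarray(smoothed_ndvi, dtype=float)
    max_t = int(np.argmax(v))                      # MAXT: week of peak NDVI
    max_n = float(v[max_t])                        # MAXN: peak NDVI value
    base = float(v.min())
    amp = max_n - base                             # AMP: seasonal amplitude
    threshold = base + 0.5 * amp                   # half-amplitude threshold

    above = np.where(v >= threshold)[0]
    sost, eost = int(above[0]), int(above[-1])     # SOST / EOST: week indices
    return {
        "SOST": sost, "SOSN": float(v[sost]),      # start of season time / NDVI
        "EOST": eost, "EOSN": float(v[eost]),      # end of season time / NDVI
        "MAXT": max_t, "MAXN": max_n,
        "DUR": eost - sost + 1,                    # DUR: season length in weeks
        "AMP": amp,
        "TIN": float(np.sum(v[sost:eost + 1] - base)),  # time-integrated NDVI
    }
```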
Note: publication dates for the annual CONUS products are as follows:
a. S-NPP 375-m eVIIRS Phenology Metrics CONUS 2021: 2022-08-25
b. S-NPP 375-m eVIIRS Phenology Metrics CONUS 2022: 2023-08-03
c. S-NPP 375-m eVIIRS Phenology Metrics CONUS 2023: 2024-08-30
For details about the algorithms and the data scaling for each of these seasonal phenological metrics, refer to the data creation process section of this metadata.
References: Swets, D. L., Reed, B. C., Rowland, J. R., and S. E. Marko, 1999, "A Weighted Least-squares Approach to Temporal Smoothing of NDVI," in Proceedings of the 1999 ASPRS Annual Conference, From Image to Information, Portland, Oregon, May 17-21, 1999, Bethesda, Maryland, American Society for Photogrammetry and Remote Sensing, CD-ROM, 1 disc.
First release: 2023. Revised: August 2024 (ver. 2.0).
Publication Year | 2022
Title | S-NPP 375-m eVIIRS Remote Sensing Phenology Metrics - across the conterminous U.S. (Ver. 2.0, August 2024)
Authors | Trenton D. Benedict (Contractor), Dinesh Shrestha (Contractor), Stephen Boyte
Product Type | Data Release
USGS Organization | Earth Resources Observation and Science (EROS) Center
Stephen Boyte, Research Geographer
Confidence in U.S. public opinion polling was shaken by errors in 2016 and 2020. In both years’ general elections, many polls underestimated the strength of Republican candidates, including Donald Trump. These errors laid bare some real limitations of polling.
In the midterms that followed those elections, polling performed better. But many Americans remain skeptical that it can paint an accurate portrait of the public’s political preferences.
Restoring people’s confidence in polling is an important goal, because robust and independent public polling has a critical role to play in a democratic society. It gathers and publishes information about the well-being of the public and about citizens’ views on major issues. And it provides an important counterweight to people in power, or those seeking power, when they make claims about “what the people want.”
The challenges facing polling are undeniable. In addition to the longstanding issues of rising nonresponse and cost, summer 2024 brought extraordinary events that transformed the presidential race. The good news is that people with deep knowledge of polling are working hard to fix the problems exposed in 2016 and 2020, experimenting with more data sources and interview approaches than ever before. Still, polls are more useful to the public if people have realistic expectations about what surveys can do well – and what they cannot.
With that in mind, here are some key points to know about polling heading into this year’s presidential election, beginning with a few key terms.
Probability sampling (or “random sampling”). This refers to a polling method in which survey participants are recruited using random sampling from a database or list that includes nearly everyone in the population. The pollster selects the sample. The survey is not open to just anyone who wants to sign up.
Online opt-in polling (or “nonprobability sampling”). These polls are recruited using a variety of methods that are sometimes referred to as “convenience sampling.” Respondents come from a variety of online sources such as ads on social media or search engines, websites offering rewards in exchange for survey participation, or self-enrollment. Unlike surveys with probability samples, people can volunteer to participate in opt-in surveys.
Nonresponse and nonresponse bias. Nonresponse is when someone sampled for a survey does not participate. Nonresponse bias occurs when the pattern of nonresponse leads to error in a poll estimate. For example, college graduates are more likely than those without a degree to participate in surveys, leading to the potential that the share of college graduates in the resulting sample will be too high.
Mode of interview. This refers to the format in which respondents are presented with and respond to survey questions. The most common modes are online, live telephone, text message and paper. Some polls use more than one mode.
Weighting. This is a statistical procedure pollsters perform to make their survey align with the broader population on key characteristics like age, race, etc. For example, if a survey has too many college graduates compared with their share in the population, people without a college degree are “weighted up” to match the proper share.
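As a toy example of the idea (not any pollster’s actual procedure, and with made-up numbers), the snippet below computes post-stratification weights for a single characteristic: each respondent’s weight is their group’s population share divided by that group’s share of the sample.

```python
def poststratification_weights(sample_groups, population_shares):
    """sample_groups: list of group labels, one per respondent.
    population_shares: dict mapping group label -> share of the population."""
    n = len(sample_groups)
    sample_shares = {g: sample_groups.count(g) / n for g in set(sample_groups)}
    # Under-represented groups get weights above 1; over-represented below 1.
    return [population_shares[g] / sample_shares[g] for g in sample_groups]

# Example: 70% of respondents are college graduates but only 40% of adults are,
# so graduates are weighted down and non-graduates are weighted up.
weights = poststratification_weights(
    ["grad"] * 70 + ["nongrad"] * 30,
    {"grad": 0.40, "nongrad": 0.60},
)
print(round(weights[0], 2), round(weights[-1], 2))  # ~0.57 and ~2.0
```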
Pollsters are making changes in response to the problems in previous elections. As a result, polling is different today than in 2016. Most U.S. polling organizations that conducted and publicly released national surveys in both 2016 and 2022 (61%) used methods in 2022 that differed from what they used in 2016. And change has continued since 2022.
One change is that the number of active polling organizations has grown significantly, indicating that there are fewer barriers to entry into the polling field. The number of organizations that conduct national election polls more than doubled between 2000 and 2022.
This growth has been driven largely by pollsters using inexpensive opt-in sampling methods. But previous Pew Research Center analyses have demonstrated how surveys that use nonprobability sampling may have errors twice as large, on average, as those that use probability sampling.
The second change is that many of the more prominent polling organizations that use probability sampling – including Pew Research Center – have shifted from conducting polls primarily by telephone to using online methods, or some combination of online, mail and telephone. The result is that polling methodologies are far more diverse now than in the past.
(For more about how public opinion polling works, including a chapter on election polls, read our short online course on public opinion polling basics.)
All good polling relies on a statistical adjustment called “weighting,” which makes sure that the survey sample aligns with the broader population on key characteristics. Historically, public opinion researchers have adjusted their data using a core set of demographic variables to correct imbalances between the survey sample and the population.
But there is a growing realization among survey researchers that weighting a poll on just a few variables like age, race and gender is insufficient for getting accurate results. Some groups of people – such as older adults and college graduates – are more likely to take surveys, which can lead to errors that are too sizable for a simple three- or four-variable adjustment to work well. Adjusting on more variables produces more accurate results, according to Center studies in 2016 and 2018.
A number of pollsters have taken this lesson to heart. For example, recent high-quality polls by Gallup and The New York Times/Siena College adjusted on eight and 12 variables, respectively. Our own polls typically adjust on 12 variables. In a perfect world, it wouldn’t be necessary to have that much intervention by the pollster. But the real world of survey research is not perfect.
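A common way to adjust on many variables at once is raking (iterative proportional fitting). The sketch below is a generic, simplified version of that technique with invented variable names and targets; it is not Pew’s or any other organization’s implementation.

```python
import numpy as np

def rake(respondents, margins, n_iter=50):
    """Iteratively adjust weights so the weighted sample matches each
    variable's population margins.
    respondents: list of dicts, e.g. {"age": "65+", "educ": "no degree"}.
    margins: dict of dicts giving population shares for each category."""
    w = np.ones(len(respondents))
    for _ in range(n_iter):
        for var, targets in margins.items():
            for category, share in targets.items():
                in_group = np.array([r[var] == category for r in respondents])
                current_share = w[in_group].sum() / w.sum()
                if current_share > 0:
                    w[in_group] *= share / current_share
    return w * len(w) / w.sum()   # normalize to a mean weight of 1
```

In practice, pollsters rake on many such margins at once (age, education, race, region, party identification and more), which is what adjusting on eight or 12 variables amounts to.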
Predicting who will vote is critical – and difficult. Preelection polls face one crucial challenge that routine opinion polls do not: determining which of the people surveyed will actually cast a ballot.
Roughly a third of eligible Americans do not vote in presidential elections, despite the enormous attention paid to these contests. Determining who will abstain is difficult because people can’t perfectly predict their future behavior – and because many people feel social pressure to say they’ll vote even if it’s unlikely.
No one knows the profile of voters ahead of Election Day. We can’t know for sure whether young people will turn out in greater numbers than usual, or whether key racial or ethnic groups will do so. This means pollsters are left to make educated guesses about turnout, often using a mix of historical data and current measures of voting enthusiasm. This is very different from routine opinion polls, which mostly do not ask about people’s future intentions.
When major news breaks, a poll’s timing can matter. Public opinion on most issues is remarkably stable, so you don’t necessarily need a recent poll about an issue to get a sense of what people think about it. But dramatic events can and do change public opinion, especially when people are first learning about a new topic. For example, polls this summer saw notable changes in voter attitudes following Joe Biden’s withdrawal from the presidential race. Polls taken immediately after a major event may pick up a shift in public opinion, but those shifts are sometimes short-lived. Polls fielded weeks or months later are what allow us to see whether an event has had a long-term impact on the public’s psyche.
The answer to the question of how accurate election polls are depends on what you want polls to do. Polls are used for all kinds of purposes in addition to showing who’s ahead and who’s behind in a campaign. Fair or not, however, the accuracy of election polling is usually judged by how closely the polls matched the outcome of the election.
By this standard, polling in 2016 and 2020 performed poorly. In both years, state polling was characterized by serious errors. National polling did reasonably well in 2016 but faltered in 2020.
A post-election review of 2020 polling by the American Association for Public Opinion Research (AAPOR) found that “the 2020 polls featured polling error of an unusual magnitude: It was the highest in 40 years for the national popular vote and the highest in at least 20 years for state-level estimates of the vote in presidential, senatorial, and gubernatorial contests.”
How big were the errors? Polls conducted in the last two weeks before the election suggested that Biden’s margin over Trump was nearly twice as large as it ended up being in the final national vote tally.
Errors of this size make it difficult to be confident about who is leading if the election is closely contested, as many U.S. elections are.
Pollsters are rightly working to improve the accuracy of their polls. But even an error of 4 or 5 percentage points isn’t too concerning if the purpose of the poll is to describe whether the public has favorable or unfavorable opinions about candidates, or to show which issues matter to which voters. And on questions that gauge where people stand on issues, we usually want to know broadly where the public stands. We don’t necessarily need to know the precise share of Americans who say, for example, that climate change is mostly caused by human activity. Even judged by its performance in recent elections, polling can still provide a faithful picture of public sentiment on the important issues of the day.
The 2022 midterms saw generally accurate polling, despite a wave of partisan polls predicting a broad Republican victory. In fact, FiveThirtyEight found that “polls were more accurate in 2022 than in any cycle since at least 1998, with almost no bias toward either party.” Moreover, a handful of contrarian polls that predicted a 2022 “red wave” largely washed out when the votes were tallied. In sum, if we focus on polling in the most recent national election, there’s plenty of reason to be encouraged.
Compared with other elections in the past 20 years, polls have been less accurate when Donald Trump is on the ballot. Preelection surveys suffered from large errors – especially at the state level – in 2016 and 2020, when Trump was standing for election. But they performed reasonably well in the 2018 and 2022 midterms, when he was not.
During the 2016 campaign, observers speculated about the possibility that Trump supporters might be less willing to express their support to a pollster – a phenomenon sometimes described as the “shy Trump effect.” But a committee of polling experts evaluated five different tests of the “shy Trump” theory and turned up little to no evidence for each one. Later, Pew Research Center and, in a separate test, a researcher from Yale also found little to no evidence in support of the claim.
Instead, two other explanations are more likely. One is about the difficulty of estimating who will turn out to vote. Research has found that Trump is popular among people who tend to sit out midterms but turn out for him in presidential election years. Since pollsters often use past turnout to predict who will vote, it can be difficult to anticipate when irregular voters will actually show up.
The other explanation is that Republicans in the Trump era have become a little less likely than Democrats to participate in polls. Pollsters call this “partisan nonresponse bias.” Surprisingly, polls historically have not shown any particular pattern of favoring one side or the other. The errors that favored Democratic candidates in the past eight years may be a result of the growth of political polarization, along with declining trust among conservatives in news organizations and other institutions that conduct polls.
Whatever the cause, the fact that Trump is again the nominee of the Republican Party means that pollsters must be especially careful to make sure all segments of the population are properly represented in surveys.
The real margin of error is often about double the one reported. A typical election poll sample of about 1,000 people has a margin of sampling error that’s about plus or minus 3 percentage points. That number expresses the uncertainty that results from taking a sample of the population rather than interviewing everyone. Random samples are likely to differ a little from the population just by chance, in the same way that the quality of your hand in a card game varies from one deal to the next.
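For reference, the roughly plus-or-minus 3 point figure follows from the standard formula for a 95% margin of sampling error on a proportion, shown here as a quick check under the usual worst-case assumption of a 50/50 split:

```python
import math

n = 1000                       # typical election-poll sample size
p = 0.5                        # worst case for a proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of sampling error: ±{moe:.1%}")   # about ±3.1 points
```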
The problem is that sampling error is not the only kind of error that affects a poll. Those other kinds of error, in fact, can be as large as or larger than sampling error. Consequently, the reported margin of error can lead people to think that polls are more accurate than they really are.
There are three other, equally important sources of error in polling: noncoverage error, where not all of the target population has a chance of being sampled; nonresponse error, where certain groups of people may be less likely to participate; and measurement error, where people may not properly understand the questions or may misreport their opinions. Not only does the margin of error fail to account for those other sources of potential error, but putting a number only on sampling error also implies to the public that other kinds of error do not exist.
Several recent studies show that the average total error in a poll estimate may be closer to twice as large as that implied by a typical margin of sampling error. This hidden error underscores the fact that polls may not be precise enough to call the winner in a close election.
Transparency in how a poll was conducted is associated with better accuracy. The polling industry has several platforms and initiatives aimed at promoting transparency in survey methodology. These include AAPOR’s transparency initiative and the Roper Center archive. Polling organizations that participate in these efforts have less error, on average, than those that don’t participate, an analysis by FiveThirtyEight found.
Participation in these transparency efforts does not guarantee that a poll is rigorous, but it is undoubtedly a positive signal. Transparency in polling means disclosing essential information, including the poll’s sponsor, the data collection firm, where and how participants were selected, modes of interview, field dates, sample size, question wording, and weighting procedures.
There is evidence that when the public is told that a candidate is extremely likely to win, some people may be less likely to vote. Following the 2016 election, many people wondered whether the pervasive forecasts that seemed to all but guarantee a Hillary Clinton victory – two modelers put her chances at 99% – led some would-be voters to conclude that the race was effectively over and that their vote would not make a difference. There is scientific research to back up that claim: A team of researchers found experimental evidence that when people have high confidence that one candidate will win, they are less likely to vote. This helps explain why some polling analysts say elections should be covered using traditional polling estimates and margins of error rather than speculative win probabilities (also known as “probabilistic forecasts”).
National polls tell us what the entire public thinks about the presidential candidates, but the outcome of the election is determined state by state in the Electoral College. The 2000 and 2016 presidential elections demonstrated a difficult truth: The candidate with the largest share of support among all voters in the United States sometimes loses the election. In those two elections, the national popular vote winners (Al Gore and Hillary Clinton) lost the election in the Electoral College (to George W. Bush and Donald Trump). In recent years, analysts have shown that Republican candidates do somewhat better in the Electoral College than in the popular vote because every state gets at least three electoral votes regardless of population – and many less-populated states are rural and more Republican.
For some, this raises the question: What is the use of national polls if they don’t tell us who is likely to win the presidency? In fact, national polls try to gauge the opinions of all Americans, regardless of whether they live in a battleground state like Pennsylvania, a reliably red state like Idaho or a reliably blue state like Rhode Island. In short, national polls tell us what the entire citizenry is thinking. Polls that focus only on the competitive states run the risk of giving too little attention to the needs and views of the vast majority of Americans who live in uncompetitive states – about 80%.
Fortunately, this is not how most pollsters view the world. As the noted political scientist Sidney Verba explained, “Surveys produce just what democracy is supposed to produce – equal representation of all citizens.”
Scott Keeter is a senior survey advisor at Pew Research Center.
Courtney Kennedy is Vice President of Methods and Innovation at Pew Research Center.