USC Libraries Research Guides

Organizing Your Social Sciences Research Paper

7. The Results

The results section is where you report the findings of your study based upon the methodology [or methodologies] you applied to gather information. The results section should state the findings of the research arranged in a logical sequence without bias or interpretation. A section describing results should be particularly detailed if your paper includes data generated from your own research.

Annesley, Thomas M. "Show Your Cards: The Results Section and the Poker Game." Clinical Chemistry 56 (July 2010): 1066-1070.

Importance of a Good Results Section

When formulating the results section, it's important to remember that the results of a study do not prove anything. Findings can only confirm or reject the hypothesis underpinning your study. However, the act of articulating the results helps you to understand the problem from within, to break it into pieces, and to view the research problem from various perspectives.

The page length of this section is set by the amount and types of data to be reported. Be concise. Use non-textual elements appropriately, such as figures and tables, to present findings more effectively. In deciding what data to describe in your results section, you must clearly distinguish information that would normally be included in a research paper from any raw data or other content that could be included as an appendix. In general, raw data that has not been summarized should not be included in the main text of your paper unless your professor requests that you do so.

Avoid providing data that is not critical to answering the research question. The background information you described in the introduction section should provide the reader with any additional context or explanation needed to understand the results. A good strategy is to always re-read the background section of your paper after you have written up your results to ensure that the reader has enough context to understand the results [and, later, how you interpreted the results in the discussion section of your paper that follows].

Bavdekar, Sandeep B. and Sneha Chandak. "Results: Unraveling the Findings." Journal of the Association of Physicians of India 63 (September 2015): 44-46; Brett, Paul. "A Genre Analysis of the Results Section of Sociology Articles." English for Specific Purposes 13 (1994): 47-59; Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section. San Francisco Edit; "Reporting Findings." In Making Sense of Social Research, Malcolm Williams, editor. (London: SAGE Publications, 2003), pp. 188-207.

Structure and Writing Style

I.  Organization and Approach

For most research papers in the social and behavioral sciences, there are two possible ways of organizing the results. Both approaches are appropriate for reporting your findings, but use only one of them.

  • Present a synopsis of the results followed by an explanation of key findings. This approach can be used to highlight important findings. For example, you may have noticed an unusual correlation between two variables during the analysis of your findings. It is appropriate to highlight this finding in the results section. However, speculating as to why this correlation exists and offering a hypothesis about what may be happening belongs in the discussion section of your paper.
  • Present a result and then explain it, before presenting the next result and explaining it, and so on, ending with an overall synopsis. This is the preferred approach if you have multiple results of equal significance. It is more common in longer papers because it helps the reader to better understand each finding. In this model, it is helpful to provide a brief conclusion that ties each of the findings together and provides a narrative bridge to the discussion section of your paper.

NOTE: Just as the literature review should be arranged under conceptual categories rather than systematically describing each source, you should also organize your findings under key themes related to addressing the research problem. This can be done under either format noted above [i.e., a thorough explanation of the key results or a sequential, thematic description and explanation of each finding].

II.  Content

In general, the content of your results section should include the following:

  • Introductory context for understanding the results by restating the research problem underpinning your study. This is useful in re-orientating the reader's focus back to the research problem after having read a review of the literature and your explanation of the methods used for gathering and analyzing information.
  • Inclusion of non-textual elements, such as figures, charts, photos, maps, and tables, to further illustrate key findings, if appropriate. Rather than relying entirely on descriptive text, consider how your findings can be presented visually. This is a helpful way of condensing a lot of data into one place that can then be referred to in the text. Consider referring to appendices if there are a lot of non-textual elements.
  • A systematic description of your results, highlighting for the reader observations that are most relevant to the topic under investigation. Not all results that emerge from the methodology used to gather information may be related to answering the "So What?" question. Do not confuse observations with interpretations; observations in this context refer to highlighting important findings you discovered through a process of reviewing prior literature and gathering data.
  • The page length of your results section is guided by the amount and types of data to be reported. However, focus on findings that are important and related to addressing the research problem. It is not uncommon to have unanticipated results that are not relevant to answering the research question. This is not to say that you shouldn't acknowledge tangential findings; in fact, they can be noted as areas for further research in the conclusion of your paper. However, spending time in the results section describing tangential findings clutters your overall results section and distracts the reader.
  • A short paragraph that concludes the results section by synthesizing the key findings of the study. Highlight the most important findings you want readers to remember as they transition into the discussion section. This is particularly important if, for example, there are many results to report, the findings are complicated or unanticipated, or they are impactful or actionable in some way [i.e., able to be pursued in a feasible way applied to practice].

NOTE: Always use the past tense when referring to your study's findings. Reference to findings should always be described as having already happened because the method used to gather the information has been completed.

III.  Problems to Avoid

When writing the results section, avoid doing the following :

  • Discussing or interpreting your results. Save this for the discussion section of your paper, although where appropriate, you should compare or contrast specific results to those found in other studies [e.g., "Similar to the work of Smith [1990], one of the findings of this study is the strong correlation between motivation and academic achievement...."].
  • Reporting background information or attempting to explain your findings. This should have been done in your introduction section, but don't panic! Often the results of a study point to the need for additional background information or to explain the topic further, so don't think you did something wrong. Writing up research is rarely a linear process. Always revise your introduction as needed.
  • Ignoring negative results. A negative result generally refers to a finding that does not support the underlying assumptions of your study. Do not ignore them. Document these findings and then state in your discussion section why you believe a negative result emerged from your study. Note that negative results, and how you handle them, can give you an opportunity to write a more engaging discussion section; therefore, don't hesitate to highlight them.
  • Including raw data or intermediate calculations. Ask your professor if you need to include any raw data generated by your study, such as transcripts from interviews or data files. If raw data is to be included, place it in an appendix or set of appendices that are referred to in the text.
  • Using vague or subjective language. Be as factual and concise as possible in reporting your findings. Do not use phrases that are vague or non-specific, such as "appeared to be greater than other variables..." or "demonstrates promising trends that...." Subjective modifiers should be explained in the discussion section of the paper [i.e., why did one variable appear greater? Or, how does the finding demonstrate a promising trend?].
  • Presenting the same data or repeating the same information more than once. If you want to highlight a particular finding, it is appropriate to do so in the results section. However, you should emphasize its significance in relation to addressing the research problem in the discussion section. Do not repeat it in your results section because you can do that in the conclusion of your paper.
  • Confusing figures with tables. Be sure to properly label any non-textual elements in your paper. Don't call a chart an illustration or a figure a table. If you are not sure, see the Using Non-Textual Elements page of this guide.

Annesley, Thomas M. "Show Your Cards: The Results Section and the Poker Game." Clinical Chemistry 56 (July 2010): 1066-1070; Bavdekar, Sandeep B. and Sneha Chandak. "Results: Unraveling the Findings." Journal of the Association of Physicians of India 63 (September 2015): 44-46; Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Caprette, David R. Writing Research Papers. Experimental Biosciences Resources. Rice University; Hancock, Dawson R. and Bob Algozzine. Doing Case Study Research: A Practical Guide for Beginning Researchers. 2nd ed. New York: Teachers College Press, 2011; Introduction to Nursing Research: Reporting Research Findings. Nursing Research: Open Access Nursing Research and Review Articles. (January 4, 2012); Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section. San Francisco Edit; Ng, K. H. and W. C. Peh. "Writing the Results." Singapore Medical Journal 49 (2008): 967-968; Reporting Research Findings. Wilder Research, in partnership with the Minnesota Department of Human Services. (February 2009); Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Schafer, Mickey S. Writing the Results. Thesis Writing in the Sciences. Course Syllabus. University of Florida.

Writing Tip

Why Don't I Just Combine the Results Section with the Discussion Section?

It's not unusual to find articles in scholarly social science journals where the author(s) have combined a description of the findings with a discussion about their significance and implications. You could do this. However, if you are inexperienced in writing research papers, consider creating two distinct sections in your paper as a way to better organize your thoughts and, by extension, your paper. Think of the results section as the place where you report what your study found; think of the discussion section as the place where you interpret the information and answer the "So What?" question. As you become more skilled at writing research papers, you can consider melding the results of your study with a discussion of its implications.

Driscoll, Dana Lynn and Aleksandra Kasztalska. Writing the Experimental Report: Methods, Results, and Discussion. The Writing Lab and The OWL. Purdue University.



Revolutionizing the Study of Mental Disorders

March 27, 2024 • Feature Story • 75th Anniversary

At a Glance:

  • The Research Domain Criteria framework (RDoC) was created in 2010 by the National Institute of Mental Health.
  • The framework encourages researchers to examine functional processes that are implemented by the brain on a continuum from normal to abnormal.
  • This way of researching mental disorders can help overcome inherent limitations in using all-or-nothing diagnostic systems for research.
  • Researchers worldwide have taken up the principles of RDoC.
  • The framework continues to evolve and update as new information becomes available.

President George H. W. Bush proclaimed the 1990s “The Decade of the Brain,” urging the National Institutes of Health, the National Institute of Mental Health (NIMH), and others to raise awareness about the benefits of brain research.

“Over the years, our understanding of the brain—how it works, what goes wrong when it is injured or diseased—has increased dramatically. However, we still have much more to learn,” read the president’s proclamation. “The need for continued study of the brain is compelling: millions of Americans are affected each year by disorders of the brain…Today, these individuals and their families are justifiably hopeful, for a new era of discovery is dawning in brain research.”

[Image: an fMRI machine with computer screens displaying brain images. Credit: iStock/patrickheagney.]

Still, despite the explosion of new techniques and tools for studying the brain, such as functional magnetic resonance imaging (fMRI), many mental health researchers were growing frustrated that their field was not progressing as quickly as they had hoped.

For decades, researchers have studied mental disorders using diagnoses based on the Diagnostic and Statistical Manual of Mental Disorders (DSM)—a handbook that lists the symptoms of mental disorders and the criteria for diagnosing a person with a disorder. But, among many researchers, suspicion was growing that the system used to diagnose mental disorders may not be the best way to study them.

“There are many benefits to using the DSM in medical settings—it provides reliability and ease of diagnosis. It also provides a clear-cut diagnosis for patients, which can be necessary to request insurance-based coverage of healthcare or job- or school-based accommodations,” said Bruce Cuthbert, Ph.D., who headed the workgroup that developed NIMH’s Research Domain Criteria Initiative. “However, when used in research, this approach is not always ideal.”

Researchers would often test people with a specific diagnosed DSM disorder against those with a different disorder or with no disorder and see how the groups differed. However, different mental disorders can have similar symptoms, and people can be diagnosed with several different disorders simultaneously. In addition, a diagnosis using the DSM is all or none—patients either qualify for the disorder based on their number of symptoms, or they don’t. This black-and-white approach means there may be people who experience symptoms of a mental disorder but just miss the cutoff for diagnosis.

Dr. Cuthbert, who is now the senior member of the RDoC Unit, which orchestrates RDoC work, stated: “Diagnostic systems are based on clinical signs and symptoms, but signs and symptoms can’t really tell us much about what is going on in the brain or the underlying causes of a disorder. With modern neuroscience, we were seeing that information on genetic, pathophysiological, and psychological causes of mental disorders did not line up well with the current diagnostic disorder categories, suggesting that there were central processes that relate to mental disorders that were not being reflected in DSM-based research.”

Road to evolution

Concerned about the limits of using the DSM for research, Dr. Cuthbert, a professor of clinical psychology at the University of Minnesota at the time, approached Dr. Thomas Insel (then NIMH director) during a conference in the autumn of 2008. Dr. Cuthbert recalled saying, “I think it’s really important that we start looking at dimensions of functions related to mental disorders such as fear, working memory, and reward systems because we know that these dimensions cut across various disorders. I think NIMH really needs to think about mental disorders in this new way.”

Dr. Cuthbert didn’t know it then, but he was suggesting something similar to ideas that NIMH was considering. Just months earlier, Dr. Insel had spearheaded the inclusion of a goal in NIMH’s 2008 Strategic Plan for Research to “develop, for research purposes, new ways of classifying mental disorders based on dimensions of observable behavior and neurobiological measures.”

Unaware of the new strategic goal, Dr. Cuthbert was surprised when Dr. Insel's senior advisor, Marlene Guzman, called a few weeks later to ask if he’d be interested in taking a sabbatical to help lead this new effort. Dr. Cuthbert soon transitioned into a full-time NIMH employee, joining the Institute at an exciting time to lead the development of what became known as the Research Domain Criteria (RDoC) Framework. The effort began in 2009 with the creation of an internal working group of interdisciplinary NIMH staff who identified core functional areas that could be used as examples of what research using this new conceptual framework looked like.

The workgroup members conceived a bold change in how investigators studied mental disorders.

“We wanted researchers to transition from looking at mental disorders as all or none diagnoses based on groups of symptoms. Instead, we wanted to encourage researchers to understand how basic core functions of the brain—like fear processing and reward processing—work at a biological and behavioral level and how these core functions contribute to mental disorders,” said Dr. Cuthbert.

This approach would incorporate biological and behavioral measures of mental disorders and examine processes that cut across and apply to all mental disorders. From Dr. Cuthbert’s standpoint, this could help remedy some of the frustrations mental health researchers were experiencing.

Around the same time the workgroup was sharing its plans and organizing the first steps, Sarah Morris, Ph.D., was a researcher focusing on schizophrenia at the University of Maryland School of Medicine in Baltimore. When she first read these papers, she wondered what this new approach would mean for her research, her grants, and her lab.

She also remembered feeling that this new approach reflected what she was seeing in her data.

“When I grouped my participants by those with and without schizophrenia, there was a lot of overlap, and there was a lot of variability across the board, and so it felt like RDoC provided the pathway forward to dissect that and sort it out,” said Dr. Morris.

Later that year, Dr. Morris joined NIMH and the RDoC workgroup, saying, “I was bumping up against a wall every day in my own work and in the data in front of me. And the idea that someone would give the field permission to try something new—that was super exciting.”

The five original RDoC domains of functioning were introduced to the broader scientific community in a series of articles published in 2010.

To establish the new framework, the RDoC workgroup (including Drs. Cuthbert and Morris) began a series of workshops in 2011 to collect feedback from experts in various areas of the larger scientific community. Five workshops were held over the next two years, each focused on a different broad domain of functioning identified from prior basic behavioral neuroscience research. The five domains were:

  • Negative valence (which included processes related to things like fear, threat, and loss)
  • Positive valence (which included processes related to working for rewards and appreciating rewards)
  • Cognitive processes
  • Social processes
  • Arousal and regulation processes (including arousal systems for the body and sleep).

At each workshop, experts defined several specific functions, termed constructs, that fell within the domain of interest. For instance, constructs in the cognitive processes domain included attention, memory, cognitive control, and others.

The result of these feedback sessions was a framework that described mental disorders as the interaction between different functional processes—processes that could occur on a continuum from normal to abnormal. Researchers could measure these functional processes in a variety of complementary ways—for example, by looking at genes associated with these processes, the brain circuits that implement these processes, tests or observations of behaviors that represent these functional processes, and what patients report about their concerns. Also included in the framework was an understanding that functional processes associated with mental disorders are impacted and altered by the environment and a person’s developmental stage.

Preserving momentum

[Image: the RDoC Framework depicted as four overlapping circles titled Lifespan, Domains, Units of Analysis, and Environment.]

Over time, the framework continued evolving and adapting to the changing science. In 2018, a sixth functional area called sensorimotor processes was added to the framework, and in 2019, a workshop was held to better incorporate developmental and environmental processes into the framework.

Since its creation, the use of RDoC principles in mental health research has spread across the U.S. and the rest of the world. For example, the Psychiatric Ratings using Intermediate Stratified Markers (PRISM) project, which receives funding from the European Union’s Innovative Medicines Initiative, is seeking to link biological markers of social withdrawal with clinical diagnoses using RDoC-style principles. Similarly, the Roadmap for Mental Health Research in Europe (ROAMER) project by the European Commission sought to integrate mental health research across Europe using principles similar to those in the RDoC Framework.

Dr. Morris, who has since become Head of the RDoC Unit, commented: “The fact that investigators and science funders outside the United States are also pursuing similar approaches gives me confidence that we’ve been on the right pathway. I just think that this has got to be how nature works and that we are in better alignment with the basic fundamental processes that are of interest to understanding mental disorders.”

The RDoC framework will continue to adapt and change with emerging science to remain relevant as a resource for researchers now and in the future. For instance, NIMH continues to work toward the development and optimization of tools to assess RDoC constructs and supports data-driven efforts to measure function within and across domains.

“For the millions of people impacted by mental disorders, research means hope. The RDoC framework helps us study mental disorders in a different way and has already driven considerable change in the field over the past decade,” said Joshua A. Gordon, M.D., Ph.D., director of NIMH. “We hope this and other innovative approaches will continue to accelerate research progress, paving the way for prevention, recovery, and cure.”

Publications

Cuthbert, B. N., & Insel, T. R. (2013). Toward the future of psychiatric diagnosis: The seven pillars of RDoC. BMC Medicine, 11, 126. https://doi.org/10.1186/1741-7015-11-126

Cuthbert, B. N. (2014). Translating intermediate phenotypes to psychopathology: The NIMH Research Domain Criteria. Psychophysiology, 51(12), 1205–1206. https://doi.org/10.1111/psyp.12342

Cuthbert, B., & Insel, T. (2010). The data of diagnosis: New approaches to psychiatric classification. Psychiatry, 73(4), 311–314. https://doi.org/10.1521/psyc.2010.73.4.311

Cuthbert, B. N., & Kozak, M. J. (2013). Constructing constructs for psychopathology: The NIMH research domain criteria. Journal of Abnormal Psychology, 122(3), 928–937. https://doi.org/10.1037/a0034028

Garvey, M. A., & Cuthbert, B. N. (2017). Developing a motor systems domain for the NIMH RDoC program. Schizophrenia Bulletin, 43(5), 935–936. https://doi.org/10.1093/schbul/sbx095

Insel, T. (2013). Transforming diagnosis. http://www.nimh.nih.gov/about/director/2013/transforming-diagnosis.shtml

Kozak, M. J., & Cuthbert, B. N. (2016). The NIMH Research Domain Criteria initiative: Background, issues, and pragmatics. Psychophysiology, 53(3), 286–297. https://doi.org/10.1111/psyp.12518

Morris, S. E., & Cuthbert, B. N. (2012). Research Domain Criteria: Cognitive systems, neural circuits, and dimensions of behavior. Dialogues in Clinical Neuroscience, 14(1), 29–37. https://doi.org/10.31887/DCNS.2012.14.1/smorris

Sanislow, C. A., Pine, D. S., Quinn, K. J., Kozak, M. J., Garvey, M. A., Heinssen, R. K., Wang, P. S., & Cuthbert, B. N. (2010). Developing constructs for psychopathology research: Research domain criteria. Journal of Abnormal Psychology, 119(4), 631–639. https://doi.org/10.1037/a0020909

  • Presidential Proclamation 6158 (The Decade of the Brain) 
  • Research Domain Criteria Initiative website
  • Psychiatric Ratings using Intermediate Stratified Markers (PRISM)  
  • Roadmap for Mental Health Research in Europe (ROAMER)  

Research-Methodology

Suggestions for Future Research

Your dissertation needs to include suggestions for future research. Depending on the requirements of your university, suggestions for future research can either be integrated into the Research Limitations section or presented as a separate section.

You will need to propose four to five suggestions for future studies, and these can include the following:

1. Building upon the findings of your research. These may relate to findings of your study that you did not anticipate. Moreover, you may suggest future research to address unanswered aspects of your research problem.

2. Addressing limitations of your research. Your research will not be free from limitations, and these may relate to the formulation of the research aim and objectives, the application of the data collection method, the sample size, the scope of discussion and analysis, etc. You can propose future research suggestions that address the limitations of your study.

3. Conducting the same research in a new context, location, and/or culture. It is most likely that you have addressed your research problem within the setting of a specific context, location, and/or culture. Accordingly, you can propose future studies that address the same research problem in a different setting, context, location, and/or culture.

4. Re-assessing and expanding the theory, framework, or model you have addressed in your research. Future studies can address the effects of a specific event, the emergence of a new theory or evidence, and/or other recent phenomena on your research problem.

My e-book, The Ultimate Guide to Writing a Dissertation in Business Studies: A Step-by-Step Assistance, offers practical assistance for completing a dissertation with minimum or no stress. The e-book covers all stages of writing a dissertation, from the selection of the research area to submitting the completed work within the deadline. John Dudovskiy


  • Short Report
  • Open access
  • Published: 22 June 2011

Effective implementation of research into practice: an overview of systematic reviews of the health literature

Annette Boaz, Juan Baeza, Alec Fraser & the European Implementation Score Collaborative Group (EIS)

BMC Research Notes, volume 4, Article number: 212 (2011)


The gap between research findings and clinical practice is well documented and a range of interventions has been developed to increase the implementation of research into clinical practice.

A review of systematic reviews of the effectiveness of interventions designed to increase the use of research in clinical practice was conducted. A search for relevant systematic reviews was conducted in Medline and the Cochrane Database of Reviews, 1998-2009. Thirteen systematic reviews containing 313 primary studies were included. Four strategy types were identified: audit and feedback; computerised decision support; opinion leaders; and multifaceted interventions. Nine of the reviews reported on multifaceted interventions. This review highlights the small effects of single interventions such as audit and feedback, computerised decision support, and opinion leaders. Systematic reviews of multifaceted interventions claim an improvement in effectiveness over single interventions, with effect sizes ranging from small to moderate. This review found that a number of published systematic reviews fail to state whether the recommended practice change is based on the best available research evidence.

Conclusions

This overview of systematic reviews updates the body of knowledge relating to the effectiveness of key mechanisms for improving clinical practice and service development. Multifaceted interventions are more likely to improve practice than single interventions such as audit and feedback. This review identified a small literature focusing explicitly on getting research evidence into clinical practice. It emphasizes the importance of ensuring that primary studies and systematic reviews are precise about the extent to which the reported interventions focus on changing practice based on research evidence (as opposed to other information codified in guidelines and education materials).

Despite significant investment in health research, challenges remain in translating this research into policies and practices that improve patient care. The gap between research findings and clinical practice is well documented [ 1 , 2 ] and a range of interventions has been developed to increase the implementation of research into health policy and practice. In particular, clinical guidelines, audit and feedback, continuing professional education and financial incentives are widely used and have been extensively evaluated [ 3 ].

Systematic reviews of existing research provide a rigorous method for assessing the relative effectiveness of different interventions that seek to implement research evidence into healthcare practice. A review by Grimshaw et al. [ 4 ] identified a range of strategies for changing provider behaviour ranging from educational interventions, audit and feedback, computerised decision support to financial incentives and combined interventions. The authors concluded that all the interventions had the potential to promote the uptake of evidence in practice, although no one intervention seemed to be more effective than the others in all settings.

This overview of systematic reviews of the health literature on the effectiveness of currently used implementation methods in translating research findings into practice provides a focused update of Grimshaw et al.'s 2001 review. We detect a growing assumption that interventions designed to improve clinical practice and service development are always based on the best quality evidence, something that the pioneers of evidence-based medicine went to great lengths to point out was not (and was never likely to be) the case. We investigate whether any methods were effective in implementing research evidence. We excluded systematic reviews focusing on achieving change that were not sufficiently explicit about their evidence base. We want to know the effectiveness of implementation methods in translating evidence-based findings into practice, as opposed to other, non-evidence-based changes.

We searched Medline and the Cochrane Database of Reviews, 1998-2009, using the search strategy employed by Grimshaw et al. [4]. Searches from 1966 to July 1998 were completed by Grimshaw et al. for their earlier review [4]. Full details of the data extraction process are given in Figure 1.

[Figure 1: Data extraction information.]

Inclusion criteria

Systematic reviews are conducted to a set of consistent, transparent quality standards. As such, only systematic reviews were included in the review. In line with Francke et al. [5], reviews were considered to be systematic reviews if they met at least two of the following criteria: search terms were included; the search included PubMed/Medline; the methodological quality of the studies was assessed as part of the review. We included reviews that focused on the implementation of research evidence into practice. Study populations included healthcare providers and patients. Numerous interventions were assessed, including clinical guidelines, audit and feedback, continuing professional education, financial incentives, use of opinion leaders, and multifaceted interventions. Some systematic reviews included comparisons of different interventions; other reviews compared one type of intervention against a control group. Outcomes related to improvements in process or patient well-being. Numerous individual study types (RCT, CCT, BA, ITS) were included within the systematic reviews (see Additional file 1).
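
As a minimal illustration of this two-of-three screening rule (a sketch only, not part of the original review protocol; the function and parameter names are hypothetical), the logic can be expressed as:

    def is_systematic_review(reported_search_terms: bool,
                             searched_pubmed_or_medline: bool,
                             assessed_study_quality: bool) -> bool:
        # A review qualifies as "systematic" if it meets at least two
        # of the three criteria adapted from Francke et al. [5].
        criteria_met = sum([reported_search_terms,
                            searched_pubmed_or_medline,
                            assessed_study_quality])
        return criteria_met >= 2

    # Example: a review that reports its search terms and assesses study
    # quality, but does not state that PubMed/Medline was searched,
    # still meets the threshold.
    assert is_systematic_review(True, False, True)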

Exclusion criteria

We excluded systematic reviews that did not look explicitly at interventions designed to get research evidence into practice [6]. However, this is far from straightforward in the field of healthcare, where the principle of evidence-based practice is widely acknowledged and tools to change behaviour, such as guidelines, are often seen as an implicit codification of evidence, despite the fact that this is not always the case [7]. Systematic reviews that explored changes in provider behaviour but did not state that the changes were research-based were excluded. Systematic reviews that made no mention of research evidence were excluded, as were papers that were unclear about the use of research evidence [8, 9]. Studies that focused on evidence-based interventions but failed to report on the evidence base were also excluded. One systematic review was excluded because it focused on changing patient rather than provider behaviour [10], and a second was excluded because it had no demonstrable outcomes [11].

Fifty-eight systematic reviews were read by members of the research team (either AF and AB or AF and JB) and 45 of these were excluded following discussion between all three members of the team in order to reduce the risk of bias. Thirty-one systematic reviews were excluded because they did not look explicitly at interventions designed to get research evidence into practice. Six systematic reviews were excluded because of unclear information about the evidence base of some individual studies within the review. In 5 cases it was clear that individual studies within the reviews focused on non-evidence-based changes such as cost reduction in prescribing practice and therefore these reviews were not exclusively based on getting research findings into practice. Three further papers were excluded as they were overviews. Additional file 2 provides bibliographic details and reasons for exclusion for the 45 excluded reviews.

We identified 13 systematic reviews that met the inclusion criteria. The systematic reviews contained between 10 and 66 primary studies each. Of the 313 primary studies in the 13 systematic reviews, there were only 21 duplications. Within the systematic reviews, randomised controlled trials (RCT) were favoured by the authors over non-RCT study designs; however, non-randomised controlled clinical trials (CCT), before-and-after (BA) studies, and interrupted time series (ITS) studies were also included. Several systematic reviews covered more than one clinical specialty, while others focused on a specific area, including prescribing, psychiatric care, pneumonia, obstetrics, stroke care, and diabetes care. The original papers were all published in English: 5 came from Canada, 2 from Australia, 2 from the UK, and one each from France, Germany, Italy, and the USA.

The methodological quality of the systematic reviews was assessed by two members of the research team (either AF and AB or AF and JB) using an established quality checklist adapted by Francke et al. from Oxman and Guyatt [5], with a scale of 0 (poor quality) to 7 (high quality). In most cases there was agreement between the two assessors. Where significant differences arose, they were resolved by discussion between all three members of the review team. Nine of the systematic reviews received the maximum quality score of 7 [12–20]. One systematic review received a score of 6 [21], two received a score of 5 [22, 23], and one scored 4 [24]. Further details in relation to each included systematic review are available in Additional file 3. The flaws identified within these systematic reviews related to: lack of clarity of search methods, lack of comprehensiveness of search methods, potential bias in the selection of articles, and failure to report the methods used to combine the findings of the selected articles. The quality scores are listed in Table 1.

Table 1 shows the included studies with their quality scores, number of included studies, and conclusions, grouped by strategy types drawn from the EPOC implementation types. The authors of these reviews did not always report effect sizes, and when they did, the effect sizes were descriptive (e.g., moderate or small) rather than numerical. The systematic reviews identify four strategy types: audit and feedback; computerised decision support; use of opinion leaders; and multifaceted interventions. These are considered in turn below. Multifaceted interventions include more than one type of implementation strategy (including incentives, audit and feedback, educational strategies, and reminders).

Audit and feedback

One study looked at the effects of audit and feedback [22]. Prescribing and preventive care seem most likely to be altered by these approaches. More complex areas, such as disease management, adherence to guidelines, and diagnosis, appear less affected by audit and feedback. The authors suggest that this may be due to differences in the complexity of the decision making required of clinicians in these respective facets of care.

Computerised decision support

Two studies focused on computerised decision support. One review suggested that research evidence in the form of computer guidance may give clinicians greater confidence when prescribing and lead to more effective prescribing practice; however, the authors cautioned that these findings were based on a small number of studies of low overall quality [12]. A second review lamented the lack of high-quality primary studies demonstrating improvements in patient outcomes, and the poor descriptive value of many studies, which makes drawing lessons for implementation difficult [13].

Use of opinion leaders

One review looked at the role of local opinion leaders [14]. The authors suggest that opinion leaders can successfully promote evidence-based practice; however, the difficulty of identifying opinion leaders and the labour-intensive nature of assessing their impact may limit the use of opinion leaders as a knowledge transfer intervention.

Multifaceted interventions

The majority of the reviews incorporated studies that focused on more than one intervention type across a variety of clinical areas. Examples of the interventions in one multifaceted approach included physician and public education, physician peer review, and incentive payments to physicians and hospitals. The most consistent message is that interventions designed to promote the use of evidence in practice are more effective when delivered as part of a multifaceted intervention that combines different approaches [16, 18–21, 23, 24], though the effect is characterised as small to moderate [17].

A further rationale for multifaceted interventions is that practitioners respond differently to varying types of interventions. For example, one of the reviews [18] investigated whether particular interventions were effective in promoting the use of evidence in obstetrics. The authors concluded that, in obstetrics, nurses were more receptive to educational strategies than physicians, whilst audit and feedback were effective for both groups.

Overall the reviews suggest that active interventions, such as opinion leaders [ 14 ] and reminders and feedback [ 22 ] are more effective than passive approaches, such as information campaigns.

This overview of systematic reviews, with its specific focus on evidence-based interventions, highlights a major limitation of existing reviews and primary studies in contributing to the effectiveness of evidence-based medicine. This review emphasises the importance of ensuring that primary studies and systematic reviews are precise about the extent to which interventions are focused on changing practice based on evidence (as opposed to other information codified in guidelines, education material, etc.). The review identified very few systematic reviews looking exclusively and explicitly at implementing research findings into practice; conversely, 43 reviews either focused on the implementation of non-evidence-based findings or were not explicit about the nature of the findings and were thus excluded.

This overview of systematic reviews updates the existing body of knowledge relating to the effectiveness of key mechanisms for improving clinical practice and service development [25, 26]. The 13 studies included in this overview highlight the small effects of single interventions such as audit and feedback, computerised decision support, and opinion leaders. Multifaceted interventions are frequently used to promote the use of research in practice. Systematic reviews of multifaceted interventions claim an improvement in effectiveness over single interventions, with effect sizes ranging from small to moderate.

The EPOC group within the Cochrane Collaboration has made a particularly significant contribution in producing reviews relating to mechanisms such as audit and feedback [ 27 ], opinion leaders [ 14 ], and computerised advice [ 12 ]. Previous syntheses of existing reviews [ 1 , 4 , 28 ] have identified a large literature focused on changing practice, such as changing prescribing behaviour and service reorganizations. The literature focuses on a specific set of interventions that includes audit, clinical guidelines, opinion leaders and education and feedback. These interventions have been extensively evaluated in randomized controlled trials. The reviewers concluded that promoting the use of evidence in practice requires a complex, multifaceted intervention. While guidelines, feedback and educational interventions achieve small to moderate impacts in isolation, they are far more effective when combined in multiple strategies.

The challenges of achieving a more evidence-based approach to medical practice have been widely reported [ 29 , 30 ]. We have found that a number of published studies fail to state whether the recommended practice change is based on the best available research evidence. If this is not clearly stated in research papers it is not safe to assume this is the case. Furthermore, such an approach would run contrary to the principles of evidence-based medicine. Without being precise in this important matter we are in danger of assuming that all interventions designed to improve healthcare are implicitly evidence based, without research to support this hypothesis. Transparency and precision are critical to ensuring that evidence continues to play a key role in the development of healthcare and does not merely become shorthand for any 'desirable' change.

Comparison with previous reviews

We know from the literature on the challenges involved in promoting evidence-based medicine that its principles are not universally embedded in mechanisms such as guidelines and educational materials designed to promote clinical practice and service improvement [31]. It is therefore important that evaluations of strategies to change provider behaviour either focus only on changes that are evidence-based (not ones that are politically or financially driven) or are explicit about whether the changes are evidence-based or not.

In reporting the findings of existing primary studies, the systematic reviews point to two issues that warrant further investigation. Firstly, in order to improve the impact of research on health policy and practice, it is essential that theories are developed that reflect the diverse mechanisms involved in implementation [6]. It can be concluded from the reviews reported here that implementation of evidence into practice requires complex interventions that consider issues of context and process. For example, many of the systematic reviews [16, 18–21, 23, 24] highlight the importance of multifaceted interventions for promoting the implementation of evidence into practice. One of the papers [18] signals the importance of considering which implementation mechanisms might be most effective in particular clinical contexts. Therefore, systematic reviews of effectiveness studies alone may not be sufficiently sensitive to deliver all the learning necessary to improve the use of research evidence in clinical practice and service improvement. A deeper understanding may be gained by complementing these studies with the findings from social science research that considers the important issues of context and process [32, 33]. Secondly, this review identified a much smaller literature focusing explicitly on getting research evidence into practice [12–24]. This result suggests that further studies should explore whether the nature of the behaviour change being sought (either evidence based or not) has an impact on the degree of change that occurs.

However, the existence of a relatively large, rigorously evaluated set of interventions to promote the use of research evidence provides a vital tool (albeit not the only tool) in meeting the challenge of promoting better use of evidence in practice to improve patient care. Greater transparency and precision about the degree to which interventions are designed to promote evidence-based clinical practice and service improvement will further enhance our understanding of the progress made towards evidence-based medicine.

Limitations and strengths of this study

There are some limitations to conducting overviews of systematic reviews. Firstly, there are concerns about double counting individual studies included in different reviews. In this overview we checked for this and found surprisingly little overlap. Secondly, in reviews of reviews the studies identified are unlikely to have been published in the last few years, given that they must first appear in an original paper and then be identified and included in a published review. Thus a review of reviews is less likely to include the very latest research, as this would not yet be captured in existing reviews. This might have particular implications for interventions based on new technologies, such as electronic reminders for clinicians. We made our best efforts to overcome this by running the searches again at the end of 2009, incorporating 2 additional studies [13, 20]. Finally, the reviewers are situated at some distance from the original studies and rely on summaries produced by others of existing primary studies. A further limitation relates to the selection of systematic reviews that looked explicitly at interventions designed to get research evidence into practice. A number of systematic reviews were excluded because their inclusion criteria were not explicit that the selected studies focused on promoting the use of evidence in practice. Others were excluded because the main body of the text was not explicit that the systematic review focused on promoting the use of evidence in practice. These omissions may relate to reporting bias rather than to the systematic reviews themselves.

However, there is a considerable efficiency gain in doing a review of reviews, particularly as so much synthesis work has already been done in the field. By reviewing 13 reviews of 313 individual studies, we can learn from a wide body of work. Furthermore, a coherent and tested set of interventions emerges that is highly consistent with previous studies [1].

Ethical approval

Ethics committee approval was not required for this review.

Contributors

JB, AB and AF developed the review protocol. AF conducted the searches, with guidance from Sarah Lawson (Senior Information Specialist, NHS Support, KCL), and conducted the initial screening based on titles and abstracts. Full-text screening was conducted by JB, AB and AF. Data extraction and quality appraisal were conducted by either JB and AF or AB and AF. The first draft of the paper was produced by AB and JB, with subsequent drafts developed by AB, JB and AF. All authors have read and approved the manuscript.

References

1. Grol R, Grimshaw J: From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003, 362: 1225-1230. 10.1016/S0140-6736(03)14546-1.

2. Green LA, Seifert CM: Translation of research into practice: why we can't "just do it". J Am Board Fam Pract. 2005, 18: 541-545. 10.3122/jabfm.18.6.541.

3. Grimshaw J, McAuley LM, Bero L: Systematic reviews of the effectiveness of quality improvement strategies and programmes. Quality & Safety in Health Care. 2003, 12: 298-303.

4. Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L: Changing provider behavior: an overview of systematic reviews of interventions. Medical Care. 2001, 39: II2-II45.

5. Francke AL, Smit MC, de Veer AJ, Mistiaen P: Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Medical Informatics & Decision Making. 2008, 8: 38. 10.1186/1472-6947-8-38.

6. Thompson DS, Estabrooks CA, Scott-Findlay S, Moore K, Wallin L: Interventions aimed at increasing research use in nursing: a systematic review. Implementation Science. 2007, 2.

7. Shiffman RN, Liaw Y, Brandt CA, Corb GJ: Computer-based guideline implementation systems: a systematic review of functionality and effectiveness. Journal of the American Medical Informatics Association. 1999, 6: 104-114. 10.1136/jamia.1999.0060104.

8. Ranji SR, Steinman MA, Shojania KG, Gonzales R: Interventions to reduce unnecessary antibiotic prescribing: a systematic review and quantitative analysis. Medical Care. 2008, 46: 847-862. 10.1097/MLR.0b013e318178eabd.

9. van der Wees PJ, Jamtvedt G, Rebbeck T, de Bie RA, Dekker J, Hendriks EJ: Multifaceted strategies may increase implementation of physiotherapy clinical guidelines: a systematic review. Australian Journal of Physiotherapy. 2008, 54: 233-241.

10. Stead LF, Bergson G, Lancaster T: Physician advice for smoking cessation. Cochrane Database of Systematic Reviews. 2008, Issue 2. Chichester, UK: John Wiley & Sons.

11. Wilson A, Childs S: The effect of interventions to alter the consultation length of family physicians: a systematic review. British Journal of General Practice. 2006, 56: 876-882.

12. Durieux P, Trinquart L, Colombet I, Niès J, Walton RT, Rajeswaran A: Computerized advice on drug dosage to improve prescribing practice. Cochrane Database of Systematic Reviews. 2008.

13. Mollon B, Chong J, Holbrook AM, Sung M, Thabane L, Foster G: Features predicting the success of computerized decision support for prescribing: a systematic review of randomized controlled trials. BMC Medical Informatics & Decision Making. 2009, 9: 11. 10.1186/1472-6947-9-11.

14. Doumit G, Gattellari M, Grimshaw J, O'Brien MA: Local opinion leaders: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2007, (1): CD000125.

15. Davey P, Brown E, Fenelon L, Finch R, Gould I, Hartman G: Interventions to improve antibiotic prescribing practices for hospital inpatients. Cochrane Database of Systematic Reviews. 2005, (4): CD003543.

16. Arnold SR, Straus SE: Interventions to improve antibiotic prescribing practices in ambulatory care. Cochrane Database of Systematic Reviews. 2005, (4): CD003539.

17. Hakkennes S, Dodd K: Guideline implementation in allied health professions: a systematic review of the literature. Quality & Safety in Health Care. 2008, 17: 296-300. 10.1136/qshc.2007.023804.

18. Chaillet N, Dube E, Dugas M, Audibert F, Tourigny C, Fraser WD: Evidence-based strategies for implementing guidelines in obstetrics: a systematic review. Obstetrics & Gynecology. 2006, 108: 1234-1245. 10.1097/01.AOG.0000236434.74160.8b.

19. Chaillet N, Dumont A: Evidence-based strategies for reducing cesarean section rates: a meta-analysis. Birth. 2007, 34: 53-64. 10.1111/j.1523-536X.2006.00146.x.

20. de Belvis AG, Pelone F, Biasco A, Ricciardi W, Volpe M: Can primary care professionals' adherence to Evidence Based Medicine tools improve quality of care in type 2 diabetes mellitus? A systematic review. Diabetes Research & Clinical Practice. 2009, 85: 119-131. 10.1016/j.diabres.2009.05.007.

21. Weinmann S, Koesters M, Becker T: Effects of implementation of psychiatric guidelines on provider performance and patient outcome: systematic review. Acta Psychiatrica Scandinavica. 2007, 115: 420-433. 10.1111/j.1600-0447.2007.01016.x.

22. Bywood PT, Lunnay B, Roche AM: Strategies for facilitating change in alcohol and other drugs (AOD) professional practice: a systematic review of the effectiveness of reminders and feedback. Drug & Alcohol Review. 2008, 27: 548-558. 10.1080/09595230802245535.

23. Kwan J, Hand P, Sandercock P: Improving the efficiency of delivery of thrombolysis for acute stroke: a systematic review. QJM. 2004, 97: 273-279. 10.1093/qjmed/hch054.

24. Simpson SH, Marrie TJ, Majumdar SR: Do guidelines guide pneumonia practice: a systematic review of interventions and barriers to best practice in the management of community-acquired pneumonia. Respiratory Care Clinics of North America. 2005, 11: 1-13.

25. Greco PJ, Eisenberg JM: Changing physicians' practices. N Engl J Med. 1993, 329: 1271-1274. 10.1056/NEJM199310213291714.

26. Oxman A, Thomson MA, Davis DA, Haynes RB: No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. CMAJ. 1995, 153: 1423-1431.

27. Jamtvedt G, Young JM, Kristoffersen DT, O'Brien MA, Oxman AD: Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database of Systematic Reviews. 2006, (2): CD000259.

28. Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L: Changing provider behavior: an overview of systematic reviews of interventions. Medical Care. 2001, 39: II2-II45.

29. Haynes B, Haines A: Barriers and bridges to evidence based clinical practice. BMJ. 1998, 317: 273-276.

30. Dopson S, Locock L, Gabbay J, Ferlie E, Fitzgerald L: Evidence-Based Medicine and the Implementation Gap. Health. 2003, 7: 311-330.

31. Oxman AD, Schünemann HJ, Fretheim A: Improving the use of research evidence in guideline development: 12. Incorporating considerations of equity. Health Research Policy and Systems. 2006, 4.

32. Dopson S, Fitzgerald L: Knowledge to Action? Evidence-Based Healthcare in Context. 2005, Oxford: Oxford University Press.

33. Angus J, Hodnett E, O'Brien-Pallas L: Implementing evidence-based nursing practice: a tale of two intrapartum nursing units. Nurs Inq. 2003, 10: 218-228. 10.1046/j.1440-1800.2003.00193.x.

Download references

Acknowledgements and Funding

This work forms part of the European Implementation Score (EIS) project, funded by the EU 7th Framework Programme. The EU FP7 EIS project is a collaboration between King's College London, University of Florence, University of Lund, London School of Economics, University College London, the German Stroke Foundation and Charité - Universitätsmedizin Berlin. The work packages are led by: Prof. C. Wolfe, Prof. D. Inzitari, Dr J. Baeza, Prof. B. Norrving, Prof. P. Heuschmann, Prof. A. McGuire, Prof. H. Hemingway, Dr M. Wagner and Dr C. McKevitt. This paper has been written by the authors on behalf of the EIS Work Package 3 board: Dr J. Baeza, School of Social Sciences and Public Policy, King's College London, UK; Dr A. Rudd, Guy's and St. Thomas' Foundation Trust, St. Thomas' Hospital, UK; Prof. M. Giroud, Department of Neurology, University of Dijon, France; and Prof. M. Dennis, Division of Clinical Neurosciences, University of Edinburgh, UK. We thank two anonymous reviewers and Professors Charles Wolfe and Naomi Fulop for their very helpful comments on a previous draft of this paper. AB acknowledges financial support from the Department of Health via the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London and King's College Hospital NHS Foundation Trust.

Author information

Authors and Affiliations

Department of Primary Care and Public Health Sciences, King's College London, 7th Floor, Capital House, 42 Weston Street, London, SE1 3QD, UK

Annette Boaz

Department of Management, School of Social Science and Public Policy, King's College London, Franklin-Wilkins Building, 150 Stamford Street, London, SE1 9NH, UK

Alec Fraser


Corresponding author

Correspondence to Annette Boaz.

Additional information

Declaration of competing interests

The authors declare that they have no competing interests.

Electronic supplementary material


Additional file 1: Data extracted from the included reviews. Detailed information about the included reviews and reasons for inclusion. (DOC 111 KB)

Additional file 2: Excluded studies. Information about the studies excluded from the review. (DOCX 25 KB)

Additional file 3: Quality assessment. A description of the quality assessment tools used in the review. (DOCX 22 KB)

Authors' original submitted files for images

Below are the links to the authors’ original submitted files for images.

Authors’ original file for figure 1

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Boaz, A., Baeza, J., Fraser, A. et al. Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes 4, 212 (2011). https://doi.org/10.1186/1756-0500-4-212


Received: 11 January 2011

Accepted: 22 June 2011

Published: 22 June 2011

DOI: https://doi.org/10.1186/1756-0500-4-212


Keywords

  • Systematic Review
  • Primary Study
  • Research Evidence
  • Opinion Leader
  • Strategy Type

BMC Research Notes

ISSN: 1756-0500

The limitations section of a research study is written so that further improvements can be made in future work.


Limitations in Research – Types, Examples and Writing Guide

Limitations in Research

Limitations in research refer to the factors that may affect the results, conclusions, and generalizability of a study. These limitations can arise from various sources, such as the design of the study, the sampling methods used, the measurement tools employed, and the limitations of the data analysis techniques.

Types of Limitations in Research

Types of Limitations in Research are as follows:

Sample Size Limitations

This refers to the size of the group of people or subjects that are being studied. If the sample size is too small, then the results may not be representative of the population being studied. This can lead to a lack of generalizability of the results.
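To make the cost of a small sample concrete, the standard formula for estimating a proportion, n = z²·p(1−p)/e², links sample size to the margin of error e. The Python sketch below is a minimal illustration only (z = 1.96 assumes a 95% confidence level, and p = 0.5 is the most conservative choice):

```python
import math

def sample_size_for_proportion(margin_of_error, z=1.96, p=0.5):
    """Minimum n to estimate a proportion within +/- margin_of_error.

    Implements n = z^2 * p * (1 - p) / e^2; p = 0.5 maximizes the
    required n, so the answer is conservative.
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size_for_proportion(0.05))  # 385 respondents for +/- 5 points
print(sample_size_for_proportion(0.10))  # 97 for +/- 10 points
print(sample_size_for_proportion(0.18))  # 30 -- a small sample buys little precision
```

Read in reverse, a study limited to roughly 30 participants can pin a population proportion down only to within about ±18 percentage points, which is one concrete sense in which a small sample limits generalizability.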

Time Limitations

Time limitations constrain the research process. The study may not run long enough to observe the long-term effects of an intervention, or to collect enough data to draw accurate conclusions.

Selection Bias

This refers to a type of bias that can occur when the selection of participants in a study is not random. This can lead to a biased sample that is not representative of the population being studied.
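A small simulation can make the mechanism visible. In this invented example, a highly satisfied subgroup makes up 20% of the population but half of a convenience sample, so the non-random sample overstates the population mean:

```python
import random
from statistics import mean

random.seed(1)

# Hypothetical population: a satisfied subgroup (20%) and everyone else.
satisfied = [random.gauss(80, 5) for _ in range(2000)]
others = [random.gauss(60, 5) for _ in range(8000)]
population = satisfied + others

random_sample = random.sample(population, 200)
# Non-random recruitment that over-represents the satisfied subgroup:
convenience_sample = random.sample(satisfied, 100) + random.sample(others, 100)

print(f"population mean:         {mean(population):.1f}")          # ~64
print(f"random sample mean:      {mean(random_sample):.1f}")       # ~64
print(f"convenience sample mean: {mean(convenience_sample):.1f}")  # ~70, biased
```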

Confounding Variables

Confounding variables are factors that can influence the outcome of a study, but are not being measured or controlled for. These can lead to inaccurate conclusions or a lack of clarity in the results.
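The sketch below illustrates this with invented numbers: age drives both coffee intake and heart risk, so the two appear strongly correlated even though coffee never enters the risk equation. (statistics.correlation requires Python 3.10 or later.)

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(0)

# Age is the unmeasured confounder: it drives both variables.
age = [random.uniform(20, 70) for _ in range(5000)]
coffee = [0.1 * a + random.gauss(0, 1) for a in age]
risk = [0.5 * a + random.gauss(0, 3) for a in age]  # coffee plays no role here

# A strong spurious association (~0.75) appears anyway, purely via age:
print(f"corr(coffee, risk) = {correlation(coffee, risk):.2f}")
```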

Measurement Error

This refers to inaccuracies in the measurement of variables, such as using a faulty instrument or scale. This can lead to inaccurate results or a lack of validity in the study.
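Beyond adding noise, measurement error systematically weakens (attenuates) the relationships a study observes. A minimal simulation with invented values, again assuming Python 3.10+ for statistics.correlation:

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(0)

true_score = [random.gauss(50, 10) for _ in range(5000)]
outcome = [0.8 * t + random.gauss(0, 4) for t in true_score]

# The same construct measured with a faulty instrument (noise sd = 10):
noisy_score = [t + random.gauss(0, 10) for t in true_score]

print(f"accurate instrument: {correlation(true_score, outcome):.2f}")   # ~0.89
print(f"faulty instrument:   {correlation(noisy_score, outcome):.2f}")  # ~0.63
```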

Ethical Limitations

Ethical limitations refer to the ethical constraints placed on research studies. For example, certain studies may not be allowed to be conducted due to ethical concerns, such as studies that involve harm to participants.

Examples of Limitations in Research

Some examples of limitations in research are as follows:

Research Title: “The Effectiveness of Machine Learning Algorithms in Predicting Customer Behavior”

Limitations:

  • The study only considered a limited number of machine learning algorithms and did not explore the effectiveness of other algorithms.
  • The study used a specific dataset, which may not be representative of all customer behaviors or demographics.
  • The study did not consider the potential ethical implications of using machine learning algorithms in predicting customer behavior.

Research Title: “The Impact of Online Learning on Student Performance in Computer Science Courses”

  • The study was conducted during the COVID-19 pandemic, which may have affected the results due to the unique circumstances of remote learning.
  • The study only included students from a single university, which may limit the generalizability of the findings to other institutions.
  • The study did not consider the impact of individual differences, such as prior knowledge or motivation, on student performance in online learning environments.

Research Title: “The Effect of Gamification on User Engagement in Mobile Health Applications”

  • The study only tested a specific gamification strategy and did not explore the effectiveness of other gamification techniques.
  • The study relied on self-reported measures of user engagement, which may be subject to social desirability bias or measurement errors.
  • The study only included a specific demographic group (e.g., young adults) and may not be generalizable to other populations with different preferences or needs.

How to Write Limitations in Research

When writing about the limitations of a research study, it is important to be honest and clear about the potential weaknesses of your work. Here are some tips for writing about limitations in research:

  • Identify the limitations: Start by identifying the potential limitations of your research. These may include sample size, selection bias, measurement error, or other issues that could affect the validity and reliability of your findings.
  • Be honest and objective: When describing the limitations of your research, be honest and objective. Do not try to minimize or downplay the limitations, but also do not exaggerate them. Be clear and concise in your description of the limitations.
  • Provide context: It is important to provide context for the limitations of your research. For example, if your sample size was small, explain why this was the case and how it may have affected your results. Providing context can help readers understand the limitations in a broader context.
  • Discuss implications: Discuss the implications of the limitations for your research findings. For example, if there was a selection bias in your sample, explain how this may have affected the generalizability of your findings. This can help readers understand the limitations in terms of their impact on the overall validity of your research.
  • Provide suggestions for future research: Finally, provide suggestions for future research that can address the limitations of your study. This can help readers understand how your research fits into the broader field and can provide a roadmap for future studies.

Purpose of Limitations in Research

There are several purposes of limitations in research. Here are some of the most important ones:

  • To acknowledge the boundaries of the study: Limitations help to define the scope of the research project and set realistic expectations for the findings. They can help to clarify what the study is not intended to address.
  • To identify potential sources of bias: Limitations can help researchers identify potential sources of bias in their research design, data collection, or analysis. This can help to improve the validity and reliability of the findings.
  • To provide opportunities for future research: Limitations can highlight areas for future research and suggest avenues for further exploration. This can help to advance knowledge in a particular field.
  • To demonstrate transparency and accountability: By acknowledging the limitations of their research, researchers can demonstrate transparency and accountability to their readers, peers, and funders. This can help to build trust and credibility in the research community.
  • To encourage critical thinking: Limitations can encourage readers to critically evaluate the study’s findings and consider alternative explanations or interpretations. This can help to promote a more nuanced and sophisticated understanding of the topic under investigation.

When to Write Limitations in Research

Limitations should be included in research when they help to provide a more complete understanding of the study’s results and implications. A limitation is any factor that could potentially impact the accuracy, reliability, or generalizability of the study’s findings.

It is important to identify and discuss limitations in research because doing so helps to ensure that the results are interpreted appropriately and that any conclusions drawn are supported by the available evidence. Limitations can also suggest areas for future research, highlight potential biases or confounding factors that may have affected the results, and provide context for the study’s findings.

Generally, limitations are discussed in the discussion or conclusion section of a research paper or thesis, although they may also be mentioned in other sections, such as the introduction or methods. The specific limitations that are discussed will depend on the nature of the study, the research question being investigated, and the data that was collected.

Examples of limitations that might be discussed in research include sample size limitations, data collection methods, the validity and reliability of measures used, and potential biases or confounding factors that could have affected the results. It is important to note that limitations should not be used as a justification for poor research design or methodology, but rather as a way to enhance the understanding and interpretation of the study’s findings.

Importance of Limitations in Research

Here are some reasons why limitations are important in research:

  • Enhances the credibility of research: Limitations highlight the potential weaknesses and threats to validity, which helps readers to understand the scope and boundaries of the study. This improves the credibility of research by acknowledging its limitations and providing a clear picture of what can and cannot be concluded from the study.
  • Facilitates replication: By highlighting the limitations, researchers can provide detailed information about the study’s methodology, data collection, and analysis. This information helps other researchers to replicate the study and test the validity of the findings, which enhances the reliability of research.
  • Guides future research: Limitations provide insights into areas for future research by identifying gaps or areas that require further investigation. This can help researchers to design more comprehensive and effective studies that build on existing knowledge.
  • Provides a balanced view: Limitations help to provide a balanced view of the research by highlighting both strengths and weaknesses. This ensures that readers have a clear understanding of the study’s limitations and can make informed decisions about the generalizability and applicability of the findings.

Advantages of Limitations in Research

Here are some potential advantages of limitations in research:

  • Focus: Limitations can help researchers focus their study on a specific area or population, which can make the research more relevant and useful.
  • Realism: Limitations can make a study more realistic by reflecting the practical constraints and challenges of conducting research in the real world.
  • Innovation: Limitations can spur researchers to be more innovative and creative in their research design and methodology, as they search for ways to work around the limitations.
  • Rigor: Limitations can increase the rigor and credibility of a study, as researchers are forced to carefully consider potential sources of bias and error and address them to the best of their abilities.
  • Generalizability: Limitations can improve the generalizability of a study by ensuring that it is not overly focused on a specific sample or situation, and that the results can be applied more broadly.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


How to Review a Manuscript

  • First Online: 01 January 2020


Thomas W. Heinrich


The peer review process is the current standard for assessing a manuscript’s worthiness for publication in the scientific literature, and it is based on the idealism, professionalism, and collegiality of the peer reviewer. Reviewers serve a critical role in ensuring the dissemination and fidelity of knowledge throughout the medical profession. Peer reviewers provide fair, constructive, and knowledgeable feedback on a manuscript that improves the quality of the manuscript and aids journal editors in determining an appropriate disposition of the manuscript. Accepting an invitation to review demonstrates a willingness to contribute to the profession of medicine and the advancement of knowledge.

Keywords

  • Peer review
  • Publication process
  • Scientific conduct
  • Confidentiality
  • Conflicts of interest
  • Peer review systems



Author information

Authors and Affiliations

Department of Psychiatry and Behavioral Medicine, Medical College of Wisconsin, Milwaukee, WI, USA

Thomas W. Heinrich

Department of Family and Community Medicine, Medical College of Wisconsin, Milwaukee, WI, USA


Editor information

Editors and Affiliations

Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA, USA

Laura Weiss Roberts

Rights and permissions


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Heinrich, T.W. (2020). How to Review a Manuscript. In: Roberts, L. (eds) Roberts Academic Medicine Handbook. Springer, Cham. https://doi.org/10.1007/978-3-030-31957-1_32


DOI: https://doi.org/10.1007/978-3-030-31957-1_32

Published: 01 January 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-31956-4

Online ISBN: 978-3-030-31957-1

eBook Packages: Medicine, Medicine (R0)




Implementing research results in clinical practice – the experiences of healthcare professionals

Nanna Kristensen

Gentofte Hospital, Kildegårdsvej 28, 2900 Hellerup, Denmark

Camilla Nymann

Hanne Konradsen

In healthcare research, results diffuse only slowly into clinical practice, and there is a need to bridge the gap between research and practice. This study elucidates how healthcare professionals in a hospital setting experience working with the implementation of research results.

A descriptive design was chosen. During 2014, 12 interviews were carried out with healthcare professionals representing different roles in the implementation process, based on semi-structured interview guidelines. The analysis was guided by a directed content analysis approach.

The initial implementation was non-formalized. In the decision-making and management process, nurses and doctors were found to follow different patterns. While nurses' decisions tended to be problem-oriented and managed on a person-driven basis, doctors' decisions were consensus-oriented and managed autonomously. All, however, experienced a knowledge-based execution of the research results as the implementation process ended.

The results illuminate the challenges involved in closing the evidence-practice gap, and may add to the growing body of knowledge on which basis actions can be taken to ensure the best care and treatment available actually reaches the patient.

Electronic supplementary material

The online version of this article (doi:10.1186/s12913-016-1292-y) contains supplementary material, which is available to authorized users.

Healthcare research continually produces large amounts of results and revised methods of treatment and care for patients, which, if implemented in practice, can potentially save lives and improve the quality of life of patients [ 1 ]. Nonetheless, a rise in the amount of research results available does not automatically translate into improved patient care and treatment [ 2 , 3 ].

There is broad evidence that there is a substantial gap between the healthcare that patients receive and the practice that is recommended – also known as the research-practice gap, evidence-practice-gap or knowing-doing gap [ 4 – 6 ]. Evidence suggests that it sometimes takes more than a decade to implement research results in clinical practice, and that it is often difficult to sustain innovations over time [ 7 , 8 ]. This is critical, not only for patients, who thereby fail to receive the best treatment and care available, but also for healthcare organizations and society, who miss out on the potential financial value gains and returns on investment [ 9 , 10 ].

Implementation science is the main field of research dedicated to exploring methods of implementing research evidence into practice [ 11 , 12 ]. Many studies within this field explore methods to promote integration of research findings through policymaking and through larger, systematic and planned implementation initiatives such as the Consolidated Framework for Implementation Research (CFIR) [ 13 ]. Fewer studies examine whether and how research results find their way into practice in a less structured, less planned and less top-down manner, through local, emergent and personally driven mechanisms.

Within the field of translational science (the translation of new clinical knowledge into improved health), studies suggesting methods for bridging the gap between research and practice [ 14 ] mainly focus on exploring implementation methods capable of promoting the exchange, transfer, diffusion and dissemination of evidence-based knowledge to practitioners and decision-makers in healthcare systems [ 14 ]. Journal clubs are similarly a widespread dissemination method for clinicians to access evidence-based knowledge through presentations and discussions of research articles [ 15 ]. What these approaches have in common is a focus on how to convey evidence-based information to healthcare professionals, and thereby raise awareness of relevant improvements in treatment and care. A large body of research nonetheless suggests that it is difficult for professionals to utilize new, decontextualized, explicit knowledge in their daily work practice [ 16 – 18 ]. What directs the professional's actions in practice will often be the implicit and established know-how of routines – even when decisions on new methods and the commitment to put them into practice are otherwise present [ 19 – 21 ].

In line with these insights, translational methods have been shown to produce only small to moderate effects, and research suggests that the successful uptake of research results in the actions of healthcare professionals requires more than merely making the results accessible for local practice [ 3 , 22 , 23 ].

Recent research suggests that evidence-informed practice is mediated by an interplay between the individuals, the new knowledge and the actual context in which the evidence is to be operationalized and utilized in daily practice [ 24 , 25 ]. Organizational contextual factors such as culture and leadership, as well as social and attitudinal factors such as professional opinion, have been shown to have a great impact on implementation success [ 12 , 26 , 27 ]. In this perspective, new research results are not transferred on a 1:1 basis from academia to practice. Instead, the applicability of research results must be locally evaluated, and new results must eventually be made actionable and utilizable, and adapted to local practice, in order to produce the desired outcome over time [ 22 , 23 , 27 – 29 ].

Deepening our understanding of the factors which prohibit or promote this interplay in local practice and the operationalization and use of research results in daily clinical life is vital in order to bridge the continuing gap between healthcare research and practice [ 30 , 31 ].

The objective of this study is to elucidate how healthcare professionals in a hospital setting experienced working with the implementation of research results in practice, and which existing methods they utilized to incorporate research results into daily healthcare action.

A descriptive qualitative design was chosen, as the aim of the study was to elucidate the experiences of healthcare professionals. A directed content analysis approach guided the analysis [ 32 ].

Setting and participants

The participants were healthcare professionals working in two different medical wards in a medium-sized university hospital in Denmark. In order to capture viewpoints representing various different roles in the implementation process, the following professionals from each ward were invited to participate (Table  1 ).

Table 1. Participant characteristics (in two instances, a dual role was filled by the same person).

As there was an overlap between the positions in two instances, twelve interviews were carried out. The wards were selected on the basis of having several researchers employed, as well as their willingness to participate.

The participants were recruited through the heads of departments, who were asked to identify professionals eligible to participate. A calendar invitation was subsequently sent out inviting the professionals to participate, and all agreed.

Data collection

Data was collected in the spring of 2014 through 12 qualitative, semi-structured interviews. All of the interviews took place in the wards. The theoretical framework of Klein & Knight [ 33 ] served as the basis of the interview guide (see Additional file 1 ).

The theoretical framework consisted of six factors enhancing implementation:

  • a package of implementation policies and practices established by an organization;
  • the climate for innovation implementation in the team or organization, i.e. the employees' shared perceptions of the importance of innovation implementation within the team or organization;
  • managerial support for innovation implementation;
  • the availability of financial resources;
  • a learning orientation: a set of interrelated practices and beliefs that support and enable employee and organizational skill development, learning, and growth;
  • managerial patience, i.e. a long-term orientation.

The framework also consisted of six challenges to implementation:

  • many technological innovations are unreliable and imperfectly designed;
  • many innovations require would-be users to acquire new technical knowledge and skills;
  • the decision to adopt and implement an innovation is typically made by those higher up in the hierarchy than the innovation's targeted users;
  • many team-based and organizational innovations require individuals to change their roles, routines, and norms;
  • implementation is time-consuming, expensive, and, at least initially, a drag on performance;
  • organizational norms and routines foster maintenance of the status quo.

The opening question of the interviews was always open-ended, asking the participants to talk about their own experiences of working with research implementation in practice. Consequently, the participants contributed as much detailed information as they wished, and the researchers asked further questions as necessary. The interviews lasted on average 45 min, and were conducted by the first and second author of this article. One person acted as the main interviewer while the other observed the interview as a whole, ensuring follow-up in accordance with the interview guide. All interviews were recorded and transcribed.

Data coding and analysis

A directed and deductive content analysis approach [ 34 ] guided the analysis in order to bring theoretically-derived coding categories into connection with the empirical data.

Transcripts were entered into NVivo10 in order to structure the data. An unconstrained categorization matrix was developed on the basis of the twelve theoretical factors to guide the analysis, as described by Elo et al. [ 34 ]. Data was coded according to the categories in the matrix. During the coding process, new categories emerged, such as issues about professionals using their spare time for research and research implementation, and multidisciplinarity among doctors and nurses. The new categories were noted and treated as equally important additions to the initial categories. Once the coding process was complete, the number of categories was reduced by “collapsing those that are similar or dissimilar into broader higher-order categories” [ 34 ], while maintaining proximity to the text. This abstraction process was repeated with the higher-order categories, resulting in six main categories, as described in the results section of this article.
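The collapsing step can be pictured as a mapping from many initial codes onto a smaller set of higher-order categories. The Python sketch below shows the mechanics only; the code labels are invented for illustration and are not the study's actual codebook:

```python
from collections import Counter

# Invented initial codes mapped to the study's six higher-order categories.
higher_order = {
    "no written procedure": "non-formalized",
    "no defined roles": "non-formalized",
    "weekly consensus meetings": "consensus-oriented",
    "search triggered by a practice problem": "problem-oriented",
    "individual doctor decides": "autonomous",
    "specialist drives the change": "person-driven",
    "written instructions": "knowledge-based",
}

# Coded interview segments (again invented), collapsed and counted.
coded_segments = [
    "no written procedure", "weekly consensus meetings",
    "individual doctor decides", "no defined roles",
    "written instructions", "specialist drives the change",
]

counts = Counter(higher_order[code] for code in coded_segments)
for category, n in counts.most_common():
    print(f"{category}: {n} segment(s)")
```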

In order to enhance rigor and validity, interviews were initially coded by all authors individually, after which they met and discussed the categorization until consensus was obtained [ 35 ].
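The authors coded to consensus rather than reporting an agreement statistic, but initial inter-coder agreement in content analysis is commonly quantified with Cohen's kappa before the consensus discussion. The sketch below illustrates that convention only; it is not part of this study's method:

```python
def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders over the same segments."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    labels = set(coder_a) | set(coder_b)
    expected = sum((coder_a.count(label) / n) * (coder_b.count(label) / n)
                   for label in labels)
    return (observed - expected) / (1 - expected)

# Invented example: two coders label four transcript segments.
a = ["non-formalized", "autonomous", "knowledge-based", "autonomous"]
b = ["non-formalized", "person-driven", "knowledge-based", "autonomous"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.67 -- substantial agreement
```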

The manuscript adheres to the RATS Guidelines in order to enhance credibility.

Ethical considerations

The study was submitted to The Committees on Health Research Ethics for the Capital Region of Denmark (De Videnskabsetiske komiteer for Region Hovedstaden), who assessed that the study did not require formal approval by the committee.

As only two wards at the hospital served as the empirical basis of the study, the researchers paid special attention to issues of confidentiality and anonymity. Participants were therefore informed that their names would not be mentioned in the study, but were also asked to reflect on the fact that the limited number of participants might make it difficult to maintain total anonymity. With this information in mind, all participants gave their written, informed consent prior to participating.

In this study of the experience of healthcare professionals with existing ways of incorporating research results into healthcare action, six main categories were identified: non-formalized, consensus-oriented, problem-oriented, autonomous, person-driven and knowledge-based.

These main categories related in different ways to the varying implementation activities of initiating, deciding on, managing and executing change. These activities are associated with different stages in the process of implementation, and the main categories are therefore structured around these (Fig.  1 ).

Fig. 1. Activities and stages in the process of implementation of research results in clinical practice

The healthcare professionals experienced no formalized procedures or established workflows in relation to initiating the implementation of research results. One nurse explained how the work of initiating implementation was not integrated into the conclusion of a research project:

“When we have the results, we are prone to think ‘Well, that’s that’ and move on to a new project.” (Nurse)

In relation to describing, searching for, remaining updated on, and evaluating the relevance of new research knowledge within various areas, one doctor commented:

“…there is no system in it. It’s not as though we say, ‘you keep an eye on what’s going on in this field, and I’ll keep an eye on this’. It becomes somewhat unsystematic.” (Doctor)

No well-defined assignments, roles or responsibilities emerged in the experience of translating research results into practical implementation. One doctor described the uncertainty experienced due to the lack of a more systematic approach to research implementation, and stated that:

“…I think it would be nice to have some kind of system or safety net, so that you don’t have to worry about whether something may have slipped through.” (Doctor)

Heads of departments and heads of units were not active in initiating the implementation of new research in practice. Their participation was mostly limited to approving, allocating resources or applying for financial support when a healthcare professional wished to initiate the implementation of a research result.

Often, highly-motivated persons with specific research interests took the initiative into their own hands to suggest the implementation of certain new research results in practice. In the nurse group this was often a clinical nurse specialist, whereas in the doctor group any doctor might initiate a potential implementation. One senior physician with special research responsibility described this as being closely related to a high degree of motivation to take action:

“…people take things seriously. They don’t just sit around and wait for an instruction to arrive. I mean – they just do things.” (Physician with special research responsibility)

This informal practice of individuals independently initiating the implementation of research results was also seen in doctors putting in extra hours after their formal working hours, both to conduct their own research and to acquire the skills necessary to implement a certain new result in practice, such as a surgical technique or a new item of equipment. In this connection, both research and the implementation of research were to some extent driven by individual interests and motivation that went beyond formal obligations.

When deciding on and managing implementation, various patterns were described in relation to doctors and nurses. In the doctor group, the decision to implement a new result was described as a consensus process among the senior physicians; managing implementation, on the other hand, was experienced as being regulated by individual doctors autonomously.

New research results were discussed at weekly meetings between all doctors – either on the basis of the doctors’ own research, articles, or input from external conferences. The most specialized physicians within a clinical area selected and presented new research results to their colleagues. These presentations were followed by discussion in the group – sometimes debating the results, and at other times considering whether to implement the results in practice or not. One doctor said:

“If there is some kind of consensus that this sounds exciting and this is something that we would like to proceed with, then we can agree that it is something we will do.” (Senior physician)

In this way a consensus decision was arrived at, and the group would then define the principles for implementing new methods of patient treatment. As one doctor described it:

“We discuss it in the group and agree that to begin with, we will apply a strategy in which we only do this with some (patients).” (Physician with special research responsibility)

Once a consensus decision had been taken to implement a new result, the collective coordination ceased, and was replaced by a principle of the individual autonomy of each doctor to manage his or her own decisions in practice.

“We have a high degree of autonomy. We do not all do things the same way.” (Senior physician)

In this way, any doctor could refrain from implementing a new practice that had been decided on in the group, or act on something that had not been decided in the group, without the need to ask permission. Due to autonomy, no organized follow-up was conducted from within the ward to manage and monitor the implementation of research results. Some healthcare professionals said that if there was a follow-up on the implementation of a new research result, it was often in practice conducted by agents from the pharmaceutical industry who wished to establish the application of certain products.

The principle of autonomy was also visible in deciding whether to adopt new instructions and guidelines. As a head of ward stated:

“…we don’t have to follow these guidelines. There may be many other issues to consider. We might be satisfied with the treatment that we already have, and not find the new treatment much better. It might even be more expensive. So we don’t have to put it into practice.” (Medical Head of Department)

In the doctor group, therefore, the experience of deciding on and managing the implementation of research results in practice showed both an orientation towards achieving consensus decisions, and at the same time a principle of managing change autonomously in practice.

Fewer persons participated in decision-making and management in the nurse group. One or more clinical nurse specialists and a Nursing Head of Unit jointly planned a process to collect research results and design an intervention to change practice. Most of the time, the proposals came from clinical specialists, with the formal aim of remaining updated on research within the overall professional field. One clinical specialist said:

“…our practice was very individual and experimental, so I made a search for the existing evidence. And on that basis I implemented a new practice.” (Nurse with special research responsibility)

A problem in existing practice inspired clinical nurse specialists to revise that practice, and on that basis seek out existing research results. Clinical nurse specialists were seen as agents of change with responsibility to manage the implementation of a research result as a revised practice in cooperation with the Nursing Head of Unit. Clinical nurse specialists were referred to as ‘key persons’ and ‘resource persons’ who searched for evidence, designed the changes in practice, coordinated changes with other sections, produced instructions and maintained focus on the change at status meetings. The clinical nurse specialists also performed a supervisory role in relation to updating nurses’ knowledge, training their skills in practice, monitoring whether nurses carried out the new practice, and addressing those nurses who failed to adopt the new methods or experienced problems with them. One nurse with special research responsibility described how she worked to spread change in the nursing group by engaging particularly motivated staff members to advocate the new practice and act as change ambassadors in the daily routine:

“You have to have someone, either yourself, or someone else that you identify, to continually stir the pot. Saying ‘we are all responsible’ is the same as saying ‘no-one takes responsibility’.” (Nurse with special research responsibility)

At the same time, both clinical nurses and Nursing Heads of Unit described instances when the implementation of research results failed because nobody took action on the agreed plan. Despite the intention to implement the change, the clinical specialists described how they failed to actually turn decisions about changes into revised practice. One nurse explained:

“We can easily agree – but the motivation falls away as we walk out of the room, because other assignments accumulate for the resource persons that have been in the room, such as the need to cover a shift.” (Nurse)

Failure to allow time to follow up and anchor new decisions in practice was experienced as the result of competing agendas overruling the management of the decisions, as ‘there is simply no room for more’. Both the large flow of patients, the pressure to keep up efficiency figures and the large number of other, unrelated implementation processes going on in connection with quality improvement were seen as barriers to implementing the research-based changes.

“…the combination of the high level of pushing patients through the hospital – that is, the high demands towards production – and the great focus on efficiency conflict with the need to conduct that many development projects, each with their own requirements, and at the same time.” (Nurse)

The overriding focus on production was experienced as being closely related to management focus and behavior:

“Management – the senior management here – is very focused on the bottom line figures. There is definitely a lack of follow-up and a lack of people insisting ‘now we’re going to do this’.” (Nurse with special research responsibility)

As well as being the ones with the responsibility for managing changes, nurses with special research responsibility also saw themselves as being very much alone and having trouble making the changes on their own:

“I’m the only one who is trained to read research articles. And that’s not enough – there is not enough discussion to be able to drive a change. There is not enough resonance. I’m on my own.” (Nurse with special research responsibility)

Operationalizing research results into revised action in practice was mainly knowledge-based, in terms of generating information external to the individuals handling the knowledge [ 36 ]. Moving from the decision to executing the changes was mostly experienced as a procedure to create a new instruction or a supplement to an existing one.

“The procedures to be changed are written down in instructions on how to perform them.” (Medical Head of Department)

When reflecting on how this was done, one doctor stated:

“…a document, an instruction is created, and from there on everybody does it.” (Doctor)

In this respect there was a widespread belief that changes would emerge from written words/documents. Information-sharing about decisions took place between a few consultant doctors in the doctor group. As one doctor said:

“Actually, only a very small group needs to know where we are going. Because then you pass it on to others.” (Doctor)

The actual implementation took place randomly, as senior physicians ‘passed it on’ to junior doctors when approving professional decisions in relation to the treatment of patients. When reflecting on how the new information reaches the majority of other healthcare professionals, one Medical Head of Department stated:

“Well, you’re obliged to read the instructions when you get hired in this department. They are accessible on our local network. Or on the hospital network. You can read all of them there.” (Medical Head of Department)

Even as the wards relied on knowledge-based implementation through the mandatory reading of written documents, the sheer number of written procedures was experienced as something that hindered healthcare professionals from knowing how to carry out the practice. Several instructions and guidelines referred to the same practice, and reading all of them was experienced as too demanding in a busy schedule, resulting in a failure to read them at all.

Other types of knowledge-based implementation included exchanging and sharing information at meetings, and in newsletters and e-mails. Both doctors and nurses described teaching each other theoretically, sharing knowledge, and in some cases attending formal training, such as conferences or courses.

Nonetheless, these practices were seen as ineffective in implementing research results. As one nurse expressed it:

“Of course you can inform people, teach them – but it changes nothing.” (Nurse)

Applying job training and bedside learning in the implementation of research results was common in the nurse group. As one clinical nurse specialist explained her practice:

“I had a list of the people who had received the theoretical teaching, and what we did was that they accompanied me with a patient and observed me showing them ‘how to do it’, and then I observed them, so that I could supervise them.” (Nurse with special research responsibility)

On-the-job training was perceived as being a more efficient way of implementing research results, but at the same time much more demanding on resources:

“My experience is that face-to-face is the way to do it, to create the understanding, as well as ensuring their ability to do it afterwards. But it’s time-consuming.” (Nursing Head of Unit)

Doctors also referred to on-the-job training in relation to the implementation of new procedures. In such cases one or more of the doctors acquired a new skill and then taught it to colleagues who were interested, but no organized on-the-job training was described.

The objective of this study has been to elucidate how healthcare professionals in a hospital setting experienced working with research implementation in practice, and which existing methods of incorporating new research results into daily healthcare action they had experienced. This is not the first study to explore the experiences of healthcare professionals with the implementation of research results; however, most other studies examine how researchers and policymakers might work with clinicians, rather than how clinicians work on their own. This study suggests that clinicians do work intentionally on implementing research results by themselves. Knowing the mechanisms that regulate this intentional implementation effort is important in furthering our knowledge of how to ensure best-practice patient care and evidence-based healthcare systems.

We found the initiation of the implementation of research results to be largely non-formalized at the organizational level and not led by management. According to the literature, refraining from formalizing which research results are to be implemented, and how they are to be implemented, can both benefit and compromise the operationalization of research results in healthcare practice.

In a longitudinal case study of healthcare improvement, Booth et al. [ 37 ] argued that improvements in chronic illness care emerged as a result of co-evolution, non-linearity and self-organization in a network of agents, and not as a result of planned system change. In this view, the non-formal character of research implementation could be thought to provide a necessary space of possibilities for improvements to emerge. The non-formalized nature of implementation also provides plenty of room for individual initiative and opportunities to define which research results are to be implemented and how they are to be implemented. This participation and self-determination are argued to be vital to securing an affective commitment to the changes and an intrinsic motivation to change clinical behavior [ 38 , 39 ].

On the other hand, formal leadership is claimed to be an important ingredient in making changes happen in practice, particularly in terms of obtaining the same affective commitment to change on the part of organization members [ 40 ]. Stelter et al. [ 41 ] argue that supportive leadership behaviors – including articulating visions, clarifying prioritized goals, and handling inhibitors in the implementation process – are necessary to institutionalize evidence-based practices in an organization. According to Birken et al. [ 42 ], middle managers are crucial in the area of diffusing and synthesizing information, selling the innovation and mediating between strategy and day-to-day activities. Furthermore, the lack of formal procedures and overall professional-strategic steering risks a failure to prioritize the limited resources in areas where improvements in healthcare and patient treatment are most needed [ 43 , 44 ]. Systematic and prioritized knowledge implementation efforts need not, however, exclude the possibility of aspiring, well-motivated and self-determined healthcare professionals creating bottom-up changes. By promoting ‘enabling contexts’, managers can support emerging processes with visionary proposals and commitment, while professionals can generate innovation and professional development within strategically selected areas [ 44 ].

In relation to deciding on and managing change, nurses and doctors in the study followed different patterns.

The consensus approach, as a decision-making method, has also been studied by others [ 45 ]. Investigating the process of practice change in a clinical team, Mehta et al. (1998) found that in addition to evidence-based knowledge, group dynamics, the process of dialogue and personal experience also exerted a considerable influence on the consensus that was reached and the decisions that were made [ 44 ]. In this perspective, multiple, non-formal and non-rational factors can influence the decisions that are taken in clinical teams in relation to initiating and deciding to implement research results.

In the management of the changes in practice, the autonomy of individual doctors was key. Our results indicated that even though decisions were reached through consensus, they were not perceived as binding in clinical practice. Clinical autonomy is a phenomenon that is well described elsewhere in the literature [ 46 ]. A recent study showed that doctors preferred their individual, non-systematic professional assessments to a formalized strategic approach in which they applied the recommendations of a large, predetermined, evidence-based program when treating COPD patients across sectors [ 47 ].

Armstrong [ 48 ] has described this preference for autonomy as a defense of professional identity and the right to act independently without instructions from others. One interpretation of clinical autonomy is that the individual practitioner strives to preserve traditional privileges and monopolies of knowledge in the medical profession – e.g. by using the rhetoric of a patient-centered treatment that allows the professional to act autonomously and avoid the constraints of professional control and standardization.

Another interpretation of the phenomenon is that clinical autonomy is a necessary prerequisite for doctors to be able to make research-based decisions on the treatment of individual patients, since such decisions are not only based on clinical evidence, but also on factors such as patient preference and clinical expertise [ 49 , 50 ]. In this perspective, the lack of formalization and management of professional development, as our results also indicate, can be viewed as a valuable and desirable aspect of maintaining professional healthcare practice on the basis of the individual doctor’s expertise [ 51 ].

The implementation of research results in the nurse group was driven and managed by a single nurse or a small group of nurses, with a clinical nurse specialist in each ward as key to managing the changes. One could argue that these nurses play a role that is somewhat similar to the role of nurse facilitators as described by Dogherthy et al. [ 52 ]. According to Dogherthy, the nurse facilitator is an important element in advancing evidence-based nursing practice. The role is very broadly defined in the literature, ranging from specific task-driven actions to releasing the inherent potential of individuals [ 52 ]. However, one common denominator stressed as necessary for the nurse facilitator to succeed in implementing research results is that the facilitator is given the authority to act. Lack of authority has in several studies been identified as a barrier to the implementation of research results by nurses [ 52 – 54 ].

The background and competencies of nurses leading change processes on the basis of research results are argued to be important in determining whether their management of research implementation will succeed. Currey et al. [ 55 ] argue that nurses who facilitate evidence-based clinical practice must have a doctorate, be recognized clinical experts with educational expertise, and have advanced interpersonal, teamworking and communication skills. In line with this, research-active nurses have been found to be more likely to overcome barriers in the implementation process and to succeed in translating research into practice [ 56 ].

Having only a few research-active nurses in a ward thus seems to be a barrier for implementation in itself, as the majority of nurses in the wards abstained from using evidence-based knowledge, giving three typical reasons: lack of time, lack of interest and lack of qualifications [ 56 ].

All in all, considering the competencies and the available management space of key persons/clinical nurse specialists, leadership seems to be key to the implementation of research results in clinical practice.

The manner of executing the implementation of new research results in practice resonates with the tradition in translational science of focusing on conveying evidence-based information in the implementation of research results in practice [ 14 ].

We found widespread use of written clinical instructions, guidelines and newsletters when executing the implementation of research, as has also been found in other studies on implementation methods in healthcare [ 22 ].

It has been argued in several studies that issuing clinical guidelines, etc., serves to protect the collective professional autonomy from administrative pressures by clearly demonstrating a commitment to high standards of care, thereby justifying professional independence [ 57 , 58 ].

When considering the knowledge-based approach, studies have found that the mere dissemination of evidence-based clinical guidelines was ineffective in changing the behavior of healthcare professionals [ 2 , 59 , 60 ]. In a decision science study, Mohan et al. [ 61 ] demonstrated doctors' non-compliance with clinical practice guidelines in trauma triage: 80% of the doctors failed to refer trauma patients to trauma centers, even though the patients met the guideline conditions for transfer. The authors attribute the non-compliance to non-evidence-based attitudes on the part of the doctors, plus organizational norms and incentives that influenced the doctors' perceptions and decision-making procedures.

Creating behavioral compliance with the content of guidelines has been shown to require the continuous involvement of all staff in establishing new routines for using the guidelines [ 62 ]. Our results indicated no such involvement of general staff, as one or a few professionals often developed or discussed the guidelines independently. The widespread belief that creating new guidelines, and relying on mandatory reading of them, will in itself change the behavior of healthcare professionals may be a barrier to the effective implementation of new research results in practice.

With regard to managing implementation through e-mails, classroom teaching, etc., Rangachari et al. [ 3 ] have described these methods as ineffective in establishing new research-based practice, as they merely raise awareness of the change but fail to make it actionable.

Other researchers also point to the importance of making knowledge actionable [ 63 ]. The transformation of explicit knowledge and awareness into new skills may well depend on activities such as acting out and improvising, mentally and motorically, the new intentions and knowledge, thereby operationalizing and internalizing that knowledge and continually regulating actions to produce the desired outcome [ 16 , 64 ].

Opportunities to act upon new research results in practice were scarce in our data, and more common among the nurses than among the doctors. In both groups, however, there were reports of on-the-job training and bedside learning, which were perceived as more effective in changing the behavior of staff in the ward.

Limitations

The use of interviews to investigate the implementation processes may have shaped the data and the linear change model that emerged from them. When invited to articulate their experiences of research implementation, participants may have produced narratives with a beginning, a middle and an end, whereas more complex and circular processes may actually have taken place.

Investigating implementation practice through additional methods, such as observational studies and participation in meetings and daily clinical practice, may provide further insight into the nature of common implementation processes in healthcare systems.

Conclusions

The experience of research implementation illuminated a process that was unsystematically initiated, relied on few stakeholders, and often ended up on paper rather than in practice and in the actual treatment and care of individual patients. On the one hand, these results describe the challenges involved in closing the evidence-practice gap; on the other hand, they underscore the commitment of professionals with special research interests. The results of this study add to the growing body of knowledge on the basis of which action can be taken to ensure that the best available care and treatment actually reaches the patient.

Acknowledgements

We wish to thank all the respondents in this study and their management for allocating the time for interviews. We would also like to thank Jüri Johannes Rumessen and Sidsel Rasborg Wied for supporting the study and for their advice during the data analysis.

Additional file

Interview guide (translated from Danish). (PDF 70 kb)

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

The study was designed by NK, CN and HK. Data collection was performed by NK and CN. Data analysis was undertaken by NK, CN and HK. The draft manuscript was written by NK and CN under the supervision of HK. All of the authors have read, revised and approved the final manuscript.

Authors' information

All of the authors were employed at Gentofte University Hospital, Denmark.

Contributor Information

Nanna Kristensen, Email: nk@connector.dk.

Camilla Nymann, Email: [email protected] .

Hanne Konradsen, Email: [email protected] .
