Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney. Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

For example, a systematic review by Boyle and colleagues (discussed throughout this article) answered the question "What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?"

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Other interesting articles
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
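As a sketch of the idea, the simplest (fixed-effect) meta-analysis weights each study's effect size by the inverse of its variance, so more precise studies count for more. The Python below is a minimal illustration with made-up numbers, not a substitute for dedicated meta-analysis software:

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Combine study effect sizes with inverse-variance weights
    (fixed-effect model): each study is weighted by 1 / SE^2."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval for the pooled estimate
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Three hypothetical studies: mean differences with standard errors
pooled, se, ci = fixed_effect_pool([-0.30, -0.10, -0.20], [0.10, 0.15, 0.20])
```

The smaller a study's standard error, the larger its weight, which is why the pooled estimate here sits closest to the most precise study's result.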

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

Systematic reviews have many pros.

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The seven steps for conducting a systematic review are explained below with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)

In the eczema example, the PICOT components were:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo, or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Boyle and colleagues' research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s): Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
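To illustrate the database strategy above: a search string typically groups the synonyms for each concept with OR, then joins the concepts with AND. A minimal sketch (the terms and helper function are illustrative, not from any particular database's syntax):

```python
def build_query(concept_synonyms):
    """Join synonyms for each concept with OR, then join the
    concepts with AND, as database search interfaces expect."""
    groups = ["(" + " OR ".join(f'"{term}"' for term in terms) + ")"
              for terms in concept_synonyms]
    return " AND ".join(groups)

# Hypothetical synonym lists for the eczema/probiotics example
query = build_query([
    ["eczema", "atopic dermatitis"],
    ["probiotic", "probiotics", "Lactobacillus"],
])
# → ("eczema" OR "atopic dermatitis") AND ("probiotic" OR "probiotics" OR "Lactobacillus")
```

Real databases have their own field tags and truncation syntax, so the exact string usually needs adapting per database.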

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.
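One common way to quantify inter-rater reliability for screening decisions is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with hypothetical include/exclude decisions:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' include/exclude decisions."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal rates
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(lab) / n) * (rater_b.count(lab) / n) for lab in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for five candidate studies
a = ["include", "exclude", "include", "exclude", "include"]
b = ["include", "exclude", "exclude", "exclude", "include"]
kappa = cohens_kappa(a, b)
```

A kappa near 1 indicates strong agreement; values much below about 0.6 suggest the selection criteria need to be clarified before screening continues.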

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .
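The counts for a PRISMA flow diagram can be tallied directly from such a record. A minimal sketch, assuming a hypothetical screening log with one entry per candidate article:

```python
from collections import Counter

# Hypothetical screening log: one entry per record, noting the stage
# at which it was excluded, or "included" if it survived both phases
screening_log = [
    {"id": 1, "decision": "excluded_title_abstract", "reason": "not an RCT"},
    {"id": 2, "decision": "excluded_full_text", "reason": "wrong outcome"},
    {"id": 3, "decision": "included", "reason": ""},
    {"id": 4, "decision": "excluded_title_abstract", "reason": "wrong population"},
    {"id": 5, "decision": "included", "reason": ""},
]

counts = Counter(entry["decision"] for entry in screening_log)
flow = {
    "records_screened": len(screening_log),
    "excluded_on_title_abstract": counts["excluded_title_abstract"],
    "full_texts_assessed": len(screening_log) - counts["excluded_title_abstract"],
    "excluded_on_full_text": counts["excluded_full_text"],
    "studies_included": counts["included"],
}
```

Because every exclusion carries a reason, the same log also supplies the "records excluded, with reasons" boxes of the diagram.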

In the example review, after the first screening phase, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang then read through the articles to decide whether any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.
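In practice, an extraction form amounts to one structured record per included study. A minimal sketch (the field names and values are illustrative, not a standard form):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtractionForm:
    """One extraction record per included study."""
    study_id: str
    year: int
    design: str
    sample_size: int
    effect_size: Optional[float] = None  # filled in during extraction
    risk_of_bias: str = "unclear"        # e.g. "low", "high", "unclear"
    notes: str = ""

# Hypothetical study record, completed as extraction proceeds
record = ExtractionForm(study_id="Smith2010", year=2010,
                        design="RCT", sample_size=120)
record.effect_size = -0.25
record.risk_of_bias = "low"
```

Using one fixed structure for every study keeps the two independent extractors' records directly comparable, which makes disagreements easy to spot.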

In the example review, the authors also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Student’s t-distribution
  • Normal distribution
  • Null and alternative hypotheses
  • Chi-square tests
  • Confidence interval
  • Quartiles & quantiles
  • Cluster sampling
  • Stratified sampling
  • Data cleansing
  • Reproducibility vs. replicability
  • Peer review
  • Prospective cohort study

Research bias

  • Implicit bias
  • Cognitive bias
  • Placebo effect
  • Hawthorne effect
  • Hindsight bias
  • Affect heuristic
  • Social desirability bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved March 31, 2024, from https://www.scribbr.com/methodology/systematic-review/


Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • Systematic Review | Definition, Examples & Guide

Systematic Review | Definition, Examples & Guide

Published on 15 June 2022 by Shaun Turney . Revised on 17 October 2022.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesise all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

They answered the question ‘What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?’

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

What is a systematic review, systematic review vs meta-analysis, systematic review vs literature review, systematic review vs scoping review, when to conduct a systematic review, pros and cons of systematic reviews, step-by-step example of a systematic review, frequently asked questions about systematic reviews.

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce research bias . The methods are repeatable , and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesise the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesising all available evidence and evaluating the quality of the evidence. Synthesising means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.

Prevent plagiarism, run a free check.

Systematic reviews often quantitatively synthesise the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesise results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarise and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimise bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros .

  • They minimise research b ias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinised by others.
  • They’re thorough : they summarise all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fourth component, the type of study design . In this case, the acronym is PICOT .

  • Type of study design(s)
  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomised control trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective(s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesise the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Grey literature: Grey literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of grey literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of grey literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Grey literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarise what you did using a PRISMA flow diagram .

Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgement of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

They also collected data about possible sources of bias, such as how the study participants were randomised into the control and treatment groups.

Step 6: Synthesise the data

Synthesising the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesising the data:

  • Narrative ( qualitative ): Summarise the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarise and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analysed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract: A summary of the review
  • Introduction: Including the rationale and objectives
  • Methods: Including the selection criteria, search method, data extraction method, and synthesis method
  • Results: Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion: Including interpretation of the results and limitations of the review
  • Conclusion: The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist.

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews, and/or in a peer-reviewed journal.

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a dissertation, thesis, research paper, or proposal.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarise yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below.

Turney, S. (2022, October 17). Systematic Review | Definition, Examples & Guide. Scribbr. Retrieved 1 April 2024, from https://www.scribbr.co.uk/research-methods/systematic-reviews/


Shaun Turney




Principles and Practice of Clinical Trials, pp 2159–2177

Introduction to Systematic Reviews

  • Tianjing Li, Ian J. Saldanha & Karen A. Robinson
  • Reference work entry
  • First Online: 20 July 2022


A systematic review identifies and synthesizes all relevant studies that fit prespecified criteria to answer a research question. Systematic review methods can be used to answer many types of research questions. The type of question most relevant to trialists is the effects of treatments and is thus the focus of this chapter. We discuss the motivation for and importance of performing systematic reviews and their relevance to trialists. We introduce the key steps in completing a systematic review, including framing the question, searching for and selecting studies, collecting data, assessing risk of bias in included studies, conducting a qualitative synthesis and a quantitative synthesis (i.e., meta-analysis), grading the certainty of evidence, and writing the systematic review report. We also describe how to identify systematic reviews and how to assess their methodological rigor. We discuss the challenges and criticisms of systematic reviews, and how technology and innovations, combined with a closer partnership between trialists and systematic reviewers, can help identify effective and safe evidence-based practices more quickly.

  • Systematic review
  • Meta-analysis
  • Research synthesis
  • Evidence-based
  • Risk of bias



Author information

Authors and Affiliations

Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, CO, USA

Tianjing Li

Department of Health Services, Policy, and Practice and Department of Epidemiology, Brown University School of Public Health, Providence, RI, USA

Ian J. Saldanha

Department of Medicine, Johns Hopkins University, Baltimore, MD, USA

Karen A. Robinson


Corresponding author

Correspondence to Tianjing Li .

Editor information

Editors and Affiliations

Department of Surgery, Division of Surgical Oncology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA

Steven Piantadosi

Department of Epidemiology, School of Public Health, Johns Hopkins University, Baltimore, MD, USA

Curtis L. Meinert



Copyright information

© 2022 Springer Nature Switzerland AG

About this entry

Cite this entry

Li, T., Saldanha, I.J., Robinson, K.A. (2022). Introduction to Systematic Reviews. In: Piantadosi, S., Meinert, C.L. (eds) Principles and Practice of Clinical Trials. Springer, Cham. https://doi.org/10.1007/978-3-319-52636-2_194

Download citation

DOI : https://doi.org/10.1007/978-3-319-52636-2_194

Published : 20 July 2022

Publisher Name : Springer, Cham

Print ISBN : 978-3-319-52635-5

Online ISBN : 978-3-319-52636-2



Systematic Reviews

  • What is a Systematic Review?

A systematic review is an evidence synthesis that uses explicit, reproducible methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses appropriate statistical techniques to combine these valid studies.

Key Characteristics of a Systematic Review:

Generally, systematic reviews must have:

  • a clearly stated set of objectives with pre-defined eligibility criteria for studies
  • an explicit, reproducible methodology
  • a systematic search that attempts to identify all studies that would meet the eligibility criteria
  • an assessment of the validity of the findings of the included studies, for example through the assessment of the risk of bias
  • a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

A meta-analysis is a systematic review that uses quantitative methods to synthesize and summarize the pooled data from included studies.

Additional Information

  • How-to Books
  • Beyond Health Sciences


  • Cochrane Handbook For Systematic Reviews of Interventions Provides guidance to authors for the preparation of Cochrane Intervention reviews. Chapter 6 covers searching for reviews.
  • Systematic Reviews: CRD’s Guidance for Undertaking Reviews in Health Care From The University of York Centre for Reviews and Dissemination: Provides practical guidance for undertaking evidence synthesis based on a thorough understanding of systematic review methodology. It presents the core principles of systematic reviewing, and in complementary chapters, highlights issues that are specific to reviews of clinical tests, public health interventions, adverse effects, and economic evaluations.
  • Cornell, Systematic Reviews and Evidence Synthesis Beyond the Health Sciences. Video series geared for librarians but very informative about searching outside medicine.

  • Last Updated: Feb 29, 2024 3:16 PM
  • URL: https://guides.library.ucdavis.edu/systematic-reviews

1.2.2 What is a systematic review?

A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. It uses explicit, systematic methods that are selected with a view to minimizing bias, thus providing more reliable findings from which conclusions can be drawn and decisions made (Antman 1992, Oxman 1993). The key characteristics of a systematic review are:

  • a clearly stated set of objectives with pre-defined eligibility criteria for studies;
  • an explicit, reproducible methodology;
  • a systematic search that attempts to identify all studies that would meet the eligibility criteria;
  • an assessment of the validity of the findings of the included studies, for example through the assessment of risk of bias; and
  • a systematic presentation, and synthesis, of the characteristics and findings of the included studies.

Many systematic reviews contain meta-analyses. Meta-analysis is the use of statistical methods to summarize the results of independent studies (Glass 1976). By combining information from all relevant studies, meta-analyses can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review (see Chapter 9, Section 9.1.3). They also facilitate investigations of the consistency of evidence across studies, and the exploration of differences across studies.


Systematic Review


Introduction to Systematic Review


For additional tutorials, visit the SR Workshop Videos  from UNC at Chapel Hill outlining each stage of the systematic review process.

Know the difference! Systematic review vs. literature review


Types of literature reviews along with associated methodologies

JBI Manual for Evidence Synthesis. Find definitions and methodological guidance.

- Systematic Reviews - Chapters 1-7

- Mixed Methods Systematic Reviews - Chapter 8

- Diagnostic Test Accuracy Systematic Reviews - Chapter 9

- Umbrella Reviews - Chapter 10

- Scoping Reviews - Chapter 11

- Systematic Reviews of Measurement Properties - Chapter 12

Systematic reviews vs. scoping reviews

Grant, M. J., & Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information and Libraries Journal , 26 (2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Gough, D., Thomas, J., & Oliver, S. (2012). Clarifying differences between review designs and methods. Systematic Reviews, 1(28). https://doi.org/10.1186/2046-4053-1-28

Munn, Z., Peters, M., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18(1), 143. https://doi.org/10.1186/s12874-018-0611-x. Also, check out the Libguide from Weill Cornell Medicine for the differences between a systematic review and a scoping review and when to embark on either one of them.

Sutton, A., Clowes, M., Preston, L., & Booth, A. (2019). Meeting the review family: Exploring review types and associated information retrieval requirements . Health Information & Libraries Journal , 36 (3), 202–222. https://doi.org/10.1111/hir.12276

Temple University. Review Types . - This guide provides useful descriptions of some of the types of reviews listed in the above article.

UMD Health Sciences and Human Services Library.  Review Types . - Guide describing Literature Reviews, Scoping Reviews, and Rapid Reviews.

Whittemore, R., Chao, A., Jang, M., Minges, K. E., & Park, C. (2014). Methods for knowledge synthesis: An overview. Heart & Lung: The Journal of Acute and Critical Care, 43 (5), 453–461. https://doi.org/10.1016/j.hrtlng.2014.05.014

Differences between a systematic review and other types of reviews

  • Armstrong, R., Hall, B. J., Doyle, J., & Waters, E. (2011). ‘Scoping the scope’ of a Cochrane review. Journal of Public Health, 33(1), 147–150. https://doi.org/10.1093/pubmed/fdr015

Kowalczyk, N., & Truluck, C. (2013). Literature reviews and systematic reviews: What is the difference? Radiologic Technology , 85 (2), 219–222.

White, H., Albers, B., Gaarder, M., Kornør, H., Littell, J., Marshall, Z., Matthew, C., Pigott, T., Snilstveit, B., Waddington, H., & Welch, V. (2020). Guidance for producing a Campbell evidence and gap map . Campbell Systematic Reviews, 16 (4), e1125. https://doi.org/10.1002/cl2.1125. Check also this comparison between evidence and gaps maps and systematic reviews.

Rapid Reviews Tutorials

Rapid Review Guidebook  by the National Collaborating Centre of Methods and Tools (NCCMT)

Hamel, C., Michaud, A., Thuku, M., Skidmore, B., Stevens, A., Nussbaumer-Streit, B., & Garritty, C. (2021). Defining Rapid Reviews: a systematic scoping review and thematic analysis of definitions and defining characteristics of rapid reviews.  Journal of clinical epidemiology ,  129 , 74–85. https://doi.org/10.1016/j.jclinepi.2020.09.041

  • Müller, C., Lautenschläger, S., Meyer, G., & Stephan, A. (2017). Interventions to support people with dementia and their caregivers during the transition from home care to nursing home care: A systematic review . International Journal of Nursing Studies, 71 , 139–152. https://doi.org/10.1016/j.ijnurstu.2017.03.013
  • Bhui, K. S., Aslam, R. W., Palinski, A., McCabe, R., Johnson, M. R. D., Weich, S., … Szczepura, A. (2015). Interventions to improve therapeutic communications between Black and minority ethnic patients and professionals in psychiatric services: Systematic review . The British Journal of Psychiatry, 207 (2), 95–103. https://doi.org/10.1192/bjp.bp.114.158899
  • Rosen, L. J., Noach, M. B., Winickoff, J. P., & Hovell, M. F. (2012). Parental smoking cessation to protect young children: A systematic review and meta-analysis . Pediatrics, 129 (1), 141–152. https://doi.org/10.1542/peds.2010-3209

Scoping Review

  • Hyshka, E., Karekezi, K., Tan, B., Slater, L. G., Jahrig, J., & Wild, T. C. (2017). The role of consumer perspectives in estimating population need for substance use services: A scoping review. BMC Health Services Research, 17, 1–14. https://doi.org/10.1186/s12913-017-2153-z
  • Olson, K., Hewit, J., Slater, L.G., Chambers, T., Hicks, D., Farmer, A., & ... Kolb, B. (2016). Assessing cognitive function in adults during or following chemotherapy: A scoping review . Supportive Care In Cancer, 24 (7), 3223-3234. https://doi.org/10.1007/s00520-016-3215-1
  • Pham, M. T., Rajić, A., Greig, J. D., Sargeant, J. M., Papadopoulos, A., & McEwen, S. A. (2014). A scoping review of scoping reviews: Advancing the approach and enhancing the consistency . Research Synthesis Methods, 5 (4), 371–385. https://doi.org/10.1002/jrsm.1123
  • Scoping Review Tutorial from UNC at Chapel Hill

Qualitative Systematic Review/Meta-Synthesis

  • Lee, H., Tamminen, K. A., Clark, A. M., Slater, L., Spence, J. C., & Holt, N. L. (2015). A meta-study of qualitative research examining determinants of children's independent active free play . International Journal Of Behavioral Nutrition & Physical Activity, 12 (5), 121-12. https://doi.org/10.1186/s12966-015-0165-9

Videos on systematic reviews

Systematic Reviews: What are they? Are they right for my research? - 47 min. video recording with a closed caption option.

More training videos on systematic reviews are also available.


  • University of Toronto Libraries  - very detailed with good tips on the sensitivity and specificity of searches.
  • Monash University  - includes an interactive case study tutorial. 
  • Dalhousie University Libraries - a comprehensive How-To Guide on conducting a systematic review.

Guidelines for a systematic review as part of the dissertation

  • Guidelines for Systematic Reviews in the Context of Doctoral Education Background  by University of Victoria (PDF)
  • Can I conduct a Systematic Review as my Master’s dissertation or PhD thesis? Yes, It Depends!  by Farhad (blog)
  • What is a Systematic Review Dissertation Like? by the University of Edinburgh (50 min video) 

Further readings on experiences of PhD students and doctoral programs with systematic reviews

Puljak, L., & Sapunar, D. (2017). Acceptance of a systematic review as a thesis: Survey of biomedical doctoral programs in Europe . Systematic Reviews , 6 (1), 253. https://doi.org/10.1186/s13643-017-0653-x

Perry, A., & Hammond, N. (2002). Systematic reviews: The experiences of a PhD Student . Psychology Learning & Teaching , 2 (1), 32–35. https://doi.org/10.2304/plat.2002.2.1.32

Daigneault, P.-M., Jacob, S., & Ouimet, M. (2014). Using systematic review methods within a Ph.D. dissertation in political science: Challenges and lessons learned from practice . International Journal of Social Research Methodology , 17 (3), 267–283. https://doi.org/10.1080/13645579.2012.730704

UMD Doctor of Philosophy Degree Policies

Before you embark on a systematic review research project, check the UMD PhD Policies to make sure you are on the right path. Systematic reviews require a team of at least two reviewers and an information specialist or a librarian. Discuss with your advisor the authorship roles of the involved team members. Keep in mind that the  UMD Doctor of Philosophy Degree Policies (scroll down to the section, Inclusion of one's own previously published materials in a dissertation ) outline such cases, specifically the following: 

" It is recognized that a graduate student may co-author work with faculty members and colleagues that should be included in a dissertation . In such an event, a letter should be sent to the Dean of the Graduate School certifying that the student's examining committee has determined that the student made a substantial contribution to that work. This letter should also note that the inclusion of the work has the approval of the dissertation advisor and the program chair or Graduate Director. The letter should be included with the dissertation at the time of submission.  The format of such inclusions must conform to the standard dissertation format. A foreword to the dissertation, as approved by the Dissertation Committee, must state that the student made substantial contributions to the relevant aspects of the jointly authored work included in the dissertation."

  • Cochrane Handbook for Systematic Reviews of Interventions - See Part 2: General methods for Cochrane reviews
  • Systematic Searches - Yale library video tutorial series 
  • Using PubMed's Clinical Queries to Find Systematic Reviews  - From the U.S. National Library of Medicine
  • Systematic reviews and meta-analyses: A step-by-step guide - From the University of Edinburgh, Centre for Cognitive Ageing and Cognitive Epidemiology

Bioinformatics

  • Mariano, D. C., Leite, C., Santos, L. H., Rocha, R. E., & de Melo-Minardi, R. C. (2017). A guide to performing systematic literature reviews in bioinformatics .  arXiv preprint arXiv:1707.05813.

Environmental Sciences

Collaboration for Environmental Evidence. 2018.  Guidelines and Standards for Evidence synthesis in Environmental Management. Version 5.0 (AS Pullin, GK Frampton, B Livoreil & G Petrokofsky, Eds) www.environmentalevidence.org/information-for-authors .

Pullin, A. S., & Stewart, G. B. (2006). Guidelines for systematic review in conservation and environmental management. Conservation Biology, 20 (6), 1647–1656. https://doi.org/10.1111/j.1523-1739.2006.00485.x

Engineering Education

  • Borrego, M., Foster, M. J., & Froyd, J. E. (2014). Systematic literature reviews in engineering education and other developing interdisciplinary fields. Journal of Engineering Education, 103 (1), 45–76. https://doi.org/10.1002/jee.20038

Public Health

  • Hannes, K., & Claes, L. (2007). Learn to read and write systematic reviews: The Belgian Campbell Group. Research on Social Work Practice, 17(6), 748–753. https://doi.org/10.1177/1049731507303106
  • McLeroy, K. R., Northridge, M. E., Balcazar, H., Greenberg, M. R., & Landers, S. J. (2012). Reporting guidelines and the American Journal of Public Health’s adoption of preferred reporting items for systematic reviews and meta-analyses. American Journal of Public Health, 102(5), 780–784. https://doi.org/10.2105/AJPH.2011.300630
  • Pollock, A., & Berge, E. (2018). How to do a systematic review. International Journal of Stroke, 13(2), 138–156. https://doi.org/10.1177/1747493017743796
  • Institute of Medicine. (2011). Finding what works in health care: Standards for systematic reviews. https://doi.org/10.17226/13059
  • Wanden-Berghe, C., & Sanz-Valero, J. (2012). Systematic reviews in nutrition: Standardized methodology. The British Journal of Nutrition, 107(Suppl 2), S3–S7. https://doi.org/10.1017/S0007114512001432

Social Sciences

  • Bronson, D., & Davis, T. (2012). Finding and evaluating evidence: Systematic reviews and evidence-based practice (Pocket guides to social work research methods). Oxford: Oxford University Press.
  • Petticrew, M., & Roberts, H. (2006). Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Pub.

Software Engineering

  • Cornell University Library Guide - Systematic literature reviews in engineering: Example: Software Engineering
  • Biolchini, J., Mian, P. G., Natali, A. C. C., & Travassos, G. H. (2005). Systematic review in software engineering. System Engineering and Computer Science Department COPPE/UFRJ, Technical Report ES, 679(05), 45.
  • Biolchini, J. C., Mian, P. G., Natali, A. C. C., Conte, T. U., & Travassos, G. H. (2007). Scientific research ontology to support systematic review in software engineering. Advanced Engineering Informatics, 21(2), 133–151.
  • Kitchenham, B. (2007). Guidelines for performing systematic literature reviews in software engineering [Technical report]. Keele, UK: Keele University, 33(2004), 1–26.
  • Weidt, F., & Silva, R. (2016). Systematic literature review in computer science: A practical guide. Relatórios Técnicos do DCC/UFJF, 1.
  • Academic Phrasebank - Get some inspiration and find some terms and phrases for writing your research paper
  • Oxford English Dictionary  - Use to locate word variants and proper spelling
  • Last Updated: Mar 4, 2024 12:09 PM
  • URL: https://lib.guides.umd.edu/SR

Systematic Reviews and Meta Analysis


Systematic review Q & A

What is a systematic review?

A systematic review is a guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting, and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis, and presentation of the findings of the included studies. A systematic review may include a meta-analysis.

For details about carrying out systematic reviews, see the Guides and Standards section of this guide.

Is my research topic appropriate for systematic review methods?

A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single intervention or a small set of related interventions, exposures, or outcomes will simplify the assessment of studies and the synthesis of the findings.

Systematic reviews are poor tools for hypothesis generation: for instance, to determine what interventions have been used to increase the awareness and acceptability of a vaccine, or to investigate the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for and so would have to screen every article about awareness and acceptability. In the second, there is no agreed-upon set of methods that makes up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague, and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.

If not a systematic review, then what?

You might consider performing a scoping review. This framework allows iterative searching over a reduced number of data sources and imposes no requirement to assess individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't reduce the number of records you'll need to screen (broad questions lead to large results sets) but may give you a means of dealing with a large set of results.

This tool can help you decide what kind of review is right for your question.

Can my student complete a systematic review during her summer project?

Probably not. Systematic reviews are a lot of work. Between creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work spanning several months. Moreover, a systematic review requires subject expertise, statistical support, and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. In addition, all guidelines for carrying out systematic reviews recommend that at least two subject experts screen the studies identified in the search, and the first round of screening can consume 1 hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.
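To put that screening estimate in concrete terms, here is a minimal sketch. The 100-200 records per screener-hour rate comes from the paragraph above (150 is our mid-range assumption), and the 3,000-record search is a hypothetical example, not a figure from any guide:

```python
def screening_hours(n_records, records_per_hour=150, screeners=2):
    """Person-hours for first-round title/abstract screening.

    Every record is screened independently by each screener, since
    guidelines recommend at least two independent screeners.
    """
    return screeners * n_records / records_per_hour

# A hypothetical search returning 3,000 records, dual-screened
# at a mid-range rate of 150 records per screener-hour:
print(screening_hours(3000))  # 40.0
```

Note that this covers only the first screening round; protocol development, full-text review, extraction, and analysis add substantially more time.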

How can I know if my topic has been reviewed already?

Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:

"neoadjuvant chemotherapy" AND systematic[sb]

The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:

"neoadjuvant chemotherapy" AND systematic[ti]

Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
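If you want to script this check rather than use the PubMed website, queries like the ones above can be sent to NCBI's public E-utilities esearch endpoint. The sketch below only builds the request URL and makes no network call; the helper name is our own, not part of any NCBI library:

```python
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_search_url(term, retmax=20):
    """Build an E-utilities esearch URL for a PubMed query string.

    Fetching the returned URL (e.g. with urllib.request) yields JSON
    listing matching PMIDs and the total hit count.
    """
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS_ESEARCH}?{urlencode(params)}"

# The title-word query from the example above:
url = pubmed_search_url('"neoadjuvant chemotherapy" AND systematic[ti]')
```

The PubMed field tags ([sb], [ti]) pass through unchanged; urlencode handles the quoting and brackets.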

You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO , a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library .

  • Last Updated: Feb 26, 2024 3:17 PM
  • URL: https://guides.library.harvard.edu/meta-analysis

What is a Systematic Review?


What is a systematic review?

A systematic review is a firmly structured literature review, undertaken according to a fixed plan, system or method. As such, it is highly focused on a particular and explicit topic area with strict research parameters. Systematic reviews will often have a detailed plan known as a protocol, which is a statement of the approach and methods to be used in the review prior to undertaking it. 

Systematic review methodology is explicit and precise because it aims to minimise bias, thereby enhancing the reliability of any conclusions. It is therefore considered an evidence-based approach. Systematic reviews are commonly used by health professionals, but also policy makers and researchers. 

There is information about the difference between a systematic review and a literature review on this page. Even if you are undertaking a systematic approach to a literature review rather than a full systematic review, you might find certain aspects of this guide useful.

LITERATURE REVIEW VS SYSTEMATIC REVIEW

You can find further information on literature reviews on our  literature reviews page .

How we can help

What we need you to do:

  • Have a firm idea of your research question or area 
  • List your main keywords and alternatives. You may want to use a table to organise your keywords. 
  • Think about how you will use your keywords to search using connectors such as AND/OR 
  • Define what you want to include and exclude from your search 
  • Consider where you want to search 
  • Run some initial searches and identify any problems or issues you want to discuss 
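The keyword-combination steps above (synonyms joined with OR within a concept, concepts joined with AND) can be sketched in a few lines. The function and the example keyword table are illustrative, not part of any particular database's syntax beyond standard Boolean operators:

```python
def build_query(concepts):
    """Combine search terms: OR within a concept, AND across concepts.

    Multi-word terms are quoted so databases treat them as phrases.
    """
    groups = []
    for terms in concepts.values():
        quoted = [f'"{t}"' if " " in t else t for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

# Hypothetical keyword table for an eczema/probiotics question:
query = build_query({
    "population": ["eczema", "atopic dermatitis"],
    "intervention": ["probiotic", "probiotics"],
})
print(query)  # (eczema OR "atopic dermatitis") AND (probiotic OR probiotics)
```

Your librarian can then help translate a string like this into each database's own field tags and subject headings.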

What your Librarian can help you with:  

  • Identifying relevant databases and other subject resources that could be used to supplement your review 
  • Demonstrating library resources for use in the review  
  • Replicating searches on other databases and resources 
  • Reviewing your search strategy/approach 
  • Directing you to referencing software support 
  • Suggesting ways to save and document your search results 
  • Helping to locate difficult to find material, using the  Request It! service

What is a Systematic Review?

  • 1. Assemble Your Team
  • 2. Develop a Research Question
  • 3. Write and Register a Protocol
  • 4. Search the Evidence
  • 5. Screen Results
  • 6. Assess for Quality and Bias
  • 7. Extract the Data
  • 8. Write the Review

A systematic review attempts to collate all empirical evidence that fits pre-specified eligibility criteria in order to answer a specific research question. The key characteristics of a systematic review are:

  • a clearly defined question with inclusion and exclusion criteria;
  • a rigorous and systematic search of the literature;
  • two phases of screening (blinded, at least two independent screeners);
  • data extraction and management;
  • analysis and interpretation of results;
  • risk of bias assessment of included studies;
  • and report for publication.

Medical Center Library & Archives Presentations

The following presentation is a recording of the Getting Started with Systematic Reviews workshop (4/2022), offered by the Duke Medical Center Library & Archives. A NetID/pw is required to access the tutorial via Warpwire. 

  • Last Updated: Mar 20, 2024 2:21 PM
  • URL: https://guides.mclibrary.duke.edu/sysreview

What is a systematic review? (Volume 14, Issue 3)

  • Jane Clarke
  • Correspondence to Jane Clarke 4 Prime Road, Grey Lynn, Auckland, New Zealand; janeclarkehome{at}gmail.com

https://doi.org/10.1136/ebn.2011.0049


A high-quality systematic review is described as the most reliable source of evidence to guide clinical practice. The purpose of a systematic review is to deliver a meticulous summary of all the available primary research in response to a research question. Because a systematic review draws together all the existing research, it is sometimes called ‘secondary research’ (research on research). Systematic reviews are often required by research funders to establish the state of existing knowledge and are frequently used in guideline development. Systematic review findings are most often used within the healthcare setting but may be applied elsewhere. For example, the Campbell Collaboration advocates the application of systematic reviews for policy-making in education, justice and social work.

Systematic reviews can be conducted on all types of primary research. Many are reviews of randomised trials (addressing questions of effectiveness), cross-sectional studies (addressing questions about prevalence or diagnostic accuracy, for example) or cohort studies (addressing questions about prognosis). When qualitative research is reviewed systematically, it may be described as a systematic review, but more often other terms such as meta-synthesis are used.

Systematic review methodology is explicit and precise and aims to minimise bias, thus enhancing the reliability of the conclusions drawn. 1 , 2 The features of a systematic review include:

■ clear aims with predetermined eligibility and relevance criteria for studies;

■ transparent, reproducible methods;

■ rigorous search designed to locate all eligible studies;

■ an assessment of the validity of the findings of the included studies and

■ a systematic presentation, and synthesis, of the included studies. 3

The first step in a systematic review is a meticulous search of all sources of evidence for relevant studies. The databases and citation indexes searched are listed in the methodology section of the review. Next, titles and abstracts are screened for eligibility and relevance using predetermined, reproducible criteria. Each study is then assessed in terms of methodological quality.

Finally, the evidence is synthesised. This process may or may not include a meta-analysis. A meta-analysis is a statistical summary of the findings of independent studies. 4 Meta-analyses can potentially present more precise estimates of the effects of interventions than those derived from the individual studies alone. All of these strategies are used to limit bias and random error. Without such safeguards, reviews can mislead, leaving us with an unreliable summary of the available knowledge.
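As a minimal illustration of why pooling yields more precise estimates, here is a standard inverse-variance fixed-effect calculation. This is a generic textbook method, not one prescribed by the article, and the three studies are hypothetical:

```python
import math

def fixed_effect_pool(estimates, variances):
    """Inverse-variance (fixed-effect) pooling of independent estimates.

    Each study is weighted by 1/variance; the pooled variance is the
    reciprocal of the total weight, so the pooled standard error is
    smaller than that of any single study. That is the precision gain
    described above.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Three hypothetical studies reporting the same effect with different precision:
effect, se = fixed_effect_pool([0.4, 0.6, 0.5], [0.04, 0.09, 0.02])
```

Real meta-analyses must also consider between-study heterogeneity (e.g. random-effects models), which this sketch ignores.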

The Cochrane Collaboration is a leader in the production of systematic reviews. Cochrane reviews are published on a monthly basis in the Cochrane Database of Systematic Reviews in The Cochrane Library (see: http://www.thecochranelibrary.com ).


Competing interests: None.


Study Design 101: Systematic Review


A document often written by a panel that provides a comprehensive review of all relevant studies on a particular clinical or health-related topic/question. The systematic review is created after reviewing and combining all the information from both published and unpublished studies (focusing on clinical trials of similar treatments) and then summarizing the findings.

Advantages

  • Exhaustive review of the current literature and other sources (unpublished studies, ongoing research)
  • Less costly to review prior studies than to create a new study
  • Less time required than conducting a new study
  • Results can be generalized and extrapolated into the general population more broadly than individual studies
  • More reliable and accurate than individual studies
  • Considered an evidence-based resource

Disadvantages

  • Very time-consuming
  • May not be easy to combine studies

Design pitfalls to look out for

Studies included in systematic reviews may be of varying study designs, but should collectively be studying the same outcome.

Is each study included in the review studying the same variables?

Some reviews may group and analyze studies by variables such as age and gender; factors that were not allocated to participants.

Do the analyses in the systematic review fit the variables being studied in the original studies?

Fictitious Example

Does the regular wearing of ultraviolet-blocking sunscreen prevent melanoma? An exhaustive literature search was conducted, resulting in 54 studies on sunscreen and melanoma. Each study was then evaluated to determine whether the study focused specifically on ultraviolet-blocking sunscreen and melanoma prevention; 30 of the 54 studies were retained. The thirty studies were reviewed and showed a strong positive relationship between daily wearing of sunscreen and a reduced diagnosis of melanoma.

Real-life Examples

Yang, J., Chen, J., Yang, M., Yu, S., Ying, L., Liu, G., ... Liang, F. (2018). Acupuncture for hypertension. The Cochrane Database of Systematic Reviews, 11 (11), CD008821. https://doi.org/10.1002/14651858.CD008821.pub2

This systematic review analyzed twenty-two randomized controlled trials to determine whether acupuncture is a safe and effective way to lower blood pressure in adults with primary hypertension. Due to the low quality of evidence in these studies and lack of blinding, it is not possible to link any short-term decrease in blood pressure to the use of acupuncture. Additional research is needed to determine if there is an effect due to acupuncture that lasts at least seven days.

Parker, H. W., & Vadiveloo, M. K. (2019). Diet quality of vegetarian diets compared with nonvegetarian diets: A systematic review. Nutrition Reviews. https://doi.org/10.1093/nutrit/nuy067

This systematic review was interested in comparing the diet quality of vegetarian and non-vegetarian diets. Twelve studies were included. Vegetarians more closely met recommendations for total fruit, whole grains, seafood and plant protein, and sodium intake. In nine of the twelve studies, vegetarians had higher overall diet quality compared to non-vegetarians. These findings may explain better health outcomes in vegetarians, but additional research is needed to remove any possible confounding variables.

Related Terms

Cochrane Database of Systematic Reviews

A highly-regarded database of systematic reviews prepared by The Cochrane Collaboration , an international group of individuals and institutions who review and analyze the published literature.

Exclusion Criteria

The set of conditions that characterize some individuals and result in their being excluded from the study (e.g., other health conditions, taking specific medications). Since systematic reviews seek to include all relevant studies, exclusion criteria are not generally utilized in this situation.

Inclusion Criteria

The set of conditions that studies must meet to be included in the review (or for individual studies - the set of conditions that participants must meet to be included in the study; often comprises age, gender, disease type and status, etc.).

Now test yourself!

1. Systematic Reviews are similar to Meta-Analyses, except they do not include a statistical analysis quantitatively combining all the studies.

a) True b) False

2. The panels writing Systematic Reviews may include which of the following publication types in their review?

a) Published studies b) Unpublished studies c) Cohort studies d) Randomized Controlled Trials e) All of the above


  • Last Updated: Sep 25, 2023 10:59 AM
  • URL: https://guides.himmelfarb.gwu.edu/studydesign101

  • Open access
  • Published: 01 April 2024

Strategies to implement evidence-informed decision making at the organizational level: a rapid systematic review

  • Emily C. Clark 1 ,
  • Trish Burnett 1 ,
  • Rebecca Blair 1 ,
  • Robyn L. Traynor 1 ,
  • Leah Hagerman 1 &
  • Maureen Dobbins 1 , 2  

BMC Health Services Research, volume 24, article number 405 (2024)


Achievement of evidence-informed decision making (EIDM) requires the integration of evidence into all practice decisions by identifying and synthesizing evidence, then developing and executing plans to implement and evaluate changes to practice. This rapid systematic review synthesizes evidence for strategies for the implementation of EIDM across organizations, mapping facilitators and barriers to the COM-B (capability, opportunity, motivation, behaviour) model for behaviour change. The review was conducted to support leadership at organizations delivering public health services (health promotion, communicable disease prevention) to drive change toward evidence-informed public health.

A systematic search was conducted in multiple databases and by reviewing publications of key authors. Articles that describe interventions to drive EIDM within teams, departments, or organizations were eligible for inclusion. For each included article, quality was assessed, and details of the intervention, setting, outcomes, facilitators and barriers were extracted. A convergent integrated approach was undertaken to analyze both quantitative and qualitative findings.

Thirty-seven articles are included. Studies were conducted in primary care, public health, social services, and occupational health settings. Strategies to implement EIDM included the establishment of Knowledge Broker-type roles, building the EIDM capacity of staff, and research or academic partnerships. Facilitators and barriers align with the COM-B model for behaviour change. Facilitators for capability include the development of staff knowledge and skill, establishing specialized roles, and knowledge sharing across the organization, though staff turnover and subsequent knowledge loss was a barrier to capability. For opportunity, facilitators include the development of processes or mechanisms to support new practices, forums for learning and skill development, and protected time, and barriers include competing priorities. Facilitators identified for motivation include supportive organizational culture, expectations for new practices to occur, recognition and positive reinforcement, and strong leadership support. Barriers include negative attitudes toward new practices, and lack of understanding and support from management.

This review provides a comprehensive analysis of facilitators and barriers for the implementation of EIDM in organizations for public health, mapped to the COM-B model for behaviour change. The existing literature for strategies to support EIDM in public health illustrates several facilitators and barriers linked to realizing EIDM. Knowledge of these factors will help senior leadership develop and implement EIDM strategies tailored to their organization, leading to increased likelihood of implementation success.

Review registration

PROSPERO CRD42022318994.


Decisions and programs that affect public and population health are expected to be informed by the best available evidence from research, local context, and political will [ 1 , 2 , 3 ]. To achieve evidence-informed public health, it is important that public health organizations engage in and support evidence-informed decision making (EIDM). For this review, “public health organizations” refers to organizations that implement public health programs, including health promotion, injury and disease prevention, population health monitoring, emergency preparedness and response, and other critical functions [ 4 ]. EIDM, at an organizational level, involves the integration of evidence into all practice decisions by identifying and synthesizing evidence, then developing and executing plans to implement and evaluate changes to practice [ 2 , 5 , 6 ]. EIDM considers research evidence along with other factors such as context, resources, experience, and patient/community input to influence decision making and program implementation [ 2 , 3 , 7 , 8 ]. When implemented, EIDM results in efficient use of scarce resources, encourages stakeholder involvement resulting in more effective programs and decisions, improves transparency and accountability of organizations, improves health outcomes, and reduces harm [ 3 , 7 , 8 ]. Therefore, it is important that EIDM is integrated into organizations serving public health.

Driving organizational change for EIDM is challenging due to the need for multifaceted interventions [ 9 ]. While there are systematic reviews of the implementation of specific evidence-informed initiatives, reviews of implementation of organization-wide EIDM are lacking. For example, Mathieson et al. and Li et al. examined the barriers and facilitators to the implementation of evidence-informed interventions in community nursing and Paci et al. examined barriers in physiotherapy [ 10 , 11 , 12 ]. Li et al. found that implementation of evidence-informed practices is associated with an organizational culture for EIDM where staff at all levels value and contribute to EIDM [ 12 ]. Similarly, Mathieson et al. and Paci et al. found that organizational context plays an important role in evidence-informed practice implementation along with organizational support and resources [ 10 , 11 ]. While these reviews identify organizational context, culture and support as crucial for the implementation of a particular evidence-informed practice, they do not sufficiently identify and describe what an organization must do, and how it evolves, to be consistently evidence-informed across all the decisions, programs, and services it delivers.

Primary studies have explored how building capacity for staff to find, interpret and synthesize evidence to develop practice and program recommendations may contribute to EIDM [ 13 , 14 , 15 , 16 ]. In 2019, Saunders et al. completed an overview of systematic reviews on primary health care professionals’ EIDM competencies and found that implementation of EIDM across studies was low [ 9 ]. Participants reported insufficient knowledge and skills to implement EIDM in daily practice despite positive EIDM beliefs and attitudes [ 9 ]. In 2014, Sadeghi-Bazargani et al. and in 2018, Barzkar et al. also explored the implementation of EIDM and found similar results, listing inadequate skills and lack of knowledge amongst the most common barriers to EIDM [ 17 , 18 ].

An underlying current in research for organizational EIDM is a focus on organizational change [ 13 , 14 , 19 , 20 ]. To achieve EIDM across an organization, significant organizational change is usually necessary, resulting in substantial impact on the entire organization, as well as for individuals working there. However, while there are reviews of individual capacity for EIDM, there is minimal synthesized evidence describing EIDM capacity at the organizational level. This review seeks to address this research gap by identifying, appraising, and synthesizing research evidence from studies seeking to understand the process of embedding EIDM across an organization, with a focus on public health organizations.

The COM-B model for behaviour change was used as a guide for contextualizing the findings across studies. By integrating causal components of behaviour change, the COM-B model supports the development of interventions that can sustain behaviour change in the long term. While there are numerous models available to support implementation and organizational change, the COM-B model was chosen, in part, for its simple visual representation of concepts, as well as its contributions to the sustainability of behaviours [ 21 ]. The model is designed to guide organizational change initiatives by distilling the complex systems that influence behaviour into simpler, visual representations. Specifically, the model looks at capability (C), opportunity (O) and motivation (M) as three key influencers of behaviour (B). The capability component of the COM-B model reflects whether the intended audience possesses the knowledge and skills for a new behaviour. Opportunity reflects whether there is opportunity for the new behaviour to occur, while motivation reflects whether there is sufficient motivation for it to occur. All three components interact to create behaviour, and behaviours can, in turn, alter capability, motivation and opportunity [ 21 ]. Selection of the COM-B model was also driven by the authors' extensive experience supporting public health organizations in implementing EIDM, during which they observed enablers for EIDM that align well with the COM-B model, such as team-wide capacity-building for EIDM, integration of EIDM into processes, and support from senior leadership [ 20 , 22 , 23 ]. The COM-B model has been used to map findings from systematic reviews examining the barriers and facilitators of various health interventions, including nicotine replacement, chlamydia testing, and lifestyle management of polycystic ovary syndrome [ 24 , 25 , 26 ]. This review has a broader focus and maps barriers and facilitators for organization-wide EIDM to the COM-B model.

Overall, EIDM is expected to be foundational to public health organizations' efforts to achieve optimal population health. However, the capacity of public health organizations to realize EIDM varies considerably from organization to organization [ 14 , 22 , 27 , 28 , 29 ]. This rapid review aims to examine the implementation of EIDM at the organizational level to inform change efforts at Canadian public health organizations. The findings can also be applied more broadly and will support public health organizations beyond Canada in implementing change efforts to practice in an evidence-informed way.

Study design

The review protocol was registered with the International Prospective Register of Systematic Reviews (PROSPERO; Registration CRD42022318994). The review was conducted and reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement for reporting systematic reviews and meta-analyses [ 30 ]. A rapid review approach was used, since the review was requested to be completed by the National Collaborating Centre for Methods and Tools’ Rapid Evidence Service within a specific timeline, in order to inform an organizational change initiative at a provincial public health organization in Canada [ 31 ]. Given the nature of the research question, a mixed methods rapid systematic review approach was taken, with guidance from the Joanna Briggs Institute (JBI) Manual for Evidence Synthesis [ 32 ].

Information sources and search strategy

The search was conducted on March 18, 2022. The following databases were searched from 2012 onward: Medline, Embase, Emcare, Global Health Database, PsycINFO, Web of Science. Each database was searched using combinations and variations of the terms “implement*”, “knowledge broker*”, “transform*”, “organizational culture”, “change management”, “evidence-based”, “knowledge translation”, and “knowledge mobilization”. Additionally, publications by key contributors to the field were reviewed. The full search strategy is included in Appendix 1 .
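As an illustration only, the listed terms could be combined into a boolean query along the following lines. The grouping of terms into two concepts here is an assumption for the sake of the sketch; the exact database syntax used is the one given in Appendix 1.

```python
# Illustrative sketch of combining the listed search terms into a boolean query.
# The split into "implementation" and "evidence" concept groups is an assumption,
# not the review's actual strategy (see Appendix 1 for the real syntax).
concepts = {
    "implementation": ['implement*', '"knowledge broker*"', 'transform*',
                       '"change management"'],
    "evidence": ['"organizational culture"', '"evidence-based"',
                 '"knowledge translation"', '"knowledge mobilization"'],
}

def build_query(concepts):
    # OR together term variations within a concept; AND across concepts.
    groups = ["(" + " OR ".join(terms) + ")" for terms in concepts.values()]
    return " AND ".join(groups)

query = build_query(concepts)
print(query)
```

This OR-within, AND-across structure is the conventional way to require that every retrieved record matches at least one term from each concept.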

Studies were screened using DistillerSR software. Titles and abstracts of retrieved studies were screened by a single reviewer. Full texts of included studies were screened by a second reviewer and reviewed by a third. Screening was not completed in duplicate, consistent with a rapid review protocol [ 31 ]. To minimize the risk of bias, a subset of 100 retrieved articles was screened in duplicate at the title and abstract stage to ensure consistency across reviewers. Of this subset, there were four articles with conflicting decisions, which were discussed amongst screeners to clarify the inclusion criteria.

Eligibility criteria

English-language, published primary studies with experimental or observational designs were eligible for inclusion. Review papers, such as literature and systematic reviews, were excluded to ensure that details regarding implementation of initiatives were captured without re-interpretation or generalization by review authors. Grey literature was not included. Eligibility criteria are outlined below in terms of a PICO (Population, Intervention, Comparison, Outcome) structure [ 33 ].

Studies conducted with public sector health-related service-delivery organizations were eligible for inclusion. This included public health departments and authorities, health care settings and social services. Studies focused on departments or teams within an organization, or on entire organizations, were also eligible for inclusion. Studies conducted in private sectors or academic institutions were excluded to narrow the focus of the review.

Intervention

Interventions designed and implemented to shift teams, departments, or organizations to EIDM in all decisions were eligible for inclusion. These could include initiatives where organizations establish roles or teams to drive organizational change for EIDM, or efforts to build staff knowledge and skills for EIDM and support their application. These are distinct from implementation strategies for specific evidence-informed interventions. Eligible interventions were applied to a team, department, or organization to drive change toward evidence use in decision making at all levels of the organization.

Studies that included any comparator or no comparator were included, recognizing that literature was likely to include case reports.

Outcomes measured either quantitatively or qualitatively were considered. These included behaviour change, confidence and skills, patient-level data such as quality indicators, evidence of EIDM embedded in organizational and decision-making processes, changes in organizational culture, and changes to budget allocation. Studies that reported primarily on implementation fidelity were excluded, since studies of implementation fidelity focus on whether an intervention is delivered as intended, rather than drivers for organizational change.

Studies conducted in the 38 member countries of the Organization for Economic Co-operation and Development (OECD) were included in this review to best align with the Canadian context and to inform organizational change efforts in public health within Canada [ 34 ].

Quality assessment

The methodological rigour of included studies was evaluated using the JBI suite of critical appraisal tools [ 35 ]. Ratings of low, moderate, or high quality were assigned based on the critical appraisal results. Quality assessment was completed by one reviewer and verified by a second. Conflicts were resolved through discussion or by consulting a third reviewer.

Data extraction

Data extraction was completed by a single reviewer and reviewed by a second. Data on the study design, setting, sector (e.g., public health, primary care, etc.), participants, intervention (e.g., description of learning initiatives, implementation strategies, etc.), outcome measures, and findings were extracted. To minimize the risk of bias, a subset of three included articles underwent data extraction in duplicate to ensure consistency across reviewers. There was good agreement between duplicate extraction, with variations in the format of extracted data but consistency in content.

Data analysis

Quantitative and qualitative data were synthesized simultaneously, using a convergent integrated approach [ 32 ]. Quantitative data underwent narrative synthesis, where findings that caused benefit were compared with those that caused harm or no effect [ 36 ]. Vote counting based on the direction of effect was used to determine whether most studies found a positive or negative effect [ 36 ]. For qualitative findings, studies were grouped according to common strategies. Within these common strategies, findings were reviewed for trends in reported facilitators and barriers. These trends were deductively mapped to the COM-B model for behaviour change [ 37 ].
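Vote counting by direction of effect is a simple tally across studies. A minimal sketch, with hypothetical study labels and directions, might look like this:

```python
from collections import Counter

# Hypothetical study-level directions of effect, as might be recorded
# during data extraction: "positive" = benefit, "negative" = harm,
# "none" = no detectable effect. Labels and values are illustrative only.
directions = {
    "Study A": "positive",
    "Study B": "positive",
    "Study C": "none",
    "Study D": "positive",
    "Study E": "negative",
}

def vote_count(directions):
    """Tally directions of effect and return the tally and the majority direction."""
    tally = Counter(directions.values())
    majority, _votes = tally.most_common(1)[0]
    return tally, majority

tally, majority = vote_count(directions)
print(dict(tally))  # {'positive': 3, 'none': 1, 'negative': 1}
print(majority)     # positive
```

Note that vote counting weighs each study equally regardless of size or quality, which is why it is reserved for situations, as here, where heterogeneity rules out meta-analysis.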

Due to the heterogeneity in study outcomes, the Grading of Recommendations, Assessment, Development and Evaluations (GRADE) [ 38 ] approach was not used for this review. Overall certainty of evidence was determined based on the risk of bias of included study designs and study quality.

Database searching retrieved 7067 records. After removing duplicates, 4174 records were screened by title and abstract, resulting in 1370 reports for full text review. Of those 1370 records, 35 articles were included. Scanning the publication lists of key authors retrieved 187 records, of which eight were retrieved for full text review and two were included, for a total of 37 articles included in this review. See Fig. 1 for a PRISMA flow chart illustrating the article search and selection process.
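The flow counts reported above can be cross-checked arithmetically; all figures below are taken from the text, and the duplicate count is derived from them:

```python
# Cross-checking the PRISMA flow numbers reported in the text.
retrieved_db = 7067     # records retrieved by database searching
screened = 4174         # screened by title and abstract, after duplicate removal
duplicates_removed = retrieved_db - screened
full_text = 1370        # records advanced to full-text review
included_db = 35        # included from the database search

included_authors = 2    # included from key-author publication lists
total_included = included_db + included_authors

print(duplicates_removed, total_included)  # 2893 37
```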

figure 1

PRISMA 2020 flow chart

Study characteristics

The overall characteristics of included studies are summarized in Table 1 . Of 37 included studies, most were conducted in primary care settings ( n  = 16) and public health settings ( n  = 16), with some in social services ( n  = 3), child and youth mental health ( n  = 1), and occupational health ( n  = 1). Most studies were conducted in the USA ( n  = 17), followed by Canada ( n  = 12), Australia ( n  = 5), and Europe ( n  = 3).

Study designs included case reports ( n  = 18), single group pre-/post-test studies ( n  = 10), qualitative studies ( n  = 7), and randomized controlled trials (RCTs) ( n  = 2). Both RCTs evaluated the implementation of organizational EIDM.

Studies reported quantitative ( n  = 11), qualitative ( n  = 20), or both quantitative and qualitative results ( n  = 6). For the studies that reported quantitative results, measures included EIDM implementation, EIDM-related beliefs and behaviours, organizational priorities for EIDM, and patient care quality indicators. Quantitative measures were heterogeneous and did not allow meta-analysis. Qualitative findings were generated through formal qualitative analysis ( n  = 19) or descriptive case reports ( n  = 7). Most qualitative results included facilitators and barriers to implementation ( n  = 16).

Study quality

The critical appraisal checklist used to assess each study is indicated in Table  1 . Single group, pre-/post-test studies were evaluated according to the JBI Checklist for Quasi-experimental Studies [ 35 ].

Most included studies were rated Moderate or High quality according to their respective quality assessment tools, although a lack of control groups contributed to the risk of bias; the overall methodological quality of this body of literature was therefore rated as Moderate. Full quality assessments for each article are included in Appendix 2 .

Strategies for implementing organization-wide EIDM

Due to the heterogeneity of study designs, interventions, and outcomes, it was not possible to determine which EIDM implementation strategies are more effective than others. Implementation strategies included the establishment of Knowledge Broker-type roles, building the EIDM capacity of staff, and research or academic partnerships. These strategies are listed in Table  2 .

Evaluation of strategies implemented by studies in this review was often qualitative and described facilitators and barriers, rather than quantitatively measuring effectiveness. However, it is possible to explore EIDM implementation strategies and factors that appear to contribute to or inhibit success. The most common strategy implemented in included studies was the establishment of Knowledge Broker-type roles [ 20 , 41 , 44 , 47 , 48 , 51 , 52 , 54 , 55 , 56 , 57 , 59 , 60 , 62 , 63 , 64 , 65 , 66 , 67 , 69 , 71 , 72 ]. Studies described roles differently (e.g., “Evidence-based Practice Facilitator”, “Evidence Facilitator”, “EIDM Mentor”). These roles all served to support EIDM across organizations through knowledge sharing, evidence synthesis, implementation, and other EIDM-related activities. In some studies, new staff were hired into Knowledge Broker roles or the roles were developed among existing staff, while in others, Knowledge Brokers were contracted from external organizations. Knowledge Broker strategies were mostly implemented in parallel with other EIDM implementation strategies, such as capacity building for staff, integrating EIDM into decision-making processes, and developing leadership support for EIDM. When these strategies were evaluated quantitatively for organizational capacity, culture, and implementation of EIDM, most studies found positive results, such as increased scores for organizational climates supporting EIDM, improved attitudes toward EIDM, or the integration of EIDM into processes [ 44 , 52 , 54 , 62 , 66 , 67 , 71 , 72 ], although some studies found no change [ 55 , 60 ] following implementation of Knowledge Broker roles. Qualitatively, most studies described facilitators and barriers to EIDM, either through formal qualitative analysis or case report [ 14 , 20 , 39 , 40 , 41 , 42 , 43 , 45 , 47 , 48 , 52 , 55 , 57 , 59 , 60 , 61 , 64 , 65 , 68 ].
Facilitators included organizational culture with supportive leadership and staff buy-in, expectations to use evidence to inform decisions, accessible knowledge, and integration of EIDM into processes and templates. Barriers included limited time and competing priorities, staff turnover, and lack of understanding and support from management.

Ten included studies focused primarily on building EIDM capacity of existing staff at the organization, often at multiple levels (e.g., front-line service providers, managers, and leadership) [ 13 , 14 , 39 , 40 , 42 , 43 , 46 , 49 , 50 , 58 , 61 ]. Capacity building was typically done through EIDM-focused workshops, often with ongoing follow-up support from workshop facilitators. While studies often measured changes in individual knowledge and skill for EIDM among workshop participants, organizational change for EIDM was reported qualitatively, either through formal qualitative analysis or through a case report. Facilitators for EIDM in these ten studies included organizational culture with supportive leadership and staff buy-in, dedicated staff roles to support EIDM, opportunities to meet and discuss EIDM (e.g., communities of practice, journal clubs), knowledge sharing across the organization, expectations to use evidence to inform decisions, accessible knowledge, and integration of EIDM into processes and templates. Barriers included limited time and competing priorities, staff turnover, and negative attitudes toward EIDM.

Research or academic partnerships and networks were the main strategy described in three case reports [ 45 , 53 , 68 ]. These involved establishing collaborations, either through universities or non-governmental health organizations, that provided direct EIDM support. These strategies were not evaluated quantitatively but described facilitators and barriers to effective cross-sector collaborations. Facilitators for EIDM included supportive leadership and management, dedicated staff roles to support EIDM, EIDM knowledge and skill development for staff, and regular communication between partners. Barriers included limited time and competing priorities, preference for experiential over research evidence, and negative attitudes toward EIDM.

Overall, studies described successes in implementing EIDM across organizations, citing several common key facilitators and barriers. To instigate behaviour change, strategies must address capability for change, which may be achieved by building staff capacity, establishing dedicated support roles, improving access to evidence, and sharing knowledge across the organization. Strategies must also enable opportunities for change, which may be supported through forums for EIDM learning and practice, protecting time for EIDM, integrating EIDM into new or existing roles, and adding EIDM to processes and templates. Behaviour change also requires motivation, which may be built through a supportive organizational culture, expectations to use EIDM, recognition and positive reinforcement, and strong support from leadership.

Key considerations for implementing EIDM

Many of the facilitators and barriers to EIDM are common across strategies explored by the studies included in this review. To conceptualize these factors, they were mapped to the COM-B model for behaviour change [ 21 ] in Fig. 2 .

figure 2

COM-B Model for behaviour change with facilitators and barriers for implementation of organization-wide EIDM

Within the capability component of the COM-B model, staff knowledge and skill development were included as a facilitator. Studies included in this review demonstrated that knowledge and skill for EIDM supported the use of evidence in decision making [ 13 , 14 , 39 , 40 , 42 , 43 , 46 , 49 , 50 , 58 , 61 ]. The establishment of specialized or dedicated roles for EIDM, such as Knowledge Broker roles, was included in the capability component of the COM-B model, since Knowledge Broker roles support the capacity of organizations and their staff to use evidence-informed approaches [ 20 , 41 , 44 , 47 , 48 , 51 , 52 , 54 , 55 , 56 , 57 , 59 , 60 , 62 , 63 , 64 , 65 , 66 , 67 , 69 , 71 , 72 ]. Finally, knowledge sharing across organizations was described as a facilitator for EIDM by several of the studies that built staff capacity for EIDM or established Knowledge Broker roles [ 13 , 48 , 49 , 51 , 52 , 54 , 56 , 59 , 61 , 65 ]. Barriers to the capability for EIDM behaviours include staff turnover and subsequent knowledge loss [ 14 , 20 , 56 ]. Staff turnover is especially challenging for interventions that involve staff in dedicated Knowledge Broker roles and interventions that build the knowledge and skill for staff to engage in evidence use [ 14 , 20 , 56 ]. In some cases, individuals who are trained in the Knowledge Broker role are then promoted to new roles or management and have fewer opportunities to apply their Knowledge Broker skills [ 20 ].

The opportunity portion of the COM-B model reflects whether there is opportunity for new behaviour to occur. The development of processes and mechanisms that support new practices can act as a reminder for staff, and may include re-design of planning or decision-making templates to capture supporting evidence, or adding EIDM-related items to agendas for regular meetings [ 41 , 47 , 53 , 60 ]. Forums for learning and skill development provide staff with opportunities to gain knowledge and practice newly acquired skills in group settings, such as communities of practice or journal clubs [ 48 , 56 , 61 , 65 ]. Finally, protected time to apply EIDM was found to be a facilitator for opportunity in the COM-B model [ 20 , 47 , 57 , 59 , 65 ], while competing priorities were found to be a barrier [ 20 , 39 , 40 , 52 , 55 , 57 , 60 , 64 , 65 ].

The final influencer in the COM-B model, motivation, reflects whether there is sufficient motivation for a new behaviour to occur. Facilitators include supportive organizational culture [ 14 , 20 , 43 , 47 , 57 , 59 ], expectations for new practices to occur [ 20 , 40 ], recognition and positive reinforcement [ 52 , 59 , 60 , 65 ], and strong leadership support [ 14 , 20 , 39 , 40 , 43 , 47 , 56 , 59 , 65 , 68 ]. Barriers to motivation included a lack of understanding or support from management [ 20 ], and negative attitudes toward change [ 20 , 52 , 59 , 68 ].
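The mapping described above can be condensed into a simple data structure, which teams could use to audit their own context against the review's findings. This is an illustrative summary of the facilitators and barriers named in the text, not an exhaustive reproduction of Fig. 2:

```python
# Schematic summary of the facilitators and barriers mapped to the three
# COM-B components in this review (condensed and illustrative only).
comb_mapping = {
    "capability": {
        "facilitators": [
            "staff knowledge and skill development",
            "dedicated roles (e.g., Knowledge Brokers)",
            "knowledge sharing across the organization",
        ],
        "barriers": ["staff turnover and knowledge loss"],
    },
    "opportunity": {
        "facilitators": [
            "processes and mechanisms that prompt new practices",
            "forums for learning and skill development",
            "protected time for EIDM",
        ],
        "barriers": ["competing priorities"],
    },
    "motivation": {
        "facilitators": [
            "supportive organizational culture",
            "expectations for new practices",
            "recognition and positive reinforcement",
            "leadership support",
        ],
        "barriers": [
            "lack of understanding or support from management",
            "negative attitudes toward change",
        ],
    },
}

# A quick per-component count of mapped factors.
for component, factors in comb_mapping.items():
    print(component, len(factors["facilitators"]), len(factors["barriers"]))
```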

Strategies to implement EIDM across organizations include establishing specialized roles, providing staff education and training, developing processes or mechanisms to support new practices, and demonstrating leadership support. Facilitators and barriers for these strategies align with the COM-B model for behaviour change, which outlines capability, opportunity, and motivation as influencers of behaviour (Fig. 2 ). The COM-B model provides a comprehensive framework for the factors that influence behaviour change and has provided a valuable structure for examining barriers and facilitators to behaviour change in public health and related fields [ 73 , 74 , 75 , 76 ].

The capability section of the COM-B model reflects whether the intended audience possesses the knowledge and skill for a new behaviour. Findings from this review establish facilitators for EIDM implementation capability, including the development of staff knowledge and skill, establishing specialized roles, and knowledge sharing across the organization. The development of staff knowledge and skill for EIDM is a necessary component of EIDM in practice; however, the literature has found that the organization-wide impact of individual-level knowledge and skill development alone is limited [ 77 , 78 , 79 ]. Knowledge and skill development must therefore be supported by other components to have an impact beyond the individual. One such strategy is the establishment of dedicated roles for EIDM, such as Knowledge Brokers. Knowledge Broker roles have been used across diverse contexts and show promise in supporting organization-wide EIDM implementation [ 20 , 22 , 23 , 67 , 80 , 81 , 82 , 83 ]. Factors that influence the success of staff in Knowledge Broker roles align with those mapped to opportunity and motivation in the COM-B model, including the integration of EIDM into processes, knowledge sharing, and supportive organizational culture [ 20 , 22 , 47 , 67 , 84 , 85 ]. Knowledge Brokers can also help facilitate knowledge sharing across the organization, which was another facilitator mapped to the capability level of the model [ 20 , 47 , 84 , 85 ]. Knowledge sharing refers to shared learning, knowledge products, and resources for EIDM. At large public health organizations, it can be challenging to facilitate knowledge sharing between teams and departments [ 86 , 87 ].
Integrating technology can help; there have been some advances driven by the COVID-19 pandemic, such as the development of knowledge sharing platforms [ 88 , 89 , 90 , 91 ]. Public health organizations seeking to implement EIDM should invest in their knowledge sharing infrastructure.

At the capability level of the COM-B model, staff turnover was a barrier to EIDM implementation. Organizations that implement these strategies should be cognizant of the potential for knowledge loss due to staff turnover when selecting staff for Knowledge Broker roles or capacity building opportunities.

Facilitators for organizational EIDM opportunity include the development of processes or mechanisms to support new practices, forums for learning and skill development, and protected time. The use of reminders for organizational behaviour change and implementation of clinical practice guidelines has been shown to be an effective strategy across many contexts [ 92 , 93 , 94 , 95 ]. Organizations seeking to implement EIDM should consider revising current templates and processes to support their initiatives. Another facilitator included forums for shared learning and skill development. Other literature shows that these forums can be effective in developing knowledge and skill and should foster an environment of learning without fear of reprisal [ 96 , 97 ]. Finally, protected time for EIDM was a facilitator and competing priorities were a barrier. In public health practice, staff are often challenged with high workloads, so that EIDM may be viewed as an additional burden rather than a means to improve practice [ 98 , 99 ]. For an EIDM approach to be practiced, staff must be provided with sufficient time to apply and practice skills. Organizations should consider involving middle management who oversee staff time allocations, rather than only senior leadership, to help ensure that staff are provided with the time they need and that expectations are adjusted accordingly [ 20 , 23 ].

At the motivation level of the COM-B model, supportive organizational culture was mapped as a facilitator. The influence of organizational culture on evidence-informed practice at health organizations has been explored in a previous systematic review by Li et al. [ 100 ]. This systematic review of organizational contextual factors that influence evidence-based practice included 37 studies conducted in healthcare-related settings. Its findings align with the facilitators identified above, especially leadership support, which was found to impact evidence-based practice as well as all other factors that influence evidence-based practice [ 100 ]. The review also found that monitoring and feedback contributed to implementation of evidence-based practice, which aligns with recognition and positive reinforcement in the COM-B model above [ 100 ]. Notably, one factor mapped to the COM-B model in the current review, the expectation for new practices to occur, was not explicitly identified as an influence on practice by Li et al. [ 100 ]. While Li et al. acknowledge that leadership that neglects to hold staff accountable is detrimental to implementation of EIDM, this accountability and clear expectations for practice change emerged more strongly in the current rapid systematic review.

The need for leadership support also aligns with opportunity, since it is often management that determines the allocation of staff time for EIDM [ 20 , 23 ]. Positive attitudes, including the belief that EIDM is associated with positive outcomes, are a key factor in overall competence for EIDM [ 101 ]. Efforts to address negative attitudes among staff, especially at the leadership level, may improve implementation of EIDM.

While this review provides a comprehensive overview of interventions to support EIDM in public health and related organizations, it does have some limitations. Given the heterogeneity of included studies, it was not possible to discern which implementation strategies for EIDM are more effective than others. Knowledge Broker roles, building capacity for EIDM, and research-academic partnerships were all shown to contribute to EIDM, but study findings do not support one strategy as superior to the others. Given the highly contextual nature of these interventions, it is likely that the relative effectiveness of different interventions depends on an organization's unique set of characteristics. It is possible that a combination of strategies would maximize the likelihood that the diverse needs of staff are met; rigorous studies to evaluate this hypothesis are needed. Evaluation is also critical to determine whether change efforts are successful or need to be adjusted.

Most studies included in this review are non-randomized studies of interventions. Given the importance of context in organizational change, randomized controlled trial designs may not be well-suited to evaluating EIDM implementation [ 102 ]. High-quality single-group studies, such as prospective cohort analytic studies evaluated with validated measures, or qualitative descriptive analyses of case studies with thorough descriptions of interventions and context, may be more appropriate for informing future initiatives in this field. However, arguments have been made for the use of randomized trial designs in implementation research [ 103 ]. Foy et al. advocate for overcoming contextual barriers by using innovative trial designs, such as the multiphase optimization strategy approach, where a series of trials identify the most promising single or combined intervention components, or the sequential multiple assignment randomized trial approach, where early results inform tailoring of adaptive interventions [ 103 ]. These designs may be a promising approach to conducting trials within highly contextual settings. Another viewpoint is that it may not be essential to determine whether one strategy is superior to another, but rather how strategies combine into a larger, multi-strategy approach to implementation [ 104 ]. There may be greater benefit in determining the conditions under which various strategies are effective [ 104 ].

A limitation in this review’s methodology is that the review was completed following a rapid review protocol to ensure timely completion. Modifications of a systematic review approach included the use of a single reviewer for screening and using an unblinded reviewer to check quality assessment and data extraction. This may have contributed to some bias within the review, due to the reviewers’ interpretations of studies. To minimize this bias, there were efforts to calibrate screening, quality assessment and data extraction using a subset of studies.

This review provides a synthesis of strategies for the organization-wide implementation of EIDM, and an in-depth analysis of their facilitators and barriers in public health organizations. Facilitators and barriers mapped to the COM-B model for behaviour change can be used by organizational leadership to drive organizational change toward EIDM.

This rapid systematic review explored the implementation of EIDM at the organizational level of public health and related organizations. Despite the similarity of these implementation challenges, studies used distinct strategies for implementation, including the establishment of dedicated roles to support EIDM, building staff capacities, research or academic partnerships, and integrating evidence into processes or mechanisms. Facilitators and barriers mapped to the COM-B model provide key guidance for driving organizational change to evidence-informed approaches for all decisions.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Abbreviations

EIDM: Evidence-informed Decision Making

EBP: Evidence-based Practice

EIP: Evidence-informed Practice

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

JBI: Joanna Briggs Institute

KT: Knowledge Translation

RCT: Randomized Controlled Trial

Public Health Agency of Canada. Core Competencies for Public Health in Canada. 1st ed. 2008.

National Collaborating Centre for Methods and Tools. Evidence-Informed Decision Making in Public Health 2022. Available from: https://www.nccmt.ca/tools/eiph .

World Health Organization. WHO guide for evidence-informed decision-making. Evidence, policy, impact. 2021.

Canadian Public Health Association. Public health: a conceptual framework. Ottawa: Canadian Public Health Association; 2017.

Brownson RC, Gurney JG, Land GH. Evidence-based decision making in public health. J Public Health Manag Pract. 1999;5(5):86–97.

Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med. 2004;27(5):417–21.

Titler MG. The evidence for evidence-based practice implementation. In: Hughes RG, editor. Patient safety and quality: an evidence-based handbook for nurses. Advances in Patient Safety. Rockville (MD); 2008.

Pan American Health Organization. A guide for evidence-informed decision-making, including in health emergencies. 2022.

Saunders H, Gallagher-Ford L, Kvist T, Vehvilainen-Julkunen K. Practicing Healthcare professionals’ evidence-based practice competencies: an overview of systematic reviews. Worldviews Evid Based Nurs. 2019;16(3):176–85.

Article   PubMed   Google Scholar  

Paci M, Faedda G, Ugolini A, Pellicciari L. Barriers to evidence-based practice implementation in physiotherapy: a systematic review and meta-analysis. Int J Qual Health Care. 2021;33(2):mzab093.

Mathieson A, Grande G, Luker K. Strategies, facilitators and barriers to implementation of evidence-based practice in community nursing: a systematic mixed-studies review and qualitative synthesis. Prim Health Care Res Dev. 2019;20:e6.

Li S, Cao M, Zhu X. Evidence-based practice: knowledge, attitudes, implementation, facilitators, and barriers among community nurses-systematic review. Med (Baltim). 2019;98(39):e17209.

Article   Google Scholar  

Ward M, Mowat D. Creating an organizational culture for evidence-informed decision making. Healthc Manage Forum. 2012;25(3):146–50.

Peirson L, Ciliska D, Dobbins M, Mowat D. Building capacity for evidence informed decision making in public health: a case study of organizational change. BMC Public Health. 2012;12:137.

Article   PubMed   PubMed Central   Google Scholar  

Allen P, Parks RG, Kang SJ, Dekker D, Jacob RR, Mazzucca-Ragan S, et al. Practices among Local Public Health Agencies to support evidence-based decision making: a qualitative study. J Public Health Manag Pract. 2023;29(2):213–25.

Ellen ME, Leon G, Bouchard G, Ouimet M, Grimshaw JM, Lavis JN. Barriers, facilitators and views about next steps to implementing supports for evidence-informed decision-making in health systems: a qualitative study. Implement Sci. 2014;9:179.

Sadeghi-Bazargani H, Tabrizi JS, Azami-Aghdash S. Barriers to evidence-based medicine: a systematic review. J Eval Clin Pract. 2014;20(6):793–802.

Barzkar F, Baradaran HR, Koohpayehzadeh J. Knowledge, attitudes and practice of physicians toward evidence-based medicine: a systematic review. J Evid Based Med. 2018;11(4):246–51.

Clark E, Dobbins M, Hagerman L, Neumann S, Akaraci S. What is known about strategies to implement evidence-informed practice at an organizational level? Prospero; 2022.

Clark EC, Dhaliwal B, Ciliska D, Neil-Sztramko SE, Steinberg M, Dobbins M. A pragmatic evaluation of a public health knowledge broker mentoring education program: a convergent mixed methods study. Implement Sci Commun. 2022;3(1):18.

Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6:42.

Dobbins M, Hanna SE, Ciliska D, Manske S, Cameron R, Mercer SL, et al. A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009;4:61.

Dobbins M, Traynor RL, Workentine S, Yousefi-Nooraie R, Yost J. Impact of an organization-wide knowledge translation strategy to support evidence-informed public health decision making. BMC Public Health. 2018;18(1):1412.

McDonagh LK, Saunders JM, Cassell J, Curtis T, Bastaki H, Hartney T, et al. Application of the COM-B model to barriers and facilitators to chlamydia testing in general practice for young people and primary care practitioners: a systematic review. Implement Sci. 2018;13(1):130.

Mersha AG, Gould GS, Bovill M, Eftekhari P. Barriers and facilitators of adherence to nicotine replacement therapy: a systematic review and analysis using the capability, opportunity, motivation, and Behaviour (COM-B) Model. Int J Environ Res Public Health. 2020;17(23):8895.

Article   CAS   PubMed   PubMed Central   Google Scholar  

Pirotta S, Joham AJ, Moran LJ, Skouteris H, Lim SS. Implementation of evidence-based PCOS lifestyle management guidelines: perceived barriers and facilitators by consumers using the theoretical domains Framework and COM-B Model. Patient Educ Couns. 2021;104(8):2080–8.

Dubois A, Lévesque M. Canada’s National Collaborating centres: facilitating evidence-informed decision-making in public health. Can Commun Dis Rep. 2020;46(2–3):31–5.

Martin W, Wharf Higgins J, Pauly BB, MacDonald M. Layers of translation - evidence literacy in public health practice: a qualitative secondary analysis. BMC Public Health. 2017;17(1):803.

van der Graaf P, Forrest LF, Adams J, Shucksmith J, White M. How do public health professionals view and engage with research? A qualitative interview study and stakeholder workshop engaging public health professionals and researchers. BMC Public Health. 2017;17(1):892.

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

Neil-Sztramko SE, Belita E, Traynor RL, Clark E, Hagerman L, Dobbins M. Methods to support evidence-informed decision-making in the midst of COVID-19: creation and evolution of a rapid review service from the National Collaborating Centre for Methods and Tools. BMC Med Res Methodol. 2021;21(1):231.

Lizarondo L, Stern C, Carrier J, Godfrey C, Rieger K, Salmond S, Apostolo J, Kirkpatrick P, Loveday H. Chapter 8: mixed methods systematic reviews. Aromataris EMZ. 2020.

Thomas J, Kneale D, McKenzie JE, Brennan SE, Bhaumik S. In: Chandler J, Cumpston M, Li T, Page MJ, Welch VA, editors. Chapter 2: determining the scope of the review and the questions it will address. editor: Cochrane: Higgins JPT TJ; 2023.

Organisation for Economic Co-operation and Development. List of OECD Member countries - Ratification of the Convention on the OECD; 2021. Available from: https://www.oecd.org/about/document/ratification-oecd-convention.htm .

Joanna Briggs Institute. Available from: https://jbi.global/critical-appraisal-tools .

McKenzie JE, Brennan SE. Chapter 12. Synthesizing and presenting findings using other methods. 2021.

Brogly C, Bauer MA, Lizotte DJ, Press ML, MacDougall A, Speechley M, et al. An app-based Surveillance System for undergraduate students’ Mental Health during the COVID-19 pandemic: protocol for a prospective cohort study. JMIR Res Protoc. 2021;10(9):e30504.

Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. J Clin Epidemiol. 2011;64(4):383–94.

Allen P, O’Connor JC, Best LA, Lakshman M, Jacob RR, Brownson RC. Management practices to build evidence-based decision-making capacity for Chronic Disease Prevention in Georgia: a Case Study. Prev Chronic Dis. 2018;15:E92.

Allen P, Jacob RR, Lakshman M, Best LA, Bass K, Brownson RC. Lessons learned in promoting evidence-based Public Health: perspectives from Managers in State Public Health Departments. J Community Health. 2018;43(5):856–63.

Augustino LR, Braun L, Heyne RE, Shinn A, Lovett-Floom L, King H, et al. Implementing evidence-based practice facilitators: a Case Series. Mil Med. 2020;185(Suppl 2):7–14.

Awan S, Samokhvalov AV, Aleem N, Hendershot CS, Irving JA, Kalvik A, et al. Development and implementation of an Ambulatory Integrated Care Pathway for Major Depressive Disorder and Alcohol Dependence. Psychiatr Serv. 2015;66(12):1265–7.

Bennett S, Whitehead M, Eames S, Fleming J, Low S, Caldwell E. Building capacity for knowledge translation in occupational therapy: learning through participatory action research. BMC Med Educ. 2016;16(1):257.

Breckenridge-Sproat ST, Throop MD, Raju D, Murphy DA, Loan LA, Patrician PA. Building a unit-level Mentored Program to sustain a culture of Inquiry for evidence-based practice. Clin Nurse Spec. 2015;29(6):329–37.

Brodowski ML, Counts JM, Gillam RJ, Baker L, Collins VS, Winkle E, Skala J, Stokes K, Gomez R, Redmon J. Translating evidence-based policy to practice: a Multilevel Partnership using the interactive systems Framework. J Contemp Social Serv. 2018;94(3):141–9.

Brownson RC, Allen P, Jacob RR, deRuyter A, Lakshman M, Reis RS, et al. Controlling Chronic diseases through evidence-based decision making: a Group-Randomized Trial. Prev Chronic Dis. 2017;14:E121.

Dobbins M, Greco L, Yost J, Traynor R, Decorby-Watson K, Yousefi-Nooraie R. A description of a tailored knowledge translation intervention delivered by knowledge brokers within public health departments in Canada. Health Res Policy Syst. 2019;17(1):63.

Elliott MJ, Allu S, Beaucage M, McKenzie S, Kappel J, Harvey R, et al. Defining the scope of knowledge translation within a National, Patient-Oriented Kidney Research Network. Can J Kidney Health Dis. 2021;8:20543581211004803.

Fernandez ME, Melvin CL, Leeman J, Ribisl KM, Allen JD, Kegler MC, et al. The cancer prevention and control research network: an interactive systems approach to advancing cancer control implementation research and practice. Cancer Epidemiol Biomarkers Prev. 2014;23(11):2512–21.

Flaherty HB, Bornheimer LA, Hamovitch E, Garay E, Mini de Zitella ML, Acri MC, et al. Examining organizational factors supporting the adoption and use of evidence-based interventions. Community Ment Health J. 2021;57(6):1187–94.

Gallagher-Ford L. Implementing and sustaining EBP in real world healthcare settings: transformational evidence-based leadership: redesigning traditional roles to promote and sustain a culture of EBP. Worldviews Evid Based Nurs. 2014;11(2):140–2.

Gifford W, Lefebre N, Davies B. An organizational intervention to influence evidence-informed decision making in home health nursing. J Nurs Adm. 2014;44(7/8):395–402.

Haynes A, Rowbotham S, Grunseit A, Bohn-Goldbaum E, Slaytor E, Wilson A, et al. Knowledge mobilisation in practice: an evaluation of the Australian Prevention Partnership Centre. Health Res Policy Syst. 2020;18(1):13.

Hitch D, Lhuede K, Vernon L, Pepin G, Stagnitti K. Longitudinal evaluation of a knowledge translation role in occupational therapy. BMC Health Serv Res. 2019;19(1):154.

Hooge N, Allen DH, McKenzie R, Pandian V. Engaging advanced practice nurses in evidence-based practice: an e-mentoring program. Worldviews Evid Based Nurs. 2022;19(3):235–44.

Humphries S, Hampe T, Larsen D, Bowen S. Building organizational capacity for evidence use: the experience of two Canadian healthcare organizations. Healthc Manage Forum. 2013;26(1):26–32.

Irwin MM, Bergman RM, Richards R. The experience of implementing evidence-based practice change: a qualitative analysis. Clin J Oncol Nurs. 2013;17(5):544–9.

Kaplan L, Zeller E, Damitio D, Culbert S, Bayley KB. Improving the culture of evidence-based practice at a Magnet(R) hospital. J Nurses Prof Dev. 2014;30(6):274–80. quiz E1-2.

Kimber M, Barwick M, Fearing G. Becoming an evidence-based service provider: staff perceptions and experiences of organizational change. J Behav Health Serv Res. 2012;39(3):314–32.

Mackay HJ, Campbell KL, van der Meij BS, Wilkinson SA. Establishing an evidenced-based dietetic model of care in haemodialysis using implementation science. Nutr Diet. 2019;76(2):150–7.

Martin-Fernandez J, Aromatario O, Prigent O, Porcherie M, Ridde V, Cambon L. Evaluation of a knowledge translation strategy to improve policymaking and practices in health promotion and disease prevention setting in French regions: TC-REG, a realist study. BMJ Open. 2021;11(9):e045936.

Melnyk BM, Fineout-Overholt E, Giggleman M, Choy K. A test of the ARCC(c) model improves implementation of evidence-based practice, Healthcare Culture, and patient outcomes. Worldviews Evid Based Nurs. 2017;14(1):5–9.

Miro A, Perrotta K, Evans H, Kishchuk NA, Gram C, Stanwick RS, et al. Building the capacity of health authorities to influence land use and transportation planning: lessons learned from the healthy Canada by Design CLASP Project in British Columbia. Can J Public Health. 2014;106(1 Suppl 1):eS40–52.

Parke B, Stevenson L, Rowe M. Scholar-in-Residence: an Organizational Capacity-Building Model to move evidence to action. Nurs Leadersh (Tor Ont). 2015;28(2):10–22.

Plath D. Organizational processes supporting evidence-based practice. Adm Social work. 2013;37(2):171–88.

Roberts M, Reagan DR, Behringer B. A Public Health Performance Excellence Improvement Strategy: Diffusion and Adoption of the Baldrige Framework within Tennessee Department of Health. J Public Health Manag Pract. 2020;26(1):39–45.

Traynor R, DeCorby K, Dobbins M. Knowledge brokering in public health: a tale of two studies. Public Health. 2014;128(6):533–44.

van der Zwet RJM, Beneken genaamd Kolmer DM, Schalk R, Van Regenmortel T. Implementing evidence-based practice in a Dutch Social Work Organisation: A Shared responsibility. Br J Social Work. 2020;50(7):2212–32.

Waterman H, Boaden R, Burey L, Howells B, Harvey G, Humphreys J, et al. Facilitating large-scale implementation of evidence based health care: insider accounts from a co-operative inquiry. BMC Health Serv Res. 2015;15:60.

Williams NJ, Wolk CB, Becker-Haimes EM, Beidas RS. Testing a theory of strategic implementation leadership, implementation climate, and clinicians’ use of evidence-based practice: a 5-year panel analysis. Implement Sci. 2020;15(1):10.

Williams C, van der Meij BS, Nisbet J, McGill J, Wilkinson SA. Nutrition process improvements for adult inpatients with inborn errors of metabolism using the i-PARIHS framework. Nutr Diet. 2019;76(2):141–9.

Williams NJ, Glisson C, Hemmelgarn A, Green P. Mechanisms of change in the ARC Organizational Strategy: increasing Mental Health clinicians’ EBP adoption through improved Organizational Culture and Capacity. Adm Policy Ment Health. 2017;44(2):269–83.

Alexander KE, Brijnath B, Mazza D. Barriers and enablers to delivery of the healthy kids check: an analysis informed by the theoretical domains Framework and COM-B model. Implement Sci. 2014;9:60.

McArthur C, Bai Y, Hewston P, Giangregorio L, Straus S, Papaioannou A. Barriers and facilitators to implementing evidence-based guidelines in long-term care: a qualitative evidence synthesis. Implement Sci. 2021;16(1):70.

Moffat A, Cook EJ, Chater AM. Examining the influences on the use of behavioural science within UK local authority public health: qualitative thematic analysis and deductive mapping to the COM-B model and theoretical domains Framework. Front Public Health. 2022;10:1016076.

De Leo A, Bayes S, Bloxsome D, Butt J. Exploring the usability of the COM-B model and theoretical domains Framework (TDF) to define the helpers of and hindrances to evidence-based practice in midwifery. Implement Sci Commun. 2021;2(1):7.

Morshed AB, Ballew P, Elliott MB, Haire-Joshu D, Kreuter MW, Brownson RC. Evaluation of an online training for improving self-reported evidence-based decision-making skills in cancer control among public health professionals. Public Health. 2017;152:28–35.

Jones K, Armstrong R, Pettman T, Waters E. Knowledge translation for researchers: developing training to support public health researchers KTE efforts. J Public Health (Oxf). 2015;37(2):364–6.

Dreisinger M, Leet TL, Baker EA, Gillespie KN, Haas B, Brownson RC. Improving the public health workforce: evaluation of a training course to enhance evidence-based decision making. J Public Health Manag Pract. 2008;14(2):138–43.

Mendell J, Richardson L. Integrated knowledge translation to strengthen public policy research: a case study from experimental research on income assistance receipt among people who use drugs. BMC Public Health. 2021;21(1):153.

Russell DJ, Rivard LM, Walter SD, Rosenbaum PL, Roxborough L, Cameron D, et al. Using knowledge brokers to facilitate the uptake of pediatric measurement tools into clinical practice: a before-after intervention study. Implement Sci. 2010;5:92.

Brown KM, Elliott SJ, Robertson-Wilson J, Vine MM, Leatherdale ST. Can knowledge exchange support the implementation of a health-promoting schools approach? Perceived outcomes of knowledge exchange in the COMPASS study. BMC Public Health. 2018;18(1):351.

Langeveld K, Stronks K, Harting J. Use of a knowledge broker to establish healthy public policies in a city district: a developmental evaluation. BMC Public Health. 2016;16:271.

Bornbaum CC, Kornas K, Peirson L, Rosella LC. Exploring the function and effectiveness of knowledge brokers as facilitators of knowledge translation in health-related settings: a systematic review and thematic analysis. Implement Sci. 2015;10:162.

Sarkies MN, Robins LM, Jepson M, Williams CM, Taylor NF, O’Brien L, et al. Effectiveness of knowledge brokering and recommendation dissemination for influencing healthcare resource allocation decisions: a cluster randomised controlled implementation trial. PLoS Med. 2021;18(10):e1003833.

Jansen MW, De Leeuw E, Hoeijmakers M, De Vries NK. Working at the nexus between public health policy, practice and research. Dynamics of knowledge sharing in the Netherlands. Health Res Policy Syst. 2012;10:33.

Sibbald SL, Kothari A. Creating, synthesizing, and sharing: the management of knowledge in Public Health. Public Health Nurs. 2015;32(4):339–48.

Barnes SJ. Information management research and practice in the post-COVID-19 world. Int J Inf Manage. 2020;55:102175.

Dwivedi YH, Coombs DL, Constantiniou C, Duan I, Edwards Y, Gupta JS, Lal B, Misra B, Prashant S, Raman P, Rana R, Sharma NP, Upadhyay SK. Impact of COVID-19 pandemic on information management research and practice: transforming education, work and life. Int J Inf Manag. 2020;55:102211.

Krausz M, Westenberg JN, Vigo D, Spence RT, Ramsey D. Emergency response to COVID-19 in Canada: platform development and implementation for eHealth in Crisis Management. JMIR Public Health Surveill. 2020;6(2):e18995.

Smith RW, Jarvis T, Sandhu HS, Pinto AD, O’Neill M, Di Ruggiero E, et al. Centralization and integration of public health systems: perspectives of public health leaders on factors facilitating and impeding COVID-19 responses in three Canadian provinces. Health Policy. 2023;127:19–28.

Pereira VC, Silva SN, Carvalho VKS, Zanghelini F, Barreto JOM. Strategies for the implementation of clinical practice guidelines in public health: an overview of systematic reviews. Health Res Policy Syst. 2022;20(1):13.

Tomsic I, Heinze NR, Chaberny IF, Krauth C, Schock B, von Lengerke T. Implementation interventions in preventing surgical site infections in abdominal surgery: a systematic review. BMC Health Serv Res. 2020;20(1):236.

Harrison R, Fischer S, Walpola RL, Chauhan A, Babalola T, Mears S, et al. Where do models for Change Management, improvement and implementation meet? A systematic review of the applications of Change Management models in Healthcare. J Healthc Leadersh. 2021;13:85–108.

Correa VC, Lugo-Agudelo LH, Aguirre-Acevedo DC, Contreras JAP, Borrero AMP, Patino-Lugo DF, et al. Individual, health system, and contextual barriers and facilitators for the implementation of clinical practice guidelines: a systematic metareview. Health Res Policy Syst. 2020;18(1):74.

Valizadeh L, Zamanzadeh V, Alizadeh S, Namadi Vosoughi M. Promoting evidence-based nursing through journal clubs: an integrative review. J Res Nurs. 2022;27(7):606–20.

Portela Dos Santos O, Melly P, Hilfiker R, Giacomino K, Perruchoud E, Verloo H, et al. Effectiveness of educational interventions to increase skills in evidence-based practice among nurses: the EDITcare. Syst Rev Healthc (Basel). 2022;10(11):2204.

Shelton RC, Lee M. Sustaining evidence-based interventions and policies: recent innovations and future directions in implementation science. Am J Public Health. 2019;109(S2):S132–4.

Brownson RC, Fielding JE, Green LW. Building Capacity for evidence-based Public Health: reconciling the pulls of Practice and the push of Research. Annu Rev Public Health. 2018;39:27–53.

Li SA, Jeffs L, Barwick M, Stevens B. Organizational contextual features that influence the implementation of evidence-based practices across healthcare settings: a systematic integrative review. Syst Rev. 2018;7(1):72.

Belita E, Yost J, Squires JE, Ganann R, Dobbins M. Development and content validation of a measure to assess evidence-informed decision-making competence in public health nursing. PLoS One. 2021;16(3):e0248330.

Dobbins M, Robeson P, Ciliska D, Hanna S, Cameron R, O’Mara L, et al. A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies. Implement Sci. 2009;4:23.

Foy R, Ivers NM, Grimshaw JM, Wilson PM. What is the role of randomised trials in implementation science? Trials. 2023;24(1):537.

Pawson R. Pragmatic trials and implementation science: grounds for divorce? BMC Med Res Methodol. 2019;19(1):176.

Download references

Acknowledgements

The authors would like to acknowledge the NCCMT’s Rapid Evidence Service, particularly Alyssa Kostopoulos, Sophie Neumann and Selin Akaraci, for their contributions to this review.

The National Collaborating Centre for Methods and Tools is hosted by McMaster University and funded by the Public Health Agency of Canada. The views expressed herein do not necessarily represent the views of the Public Health Agency of Canada. The funder had no role in the design of the study, collection, analysis, or interpretation of data or in writing the manuscript.

Author information

Authors and Affiliations

National Collaborating Centre for Methods and Tools, McMaster University, McMaster Innovation Park, 175 Longwood Rd S, Suite 210a, Hamilton, ON, L8P 0A1, Canada

Emily C. Clark, Trish Burnett, Rebecca Blair, Robyn L. Traynor, Leah Hagerman & Maureen Dobbins

School of Nursing, McMaster University, Health Sciences Centre, 2J20, 1280 Main St W, Hamilton, ON, L8S 4K1, Canada

Maureen Dobbins


Contributions

E.C.C. and M.D. designed the study. E.C.C., L.H., R.B., R.L.T., and T.B. completed screening, quality assessment and data extraction. E.C.C. and M.D. analyzed study results. E.C.C. and T.B. wrote the manuscript in consultation with M.D. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Maureen Dobbins.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Clark, E.C., Burnett, T., Blair, R. et al. Strategies to implement evidence-informed decision making at the organizational level: a rapid systematic review. BMC Health Serv Res 24, 405 (2024). https://doi.org/10.1186/s12913-024-10841-3


Received: 23 October 2023

Accepted: 07 March 2024

Published: 01 April 2024

DOI: https://doi.org/10.1186/s12913-024-10841-3

Keywords

  • Evidence-informed decision making
  • Evidence-based practice
  • Knowledge translation
  • Knowledge mobilization
  • Implementation
  • Organizational change

BMC Health Services Research

ISSN: 1472-6963


How-To Create an Orthopaedic Systematic Review: A Step-by-Step Guide Part I: Study Design

Affiliations

  • 1 Maimonides Medical Center, Department of Orthopaedic Surgery, Brooklyn, NY.
  • 2 Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland.
  • 3 Cleveland Clinic Foundation, Department of Orthopaedic Surgery, Cleveland, OH.
  • 4 Sinai Hospital of Baltimore, Rubin Institute for Advanced Orthopedics, Baltimore, Maryland; Northwell Health Orthopaedics, Lenox Hill Hospital, New York, NY. Electronic address: [email protected].
  • PMID: 38552865
  • DOI: 10.1016/j.arth.2024.03.059

Systematic reviews are conducted through a consistent and reproducible method to search, appraise, and summarize information. Within the evidence-based pyramid, systematic reviews can range from the apex, presenting the strongest form of evidence given their synthesis of results from multiple high-quality primary studies, down to level IV evidence, depending on the studies they incorporate. When combined and supplemented with a meta-analysis using statistical methods to pool the results of three or more studies, systematic reviews are powerful tools to help answer research questions. The aim of this review is to serve as a guide on how to: 1) design; 2) execute; and 3) publish an orthopaedic arthroplasty systematic review and meta-analysis. In Part I, we discuss how to develop an appropriate research question as well as source and screen databases. To date, commonly used databases to source studies include PubMed/MEDLINE, Embase, Cochrane Library, Scopus, and Web of Science. Although not all-encompassing, this paper serves as a starting point for those interested in performing and/or critically reviewing lower extremity arthroplasty systematic reviews and meta-analyses.

Keywords: Arthroplasty research; Clinical orthopaedic research; Meta-analysis; Study design; Systematic review.

Copyright © 2024. Published by Elsevier Inc.


Perspect Clin Res. v.11(2); Apr–Jun 2020

Study designs: Part 7 – Systematic reviews

Priya Ranganathan

Department of Anaesthesiology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India

Rakesh Aggarwal

Director, Jawaharlal Institute of Postgraduate Medical Education and Research, Puducherry, India

In this series on research study designs, we have so far looked at different types of primary research designs which attempt to answer a specific question. In this segment, we discuss systematic review, which is a study design used to summarize the results of several primary research studies. Systematic reviews often also use meta-analysis, which is a statistical tool to mathematically collate the results of various research studies to obtain a pooled estimate of treatment effect; this will be discussed in the next article.

In the previous six articles in this series on study designs, we have looked at different types of primary research study designs which are used to answer research questions. In this article, we describe the systematic review, a type of secondary research design that is used to summarize the results of prior primary research studies. Systematic reviews are considered the highest level of evidence for a particular research question.[ 1 ]

SYSTEMATIC REVIEWS

As defined in the Cochrane Handbook for Systematic Reviews of Interventions , “Systematic reviews seek to collate evidence that fits pre-specified eligibility criteria in order to answer a specific research question. They aim to minimize bias by using explicit, systematic methods documented in advance with a protocol.”[ 2 ]

NARRATIVE VERSUS SYSTEMATIC REVIEWS

Review of available data has been done since time immemorial. However, the traditional narrative review (“expert review”) does not involve a systematic search of the literature. Instead, the author of the review, usually an expert on the subject, uses informal methods to identify (what he or she thinks are) the key studies on the topic. The final review is thus a summary of these “selected” studies. Since studies are chosen at will (haphazardly!) and without clearly defined criteria, such reviews preferentially include those studies that favor the author's views, leading to a potential for subjectivity or selection bias.

In contrast, systematic reviews involve a formal prespecified protocol with explicit, transparent criteria for the inclusion and exclusion of studies, thereby ensuring completeness of coverage of the available evidence and providing a more objective, replicable, and comprehensive overview of it.

META-ANALYSIS

Many systematic reviews use an additional tool, known as meta-analysis, which is a statistical technique for combining the results of multiple studies in a systematic review in a mathematically appropriate way, to create a single (pooled) and more precise estimate of treatment effect. The feasibility of performing a meta-analysis in a systematic review depends on the number of studies included in the final review and the degree of heterogeneity in the inclusion criteria as well as the results between the included studies. Meta-analysis will be discussed in detail in the next article in this series.
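The idea of a pooled estimate can be sketched numerically. Below is a minimal, illustrative Python implementation of fixed-effect inverse-variance pooling, one of several meta-analytic models; the function name and the example effect sizes are hypothetical and not taken from any study cited here:

```python
import math

def pooled_estimate(effects, std_errors):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Each study is weighted by 1/SE^2, so larger, more precise studies
    contribute more to the single pooled (summary) estimate.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval around the pooled estimate
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Three hypothetical trials reporting log odds ratios and standard errors
log_ors = [-0.25, -0.10, -0.30]
ses = [0.12, 0.20, 0.15]
est, ci = pooled_estimate(log_ors, ses)
```

Note that the pooled confidence interval is narrower than that of any single trial, which is the point of pooling: combining studies yields a more precise estimate of the treatment effect.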

THE PROCESS OF A SYSTEMATIC REVIEW

The conduct of a systematic review involves several sequential key steps.[ 3 , 4 ] As in other research study designs, a clearly stated research question and a well-written research protocol are essential before commencing a systematic review.

Step 1: Stating the review question

Systematic reviews can be carried out in any field of medical research, e.g. efficacy or safety of interventions, diagnostics, screening or health economics. In this article, we focus on systematic reviews of studies looking at the efficacy of interventions. As for the other study designs, for a systematic review too, the question is best framed using the Population, Intervention, Comparator, and Outcome (PICO) format.

For example, Safi et al . carried out a systematic review on the effect of beta-blockers on the outcomes of patients with myocardial infarction.[ 5 ] In this review, the Population was patients with suspected or confirmed myocardial infarction, the Intervention was beta-blocker therapy, the Comparator was either placebo or no intervention, and the Outcomes were all-cause mortality and major adverse cardiovascular events. The review question was “ In patients with suspected or confirmed myocardial infarction, does the use of beta-blockers affect mortality or major adverse cardiovascular outcomes? ”
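The four PICO components lend themselves to a simple structured representation. The sketch below (class and field names are illustrative, not from any formal tool) shows how a review question such as the one above can be assembled from its parts:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    population: str
    intervention: str
    comparator: str
    outcome: str

    def as_question(self) -> str:
        # Render the four components as a single review question
        return (f"In {self.population}, does {self.intervention} "
                f"compared with {self.comparator} affect {self.outcome}?")

# The Safi et al. review question, expressed in PICO form
q = PICOQuestion(
    population="patients with suspected or confirmed myocardial infarction",
    intervention="beta-blocker therapy",
    comparator="placebo or no intervention",
    outcome="all-cause mortality and major adverse cardiovascular events",
)
```

Keeping the components separate like this also makes the later steps easier: each PICO field becomes a block of the eligibility criteria and of the search strategy.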

Step 2: Listing the eligibility criteria for studies to be included

It is essential to explicitly define a priori the criteria for selection of studies which will be included in the review. Besides the PICO components, some additional criteria used frequently for this purpose include language of publication (English versus non-English), publication status (published as full paper versus unpublished), study design (randomized versus quasi-experimental), age group (adults versus children), and publication year (e.g. in the last 5 years, or since a particular date). The PICO criteria used may not be very specific, e.g. it is possible to include studies that use one or the other drug belonging to the same group. For instance, the systematic review by Safi et al . included all randomized clinical trials, irrespective of setting, blinding, publication status, publication year, or language, and reported outcomes, that had used any beta-blocker and in a broad range of doses.[ 5 ]
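Applying a priori eligibility criteria is essentially a filtering step. A minimal sketch, with hypothetical study records and criteria loosely modeled on the examples above (study design, publication year, language):

```python
# Hypothetical candidate studies; field names are illustrative
studies = [
    {"id": 1, "design": "RCT", "year": 2015, "language": "English"},
    {"id": 2, "design": "cohort", "year": 2018, "language": "English"},
    {"id": 3, "design": "RCT", "year": 1998, "language": "English"},
    {"id": 4, "design": "RCT", "year": 2020, "language": "French"},
]

def meets_criteria(study):
    """Eligibility criteria, defined a priori in the review protocol."""
    return (study["design"] == "RCT"
            and study["year"] >= 2000
            and study["language"] == "English")

eligible = [s["id"] for s in studies if meets_criteria(s)]
# Only study 1 satisfies all three criteria
```

The point of writing the criteria down in advance, whether in prose or as rules like these, is that every candidate study is judged by the same test, rather than at the reviewer's discretion.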

Step 3: Comprehensive search for studies that meet the eligibility criteria

A thorough literature search is essential to identify all articles related to the research question and to ensure that no relevant article is left out. The search may include one or more electronic databases and trial registries; in addition, it is common to hand-search the cross-references in the articles identified through such searches. One could also plan to reach out to experts in the field to identify unpublished data, and to search the grey literature (theses, conference abstracts, and non-peer-reviewed journals). These sources are particularly helpful when the intervention is relatively new, since data on it may not yet have been published as full papers and hence are unlikely to be found in literature databases. In the review by Safi et al., the search strategy included not only several electronic databases (Cochrane, MEDLINE, EMBASE, LILACS, etc.) but also other resources (e.g. Google Scholar, the WHO International Clinical Trials Registry Platform, and the reference lists of identified studies).[ 5 ] It is not essential to include all the above databases in one's search; however, it is mandatory to define in advance which of them will be searched.

Step 4: Identifying and selecting relevant studies

Once the search strategy defined in the previous step has been run to identify potentially relevant studies, a two-step process is followed. First, the titles and abstracts of the identified studies are processed to exclude any duplicates and to discard obviously irrelevant studies. In the next step, full-text papers of the remaining articles are retrieved and closely reviewed to identify studies that meet the eligibility criteria. To minimize bias, these selection steps are usually performed independently by at least two reviewers, who also assign a reason for non-selection to each discarded study. Any discrepancies are then resolved either by an independent reviewer or by mutual consensus of the original reviewers. In the Cochrane review on beta-blockers referred to above, two review authors independently screened the titles for inclusion, and then, four review authors independently reviewed the screen-positive studies to identify the trials to be included in the final review.[ 5 ] Disagreements were resolved by discussion or by taking the opinion of a separate reviewer. A summary of this selection process, showing the degree of agreement between reviewers, and a flow diagram that depicts the numbers of screened, included and excluded (with reason for exclusion) studies are often included in the final review.
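The degree of agreement between two independent screeners is commonly summarized with Cohen's kappa. The sketch below is a minimal illustration; the screening decisions are invented and do not come from the beta-blocker review.

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters making binary include/exclude decisions."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed proportion of titles on which the two raters agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement, from each rater's marginal inclusion rate
    p1, p2 = sum(rater1) / n, sum(rater2) / n
    expected = p1 * p2 + (1 - p1) * (1 - p2)
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions for 10 titles (1 = include, 0 = exclude)
reviewer_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
reviewer_b = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]
print(round(cohens_kappa(reviewer_a, reviewer_b), 2))  # 0.78
```

A kappa near 1 indicates near-perfect agreement beyond chance; values around 0.6–0.8 are conventionally read as substantial agreement.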

Step 5: Data extraction

In this step, relevant data are extracted from each selected study. This should be done by at least two reviewers independently, and the data then compared to identify any errors in extraction. Standard data extraction forms help in objective data extraction. The data extracted usually include the name of the author, the year of publication, details of the intervention and control treatments, and the number of participants and outcome data in each group. In the review by Safi et al., four review authors independently extracted data and resolved any differences by discussion.[ 5 ]

Handling missing data

Some of the studies included in the review may not report outcomes in accordance with the review methodology. Such missing data can be handled in two ways: by contacting the authors of the original study to obtain the necessary data, and by using data imputation techniques. Safi et al. used both approaches: they first tried to obtain data from the trial authors; where that failed, they analyzed the primary outcome (mortality) under a best-case scenario (presuming that all participants in the experimental arm with missing data had survived and all those in the control arm with missing mortality data had died, representing the maximum beneficial effect of the intervention) and a worst-case scenario (all participants with missing data in the experimental arm assumed to have died and those in the control arm to have survived, representing the least beneficial effect of the intervention).
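The best-case/worst-case logic can be made concrete with a small numeric sketch. All counts below are invented for illustration and are not taken from the Safi et al. review.

```python
def risk_ratio(events_exp, n_exp, events_ctrl, n_ctrl):
    """Risk ratio: event risk in the experimental arm over the control arm."""
    return (events_exp / n_exp) / (events_ctrl / n_ctrl)

# Hypothetical trial: 100 participants per arm, 10 observed deaths in each
# arm, and 5 participants per arm with missing vital status.
obs_deaths_exp, obs_deaths_ctrl = 10, 10
missing_exp, missing_ctrl = 5, 5
n = 100

# Best case for the intervention: missing participants in the experimental
# arm survived, missing participants in the control arm died.
rr_best = risk_ratio(obs_deaths_exp, n, obs_deaths_ctrl + missing_ctrl, n)

# Worst case: missing in the experimental arm died, missing controls survived.
rr_worst = risk_ratio(obs_deaths_exp + missing_exp, n, obs_deaths_ctrl, n)

print(round(rr_best, 2), round(rr_worst, 2))  # 0.67 1.5
```

If the review's conclusion holds under both extremes, the missing data are unlikely to be driving the result.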

Evaluating the quality (or risk of bias) in the included studies

The overall quality of a systematic review depends on the quality of each of the included studies. The quality of a study is inversely proportional to the potential for bias in its design. In our previous articles on interventional study design in this series, we discussed various methods to reduce bias, such as randomization, allocation concealment, participant and assessor blinding, the use of objective endpoints, minimizing missing data, the use of intention-to-treat analysis, and complete reporting of all outcomes.[ 6 , 7 ] These features form the basis of the Cochrane Risk of Bias Tool (RoB 2), a commonly used instrument to assess the risk of bias in the studies included in a systematic review.[ 8 ] Based on this tool, each study in a review can be classified as being at low risk of bias, raising some concerns regarding bias, or being at high risk of bias. Safi et al. used this tool to classify the included studies as having low or high risk of bias and presented these data in both tabular and graphical formats.[ 5 ]
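The mapping from domain-level judgments to an overall rating can be sketched as follows. This is a deliberately simplified reading of the RoB 2 guidance: the published algorithm also permits an overall "high" rating when several domains have some concerns, which this sketch omits.

```python
def rob2_overall(domain_judgments):
    """Simplified overall RoB 2 judgment from the five domain-level
    judgments, each 'low', 'some concerns', or 'high'."""
    if "high" in domain_judgments:
        return "high"           # any domain at high risk dominates
    if all(j == "low" for j in domain_judgments):
        return "low"            # all domains at low risk
    return "some concerns"      # otherwise at least one domain raises concerns

print(rob2_overall(["low", "low", "some concerns", "low", "low"]))
```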

In some reviews, the authors decide to summarize only studies with a low risk of bias and to exclude those with a high risk of bias. Alternatively, some authors undertake a separate analysis of studies with low risk of bias, besides an analysis of all the studies taken together. The conclusions from such analyses of only high-quality studies may be more robust.

Step 6: Synthesis of results

The data extracted from the various studies are pooled either quantitatively (known as a meta-analysis) or qualitatively (if pooling of results is not considered feasible). For qualitative reviews, data are usually presented in a tabular format showing the characteristics of each included study, to allow for easier interpretation.
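When quantitative pooling is feasible, the fixed-effect inverse-variance method is one standard approach: each study's effect estimate is weighted by the reciprocal of its variance. A minimal sketch with made-up log risk ratios (these numbers are not from any review cited here):

```python
import math

def pool_fixed_effect(estimates, std_errors):
    """Fixed-effect inverse-variance pooling of effect estimates
    (e.g. log risk ratios); returns the pooled estimate and its SE."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log risk ratios and standard errors from three trials
log_rrs = [-0.22, -0.05, -0.35]
ses = [0.10, 0.20, 0.15]
pooled, se = pool_fixed_effect(log_rrs, ses)
print(round(pooled, 3), round(se, 3))  # -0.229 0.077
```

Exponentiating the pooled log risk ratio gives the pooled risk ratio on the natural scale; random-effects models additionally incorporate between-study heterogeneity into the weights.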

Sensitivity analyses

Sensitivity analyses are used to test the robustness of the results of a systematic review by examining the impact of excluding or including studies with certain characteristics. As noted above, this can be based on the risk of bias (methodological quality), study design, drug dosage or schedule, or sample size. If the results of these different analyses are broadly similar, one can be more confident in the validity of the review's findings. Furthermore, such analyses can help identify whether the effect of the intervention varies across levels of another factor. In the beta-blocker review, a sensitivity analysis was performed according to the risk of bias of the included studies.[ 5 ]

IMPORTANT RESOURCES FOR CARRYING OUT SYSTEMATIC REVIEWS AND META-ANALYSES

Cochrane is an organization that works to produce high-quality, up-to-date systematic reviews related to human healthcare and policy, which are accessible to people across the world.[ 9 ] There are more than 7000 Cochrane reviews on various topics. One of its main resources is the Cochrane Library (available at https://www.cochranelibrary.com/ ), which incorporates several databases with different types of high-quality evidence to inform healthcare decisions, including the Cochrane Database of Systematic Reviews, the Cochrane Central Register of Controlled Trials (CENTRAL), and Cochrane Clinical Answers.

The Cochrane Handbook for Systematic Reviews of Interventions

The Cochrane handbook is an official guide, prepared by the Cochrane Collaboration, to the process of preparing and maintaining Cochrane systematic reviews.[ 10 ]

Review Manager software

Review Manager (RevMan) is software developed by Cochrane to support the preparation and maintenance of systematic reviews, including tools for performing meta-analysis.[ 11 ] It is freely available in both online (RevMan Web) and offline (RevMan 5.3) versions.

Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement is an evidence-based minimum set of items for reporting of systematic reviews and meta-analyses of randomized trials.[ 12 ] It can be used both by authors of such studies to improve the completeness of reporting and by reviewers and readers to critically appraise a systematic review. There are several extensions to the PRISMA statement for specific types of reviews. An update is currently underway.

Meta-analysis of Observational Studies in Epidemiology statement

The Meta-analysis of Observational Studies in Epidemiology (MOOSE) statement summarizes the recommendations for the reporting of meta-analyses in epidemiology.[ 13 ]

PROSPERO

PROSPERO is an international database for the prospective registration of protocols for systematic reviews in healthcare.[ 14 ] It aims to avoid duplication and to improve transparency in the reporting of results of such reviews.

Financial support and sponsorship

Conflicts of interest.

There are no conflicts of interest.

  • Open access
  • Published: 13 December 2023

Attributes of errors, facilitators, and barriers related to rate control of IV medications: a scoping review

  • Jeongok Park   ORCID: orcid.org/0000-0003-4978-817X 1 ,
  • Sang Bin You   ORCID: orcid.org/0000-0002-1424-4140 2 ,
  • Gi Wook Ryu   ORCID: orcid.org/0000-0002-4533-7788 3 &
  • Youngkyung Kim   ORCID: orcid.org/0000-0002-3696-5416 4  

Systematic Reviews volume  12 , Article number:  230 ( 2023 ) Cite this article


Intravenous (IV) medication is commonly administered and closely associated with patient safety. Although nurses dedicate considerable time and effort to the rate control of IV medications, many medication errors have been linked to the wrong rate of IV medication. Further, there is a lack of comprehensive studies examining the literature on rate control of IV medications. This study aimed to identify the attributes of errors, facilitators, and barriers related to rate control of IV medications by summarizing and synthesizing the existing literature.

This scoping review was conducted using the framework proposed by Arksey and O’Malley and PRISMA-ScR. Overall, four databases—PubMed, Web of Science, EMBASE, and CINAHL—were employed to search for studies published in English before January 2023. We also manually searched reference lists, related journals, and Google Scholar.

A total of 1211 studies were retrieved from the database searches and 23 studies were identified from manual searches, after which 22 studies were selected for the analysis. Among the nine project or experiment studies, two interventions were effective in decreasing errors related to rate control of IV medications. One of them was prospective, continuous incident reporting followed by prevention strategies, and the other encompassed six interventions to mitigate interruptions in medication verification and administration. Facilitators and barriers related to rate control of IV medications were classified as human, design, and system-related contributing factors. The sub-categories of human factors were classified as knowledge deficit, performance deficit, and incorrect dosage or infusion rate. The sub-category of design factor was device. The system-related contributing factors were classified as frequent interruptions and distractions, training, assignment or placement of healthcare providers (HCPs) or inexperienced personnel, policies and procedures, and communication systems between HCPs.

Conclusions

Further research is needed to develop effective interventions to improve IV rate control. Considering the rapid growth of technology in medical settings, interventions and policy changes regarding education and the work environment are necessary. Additionally, each key group such as HCPs, healthcare administrators, and engineers specializing in IV medication infusion devices should perform its role and cooperate for appropriate IV rate control within a structured system.

Peer Review reports

Medication errors are closely associated with patient safety and the quality of care [ 1 , 2 ]. In particular, medication errors, which denote a clinical issue of global importance for patient safety, negatively affect patient morbidity and mortality and lead to delays in discharge [ 3 , 4 ]. The National Health Service in the UK estimates that 237 million medication errors occur each year, of which 66 million cause clinically significant harm [ 5 ]. The US Food and Drug Administration reported that they received more than 100,000 reports each year associated with suspected medication errors [ 6 ]. Additionally, it was estimated that 40,000–98,000 deaths per year in the USA could be attributed to errors by healthcare providers (HCPs) [ 7 ]. Previous studies have revealed that medication errors account for 6–12% of hospital admissions [ 8 ].

Intravenous (IV) medication is a common treatment in hospitalized patient care [ 9 ]. It is used in wards, intensive care units (ICUs), emergency rooms, and outpatient clinics in hospitals [ 9 , 10 ]. As direct HCPs, nurses are integral to patient safety during the IV medication process, which can involve unintended errors or violations of recommendations [ 3 ]. As many drugs injected via the IV route are high-risk drugs, such as chemotherapy agents, insulin, and opioids [ 10 ], inappropriate dose administration can lead to adverse events (AEs), such as death and life-threatening events [ 11 , 12 ].

The IV medication process is complex and multistage. It comprises 12 stages, which can be listed as follows: (1) obtain the drug for administration, (2) obtain the diluent, (3) reconstitute the drug in the diluent, (4) take the drug to the patient’s bedside, (5) check for the patient’s allergies, (6) check the route of drug administration, (7) check the drug dose, (8) check the patency of the cannula, (9) expel the air from the syringe, (10) administer the drug, (11) flush the cannula, and (12) sign the prescription chart [ 13 ]. IV medication errors can occur at any of these stages. It is imperative to administer the drug at the correct time and rate during the IV medication process [ 13 ]. The National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) defined an error in IV medication rates as “too fast or too slow rate than that intended” [ 14 ]. Maintaining the correct rate of IV medication is essential for enhancing the effectiveness of IV therapy and reducing AEs [ 9 ].

Infusion pumps are devices designed to improve the accuracy of IV infusions, with drug flow, volume, and timing programmed by HCPs [ 15 ]. A smart pump is an infusion pump with a software package containing a drug library. During programming, the smart pump software warns users about entering drug parameters that deviate from the recommended parameters, such as the type, dose, and dosage unit of the drug [ 15 ]. In the absence of a device for administering IV medication, such as an infusion pump or smart pump, the IV rate is usually controlled by counting the number of fluid drops falling into the drip chamber [ 9 ].

According to a previous study, applying an incorrect rate was the most prevalent IV medication error, accounting for 536 of 925 (57.9%) total IV medication errors [ 16 ]. Although rate control of IV medications is critical to patient safety and quality care, few studies have reviewed and mapped the relevant literature on rate control of IV medications. Therefore, this study aimed to identify the attributes of errors, facilitators, and barriers related to rate control of IV medications by summarizing the existing literature.

The specific research questions of this study are as follows:

What are the general characteristics of the studies related to rate control of IV medications?

What are the attributes of errors associated with rate control of IV medications?

What are the facilitators and barriers to rate control of IV medications?

This scoping review followed the framework suggested by Arksey and O’Malley [ 17 ] and further developed by Levac et al. [ 18 ] and Peters et al. [ 19 ]. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist, together with the Joanna Briggs Institute (JBI) guidance, was used to ensure reliability in the reporting of the methodology (Additional file 1 ) [ 19 ].

Search strategy

According to the JBI Manual for Evidence Synthesis, a three-step search strategy was adopted [ 19 ]. First, a preliminary search was conducted in PubMed, based on the titles, abstracts, keywords, and index terms of articles, to develop our search strategy. In this preliminary search, we used keywords such as “patients,” “nurse,” “IV therapy,” “monitoring,” “rate,” and “medication error.” The results indicated that including the keywords “patients” and “nurse” excluded studies on medical devices and system-related factors. We therefore dropped these two keywords and focused on “IV therapy,” “monitoring,” “rate,” and “medication error” to comprehensively capture studies on factors associated with rate control of infusion medications. Second, following consultations with a research librarian at Yonsei University Medical Library, we used all identified keywords and index terms across all included databases to elaborate our search strategy. Four databases (PubMed, CINAHL, EMBASE, and Web of Science) were searched using the keywords, index terms, and a comprehensive list of keyword variations to identify relevant studies published before January 2023. The details of the search strategy are described in Additional file 2 . All database search results were exported into EndNote version 20. Finally, we manually searched the reference lists of the included articles identified from the database search. Furthermore, we manually searched two journals related to medication errors and patient safety, as well as Google Scholar, to comprehensively identify the relevant literature. When searching Google Scholar, keywords such as “medication,” “rate,” “IV therapy,” “intravenous administration,” and “medication error” were appropriately combined using search modifiers.
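The usual pattern for combining such keywords is to OR together the synonyms for each concept and then AND the concepts. The sketch below is a hypothetical illustration of that composition; the grouping of terms is assumed, and real database syntax (field tags, MeSH terms, truncation) varies by platform.

```python
# Hypothetical concept groups; synonyms within a group are interchangeable.
keyword_groups = [
    ["IV therapy", "intravenous administration"],  # concept 1 and a synonym
    ["rate", "monitoring"],                        # concept 2
    ["medication error"],                          # concept 3
]

def build_query(groups):
    """OR synonyms within each concept, then AND the concepts together."""
    ors = ["(" + " OR ".join(f'"{k}"' for k in g) + ")" for g in groups]
    return " AND ".join(ors)

print(build_query(keyword_groups))
# ("IV therapy" OR "intravenous administration") AND ("rate" OR "monitoring") AND ("medication error")
```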

Eligibility criteria

Inclusion criteria were established according to the participants, concept, and context (PCC) framework recommended by the JBI manual for scoping reviews [ 19 ]. The participants included patients receiving IV therapy, HCPs involved in administering IV medications, and experts from non-healthcare fields related to rate control of IV medications. The concepts were facilitators and barriers to rate control of IV medications, and the contexts were the environments or situations in which errors in rate control of IV medications occurred. While screening the literature identified by the three-step search against the inclusion criteria, we refined the exclusion criteria through discussion among the researchers. The exclusion criteria were as follows: (1) not available in English, (2) not an original article, (3) a study of medication errors in general, (4) not accessible, or (5) a prescription error.

Study selection

Once duplicates were automatically removed in EndNote, two independent researchers assessed the eligibility of all articles by screening the titles and abstracts against the inclusion and exclusion criteria. Studies identified via database searches were screened by GWR and YK, and studies identified via other methods were screened by SBY and YK. Full-text articles were obtained either when the studies met the inclusion criteria or when more information was needed to assess eligibility, and the researchers independently reviewed the full-text articles. In case of any disagreement in the study selection process, a consensus was reached through discussion among three researchers (GWR, SBY, and YK) and a senior researcher (JP).

Data extraction

Through consensus among the researchers, a form for data extraction was developed to extract appropriate information following the JBI manual for scoping reviews [ 19 ]. The following data were collected from each study: author information, publication year, country, study design, study period, aims, participants or events (defined as the occurrences related to patient care focused on in the study), contexts, methods, errors related to rate control of IV medications (observed results or intervention outcomes), error severity, and facilitators and barriers according to the NCC MERP criteria. Three researchers (GWR, SBY, and YK) independently conducted data charting and completed the data extraction form through discussion.

Data synthesis

The general characteristics of included studies such as publication year, country, study design, and study period were analyzed using descriptive statistics to identify trends or patterns. The aims, participants, events, contexts, and methods of the included studies were classified into several categories through a research meeting including a senior researcher (JP) to summarize and analyze the characteristics of the included studies comprehensively. Attributes of errors associated with rate control of IV medications were analyzed and organized through consensus among researchers based on extracted data. Facilitators and barriers to rate control of IV medications were independently classified according to NCC MERP criteria by three researchers (GWR, SBY, and YK) and iteratively modified. Discrepancies were resolved by discussion and re-reading the articles, with the final decision made in consultation with the senior researcher (JP).

A total of 1211 studies were retrieved through the database search. After reviewing the titles and abstracts, 42 studies were considered for detailed assessment by the three researchers. Of these, 2 were not available in English, 3 were not original articles, 24 were studies of medication error in general without details on rate control of IV medications, 2 concerned prescription errors, and 1 was not accessible. Thus, 10 studies were identified through the database search. Additionally, 23 studies were identified through the manual search; among these, 5 were not original articles and 6 were studies of medication error in general, leaving 12 studies identified via other methods. Hence, 22 studies were included in the data analysis (Fig.  1 , Additional file 3 ).
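The selection counts reported above are internally consistent, as a quick arithmetic check confirms (numbers transcribed from the text):

```python
# Full-text exclusions from the 42 database-search records assessed in detail
full_text_assessed = 42
db_excluded = {"not in English": 2, "not original": 3,
               "medication error in general": 24,
               "prescription error": 2, "not accessible": 1}
db_included = full_text_assessed - sum(db_excluded.values())

# Exclusions from the 23 records found by manual searching
manual_identified = 23
manual_excluded = {"not original": 5, "medication error in general": 6}
manual_included = manual_identified - sum(manual_excluded.values())

print(db_included, manual_included, db_included + manual_included)  # 10 12 22
```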

Figure 1. PRISMA flow chart for literature selection

Characteristics of the studies

General characteristics

Table 1 presents the general characteristics of the included studies. Two of the included studies were published before 2000 [ 20 , 21 ], and more than half ( n  = 15) were published in 2010 or later. A majority of the included studies were conducted in Western countries ( n  = 15) [ 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 ], four were conducted in Asia [ 20 , 37 , 38 , 39 ], two in Australia [ 21 , 40 ], and one in Egypt [ 2 ]. In terms of study design, most studies were project studies ( n  = 7) [ 22 , 24 , 27 , 28 , 30 , 34 , 39 ] or prospective observational studies ( n  = 5) [ 2 , 20 , 29 , 32 , 40 ], followed by retrospective studies ( n  = 3) [ 21 , 25 , 35 ], qualitative or mixed-methods studies ( n  = 3) [ 23 , 26 , 33 ], and descriptive cross-sectional studies ( n  = 2) [ 36 , 38 ]. Additionally, there was one controlled pre-posttest study [ 37 ] and one simulation laboratory experiment [ 31 ]. The study period also varied greatly, from 2 days [ 32 ] to 6 years [ 25 ].

The aims of the included studies were divided into two main categories. First, 13 studies identified the current status, causes, and factors influencing errors that could occur in healthcare settings [ 2 , 20 , 21 , 23 , 25 , 26 , 29 , 32 , 33 , 35 , 36 , 38 , 40 ]. Among these, three studies were on errors that may occur in specific healthcare procedures, such as anesthesia [ 20 ], vascular access [ 21 ], and pediatric chemotherapy [ 25 ]. Additionally, three studies explored possible errors associated with specific settings and medications, such as an obstetric emergency ward [ 2 ], cardiac critical care units [ 38 ], and high-alert medications [ 36 ], and three studies investigated the errors associated with the overall IV medication preparation or administration [ 23 , 33 , 40 ]. Moreover, three studies aimed at identifying potential problems associated with the use of IV medication infusion devices [ 26 , 32 , 35 ], and one study was about errors in medication preparation and administration that could occur in a setting using a specific system connected to electronic medical records [ 29 ]. Second, nine studies described the procedure of developing interventions or identified the effect of interventions [ 22 , 24 , 27 , 28 , 30 , 31 , 34 , 37 , 39 ].

Participants and events

Participants in the 22 studies included HCPs, such as nurses, doctors, and pharmacists, as well as patients. Notably, four of these studies involved only nurses [ 31 , 37 , 38 , 40 ], and one involved only pharmacists [ 36 ]. Furthermore, five studies included participants from various departments or roles [ 23 , 26 , 27 , 28 , 39 ]. Three studies had patients as participants, and two included both patients and medical staff [ 29 , 33 ].

Among the included studies, nine focused on errors in IV medication preparation and administration as events [ 23 , 26 , 30 , 32 , 33 , 34 , 37 , 38 , 40 ], and five focused on the administration process only [ 30 , 32 , 34 , 37 , 40 ]. Four studies focused on problems in the administration of all types of drugs, including errors associated with rate control of IV medications [ 2 , 22 , 28 , 29 ]. Additionally, four studies focused on events involving IV medication infusion devices [ 24 , 27 , 35 , 39 ], two explored events that occurred during chemotherapy [ 22 , 25 ], and others analyzed problems in vascular access [ 21 ], iatrogenic events among neonates [ 28 ], and critical events in anesthesia cases [ 20 ].

Contexts and methods

The contexts can be broadly divided into healthcare settings, mainly hospitals, and laboratory settings. Three hospital-based studies were conducted across an entire hospital [ 20 , 22 , 24 ], and eight studies were conducted at several hospitals, with the number of hospitals involved varying from 2 to 132 [ 23 , 26 , 32 , 33 , 34 , 35 , 38 , 40 ]. Furthermore, four studies were conducted in different departments within one hospital [ 29 , 30 , 37 , 39 ], three were conducted in only one department [ 2 , 27 , 28 ], two considered other healthcare settings and were not limited to hospitals [ 21 , 25 ], and one was conducted in a simulation laboratory setting that enabled a realistic simulation of an ambulatory chemotherapy unit [ 31 ].

Regarding methods, seven of the nine intervention studies developed or implemented interventions based on interdisciplinary or multidisciplinary collaboration [ 22 , 24 , 28 , 30 , 34 , 37 , 39 ]. Two studies developed and evaluated interventions that created an environment for nurses to improve performance and correct errors associated with medication administration [ 31 , 39 ], and two intervention studies concerned error reporting methods or observation tools and the processes for addressing reported errors [ 28 , 30 ]. Other interventions included a pharmacist-led educational program for nurses [ 37 ], a comprehensive intervention spanning drug prescription to administration to reduce chemotherapy-related medication errors [ 22 ], infusion safety intervention bundles [ 34 ], the implementation of a smart IV pump equipped with failure mode and effects analysis (FMEA) [ 24 ], and a smart system to prevent pump programming errors [ 27 ].

Data collection methods were classified as review of reported incidents [ 20 , 21 , 22 , 25 , 35 ], review of medical charts [ 26 ], observation [ 23 , 29 , 30 , 31 , 32 , 33 , 34 , 37 , 40 ], follow-up on every pump alert [ 27 ], and self-report questionnaires or surveys [ 36 , 38 ]. One study combined a retrospective review of reported incidents with self-report questionnaires [ 39 ]. In the study by Kandil et al., observation, nursing record review, and medical chart review were all used [ 2 ].

Attributes of errors associated with rate control of IV medications

Table 2 presents the attributes of errors related to rate control of IV medications in observed results or intervention outcomes, and error severity. Notably, 6 of 13 studies presenting observed results reported errors related to IV medication infusion devices among the rate control errors [ 20 , 25 , 32 , 33 , 35 , 36 ]. Additionally, four studies reported errors in bolus dose administration or IV push and flushing lines among IV rate errors [ 2 , 23 , 36 , 40 ]. Among the 13, nine studies reported error severity, and among these, three studies used NCC MERP ratings [ 25 , 32 , 33 ]. In four studies, error severity was reported by describing several cases in detail [ 2 , 21 , 23 , 25 ], and two studies reported no injuries or damages due to errors [ 26 , 29 ]. Among the nine studies that developed interventions and identified their effectiveness, four presented the frequency of incorrect rate errors as an outcome variable [ 28 , 30 , 34 , 37 ]. Moreover, two studies suggested compliance rates for intervention as outcome variables [ 24 , 31 ].

Among the nine project or experiment studies, three showed a decrease in error rate as a result of the intervention [ 28 , 31 , 34 ]. Three studies developed interventions to reduce rate errors but did not report the frequency or incidence of rate errors [ 22 , 24 , 27 ]. One study reported the frequency of rate errors only after the intervention, so the effect of the intervention could not be determined [ 30 ]. Three studies reported the severity of errors related to rate control of IV medications [ 24 , 30 , 34 ]: two used NCC MERP severity ratings [ 30 , 34 ], and one reported that all errors caused by smart IV pumps equipped with FMEA resulted in either temporary harm or no harm [ 24 ].

Facilitators and barriers to rate control of IV medications

Table 3 presents the facilitators and barriers related to rate control of IV medications according to the NCC MERP taxonomy based on the 22 included studies. Sub-categories of human factors were classified as knowledge deficit, performance deficit, miscalculation of dosage or infusion rate, and stress. The sub-category of design factor was device. System-related contributing factors were classified as frequent interruptions and distractions, inadequate training, poor assignment or placement of HCPs or inexperienced personnel, policies and procedures, and communication systems between HCPs [ 14 ].

Human factors

Among the barriers extracted from the 22 studies, 11 factors belonged to the “knowledge deficit,” “performance deficit,” “miscalculation of dosage or infusion rate,” and “stress (high-volume workload)” sub-categories in this category. Half of these factors were related to “performance deficit.” Barriers identified in two or more studies were tubing misplacement [ 24 , 35 ] and non-compliance with protocols and guidelines [ 2 , 25 ], all of which belonged to “performance deficit.” Additionally, the high workload and environmental characteristics of the ICU, which corresponded to “stress,” were also identified as barriers to rate control of IV medications [ 23 , 37 ].

Design factors

Most factors in this category were related to IV medication infusion devices such as infusion pumps and smart pumps. In the study by Lyons et al., the use of devices such as patient-controlled analgesia pumps and syringe drivers was a facilitator of rate control of IV medications [ 33 ]. In addition to the use of these devices, the expansion of device capabilities [ 26 ], the monitoring of programming [ 27 ], and standardization [ 22 ] were also facilitators. Unexpected equipment faults were a barrier identified in five studies [ 2 , 20 , 25 , 35 , 38 ]. Moreover, the complex design of the equipment [ 23 , 24 ] and incomplete drug libraries in smart pumps [ 33 , 35 ] were each identified in two studies. Factors such as the misassembly of an unfamiliar infusion pump [ 21 ] and smart pumps not connected to electronic systems [ 30 ] were also barriers.

Contributing factors (system-related)

The factors belonging to "frequent interruptions and distractions" in this category were all barriers. Specifically, running multiple infusions at once [24, 27] and air-in-line alarms or clearing air [24] were identified as barriers. Facilitators in the "training" sub-category included education and training on the use of smart IV pumps [24] and on chemotherapy errors [22]. There were two factors in "assignment or placement of a HCP or inexperienced personnel": ward-based pharmacists were a facilitator [36], whereas nurses with less than 6 years of experience were a barrier [40]. The sub-category with the most factors was "policies and procedures," in which the facilitator extracted from four studies was double-checking throughout the process [22, 24, 28, 36]. Among the barriers, two were related to keep-the-vein-open orders, identified in three studies [30, 32, 33]. The lack of automated infusion pumps [2], the absence of a culture for their use [32, 33], and problems in the drug prescription process [33] were also identified as barriers. Communication with physicians in instances of doubt was the only facilitator identified in "communication systems between HCPs" [28].

Resolutions for the barriers to rate control of IV medications

Table 4 presents the resolutions for the barriers to rate control of IV medications in the included studies. The suggested resolutions primarily belonged to the “contributing factors (system-related)” category. Resolutions in the “human factors” category were mainly related to the knowledge and performance of individual healthcare providers, and there were no studies proposing resolutions specifically addressing stress (high-volume workload), which is one of the barriers. Resolutions in the “design” category focused on the development [ 26 , 30 ], appropriate use [ 24 , 33 ], evaluation [ 26 ], improvement [ 24 , 26 , 30 ], and supply [ 23 ] of infusion pumps or smart pumps. Resolutions addressing aspects within the “contributing factors (system-related)” category can be classified into six main areas: interdisciplinary or inter-institution collaboration [ 23 , 25 , 28 , 30 , 34 , 35 , 36 , 37 ], training [ 24 , 37 , 40 ], implementation of policies or procedures [ 29 , 31 , 34 , 35 , 37 , 39 ], system improvement [ 25 , 30 , 32 ], creating a patient safety culture [ 25 , 37 , 38 ], and staffing [ 2 , 38 ].

Discussion

This scoping review provides the most recent evidence on the attributes of errors, facilitators, and barriers related to rate control of IV medications. The major findings of this study were as follows: (1) only a few intervention studies were effective in decreasing errors related to rate control of IV medications; (2) there was limited research focusing on the errors associated with IV medication infusion devices; (3) few studies have systematically evaluated and analyzed the severity of errors associated with rate control of IV medications; and (4) the facilitators and barriers related to rate control of IV medications were classified by the NCC MERP taxonomy into three categories (human factors, design, and system-related contributing factors).

Among the nine project or experiment studies, only two interventions showed statistically significant effectiveness for IV rate control [ 28 , 31 ]. Six studies did not report the specific statistical significance of the intervention [ 22 , 24 , 27 , 30 , 37 , 39 ], and one study found that the developed intervention had no statistically significant effect [ 34 ]. In another study, administration errors, including rate errors, increased in the experimental group and decreased in the control group [ 37 ]. IV rate control is a major process in medication administration that is comprehensively related to environmental and personal factors [ 3 , 41 ]. According to previous studies, interdisciplinary or multidisciplinary cooperation is associated with the improvement in patient safety and decreased medical errors [ 42 , 43 , 44 ]. Seven of the included studies were also project or experiment studies that developed interventions based on an interdisciplinary or multidisciplinary approach [ 22 , 24 , 28 , 30 , 34 , 37 , 39 ]. Additionally, an effective intervention was developed by a multidisciplinary care quality improvement team [ 28 ]. Therefore, it is crucial to develop effective interventions based on an interdisciplinary or multidisciplinary approach to establish practice guidelines with a high level of evidence related to IV rate control.

Of the 22 included studies, three identified potential problems associated with the use of IV medication infusion devices [26, 32, 35], and four described the application of interventions, or explored the effects of interventions, developed to reduce errors that occur when using IV medication infusion devices [24, 27, 34, 39]. IV medication infusion devices, such as infusion pumps and smart pumps, are widely used in healthcare environments and allow more rigorous control in the process of administering continuously infused medications [45]. Smart pumps are recognized as useful devices for providing safe and effective nursing care [15]. However, the use of IV medication infusion devices requires an approach different from traditional rate monitoring by counting the number of fluid drops falling into the drip chamber [9]. Furthermore, many problems persist in the use of these devices, such as bypassing the drug library, device maintenance, malfunction, tubing/connection issues, and programming errors [32, 35]. None of the four studies that applied or evaluated interventions demonstrated statistically significant effects; all four lacked a control group [24, 27, 34, 39], and two used only post-test designs [24, 27]. Therefore, further research is needed to analyze errors in rate control related to IV medication infusion devices and to develop effective interventions.
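The traditional drop-counting approach mentioned above relies on simple gravity-infusion arithmetic. As a minimal illustrative sketch (the function name and example values are ours, not drawn from any included study): the ordered drip rate in drops per minute is the prescribed volume times the tubing's drop factor, divided by the infusion time.

```python
def drip_rate_gtt_per_min(volume_ml: float, time_min: float,
                          drop_factor_gtt_per_ml: float) -> float:
    """Drip rate (drops/min) for a gravity set: deliver volume_ml over
    time_min with tubing calibrated at drop_factor_gtt_per_ml drops/mL."""
    if time_min <= 0 or drop_factor_gtt_per_ml <= 0:
        raise ValueError("time and drop factor must be positive")
    return volume_ml * drop_factor_gtt_per_ml / time_min

# Example: 1000 mL over 8 h with a 20 gtt/mL macro-drip set
rate = drip_rate_gtt_per_min(1000, 8 * 60, 20)
print(round(rate))  # 42 drops/min
```

The fractional result must be rounded to a countable whole number of drops, which is one reason manual rate control is inherently approximate compared with pump-based delivery.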

Few studies have systematically evaluated and analyzed the severity of errors associated with rate control of IV medications. Among the 12 studies that reported error severity, five used the NCC MERP index, an internationally validated and reliable tool for assessing error severity, and one used the Severity Assessment Code (SAC) developed by the New South Wales Health Department. Six studies did not use a severity assessment tool. "Error severity" refers to the degree of potential or actual harm to patients [46]. Evaluating the severity of medication errors is vital to improving patient safety throughout the medication administration process, as it allows errors to be distinguished by severity so that risk mitigation strategies can focus on the errors with the greatest potential to harm patients [47, 48]. Specifically, errors associated with rate control of IV medications were categorized as A to E on the NCC MERP index and as groups 3 and 4 on the SAC. Additionally, these errors caused direct physical damage [2, 21] and necessitated additional medication to prevent side effects or toxicity [23]. Therefore, as errors in rate control of IV medications are likely to cause actual or potential harm to patients, research systematically evaluating and analyzing error severity should be conducted to provide a basis for developing effective risk reduction strategies.

Facilitators and barriers were identified as human, design, and system-related contributing factors. Among the human factors, “performance deficit” included failure to check equipment properly, tubing misplacement, inadequate monitoring, non-compliance with protocols and guidelines, and human handling errors with smart pumps. Nurses play a major role in drug administration; thus, their monitoring and practices related to IV medication infusion devices can influence patient health outcomes [ 3 , 49 ]. A major reason for the lack of monitoring was overwork, which was related to the complex working environment, work pressure, and high workload [ 3 , 11 , 49 ]. Moreover, two of the included studies identified high workload as a barrier to rate control of IV medications [ 23 , 37 ]. Therefore, to foster adequate monitoring of rate control of IV medications, a systematic approach to alleviating the complex working environment and work pressure should be considered.

Most facilitators and barriers in the devices category were related to IV medication infusion devices. In particular, expanding pump capabilities [ 26 ], monitoring pump programming [ 27 ], standardization [ 22 ], and using a pump [ 33 ] can facilitate rate control of IV medications. However, unexpected equipment faults are significant barriers, as identified in five studies among the included studies [ 2 , 20 , 25 , 35 , 38 ]. Moreover, the design [ 23 , 24 ], user-friendliness [ 21 ], connectivity to electronic systems [ 30 ], and completeness of drug libraries [ 33 , 35 ] are factors that can affect rate control of IV medications. Therefore, it is important to improve, monitor, and manage IV medication infusion devices so that they do not become barriers. Moreover, because rate errors caused by other factors can be prevented by devices, active utilization and systematic management of devices at the system level are required.

Although there are many benefits of infusion and smart pumps for reducing errors in rate control of IV medications, they cannot be used in all hospitals because of the limitation of medical resources. The standard infusion set, which is a device for controlling the rate of IV medication by a controller [ 9 ], is widely used in outpatient as well as inpatient settings [ 32 ]. Devices for monitoring the IV infusion rate, such as FIVA™ (FIVAMed Inc, Halifax, Canada) and DripAssist (Shift Labs Inc, Seattle, USA), which can continuously monitor flow rate and volume with any gravity drip set, have been commercialized [ 33 ]. However, they have not been widely used in hospitals. Therefore, developing novel IV infusion rate monitoring devices that are simple to use, can be used remotely, and are affordable for developing and underdeveloped countries can help nurses to reduce their workloads in monitoring IV infusion rates and thus maintain patient safety.
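Drip monitors of the kind described above essentially automate drop counting. A minimal sketch of the underlying computation (our illustration, not any vendor's algorithm): given timestamps of detected drops and the tubing's drop factor, the observed flow rate follows directly.

```python
def observed_flow_ml_per_h(drop_times_s: list[float],
                           drop_factor_gtt_per_ml: float) -> float:
    """Estimate flow rate (mL/h) from timestamps (in seconds) of drops
    detected falling through the drip chamber of a gravity set."""
    if len(drop_times_s) < 2:
        raise ValueError("need at least two drops to estimate a rate")
    elapsed_s = drop_times_s[-1] - drop_times_s[0]
    drops_delivered = len(drop_times_s) - 1  # intervals between drops
    volume_ml = drops_delivered / drop_factor_gtt_per_ml
    return volume_ml * 3600 / elapsed_s

# Example: 21 drops observed over 30 s with a 20 gtt/mL set
times = [i * 1.5 for i in range(21)]  # one drop every 1.5 s
print(round(observed_flow_ml_per_h(times, 20)))  # 120 mL/h
```

A device built on this computation can alarm when the observed rate drifts from the ordered rate, replacing intermittent manual counts with continuous surveillance.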

Most facilitators and barriers were system-related contributing factors, most of which belonged to "policies and procedures." In four studies, the absence of hospital policies or a culture related to rate control of IV medications was identified as a barrier [2, 30, 32, 33]. Medication errors related to incorrect rate control should be approached at the macroscopic level, such as through institutional policies and safety cultures. Therefore, large-scale research including more diverse departments and institutions needs to be conducted.

The second most common sub-categories among system-related contributing factors were "frequent interruptions and distractions" and "training." Although nurses experienced frequent interruptions and distractions during work, only one of the included studies examined an intervention developed to create an environment with fewer interruptions [31]. Additionally, four studies found that education for nurses directly involved in medication administration is essential [22, 23, 24, 36]. Therefore, education and a work environment that support a safety culture should be fostered to improve IV rate control.

Based on the resolutions for barriers to rate control of IV medications, the key groups relevant to rate control of IV medications are HCPs, healthcare administrators, and engineers specializing in IV medication infusion devices. HCPs directly involved in the preparation and administration of IV medications need to enhance their knowledge of drugs, raise awareness of the importance of rate control of IV medications, and improve performance related to IV infusion device monitoring. Engineers specializing in IV medication infusion devices should develop these devices by integrating the various information technologies used in clinical settings; they should also identify issues related to these devices and continuously enhance both software and hardware. Healthcare administrators play a crucial role in establishing and leading interdisciplinary or inter-institution collaborations. They should foster leadership, build a patient safety culture within the organization, and implement training, interventions, and policies for correct rate control of IV medications. Decreasing medication errors, including errors in IV rate control, is closely linked to these key groups [50, 51, 52, 53], and multidisciplinary collaboration is emphasized for quality care [54, 55, 56, 57]. Therefore, each key group should perform its role and cooperate for appropriate IV rate control within a structured system.

This review has some limitations. As no randomized controlled trials were included, causal relationships between wrong-rate errors and their facilitators or barriers could not be determined. Moreover, the included literature may be limited because we included only studies published in English and excluded gray literature. Because we did not evaluate the quality of the included studies, there may be a risk of bias in data collection and analysis. Despite these limitations, this study provides a meaningful assessment of published studies related to rate control of IV medications, and it offers an important basis for new patient safety considerations in IV medication administration when determining future policies and device development.

Conclusions

The findings of this review suggest that further research needs to be conducted to develop effective interventions to improve the practice of IV rate control. Moreover, given the rapid growth of technology in medical settings, research on IV medication infusion devices should be conducted. Additionally, to establish effective risk reduction strategies, it is necessary to systematically evaluate and analyze the severity of errors related to rate control of IV medications. Several facilitators and barriers to rate control of IV medications were identified in this review; to ensure patient safety and quality care, interventions and policy changes related to education and the work environment are required, and a device capable of monitoring the flow of IV medications should be developed. This review will be useful for HCPs, hospital administrators, and engineers specializing in IV medication infusion devices to minimize errors in rate control of IV medications and improve patient safety.

Availability of data and materials

The corresponding author can provide the datasets that were utilized and/or examined during the present study upon reasonable request.

Abbreviations

AE: Adverse event

HCP: Healthcare provider

ICU: Intensive care unit

IV: Intravenous

JBI: Joanna Briggs Institute

NCC MERP: National Coordinating Council for Medication Error Reporting and Prevention

Cousins DD, Heath WM. The National Coordinating Council for medication error reporting and prevention: promoting patient safety and quality through innovation and leadership. Jt Comm J Qual Patient Saf. 2008;34(12):700–2. https://doi.org/10.1016/s1553-7250(08)34091-4 .

Kandil M, Sayyed T, Emarh M, Ellakwa H, Masood A. Medication errors in the obstetrics emergency ward in a low resource setting. J Matern Fetal Neonatal Med. 2012;25(8):1379–82. https://doi.org/10.3109/14767058.2011.636091 .

Parry AM, Barriball KL, While AE. Factors contributing to registered nurse medication administration error: a narrative review. Int J Nurs Stud. 2015;52(1):403–20. https://doi.org/10.1016/j.ijnurstu.2014.07.003 .

Vrbnjak D, Denieffe S, O’Gorman C, Pajnkihar M. Barriers to reporting medication errors and near misses among nurses: a systematic review. Int J Nurs Stud. 2016;63:162–78. https://doi.org/10.1016/j.ijnurstu.2016.08.019 .

Elliott RA, Camacho E, Jankovic D, Sculpher MJ, Faria R. Economic analysis of the prevalence and clinical and economic burden of medication error in England. BMJ Qual Saf. 2021;30(2):96–105. https://doi.org/10.1136/bmjqs-2019-010206 .

U.S. Food and Drug Administration (FDA) . Working to reduce medication errors [Internet]. U.S. Food and Drug Administration (FDA). 2019. Available from: https://www.fda.gov/drugs/information-consumers-and-patients-drugs/working-reduce-medication-errors . Cited 27 Dec 2022

Institute of Medicine (US). Committee on quality of health care in America. In: Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. Washington: National Academies Press (US); 2000. PMID: 25077248.

Escrivá Gracia J, Brage Serrano R, Fernández GJ. Medication errors and drug knowledge gaps among critical-care nurses: a mixed multi-method study. BMC Health Serv Res. 2019;19(1):640. https://doi.org/10.1186/s12913-019-4481-7 .

Park K, Lee J, Kim SY, Kim J, Kim I, Choi SP, et al. Infusion volume control and calculation using metronome and drop counter based intravenous infusion therapy helper. Int J Nurs Pract. 2013;19(3):257–64. https://doi.org/10.1111/ijn.12063 .

Marwitz KK, Giuliano KK, Su WT, Degnan D, Zink RJ, DeLaurentis P. High-alert medication administration and intravenous smart pumps: a descriptive analysis of clinical practice. Res Social Adm Pharm. 2019;15(7):889–94. https://doi.org/10.1016/j.sapharm.2019.02.007 .

Kale A, Keohane CA, Maviglia S, Gandhi TK, Poon EG. Adverse drug events caused by serious medication administration errors. BMJ Qual Saf. 2012;21(11):933–8. https://doi.org/10.1136/bmjqs-2012-000946 .

Yoon J, Yug JS, Ki DY, Yoon JE, Kang SW, Chung EK. Characterization of medication errors in a medical intensive care unit of a university teaching hospital in South Korea. J Patient Saf. 2022;18(1):1–8. https://doi.org/10.1097/pts.0000000000000878 .

McDowell SE, Mt-Isa S, Ashby D, Ferner RE. Where errors occur in the preparation and administration of intravenous medicines: a systematic review and Bayesian analysis. Qual Saf Health Care. 2010;19(4):341–5. https://doi.org/10.1136/qshc.2008.029785 .

National Coordinating Council for Medication Error Reporting and Prevention. Taxonomy of medication errors. NCC MERP. 2001. Available from: https://www.nccmerp.org/taxonomy-medication-errors . Cited 27 Dec 2022

Moreira APA, Carvalho MF, Silva R, Marta CB, Fonseca ERD, Barbosa MTS. Handling errors in conventional and smart pump infusions: a systematic review with meta-analysis. Rev Esc Enferm USP. 2020;54:e03562. https://doi.org/10.1590/s1980-220x2018032603562 .

Sutherland A, Canobbio M, Clarke J, Randall M, Skelland T, Weston E. Incidence and prevalence of intravenous medication errors in the UK: a systematic review. Eur J Hosp Pharm. 2020;27(1):3–8. https://doi.org/10.1136/ejhpharm-2018-001624 .

Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

Levac D, Colquhoun H, O’Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5:1–9.

Peters MDJ, Marnie C, Tricco AC, Pollock D, Munn Z, Alexander L, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Implement. 2021;19(1):3–10. https://doi.org/10.1097/xeb.0000000000000277 .

Short TG, O’Regan A, Lew J, Oh TE. Critical incident reporting in an anaesthetic department quality assurance programme. Anaesthesia. 1993;48(1):3–7. https://doi.org/10.1111/j.1365-2044.1993.tb06781.x .

Singleton RJ, Webb RK, Ludbrook GL, Fox MA. The Australian incident monitoring study. Problems associated with vascular access: an analysis of 2000 incident reports. Anaesth Intensive Care. 1993;21(5):664–9. https://doi.org/10.1177/0310057x9302100531 .

Goldspiel BR, DeChristoforo R, Daniels CE. A continuous-improvement approach for reducing the number of chemotherapy-related medication errors. Am J Health Syst Pharm. 2000;57(Suppl 4):S4–9. https://doi.org/10.1093/ajhp/57.suppl_4.S4 . PMID: 11148943.

Taxis K, Barber N. Causes of intravenous medication errors: an ethnographic study. Qual Saf Health Care. 2003;12(5):343–7. https://doi.org/10.1136/qhc.12.5.343 .

Wetterneck TB, Skibinski KA, Roberts TL, Kleppin SM, Schroeder ME, Enloe M, et al. Using failure mode and effects analysis to plan implementation of smart i.v. pump technology. Am J Health Syst Pharm. 2006;63(16):1528–38. https://doi.org/10.2146/ajhp050515 .

Rinke ML, Shore AD, Morlock L, Hicks RW, Miller MR. Characteristics of pediatric chemotherapy medication errors in a national error reporting database. Cancer. 2007;110(1):186–95. https://doi.org/10.1002/cncr.22742 .

Nuckols TK, Bower AG, Paddock SM, Hilborne LH, Wallace P, Rothschild JM, et al. Programmable infusion pumps in ICUs: an analysis of corresponding adverse drug events. J Gen Intern Med. 2008;23:41–5.

Evans RS, Carlson R, Johnson KV, Palmer BK, Lloyd JF. Enhanced notification of infusion pump programming errors. Stud Health Technol Inform. 2010;160(Pt 1):734–8 PMID: 20841783.

Ligi I, Millet V, Sartor C, Jouve E, Tardieu S, Sambuc R, Simeoni U. Iatrogenic events in neonates: beneficial effects of prevention strategies and continuous monitoring. Pediatrics. 2010;126(6):e1461–8. https://doi.org/10.1542/peds.2009-2872 .

Rodriguez-Gonzalez CG, Herranz-Alonso A, Martin-Barbero ML, Duran-Garcia E, Durango-Limarquez MI, Hernández-Sampelayo P, Sanjurjo-Saez M. Prevalence of medication administration errors in two medical units with automated prescription and dispensing. J Am Med Inform Assoc. 2012;19(1):72–8. https://doi.org/10.1136/amiajnl-2011-000332 .

Ohashi K, Dykes P, McIntosh K, Buckley E, Wien M, Bates DW. Evaluation of intravenous medication errors with smart infusion pumps in an academic medical center. AMIA Annu Symp Proc. 2013;2013:1089–98 PMID: 24551395; PMCID: PMC3900131.

Prakash V, Koczmara C, Savage P, Trip K, Stewart J, McCurdie T, et al. Mitigating errors caused by interruptions during medication verification and administration: interventions in a simulated ambulatory chemotherapy setting. BMJ Qual Saf. 2014;23(11):884–92. https://doi.org/10.1136/bmjqs-2013-002484 .

Schnock KO, Dykes PC, Albert J, Ariosto D, Call R, Cameron C, et al. The frequency of intravenous medication administration errors related to smart infusion pumps: a multihospital observational study. BMJ Qual Saf. 2017;26(2):131–40. https://doi.org/10.1136/bmjqs-2015-004465 .

Lyons I, Furniss D, Blandford A, Chumbley G, Iacovides I, Wei L, et al. Errors and discrepancies in the administration of intravenous infusions: a mixed methods multihospital observational study. BMJ Qual Saf. 2018;27(11):892–901. https://doi.org/10.1136/bmjqs-2017-007476 .

Schnock KO, Dykes PC, Albert J, Ariosto D, Cameron C, Carroll DL, et al. A multi-hospital before-after observational study using a point-prevalence approach with an infusion safety intervention bundle to reduce intravenous medication administration errors. Drug Saf. 2018;41(6):591–602. https://doi.org/10.1007/s40264-018-0637-3 .

Taylor MA, Jones R. Risk of medication errors with infusion pumps: a study of 1,004 events from 132 hospitals across Pennsylvania. Patient Safety. 2019;1(2):60–9. https://doi.org/10.33940/biomed/2019.12.7 .

Schilling S, Koeck JA, Kontny U, Orlikowsky T, Erdmann H, Eisert A. High-alert medications for hospitalised paediatric patients - a two-step survey among paediatric clinical expert pharmacists in Germany. Pharmazie. 2022;77(6):207–15. https://doi.org/10.1691/ph.2022.12025 .

Nguyen HT, Pham HT, Vo DK, Nguyen TD, van den Heuvel ER, Haaijer-Ruskamp FM, Taxis K. The effect of a clinical pharmacist-led training programme on intravenous medication errors: a controlled before and after study. BMJ Qual Saf. 2014;23(4):319–24. https://doi.org/10.1136/bmjqs-2013-002357 .

Bagheri-Nesami M, Esmaeili R, Tajari M. Intravenous medication administration errors and their causes in cardiac critical care units in Iran. Mater Sociomed. 2015;27(6):442–6. https://doi.org/10.5455/msm.2015.27.442-446 .

Tsang LF, Tsang WY, Yiu KC, Tang SK, Sham SYA. Using the PDSA cycle for the evaluation of pointing and calling implementation to reduce the rate of high-alert medication administration incidents in the United Christian Hospital of Hong Kong, China. J Patient Safety Qual Improv. 2017;5(3):577–83. https://doi.org/10.22038/PSJ.2017.9043 .

Westbrook JI, Rob MI, Woods A, Parry D. Errors in the administration of intravenous medications in hospital and the role of correct procedures and nurse experience. BMJ Qual Saf. 2011;20(12):1027–34. https://doi.org/10.1136/bmjqs-2011-000089 .

Daker-White G, Hays R, McSharry J, Giles S, Cheraghi-Sohi S, Rhodes P, Sanders C. Blame the patient, blame the doctor or blame the system? A meta-synthesis of qualitative studies of patient safety in primary care. PLoS ONE. 2015;10(8):e0128329. https://doi.org/10.1371/journal.pone.0128329 .

Kucukarslan SN, Peters M, Mlynarek M, Nafziger DA. Pharmacists on rounding teams reduce preventable adverse drug events in hospital general medicine units. Arch Intern Med. 2003;163(17):2014–8. https://doi.org/10.1001/archinte.163.17.2014 .

Lemieux-Charles L, McGuire WL. What do we know about health care team effectiveness? A review of the literature. Med Care Res Rev. 2006;63(3):263–300. https://doi.org/10.1177/1077558706287003 .

O’Leary KJ, Buck R, Fligiel HM, Haviley C, Slade ME, Landler MP, et al. Structured interdisciplinary rounds in a medical teaching unit: improving patient safety. Arch Intern Med. 2011;171(7):678–84. https://doi.org/10.1001/archinternmed.2011.128 .

Yu D, Obuseh M, DeLaurentis P. Quantifying the impact of infusion alerts and alarms on nursing workflows: a retrospective analysis. Appl Clin Inform. 2021;12(3):528–38. https://doi.org/10.1055/s-0041-1730031 . Epub 2021 Jun 30. PMID: 34192773; PMCID: PMC8245209.

Gates PJ, Baysari MT, Mumford V, Raban MZ, Westbrook JI. Standardising the classification of harm associated with medication errors: the harm associated with medication error classification (HAMEC). Drug Saf. 2019;42(8):931–9. https://doi.org/10.1007/s40264-019-00823-4 .

Assunção-Costa L, Ribeiro Pinto C, Ferreira Fernandes Machado J, Gomes Valli C, de Portela Fernandes Souza LE, Dean FB. Validation of a method to assess the severity of medication administration errors in Brazil: a study protocol. J Public Health Res. 2022;11(2):2022. https://doi.org/10.4081/jphr.2022.2623 .

Walsh EK, Hansen CR, Sahm LJ, Kearney PM, Doherty E, Bradley CP. Economic impact of medication error: a systematic review. Pharmacoepidemiol Drug Saf. 2017;26(5):481–97. https://doi.org/10.1002/pds.4188 .

Khalil H, Shahid M, Roughead L. Medication safety programs in primary care: a scoping review. JBI Database Syst Rev Implement Rep. 2017;15(10):2512–26. https://doi.org/10.11124/jbisrir-2017-003436 .

Atey TM, Peterson GM, Salahudeen MS, Bereznicki LR, Simpson T, Boland CM, et al. Impact of partnered pharmacist medication charting (PPMC) on medication discrepancies and errors: a pragmatic evaluation of an emergency department-based process redesign. Int J Environ Res Public Health. 2023;20(2):1452. https://doi.org/10.3390/ijerph20021452 .

Atey TM, Peterson GM, Salahudeen MS, Bereznicki LR, Wimmer BC. Impact of pharmacist interventions provided in the emergency department on quality use of medicines: a systematic review and meta-analysis. Emerg Med J. 2023;40(2):120–7. https://doi.org/10.1136/emermed-2021-211660 .

Hanifin R, Zielenski C. Reducing medication error through a collaborative committee structure: an effort to implement change in a community-based health system. Qual Manag Health Care. 2020;29(1):40–5. https://doi.org/10.1097/qmh.0000000000000240 .

Kirwan G, O’Leary A, Walsh C, Grimes T. Economic evaluation of a collaborative model of pharmaceutical care in an Irish hospital: cost-utility analysis. HRB Open Res. 2023;6:19. https://doi.org/10.12688/hrbopenres.13679.1 .

Billstein-Leber M, Carrillo CJD, Cassano AT, Moline K, Robertson JJ. ASHP guidelines on preventing medication errors in hospitals. Am J Health Syst Pharm. 2018;75(19):1493–517. https://doi.org/10.2146/ajhp170811 .

Lewis KA, Ricks TN, Rowin A, Ndlovu C, Goldstein L, McElvogue C. Does simulation training for acute care nurses improve patient safety outcomes: a systematic review to inform evidence-based practice. Worldviews Evid Based Nurs. 2019;16(5):389–96. https://doi.org/10.1111/wvn.12396 .

Mardani A, Griffiths P, Vaismoradi M. The role of the nurse in the management of medicines during transitional care: a systematic review. J Multidiscip Healthc. 2020;13:1347–61. https://doi.org/10.2147/jmdh.S276061 .

Naseralallah L, Stewart D, Price M, Paudyal V. Prevalence, contributing factors, and interventions to reduce medication errors in outpatient and ambulatory settings: a systematic review. Int J Clin Pharm. 2023. https://doi.org/10.1007/s11096-023-01626-5 .

Funding

This research was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: RS-2020-KD000077) and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2020R1A6A1A03041989). This work was also supported by the Brain Korea 21 FOUR Project funded by the National Research Foundation (NRF) of Korea, Yonsei University College of Nursing.

Author information

Authors and affiliations

College of Nursing, Mo-Im Kim Nursing Research Institute, Yonsei University, Seoul, Korea

Jeongok Park

University of Pennsylvania School of Nursing, Philadelphia, PA, USA

Sang Bin You

Department of Nursing, Hansei University, 30, Hanse-Ro, Gunpo-Si, 15852, Gyeonggi-Do, Korea

Gi Wook Ryu

College of Nursing and Brain Korea 21 FOUR Project, Yonsei University, Seoul, Korea

Youngkyung Kim

Contributions

Conceptualization: JP; study design: JP; data collection: GWR, YK, SBY; data analysis: JP, GWR, YK, SBY; administration: JP; funding acquisition: JP; writing—original draft: JP, GWR, YK; writing—review and editing: JP, YK.

Corresponding authors

Correspondence to Gi Wook Ryu or Youngkyung Kim.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist.

Additional file 2:

Search queries and strategies by electronic databases.

Additional file 3:

Studies included in the data analysis.


About this article

Cite this article.

Park, J., You, S.B., Ryu, G.W. et al. Attributes of errors, facilitators, and barriers related to rate control of IV medications: a scoping review. Syst Rev 12, 230 (2023). https://doi.org/10.1186/s13643-023-02386-z

Received: 15 May 2023

Accepted: 08 November 2023

Published: 13 December 2023

DOI: https://doi.org/10.1186/s13643-023-02386-z

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

  • Medication safety
  • Patient safety
  • Quality improvement
  • Safety culture

Systematic Reviews

ISSN: 2046-4053



SYSTEMATIC REVIEW article

Measuring Resilience for Chinese-Speaking Populations: A Systematic Review of Chinese Resilience Scales

Zhenyu Tian

  • 1 Department of Communication Studies, College of Wooster, Wooster, OH, United States
  • 2 School of Journalism and Communication, Tsinghua University, Beijing, China
  • 3 Department of Communication, University of South Florida, Tampa, FL, United States

Introduction: Despite the rapid growth of interdisciplinary resilience research in Chinese contexts, no study has systematically reviewed individual-level measurement scales for Chinese-speaking populations. We report a systematic review of scales developed for or translated/adapted to Chinese-speaking contexts, where we assessed how widely used scales fare in terms of their psychometric qualities.

Methods: Studies included in this review must have been published in peer-reviewed English or Chinese journals between 2015 and 2020 and have used self-report resilience scales with Chinese-speaking populations. Searches were conducted in PsycINFO, CNKI (completed in May 2021), and PubMed (completed in January 2024). We developed coding schemes for extracting relevant data and adapted and applied an existing evaluation framework to assess the most frequently used resilience scales against seven methodological criteria.

Results: Analyses of 963 qualified studies suggested that Chinese resilience scales were used in a diverse range of study contexts. Among 85 unique resilience measures, we highlighted and evaluated the three most frequently used translated scales and three locally developed scales (nine scales in total, including variations such as short forms). In short, resilience studies in Chinese contexts relied heavily on the translated 25-item Connor-Davidson Resilience Scale, which scored moderately on overall quality. The locally developed Resilience Scale for Chinese Adolescents and the Essential Resilience Scale received the best ratings but could use further development.

Discussion: We discussed how future work may advance widely used scales, and specified seven methodological recommendations for future resilience scale development with existing and new scales in and beyond the Chinese study contexts. We further addressed issues and challenges in measuring resilience as a process and called on researchers to further develop/evaluate process measures for Chinese-speaking populations.

1 Introduction

Resilience has become a catch-all term for how individuals, communities, and nations cope with and adapt to disruptions, adversities, or stressors. Pioneered in developmental psychology, resilience scholarship has flourished across multiple areas of psychology and related disciplines (e.g., anthropology, communication, education, and medicine; Southwick et al., 2014 ; Houston and Buzzanell, 2020 ). Researchers have conceptualized resilience as a trait, process, and/or “positive” outcome ( Southwick et al., 2014 ) and have offered numerous operational definitions and measures ( Windle et al., 2011 ). Additionally, resilience scholarship has grown beyond its Western (and English-speaking) academic origins and cultural boundaries ( Southwick et al., 2014 ), responding to critiques of earlier research being acultural/acontextual ( Ungar, 2015 ).

Contributing to this movement are resilience studies in Chinese (speaking) contexts, such as how Chinese internet marketers’ “psychological resilience” could promote their sense of career success when shifting work conditions were complicated by the pandemic ( Wang and Gao, 2022 ), and how, through resilience, social support buffered Chinese college students against anxiety due to experiencing prolonged lockdowns ( He et al., 2022 ). Overall, studies in Chinese-speaking regions and cultures often attend to disruptions salient in, if not unique to, Chinese cultural contexts (e.g., left-behind children of migrant workers; academic stress associated with college entrance examination). Scholars have developed, translated, and adapted resilience measures for Chinese-speaking populations ( Liu et al., 2017 ). However, assessing resilience in Chinese (and other non-English-speaking) contexts raises issues of translations and adaptations of the construct’s conceptualization and operationalization across cultural groups ( Farh et al., 2006 ), compounded by the multiplicity of available measures.

Measurement reliability and validity are critical for obtaining scientifically useful information about resilience ( Windle et al., 2011 ), and issues of contextualization must be considered when developing resilience measures across cultures ( Farh et al., 2006 ; Southwick et al., 2014 ). Given the rise of interdisciplinary resilience studies in Chinese contexts since the mid-2010s and that Chinese remains the language with the second largest native-speaking population ( Ethnologue, 2023 ), 1 it is important to (a) identify frequently used scales in various contexts, (b) assess how widely used scales (both adapted from English and locally developed) fare in terms of their psychometric qualities, and (c) outline directions for future research on resilience measurement in Chinese contexts. In this study, we address these goals with a focus on individual-level resilience in Chinese-speaking populations and analyze peer-reviewed articles published in either English or Chinese from three major databases ( Jackson and Kuriyama, 2019 ). Specifically, we focus on articles published over the six years of 2015–2020, from when a surge of resilience scholarship in Chinese contexts began to occur to the start of the COVID pandemic. We evaluate commonly used measures in Chinese contexts using a framework adapted from Windle et al. (2011) , which builds on a widely cited methodological publication ( Terwee et al., 2007 ) to organize a range of criteria for evaluating a measure’s psychometric quality. In applying their framework to cross-cultural/linguistic work, we identify additional qualities crucial for translational work. In what follows, we review conceptions of resilience and the emergence of resilience research in China, explain how we adapt Windle et al.’s (2011) framework for assessing resilience scales, and articulate our aims.

2 Literature review

2.1 Conceptualizations of resilience

Resilience research is characterized by “definitional diversity,” which has raised confusion about what resilience is and how to best characterize processes and/or outcomes of resilience ( Luthar et al., 2000 ). Reviews written in both English and Chinese commonly discern three foci—trait, outcome, and process—for understanding resilience (e.g., Fletcher and Sarkar, 2013 ). The trait view treats resilience as an individual characteristic, a stable trait/set of traits, or innate qualities people possess ( Block and Block, 1980 ; Connor and Davidson, 2003 ; Campbell-Sills and Stein, 2007 ). When people are faced with risks and adversities, specific resilience traits (e.g., ability to cope with change, persistence) function as a protective factor to enable individual adaptation and thriving. Scholars have problematized seeing resilience as a static trait for its acontextual assessments applied to complex social and cultural contexts (e.g., people facing poverty) and for the implied assumption that only some people have resilience or are resilient ( Walsh, 2002 ; Ungar, 2004 ).

The outcome view conceptualizes resilience as the presence of positive developmental and/or adaptive outcomes, such as “healthy” attributes and behaviors ( Werner et al., 1971 ), among different demographic groups that have experienced conditions commonly considered “unhealthy,” “risky,” or “adverse” ( Masten and Barnes, 2018 ). Early work in developmental psychology reflects this approach, such as studies on the stress resistance and thriving of children living with schizophrenic parents ( Garmezy, 1974 ). These studies typically consider protective factors—both internal, such as personal resilient qualities (e.g., self-esteem), and external, such as environmental considerations (e.g., family support and community climate)—that enable positive adaptive responses to adversities ( Richardson, 2002 ). In short, resilience is the presence of positive results despite difficulties, according to the outcome view.

The third view conceptualizes resilience as a “dynamic process” ( Luthar et al., 2000 ), neither just the protective factors nor the adaptive outcome, but rather elements involved in experiencing, adapting to, and transforming adversities over time and across situations ( Buzzanell, 2010 ; Windle, 2010 ; Fletcher and Sarkar, 2013 ). Richardson (2002) theorized that resilience begins with disruptions to “biopsychospiritual” homeostasis. An expansive term, “process” may refer to (a) underlying mechanisms by which protective factors interact with risk factors to enable adaptation ( Rutter, 1990 ), (b) how humans as open systems successfully adapt to disturbances ( Masten, 2015 ), and (c) ways in which individuals harness resources from contexts to sustain well-being despite difficulties ( Panter-Brick and Leckman, 2013 ). Windle (2010) defined resilience as the process of “effectively negotiating, adapting to, or managing significant sources of stress or trauma” facilitated by “assets and resources within the individual, their life and environment” (p. 163), the nature of which may vary across the life course.

Scholars have not only identified distinct conceptualizations of resilience but also suggested guidelines for identifying appropriate resilience scales. This latter area of scholarship is less developed, as Windle et al. (2011) contended in their attempt to create a “robust evaluation framework” for (English) resilience scales (p. 2).

2.2 Resilience research in Chinese contexts

Resilience research in China and within geopolitical borders politically, historically, and culturally affiliated with China started in the 2000s ( Yu and Zhang, 2005 ). Although resilience has been considered a construct developed by “Western” academics, particularly psychologists in the United States, scholars soon found similar ideas rooted in Chinese cultural and linguistic traditions. Such commonalities prompted researchers, such as Ungar (2009 , 2015) , to argue that although resilience research centers around Western psychological discourse, resilience phenomena manifest in universal and specific ways within and across cultural borders and through diverse ways of living and being.

In Chinese, resilience has been translated into “ fu yuan li ” (ability to recover), “ kang ni li ” (ability to resist adversity), “ xin li tan xing ” (emphasizing a “psychological” trait, the idea of “bouncing back,” and a sense of elasticity), and “ ren xing ” (a bendable, stretchable feature) by scholars in Mainland China, Hong Kong, and Taiwan ( Liu et al., 2015 ). Some scholars draw from Chinese idioms, Taoist and Confucianist values, and traditional dialectical (co-dependent) views on adversity ( ni jing ) and growth or fortune to suggest that “ ren xing ”—a naturally developed term—best captures the meanings of resilience in everyday Chinese (e.g., Yu and Zhang, 2007 ; Hu and Gan, 2008 ). Diverging translations of resilience reflect conceptual inconsistencies. For example, translating resilience as “ fu yuan li ” implies an “ability” or “capacity,” whereas “ ren xing ” implies that resilience is a “trait” or “feature.”

Scholars also have translated and/or created resilience scales for use in Chinese cultural contexts. Popular scales such as the Connor–Davidson Resilience Scale (CD-RISC; Connor and Davidson, 2003 ) and the Resilience Scale (RS; Wagnild and Young, 1993 ) have been translated, validated, and sometimes adapted to Chinese contexts. These scales also have inspired the development of localized scales (e.g., Hu and Gan, 2008 ). Prior research, however, has not systematically assessed which Chinese resilience scales are used most commonly nor evaluated their psychometric properties.

2.3 Existing systematic reviews and current goals

Windle et al. (2011) provided a “robust methodological review” of English resilience measures for researchers and clinicians whose selection of instruments previously might have been “arbitrary and inappropriate” (p. 2), by applying the psychometric properties proposed by Terwee et al. (2007) to assess English resilience scales (see Table 1 ). 2 Specifically, Windle et al. (2011) identified 19 measures, including 15 original measures and four variations (e.g., CD-RISC-25 vs. CD-RISC-10), and they reported ratings of these measures on their content validity, internal consistency, construct validity, reproducibility/reliability, and interpretability. These criteria offer a useful framework for the current study, given the influence of both Windle et al.'s (2011) and Terwee et al.'s (2007) frameworks and their role in informing at least two recent systematic reviews on resilience measures ( Zhou et al., 2020 ; Windle et al., 2022 ). 3


Table 1 . Scale evaluation rubric (adapted from Windle et al., 2011 and Terwee et al., 2007 ).

Although no systematic review of individual-level resilience measures in Chinese contexts exists, a few narrative literature reviews written in Chinese provide insights for identifying and selecting measures. Specifically, Liu et al. (2017) discussed the progress, current understandings and models of resilience, and future directions of “domestic and foreign/international” resilience studies. They provided a list of commonly used scales developed in Chinese or other languages (mostly English). They also reported Cronbach’s α reliability for several listed scales. Similarly, Liu et al. (2015) reviewed popular scales organized around investigated populations (e.g., scales for children and teenagers). For each scale, the authors summarized its Cronbach’s α value, dimension(s), item numbers, whether it has been translated and validated, and study populations (e.g., students and nurses). However, as narrative reviews, these authors’ identification, selection, and evaluation of scales were not driven by a comprehensive review process.

In this project, we build on Windle et al.’s (2011) review to systematically synthesize Chinese resilience studies, extending it in two ways. First, applying it to a more recent period may reveal new trends, issues, and findings (or lingering problems) concerning resilience. Therefore, we review Chinese resilience studies published between 2015 and 2020. This time frame was informed by the search results of the first database that showed a noticeable increase in resilience research in 2016; hence, to map trends in the growing literature but in a still manageable scope, we limited the review to studies published between 2015 and 2020. Second, Windle et al. (2011) provided guidelines for resilience scales based on studies reported in English. In comparison, we address similar goals in the context of translating and developing scholarship from one language to another. We, hence, add two criteria to assess the extent to which (a) cultural and linguistic appropriateness is addressed when developing Chinese resilience scales ( Farh et al., 2006 ) and (b) the factor structure of resilience measures in Chinese contexts is examined and replicated (see Table 1 ).

Our first goal is to identify the contexts (e.g., left-behind children, urban–rural migration) where resilience testing is relevant, as illuminated by the included studies, because resilience is contextualized in disruptive events and experts from multiple disciplines have emphasized how context matters for studying resilience ( Southwick et al., 2014 ). We further aim to (a) identify the most frequently used resilience measurement scales for Chinese-speaking populations in studies from 2015 to 2020 and examine these scales' popularity in relation to specific study contexts, and (b) address how such scales fare in terms of their psychometric properties. In so doing, we not only capture research developments that laid the foundation for the current boom in Chinese resilience scholarship but also make informed recommendations and guidelines for advancing and selecting research instruments in the future.

3 Methods

The review process and report were guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA; Moher et al., 2015 ; Page et al., 2021 ).

3.1 Eligibility criteria

Studies included in this systematic review met the following inclusion criteria. First, included articles must have been (1) published in peer-reviewed outlets between 2015 and 2020, (2) based on primary study results from the use of self-report resilience measurement scales, and (3) full-text accessible. Second, the primary language used by the population of interest in the included studies must have been Chinese. 4 Third, included studies must have operationalized resilience as (1) a single concept, (2) multiple related subscales/sub-dimensions, or (3) part of a broader construct (e.g., positive psychological capital; Luthans et al., 2007 ) that was treated as an independent subscale in analyses (see Table 2 ).


Table 2 . Eligibility criteria.
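As a rough illustration, the inclusion rules summarized above could be expressed as a single screening function. This is a hypothetical sketch: the field names are ours, not from the authors' codebook.

```python
# Hypothetical sketch of the review's eligibility logic.
# All field names below are illustrative assumptions, not the authors' schema.
def is_eligible(record):
    published_in_window = 2015 <= record["year"] <= 2020
    peer_reviewed = record["peer_reviewed"]
    full_text = record["full_text_accessible"]
    primary_self_report = record["uses_self_report_resilience_scale"]
    chinese_population = record["primary_language"] == "Chinese"
    operationalization_ok = record["resilience_operationalization"] in {
        "single_concept",
        "subscales",
        "independent_subscale_of_broader_construct",
    }
    return all([published_in_window, peer_reviewed, full_text,
                primary_self_report, chinese_population, operationalization_ok])
```

A record failing any one criterion (e.g., published in 2013) would be excluded.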

3.2 Database searches

A systematic search was first conducted between July 2020 and May 2021 in two databases. For studies published in English, we chose PsycINFO for its coverage of more than 2,000 journals across multiple related disciplines (e.g., psychology, health, sociology, management, and communication). We then used the China National Knowledge Infrastructure (CNKI) to access reports published in Chinese. CNKI is the largest academic database in China, containing a wide range of sources from all disciplines, with a built-in search system that enables targeted searches (e.g., by quality and topic). To ensure our systematic review was comprehensive, we conducted an additional search in PubMed in January 2024. In the early scoping stage, the first author emailed six researchers cited in other reviews or studies to obtain original Chinese scales not available through databases, two of whom responded.

In PsycINFO and PubMed, the Boolean search query “(resilience OR resiliency OR resilient) AND (China OR Chinese)” was used. Filters applied to PsycINFO included “Linked Full Text” and “Peer Reviewed.” The first PsycINFO search was run in July 2020 and complemented by another search in May 2021, with an added date limiter (“2020/07/01–2020/12/31”) to complete the 2020 results. Based on experience from these searches, additional filters were applied to the PubMed results: “Humans,” “Chinese,” “English,” and published between “2015/01/01” and “2020/12/31.” The search query in CNKI, with the adjusted time frame, was informed by existing conceptual reviews (e.g., Yu and Zhang, 2005 ) that identified commonly used translations of “resilience” (i.e., “‘心理弹性’ + ‘心理韧性’ + ‘复原力’ + ‘抗逆力’”). Specifically, “弹性” and “韧性” are both common words (e.g., in mechanical engineering and biology), so we included the modifier “心理 (psychological)” to avoid retrieving numerous irrelevant records (e.g., the resilience of mechanical systems). We then limited the search by discipline. For example, we unselected “basic science” (e.g., physics), “engineering,” and “agriculture” while keeping only relevant disciplines such as healthcare, humanities, social sciences, communication, and management. To ensure quality and consistency with the peer-reviewed work in the (primarily) English databases, we further filtered the search using CNKI's built-in citation index-based qualifying system (SCI, EI, PKU Core, CSSCI, and CSCD) for academic journal publications.

3.3 Selection

We began by screening PsycINFO records/abstracts, each of which was coded for whether it (a) named a specific resilience scale (e.g., CD-RISC), (b) explicitly described measuring resilience, (c) clearly used Chinese-speaking sample(s), or (d) was clearly unqualified (e.g., mouse models, systematic reviews, and qualitative reports). To be prudent, full-text articles were assessed if there was any indication that a study used a resilience measure. Such abstracts usually mentioned at least one of the following: (a) a specific resilience scale, (b) some relationship between resilience and other variables, (c) phenomena often conceptually related to resilience (e.g., post-traumatic growth) despite missing the word “resilience,” or (d) resilience as a keyword.

The first author independently began the screening, while the team met periodically to discuss results and ambiguous cases until a detailed codebook was completed (see Table 2 ). Then, the fourth author screened 50 randomly selected records for inclusion/exclusion to check reliability; however, several disagreements between the two coders occurred (Krippendorff's alpha = 0.57). 5 We therefore met to further clarify the criteria, after which another interrater reliability check on 50 more studies was run (Krippendorff's alpha = 0.96), with the sole disagreement resolved through discussion. The first author then reexamined the coded records and screened the remaining records, including those from CNKI and PubMed. Duplicates were removed. 6 Assisted by the other authors, the first and second authors retrieved full-text reports that passed abstract screening for further assessment; ineligible and inaccessible reports were excluded. Figure 1 illustrates the screening process.


Figure 1 . The review process (adapted from Page et al., 2021 ; the PRISMA template is distributed under the terms of the Creative Commons Attribution License ).
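For readers unfamiliar with the agreement statistic used in the screening checks above, Krippendorff's alpha for two coders with complete nominal data (e.g., include/exclude decisions) can be computed with a short routine like the following. This is a generic sketch, not the authors' code.

```python
from collections import Counter

def krippendorff_alpha_nominal(coder1, coder2):
    """Krippendorff's alpha for two coders, complete data, nominal categories."""
    o = Counter()  # coincidence matrix: each unit contributes both (a, b) and (b, a)
    for a, b in zip(coder1, coder2):
        o[(a, b)] += 1
        o[(b, a)] += 1
    n_c = Counter()  # marginal totals per category
    for (c, _k), v in o.items():
        n_c[c] += v
    n_total = sum(n_c.values())  # = 2 * number of units
    d_o = sum(v for (c, k), v in o.items() if c != k)           # observed disagreement
    d_e = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k)  # expected
    if d_e == 0:  # only one category was ever used; treat as perfect agreement
        return 1.0
    return 1 - (n_total - 1) * d_o / d_e
```

Identical codings yield alpha = 1.0, and values fall as disagreements accumulate relative to chance.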

3.4 Data extraction

3.4.1 Coding study context

To address our first goal, we coded the specific context in which resilience was investigated in each study. Determining context was an emic sensemaking process (i.e., identifying and building categories from the ground up) whereby we identified and coded context categories based on the included studies. Detailed instructions, specific categories, and descriptions are provided in Supplementary Table S1 . For the PsycINFO reports, the team met periodically (e.g., after every 50 studies were coded) to discuss study contexts until the context codebook was finalized, with which two authors independently coded the CNKI reports (Krippendorff's alpha = 0.91), while one author coded the later-added PubMed reports. Disagreements and uncertainties were addressed through group discussion.

3.4.2 Scale identification and evaluation

The first author recorded the resilience scale(s) used in each report, which was later checked by two other authors. Reports including results from two resilience scales (e.g., CD-RISC and RS) were listed twice, and different versions of the same scale (e.g., CD-RISC-25 vs. CD-RISC-10) were noted. The frequency of each scale became evident through this process, whereby we determined specific scales to be further evaluated. We then referred to in-text citations and references of relevant reports to identify the study or studies reporting the original scale development and/or translation and validation work (hereafter “original reports”) for selected scales. If a translated scale had multiple referenced translations in relevant studies, the most frequently referenced translation was selected. For example, for the 14-item Resilience Scale, which had three referenced translated versions, the version by Tian and Hong (2013) was used because it has been used more frequently than the others (e.g., Chung et al., 2020 ). Next, the first and second authors retrieved the original reports for each scale to be evaluated.

Table 1 presents the rubric adapted from Windle et al. (2011) , which shows seven criteria (e.g., content validity, internal consistency) relevant to evaluating the psychometric properties of resilience measures in this project, including “factor structure” and “contextualized translation,” which were added given their importance for assessing scales across languages and cultures ( Farh et al., 2006 ). The rubric follows Windle et al.'s (2011) 3-point scoring system: “2” fully meeting, “1” partially meeting, and “0” failing to meet (or missing information about) a given criterion. For example, for internal consistency, a “2” rating means that factor analyses with adequately sized samples (i.e., at least 7 × the number of items and ≥100) have been conducted and Cronbach's α values of 0.70–0.95 per dimension have been reported ( Terwee et al., 2007 ). A “1” can mean a doubtful design (e.g., inadequate sample size) or, for a multidimensional scale, that Cronbach's alphas for no more than half of its dimensions fall outside that range. Finally, a “0” means problematic value(s) per dimension regardless of design and/or missing information (for more details, see Table 1 ). Using Windle et al. (2011) to further illustrate: regarding internal consistency, a scale receives a “1” for not reporting Cronbach's alphas for subscales despite an acceptable whole-scale alpha, whereas a one-factor scale (tested through EFA and CFA) with an alpha between 0.70 and 0.95 gets a “2.” For test–retest reliability, a scale is rated “1” despite a good ICC (e.g., 0.87) if the retest sample size is inadequate (<50), whereas a “0” is given to a scale without reported test–retest reliability. Importantly, the scores are ordinal: a score on one specific criterion enables the ranking of selected scales by that criterion, and sum scores per scale enable a ranking of scales on overall psychometric quality (with seven criteria, sum scores can range from 0 to 14, with 14 being the highest possible score). Terwee et al.'s (2007) system uses symbols (e.g., “+”/“?”), which is less effective in demonstrating overall quality.
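As an illustration of how such a rubric can be operationalized, the internal-consistency criterion and the ordinal sum score might be sketched as follows. This is a simplified, hypothetical reading of the rules above; the function names and data shapes are ours, not the authors'.

```python
def rate_internal_consistency(n_items, sample_n, dimension_alphas):
    """Approximate the rubric's 0/1/2 rating for internal consistency.

    "2": adequately sized sample (>= 7 * items and >= 100) and Cronbach's
         alpha in 0.70-0.95 for every dimension.
    "1": doubtful design (inadequate sample) with alphas in range, or no
         more than half of the dimensions outside that range.
    "0": otherwise (problematic values or missing information).
    """
    adequate_sample = sample_n >= max(7 * n_items, 100)
    in_range = [0.70 <= a <= 0.95 for a in dimension_alphas]
    out_of_range = len(in_range) - sum(in_range)
    if adequate_sample and all(in_range):
        return 2
    if all(in_range) or out_of_range * 2 <= len(in_range):
        return 1
    return 0

def overall_score(ratings):
    """Sum of the seven per-criterion ratings; ranges from 0 to 14."""
    assert len(ratings) == 7
    return sum(ratings.values())
```

For example, an adequately sampled one-factor scale with alpha = 0.89 would rate “2,” while the same alpha from an undersized sample would rate “1.”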

Using the criteria, the two authors evaluated original reports and corresponding scales independently. The team met to clarify the rubric and address disagreements. Then, one author performed the rating independently, and all three coders reached 100% agreement. Although most evaluation criteria were assessed from the original reports, we sought information about some criteria (e.g., internal consistency and test–retest reliability) from relevant articles. Additionally, two authors recorded Cronbach’s alphas of the evaluated scales reported in relevant reports (if available). For scales used over 100 times (CD-RISC and RSCA), we randomly selected 50 studies for each scale.

4 Results

In total, 963 reports (301 PsycINFO, 551 CNKI, 111 PubMed) met the inclusion criteria and were included in this review (see Figure 1 ). In these included articles, resilience assessment occurred 973 times across various contexts using a range of measures (some studies used two to three resilience scales; e.g., Li et al., 2018 ). Among these assessments, we identified 85 unique self-report resilience measurement scales (a scale and its variations, such as short forms, are counted as one) (see Supplementary Table S2 ).

4.1 Chinese resilience research contexts

Our first aim was to identify contexts where resilience testing has been conducted. Among the 963 research reports, the largest group ( n  = 332, 34.5%) focused on health conditions. These studies were characterized by their clear foci on mental or physical disorders (e.g., schizophrenia and bipolar disorder; Deng et al., 2018 ), diseases (e.g., HIV/AIDS; Gao et al., 2018 ), illnesses (e.g., cancer; Ye et al., 2018 ), and/or public health concerns (e.g., aging framed in such a manner; Tan et al., 2016 ). This group also included studies on temporary or lasting conditions where some level of medical care was required (e.g., pregnancy; Ma et al., 2020 ). In these studies, participants were commonly patients, survivors, or caregivers.

Next, 176 studies (18.3%) used resilience scales in a general context as part of a survey for a mental health index/profile of a population not known for facing specific risks, such as “healthy individuals” and college students in some cities (e.g., Kong et al., 2018 ). The general context usually considered no specific stressor, or alternatively, a range of adversities/stressors/risk factors such as unspecified childhood adversity ( Li et al., 2014 ) or chronic or short-term stress as a common experience ( Ramsay et al., 2015 ; Shi and Wu, 2020 ). Two contexts determined by the physical settings (of social organizing) were work , occupational, and/or organizational challenges ( n  = 158, 16.4%) and school life and academic challenges ( n  = 32, 3.3%). Studies in these categories usually focused on routine challenges associated with these settings, such as workplace burnout and fatigue common to stressful occupations (e.g., medical professionals and civil servants; Qiu et al., 2020 ) and academic burnout ( Ying et al., 2016 ).

The next context concerned experiences and trends associated with systemic, socio-cultural-economic phenomena in and beyond contemporary China ( n  = 158, 16.4%). These studies examined a range of overlapping, publicly recognized “social problems” as complex forms of adversity, often involving the marginalization of specific populations, such as urban–rural migrant workers (e.g., Yang et al., 2020 ) and families (e.g., Gao et al., 2020 ), left-behind children (e.g., Gao et al., 2019 ), migration/immigration (e.g., Yu et al., 2015 ), and LGBTQ+ groups (e.g., Yang et al., 2016 ). Moreover, resilience scales were also used in micro contexts concerning the common and specific challenges of relating , including personal, family, and community relationships ( n  = 58, 6.0%), such as parent–child conflict ( Tian et al., 2018 ) and older adults losing their sense of community ( Zhang et al., 2017 ). Additionally, we decided to present abuse and bullying as a unique context ( n  = 31, 3.2%), given the specificity of the behaviors, usually with the intention to harm, and the ways such events involve similar experiences of victimization (e.g., fear and isolation) across settings (e.g., school, workplace, and family; Zhou et al., 2017 ; Lin et al., 2020 ).

Furthermore, researchers examined resilience in the context of natural disasters ( n  = 29, 3.0%), such as post-traumatic stress disorder and growth linked to experiencing an earthquake ( Xi et al., 2020 ) or rainstorm disaster ( Quan et al., 2017 ) and shidu after an earthquake ( Wang and Xu, 2017 ). 7 Finally, 31 studies (3.2%) focused on resilience during the COVID-19 pandemic (e.g., Ye et al., 2020 ). We identified the pandemic as a separate context due to the magnitude of this atypical event and the multiple ways in which it disrupted social order. In sum, these findings show the broad range of life disruptions explored in interdisciplinary resilience research with Chinese populations.

4.2 Chinese resilience scales

Our second aim was to identify which resilience scales have been used most frequently with Chinese-speaking populations, including whether the popularity of scales varies across contexts. Only three scales (including their variations) each accounted for at least 5% of the 973 total resilience assessments: the CD-RISC scales (54.9%), the RSCA (12.5%), and the various versions of the RS combined (8.4%). The CD-RISC and RS were translated scales, whereas the RSCA was a locally developed measure.

Given our desire to focus on popular scales, both translated and locally developed, we limited our detailed evaluation of psychometric properties (see below) to the three most frequently used translated scales (including both original versions and common variations) and a matching number of locally developed scales. We specifically focused on the following nine Chinese scales organized into six groups: (1) CD-RISC-25 (Yu and Zhang, 2007) and CD-RISC-10 (Wang et al., 2010); (2) RSCA (Hu and Gan, 2008); (3) three versions of the RS, including RS-25 (Lei et al., 2012), RS-14 (Tian and Hong, 2013), and RS-11 (Gao et al., 2013); (4) the 14-item version of the Ego-Resiliency Scale (ER-14 CN; Chen et al., 2020); (5) the Essential Resilience Scale (ERS; Chen et al., 2016); and (6) the Resilience Trait Scale for Chinese Adults (RTSCA; Liang and Cheng, 2012). To address which of these scales were used most frequently in different contexts, we performed a cross-tabulation analysis showing how "popular" each scale was in each context. The CD-RISC-25 was the most popular choice across most study contexts, including "general" (49.7%), "health" (67.0%), "work" (68.9%), "school" (41.7%), "relational" (42.0%), "disaster" (79.2%), and "COVID-19" (57.1%). The RSCA was used most frequently in the remaining "systemic" (52.4%) and "abuse" (40.9%) contexts (see Table 3). In short, the CD-RISC-25 dominated resilience assessment in Chinese contexts.
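The scale-by-context tallies described above can be reproduced with a standard cross-tabulation. The sketch below uses a small, entirely hypothetical set of report records (the scale and context labels mirror the review's categories, but the counts are illustrative only):

```python
import pandas as pd

# Hypothetical report-level records: one row per use of a scale in a study
# context (labels mirror the review's categories; counts are illustrative).
reports = pd.DataFrame({
    "scale":   ["CD-RISC-25", "CD-RISC-25", "RSCA", "CD-RISC-25", "RSCA", "RS-14"],
    "context": ["health", "health", "systemic", "work", "systemic", "health"],
})

# Raw counts of each scale within each context
counts = pd.crosstab(reports["scale"], reports["context"])

# Column-normalized percentages show how "popular" each scale is per context
share = pd.crosstab(reports["scale"], reports["context"], normalize="columns") * 100

print(counts)
print(share.round(1))
```

Column-wise normalization matches the within-context percentages reported in Table 3 (e.g., the share of "health" reports using a given scale).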


Table 3. Scale × context cross-tabulation (only counting reports related to evaluated scales).

4.3 Scale quality

Our third aim was to assess the psychometric properties of widely used translated and locally developed Chinese resilience measures. The nine scales (including multiple variations of the top six scales) were evaluated based on seven criteria. Scales received a score of "0" (e.g., no information provided) to "2" (e.g., criterion met using rigorous procedures) for each criterion, resulting in an overall score ranging from 0 to 14 points. Overall, locally developed scales tended to score higher than translated ones, but none achieved the highest possible rating (see Table 1 for the rating system and Table 4 for ratings of each scale).


Table 4. Summary of scale assessments.

4.3.1 Content validity

The three locally developed measurement scales (i.e., RSCA, ERS, and RTSCA) received the full score for content validity. All three clarified their aims, discussed resilience and its cultural relevance in Chinese contexts, and involved the target population in item creation/selection (e.g., items were written based on themes in interviews with Chinese participants). Although all translated scales clarified aims and defined resilience, descriptions of target population involvement in the item translation/selection process were not found in the original articles reporting the Chinese versions of these scales.

4.3.2 Internal consistency

Regarding the CD-RISC-25, Cronbach's alpha for one of the subscales (optimism) was only 0.60 (Yu and Zhang, 2007). The same subscale has displayed less-than-optimal internal consistency in other studies as well (e.g., Cai et al., 2017). For the RTSCA, the alpha for the internal locus of control subscale (one of five) was also 0.60. Nonetheless, all studies reported acceptable Cronbach's alphas (i.e., 0.70–0.95) for the total scales with adequate sample sizes. Additionally, we calculated the average alphas (where reported) for these scales. Results were as follows: CD-RISC-25 and -10 = 0.90 and 0.90; RS-25, -14, and -11 = 0.92, 0.90, and 0.86; ER-14 = 0.82; RSCA = 0.83; ERS = 0.92; RTSCA = 0.86.
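For readers who wish to verify reported reliabilities, Cronbach's alpha can be computed directly from item-level scores. The following is a minimal sketch with made-up Likert responses (not data from any reviewed study):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up responses: 5 respondents on a 3-item subscale (1-5 Likert)
scores = np.array([
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
], dtype=float)
print(round(cronbach_alpha(scores), 3))
```

The same function applied to each subscale separately would surface weak dimensions (such as the optimism subscale noted above) that a total-scale alpha can mask.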

4.3.3 Factor structure

The authors of the RS-25, ER-14, and RSCA examined the proposed or revised factor structures of their measures by conducting EFA and CFA in separate samples. Additionally, for the CD-RISC-25, CFA results failed to retain the original five-factor structure; therefore, Yu and Zhang (2007) conducted an EFA and proposed a three-factor structure for the scale. Although they did not replicate the three-factor structure in an independent sample, other researchers have done so (e.g., Xie et al., 2016). Similarly, the ERS's three-factor structure, tested through a single CFA (Chen et al., 2016), was replicated by Lau et al. (2020). The RTSCA's five-factor structure was confirmed in one sample (Liang and Cheng, 2012) and replicated by Li et al. (2017). Wang et al. (2010) conducted an EFA to explore the CD-RISC-10's one-factor structure, which was then verified by Cheng et al. (2020) through CFA. In the case of the RS-14, Tian and Hong (2013) reported two factors through EFA, which were verified in later structural equation modeling (SEM) analyses (e.g., Huang et al., 2020). Therefore, these measures were rated highest (i.e., "2") on the factor structure criterion. The RS-11 received a "0" because insufficient information was available to judge whether its factor structure was supported or replicated.
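As a rough illustration of the EFA step, the eigenvalues-greater-than-one (Kaiser) heuristic applied to simulated one-factor data looks like the sketch below. The data are synthetic, and real factor-retention decisions should also draw on CFA, parallel analysis, and fit indices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic responses: 10 items driven by a single latent factor plus noise
n = 300
factor = rng.normal(size=n)
items = np.column_stack(
    [0.7 * factor + rng.normal(scale=0.7, size=n) for _ in range(10)]
)

# Kaiser criterion: retain factors whose correlation-matrix eigenvalues exceed 1
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted largest first
n_factors = int((eigvals > 1).sum())
print(n_factors)
```

With one strong latent factor, the first eigenvalue dominates and the heuristic suggests a single factor; translated items with shifted meanings can redistribute this variance, which is one way changed factor structures arise.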

4.3.4 Construct validity

Four measures (CD-RISC-25 and -10; RSCA; ERS) achieved the full score regarding construct validity. The other scales received the intermediate score due to a lack of clarity. That is, although validation studies for measures assessed resilience along with other literature-informed, conceptually related constructs and reported significant relationships among them, the authors did not formulate hypotheses or clearly state some type of expected relationship (directional or not) between resilience and these constructs before reporting statistical tests.

4.3.5 Test–retest reliability

Adequate information about test–retest reliability was available for five measures (CD-RISC-10; RS-25, −14, and −11; and RTSCA) in their original or most popular versions. However, none achieved the full score, for two reasons: failing to report intraclass correlation coefficients (ICCs) and using small sample sizes. The test–retest correlation coefficient for the CD-RISC-10 was 0.90 across two weeks, 0.31 after six months for the RS-25, and "ranged from 0.53 to 0.85" with "86% > 0.70" for the RS-14 (Tian and Hong, 2013, p. 1500). The sample sizes for these studies, however, were small (below 40), which constitutes a design issue according to Windle et al. (2011), given that correlations from small samples contain greater sampling error. For the RTSCA, the coefficient for the full scale was 0.88 after three weeks; however, the coefficients for three of the five dimensions were smaller than 0.70, and its retest sample of 47 individuals also did not fully meet the criterion. The coefficient for the RS-11 was 0.62, therefore failing to meet the criterion. The authors of the RSCA (Hu and Gan, 2008) mentioned a "retest" in Chinese; however, it used a new group of participants instead of returning members of an existing sample. It is worth noting that although test–retest information was missing from Yu and Zhang (2007), one included study (Xie et al., 2016) that examined the psychometrics of this version of the CD-RISC-25 did report a test–retest reliability of 0.66 across two months. The coefficient for a version of the Chinese ER-14 utilized in an unpublished dissertation (cited by a few studies; Li, 2008) was 0.71 across a month, with a retest sample of 198 people.

4.3.6 Interpretability

Information demonstrating potential differences in scoring between or among subgroups of a reference population was available for all but CD-RISC-25 and RTSCA. However, none achieved the maximum score because they did not identify and report results about at least four subgroups and/or present the means and standard deviations.

4.3.7 Contextualized translation

Four measures (ER-14, RSCA, ERS, and RTSCA) fully met this added criterion, which considered whether researchers (a) clarified/justified whether they viewed resilience as universal, context-specific, or containing elements of both and (b) considered cultural and linguistic appropriateness during scale development or translation. The RSCA was perhaps the most rigorous: Hu and Gan (2008) considered the different available translations for resilience, chose the most suitable one by referencing Taoist values, and then created items informed by the results of a thematic analysis of qualitative interviews. For the ERS, Chen et al. (2016) considered resilience a universal concept, and they developed and revised the wording of scale items in both Chinese and English by consulting experts and conducting pilot tests with native speakers in both contexts. For the RTSCA, Liang and Cheng (2012) explained the available translations for "resilience," emphasized the "Chinese cultural background" when interviewing experts for item generation, and conducted a pilot test to finalize items. When Chen et al. (2020) translated the ER-14 CN, they established their position (resilience as both universal and culturally specific) and addressed how items had changed across different translations. The translation and back-translation formed an iterative process involving the researchers, a third-party expert fluent in both languages, and another reviewer. The other translated scales used the standard translation/back-translation technique without additional information on contextualized translation, therefore receiving the intermediate score.

5 Discussion

Scholars have stressed the importance of cultural and social contexts for understanding the shapes, trajectories, and determinants of resilience ( Southwick et al., 2014 ). Our review identified nine such contexts in which empirical testing of resilience was conducted (e.g., health/illness, natural disasters, workplace challenges). After identifying 85 groups of unique self-report scales, we chose to focus on three widely used translated scales and three locally developed scales, which, when different versions of the same measure were considered (e.g., CD-RISC-25 and CD-RISC-10), resulted in nine total scales. The CD-RISC scales accounted for over half of the reported tests in our sample, and perhaps not surprisingly, CD-RISC-25 was the most popular measure in general, health, work, school, relational, disaster, and COVID-19 contexts. Considering the CD-RISC-25’s immense popularity and contribution, we discuss how its application might be advanced. Following this, we offer recommendations for future resilience scale development/refinement, including more focus on process-oriented scales, and acknowledge the limitations of our systematic review.

5.1 Future considerations for CD-RISC-25-CN

Extant knowledge about resilience in Chinese contexts relies heavily on the translated CD-RISC-25, yet the scale scored only moderately (7/14) on overall quality (see Table 4). Given this, future research should continue scrutinizing the scale's content validity and translation. The simultaneous universality and cultural specificity of resilience have been widely acknowledged, which Yu and Zhang (2007) discussed when explaining the changed factor structure of their translation of the CD-RISC-25 (i.e., three factors in Chinese contexts as opposed to five factors in U.S. contexts). Since their original contribution, however, researchers have not further explored whether the scale taps qualities that may be saliently associated with resilience in Chinese contexts. For example, the items in both languages are notably individualistic (e.g., I will/can/take the lead), while "traditional" Chinese cultures are characterized by communal and relational orientations, and the contemporary Chinese "self" is both socially and individually oriented, reflecting the merging of global cultures (Kolstad and Gjesvik, 2014) into what Lu and Yang (2006) termed a "composite self." Given this, research might assess whether the scale's content, predictive, and convergent validity could be enhanced by modifying or adding items (Farh et al., 2006) to tap these qualities. As a second example, Yu and Zhang (2007) stated that "Chinese people are probably the least religious people in the world" (p. 27) to explain the changed factor structure, in which the no-longer-salient "spiritual influence" factor, including one item explicitly mentioning "God," was merged into "optimism" (which might in part explain why the alpha value of this specific dimension has been subpar across studies). Resilience researchers might instead consider how Chinese contexts are characterized by religious/spiritual diversity (rather than simply lacking religion; Chao and Yang, 2018).
These issues could be addressed by reexamining the content validity and language of the current version and subsequently updating the scale, whose factor structure could then be explored and test–retest reliability reexamined. Our point here is not to discount the value of research findings based on the translated CD-RISC-25, as the scale clearly has heuristic value. Rather, we suggest that scale validation is an ongoing process, and issues such as content validity and contextualized translation are critical to consider when more than half of recent Chinese resilience studies have employed this measure. Advancing this influential scale is one critical area for future research.

It is worth noting that the RSCA and ERS, both developed in China with Chinese-speaking populations, scored the highest (11/14 points). Given that the RSCA focuses on adolescents, one direction for researchers is to develop a version for the general population through similarly rigorous processes (see Hu and Gan, 2008; discussed more later). The ERS, which is relatively new, needs further validation across contexts. Importantly, test–retest reliability for both the RSCA and ERS is yet to be established.

5.2 Future recommendations

In this section, we draw on our evaluation results to offer seven methodological recommendations for future resilience scale development and refinement, applicable to existing and new scales in and beyond Chinese study contexts.

First, regarding content validity (see Section 4.3.1 and Table 1 ), future work could explicitly engage with target populations and/or consult third-party experts (e.g., someone who is familiar with resilience in Chinese contexts either because of lived experience or extensive learning) in item translation, selection, and/or adaptation. These additional processes, which may result in modified item wording, are commonly expected in new-scale development ( Worthington and Whittaker, 2006 ). Developing culturally adapted versions of existing scales in a new language should not be exempted from these steps ( Farh et al., 2006 ).

Second, studies should habitually report Cronbach's alphas for scales and, for multidimensional instruments, consider evaluating internal consistency for subscales rather than only overall scales. These practices were surprisingly absent in many of the included studies. In addition, scholars have recently argued that McDonald's omega is a better test of a (sub)scale's internal consistency (i.e., unidimensionality), so future research should consider reporting omega (Goodboy and Martin, 2020; Hayes and Coutts, 2020).
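McDonald's omega can be obtained from the standardized loadings of a one-factor model as (Σλ)² / ((Σλ)² + Σθ), where λ are the loadings and θ the error (unique) variances. A minimal sketch with hypothetical loadings:

```python
import numpy as np

def mcdonald_omega(loadings: np.ndarray, error_vars: np.ndarray) -> float:
    """McDonald's omega for a unidimensional (sub)scale, computed from
    standardized factor loadings and error (unique) variances."""
    common = loadings.sum() ** 2
    return common / (common + error_vars.sum())

# Hypothetical standardized loadings from a one-factor CFA
loadings = np.array([0.7, 0.6, 0.8, 0.5])
error_vars = 1 - loadings ** 2  # unique variances under standardization
print(round(mcdonald_omega(loadings, error_vars), 3))
```

Unlike alpha, omega does not assume equal loadings across items (tau-equivalence), which is why it is preferred when loadings vary as in this example.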

Third, future work should carefully examine the contextual translation of resilience scales (see Farh et al., 2006 ) as well as their factor structure in unique cultural situations. We added a new criterion to evaluate the factor structure of a scale. As shown in existing work, such as the case of CD-RISC-25 ( Yu and Zhang, 2007 ), the factor structure can be sensitive to translation ( Chen et al., 2020 ). When introducing, translating, and applying a scale developed and validated in a different source language and cultural context, researchers must consider the heterogeneity regarding the factor structure of translated versions. Specifically, variations in how items are translated may result in inconsistency. For example, although Tian and Hong (2013) reported a two-factor structure for RS-14—which was replicated in other studies in both simplified and traditional Chinese (e.g., Chung et al., 2020 )—a new translation of the scale showed a one-factor structure ( Chen et al., 2020 ). Moreover, translating a generic scale into a more specific context (or vice versa) in the same language may also yield a changed factor structure. For example, when Hao et al. (2015) adapted the five-factor RTSCA for the specific occupation of civil servants, EFA suggested a four-factor structure instead.

Fourth, given that more than half of the evaluated scales were tested without clear hypotheses regarding resilience's relationship with study constructs (i.e., a "dubious design"; Windle, 2010), future work with new or existing scales should clearly articulate the rationale for testing associations between resilience and related constructs, as informed by theory and/or existing literature, rather than only mentioning possible relationships among constructs.

Fifth, future researchers should include test–retest assessments with adequate sample sizes and justified time intervals in their designs. Test–retest reliability was the most problematic property in our results (see Table 4). When it was assessed at all, researchers tended to perform retests with small (below 50) samples, which may partly explain the dubious coefficients (below 0.70) reported in some studies. 8 Test–retest results could further inform a discussion of the situations under which resilience should be expected to remain stable or change over time. For example, a trait measure that assumes the stability of resilience over time should yield higher test–retest coefficients than a measure based on a process view of resilience, in which the way resilience is enacted might change over time (see below). Additionally, only a few studies reported the ICC (e.g., Lo et al., 2014; Hsieh et al., 2016), even though Terwee et al. (2007) deemed it "the most suitable and most commonly used reliability parameter for continuous measures" for its consideration of "systematic differences [as] part of the measurement error" (p. 37). Future studies should report the ICC.
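For reference, an ICC of the kind recommended here, ICC(2,1) in Shrout and Fleiss's taxonomy (two-way random effects, absolute agreement, single measures), can be computed from the two-way ANOVA mean squares. A minimal sketch with hypothetical test-retest data:

```python
import numpy as np

def icc_2_1(data: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measures
    (Shrout & Fleiss), for an (n_subjects, k_occasions) score matrix."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    resid = data - row_means[:, None] - col_means[None, :] + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical test-retest data: 6 participants measured at two time points
retest = np.array([
    [40, 42],
    [55, 53],
    [48, 50],
    [60, 57],
    [35, 38],
    [50, 49],
], dtype=float)
print(round(icc_2_1(retest), 3))
```

Unlike a Pearson correlation, this form penalizes systematic shifts between occasions (e.g., everyone scoring higher at retest), which is exactly the "systematic differences as measurement error" property Terwee et al. (2007) highlight.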

Sixth, future work could begin exploring minimal important change (MIC), which concerns interpretability, for Chinese resilience scales. MIC was not assessed for the selected scales (as was also the case for English scales; see Windle et al., 2011), which was acknowledged as a limitation in an excluded Singaporean study in which the CD-RISC-10 was validated with English-speaking patients with axial spondyloarthritis (Kwan et al., 2019). In clinical research, MIC concerns the threshold at which patients begin to perceive their internal change over the course of treatment as important, which enriches the interpretation of results from the perspective of treatment recipients (for a systematic guideline on methods, see Terwee et al., 2021). Exploring MIC in future research could promote an understanding of resilience stemming from dialogue between researchers and participants (e.g., through anchor questions; Terwee et al., 2021), rather than from assumptions that small but statistically significant changes in researcher-defined outcomes would have a meaningful impact on participants' lives.
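An anchor-based MIC estimate can be as simple as the mean score change among participants who report a minimally important improvement on the anchor question (one of several approaches described by Terwee et al., 2021). All data and labels below are hypothetical:

```python
import numpy as np

# Hypothetical pre/post resilience scores for 8 participants
pre  = np.array([50, 48, 55, 60, 52, 47, 58, 49], dtype=float)
post = np.array([56, 49, 62, 61, 58, 48, 66, 50], dtype=float)

# Anchor question: participant-rated global change; "slightly" better marks
# the minimally important improvement group (labels are made up)
anchor = np.array(["slightly", "none", "much", "none",
                   "slightly", "none", "much", "none"])

# Anchor-based MIC: mean change among those reporting minimal improvement
change = post - pre
mic = change[anchor == "slightly"].mean()
print(mic)
```

Score changes smaller than this threshold, even if statistically significant, would not be interpreted as important from the participants' own perspective.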

Seventh, future researchers should address the universality and/or cultural specificity of their concept of interest, as well as consider how similar (but nuanced) experiences and phenomena are expressed across specific living languages (see Farh et al., 2006). This suggestion reflects our alignment with the current consensus that considers resilience to be both "universally observable" and "culturally specific" human (biological and social) experience expressed in numerous ways (Southwick et al., 2014). Whether scholars translate an existing scale from another culture or locally develop a new self-report instrument, procedures for ensuring that the items make sense to a variety of participants in the target language could further enhance rigor and ethics, as well as contribute to content validity. Translation work requires additional steps beyond standard translation and back-translation. A recent effort in translating, applying, and validating the 14-item ego resiliency scale (Chen and Padilla, 2019; Chen et al., 2020) presents an example of researchers taking manageable steps to demonstrate awareness of and sensitivity to culture and language. Chen and colleagues considered resilience to be both universal and culturally specific and addressed how the scale items had changed in previous translation studies. Their back-translation involved several experts fluent in both English and Chinese, who met to reach a conceptual and translational consensus. The version was then reviewed by a third party. Researchers might also gather pilot data from both experts and laypersons and adjust the translation accordingly, similar to early scale development (Worthington and Whittaker, 2006).

5.3 Moving beyond trait conceptions of resilience

We also call for more attention to developing and assessing the psychometric qualities of process-oriented measures of individual resilience in Chinese-speaking contexts. Scholars in our review predominantly took the trait approach to resilience assessment, in that over half of the tests employed some version of the CD-RISC scale, which is known for its trait view. The original authors of all but one evaluated scale also aligned with the trait view. For example, for the RS-25, resilience is defined as a "personality characteristic that moderates the negative effects of stress and promotes adaptation" (Wagnild and Young, 1993, p. 165). Ego-resilience/resiliency, as the name implies, is considered part of the ego structures maintaining the personality system (Block and Kremen, 1996). Notwithstanding the importance of examining how resilient traits or trait resilience relate to other phenomena, interdisciplinary resilience theorizing has endorsed more complex and comprehensive process views, in which resilience is understood as interconnected systems and mechanisms mobilizing protective and risk factors (Windle, 2010; Panter-Brick and Leckman, 2013; Southwick et al., 2014). Luthar et al. (2000) long ago suggested that examining which specific mechanisms (e.g., informal support) mediate the effects of protective factors (e.g., religiosity), and to what extent, is crucial for prevention and intervention designs for populations in need (e.g., Chiu et al., 2020). In short, individual-level resilience assessment in Chinese contexts has not moved very far from the long-established individual-trait view; available tools grounded in process perspectives remain mostly underused or underdeveloped in translation work. To encourage resilience process assessment in Chinese contexts, we highlight some process-focused measures and discuss future directions.

Two of the more frequently used scales (see Supplementary Table S2) are based on process views of resilience. The RSCA (Hu and Gan, 2008) attempts to assess resilience as a dynamic process through which adverse life events interact with protective factors. Nevertheless, because it was developed specifically from the perspective of adolescents and largely concerns the parent–child relationship, this scale may not be appropriate for other contexts and populations (e.g., adults experiencing chronic illness). Hu and Gan demonstrated a rigorous way of involving target populations and developing a scale from the ground up before testing and revising it with new samples. Studies replicating their procedures (and adding test–retest reliability assessment) in adult and/or general samples to develop population- and context-appropriate scales could advance current resilience process research. In addition, the Resilience Scale for Adults (RSA) conceptualizes resilience as a multidimensional construct referring to "important psychological skills or abilities [and] the individual's ability to use family, social and external support systems to cope better with stress" (Friborg et al., 2003, p. 66), which aligns with seeing resilience as complex processes encompassing interacting protective systems (e.g., Rutter, 1990). However, the English-to-Chinese translation needs further scrutiny and development, considering that there are several referenced translations of the RSA and that the factor structures of the scale were not consistent across different samples (e.g., Yang and Lv, 2008; Peng et al., 2011; Ma et al., 2019).

Furthermore, process views may focus on the connections between individuals and surrounding systems, including how protection/adaptation emerges from the interactive process (Masten, 2015). For example, another translated, population-specific scale in our data, the Child and Youth Resilience Measure (CYRM; Liebenberg et al., 2012, 2013; Ungar and Liebenberg, 2011), operationalizes the resilience process with emphases on specific communicative practices (e.g., talking about adversity itself), social construction, nonstatic interpretation, and contextual sensitivity. Although not yet used often, Xiang et al. (2012) translated and performed factor analyses for the 28-item version, which resulted in a 27-item Chinese version. Early translation and validation work on the 12-item version has been provided by Mu and Hu (2016). The Family Resilience Assessment Scale (FRAS; Sixbey, 2005), based on Walsh's (2016) family resilience framework, is worth mentioning for similarly tapping family interactions, though the FRAS has at least three Chinese versions with different numbers of items (see Li et al., 2016; Fan et al., 2017; Chiu et al., 2019). More recently, Wang and Lu (2022) translated and initially validated Walsh's (2016) questionnaire. Although we excluded these latter scales from this review, given our focus on self-report measures of individual-level resilience, they should be of interest to family resilience researchers who wish to advance process views on resilience.

Additionally, we highlight a recent development in resilience theorizing that fully commits to a focus on social interaction. Buzzanell (2010, 2019) has proposed a communication theory of resilience (CTR) that theorizes resilience as five communication processes, which agentic actors at different levels (e.g., individuals and organizations) perform/enact to maintain and/or transform normalcy and meaningfulness in response to disruptive trigger events. 9 Based on this theory, Wilson et al. (2021) developed a 32-item Communication Resilience Process Scale (CRPS) in a series of three studies with participants from the United States. The CRPS includes seven subscales tapping the five CTR resilience processes at the individual level (e.g., the process of "crafting normalcy" is subdivided into maintaining current routines and creating new ones). When using the CRPS, researchers first ask participants to recall and describe a significant life disruption (one event or a series of events) that they have faced within a timeframe (e.g., the past two years) before completing self-report items about the extent to which they enacted communicative practices reflective of the resilience processes. Kuang et al. (2022, 2023) recently translated and contextualized the CRPS into a Chinese version with a retained factor structure but six additional items and slight changes to item wording, based on pilot results and feedback from native speakers and experts. The validation of any measurement is, of course, an ongoing process; this process scale therefore needs further evaluation against the full range of psychometric standards.

5.4 Limitations

Resilience scholarship is growing quickly; therefore, while capturing trends in the recent past, we are simultaneously missing new ones. A quick search in any database shows that resilience publications have continued to grow since 2021, likely due to the pandemic. Researchers should consider conducting similar reviews in the future to track ongoing attempts to develop and validate resilience measures. By the same token, scholarship on resilience would also benefit from similar measurement reviews for other languages and cultures. Next, multiple translations of "resilience" are used in Chinese studies, which may orient the conceptualization of resilience differently (Hu and Gan, 2008). We did not take the various translations into consideration in the screening and coding for this review; this task could be addressed in future work. Similarly, how well a scale fits its target population age-wise is also an important future concern. Furthermore, our data and screening results might be limited because we did not use chain referential sampling to find all possible existing studies and unique scales; however, the 963 reports we identified are sufficient to capture trends such as which research contexts are being explored and which scales are being used most often (i.e., the nine scales we evaluated in depth). Finally, considering the scope of this review and our limited resources, we only used CNKI to collect reports published in Chinese. Future reviews should consider including other Chinese information sources as well.

In closing, we present this first effort to systematically review Chinese resilience measurement scales, in which we identify a list of contexts relevant to empirical testing of resilience in Chinese and commonly used scales. Based on these results, we direct researchers’ attention to actions and practices through which future instrument development work can be more rigorous, such as when assessing test–retest reliability or translating English scales for different languages and populations. Because studies have predominantly provided validation evidence for scales based on trait conceptualizations of resilience, we also call for more focus on developing/evaluating process measures that assess how Chinese individuals and groups create or enact resilience in response to life’s inevitable disruptions.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding author.

Author contributions

ZT: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. KK: Writing – review & editing, Visualization, Validation, Supervision, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. SW: Writing – review & editing, Validation, Supervision, Methodology, Investigation, Formal analysis, Conceptualization. PB: Writing – review & editing, Validation, Supervision, Methodology, Investigation, Formal analysis, Conceptualization. JY: Writing – review & editing, Formal analysis, Data curation. XM: Writing – review & editing, Formal analysis, Data curation. HW: Writing – review & editing, Formal analysis.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The project was partially supported by ZT's startup research funds granted by the College of Wooster for open access publishing, and by the second author's University Initiative Scientific Research Program at Tsinghua University (2022THZWJC07) for the research process.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fpsyg.2024.1293857/full#supplementary-material

1. ^ It is important to clarify that in this review, we adopt the broad, plural sense of Chinese contexts, by which we refer to the use of Chinese (as a language group) and to cultural affiliations rooted in historicity.

2. ^ Terwee et al.’s (2007) proposed psychometric properties include the following: (1) content validity, (2) internal consistency, (3) criterion validity, (4) construct validity, (5) reproducibility (including agreement and reliability), (6) responsiveness, (7) floor and ceiling effects, and (8) interpretability. However, several criteria (i.e., gold standard, reproducibility-agreement, floor and ceiling effects, and responsiveness) are typically meaningful in medical/clinical contexts (e.g., clinical trials, interventions, and changes) but are less relevant to the current stage of resilience research (see Windle et al., 2011; Zhou et al., 2020; Windle et al., 2022) and thus were excluded from this review.

3. ^ Zhou et al. (2020) systematically reviewed family resilience questionnaires in both English and Chinese studies; however, our manuscript focuses on individual-level measurement, given that most Chinese resilience scales are cast at the individual level. Windle et al.’s (2022) review, which considered only English-language reports, is specific to scales for people living with dementia (samples from various countries) and their caregivers, and hence is less relevant to our review, which looks at scales for Chinese-speaking populations facing a wide range of disruptive life events.

4. ^ The data were collected from contexts where Chinese was the primary language, regardless of whether the research report was written in English or Chinese. These criteria enabled us to infer the primary research language if not explicitly identified. For example, a study whose sample was a local community in Shandong province or Hong Kong most likely used scales in simplified or traditional written Chinese (syntactically and semantically similar if not identical), whereas a study whose participants were South African women was unlikely to have used Chinese. Based on the screening of PsycINFO articles, we decided to focus on studies utilizing Mainland Chinese, Hong Kong, Macao, and/or Taiwanese samples because all qualified articles’ samples were affiliated with one or more of these areas/regions. Articles were excluded if the information provided was inadequate to infer the use of Chinese resilience scale(s) (e.g., no mention of the study population).

5. ^ Krippendorff’s alpha was calculated using Hayes and Krippendorff’s (2007) SPSS macro, selecting nominal as the level of measurement, given that whether a study should be included (yes/no) is a nominal-level variable.
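
Footnote 5’s reliability check can also be reproduced outside SPSS. The sketch below is a minimal pure-Python implementation of Krippendorff’s alpha for nominal data using the coincidence-matrix formulation; it is not the authors’ actual macro, and the screening decisions shown are hypothetical.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    `units` is a list of per-unit rating sequences (one value per coder);
    units rated by fewer than two coders are ignored.
    """
    coincidences = Counter()
    for ratings in units:
        m = len(ratings)
        if m < 2:
            continue
        # Each ordered pair of ratings within a unit contributes 1/(m - 1).
        for c, k in permutations(ratings, 2):
            coincidences[(c, k)] += 1.0 / (m - 1)

    n_total = sum(coincidences.values())  # total number of pairable values
    categories = {c for c, _ in coincidences}
    n_c = {c: sum(v for (a, _), v in coincidences.items() if a == c)
           for c in categories}

    d_observed = sum(v for (c, k), v in coincidences.items() if c != k)
    d_expected = sum(n_c[c] * n_c[k]
                     for c in categories for k in categories if c != k) / (n_total - 1)
    if d_expected == 0:
        return 1.0  # no variation at all: treat as perfect agreement
    return 1.0 - d_observed / d_expected

# Two coders screening five records for inclusion (1 = include, 0 = exclude);
# they disagree only on the last record.
coder_a = [1, 1, 0, 0, 1]
coder_b = [1, 1, 0, 0, 0]
alpha = krippendorff_alpha_nominal(list(zip(coder_a, coder_b)))
print(round(alpha, 2))  # 0.64
```

Because alpha corrects chance agreement using the pooled category distribution, it penalizes the single disagreement more than raw percent agreement (0.8) would suggest.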

6. ^ There were 193 duplicate records between PsycINFO and PubMed, two between PubMed and CNKI, and none between PsycINFO and CNKI. We used a Chinese search query in CNKI, which enabled us to include many reports published in Chinese.
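The duplicate counts in footnote 6 (193 PsycINFO–PubMed duplicates, two PubMed–CNKI, none PsycINFO–CNKI) imply a cross-database deduplication step, which is typically automated by matching DOIs and normalized titles across exports. The sketch below illustrates one such approach; the record structure and field names (`doi`, `title`, `source`) are illustrative assumptions, not the authors’ actual pipeline.

```python
import re

def normalize_title(title):
    """Lowercase and strip punctuation/whitespace so near-identical titles
    match; CJK characters are preserved for Chinese-language records."""
    return re.sub(r"[^a-z0-9\u4e00-\u9fff]+", "", title.lower())

def deduplicate(records):
    """Keep the first occurrence of each record, matching on DOI when
    available and on normalized title otherwise."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title_key = normalize_title(rec.get("title", ""))
        if (doi and doi in seen_dois) or (title_key and title_key in seen_titles):
            continue  # already seen via an earlier database export
        if doi:
            seen_dois.add(doi)
        if title_key:
            seen_titles.add(title_key)
        unique.append(rec)
    return unique

# Hypothetical exports: the PubMed record duplicates the PsycINFO one.
records = [
    {"source": "PsycINFO", "doi": "10.1002/da.10113",
     "title": "Development of a new resilience scale"},
    {"source": "PubMed", "doi": "10.1002/DA.10113",
     "title": "Development of a New Resilience Scale"},
    {"source": "CNKI", "doi": "",
     "title": "心理韧性量表信效度检验"},
]
print(len(deduplicate(records)))  # 2
```

Matching on DOI first avoids false merges between distinct reports that happen to share short titles.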

7. ^ These three examples also illuminated other contexts; however, as acute, highly disruptive events that overturn not only individual lives but also environments, natural disasters could override other more routine challenges.

8. ^ The size of a test–retest correlation depends on (a) the time interval between tests (longer intervals may justify smaller correlations) and (b) sample size (small samples carry more sampling error).

9. ^ Specifically, people adapt and/or transform by crafting normalcy (e.g., holding onto rituals and creating new routines), performing identity anchors (e.g., meaningful roles and values that guide action), mobilizing communication networks (e.g., reaching out to strong/weak ties), enacting alternative logics (e.g., reframing events, using humor to lighten challenges), and foregrounding productive action while legitimating negative emotions (e.g., validating fear/anger while still choosing to take steps toward important goals).

Block, J. H., and Block, J. (1980). “The role of ego-control and ego resiliency in the organization of behavior” in The Minnesota Symposia on child psychology. Vol. 13: Development of cognition, affect and social relations . ed. W. A. Collins (Hillsdale, NJ: Erlbaum), 39–101.

Block, J., and Kremen, A. M. (1996). IQ and ego-resiliency: conceptual and empirical connections and separateness. J. Pers. Soc. Psychol. 70, 349–361. doi: 10.1037/0022-3514.70.2.349

Buzzanell, P. M. (2010). Resilience: talking, resisting, and imagining new normalcies into being. J. Commun. 60, 1–14. doi: 10.1111/j.1460-2466.2009.01469.x

Buzzanell, P. M. (2019). “Communication theory of resilience in everyday talk, interactions, and network structures” in Reflections on interpersonal communication research . eds. S. R. Wilson and S. Smith (Solana Beach: Cognella), 65–88.

Cai, W. P., Pan, Y., Zhang, S. M., Wei, C., Dong, W., and Deng, G. H. (2017). Relationship between cognitive emotion regulation, social support, resilience and acute stress responses in Chinese soldiers: exploring multiple mediation model. Psychiatry Res. 256, 71–78. doi: 10.1016/j.psychres.2017.06.018

Campbell-Sills, L., and Stein, M. B. (2007). Psychometric analysis and refinement of the Connor–Davidson resilience scale (CD-RISC): validation of a 10-item measure of resilience. J. Trauma. Stress. 20, 1019–1028. doi: 10.1002/jts.20271

Chao, L., and Yang, F. (2018). Measuring religiosity in a religiously diverse society: the China case. Soc. Sci. Res. 74, 187–195. doi: 10.1016/j.ssresearch.2018.04.001

Chen, X., He, J., and Fan, X. (2020). Applicability of the ego-resilience scale (ER89) in the Chinese cultural context: a validation study. J. Psychoeduc. Assess. 38, 675–691. doi: 10.1177/0734282919889242

Chen, X., and Padilla, A. M. (2019). Emotions and creativity as predictors of resilience among L3 learners in the Chinese educational context. Curr. Psychol. 41, 406–416. doi: 10.1007/s12144-019-00581-7

Chen, X., Wang, Y., and Yan, Y. (2016). The essential resilience scale: instrument development and prediction of perceived health and behaviour. Stress. Health 32, 533–542. doi: 10.1002/smi.2659

Cheng, C., Dong, D., He, J., Zhong, X., and Yao, S. (2020). Psychometric properties of the 10-item Connor–Davidson resilience scale (CD-RISC-10) in Chinese undergraduates and depressive patients. J. Affect. Disord. 261, 211–220. doi: 10.1016/j.jad.2019.10.018

Chiu, S. J., Chou, Y. T., Chen, P. T., and Chien, L. Y. (2019). Psychometric properties of the mandarin version of the family resilience assessment scale. J. Child Fam. Stud. 28, 354–369. doi: 10.1007/s10826-018-1292-0

Chiu, S., Lin, I., Chou, Y., and Chien, L. (2020). Family quality of life among Taiwanese children with developmental delay before and after early intervention. J. Intellect. Disabil. Res. 64, 589–601. doi: 10.1111/jir.12754

Chung, J. O. K., Lam, K. K. W., Ho, K. Y., Cheung, A. T., Ho, L. K., Xei, V. W., et al. (2020). Psychometric evaluation of the traditional Chinese version of the resilience Scale-14 and assessment of resilience in Hong Kong adolescents. Health Qual. Life Outcomes 18:33. doi: 10.1186/s12955-020-01285-4

Connor, K. M., and Davidson, J. R. (2003). Development of a new resilience scale: the Connor-Davidson resilience scale (CD-RISC). Depress. Anxiety 18, 76–82. doi: 10.1002/da.10113

Deng, M., Pan, Y., Zhou, L., Chen, X., Liu, C., Huang, X., et al. (2018). Resilience and cognitive function in patients with schizophrenia and bipolar disorder, and healthy controls. Front. Psych. 9:279. doi: 10.3389/fpsyt.2018.00279

Ethnologue (2023). What are the top 200 most spoken languages? Available at: https://www.ethnologue.com/insights/ethnologue200/ (Accessed September 8, 2023).

Fan, Y., Mi, X., and Zhang, L. (2017). Zhongwenban jiating pinggu liangbiao zai aizheng huanzhe jiating zhong de xinxiaodu jiance [Testing the reliability and validity of the Chinese version of the family resilience assessment scale in families with cancer patients]. Chin. Gen. Pract. 20, 2894–2899. doi: 10.3969/j.issn.1007-9572.2017.05.y17

Farh, J. L., Cannella, A. A., and Lee, C. (2006). Approaches to scale development in Chinese management research. Manag. Organ. Rev. 2, 301–318. doi: 10.1111/j.1740-8784.2006.00055.x

Fletcher, D., and Sarkar, M. (2013). Psychological resilience. Eur. Psychol. 18, 12–23. doi: 10.1027/1016-9040/a000124

Friborg, O., Hjemdal, O., Rosenvinge, J. H., and Martinussen, M. (2003). A new rating scale for adult resilience: what are the central protective resources behind healthy adjustment? Int. J. Methods Psychiatr. Res. 12, 65–76. doi: 10.1002/mpr.143

Gao, M., Xiao, C., Zhang, X., Li, S., and Yan, H. (2018). Social capital and PTSD among PLWHA in China: the mediating role of resilience and internalized stigma. Psychol. Health Med. 23, 698–706. doi: 10.1080/13548506.2018.1426869

Gao, Y., Xie, S., and Frost, C. J. (2020). An ecological investigation of resilience among rural-urban migrant adolescents of low socioeconomic status families in China. J. Community Psychol. 48, 862–878. doi: 10.1002/jcop.22303

Gao, Z., Yang, S., Margraf, J., and Zhang, X. (2013). Reliability and validity test for Wagnild and Young’s resilience scale (RS-11) in Chinese. Chin. J. Health Psychol. 21, 1324–1326. doi: 10.13342/cnki.cjhp.2013.09.006

Gao, F., Yao, Y., Yao, C., Xiong, Y., Ma, H., and Liu, H. (2019). The mediating role of resilience and self-esteem between negative life events and positive social adjustment among left-behind adolescents in China: a cross-sectional study. BMC Psychiatry 19:239. doi: 10.1186/s12888-019-2219-z

Garmezy, N. (1974). “The study of competence in children at risk for severe psychopathology” in The child in his family. Vol. 3: Children at psychiatric risk . eds. E. J. Anthony and C. Koupernik (Hoboken, NJ: Wiley), 77–97.

Goodboy, A. K., and Martin, M. J. (2020). Omega over alpha for reliability estimation of unidimensional communication measures. Ann. Int. Commun. Assoc. 44, 422–439. doi: 10.1080/23808985.2020.1846135

Hao, S., Hong, W., Xu, H., Zhou, L., and Xie, Z. (2015). Relationship between resilience, stress and burnout among civil servants in Beijing, China: mediating and moderating effect analysis. Personal. Individ. Differ. 83, 65–71. doi: 10.1016/j.paid.2015.03.048

Hayes, A. F., and Coutts, J. J. (2020). Use omega rather than Cronbach’s alpha for estimating reliability. But…. Commun. Methods Meas. 14, 1–24. doi: 10.1080/19312458.2020.1718629

Hayes, A. F., and Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Commun. Methods Meas. 1 , 77–89. doi: 10.1080/19312450709336664

He, T. B., Tu, C., and Bai, X. (2022). Impact of social support on college students’ anxiety due to COVID-19 isolation: mediating roles of perceived risk and resilience in the postpandemic period. Front. Psychol. 13:948214. doi: 10.3389/fpsyg.2022.948214

Houston, J. B., and Buzzanell, P. M. (2020). Communication and resilience: introduction to the journal of applied communication research special issue. J. Appl. Commun. Res. 48, 1–4. doi: 10.1080/00909882.2020.1711956

Hsieh, H. F., Chang, S. C., and Wang, H. H. (2016). The relationships among personality, social support, and resilience of abused nurses at emergency rooms and psychiatric wards in Taiwan. Women Health 57, 40–51. doi: 10.1080/03630242.2016.1150385

Hu, Y., and Gan, Y. (2008). Development and psychometric validity of the resilience scale for Chinese adolescents. Acta Psychol. Sin. 40, 902–912. doi: 10.3724/sp.j.1041.2008.00902

Huang, F. F., Chen, W., Lin, Y. A., Hong, Y. T., and Chen, B. (2020). Cognitive reactivity among high-risk individuals at the first and recurrent episode of depression symptomology: a structural equation modelling analysis. Int. J. Ment. Health Nurs. 30, 334–345. doi: 10.1111/inm.12789

Jackson, J. L., and Kuriyama, A. (2019). How often do systematic reviews exclude articles not published in English? J. Gen. Intern. Med. 34, 1388–1389. doi: 10.1007/s11606-019-04976-x

Kolstad, A., and Gjesvik, N. (2014). Collectivism, individualism, and pragmatism in China: implications for perceptions of mental health. Transcult. Psychiatry 51, 264–285. doi: 10.1177/1363461514525220

Kong, F., Ma, X., You, X., and Xiang, Y. (2018). The resilient brain: psychological resilience mediates the effect of amplitude of low-frequency fluctuations in orbitofrontal cortex on subjective well-being in young healthy adults. Soc. Cogn. Affect. Neurosci. 13, 755–763. doi: 10.1093/scan/nsy045

Kuang, K., Tian, Z., Wilson, S. E., and Buzzanell, P. M. (2023). Memorable messages as anticipatory resilience: examining associations among memorable messages, communication resilience processes, and mental health. Health Commun. 38, 1136–1145. doi: 10.1080/10410236.2021.1993585

Kuang, K., Wilson, S. R., Tian, Z., and Buzzanell, P. M. (2022). Development and validation of a culturally adapted measure of communication resilience processes for Chinese contexts. Int. J. Intercult. Relat. 91, 70–87. doi: 10.1016/j.ijintrel.2022.09.003

Kwan, Y. H., Ng, A. Y. E., Lim, K. K., Fong, W., Phang, J. K., Chew, E., et al. (2019). Validity and reliability of the ten-item Connor–Davidson resilience scale (CD-RISC10) instrument in patients with axial spondyloarthritis (axSpA) in Singapore. Rheumatol. Int. 39, 105–110. doi: 10.1007/s00296-018-4217-8

Lau, C., Chiesi, F., Saklofske, D. H., Gao, Y., and Li, C. (2020). How essential is the essential resilience scale? Differential item functioning of Chinese and English versions and criterion validity. Personal. Individ. Differ. 155:109666. doi: 10.1016/j.paid.2019.109666

Lei, M., Li, C., Xiao, X., Qiu, J., Dai, Y., and Zhang, Q. (2012). Evaluation of the psychometric properties of the Chinese version of the resilience scale in Wenchuan earthquake survivors. Compr. Psychiatry 53, 616–622. doi: 10.1016/j.comppsych.2011.08.007

Li, D. (2008). Qingshaonian xinjingdongtai fazhantedian ji butong tiaojiecelve dui qi xinjingbianhua yingxiang de yanjiu [A study on the dynamic development of juveniles’ mood and the influences of emotion regulation strategies]. [Dissertation]. [Beijing (BJ)]: Capital Normal University.

Li, Y., Cao, F., Cao, D., and Liu, J. (2014). Nursing students’ post-traumatic growth, emotional intelligence and psychological resilience. J. Psychiatr. Ment. Health Nurs. 22, 326–332. doi: 10.1111/jpm.12192

Li, Q., Liang, D., Chen, P., and Xu, W. (2017). The emotionality, resilience, and hardiness traits of migrant workers and their predictive effect on psychological health. Chin. J. Appl. Psychol. 23, 278–284.

Li, Y., Wang, K., Yin, Y., Li, Y., and Li, S. (2018). Relationships between family resilience, breast cancer survivors’ individual resilience, and caregiver burden: a cross-sectional study. Int. J. Nurs. Stud. 88, 79–84. doi: 10.1016/j.ijnurstu.2018.08.011

Li, Y., Zhao, Y., Zhang, J., Lou, F., and Cao, F. (2016). Psychometric properties of the shortened Chinese version of the family resilience assessment scale. J. Child Fam. Stud. 25, 2710–2717. doi: 10.1007/s10826-016-0432-7

Liang, B., and Cheng, C. (2012). Psychological health diathesis assessment system: the development of resilient trait scale for Chinese adults. Stud. Psychol. Behav. 10, 269–277.

Liebenberg, L., Ungar, M., and LeBlanc, J. C. (2013). The CYRM-12: a brief measure of resilience. Can. J. Public Health 104, e131–e135. doi: 10.1007/bf03405676

Liebenberg, L., Ungar, M., and Vijver, F. V. D. (2012). Validation of the child and youth resilience Measure-28 (CYRM-28) among Canadian youth. Res. Soc. Work. Pract. 22, 219–226. doi: 10.1177/1049731511428619

Lin, M., Wolke, D., Schneider, S., and Margraf, J. (2020). Bullying history and mental health in university students: the mediator roles of social support, personal resilience, and self-efficacy. Front. Psych. 10:960. doi: 10.3389/fpsyt.2019.00960

Liu, F., Li, X., and Li, W. (2015). Research progress on resilience assessment tools. Chin. Nurs. Res. 29, 3211–3214. doi: 10.3969/j.issn.1009-6493.2015.26.003

Liu, W., Wang, B., Li, M., and Huang, L. (2017). A review of current status and prospects of research on resilience. J. Ningbo Univ. (Educ. Sci. Ed.) 39, 18–23. doi: 10.3969/j.issn.1008-0627.2017.01.005

Lo, F. S., Hsu, H. Y., Chen, B. H., Lee, Y. J., Chen, Y. T., and Wang, R. H. (2014). Factors affecting health adaptation of Chinese adolescents with type 1 diabetes. J. Child Health Care 20, 5–16. doi: 10.1177/1367493514540815

Lu, L., and Yang, K. (2006). Emergence and composition of the traditional-modern bicultural self of people in contemporary Taiwanese societies. Asian J. Soc. Psychol. 9, 167–175. doi: 10.1111/j.1467-839x.2006.00195.x

Luthans, F., Avolio, B. J., Avey, J. B., and Norman, S. M. (2007). Positive psychological capital: measurement and relationship with performance and satisfaction. Pers. Psychol. 60, 541–572. doi: 10.1111/j.1744-6570.2007.00083.x

Luthar, S. S., Cicchetti, D., and Becker, B. (2000). The construct of resilience: a critical evaluation and guidelines for future work. Child Dev. 71, 543–562. doi: 10.1111/1467-8624.00164

Ma, X., Shi, H., Wang, Y., Hu, H., Zhu, Q., and Zhang, Y. (2019). Reliability and validity of resilience scale for adults (RSA) in urban pregnant women. Fudan Univ. J. Med. Sci. 46, 1–7. doi: 10.3969/j.issn.1672-8467.2019.01.001

Ma, X., Wei, Q., Jiang, Z., Shi, Y., Zhang, Y., and Shi, H. (2020). The role of serum oxytocin levels in the second trimester in regulating prenatal anxiety and depression: a sample from Shanghai maternal-child pairs cohort study. J. Affect. Disord. 264, 150–156. doi: 10.1016/j.jad.2019.12.019

Masten, A. S. (2015). Ordinary magic: resilience in development . New York: Guilford.

Masten, A. S., and Barnes, A. (2018). Resilience in children: developmental perspectives. Children 5:98. doi: 10.3390/children5070098

Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst. Rev. 4:1. doi: 10.1186/2046-4053-4-1

Mu, G. M., and Hu, Y. (2016). Validation of the Chinese version of the 12-item child and youth resilience measure. Child Youth Serv. Rev. 70, 332–339. doi: 10.1016/j.childyouth.2016.09.037

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., et al. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. PLoS Med. 18:e1003583. doi: 10.1371/journal.pmed.1003583

Panter-Brick, C., and Leckman, J. F. (2013). Editorial commentary: resilience in child development — interconnected pathways to wellbeing. J. Child Psychol. Psychiatry 54, 333–336. doi: 10.1111/jcpp.12057

Peng, L., Li, J., Li, M., Zhang, J., Zuo, X., Miao, Y., et al. (2011). Application of resilience scale for adults in Chinese army. J. Third Med. Univ. 19, 2081–2084. doi: 10.16016/j.1000-5404.2011.19.002

Qiu, T., Yang, Y., Liu, C., Tian, F., Gu, Z., Yang, S., et al. (2020). The association between resilience, perceived organizational support and fatigue among Chinese doctors: a cross-sectional study. J. Affect. Disord. 265, 85–90. doi: 10.1016/j.jad.2020.01.056

Quan, L., Zhen, R., Yao, B., Zhou, X., and Yu, D. (2017). The role of perceived severity of disaster, rumination, and trait resilience in the relationship between rainstorm-related experiences and PTSD amongst Chinese adolescents following rainstorm disasters. Arch. Psychiatr. Nurs. 31, 507–515. doi: 10.1016/j.apnu.2017.06.003

Ramsay, J. E., Yang, F., Pang, J. S., Lai, C. M., Ho, R. C., and Mak, K. K. (2015). Divergent pathways to influence: cognition and behavior differentially mediate the effects of optimism on physical and mental quality of life in Chinese university students. J. Health Psychol. 20, 963–973. doi: 10.1177/1359105313504441

Richardson, G. E. (2002). The metatheory of resilience and resiliency. J. Clin. Psychol. 58, 307–321. doi: 10.1002/jclp.10020

Rutter, M. (1990). “Psychosocial resilience and protective mechanisms” in Risk and protective factors in the development of psychopathology . eds. J. E. Rolf, A. S. Masten, D. Cicchetti, K. H. Nuechterlein, and S. Weintraub (Cambridge: Cambridge University Press), 181–214.

Shi, X., and Wu, J. (2020). Chronic stress and anticipatory event-related potentials: the moderating role of resilience. Stress 23, 607–613. doi: 10.1080/10253890.2020.1766019

Sixbey, M. T. (2005). Development of the family resilience assessment scale to identify family resilience constructs. [Dissertation]. [Gainesville (FL)]: University of Florida

Southwick, S. M., Bonanno, G. A., Masten, A. S., Panter-Brick, C., and Yehuda, R. (2014). Resilience definitions, theory, and challenges: interdisciplinary perspectives. Eur. J. Psychotraumatol. 5:25338. doi: 10.3402/ejpt.v5.25338

Tan, K. K., Chan, S. W. C., Wang, W., and Vehviläinen-Julkunen, K. (2016). A salutogenic program to enhance sense of coherence and quality of life for older people in the community: a feasibility randomized controlled trial and process evaluation. Patient Educ. Couns. 99, 108–116. doi: 10.1016/j.pec.2015.08.003

Terwee, C. B., Bot, S. D., de Boer, M. R., van der Windt, D. A., Knol, D. L., Dekker, J., et al. (2007). Quality criteria were proposed for measurement properties of health status questionnaires. J. Clin. Epidemiol. 60, 34–42. doi: 10.1016/j.jclinepi.2006.03.012

Terwee, C. B., Peipert, J. D., Chapman, R., Lai, J. S., Terluin, B., Cella, D., et al. (2021). Minimal important change (MIC): a conceptual clarification and systematic review of MIC estimates of PROMIS measures. Qual. Life Res. 30, 2729–2754. doi: 10.1007/s11136-021-02925-y

Tian, J., and Hong, J. S. (2013). Validation of the Chinese version of the resilience scale and its cutoff score for detecting low resilience in Chinese cancer patients. Support. Care Cancer 21, 1497–1502. doi: 10.1007/s00520-012-1699-x

Tian, L., Liu, L., and Shan, N. (2018). Parent–child relationships and resilience among Chinese adolescents: the mediating role of self-esteem. Front. Psychol. 9:1030. doi: 10.3389/fpsyg.2018.01030

Ungar, M. (2004). A constructionist discourse on resilience multiple contexts, multiple realities among at-risk children and youth. Youth Soc. 35, 341–365. doi: 10.1177/0044118x03257030

Ungar, M. (2009). Resilience across cultures. Br. J. Soc. Work 38, 218–235. doi: 10.1093/bjsw/bcl343

Ungar, M. (2015). “Resilience and culture: the diversity of protective processes and positive adaptation” in Youth resilience and culture . eds. L. Theron, L. Liebenberg, and M. Ungar (New York: Springer), 37–48.

Ungar, M., and Liebenberg, L. (2011). Assessing resilience across cultures using mixed methods: construction of the child and youth resilience measure. J. Mixed Methods Res. 5, 126–149. doi: 10.1177/1558689811400607

Wagnild, G. M., and Young, H. M. (1993). Development and psychometric evaluation of the resilience scale. J. Nurs. Meas. 1, 165–178.

Walsh, F. (2002). A family resilience framework: innovative practice applications. Fam. Relat. 51, 130–137. doi: 10.1111/j.1741-3729.2002.00130.x

Walsh, F. (2016). Applying a family resilience framework in training, practice, and research: mastering the art of the possible. Fam. Process 55, 616–632. doi: 10.1111/famp.12260

Wang, T., and Gao, D. (2022). How does psychological resilience influence subjective career success of internet marketers in China? A moderated mediation model. Front. Psychol. 13:921721. doi: 10.3389/fpsyg.2022.921721

Wang, A., and Lu, J. (2022). Validation of the Chinese version of the Walsh family resilience questionnaire. Fam. Process 62, 368–386. doi: 10.1111/famp.12751

Wang, L., Shi, Z., Zhang, Y., and Zhang, Z. (2010). Psychometric properties of the 10-item Connor–Davidson resilience scale in Chinese earthquake victims. Psychiatry Clin. Neurosci. 64, 499–504. doi: 10.1111/j.1440-1819.2010.02130.x

Wang, Z., and Xu, J. (2017). Association between resilience and quality of life in Wenchuan earthquake shidu parents: the mediating role of social support. Community Ment. Health J. 53, 859–863. doi: 10.1007/s10597-017-0099-6

Werner, E. E., Bierman, J. M., and French, F. E. (1971). The children of Kauai: A longitudinal study from the prenatal period to age ten . Honolulu: University of Hawaii Press.

Wilson, S. R., Kuang, K., Hintz, E. A., and Buzzanell, P. M. (2021). Developing and validating the communication resilience processes scale. J. Commun. 71, 478–513. doi: 10.1093/joc/jqab013

Windle, G. (2010). What is resilience? A review and concept analysis. Rev. Clin. Gerontol. 21, 152–169. doi: 10.1017/s0959259810000420

Windle, G., Bennett, K. M., and Noyes, J. (2011). A methodological review of resilience measurement scales. Health Qual. Life Outcomes 9:8. doi: 10.1186/1477-7525-9-8

Windle, G., MacLeod, C., Algar-Skaife, K., Stott, J., Waddington, C. S., Camic, P. M., et al. (2022). A systematic review and psychometric evaluation of resilience measurement scales for people living with dementia and their carers. BMC Med. Res. Methodol. 22:298. doi: 10.1186/s12874-022-01747-x

Worthington, R. L., and Whittaker, T. A. (2006). Scale development research. Couns. Psychol. 34, 806–838. doi: 10.1177/0011000006288127

Xi, Y., Yu, H., Yao, Y., Peng, K., Wang, Y., and Chen, R. (2020). Post-traumatic stress disorder and the role of resilience, social support, anxiety and depression after the Jiuzhaigou earthquake: a structural equation model. Asian J. Psychiatr. 49:101958. doi: 10.1016/j.ajp.2020.101958

Xiang, X., Tian, G., Wang, X., and Han, L. (2012). Ertong qingshaonian kangnili celiang zhongwenban zai Beijing qingshaonian zhong de shiyongxing yanjiu [Examining the applicability of the Chinese version of the child and youth resilience measure to Beijing adolescents]. China Youth Study 5, 5–10. doi: 10.19633/j.cnki.11-2579/d.2014.05.001

Xie, Y., Peng, L., Zuo, X., and Li, M. (2016). The psychometric evaluation of the Connor-Davidson resilience scale using a Chinese military sample. PLoS One 11:e0148843. doi: 10.1371/journal.pone.0148843

Yang, L., and Lv, Y. (2008). The psychometric analysis of the resilience scale for adults. Highlights of Sciencepaper Online 1, 1–13. Available at: http://www.paper.edu.cn/releasepaper/content/200806-94

Yang, X., You, L., Jin, D., Zou, X., Yang, H., and Liu, T. (2020). A community-based cross-sectional study of sleep quality among internal migrant workers in the service industry. Compr. Psychiatry 97:152154. doi: 10.1016/j.comppsych.2019.152154

Yang, X., Zhao, L., Wang, L., Hao, C., Gu, Y., Song, W., et al. (2016). Quality of life of transgender women from China and associated factors: a cross-sectional study. J. Sex. Med. 13, 977–987. doi: 10.1016/j.jsxm.2016.03.369

Ye, Z. J., Liang, M. Z., Zhang, H. W., Li, P. F., Ouyang, X. R., Yu, Y. L., et al. (2018). Psychometric properties of the Chinese version of resilience scale specific to cancer: an item response theory analysis. Qual. Life Res. 27, 1635–1645. doi: 10.1007/s11136-018-1835-2

Ye, Z., Yang, X., Zeng, C., Wang, Y., Shen, Z., Li, X., et al. (2020). Resilience, social support, and coping as mediators between COVID-19-related stressful experiences and acute stress disorder among college students in China. Appl. Psychol. Health Well Being 12, 1074–1094. doi: 10.1111/aphw.12211

Ying, L., Wang, Y., Lin, C., and Chen, C. (2016). Trait resilience moderated the relationships between PTG and adolescent academic burnout in a post-disaster context. Personal. Individ. Differ. 90, 108–112. doi: 10.1016/j.paid.2015.10.048

Yu, N. X., Lam, T. H., Liu, I. K. F., and Stewart, S. M. (2015). Mediation of short and longer term effects of an intervention program to enhance resilience in immigrants from mainland China to Hong Kong. Front. Psychol. 6:1769. doi: 10.3389/fpsyg.2015.01769

Yu, X., and Zhang, J. (2005). Resilience: the psychological mechanism for recovery and growth during stress. Adv. Psychol. Sci. 13, 658–665. Available at: http://journal.psych.ac.cn/xlkxjz/EN/Y2005/V13/I05/658

Yu, X., and Zhang, J. (2007). Factor analysis and psychometric evaluation of the Connor-Davidson resilience scale (CD-RISC) with Chinese people. Soc. Behav. Personal. Int. J. 35, 19–30. doi: 10.2224/sbp.2007.35.1.19

Zhang, J., Zhang, J., Zhou, M., and Yu, N. X. (2017). Neighborhood characteristics and older adults’ well-being: the roles of sense of community and personal resilience. Soc. Indic. Res. 137, 949–963. doi: 10.1007/s11205-017-1626-0

Zhou, J., He, B., He, Y., Huang, W., Zhu, H., Zhang, M., et al. (2020). Measurement properties of family resilience assessment questionnaires: a systematic review. Fam. Pract. 37, 581–591. doi: 10.1093/fampra/cmaa027

Zhou, Z. K., Liu, Q. Q., Niu, G. F., Sun, X. J., and Fan, C. Y. (2017). Bullying victimization and depression in Chinese children: a moderated mediation model of resilience and mindfulness. Personal. Individ. Differ. 104, 137–142. doi: 10.1016/j.paid.2016.07.040

Keywords: resilience, Chinese cultural contexts, cross-cultural scale adaptation, measurement, scale development, systematic review

Citation: Tian Z, Kuang K, Wilson SR, Buzzanell PM, Ye J, Mao X and Wei H (2024) Measuring resilience for Chinese-speaking populations: a systematic review of Chinese resilience scales. Front. Psychol. 15:1293857. doi: 10.3389/fpsyg.2024.1293857

Received: 13 September 2023; Accepted: 11 March 2024; Published: 28 March 2024.

Copyright © 2024 Tian, Kuang, Wilson, Buzzanell, Ye, Mao and Wei. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Zhenyu Tian, [email protected]

  • Systematic Review
  • Open access
  • Published: 28 March 2024

Thoracolumbar Interfascial Plane (TLIP) block versus other paraspinal fascial plane blocks and local infiltration for enhanced pain control after spine surgery: a systematic review

  • Tarika D. Patel 1 ,
  • Meagan N. McNicholas 1 ,
  • Peyton A. Paschell 1 ,
  • Paul M. Arnold 2 &
  • Cheng-ting Lee 3  

BMC Anesthesiology volume 24, Article number: 122 (2024)

Spinal surgeries are accompanied by excessive pain due to extensive dissection and muscle retraction during the procedure. Thoracolumbar interfascial plane (TLIP) blocks for spinal surgeries are a recent addition to regional anesthesia to improve postoperative pain management. When performing a classical TLIP (cTLIP) block, anesthetics are injected between the muscle (m.) multifidus and m. longissimus. During a modified TLIP (mTLIP) block, anesthetics are injected between the m. longissimus and m. iliocostalis instead. Our systematic review provides a comprehensive evaluation of the effectiveness of TLIP blocks in improving postoperative outcomes in spinal surgery through an analysis of randomized controlled trials (RCTs).

We conducted a systematic review based on the PRISMA guidelines using the PubMed and Scopus databases. Inclusion criteria required studies to be RCTs in English that used TLIP blocks during spinal surgery and reported both outcome measures: postoperative opioid consumption and postoperative pain.

A total of 17 RCTs were included. The use of a TLIP block significantly decreased postoperative opioid use and pain compared with general anesthesia (GA) plus 0.9% saline, with no increase in complications. Outcomes were mixed when compared against wound infiltration with local anesthesia. When compared with erector spinae plane blocks (ESPB), TLIP blocks often decreased analgesic use; however, this did not always translate to decreased pain. The cTLIP and mTLIP block methods had comparable postoperative outcomes, but the mTLIP block had a significantly higher rate of one-time block success.

The accumulation of the current literature demonstrates that TLIP blocks are superior to non-block procedures in terms of analgesia requirements and reported pain throughout the hospitalization in patients who underwent spinal surgery. The varying levels of success seen with wound infiltration and ESPB could be due to the nature of the different spinal procedures. For example, studies that found TLIP blocks superior included fusion surgeries, which are more invasive procedures resulting in greater postoperative pain than discectomies.

The results of our systematic review include moderate-quality evidence that TLIP blocks provide effective pain control after spinal surgery. Although the application of mTLIP blocks is more successful, more studies are needed to confirm the superiority of mTLIP over cTLIP blocks. Additionally, further high-quality research is needed to verify the potential benefit of TLIP blocks as common practice for spinal surgeries.

Peer Review reports

Introduction

Spinal surgeries are often accompanied by excessive pain due to extensive tissue dissection and muscle retraction during the procedure [ 1 , 2 ]. Effective pain control is a crucial aspect of patient comfort and a pivotal determinant of overall surgical outcomes. Regional anesthesia techniques have gained prominence in the quest for optimal analgesia, with thoracolumbar interfascial plane (TLIP) blocks emerging as a noteworthy option.

Opioids are commonly used for post-spinal surgery pain management [ 3 , 4 ]. While opioids provide effective analgesia, their use is associated with reoperations and can lead to undesired outcomes such as long-term dependence, nausea, vomiting, and respiratory depression. Multimodal analgesic regimens combine two or more analgesics or techniques to reduce the dose of each individual drug, supporting the goal of reducing opioid use while providing adequate pain control [ 5 , 6 ]. While there is no single optimal analgesic combination, unless there are patient-specific contraindications, all patients should receive a combination of acetaminophen and nonsteroidal anti-inflammatory drugs (NSAIDs) perioperatively or intraoperatively, continued postoperatively as scheduled dosing. Furthermore, all patients should receive surgical site infiltration and/or regional anesthesia (an interfascial plane or peripheral nerve block). While opioid use should be reduced, the role of opioid-free analgesia remains controversial. In the acute postoperative period, opioids should be administered only as a rescue agent. Intravenous (IV) analgesics should be limited, with the goal of transitioning patients to oral medications without impeding ambulation and rehabilitation [ 7 ]. Therefore, a multimodal pain regimen is key to improving patient outcomes and reducing total opioid consumption.

In 2015, Hand et al. [ 8 ] introduced the classical TLIP (cTLIP) block, which targets the dorsal rami of the thoracolumbar nerves. This is a relatively recent addition to regional anesthesia techniques for spinal surgeries. It involves the precise administration of local anesthetics between the multifidus and longissimus paraspinal muscles at the third lumbar vertebra, often under ultrasound guidance. It is often difficult to delineate between the two muscles; however, lumbar extension can help improve visualization of the intended injection site. This technique is designed to selectively target the sensory innervation of the thoracolumbar region, potentially offering a valuable alternative to systemic opioids. To address the challenges and difficulties seen with the cTLIP block, Ahiskalioglu et al. [ 9 ] introduced the modified TLIP (mTLIP) block in 2017, in which anesthetics are instead injected between the longissimus and iliocostalis muscles. The erector spinae plane block (ESPB) is similar to the TLIP block; however, an ESPB targets both the ventral and dorsal rami of the thoracic and abdominal spinal nerves by injecting anesthetics between the erector spinae muscle and the transverse processes of the vertebrae. By targeting only the dorsal rami of spinal nerves, the TLIP block provides more focused dermatomal coverage of the back muscles, which could lead to better-controlled postoperative pain [ 10 ].

Our systematic review endeavors to provide a comprehensive evaluation of the effectiveness of TLIP blocks in improving postoperative outcomes in spinal surgery. The primary objectives encompass a multifaceted exploration of the impact of TLIP blocks for patients undergoing lumbar spinal surgery, focusing on postoperative pain control, opioid consumption, and the incidence of complications. We aim to provide a nuanced understanding of how TLIP blocks fare in comparison to other anesthesia modalities commonly employed in spinal surgery through a meticulous analysis of randomized controlled trials.

Methods

We conducted a systematic review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed, Scopus, and clinicaltrials.gov were the databases used. The search strategy focused on “thoracolumbar interfascial plane blocks” for “spine” surgeries. Multiple search phrases and keywords were used to limit bias and capture studies that might not appear in a single search. The snowball method was used to collect references from other systematic reviews for potentially relevant articles missed by the initial search. All abstracts were first read in their entirety for initial screening. The full texts of studies with potential for final inclusion were then evaluated for eligibility based on the inclusion and exclusion criteria. Each article was reviewed by two independent researchers to determine inclusion based on our predetermined criteria, and inclusion was then confirmed by a third reviewer.

Inclusion criteria required studies to be randomized controlled trials (RCTs) in the English language that evaluated the impact of TLIP blocks during spinal surgery on postoperative pain and analgesia. Cohort studies were not included because most had been retracted, and case studies provided minimal quantifiable outcome measurements. Included studies had to use TLIP blocks for a spine-based surgery and report standardized outcome measures of both postoperative analgesic use and pain. Studies that were not randomized controlled trials in human patients, did not report any outcome data, or involved surgery beyond the spine were excluded.

We collected data regarding age range, total number of participants, type of surgery, treatment characteristics, and type of anesthetic mixture. Outcome data included intraoperative and postoperative opioid consumption, time to postoperative analgesia, and postoperative pain. Complications were also collected for each study. Continuous variables were reported as mean ± standard deviation or median (interquartile range). Categorical variables were reported as frequencies with percentages. Associations were considered statistically significant at a p-value < 0.05. Studies were grouped based on the type of control the TLIP blocks were compared against.
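As a minimal sketch of the reporting conventions above (with hypothetical values; the helper names are our own, not from the review), the summary statistics could be produced as:

```python
import statistics

def summarize_continuous(values, normal=True):
    # Mean ± standard deviation for normally distributed data,
    # otherwise median (interquartile range).
    if normal:
        return f"{statistics.mean(values):.1f} ± {statistics.stdev(values):.1f}"
    q1, _, q3 = statistics.quantiles(values, n=4)
    return f"{statistics.median(values):.1f} ({q1:.1f}-{q3:.1f})"

def summarize_categorical(count, total):
    # Frequency with percentage, e.g. patients with nausea per treatment arm.
    return f"{count}/{total} ({100 * count / total:.0f}%)"

ALPHA = 0.05  # significance threshold used in the review

vas_scores = [3, 4, 2, 5, 3, 4]          # hypothetical postoperative VAS scores
print(summarize_continuous(vas_scores))   # mean ± SD
print(summarize_categorical(3, 40))       # e.g. 3 of 40 patients with a complication
```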

The critical appraisal of included studies was performed using the JBI assessment tool for risk of bias in randomized controlled trials [ 11 ]. This tool includes 11 items that assess a variety of biases, such as selection, performance, and measurement bias. Each item can receive an answer of yes, no, unclear, or not applicable. Studies with more “yes” answers have a lower risk of bias, and those with more “no” answers have a higher risk of bias.
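The scoring logic can be sketched as follows; the category cut-offs are illustrative assumptions of ours, not part of the JBI tool, which leaves the overall judgment to the reviewer:

```python
from collections import Counter

def risk_of_bias(answers):
    # answers: one of "yes", "no", "unclear", "na" per checklist item.
    counts = Counter(a.lower() for a in answers)
    applicable = len(answers) - counts["na"]
    yes_share = counts["yes"] / applicable if applicable else 0.0
    # Illustrative cut-offs: mostly "yes" -> low risk, mostly "no" -> high risk.
    if yes_share >= 0.85:
        return "low"
    if yes_share >= 0.6:
        return "moderate"
    return "high"

print(risk_of_bias(["yes"] * 11))                   # all criteria met -> low
print(risk_of_bias(["yes"] * 7 + ["unclear"] * 4))  # several doubts -> moderate
```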

Results

A total of 17 RCTs were included in this study, with patient ages ranging from 18 to 74 years (Fig.  1 ). Risk of bias was moderate to high, as several studies had areas where it was doubtful that criteria were met (Table  1 ). Only two studies, Chen et al. [ 12 ] and Ahiskalioglu et al. [ 13 ], met all criteria, giving them a low risk of bias. Additionally, only three studies met the criterion that those delivering the treatment were blinded to treatment assignment (question 5).

Fig. 1 PRISMA flow diagram for study selection

The types of surgeries performed, most often at the lumbar level, include discectomies, fusions, and decompression/stabilization procedures. TLIP blocks were performed after induction of general anesthesia (GA), by either the modified or the classical method. The TLIP blocks were compared to GA plus 0.9% saline ( n  = 5), wound infiltration ( n  = 4), ESPB ( n  = 4), a quadratus lumborum block (QLB) ( n  = 1), or epidural analgesia ( n  = 1). Two studies compared the two modes of TLIP block, classical and modified (Table  2 ).

The most common anesthetic regimen was bilateral injections of 20 mL of 0.25% bupivacaine ( n  = 10). Other regimens included bilateral injections of 30 mL of 0.25% bupivacaine ( n  = 1), 30 mL of 0.375% ropivacaine ( n  = 2), and 20 mL of 0.2% ropivacaine ( n  = 1), along with a mixture of bupivacaine and lidocaine ( n  = 2) and a mixture of ropivacaine and lignocaine ( n  = 1).

The two main outcomes analyzed were postoperative pain and opioid consumption. Pain intensity was reported using the visual analog scale (VAS) or the numeric rating scale (NRS), both of which use a scale of 0 to 10. Analgesic consumption measures included the amount of opioid use, time to first analgesia, percentage of patients requiring rescue analgesia, and frequency of patient-controlled analgesia (PCA) use. The most frequently reported complications of the anesthetic blocks were nausea and vomiting. The rate of complications was low, and differences between treatment groups were not significant for most studies. Ahiskalioglu [ 13 ], Ciftci [ 10 ], and Ekinci [ 26 ] were the only studies that reported a significant decrease in nausea with the TLIP block.

Overall, the use of a TLIP block for spinal surgery significantly decreased postoperative opioid use and pain compared with GA plus 0.9% saline alone, with no increase in complications. The time before analgesia was requested increased significantly for patients who received a TLIP block.

When TLIP blocks were compared against wound infiltration of local anesthesia, two studies, Ince et al. [ 19 ] and Bicak et al. [ 27 ], found wound infiltration was as effective as a TLIP block for postoperative pain relief. On the other hand, Ekinci et al. [ 26 ] and Pavithran et al. [ 15 ] found TLIP blocks to be superior.

There were varying levels of success when TLIP blocks were compared with ESPBs. Kumar et al. [ 14 ] found that patients given ESPBs reported significantly decreased total opioid consumption and decreased pain for up to 24 h. However, Ciftci et al. [ 10 ] saw no difference in analgesic efficacy between the ESPB and TLIP block groups, although postoperative opioid use was significantly decreased in both compared to those who received neither block. Similarly, Tantri et al. [ 17 ] saw no difference in postoperative pain control between the two block groups; however, the TLIP block provided a prolonged duration of analgesia, as seen by a significantly increased time until first analgesia.

TLIP blocks were also compared against a posterior QLB and epidural analgesia. TLIP blocks provided superior analgesia, with the quality of recovery score (QoR-40), Kaplan–Meier survival analysis, and postoperative pain control all favoring patients who received TLIP blocks.

When the two methods of TLIP block were compared against each other, there was no significant difference in postoperative pain or opioid use. However, Ciftci et al. [ 21 ] showed that the mTLIP block method had a significantly higher one-time block success rate of 90%, compared with 40% for the cTLIP block.

Discussion

Conventional spinal surgeries often involve extensive dissection of subcutaneous tissues, bones, and ligaments, resulting in a high degree of postoperative pain and strikingly high use of opioid analgesics [ 1 , 28 ]. The long-term consequences of postoperative opioid analgesia, including dependence and addiction, are well documented and a feared sequela of prescribing these medications. One trial demonstrated opioid overuse in spine surgery, with postsurgical opioid dependence rising from 0% to nearly 48% among patients who underwent surgical fusion for degenerative scoliosis between the early 2000s and mid-2010s [ 28 ]. Effective pain control is thus an important aspect of postoperative care, supporting the clinical value of our study. The use of TLIP blocks during spine surgery may provide better postsurgical pain control and could decrease the incidence of chronic pain. However, current studies only evaluate the effect of TLIP blocks on pain during the first few days after surgery. Thus, further research with longer follow-up is needed to better evaluate the effect on chronic pain over the course of weeks to months after surgery.

The use of regional anesthesia is supported by the Enhanced Recovery After Surgery protocols with the goal of minimizing opioid consumption in patients. One novel technique includes the use of TLIP blocks, first introduced by Hand et al. [ 8 ]. TLIP blocks involve targeting the dorsal rami of thoracolumbar nerves as they pass through the paraspinal muscles. The TLIP block is analogous to the transversus abdominis plane (TAP) block for abdominal procedures where the ventral rami of the thoracolumbar nerves are targeted instead. Given the success of TAP blocks in providing analgesia, TLIP blocks were hypothesized to provide a similar benefit for spinal surgeries. The accumulation of the current literature demonstrates that TLIP blocks are superior to non-block procedures in terms of analgesia requirements (total opioid use and time to analgesia) and reported pain throughout the hospitalization in patients who underwent spinal surgery.

Hand et al. [ 8 ] developed what is now known as the cTLIP block, in which the needle is inserted at a 30-degree angle from the skin between the muscle (m.) multifidus and m. longissimus and advanced in a lateral-to-medial direction. Ahiskalioglu et al. [ 9 ] modified the TLIP block by injecting anesthetics at a 15-degree angle in a medial-to-lateral direction, between the m. longissimus and m. iliocostalis. The advantages of the mTLIP block are the elimination of the risk of inadvertent neuraxial injection and an increased block success rate, as the m. longissimus is more easily discernible from the m. iliocostalis than from the m. multifidus. The two studies directly comparing the methods demonstrate similar postoperative analgesic effects; however, the block success rate was significantly higher with the modified version, supporting the conclusions of Ahiskalioglu et al. [ 9 ]. Given the limited reports comparing the two methods of the TLIP block, more RCTs should be conducted to further validate the mTLIP block and its advantages. It is also important to note the proposed nomenclature for paraspinal interfascial plane (PIP) blocks, given the new variations on the original TLIP block by Hand et al. [ 8 ]. A complication is that the paraspinal muscles of the cervical, thoracic, and lumbar regions all have different anatomy, so a dorsal ramus block technique is specific to each area [ 29 ]. Naming blocks after the target muscle fascia, as in PIP blocks, could offer more clarity. For example, the TLIP block would include the thoracic multifidus plane (TMP) and lumbar multifidus plane (LMP) blocks, while the mTLIP block would include the thoracic longissimus plane (TLP) and lumbar longissimus plane (LLP) blocks.

The clinical efficacy of wound infiltration with local anesthetics is questionable, given the varying levels of success seen in studies. A systematic review [ 30 ] found only a few RCTs showing a modest reduction in pain intensity, mainly immediately after the operation, and a minor reduction in opioid use with local anesthetic wound infiltration for lumbar spine surgeries. Reports among RCTs comparing wound infiltration against TLIP blocks were also mixed. The varying levels of success may be due in part to the nature of the surgery. Ince et al. [ 19 ] and Bicak et al. [ 27 ] saw no difference in postoperative analgesic use, which may be because discectomies are less invasive than spinal fusion surgeries. The studies that found TLIP blocks superior to wound infiltration included patients who underwent lumbar fusion surgeries, providing further support for this conclusion.

ESPBs are another type of fascial plane block in which anesthetics are injected between the erector spinae muscles and the thoracic transverse processes, blocking the dorsal and ventral rami of the thoracic and abdominal spinal nerves [ 31 ]. An RCT by Avis et al. [ 32 ] found that a lumbar ESPB combined with the Enhanced Recovery After Surgery (ERAS) program did not decrease opioid use compared with saline after major spine surgery. Furthermore, quality of life at 3 months was similar between the control and treatment groups, further demonstrating the limited benefit of the block. On the other hand, several systematic reviews [ 33 , 34 , 35 ] found that ESPBs decreased postoperative pain and opioid consumption in patients undergoing spinal surgery. However, much of the evidence is low quality and insufficient to support the widespread use of ESPBs for spine surgery. There were mixed results regarding the efficacy of TLIP blocks versus ESPBs. All but one report saw a clear decrease in analgesic consumption with TLIP blocks; however, this did not always translate to a decrease in pain intensity or a difference in complication rates. The slight benefit of the TLIP block may be due to its ability to provide more focused analgesia than an ESPB [ 36 ]. While the evidence shows that fascial plane blocks improve outcomes after spine surgery, it is difficult to conclude which block is superior given the limited reports available. The decision to perform one technique over the other may be based on physician and institutional preference and expertise [ 37 ].

It is important to note that while some studies show TLIP blocks producing a statistically significant decrease in pain, this change in pain perception does not appear to be clinically significant. A study by Smith et al. [ 38 ] sought to determine the magnitude of pain reduction that is meaningful to patients with acute or chronic pain: patients rate a reduction in pain intensity of 10–20% as “minimally important,” ≥ 30% as “moderately important,” and ≥ 50% as “substantially important.” In our review, the mean difference in VAS/NRS pain scores across studies was rarely greater than one point and never greater than two. Thus, pain is reduced by only 10–20%, which is unlikely to provide patients with a meaningful improvement in pain control. Therefore, the true value of TLIP blocks for spine surgery likely lies in the reduction of analgesic and opioid consumption.
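The thresholds from Smith et al. [ 38 ] can be applied as a short worked example (the baseline score of 5 is a hypothetical control-group mean, not a value reported in the included trials):

```python
def clinical_importance(mean_difference, baseline):
    # Percent reduction in pain relative to the control group's mean score,
    # classified with the thresholds from Smith et al. [38].
    pct = 100 * mean_difference / baseline
    if pct >= 50:
        return "substantially important"
    if pct >= 30:
        return "moderately important"
    if pct >= 10:
        return "minimally important"
    return "not meaningful"

# A mean VAS difference of ~1 point against a hypothetical control mean of 5
# is only a 20% reduction:
print(clinical_importance(1, 5))    # minimally important
print(clinical_importance(2.5, 5))  # a 50% reduction would be substantial
```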

Our review includes 17 RCTs and provides an update to previous systematic reviews that included studies since retracted or removed from publication. Such studies were not included in this report, increasing the strength and validity of our findings. In general, our results are consistent with previous conclusions, with some differences. Both prior meta-analyses, by Ye et al. [ 39 ] and Long et al. [ 40 ], found TLIP blocks to drastically reduce opioid use and provide effective pain control compared to no block or a sham block. However, neither includes studies comparing TLIP blocks against other types of paraspinal blocks. Second, while the number of reports is limited, neither mentions studies comparing the modified and classical versions of the TLIP block. Lastly, while the superiority of TLIP blocks over wound infiltration appeared to depend on the type of spinal surgery, Ye et al. found TLIP blocks to be superior overall.

Limitations

Our review has inevitable limitations. First, there is a lack of homogeneity across studies. The heterogeneity is due to differences in subject characteristics, anesthetic agents and protocols, postoperative analgesic protocols, and type of surgery. Different spinal surgeries with varying levels of invasiveness make comparison between studies more difficult, as less invasive procedures are by nature expected to result in less postoperative pain than their more invasive counterparts. Additionally, slight variations in the formulation of the anesthetic provided and the mode of delivery may have produced differences in effectiveness that we were unable to account for. Furthermore, data on outcome measures were reported in different metrics, and some variables, such as the need for rescue analgesia, the QoR-40 score, and the Bruggemann comfort scale score, were sparse across studies. Lastly, there were limited studies comparing TLIP blocks against wound infiltration and other paraspinal blocks, and comparing the two modes of TLIP block. Overall, the risk of bias among studies was moderate. The presence of bias thus lowers the overall quality of, and confidence in, the evidence and conclusions.

Conclusion

The results of our systematic review provide evidence of the effectiveness of TLIP blocks in improving postoperative pain control. TLIP blocks showed improved outcomes after surgery, including lower pain scores and decreased analgesic requirements, compared to patients who received no block or wound infiltration. However, when comparing ESPBs and TLIP blocks, it is difficult to ascertain the appropriate choice of nerve block for spinal surgeries. mTLIP blocks appear to be superior to cTLIP blocks, but further research is needed to verify this.

Availability of data and materials

All data generated or analyzed during this study are included in this published article [and its supplementary information files].

Bajwa SJS, Haldar R. Pain management following spinal surgeries: An appraisal of the available options. J Craniovertebr Junction Spine. 2015;6(3):105–10. https://doi.org/10.4103/0974-8237.161589 .

Prabhakar NK, Chadwick AL, Nwaneshiudu C, et al. Management of Postoperative Pain in Patients Following Spine Surgery: A Narrative Review. Int J Gen Med. 2022;15:4535–49. https://doi.org/10.2147/IJGM.S292698 .

Warner NS, Habermann EB, Hooten WM, et al. Association Between Spine Surgery and Availability of Opioid Medication. JAMA Netw Open. 2020;3(6):e208974. https://doi.org/10.1001/jamanetworkopen.2020.8974 .

Yerneni K, Nichols N, Abecassis ZA, Karras CL, Tan LA. Preoperative Opioid Use and Clinical Outcomes in Spine Surgery: A Systematic Review. Neurosurg. 2020;86(6):E490–507. https://doi.org/10.1093/neuros/nyaa050 .

Nicholas TA, Robinson R. Multimodal Analgesia in the Era of the Opioid Epidemic. Surg Clin North Am. 2022;102(1):105–15. https://doi.org/10.1016/j.suc.2021.09.003 .

O’Neill A, Lirk P. Multimodal Analgesia. Anesthesiol Clin. 2022;40(3):455–68. https://doi.org/10.1016/j.anclin.2022.04.002 .

Joshi GP. Rational Multimodal Analgesia for Perioperative Pain Management. Curr Pain Headache Rep. 2023;27(8):227–37. https://doi.org/10.1007/s11916-023-01137-y .

Hand WR, Taylor JM, Harvey NR, et al. Thoracolumbar interfascial plane (TLIP) block: a pilot study in volunteers. Can J Anesth/J Can Anesth. 2015;62(11):1196–200. https://doi.org/10.1007/s12630-015-0431-y .

Ahiskalioglu A, Alici HA, Selvitopi K, Yayik AM. Ultrasonography-guided modified thoracolumbar interfascial plane block: a new approach. Can J Anesth/J Can Anesth. 2017;64(7):775–6. https://doi.org/10.1007/s12630-017-0851-y .

Ciftci B, Ekinci M, Celik EC, Yayik AM, Aydin ME, Ahiskalioglu A. Ultrasound-Guided Erector Spinae Plane Block versus Modified-Thoracolumbar Interfascial Plane Block for Lumbar Discectomy Surgery: A Randomized. Controlled Study World Neurosurg. 2020;144:e849–55. https://doi.org/10.1016/j.wneu.2020.09.077 .

Barker TH, Stone JC, Sears K, et al. The revised JBI critical appraisal tool for the assessment of risk of bias for randomized controlled trials. JBI Evidence Synthesis. 2023;21(3):494–506. https://doi.org/10.11124/JBIES-22-00430 .

Chen K, Wang L, Ning M, Dou L, Li W, Li Y. Evaluation of ultrasound-guided lateral thoracolumbar interfascial plane block for postoperative analgesia in lumbar spine fusion surgery: a prospective, randomized, and controlled clinical trial. PeerJ. 2019;7:e7967. https://doi.org/10.7717/peerj.7967 .

Ahiskalioglu A, Yayik AM, Doymus O, et al. Efficacy of ultrasound-guided modified thoracolumbar interfascial plane block for postoperative analgesia after spinal surgery: a randomized-controlled trial. Can J Anaesth. 2018;65(5):603–4. https://doi.org/10.1007/s12630-018-1051-0 .

Kumar A, Sinha C, Kumar A, et al. Modified thoracolumbar Interfascial Plane Block Versus Erector Spinae Plane Block in Patients Undergoing Spine Surgeries: A Randomized Controlled Trial. J Neurosurg Anesthesiol. Published online January 9, 2023. https://doi.org/10.1097/ANA.0000000000000900

Pavithran P, Sudarshan PK, Eliyas S, Sekhar B, Kaniachallil K. Comparison of thoracolumbar interfascial plane block with local anaesthetic infiltration in lumbar spine surgeries - A prospective double-blinded randomised controlled trial. Indian J Anaesth. 2022;66(6):436–41. https://doi.org/10.4103/ija.ija_1054_21 .

Eltaher E, Nasr N, Abuelnaga ME, Elgawish Y. Effect of Ultrasound-Guided Thoracolumbar Interfascial Plane Block on the Analgesic Requirements in Patients Undergoing Lumbar Spine Surgery Under General Anesthesia: A Randomized Controlled Trial. J Pain Res. 2021;14:3465–74. https://doi.org/10.2147/JPR.S329158 .

Tantri AR, Rahmi R, Marsaban AHM, Satoto D, Rahyussalim AJ, Sukmono RB. Comparison of postoperative IL-6 and IL-10 levels following Erector Spinae Plane Block (ESPB) and classical Thoracolumbar Interfascial Plane (TLIP) block in a posterior lumbar decompression and stabilization procedure: a randomized controlled trial. BMC Anesthesiol. 2023;23(1):13. https://doi.org/10.1186/s12871-023-01973-w .

Tantri AR, Sukmono RB, LumbanTobing SDA, Natali C. Comparing the Effect of Classical and Modified Thoracolumbar Interfascial Plane Block on Postoperative Pain and IL-6 Level in Posterior Lumbar Decompression and Stabilization Surgery. Anesth Pain Med. 2022;12(2):e122174. https://doi.org/10.5812/aapm-122174 .

Ince I, Atalay C, Ozmen O, et al. Comparison of ultrasound-guided thoracolumbar interfascial plane block versus wound infiltration for postoperative analgesia after single-level discectomy. J Clin Anesth. 2019;56:113–4. https://doi.org/10.1016/j.jclinane.2019.01.017 .

Ammar MA, Taeimah M. Evaluation of thoracolumbar interfascial plane block for postoperative analgesia after herniated lumbar disc surgery: A randomized clinical trial. Saudi J Anaesth. 2018;12(4):559–64. https://doi.org/10.4103/sja.SJA_177_18 .

Çiftçi B, Ekinci M. A prospective and randomized trial comparing modified and classical techniques of ultrasound-guided thoracolumbar interfascial plane block. Agri. 2020;32(4):186–92. https://doi.org/10.14744/agri.2020.72325 .

Alver S, Ciftci B, Celik EC, et al. Postoperative recovery scores and pain management: a comparison of modified thoracolumbar interfascial plane block and quadratus lumborum block for lumbar disc herniation. Eur Spine J. Published online June 14, 2023. https://doi.org/10.1007/s00586-023-07812-3

Ozmen O, Ince I, Aksoy M, Dostbil A, Atalay C, Kasali K. The Effect of the Modified Thoracolumbar Interfacial Nerve Plane Block on Postoperative Analgesia and Healing Quality in Patients Undergoing Lumbar Disk Surgery: A Prospective. Randomized Study Medeni Med J. 2019;34(4):340–5. https://doi.org/10.5222/MMJ.2019.36776 .

Wang L, Wu Y, Dou L, Chen K, Liu Y, Li Y. Comparison of Two Ultrasound-guided Plane Blocks for Pain and Postoperative Opioid Requirement in Lumbar Spine Fusion Surgery: A Prospective, Randomized, and Controlled Clinical Trial. Pain Ther. 2021;10(2):1331–41. https://doi.org/10.1007/s40122-021-00295-4 .

Çelik EC, Ekinci M, Yayik AM, Ahiskalioglu A, Aydi ME, Karaavci NC. Modified thoracolumbar interfascial plane block versus epidural analgesia at closure for lumbar discectomy: a randomized prospective study. APIC. 2020;24(6):588–95. https://doi.org/10.35975/apic.v24i6.1396 .

Ekinci M, Çiftçi B, Çelik E, Yayik A, Tahta A, Atalay Y. A Comparison of Ultrasound-Guided Modified Thoracolumbar Interfascial Plane Block and Wound Infiltration for Postoperative Pain Management in Lumbar Spinal Surgery Patients. Agri. Published online 2019. https://doi.org/10.14744/agri.2019.97759

Bicak M, Salik F, Aktas U, Akelma H, AktizBicak E, Kaya S. Comparison of thoracolumbar interfascial plane block with the application of local anesthesia in the management of postoperative pain in patients with lumbar disk surgery. Turkish Neurosurg. Published online 2021. https://doi.org/10.5137/1019-5149.JTN.33017-20.2 .

Berardino K, Carroll AH, Kaneb A, Civilette MD, Sherman WF, Kaye AD. An Update on Postoperative Opioid Use and Alternative Pain Control Following Spine Surgery. Orthop Rev. 2021;13(2). https://doi.org/10.52965/001c.24978 .

Xu JL, Tseng V. Proposal to standardize the nomenclature for paraspinal interfascial plane blocks. Reg Anesth Pain Med. Published online June 19, 2019:rapm-2019–100696. https://doi.org/10.1136/rapm-2019-100696

Kjærgaard M, Møiniche S, Olsen KS. Wound infiltration with local anesthetics for post-operative pain relief in lumbar spine surgery: a systematic review. Acta Anaesthesiol Scand. 2012;56(3):282–90. https://doi.org/10.1111/j.1399-6576.2011.02629.x .

Jain K, Jaiswal V, Puri A. Erector spinae plane block: Relatively new block on horizon with a wide spectrum of application - A case series. Indian J Anaesth. 2018;62(10):809–13. https://doi.org/10.4103/ija.IJA_263_18 .

Avis G, Gricourt Y, Vialatte PB, et al. Analgesic efficacy of erector spinae plane blocks for lumbar spine surgery: a randomized double-blind controlled clinical trial. Reg Anesth Pain Med. 2022;47(10):610–6. https://doi.org/10.1136/rapm-2022-103737 .

Liang X, Zhou W, Fan Y. Erector spinae plane block for spinal surgery: a systematic review and meta-analysis. Korean J Pain. 2021;34(4):487–500. https://doi.org/10.3344/kjp.2021.34.4.487 .

Qiu Y, Zhang TJ, Hua Z. Erector Spinae Plane Block for Lumbar Spinal Surgery: A Systematic Review. J Pain Res. 2020;13:1611–9. https://doi.org/10.2147/JPR.S256205 .

Elias E, Nasser Z, Elias C, et al. Erector Spinae Blocks for Spine Surgery: Fact or Fad? Systematic Review of Randomized Controlled Trials. World Neurosurg. 2022;158:106–12. https://doi.org/10.1016/j.wneu.2021.11.005 .

Hamilton DL. Does Thoracolumbar Interfascial Plane Block Provide More Focused Analgesia Than Erector Spinae Plane Block in Lumbar Spine Surgery? J Neurosurg Anesthesiol. 2021;33(1):92–3. https://doi.org/10.1097/ANA.0000000000000643 .

McCracken S, Lauzadis J, Soffin EM. Ultrasound-guided fascial plane blocks for spine surgery. Curr Opin Anaesthesiol. 2022;35(5):626–33. https://doi.org/10.1097/ACO.0000000000001182 .

Smith SM, Dworkin RH, Turk DC, et al. Interpretation of chronic pain clinical trial outcomes: IMMPACT recommended considerations. Pain. 2020;161(11):2446–61. https://doi.org/10.1097/j.pain.0000000000001952 .

Ye Y, Bi Y, Ma J, Liu B. Thoracolumbar interfascial plane block for postoperative analgesia in spine surgery: A systematic review and meta-analysis. PLoS ONE. 2021;16(5):e0251980. https://doi.org/10.1371/journal.pone.0251980 .

Long G, Liu C, Liang T, Zhan X. The efficacy of thoracolumbar interfascial plane block for lumbar spinal surgeries: a systematic review and meta-analysis. J Orthop Surg Res. 2023;18(1):318. https://doi.org/10.1186/s13018-023-03798-2 .


Acknowledgements

Not applicable.

Author information

Authors and Affiliations

Carle Illinois College of Medicine, Champaign, IL, USA

Tarika D. Patel, Meagan N. McNicholas & Peyton A. Paschell

Carle Neuroscience Institute, Carle Foundation Hospital, Urbana, IL, USA

Paul M. Arnold

Department of Anesthesiology, Carle Foundation Hospital, Urbana, IL, USA

Cheng-ting Lee


Contributions

TP extracted, analyzed and interpreted the data, and was a major contributor in writing and editing the manuscript. MM extracted and analyzed the data, and wrote several sections of the manuscript. PP extracted and analyzed the data, and wrote several sections of the manuscript. PA edited the manuscript. CTL reviewed the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Tarika D. Patel .

Ethics declarations

Ethics approval and consent to participate

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Patel, T.D., McNicholas, M.N., Paschell, P.A. et al. Thoracolumbar Interfascial Plane (TLIP) block verses other paraspinal fascial plane blocks and local infiltration for enhanced pain control after spine surgery: a systematic review. BMC Anesthesiol 24, 122 (2024). https://doi.org/10.1186/s12871-024-02500-1

Download citation

Received : 21 January 2024

Accepted : 15 March 2024

Published : 28 March 2024

DOI : https://doi.org/10.1186/s12871-024-02500-1


Keywords

  • Thoracolumbar interfascial plane block
  • Spine surgery
  • Pain management
  • Systematic review

BMC Anesthesiology

ISSN: 1471-2253
