Systematic Reviews: Medical Literature Databases to search

  • Types of literature review, methods, & resources
  • Protocol and registration
  • Search strategy
  • Medical Literature Databases to search
  • Study selection and appraisal
  • Data Extraction/Coding/Study characteristics/Results
  • Reporting the quality/risk of bias
  • Manage citations using RefWorks
  • GW Box file storage for PDFs

How to document your literature search

Always document how you searched each database: the keywords or index terms used, the date the search was performed, and how many results you retrieved. If you use RefWorks to deduplicate results, also record how many records were removed as duplicates and the final number of discrete studies that entered your first pass of study selection. Here is an example of how to document a literature search on an Excel spreadsheet; this example records a search of the hematology literature for articles about sickle cell disease. Here is another example of how to document a literature search, this time on one page of a Word document; this example records a search of the medical literature for a poster on Emergency Department throughput. The numbers recorded can then be used to populate the PRISMA flow diagram summarizing the literature search.
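Once results are exported, the running totals that feed the PRISMA flow diagram can be tallied with a short script. A minimal sketch in Python, using made-up citation records (the `records` data and the title-based matching are illustrative only; a real RefWorks or Covidence export would supply the data):

```python
# Tally per-database retrieval counts and remove duplicates by normalized
# title -- hypothetical records standing in for a real RefWorks/Covidence export.
records = [
    {"db": "MEDLINE", "title": "Hydroxyurea in sickle cell disease"},
    {"db": "EMBASE", "title": "Hydroxyurea in Sickle Cell Disease"},  # duplicate
    {"db": "EMBASE", "title": "Transfusion outcomes in sickle cell anemia"},
    {"db": "CENTRAL", "title": "Gene therapy trial for sickle cell disease"},
]

per_db = {}
for r in records:
    per_db[r["db"]] = per_db.get(r["db"], 0) + 1

unique_titles = {r["title"].lower() for r in records}
total = len(records)
duplicates_removed = total - len(unique_titles)

# These numbers populate the top boxes of the PRISMA flow diagram.
print(per_db)                                         # records retrieved per database
print(total, duplicates_removed, len(unique_titles))  # 4 1 3
```

Note that lowercased-title matching is a crude deduplication heuristic; reference managers also match on DOI, PMID, and fuzzy author/title comparison.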

In the final report, add as an appendix the full electronic search strategy for each database searched, e.g. the MEDLINE search with MeSH terms, keywords, and limits.

In the final report in the methods section:

PRISMA checklist Item 7 information sources will be reported as:

  • What databases/websites you searched, the name of the database search platform, and the start/end dates the index covers if relevant, e.g. Ovid MEDLINE (1950–present) or just PubMed
  • Who developed & conducted the searches
  • Date each database/website was last searched
  • Supplementary sources: what other websites did you search? What journal titles were hand searched? Were reference lists checked? What trial registries or regulatory agency websites were searched? Were manufacturers or other authors contacted to obtain unpublished or missing information on study methods or results?

PRISMA checklist Item 8 search will be reported as:

  • In text: describe the principal keywords used to search databases, websites & trials registers

What databases/indexes should you search?

At a minimum you need to search MEDLINE, EMBASE, and the Cochrane CENTRAL trials register. This is the recommendation of three medical and public health research organizations: the U.S. Agency for Healthcare Research and Quality (AHRQ), the U.K. Centre for Reviews and Dissemination (CRD), and the international Cochrane Collaboration (Source: Institute of Medicine (2011) Finding What Works in Healthcare: Standards for Systematic Reviews, Table E-1, page 267). Some databases have an alternate version, linked in parentheses below, that searches the same record sets; i.e., the content of MEDLINE is in PubMed and Scopus, while the content of EMBASE is in Scopus. You should reformat your search for each database as appropriate; contact your librarian if you want help on how to search each database.

Begin by searching:

1. MEDLINE (or PubMed)

2. EMBASE (or Scopus). Please note Himmelfarb Library does not have a subscription to EMBASE. The content is in the Scopus database, which you can search using keywords, but it is not possible to perform an EMTREE thesaurus search in Scopus.

3. Cochrane Central Trials Register (or Cochrane Library). In addition, Cochrane researchers recommend you search the ClinicalTrials.gov and ICTRP clinical trial registries due to the low sensitivity of the Cochrane CENTRAL index, because according to Hunter et al (2022) "register records as they appear in CENTRAL are less comprehensive than the original register entry, and thus are at a greater risk than other systems of being missed in a search."

The Polyglot Search Translator, developed by the Institute for Evidence-Based Healthcare at Bond University, is a very useful tool for translating search strings from PubMed or Ovid MEDLINE across multiple databases. Please note, however, that Polyglot does not automatically map subject terms across databases (e.g. MeSH terms to Emtree terms), so you will need to manually edit the search syntax in a text editor to substitute the actual subject terms used by the target database.
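If you keep your MeSH-to-Emtree substitutions in one place, that manual edit can be partly scripted. A sketch in Python (the term mapping below is purely illustrative; verify each equivalent in the Emtree thesaurus before running the search):

```python
# Swap MeSH headings for manually chosen Emtree equivalents in a copied
# PubMed search string. The mapping is illustrative, not authoritative.
mesh_to_emtree = {
    '"Anemia, Sickle Cell"[Mesh]': "'sickle cell anemia'/exp",
    '"Hydroxyurea"[Mesh]': "'hydroxyurea'/exp",
}

pubmed_search = '("Anemia, Sickle Cell"[Mesh]) AND ("Hydroxyurea"[Mesh])'
embase_search = pubmed_search
for mesh, emtree in mesh_to_emtree.items():
    embase_search = embase_search.replace(mesh, emtree)

print(embase_search)  # ('sickle cell anemia'/exp) AND ('hydroxyurea'/exp)
```

Keeping the mapping as data also gives you a record, for the appendix, of exactly which subject-term substitutions were made between databases.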

The Yale MeSH Analyzer is another very useful tool: copy and paste in a list of up to 20 PMID numbers for records in the PubMed database, and the analyzer will display the MeSH (Medical Subject Headings) terms for those articles as a table. Identifying and comparing which MeSH headings the articles have in common can suggest additional search terms for your PubMed search.

The MedSyntax tool is useful for parsing out very long searches with many levels of brackets, for example when you are trying to edit a pre-existing search strategy with many levels of nested parentheses.
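A quick way to sanity-check bracket nesting before pasting a long strategy into a database is to count parenthesis depth. A minimal sketch in Python (the example search string is hypothetical):

```python
# Check that a search string's parentheses are balanced and report the
# maximum nesting depth -- the kind of structure MedSyntax helps untangle.
def paren_depths(search: str):
    depth, max_depth = 0, 0
    for ch in search:
        if ch == "(":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False, max_depth  # a ')' appeared before its '('
    return depth == 0, max_depth

ok, levels = paren_depths(
    "((stroke OR tia) AND (aspirin OR (clopidogrel AND statin)))"
)
print(ok, levels)  # True 3
```

An unbalanced strategy fails the check, e.g. `paren_depths("(stroke))")` returns `(False, 1)`.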

Some sources for pre-existing database search filters or "hedges" include:

  • CADTH Search Filters Database
  • McMaster University Health Information Research Unit
  • University of York Centre for Reviews and Dissemination InterTASC Information Specialists' Sub-Group
  • InterTASC Population Specific search filters (particularly useful for identifying Latinx, Indigenous peoples, LGBTQ, and Black and minority ethnic populations)
  • CareSearch Palliative Care PubMed search filters (bereavement, dementia, heart failure, lung cancer, cost of care, and palliative care)
  • Low and Middle Income Countries filter at https://epoc.cochrane.org/lmic-filters
  • Search PubMed for another validated search filter using some variation of a search like this, possibly adding your discipline or search topic keywords: ("Databases, Bibliographic"[Mesh] OR "Search Engine"[Mesh]) AND ("Reproducibility of Results"[Mesh] OR "Sensitivity and Specificity"[Mesh] OR validat*) AND (filter OR hedge)
  • Search MEDLINE (or PubMed), preferably using a peer reviewed search strategy per protocol and apply any relevant methodology filters.
  • Search EMBASE (or Scopus) and the Cochrane Central trials register using appropriately reformatted search versions for those databases, and any other online resources. 
  • You should also search other subject specific databases that index the literature in your field.  Use our Himmelfarb Library  research guides  to identify other  subject specific databases . 
  • Save citations in Covidence to deduplicate citations prior to screening.
  • After screening, export citations to RefWorks when you are ready to write up your manuscript. The Covidence and RefWorks databases should be shared with all members of the investigative team.

Supplementary resources to search

Other members of your investigative team may have ideas about databases, websites, and journals they think you should search. Searching these sources is not required to perform a systematic review. You may need to reformat your search keywords.

Researchers at GW should check our subject research guides for suggestions, or check the libguides community for a guide on your subject.

In addition you may wish to search one or more of the following resources:

  • Google Scholar
  • BASE academic search engine, useful for searching university institutional repositories
  • Cochrane Database of Systematic Reviews  to search for a pre-existing systematic review on your topic
  • Epistemonikos database, which has a matrix-of-evidence table so you can see which citations are shared in common across existing systematic reviews of the same topic. This feature can help identify sentinel or 'don't miss' articles.

You might also consider searching one or more of the following websites depending on your topic:

Clinical trial registers. The Cochrane Collaboration recommends that a systematic review search both ClinicalTrials.gov and the WHO ICTRP (see http://handbook.cochrane.org/ section 4.3):

  • ClinicalTrials.gov  - also contains study population characteristics and results data of FDA regulated drugs and medical devices in NIH funded studies produced after January 18, 2017.
  • WHO ICTRP  - trials register
  • TRIP - searchable index of clinical trials, guidelines, and regulatory guidance
  • CenterWatch
  • Current Controlled Trials
  • European Clinical Trials Register
  • ISRCTN Register
  • COMPARE - tracks outcome switching in clinical trials
  • OpenTrials - an open source project aiming to match published trials with the underlying data where this is publicly available
  • ECRI Guidelines Trust

Grey literature resources:

  • WONDER - CDC data and reports
  • FDSys - search federal government publications
  • Science.gov
  • NRR Archive
  • NIH Reporter
  • re3data registry of data repositories
  • Data Repositories (listed by the Simmons Open Access Directory)
  • OpenDOAR  search academic open access research repositories
  • f1000research search open access repositories of articles, slides, and research posters, in the life sciences, public health, education, and communication.
  • RAND Health Reports
  • National Academy of Medicine Publications
  • Kaiser Family Foundation 
  • Robert Wood Johnson Foundation health and medical care data archive
  • Milbank Memorial Fund reports and issue briefs
  • Also search the resources listed in the CADTH (2019) Grey Matters checklist.

Preprints 

  • See our Himmelfarb preprints guide page on finding preprints. A useful database for searching health sciences preprints is Europe PMC.

Dissertations and Theses:

  • Proquest Dissertations and Theses Online 
  • Networked Digital Library of Theses and Dissertations
  • Open Access Theses and Dissertations
  • WorldCat - change Content: from 'Any Content' to 'Thesis/dissertation'

Conference proceedings:

Most conference proceedings are difficult to find because they may or may not be published. Only select individual papers may be made available in print as a book, journal, or series, rather than all of the presented items. Societies and associations may publish only abstracts, or extended abstracts, from a conference, often in an annual supplement to an issue of the journal of record of that professional society. Posters are often not published; if they are, they may be made available only to other conference registrants at that meeting or online. Authors may "publish" their conference papers or posters on personal or institutional websites. A limited set of conference proceedings databases includes the following:

  • BASE academic search engine: the Advanced Search feature has a Limit by Type of 'Conference Objects', which is useful for finding conference posters and submissions stored in university institutional repositories.
  • Web of Science - click All Databases, select Core Collection, and under More Settings limit to the Conference Proceedings Citation Index (CPCI), which covers a limited set of conferences in science, social science, and the humanities from 1990 to present.
  • Scopus - Limit Document Type to Conference Paper or Conference Review.
  • Proquest  - Limit search results to conference papers &/or proceedings under Advanced Search.
  • BioMed Central Proceedings  - searches a limited set of biomedical conference proceedings, including bioinformatics, genetics, medical students, and data visualization.
  • F1000 Research - browse by subject and click the tabs for articles, posters, and slides - which searches a limited number of biology and medical society meetings/conferences. This is a voluntary self-archive repository.

Individual Journals 

  • You may choose to "hand search" select journals, where the research team reads the table of contents of each issue for a chosen period of time. You can look for the names of high-impact journal titles in a particular field indexed in Journal Citation Reports (JCR). Please note that as of August 2021 ISI links to a new version of JCR in which the particularly helpful 'Browse by Category' link does not currently work, so I recommend you click the Products link in the top right corner and select Journal Citation Reports (Classic) to switch back to the old version and regain that functionality.
  • The AllTrials campaign petitions regulators and research bodies to require that the results and data of all clinical trials be published.

Creative Commons License

  • Last Updated: Mar 21, 2024 11:08 AM
  • URL: https://guides.himmelfarb.gwu.edu/systematic_review



Literature Search: Databases and Gray Literature

The Literature Search

  • A systematic review search includes a search of databases, gray literature, personal communications, and a handsearch of high-impact journals in the related field. See our list of recommended databases and gray literature sources on this page.
  • A comprehensive literature search cannot depend on a single database, nor on bibliographic databases only.
  • Inclusion of multiple databases helps avoid publication bias (geographic bias or bias against publication of negative results).
  • The Cochrane Collaboration recommends PubMed, Embase, and the Cochrane Central Register of Controlled Trials (CENTRAL) at a minimum.
  • NOTE: The Cochrane Collaboration and the IOM recommend that the literature search be conducted by librarians or persons with extensive literature search experience. Please contact the NIH Librarians for assistance with the literature search component of your systematic review.

Cochrane Library

A collection of six databases that contain different types of high-quality, independent evidence to inform healthcare decision-making. Search the Cochrane Central Register of Controlled Trials here.

Embase

European database of biomedical and pharmacologic literature.

PubMed

PubMed comprises more than 21 million citations for biomedical literature from MEDLINE, life science journals, and online books.

Scopus

Largest abstract and citation database of peer-reviewed literature and quality web sources. Contains conference papers.

Web of Science

World's leading citation databases. Covers over 12,000 of the highest impact journals worldwide, including Open Access journals and over 150,000 conference proceedings. Coverage in the sciences, social sciences, arts, and humanities, with coverage to 1900.

Subject Specific Databases

APA PsycINFO

Over 4.5 million abstracts of peer-reviewed literature in the behavioral and social sciences. Includes conference papers, book chapters, psychological tests, scales and measurement tools.

CINAHL Plus

Comprehensive journal index to nursing and allied health literature, includes books, nursing dissertations, conference proceedings, practice standards and book chapters.

LILACS

Latin American and Caribbean health sciences literature database.

Gray Literature

  • Gray literature is the term for information that falls outside the mainstream of published journal and monograph literature and is not controlled by commercial publishers
  • hard to find studies, reports, or dissertations
  • conference abstracts or papers
  • governmental or private sector research
  • clinical trials - ongoing or unpublished
  • experts and researchers in the field     
  • Library catalogs
  • Professional association websites
  • Google Scholar  - Search scholarly literature across many disciplines and sources, including theses, books, abstracts and articles.
  • Dissertation Abstracts - dissertation and theses database - NIH Library biomedical librarians can access and search for you.
  • NTIS  - central resource for government-funded scientific, technical, engineering, and business related information.
  • AHRQ  - agency for healthcare research and quality
  • Open Grey  - system for information on grey literature in Europe. Open access to 700,000 references to the grey literature.
  • World Health Organization  - providing leadership on global health matters, shaping the health research agenda, setting norms and standards, articulating evidence-based policy options, providing technical support to countries and monitoring and assessing health trends.
  • New York Academy of Medicine Grey Literature Report  - a bimonthly publication of The New York Academy of Medicine (NYAM) alerting readers to new gray literature publications in health services research and selected public health topics. NOTE: Discontinued as of Jan 2017, but resources are still accessible.
  • Gray Source Index
  • OpenDOAR - directory of academic repositories
  • International Clinical Trials Registry Platform - from the World Health Organization
  • Australian New Zealand Clinical Trials Registry
  • Brazilian Clinical Trials Registry
  • Chinese Clinical Trial Registry
  • ClinicalTrials.gov   - U.S.  and international federally and privately supported clinical trials registry and results database
  • Clinical Trials Registry  - India
  • EU clinical Trials Register
  • Japan Primary Registries Network  
  • Pan African Clinical Trials Registry
  • Users' Guide to the Medical Literature

Explore the foundations of evidence-based medicine with JAMA’s Users’ Guide to the Medical Literature collection. Learn to understand and interpret clinical research!


This Users’ Guide to the Medical Literature describes the fundamental concepts of platform trials and master protocols and reviews issues in the conduct and interpretation of these studies.

This Users’ Guide to the Medical Literature provides suggestions for understanding guideline methods and recommendations for clinicians seeking direction in evaluating clinical practice guidelines for potential use in their practice.

  • Evaluating Machine Learning Articles. JAMA Opinion, November 12, 2019.

This Users’ Guide to the Medical Literature discusses the use of machine learning models as a diagnostic tool, and it explains the important steps needed for making these models and the outcomes they derive clinically effective.

This Users’ Guide to the Medical Literature discusses discrimination and calibration, 2 primary ways to measure and compare the accuracy of clinical risk prediction models.

This Users’ Guide to the Medical Literature discusses strategies for adjusting analyses as a way of addressing prognostic imbalance in studies of therapy and harm.

  • How to Read a Systematic Review and Meta-analysis and Apply the Results to Patient Care: Users' Guides to the Medical Literature. JAMA Review, July 9, 2014.

Sun and coauthors provide 5 criteria to help clinicians distinguish credible subgroup analyses from spurious subgroup analyses.

  • How to Use an Article About Quality Improvement. JAMA Review, November 24, 2010.
  • How to Use an Article About Genetic Association: C: What Are the Results and Will They Help Me in Caring for My Patients? JAMA Review, January 21, 2009.
  • How to Use an Article About Genetic Association: B: Are the Results of the Study Valid? JAMA Review, January 14, 2009.
  • How to Use an Article About Genetic Association: A: Background Concepts. JAMA Review, January 7, 2009.


An Overview of How to Search and Write a Medical Literature Review

Affiliation

  • 1 American Association for Respiratory Care, and Georgia State University, Atlanta, Georgia. [email protected].
  • PMID: 37339890
  • PMCID: PMC10589118 (available on 2024-11-01)
  • DOI: 10.4187/respcare.11198

Without a literature review, there can be no research project. Literature reviews are necessary to learn what is known (and not known) about a topic of interest. In the respiratory care profession, the body of research is enormous, so a method to search the medical literature efficiently is needed. Selecting the correct databases, use of Boolean logic operators, and consultations with librarians are used to optimize searches. For a narrow and precise search, use PubMed, MEDLINE, Ovid, EBSCO, the Cochrane Library, or Google Scholar. Reference management tools assist with organizing the evidence found from the search. Analyzing the search results and writing the review provides an understanding of why the research question is important and its meaning. Spending time in reviewing published literature reviews can serve as a guide or model for understanding the components and style of a well-written literature review.

Keywords: MEDLINE; PubMed; bibliographies; biomedical research; database; evidence; index medicus; journals; literature review; literature synthesis; medical literature review; research; search engine.

Copyright © 2023 by Daedalus Enterprises.

MeSH terms:

  • Databases, Factual
  • Information Storage and Retrieval*
  • Research Design
  • Review Literature as Topic*


Performing a literature review

  • Gulraj S Matharu, academic foundation doctor
  • Christopher D Buckley, Arthritis Research UK professor of rheumatology
  • Institute of Biomedical Research, College of Medical and Dental Sciences, School of Immunity and Infection, University of Birmingham, UK

A necessary skill for any doctor

What causes disease, which drug is best, does this patient need surgery, and what is the prognosis? Although experience helps in answering these questions, ultimately they are best answered by evidence based medicine. But how do you assess the evidence? As a medical student, and throughout your career as a doctor, critical appraisal of published literature is an important skill to develop and refine. At medical school you will repeatedly appraise published literature and write literature reviews. These activities are commonly part of a special study module, research project for an intercalated degree, or another type of essay based assignment.

Formulating a question

Literature reviews are most commonly performed to help answer a particular question. While you are at medical school, there will usually be some choice regarding the area you are going to review.

Once you have identified a subject area for review, the next step is to formulate a specific research question. This is arguably the most important step because a clear question needs to be defined from the outset, which you aim to answer by doing the review. The clearer the question, the more likely it is that the answer will be clear too. It is important to have discussions with your supervisor when formulating a research question as his or her input will be invaluable. The research question must be objective and concise because it is easier to search through the evidence with a clear question. The question also needs to be feasible. What is the point in having a question for which no published evidence exists? Your supervisor’s input will ensure you are not trying to answer an unrealistic question. Finally, is the research question clinically important? There are many research questions that may be answered, but not all of them will be relevant to clinical practice. The research question we will use as an example to work through in this article is, “What is the evidence for using angiotensin converting enzyme (ACE) inhibitors in patients with hypertension?”

Collecting the evidence

After formulating a specific research question for your literature review, the next step is to collect the evidence. Your supervisor will initially point you in the right direction by highlighting some of the more relevant published papers. Before doing the literature search it is important to agree a list of keywords with your supervisor. Useful keywords can be gathered by reading Cochrane reviews or other systematic reviews, such as those published in the BMJ.1 2 A relevant Cochrane review for our research question on ACE inhibitors in hypertension is that by Heran and colleagues.3 Appropriate keywords to search for the evidence include the words used in your research question (“angiotensin converting enzyme inhibitor,” “hypertension,” “blood pressure”), details of the types of study you are looking for (“randomised controlled trial,” “case control,” “cohort”), and the specific drugs you are interested in (that is, the various ACE inhibitors such as “ramipril,” “perindopril,” and “lisinopril”).

Once keywords have been agreed it is time to search for the evidence using the various electronic medical databases (such as PubMed, Medline, and EMBASE). PubMed is the largest of these databases and contains online information and tutorials on how to do literature searches with worked examples. Searching the databases and obtaining the articles are usually free of charge through the subscription that your university pays. Early consultation with a medical librarian is important as it will help you perform your literature search in an impartial manner, and librarians can train you to do these searches for yourself.

Literature searches can be broad or tailored to be more specific. With our example, a broad search would entail searching all articles that contain the words “blood pressure” or “ACE inhibitor.” This provides a comprehensive list of the literature, but there are likely to be thousands of articles to review subsequently (fig 1). In contrast, various search restrictions can be applied on the electronic databases to filter out papers that may not be relevant to your review. Figure 2 gives an example of a specific search. The search terms used in this case were “angiotensin converting enzyme inhibitor” and “hypertension.” The limits applied to this search were all randomised controlled trials carried out in humans, published in English over the past 10 years, with the search terms appearing in the title of the study only. The more specific the search strategy, the more manageable the number of articles to review (fig 3), which will save you time. However, this approach risks missing some of the evidence in the field, so striking a balance between a broad and a specific search strategy is important. This balance will come with experience and consultation with your supervisor. Note that new evidence is continually added to these databases, so repeating the same search at a later date can identify new evidence relevant to your review.

Fig 1 Results from a broad literature search using the term “angiotensin converting enzyme inhibitor”


Fig 2 Example of a specific literature search. The search terms used were “angiotensin converting enzyme inhibitor” and “hypertension.” The limits applied to this search were all randomised controlled trials carried out in humans, published in English over the past 10 years, with the search terms appearing in the title of the study only


Fig 3 Results from a specific literature search (using the search terms and limits from figure 2)

Reading the abstracts (study summaries) of the articles identified in your search will help you decide whether each study is applicable to your review—for example, the work may have been carried out in an animal model rather than in humans. After excluding any inappropriate articles, obtain the full text of the studies you have identified. Additional relevant articles that may not have come up in your original search can be found by checking the reference lists of the articles you have already obtained. Once again, some articles may still not be applicable to your review, and these can be excluded at this stage. It is important to explain in your final review the criteria you used to exclude articles as well as those used for inclusion.

The National Institute for Health and Clinical Excellence (NICE) publishes evidence based guidelines for the United Kingdom and therefore provides an additional resource for identifying the relevant literature in a particular field. 4 NICE critically appraises the published literature with recommendations for best clinical practice proposed and graded based on the quality of evidence available. Similarly, there are internationally published evidence based guidelines, such as those produced by the European Society of Cardiology and the American College of Chest Physicians, which can be useful when collecting the literature in a particular field. 5 6

Appraising the evidence

Once you have collected the evidence, you need to critically appraise the published material. Box 1 gives definitions of terms you will encounter when reading the literature. A brief guide of how to critically appraise a study is presented; however, it is advisable to consult the references cited for further details.

Box 1: Definitions of common terms in the literature 7

Prospective—collecting data in real time after the study is designed

Retrospective—analysis of data that have already been collected to determine associations between exposure and outcome

Hypothesis—proposed association between exposure and outcome. If presented in the negative it is called the null hypothesis

Variable—a quantity or quality that changes during the study and can be measured

Single blind—subjects are unaware of their treatment, but clinicians are aware

Double blind—both subjects and clinicians are unaware of treatment given

Placebo—a simulated medical intervention, with subjects not receiving the specific intervention or treatment being studied

Outcome measure/endpoint—clinical variable or variables measured in a study subsequently used to make conclusions about the original interventions or treatments administered

Bias—difference between reported results and true results. Many types exist (such as selection, allocation, and reporting biases)

Probability (P) value—number between 0 and 1 giving the probability of obtaining results at least as extreme as those reported if the null hypothesis were true. A P value of 0.05 means there is a 5% probability of observing such results by chance alone

Confidence interval—a range of values within which the true result is likely to lie. A 95% confidence interval means that if the study were repeated many times, 95% of the intervals calculated in this way would contain the true value
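To make the confidence interval idea concrete, here is a small illustrative sketch (not from the article; the function name and sample data are invented) computing a 95% confidence interval for a sample mean using the normal approximation:

```python
import math
import statistics

def mean_ci_95(values):
    """Return (mean, lower, upper): a 95% confidence interval for the mean
    using the normal approximation (1.96 standard errors either side)."""
    n = len(values)
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(n)  # standard error of the mean
    return m, m - 1.96 * se, m + 1.96 * se

# Hypothetical systolic blood pressure readings (mm Hg)
readings = [118, 125, 130, 122, 128, 135, 120, 127]
m, lower, upper = mean_ci_95(readings)
```

A real analysis with a small sample would use the t distribution rather than 1.96, but the interpretation of the interval is the same.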

The study authors should clearly define their research question and ideally the hypothesis to be tested. If the hypothesis is presented in the negative, it is called the null hypothesis. An example of a null hypothesis is that smoking does not cause lung cancer. The study is then performed to assess the significance of the effect of the exposure (smoking) on the outcome (lung cancer).

A major part of the critical appraisal process is to focus on study methodology, with your key task being an assessment of the extent to which a study was susceptible to bias (the discrepancy between the reported results and the true results). It should be clear from the methods what type of study was performed (box 2).

Box 2: Different study types 7

Systematic review/meta-analysis—comprehensive review of published literature using predefined methodology. Meta-analyses combine results from various studies to give numerical data for the overall association between variables

Randomised controlled trial—random allocation of patients to one of two or more groups. Used to test a new drug or procedure

Cohort study—two or more groups followed up over a long period, with one group exposed to a certain agent (drug or environmental agent) and the other not exposed, with various outcomes compared. An example would be following up a group of smokers and a group of non-smokers with the outcome measure being the development of lung cancer

Case-control study—cases (those with a particular outcome) are matched as closely as possible (for age, sex, ethnicity) with controls (those without the particular outcome). Retrospective data analysis is performed to determine any factors associated with developing the particular outcomes

Cross sectional study—looks at a specific group of patients at a single point in time. Effectively a survey. An example is asking a group of people how many of them drink alcohol

Case report—detailed reports concerning single patients. Useful in highlighting adverse drug reactions

There are many different types of bias, which depend on the particular type of study performed, and it is important to look for these biases. Several published checklists are available that provide excellent resources to help you work through the various studies and identify sources of bias. The CONSORT statement (which stands for CONsolidated Standards Of Reporting Trials) provides a minimum set of recommendations for reporting randomised controlled trials and comprises a rigorous 25 item checklist, with variations available for other study types. 8 9 As would be expected, most (17 of 25) of the items focus on questions relating to the methods and results of the randomised trial. The remaining items relate to the title, abstract, introduction, and discussion of the study, in addition to questions on trial registration, protocol, and funding.

Jadad scoring provides a simple and validated system to assess the methodological quality of a randomised clinical trial using three questions. 10 The score ranges from zero to five, with one point given for a “yes” in each of the following questions. (1) Was the study described as randomised? (2) Was the study described as double blind? (3) Were there details of subject withdrawals, exclusions, and dropouts? A further point is given if (1) the method of randomisation was appropriate, and (2) the method of blinding was appropriate.
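The Jadad scoring rules above can be expressed as a short function. This is a sketch of the scoring as described in the text (the function name and parameters are my own, not part of any published implementation):

```python
def jadad_score(randomised, double_blind, withdrawals_described,
                randomisation_appropriate=False, blinding_appropriate=False):
    """Jadad score (0-5) as described in the text: one point for each 'yes'
    to the three base questions, plus one bonus point each when the
    randomisation and blinding methods are appropriate."""
    score = sum([randomised, double_blind, withdrawals_described])
    if randomised and randomisation_appropriate:
        score += 1
    if double_blind and blinding_appropriate:
        score += 1
    return score
```

Note that the full published instrument also deducts points for inappropriate methods; the sketch follows only the simplified description given here.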

In addition, the Critical Appraisal Skills Programme provides excellent tools for assessing the evidence in all study types (box 2). 11 The Oxford Centre for Evidence-Based Medicine levels of evidence is yet another useful resource for assessing the methodological quality of all studies. 12

In the results section, ensure all patients have been accounted for and any exclusions, for whatever reason, are reported. Knowing the baseline demographic (age, sex, ethnicity) and clinical characteristics of the population is important. Results are usually reported as probability values or confidence intervals (box 1).

The discussion should explain the major study findings, put the results in the context of the published literature, and attempt to account for any variations from previous work. Study limitations and sources of bias should be discussed. Authors’ conclusions should be supported by the study results and not unnecessarily extrapolated. For example, a treatment shown to be effective in animals will not necessarily work in humans.

The format for writing up the literature review usually consists of an abstract (short structured summary of the review), the introduction or background, methods, results, and discussion with conclusions. There are a number of good examples of how to structure a literature review and these can be used as an outline when writing your review. 13 14

The introduction should identify the specific research question you intend to address and briefly put this into the context of the published literature. As you have now probably realised, the methods used for the review must be clear to the reader and provide the necessary detail for someone to be able to reproduce the search. The search strategy needs to include a list of keywords used, which databases were searched, and the specific search limits or filters applied. Any grading of methodological quality, such as the CONSORT statement or Jadad scoring, must be explained in addition to any study inclusion or exclusion criteria. 6 7 8 The methods also need to include a section on the data collected from each of the studies, the specific outcomes of interest, and any statistical analysis used. The latter point is usually relevant only when performing meta-analyses.

The results section must clearly show the process of filtering down from the articles obtained from the original search to the final studies included in the review—that is, accounting for all excluded studies. A flowchart is usually best to illustrate this. Next should follow a brief description of what was done in the main studies, the number of participants, the relevant results, and any potential sources of bias. It is useful to group similar studies together as it allows comparisons to be made by the reader and saves repetition in your write-up. Boxes and figures should be used appropriately to illustrate important findings from the various studies.

Finally, in the discussion you need to consider the study findings in light of the methodological quality—that is, the extent of potential bias in each study that may have affected the study results. Using the evidence, you need to make conclusions in your review, and highlight any important gaps in the evidence base, which need to be dealt with in future studies. Working through drafts of the literature review with your supervisor will help refine your critical appraisal skills and the ability to present information concisely in a structured review article. Remember, if the work is good it may get published.

Originally published as: Student BMJ 2012;20:e404

Competing interests: None declared.

Provenance and peer review: Not commissioned; externally peer reviewed.

  • 1. The Cochrane Library. www3.interscience.wiley.com/cgibin/mrwhome/106568753/HOME?CRETRY=1&SRETRY=0
  • 2. British Medical Journal. www.bmj.com/
  • 3. Heran BS, Wong MMY, Heran IK, Wright JM. Blood pressure lowering efficacy of angiotensin converting enzyme (ACE) inhibitors for primary hypertension. Cochrane Database Syst Rev 2008;4:CD003823. doi:10.1002/14651858.CD003823.pub2
  • 4. National Institute for Health and Clinical Excellence. www.nice.org.uk
  • 5. European Society of Cardiology. www.escardio.org/guidelines
  • 6. Geerts WH, Bergqvist D, Pineo GF, Heit JA, Samama CM, Lassen MR, et al. Prevention of venous thromboembolism: American College of Chest Physicians evidence-based clinical practice guidelines (8th ed). Chest 2008;133:381S-453S
  • 7. Wikipedia. http://en.wikipedia.org/wiki
  • 8. Moher D, Schulz KF, Altman DG, Egger M, Davidoff F, Elbourne D, et al. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191-4
  • 9. The CONSORT statement. www.consort-statement.org/
  • 10. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, et al. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials 1996;17:1-12
  • 11. Critical Appraisal Skills Programme (CASP). www.sph.nhs.uk/what-we-do/public-health-workforce/resources/critical-appraisals-skills-programme
  • 12. Oxford Centre for Evidence-based Medicine—Levels of Evidence. www.cebm.net
  • 13. Van den Bruel A, Thompson MJ, Haj-Hassan T, Stevens R, Moll H, Lakhanpaul M, et al. Diagnostic value of laboratory tests in identifying serious infections in febrile children: systematic review. BMJ 2011;342:d3082
  • 14. Awopetu AI, Moxey P, Hinchliffe RJ, Jones KG, Thompson MM, Holt PJ. Systematic review and meta-analysis of the relationship between hospital volume and outcome for lower limb arterial surgery. Br J Surg 2010;97:797-803


Systematic Literature Reviews: Steps & Resources


The steps for conducting a systematic literature review are listed below.

Also see subpages for more information about:

  • The different types of literature reviews, including systematic reviews and other evidence synthesis methods
  • Tools & Tutorials

Literature Review & Systematic Review Steps

  • Develop a Focused Question
  • Scope the Literature  (Initial Search)
  • Refine & Expand the Search
  • Limit the Results
  • Download Citations
  • Abstract & Analyze
  • Create Flow Diagram
  • Synthesize & Report Results

1. Develop a Focused Question

Consider the PICO Format: Population/Problem, Intervention, Comparison, Outcome

Focus on defining the Population or Problem and Intervention (don't narrow by Comparison or Outcome just yet!)

"What are the effects of the Pilates method for patients with low back pain?"

Tools & Additional Resources:

  • PICO Question Help
  • Stillwell SB, Fineout-Overholt E, Melnyk BM, Williamson KM. Evidence-based practice, step by step: asking the clinical question. Am J Nurs 2010;110(3):58-61. doi:10.1097/01.NAJ.0000368959.11129.79

2. Scope the Literature

A "scoping search" investigates the breadth and/or depth of the initial question or may identify a gap in the literature. 

Eligible studies may be located by searching in:

  • Background sources (books, point-of-care tools)
  • Article databases
  • Trial registries
  • Grey literature
  • Cited references
  • Reference lists

When searching, if possible, translate terms to controlled vocabulary of the database. Use text word searching when necessary.

Use Boolean operators to connect search terms:

  • Combine separate concepts with AND (resulting in a narrower search)
  • Connect synonyms with OR (resulting in an expanded search)

Search: pilates AND ("low back pain" OR backache)
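When you maintain lists of synonyms per concept, a query in this pattern can be assembled programmatically. This is an illustrative sketch (the helper function is my own and is not part of any database API); the resulting string can be pasted into a database search box:

```python
def build_query(*concepts):
    """Combine concepts with AND; combine synonyms within a concept with OR.
    Phrases containing spaces are quoted so databases treat them as units."""
    def term(t):
        return f'"{t}"' if " " in t else t
    groups = ["(" + " OR ".join(term(t) for t in c) + ")" for c in concepts]
    return " AND ".join(groups)

query = build_query(["pilates"], ["low back pain", "backache"])
```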

Video Tutorials - Translating PICO Questions into Search Queries

  • Translate Your PICO Into a Search in PubMed (YouTube, Carrie Price, 5:11) 
  • Translate Your PICO Into a Search in CINAHL (YouTube, Carrie Price, 4:56)

3. Refine & Expand Your Search

Expand your search strategy with synonymous search terms harvested from:

  • database thesauri
  • reference lists
  • relevant studies

Example: 

(pilates OR exercise movement techniques) AND ("low back pain" OR backache* OR sciatica OR lumbago OR spondylosis)

As you develop a final, reproducible strategy for each database, save your strategies in:

  • a personal database account (e.g., MyNCBI for PubMed)
  • a search tracker template (log in with your NYU credentials, then open and "Make a Copy" to create your own tracker for your literature search strategies)

4. Limit Your Results

Use database filters to limit your results based on your defined inclusion/exclusion criteria.  In addition to relying on the databases' categorical filters, you may also need to manually screen results.  

  • Limit to Article type, e.g.,:  "randomized controlled trial" OR multicenter study
  • Limit by publication years, age groups, language, etc.

NOTE: Many databases allow you to filter to "Full Text Only". This filter is not recommended: it excludes articles whose full text is not available in that particular database (CINAHL, PubMed, etc.), but if an article is relevant you should still be able to read its title and abstract, regardless of full text status. The full text is likely to be accessible through another source (a different database, or Interlibrary Loan).

  • Filters in PubMed
  • CINAHL Advanced Searching Tutorial

5. Download Citations

Selected citations and/or entire sets of search results can be downloaded from the database into a citation management tool. If you are conducting a systematic review that will require reporting according to PRISMA standards, a citation manager can help you keep track of the number of articles that came from each database, as well as the number of duplicate records.

In Zotero, you can create a Collection for the combined results set, and sub-collections for the results from each database you search. You can then use Zotero's "Duplicate Items" function to find and merge duplicate records.

File structure of a Zotero library, showing a combined pooled set, and sub folders representing results from individual databases.

  • Citation Managers - General Guide
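Citation managers handle deduplication interactively, but the underlying idea — match records on DOI when present, otherwise on a normalised title — can be sketched in a few lines. This is illustrative only (it is not Zotero's actual algorithm, and the record structure is invented):

```python
def dedupe(records):
    """Keep the first record per DOI (case-insensitive), falling back to a
    normalised title when no DOI is present. Records are dicts with
    optional 'doi' and 'title' keys."""
    seen, unique = set(), []
    for rec in records:
        doi = (rec.get("doi") or "").strip().lower()
        title = "".join(ch for ch in rec.get("title", "").lower() if ch.isalnum())
        key = ("doi", doi) if doi else ("title", title)
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

pooled = [
    {"title": "Pilates for Low Back Pain", "doi": "10.1000/abc"},
    {"title": "Pilates for low back pain.", "doi": "10.1000/ABC"},  # same DOI
    {"title": "Backache and exercise"},
]
```

For PRISMA reporting, the difference between `len(pooled)` and `len(dedupe(pooled))` is the number of duplicates removed.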

6. Abstract and Analyze

  • Migrate citations to data collection/extraction tool
  • Screen Title/Abstracts for inclusion/exclusion
  • Screen and appraise full text for relevance and methods
  • Resolve disagreements by consensus

Covidence is a web-based tool that enables you to work with a team to screen titles/abstracts and full text for inclusion in your review, as well as extract data from the included studies.

Screenshot of the Covidence interface, showing Title and abstract screening phase.

  • Covidence Support
  • Critical Appraisal Tools
  • Data Extraction Tools

7. Create Flow Diagram

The PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) flow diagram is a visual representation of the flow of records through different phases of a systematic review.  It depicts the number of records identified, included and excluded.  It is best used in conjunction with the PRISMA checklist .

Example PRISMA diagram showing number of records identified, duplicates removed, and records excluded.

Example from: Stotz, S. A., McNealy, K., Begay, R. L., DeSanto, K., Manson, S. M., & Moore, K. R. (2021). Multi-level diabetes prevention and treatment interventions for Native people in the USA and Canada: A scoping review. Current Diabetes Reports, 21(11), 46. https://doi.org/10.1007/s11892-021-01414-3

  • PRISMA Flow Diagram Generator (ShinyApp.io, Haddaway et al. )
  • PRISMA Diagram Templates  (Word and PDF)
  • Make a copy of the file to fill out the template
  • Image can be downloaded as PDF, PNG, JPG, or SVG
  • Covidence generates a PRISMA diagram that is automatically updated as records move through the review phases
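The counts in a PRISMA flow diagram follow simple arithmetic, and it is worth checking that the numbers add up before drawing the figure. A minimal consistency check (the field names and example numbers are my own):

```python
def prisma_counts(identified, duplicates, title_abstract_excluded,
                  fulltext_excluded):
    """Derive the remaining PRISMA flow numbers from the counts recorded at
    each stage, and check that nothing has gone missing along the way."""
    screened = identified - duplicates
    fulltext_assessed = screened - title_abstract_excluded
    included = fulltext_assessed - fulltext_excluded
    assert included >= 0, "more records excluded than were retrieved"
    return {"screened": screened,
            "fulltext_assessed": fulltext_assessed,
            "included": included}

flow = prisma_counts(identified=1200, duplicates=180,
                     title_abstract_excluded=900, fulltext_excluded=95)
```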

8. Synthesize & Report Results

There are a number of reporting guidelines available to guide the synthesis and reporting of results in systematic literature reviews.

It is common to organize findings in a matrix, also known as a Table of Evidence (ToE).

Example of a review matrix, using Microsoft Excel, showing the results of a systematic literature review.

  • Reporting Guidelines for Systematic Reviews
  • Download a sample template of a health sciences review matrix  (GoogleSheets)

Steps modified from: 

Cook, D. A., & West, C. P. (2012). Conducting systematic reviews in medical education: A stepwise approach. Medical Education, 46(10), 943–952.

  • Last Updated: Apr 13, 2024 9:13 PM
  • URL: https://guides.nyu.edu/health

University of Texas Libraries

Literature Reviews

  • Types of Literature Reviews
  • How to Write a Literature Review
  • How to Write the Introduction to a Research Article
  • The Pandora's Box of Evidence Synthesis and the case for a living Evidence Synthesis Taxonomy | BMJ Evidence-Based Medicine, 2023
  • Meeting the review family: exploring review types and associated information retrieval requirements | Health Information and Libraries Journal, 2019
  • A typology of reviews: an analysis of 14 review types and associated methodologies | Health Information and Libraries Journal, 2009
  • Conceptual recommendations for selecting the most appropriate knowledge synthesis method to answer research questions related to complex evidence | Journal of Clinical Epidemiology, 2016
  • Methods for knowledge synthesis: an overview | Heart & Lung: The Journal of Critical Care, 2014
  • Not sure what type of review to conduct? Brief descriptions of each type plus tools to help you decide


  • Ten simple rules for writing a literature review | PLoS Computational Biology, 2013
  • The Purpose, Process, and Methods of Writing a Literature Review | AORN Journal. 2016
  • Why, When, Who, What, How, and Where for Trainees Writing Literature Review Articles. | Annals of Biomed Engineering, 2019
  • So You Want to Write a Narrative Review Article? | Journal of Cardiothoracic and Anesthesia, 2021
  • An Introduction to Writing Narrative and Systematic Reviews - Tasks, Tips and Traps for Aspiring Authors | Heart, Lung, and Circulation, 2018


  • The Literature Review: A Foundation for High-Quality Medical Education Research | Journal of Graduate Medical Education, 2016
  • Writing an effective literature review : Part I: Mapping the gap | Perspectives on Medical Education, 2018
  • Writing an effective literature review : Part II: Citation technique | Perspectives on Medical Education, 2018
  • Last Updated: Apr 17, 2024 11:29 PM
  • URL: https://guides.lib.utexas.edu/medicine


Writing in the Health Sciences: Research and Lit Reviews


What Is a Literature Review?

In simple terms, a literature review investigates the available information on a certain topic. It may be only a knowledge survey with an intentional focus, but it is often a well-organized examination of the existing research that evaluates each resource in a systematic way. A lit review will often apply a series of inclusion/exclusion criteria or an assessment rubric to examine the research in depth. Below are some useful sources to consider.


The Writing Center's Literature Reviews - UNC-Chapel Hill's writing center explains some of the key criteria involved in doing a literature review.

Literature Review vs. Systematic Review - This article details the difference between a literature review and a systematic review. Though the two share similar attributes, key differences are identified here.

Literature Review Steps

1. Identify a research question. For example: "Does the use of warfarin in elderly patients recovering from myocardial infarction help prevent stroke?"

2. Consider which databases might provide information for your topic. Often PubMed or CINAHL will cover a wide spectrum of biomedical issues. However, other databases and grey literature sources may specialize in certain disciplines. Embase is generally comprehensive but also specializes in pharmacological interventions.

3. Select the major subjects or ideas from your question.  Focus in on the particular concepts involved in your research. Then brainstorm synonyms and related terminology for these topics.

4. Look for the preferred indexing terms for each concept in your question. This is especially important with databases such as PubMed, CINAHL, or Embase, which offer controlled vocabularies such as MeSH, CINAHL headings, or Emtree. For example, keywords from the question above such as "warfarin" or "myocardial infarction" can map to related terminology or subject headings such as "anti-coagulants" or "cardiovascular disease."

5. Build your search using Boolean operators. Combine the synonyms in your database using Boolean operators such as AND or OR. Sometimes it is necessary to search parts of a question rather than the whole: you might link searches for the preventive effects of anti-coagulants with stroke or embolism, then AND these results with the therapy for patients with cardiovascular disease.
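Boolean operators behave like set operations on result sets: OR is a union and AND is an intersection of the record IDs returned by each sub-search. A toy illustration with made-up record IDs:

```python
# Hypothetical record IDs returned by three saved sub-searches
anticoagulant_hits = {101, 102, 103, 104}
stroke_hits = {103, 104, 105}
embolism_hits = {104, 106}

# "stroke OR embolism" — union widens the search
outcome_hits = stroke_hits | embolism_hits
# "... AND anticoagulants" — intersection narrows it
combined = anticoagulant_hits & outcome_hits
```

This is why combining concepts with AND shrinks the result count while adding synonyms with OR grows it.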

6. Filter and save your search results from the first database (do this for all databases). This may be a short list because of your topic's limitations, but it should be no longer than 15 articles for an initial search. Make sure your list is saved or archived and presents you with what's needed to access the full text.

7. Use the same process with the next databases on your list. But pay attention to how certain major headings may alter the terminology. "Stroke" may have a suggested term of "embolism" or even "cerebrovascular incident" depending on the database.

8. Read through the material for inclusion/exclusion. Based on your project's criteria and objective, consider which studies or reviews deserve to be included and which should be discarded. Make sure the information you have permits you to go forward.

9. Write the literature review. Begin by summarizing why your research is important and explain why your approach will help fill gaps in current knowledge. Then incorporate how the information you've selected will help you to do this. You do not need to write about all of the included research you've chosen, only the most pertinent.

10. Select the most relevant literature for inclusion in the body of your report. Choose the articles and data sets that are most particularly relevant to your experimental approach. Consider how you might arrange these sources in the body of your draft. 

Library Books


Call #: WZ 345 G192h 2011

ISBN #: 9780763771867

This book details a practical, step-by-step method for conducting a literature review in the health sciences. Aiming to  synthesize the information while also analyzing it, the Matrix Indexing System enables users to establish a  structured process for tracking, organizing and integrating the knowledge within a collection.

Key Research Databases

PubMed - The premier medical database for review articles in medicine, nursing, healthcare, and other related biomedical disciplines. PubMed contains over 20 million citations and can be searched using multiple database features and search strategies.

CINAHL Ultimate - Offers comprehensive coverage of health science literature. CINAHL is particularly useful for those researching the allied disciplines of nursing, medicine, and pharmaceutical sciences.

Scopus - Database with over 12 million abstracts and citations which include peer-reviewed titles from international and Open Access journals. Also includes interactive bibliometrics and researcher profiling.

Embase - Elsevier's fully interoperable database of both Medline and Emtree-indexed articles. Embase also specializes in pharmacologic interventions.

Cochrane - Selected evidence-based medicine resources from the Cochrane Collaboration that includes peer-reviewed systematic reviews and randomized controlled trials. Access this database through OVID with TTUHSC Libraries.

DARE - Short for the Database of Abstracts of Reviews of Effects, this collection of systematic reviews and other evidence-based research contains critical assessments from a wide variety of medical journals.

TRIP - The TRIP database is structured according to the level of evidence of its EBM content. It allows users to quickly and easily locate high-quality, accredited medical literature for clinical and research purposes.

Web of Science - Contains bibliographic articles and data from a wide variety of publications in the life sciences and other fields. Also, see this link for conducting a lit review exclusively within Web of Science.

  • Last Updated: Sep 29, 2023 10:07 AM
  • URL: https://ttuhsc.libguides.com/Writing_HealthSciences



Systematic Reviews

Describes what is involved with conducting a systematic review of the literature for evidence-based public health and how the librarian is a partner in the process.

Several CDC librarians have special training in conducting literature searches for systematic reviews.  Literature searches for systematic reviews can take a few weeks to several months from planning to delivery.

Fill out a search request form here  or contact the Stephen B. Thacker CDC Library by email  [email protected] or telephone 404-639-1717.

Campbell Collaboration

Cochrane Collaboration

Eppi Centre

Joanna Briggs Institute

McMaster University

PRISMA Statement

Systematic Reviews – CRD’s Guide

Systematic Reviews of Health Promotion and Public Health Interventions

The Guide to Community Preventive Services

Look for systematic reviews that have already been published. 

  • To ensure that the work has not already been done.
  • To provide examples of search strategies for your topic

Look in PROSPERO for registered systematic reviews.

Search Cochrane and CRD-York for systematic reviews.

Search filter for finding systematic reviews in PubMed

Other search filters to locate systematic reviews

A systematic review attempts to collect and analyze all evidence that answers a specific question.  The question must be clearly defined and have inclusion and exclusion criteria. A broad and thorough search of the literature is performed and a critical analysis of the search results is reported and ultimately provides a current evidence-based answer  to the specific question.

Time: According to the Cochrane Collaboration, a systematic review takes 18 months on average to complete from beginning to end: “…to find out about a healthcare intervention it is worth searching research literature thoroughly to see if the answer is already known. This may require considerable work over many months…” (Cochrane Collaboration)

Review Team: Team Members at minimum…

  • Content expert
  • 2 reviewers
  • 1 tie breaker
  • 1 statistician (meta-analysis)
  • 1 economist if conducting an economic analysis
  • *1 librarian (expert searcher) trained in systematic reviews

“Expert searchers are an important part of the systematic review team, crucial throughout the review process-from the development of the proposal and research question to publication.” ( McGowan & Sampson, 2005 )

*Ask your librarian to write a methods section regarding the search methods and to give them co-authorship. You may also want to consider providing a copy of one or all of the search strategies used in an appendix.

The Question to Be Answered: A clearly defined and specific question or questions with inclusion and exclusion criteria.

Written Protocol: Outline the study method, rationale, key questions, inclusion and exclusion criteria, literature searches, data abstraction and data management, analysis of quality of the individual studies, synthesis of data, and grading of the evidence for each key question.

Literature Searches:  Search for any systematic reviews that may already answer the key question(s).  Next, choose appropriate databases and conduct very broad, comprehensive searches.  Search strategies must be documented so that they can be duplicated.  The librarian is integral to this step of the process. Before your librarian creates a search strategy and starts searching in earnest you should write a detailed PICO question , determine the inclusion and exclusion criteria for your study, run a preliminary search, and have 2-4 articles that already fit the criteria for your review.

What is searched depends on the topic of the review but should include…

  • At least 3 standard medical databases like PubMed/Medline, CINAHL, Embase, etc.
  • At least 2 grey literature resources like ClinicalTrials.gov, COS Conference Papers Index, Grey Literature Report, etc.
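Because every search strategy must be documented so it can be duplicated, it helps to record each database search in a structured log whose totals can later populate the PRISMA flow diagram. A minimal sketch, with hypothetical field names and made-up counts:

```python
# Sketch: a minimal per-database search log whose totals feed the PRISMA
# flow diagram. Field names, strategies, and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class SearchRecord:
    database: str
    date_searched: str   # date the search was run
    strategy: str        # full strategy, documented for reproducibility
    results: int         # records retrieved

log = [
    SearchRecord("PubMed", "2023-09-01", "sickle cell disease[MeSH] ...", 412),
    SearchRecord("CINAHL", "2023-09-01", "MH sickle cell disease ...", 198),
    SearchRecord("Embase", "2023-09-02", "'sickle cell anemia'/exp ...", 305),
]

# PRISMA: total records identified across databases, before deduplication.
total_identified = sum(r.results for r in log)
print(total_identified)  # 915
```

After deduplication, the count of records removed and the remaining unique records would be logged the same way for the next box of the flow diagram.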

Citation Management: EndNote is a bibliographic management tool that assists researchers in managing citations. The Stephen B. Thacker CDC Library oversees the site license for EndNote.

To request installation:   The library provides EndNote  to CDC staff under a site-wide license. Please use the ITSO Software Request Tool (SRT) and submit a request for the latest version (or upgraded version) of EndNote. Please be sure to include the computer name for the workstation where you would like to have the software installed.

EndNote Training:   CDC Library offers training on EndNote on a regular basis – both a basic and advanced course. To view the course descriptions and upcoming training dates, please visit the CDC Library training page .

For assistance with EndNote software, please contact [email protected]

Vendor Support and Services:   EndNote – Support and Services (Thomson Reuters)  EndNote – Tutorials and Live Online Classes (Thomson Reuters)

Getting Articles:

Articles can be obtained using DocExpress or by searching the electronic journals at the Stephen B. Thacker CDC Library.

IOM Standards for Systematic Reviews: Standard 3.1: Conduct a comprehensive systematic search for evidence

The goal of a systematic review search is to maximize recall and precision while keeping results manageable. Recall (sensitivity) is defined as the number of relevant reports identified divided by the total number of relevant reports in existence. Precision (specificity) is defined as the number of relevant reports identified divided by the total number of reports identified.
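The recall and precision definitions above amount to two simple ratios; a short sketch with fabricated counts:

```python
# Sketch: computing recall (sensitivity) and precision as defined above.
# The counts are made-up illustrations.
relevant_in_existence = 50    # all relevant reports that exist
retrieved = 400               # reports the search returned
relevant_retrieved = 45       # relevant reports among those returned

recall = relevant_retrieved / relevant_in_existence   # 45/50 = 0.9
precision = relevant_retrieved / retrieved            # 45/400 = 0.1125
print(f"recall={recall:.2f}, precision={precision:.4f}")
```

A broad systematic review search typically trades low precision for high recall: the example above finds 90% of what exists, at the cost of screening many irrelevant records.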

Issues to consider when creating a systematic review search:   

  • All concepts are included in the strategy
  • All appropriate subject headings are used
  • Appropriate use of explosion
  • Appropriate use of subheadings and floating subheadings
  • Use of natural language (text words) in addition to controlled vocabulary terms
  • Use of appropriate synonyms, acronyms, etc.
  • Truncation and spelling variation as appropriate
  • Appropriate use of limits such as language, years, etc.
  • Field searching, publication type, author, etc.
  • Boolean operators used appropriately
  • Line errors: when searches are combined using line numbers, be sure the numbers refer to the searches intended
  • Check indexing of relevant articles
  • Search strategy adapted as needed for multiple databases
  • Cochrane Handbook: Searching for Studies See Part 2, Chapter 6

A step-by-step guide to systematically identify all relevant animal studies



Literature Reviews for Medical Devices: 6 Expert Tips


Catarina Carrão, freelance medical writer on Kolabtree, outlines the importance of literature reviews for medical devices and best practices to follow.

A clinical evaluation is an ongoing process, conducted throughout the life cycle of a medical device. It is usually first performed during the development phase of the medical device, in order to identify the data that it needs to be granted market access. In the European Union, for an initial CE-marking, a Clinical Evaluation Report (CER) is mandatory, and it must be actively updated continuously afterwards [1]. In the United States, a Pre-Market Approval (PMA) [2] is the Food and Drug Administration (FDA) process for scientific and regulatory review, to evaluate the safety and effectiveness of a medical device (Class III), so that it can reach the consumer. It also uses an evidence-based review system for scientific evaluation of medical devices.

This process of clinical evaluation is fundamental, because it ensures the safety and performance of the device based on abundant clinical evidence throughout the lifetime of the medical device on the market. It enables Notified Bodies (NBs) and Competent Authorities to read through the clinical evidence to demonstrate the conformity of the device with the essential requirements, not just for initial marketing, but throughout its lifetime (e.g., fulfilment of post-market surveillance and reporting requirements) [1].

Literature Reviews for Medical Devices

Literature reviews are crucial to the success of a CER and PMA, because a solid and systematic literature research strategy fortifies every stage of the medical device life cycle: from concept and design, through clinical trials, to release of the medical device and reimbursement [3]. More than just a wise investment, screening the literature to comply with regulatory authorities during the approval process and for post-market surveillance is fundamental to the global success of any marketed medical device.

For many companies, especially Small and Medium Enterprises (SMEs), the data retrieved from literature searches will represent most, if not all, of the data collected. As such, this search identifies sources of clinical data for establishing the current knowledge or “state of the art” that describes the clinical background in the corresponding medical field; the clinical data that is relevant to the device under evaluation, or to an equivalent device (if equivalence is claimed in a CER [1] or 510(k) [4]); and the identification of potential clinical hazards. That is why it is so critically important to develop a literature search strategy that is robust and can be replicated during subsequent updates by any person.

1. Search protocol (Stage 1)

The search strategy should be thorough and objective, i.e., it should identify all relevant favourable and unfavourable data, and it should be carried out based on a search protocol [1]. The search protocol documents the planning of the search before execution. Once the searches have been executed, the adequacy of the searches should be verified, and a literature search report should be compiled to present the details, with any deviations from the literature search protocol documented together with the results of the search. It is important that the literature search is documented to such a degree that the methods can be appraised critically, the results can be verified, and the search reproduced if necessary.

According to the regulations [1], [2], the literature search protocol should include the following elements [5]:

  • Sources of data used (e.g., MEDLINE/PubMed, Embase, Google Scholar, ResearchGate, internet searches, etc.);
  • The methodology used for the searches;
  • The exact search terms and parameters used to search scientific databases (e.g., dates);
  • Specific selection or exclusion criteria, along with justifications for each;
  • How duplication of data from multiple sources was addressed;
  • How data integrity was ensured (e.g., quality control methods or second reviewers);
  • How each data source was appraised, and its relevance for the specific device;
  • Analysis and data processing handling.
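One way to keep these protocol elements auditable is to capture them as a structured record and check completeness programmatically; the keys and values below are illustrative assumptions, not regulatory text:

```python
# Sketch: the protocol elements above captured as a structured record,
# so a search report can verify every required field was documented.
# All keys and values are illustrative assumptions.
protocol = {
    "sources": ["MEDLINE/PubMed", "Embase", "Google Scholar"],
    "methodology": "systematic keyword and subject-heading search",
    "search_terms": "('device X' OR 'equivalent device Y') AND safety",
    "date_range": ("2010-01-01", "2020-05-18"),
    "selection_criteria": "human studies; device or equivalent; English",
    "exclusion_criteria": "animal studies; non-clinical bench data",
    "deduplication": "reference manager auto-dedup plus manual check",
    "data_integrity": "second reviewer verified all inclusions",
    "appraisal_method": "weighted by study design and device relevance",
    "analysis_plan": "tabulate endpoints and adverse events per study",
}

required = {"sources", "methodology", "search_terms", "selection_criteria",
            "exclusion_criteria", "deduplication", "data_integrity",
            "appraisal_method", "analysis_plan"}
missing = required - protocol.keys()
print("complete" if not missing else f"protocol incomplete: {missing}")
```

Such a check does not replace the written protocol; it only flags which documented elements are absent before the search report is compiled.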

The search strategy must be broad enough to ensure that no essential information is missed, but still allow precise identification of relevant results. This can involve the use of search features such as filters to narrow down the result set; sub-headings based on key concepts such as device adverse effects, or device comparison; and triage and analysis methods to identify the most relevant literature. The results themselves are generally in the form of a list of citations or data, with descriptive indexing tags and other key information.

Abstracts lack sufficient detail to allow issues to be evaluated thoroughly and independently, but may be sufficient to allow a first evaluation of the relevance of a paper [1]. Good research informatics solutions allow both flagging of the citation and annotation of the article text, so that teams can work closely on individual items [3]. Copies of the full-text papers should be included in the final files. The literature search protocol(s), the literature search report(s), and full-text copies of articles and relevant documents become part of the final technical documentation for the medical device.

2. Possible errors

A precise literature search provides accurate evidence; but, unless implemented correctly, the result can be misleading, time consuming, or even useless [6]. There are errors related to the volume of evidence, relevance of the data, tone of evidence, and its value to the research topic that might undermine even a highly skilled researcher. It is necessary to focus the literature search on precise topics and obtain relevant evidence within a stipulated time, otherwise outcomes might deviate.

Usually, errors have their origin in the incorrect use of the primary attributes of a literature search, viz., keywords, Boolean logic, and database choice [6]. For example, the evaluator could create errors in setting eligibility criteria (type of literature and databases), errors in selecting keywords and Boolean logic, or errors in setting up search phrases in the database.

These attributes can lead to errors of inclusion (too much data, partly not relevant to the issue), or to exclusion of important data because of too-stringent keyword use. But they can also lead to errors of “inclusive exclusions”, due to bias by literature professionals in the searches, and “exclusive inclusions”, with the use of highly specific key terms with inadequate Boolean operators, or even the exclusion of synonyms for the same medical terminology. An error of “exclusive exclusions” is also called the “error of limited relevance”, and it happens in many cases. This error is a combination of bias and specific exclusiveness, where the search phrases constructed are biased toward one-sided data trends, and the terms selected are too exclusive to return sufficient information [6].

Besides possible errors in retrieving important clinical data, uncertainty in the final literature review also arises from two sources: the methodological quality of the data, and the relevance of the data to the evaluation of the device in relation to the different aspects of its intended purpose [1]. Both sources of uncertainty should be analysed in order to determine a weighting for each data set. As such, a balanced assessment of the quality of the data is essential to the success of the literature review search.

Also read: Clinical evaluation for EU MDR Compliance: 5 Dos and Don’ts 

3. Appraisal of the clinical data

When appraising the data generated by the database search (Stage 2), the evaluator is looking to make sure it has statistically significant data sets, uses proper statistical methods, has adequate controls, and properly collects mortality and/or serious adverse event data. It is essential that the assessment is done based on the complete text of the publications found, not just by reading the abstracts or summaries. For each document appraised, there needs to be documentation of the appraisal to the point that it could reasonably be reviewed by others. The appraisal results should also support conclusions about the clinical safety and performance of the finished device (e.g., citing non-device-related literature would be ranked low for appraisal) [5]. There are some red flags provided by the regulation to use when appraising medical publications, for example:

  • The article lacks basic information such as the methods used, number of patients, identity of products, etc.;
  • Has data sets that are too small to be statistically significant;
  • Contains data that applies improper statistical methods;
  • Employs studies that lack adequate controls;
  • Has an improper collection of mortality and serious adverse event data;
  • Depicts a misrepresentation by the authors.

The evaluators should verify whether clinical investigations have been defined in such a way as to confirm or refute the manufacturer’s claims for the device, and whether these investigations include an adequate number of observations to guarantee the scientific validity of the conclusions [1]. Some papers considered unsuitable for demonstration of adequate performance because of poor elements of the study design or inadequate analysis may still contain data suitable for safety analysis, or vice versa.

Typically, clinical data should receive the highest weighting when generated through a well-designed and monitored randomized controlled clinical investigation (also called a randomised controlled trial), conducted with the device under evaluation in its intended purpose, with patients and users that are representative of the target population [1]. It is acknowledged by the regulators that randomized clinical investigations may not always be feasible and/or appropriate, and the use of alternative study designs may provide relevant clinical information of adequate weighting. When rejecting evidence, the evaluators should document the reasons.

4. Analysis and conclusions generated from the clinical data

During the analysis stage (Stage 3), a comprehensive assessment is done to determine whether the data found actually meets the clinical safety requirements, clinical performance requirements, and General Safety and Performance Requirements (GSPR). It is important to evaluate whether the risk-benefit ratio of the medical device is appropriate based on the intended purpose of the device, whether the device can actually achieve all performance claims made by the manufacturer, and whether the materials supplied by the manufacturer (labelling/instructions) are adequate to describe the intended purpose and mitigate the risk [7]. All in all, the evaluation is intended to conclude whether the risks of the device are minimal and acceptable according to its purpose. As such, understanding the interaction between the device and the body, the number and severity of adverse events, and the current standards of care are some of the gaps that will need to be taken into account [1].

The data from the literature is often put into Excel tables, which is a convenient way to compare different study details, patient populations, endpoints, adverse events, etc. [7]. This is extremely helpful in noting differences between studies when writing the summary and conclusions. The evaluators should also include aspects such as rare complications, uncertainties regarding medium- and long-term performance, or safety under widespread use, and identify additional clinical investigations, or other measures, that are necessary in order to generate any missing data [1].
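The tabulation step can equally be done with plain CSV rather than Excel; a minimal sketch with fabricated study rows:

```python
# Sketch: tabulating extracted study details for side-by-side comparison,
# as an alternative to the Excel tables mentioned above. Study names and
# numbers are fabricated placeholders.
import csv
import io

studies = [
    {"study": "Smith 2018", "n": 120, "endpoint": "pain score", "adverse_events": 3},
    {"study": "Lee 2020",   "n": 85,  "endpoint": "pain score", "adverse_events": 1},
]

# Write the extraction table to an in-memory CSV; in practice this would
# go to a file alongside the literature search report.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["study", "n", "endpoint", "adverse_events"])
writer.writeheader()
writer.writerows(studies)
print(buf.getvalue())
```

Keeping one row per study with consistent columns makes differences in populations, endpoints, and adverse events visible at a glance when writing the summary.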

5. Informatic tools

This massive task of literature searching can be streamlined with the right research informatics solutions. Nowadays, the key is to select a literature database with appropriate medical device coverage, in terms of content and indexing. In a case study reviewing literature about a particular medical device, the top research informatics solutions were Embase and Science Citation Index (SCI) [3]; Medline and BioMed Central are also considered top research informatics solutions for retrieving medical device literature and can be used by anyone.

Using a solution that performs automated searches and notifies the user of relevant new data via email alerts or RSS feeds saves considerable time and keeps the evaluator updated until the final stage of submission [3]. If the tool has the appropriate indexing and tagging, this will simplify literature triage and, more importantly, dramatically reduce the risk of missing adverse event reports. The combination of a good literature search tool and a trained evaluator can be the best way to avoid the errors and limitations of literature review searching.

Also read: Writing a Clinical Evaluation Report: 5 Quick Tips

6. Process flow

Modern regulatory requirements have made biomedical literature research an essential part of the medical device life cycle; as such, a good strategy to find and summarize all the relevant clinical data is a must:

  • Identify the question to be answered;
  • Decide which database best fits the question;
  • Identify the Medical Subject Headings (MeSH) or Embase Subject Headings (Emtree);
  • Recognize the correct Boolean terms to use;
  • Create and document the literature search protocol;
  • Run the literature search automation tool;
  • Appraise and analyse the literature (tabulation);
  • Summarize conclusions.

Conclusions

The future looks promising for the daunting task of doing a systematic literature review, since artificial intelligence and natural language processing-based tools with cognitive capabilities provide a near-perfect solution [6]. But, until then, an organized and highly skilled research evaluator is essential to execute a dedicated strategy for literature monitoring, triage, and analysis.

Need help with literature reviews for medical devices? Hire clinical evaluation experts and literature search specialists on Kolabtree. 

References:

  • European Commission. Clinical Evaluation: A Guide for Manufacturers and Notified Bodies Under Directives 93/42/EEC and 90/385/EEC. MEDDEV 2.7/1 revision 4. 2016.
  • FDA. Premarket Approval (PMA). 2019. Accessed 18 May 2020.
  • Elsevier. Boosting the Success of Medical Device Development with Systematic Literature Reviews. 2014. Accessed 18 May 2020.
  • FDA. Premarket Notification 510(k). 2020. Accessed 19 May 2020.
  • OrielStat. Creating an EU CER Literature Review Protocol and Reviewing Medical Device Clinical Data. 2019. Accessed 18 May 2020.
  • Indani A, SRB, Ansari N. Literature Search for Scientific Processes in Medical Devices: Challenges, Errors, and Mitigation Strategies. Tata Consultancies. 2017;7.
  • OrielStat. Analyzing Your Medical Device Clinical Datasets and Drawing Conclusions. 2019. Accessed 18 May 2020.





Published on 17.4.2024 in Vol 26 (2024)

Mobile Apps to Support Mental Health Response in Natural Disasters: Scoping Review

Authors of this article:


  • Nwamaka Alexandra Ezeonu 1, MBBS, MSc, MBA;
  • Attila J Hertelendy 2, 3, BSc, MHS, MSc, PhD;
  • Medard Kofi Adu 4, BSc, MSc;
  • Janice Y Kung 5, BCom, LMIS;
  • Ijeoma Uchenna Itanyi 1, 6, 7, MBBS, MPH;
  • Raquel da Luz Dias 4, BSc, MSc, PhD;
  • Belinda Agyapong 8, HDip, BSc, MEd;
  • Petra Hertelendy 9, BS;
  • Francis Ohanyido 10, MBBS, MBA, MPH;
  • Vincent Israel Opoku Agyapong 4, BSc, PGD, MBChB, MSc, MD, PhD;
  • Ejemai Eboreime 4, MBBS, MSc, PhD

1 Center for Translation and Implementation Research, College of Medicine, University of Nigeria, Nsukka, Nigeria

2 Department of Information Systems and Business Analytics, College of Business, Florida International University, Miami, FL, United States

3 Department of Emergency Medicine, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, United States

4 Department of Psychiatry, Faculty of Medicine, Dalhousie University, Halifax, NS, Canada

5 Geoffrey and Robyn Sperber Health Sciences Library, University of Alberta, Edmonton, AB, Canada

6 Department of Community Medicine, University of Nigeria, Enugu, Nigeria

7 Department of Public Health Sciences, Dalla Lana School of Public Health, University of Toronto, Toronto, ON, Canada

8 Department of Psychiatry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada

9 Department of Psychology, Florida State University, Tallahassee, FL, United States

10 West African Institute of Public Health, Abuja, Nigeria

Corresponding Author:

Ejemai Eboreime, MBBS, MSc, PhD

Department of Psychiatry

Faculty of Medicine

Dalhousie University

5909 Veterans' Memorial Lane

8th Floor Abbie J Lane Memorial Building, QEII Health Sciences Centre

Halifax, NS, B3H 2E2

Phone: 1 9024732479

Email: [email protected]

Background: Disasters are becoming more frequent due to the impact of extreme weather events attributed to climate change, causing loss of lives, property, and psychological trauma. Mental health response to disasters emphasizes prevention and mitigation, and mobile health (mHealth) apps have been used for mental health promotion and treatment. However, little is known about their use in the mental health components of disaster management.

Objective: This scoping review was conducted to explore the use of mobile phone apps for mental health responses to natural disasters and to identify gaps in the literature.

Methods: We identified relevant keywords and subject headings and conducted comprehensive searches in 6 electronic databases. Studies in which participants were exposed to a man-made disaster were included if the sample also included some participants exposed to a natural hazard. Only full-text studies published in English were included. The initial titles and abstracts of the unique papers were screened by 2 independent review authors. Full texts of the selected papers that met the inclusion criteria were reviewed by the 2 independent reviewers. Data were extracted from each selected full-text paper and synthesized using a narrative approach based on the outcome measures, duration, frequency of use of the mobile phone apps, and the outcomes. This scoping review was reported according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews).

Results: Of the 1398 papers retrieved, 5 were included in this review. A total of 3 studies were conducted on participants exposed to psychological stress following a disaster while 2 were for disaster relief workers. The mobile phone apps for the interventions included Training for Life Skills, Sonoma Rises, Headspace, Psychological First Aid, and Substance Abuse and Mental Health Services Administration (SAMHSA) Behavioural Health Disaster Response Apps. The different studies assessed the effectiveness or efficacy of the mobile app, feasibility, acceptability, and characteristics of app use or predictors of use. Different measures were used to assess the effectiveness of the apps’ use as either the primary or secondary outcome.

Conclusions: A limited number of studies are exploring the use of mobile phone apps for mental health responses to disasters. The 5 studies included in this review showed promising results. Mobile apps have the potential to provide effective mental health support before, during, and after disasters. However, further research is needed to explore the potential of mobile phone apps in mental health responses to all hazards.

Introduction

Rising global average temperatures and associated changes in weather patterns result in extreme weather events that include hazards such as heatwaves, wildfires, hurricanes, floods, and droughts [1]. These extreme events linked to climate change are resulting in overlapping and so-called cascading disasters leading to record numbers of “billion dollar” disasters with significant losses of lives and property [2,3]. In 2021 alone, approximately 10,000 fatalities caused by disasters were reported globally, while the economic loss was estimated at approximately US $343 billion [4]. Disasters are predicted to become more recurring as a result of the impact of human activities such as burning fossil fuels and deforestation, which release greenhouse gases into the atmosphere that trap heat and cause global temperatures to rise [5].

These catastrophes can adversely affect physical health, mental health, and well-being in both the short and long term as a result of changes due to the political and socioeconomic context, evacuations, social disruption, damage to health care facilities, and financial losses [6-10]. It is estimated that about 33% of people directly exposed to natural disasters will experience mental health sequelae such as posttraumatic stress disorders (PTSDs), anxiety, and depression, among others [11,12].

There is growing recognition of the importance of incorporating mental health into medical and emergency aspects of disaster response [ 12 , 13 ]. However, in contrast to most medical response strategies that are largely curative, mental health response to disasters is predicated on the principles of preventive medicine, thus, emphasizing health promotion, disaster prevention, preparedness, and mitigation [ 14 ]. The strategies of mental health response span across primary prevention (mitigating the risk of ill health before it develops), secondary prevention (early detection and intervention), and tertiary prevention (managing established ailment and averting further complications) [ 15 ].

Mobile health (mHealth) technology has shown great promise in mental health and has been applied across the 3 levels of prevention [ 16 - 20 ]. For example, SMS text messaging and mobile apps have been developed to promote mental health awareness among young people and older adults (primary prevention) [ 21 ]. Additionally, during the COVID-19 pandemic, mHealth was deployed at the population level in Canada to screen for symptoms of anxiety and depression (secondary prevention) [ 22 ]. In addition, mHealth interventions were deployed to support first responders and essential workers during the pandemic [ 23 , 24 ]. Further, the technology has been deployed for therapeutic purposes in patients diagnosed with mental health conditions while simultaneously providing support against complications such as suicidal ideation (tertiary prevention) [ 25 ].

Although videoconferencing and phone calls can be used for mental health conditions, mobile apps offer greater mobility and accessibility, are interactive and more adaptable to users’ routines, and can be used repeatedly [ 26 , 27 ]. While numerous academic studies have examined the application of mHealth in the preventive and curative management of mental health conditions in clinical, community, and public health settings, including epidemic response and control, little is known about the use of mobile apps in the mental health components of natural disaster management. This scoping review aims to fill this gap in the literature by mapping where and how mobile apps have been used as part of natural disaster mental health response strategies.

This scoping review was reported according to the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) [ 28 ]. The PRISMA-ScR checklist is available in Multimedia Appendix 1 . The protocol was not registered.

Search Strategy

A medical librarian (JYK) collaborated with the research team to identify relevant keywords and subject headings for the review, such as mHealth or m-health; mobile health or mobile applications; public health emergency, disaster, or catastrophe; and flood, earthquake, or hurricane. Equipped with this knowledge, the librarian developed and executed comprehensive searches in 6 electronic databases: Ovid MEDLINE, Ovid Embase, APA PsycInfo, CINAHL, Scopus, and Web of Science Core Collection. The search was conducted on June 30, 2022, and was limited to the English language. The full search strategies are available in Multimedia Appendix 2 .

Inclusion and Exclusion Criteria

We included papers that applied mobile apps for mental health responses to disasters. Papers were included if the study participants were persons affected by a natural disaster (setting), the intervention included using a mobile phone app, and the outcome included the assessment of a mental health problem. Studies in which participants were exposed to a man-made disaster were included if the sample also included some participants exposed to a natural disaster. The mental health conditions included were stress, anxiety, depression, and PTSD. Only full-text studies published in English were included. Studies that did not include any intervention with a mobile app for mental health, those focused on videoconferencing or phone calls, and papers on protocols, trial registrations, or reviews were excluded.

Selection of Studies

After duplicates were removed, the titles and abstracts of the unique papers retrieved from the databases were screened against the inclusion criteria by 2 independent review authors using the web-based tool Covidence (Veritas Health Innovation Ltd) [ 29 ]. Full texts of the papers that met the inclusion criteria at this stage were then reviewed by the same 2 independent reviewers. The research team resolved disagreements through discussion. The bibliographies of the included studies were also reviewed to identify additional studies for inclusion.

Data Extraction and Synthesis

Data from each selected full-text paper were extracted into a data extraction form developed by the research team. The data included the author and year of publication, country of study, study design, number of participants, type of natural disaster, name of the mobile app, duration of use of the app, outcome measures, and the study’s findings. These data were synthesized using a narrative approach based on the outcome measures, the duration, frequency of use of the mobile apps, and the outcomes.

Search Results

Of the 1532 papers retrieved from the searches, 976 unique papers had their titles and abstracts screened after deduplication. A total of 38 papers were moved to full-text screening, and data were extracted from 5 papers [ 30 - 34 ] ( Figure 1 ). Table 1 shows the summary of the details of the papers.
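The screening counts above determine the remaining PRISMA flow-diagram numbers by simple subtraction. As an illustrative sketch only (the variable names are ours; the figures are taken from the text):

```python
# Deriving PRISMA flow-diagram counts from the screening numbers
# reported above (illustrative; all input figures come from the text).
retrieved = 1532   # records retrieved across the 6 databases
unique = 976       # records remaining after deduplication
full_text = 38     # records advanced to full-text screening
included = 5       # studies included in the review

duplicates_removed = retrieved - unique          # removed before screening
excluded_title_abstract = unique - full_text     # excluded at title/abstract
excluded_full_text = full_text - included        # excluded at full text

print(duplicates_removed, excluded_title_abstract, excluded_full_text)
```

These derived counts (duplicates removed, records excluded at each stage) are the values a PRISMA flow diagram expects alongside the reported totals.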


a TLS: Training for Life Skills.

b PTSD: posttraumatic stress disorder.

c MBSR: Mindfulness-Based Stress Reduction.

d PFA: Psychological First Aid.

e SAMHSA: Substance Abuse and Mental Health Services Administration.

Characteristics of Included Studies

Of the 5 studies included in this review, 3 (60%) were conducted in the United States [ 30 , 31 , 34 ], while 2 (40%) were conducted in South Korea [ 32 , 33 ]. Each study used a different design. A total of 3 studies used a quasi-experimental design: the first, a single-group posttest-only design with 22 participants [ 32 ]; the second, a multiple-baseline single-case experimental design with 7 participants [ 30 ]; and the third, a 1-group pre- and posttest design with 318 participants [ 31 ]. The Training for Life Skills (TLS) app study had only a posttest following the use of the app [ 32 ]; the other 2 had baseline and follow-up measurements, with the Sonoma Rises app study additionally including preintervention and postintervention measurements. The Psychological First Aid (PFA) study was designed as a qualitative study, while the Substance Abuse and Mental Health Services Administration (SAMHSA) study used a mixed methods descriptive design.

Characteristics of the Population

The TLS, Sonoma Rises, and Headspace apps were designed for disaster survivors, while the PFA and SAMHSA apps were designed to support disaster relief workers. The TLS app study was administered to adults with a median age of 32 years. Participants of the Sonoma Rises app study had a mean age of 16 (SD 0.98) years, while participants of the Headspace app study had a mean age of 46.1 (SD 10) years. The TLS app study focused on all types of disasters; the Sonoma Rises study focused on adolescents exposed to wildfires, while the Headspace app study focused on women who had experienced hurricanes and a deepwater oil spill. The PFA study involved 19 disaster health care workers who first underwent disaster simulation training using the mobile app.

Characteristics of the Mobile App Interventions

The included studies revealed several mobile phone apps used as interventions. The first, the TLS app, was used as a psychological first aid program for disaster survivors with content on information, psychological healing, and mood change [ 32 ]. The second was the Sonoma Rises app, a Health Insurance Portability and Accountability Act (HIPAA)–compliant, cloud-based mobile app with daily push notifications as reminders designed to help survivors of wildfires or other disasters to find their new routines, build resilience, and increase well-being. The app included 6 self-paced content sections, psychoeducation, and direct connections to free and local mental health care services. The third was the Headspace app for a mindfulness-based stress reduction program that included a series consisting of 10 sessions designed to be used for about 10 minutes per day. The SAMHSA Disaster App equips behavioral health providers to respond to all kinds of traumatic incidents by enabling them to readily access disaster-specific information and other important materials directly on their mobile devices [ 34 ]. The PFA mobile app provided evidence-based information and tools for disaster workers to prepare for, execute, and recover from providing psychological first aid during disasters. Accessibility via smartphones and the inclusion of multimedia interventions and assessments tailored for disaster contexts were key features enabling its use integrated with the simulation training [ 33 ].

Frequency and Duration of App Use

The 3 survivor-based apps varied in the duration and frequency of the intervention (app use): 8 weeks, at least 5 times a week, with the frequency of use per day not specified [ 32 ]; 4 weeks for 10 minutes per day [ 30 ]; and 6 weeks for 5-10 minutes per day [ 31 ]. Both the TLS app and the Sonoma Rises app studies had weekly follow-up assessments. The different interventions were applied at least a year after the disasters. Participants in the Sonoma Rises app study used the app on an average of 17 (SD 8.92) days and visited the app an average of 43.50 (SD 30.56) times, with an average session lasting 56.85 (SD 27.87) seconds; their mean total time spent on the app was 35.77 (SD 30.03) minutes, while for the TLS app study, the median time spent on the app over the 8 weeks of use was 200-399 minutes. Participants used the Headspace app on an average of 24 (SD 36) days and logged in an average of 36 (SD 80) times. The frequency and duration of use were not described for the relief worker apps.

Effectiveness Outcomes

Effectiveness outcomes refer to the effects or impact of an intervention or program on its intended outcomes or goals. Different measures were used to assess the effectiveness of the apps’ use as either the primary or secondary outcome. The TLS app study used emotional quotients (emotional stability), basic rhythm quotients (brain stability), alpha-blocking rates (increased positive mood), and brain quotients assessed via electroencephalogram (EEG)–measured brainwave activity, adjusted for self-reported app use time [ 32 ]. The Headspace app study assessed effectiveness using a combination of measures: trait mindfulness using the 15-item Mindful Attention Awareness Scale (MAAS), trait version; depressive symptoms using the Center for Epidemiologic Studies Depression Scale-10 (CESD-10); perceived stress using the Perceived Stress Scale, 4-item version (PSS-4); and sleep quality using the Pittsburgh Sleep Quality Index (PSQI) [ 31 ]. The Sonoma Rises app study measured efficacy using daily ratings of anxiety and fear; weekly measures of posttraumatic stress symptoms using the Child PTSD Symptom Scale (CPSS-5) for the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition ( DSM-5 ); internalizing and externalizing symptoms using the Behaviour and Feelings Survey (BFS); psychosocial functioning using the Ohio Scale for Youth, Functioning subscale (OSY); and measures of anxiety (Generalized Anxiety Disorder-7 [GAD-7]), depression (Patient Health Questionnaire-9 [PHQ-9]), well-being (Warwick-Edinburgh Mental Well-being Scale [WEMWBS]), sleep (Insomnia Severity Index [ISI]), academic engagement (Student Engagement Instrument [SEI]), and perceived social support (Wills’ Social Support Scale [WSSS]) [ 30 ].

All 3 survivor-based apps were found to have positive benefits in addressing mental health issues among persons exposed to natural disasters. The TLS mobile app was shown to be effective in increasing positive and decreasing negative psychological factors according to app use time. The TLS mobile app’s use had a significant effect on the emotional quotient (β=.550; P <.008; explanatory power [EP] 30%) and significant positive effects on the basic rhythm quotient (left brain: β=.598; P <.003; EP 35%; right brain: β=.451; P <.035; EP 20%), the alpha-blocking rate (left brain: β=.510; P <.015; EP 26%; right brain: β=.463; P <.035; EP 21%), and the brain quotient (β=.451; P <.035; EP 20%) [ 32 ]. The Headspace app had a positive effect on depression (odds ratio [OR] 0.3, 95% CI 0.11-0.81), physical activity (OR 2.8, 95% CI 1.0-7.8), sleep latency (OR 0.3, 95% CI 0.11-0.81), sleep duration (OR 0.3, 95% CI 0.07-0.86), and sleep quality (OR 0.1, 95% CI 0.02-0.96); however, there was no change in mindfulness scores from baseline to follow-up. For the Sonoma Rises app, no significant effects were observed for the clinical and functional outcomes because the longitudinal part of the study was limited by low statistical power resulting from the small sample size and by historical confounds that led participants to miss data submissions. However, visual inspection of individual data following the intervention showed downward trends across the study phases for daily levels of anxiety, fearfulness, and individual posttraumatic stress symptom severity.

For the PFA app, the qualitative study explored disaster health workers’ experiences with simulation training using focus group discussions. A total of 19 participants engaged in disaster scenarios with standardized patients, using a PFA app for guidance. Workers valued the practical educational approach, felt increased self-efficacy to support survivors, and identified areas for enhancing simulations and app tools to optimize effectiveness.

Implementation Outcomes

Implementation outcomes refer to the effects of an intervention or program implementation on various aspects of the implementation process, such as fidelity of implementation, acceptability, adoption, feasibility, and maintainability. In the papers reviewed, feasibility was assessed using enrollment, program participation, and retention. Acceptability was measured by how well participants liked the app on a rating scale, how much of the app program was completed, the biggest barriers to use, and whether participants would recommend the app to others. Characteristics of app use (engagement) were measured using the total number of logins, average logins per program completer, platform used (iOS, Android, or web-based), day of week of use (weekday vs weekend), and time of day of use (in 4-hour blocks) [ 30 , 31 ].

The Headspace app was reported to be cost-effective to implement and easy to use [ 31 ]. However, only 14% (43/318) of the enrolled women used the app. Among users, engagement was high, with 72% (31/43) completing some or all of the sessions. Retention was also high, with 74% (32/43) of users completing the follow-up survey. Lack of time was cited as the main barrier to using the app by 37% (16/43) of users and 49% (94/193) of nonusers. Most users (32/43, 74%) reported high levels of satisfaction with the app. Acceptability was also high, with most participants (32/43, 74%) reporting that they liked the app and 86% (37/43) reporting that they would recommend it to others. Of the 1530 logins, most (n=1191, 78%) were on the iOS platform, mainly on weekdays (n=1147, 75%), and mostly from noon to 4 PM (n=375, 25%).

Sonoma Rises was found to be feasible in terms of engagement and satisfaction among teens with high levels of disaster-related posttraumatic stress symptoms [ 30 ]. The self-assessment and data visualization features of the Sonoma Rises app strongly appealed to all the participants, and they were willing to recommend the app to their friends. Satisfaction with the mobile app was rated as extremely high (mean 8.50, SD 0.58, on a scale of 0 to 10, with 10 as totally satisfied). The participants agreed or strongly agreed that they would recommend this intervention to a friend. On a scale of 1 to 5, with 1 as strongly agree and 5 as strongly disagree, the participants found the intervention helpful (mean 2, SD 0.82); felt it had the content, functions, and capabilities they needed (mean 3, SD 1.12); and were satisfied with how easy the app was to use (mean 2, SD 0). In the qualitative feedback, participants suggested more notifications prompting a return to the app and making the app available immediately after a disaster. Implementation outcomes were not an objective of the TLS app study; hence, none were reported.

Other Mobile Apps With Potential Use in Disasters

Some mobile apps not meeting the inclusion criteria showed promise for supporting mental health in disasters. PTSD Coach provides tools for managing PTSD symptoms [ 35 ]. Though not disaster-specific, its psychoeducation, symptom tracking, and coping strategies could aid survivors. Similarly, COVID Coach was designed to help manage pandemic-related stress and anxiety [ 36 ]. These apps are summarized in Table 2 .

a PTSD: posttraumatic stress disorder.

Principal Findings

This review sought to identify and map the use of mobile apps for the mental health component of natural disaster management. We found only 5 studies meeting the inclusion criteria. The scarcity of published literature in this area suggests that mobile apps have not been extensively used in mental health responses to natural disasters. Academic studies on the public’s use of mobile technologies in disaster management are still nascent [ 37 ], but there has been increased interest in developing and deploying digital technology and mobile apps by governments and nonstate actors as part of disaster preparedness and response [ 38 , 39 ]. A recent systematic review found that there is a lack of mental health preparedness in most countries when it comes to disasters [ 40 ]. The 5 studies included in our scoping review confirmed this gap and further demonstrated that mobile apps can provide mental health support to disaster-affected individuals and communities. The studies found that the use of mobile apps was associated with improvements in mental health outcomes, such as decreased anxiety and depression symptoms and increased resilience. The reviewed studies also suggest that mobile apps can be effective in delivering psychoeducation and coping skills training to disaster-affected individuals. A 2017 scoping review found that mobile apps have been largely used for communication purposes in disaster management [ 37 ]. The scope of use was classified into 5 categories, which are not mutually exclusive: (1) crowdsourcing (organizing and collecting disaster-related data from the crowd), (2) collaborating platforms (serving as a platform for collaboration during disasters), (3) alerting and information (disseminating authorized information before and during disasters), (4) collating (gathering, filtering, and analyzing data to build situational awareness), and (5) notifying (enabling users to notify others during disasters) [ 37 ].

Some authors classify disaster management into 3 phases: preparedness, response, and recovery [ 41 ]. The studies included in this review exclusively examined the use of mobile apps during the recovery phase; none explored the potential of mobile apps during the preparedness or response phases. By addressing this gap, future research could help provide more comprehensive and effective strategies for the use of mobile apps throughout all phases of disaster management. Examples of potential opportunities are demonstrated in Figure 2 .


Preparedness Phase

Mobile apps can play a critical role as primary prevention interventions by raising awareness and promoting mental health literacy in the community in preparation for natural disasters. These apps can provide information on common mental health problems that may arise during and after disasters and offer tips on staying mentally healthy. For example, apps can include psychoeducation modules on coping skills, stress reduction, and self-care techniques, as well as information on how to prepare for a disaster and what steps to take to protect one’s mental health during and after a disaster. The use and effectiveness of mobile apps in health literacy have been demonstrated in the literature [ 19 ], thus providing a foundation for adaptation in disaster management.

Response Phase

Mobile apps can be used to connect people in need of mental health support with mental health professionals or other resources. For example, apps can provide information on emergency hotlines, crisis intervention services, and support groups. This was demonstrated as effective during the COVID-19 pandemic [ 42 ]. Mobile apps can also provide coping strategies and techniques to manage stress and anxiety in response to other natural disasters [ 34 ]. In this scoping review, we found that 3 apps had positive benefits in addressing mental health issues among persons exposed to natural disasters.

Recovery Phase

As part of secondary and tertiary prevention strategies, mobile apps can provide valuable ongoing support to those affected by disasters. For secondary prevention, mobile apps can be designed to support early detection and intervention for mental health problems after a natural disaster. These apps can include screening tools to identify common mental health issues such as anxiety, depression, and PTSD and offer appropriate referral pathways [ 43 ]. Additionally, apps can provide symptom-tracking tools to help individuals monitor their mental health over time [ 43 ]. For tertiary prevention, mobile apps can support the ongoing management of established mental health problems after a natural disaster. For example, apps can provide evidence-based psychotherapy interventions, such as cognitive-behavioral therapy, to help individuals manage their symptoms [ 44 ]. They can also connect individuals with support groups and peer-to-peer networks to provide additional emotional support and help individuals connect with others who have experienced similar challenges. Furthermore, mobile apps can offer self-help tools, such as meditation exercises and mood tracking, to help people cope with the ongoing mental health effects of the disaster. They can also provide information on local mental health services and support groups, helping individuals access the resources they need to manage their mental health.

General Mental Health Apps Show Promise for Disaster Response

While not specifically designed for disaster contexts, some mobile apps demonstrate strategies to support mental health that could aid disaster survivors. PTSD Coach delivers PTSD psychoeducation, symptom tracking tools, coping skills training, and access to crisis resources, elements that could help survivors experiencing common postdisaster issues such as trauma or loss [ 35 ]. Though it was tailored for veterans and civilians with PTSD, 1 study found that it improved users’ depression and functioning. Similarly, COVID Coach offered pandemic-related stress management through symptom tracking, healthy coping recommendations, and crisis line referrals [ 36 ]. By leveraging the scalability of mobile apps, COVID Coach reached many people struggling during a global crisis. These examples illustrate that apps may provide accessible, far-reaching mediums for disseminating disaster mental health resources even without disaster-specific tailoring. Research should further explore adapting evidence-based general mental health apps for disaster contexts or incorporating elements of them into future disaster response tools. With mental health needs magnified during disasters, thoughtfully designed mobile apps show promise in expanding access to psychosocial support.

There are several potential limitations when using mobile apps for mental health responses to disasters. One of the main concerns is the accessibility of these apps, as not all members of the affected communities may have access to smartphones or internet connectivity. Furthermore, language and cultural barriers may prevent effective use. Another potential limitation is the quality and accuracy of the information provided. Without proper oversight, some apps may provide misinformation or inaccurate advice, which could exacerbate mental health issues. In addition, privacy concerns around collecting and storing sensitive data must be addressed.

Barriers such as lack of mobile devices and internet access can impede adoption, especially in marginalized areas. Apps that are not designed for low-literacy users or that are available only in certain languages could also limit accessibility, and concerns around privacy and security may deter some individuals. However, the global ubiquity of smartphones enables use by vulnerable groups. Government agencies and nongovernmental organizations (NGOs) can promote adoption by integrating vetted apps into disaster protocols and funding their dissemination. Developing apps with stakeholders and conducting prelaunch user testing also facilitate uptake, and monitoring user feedback allows for ongoing optimization and troubleshooting of barriers. Cultural tailoring that addresses stigma and incorporates local beliefs further enables implementation success. Finally, the limited evidence base on app effectiveness highlights the need for more rigorous evaluation and testing of mobile apps for disaster mental health response.

This scoping review has certain methodological limitations that should be considered when interpreting its results. First, the search was restricted to 6 electronic databases, only English-language papers were considered, and we searched MEDLINE rather than PubMed; these choices may have led to the omission of some relevant studies. Second, the study focused on mobile phone apps for mental health responses to disasters, disregarding other types of technology that could also be used in disaster management, such as telehealth, SMS text messaging, and emails. Third, since the study included only 5 papers, it may not offer a comprehensive overview of the use of mobile phone apps in disaster response strategies, and apps may exist that have not yet been described in the academic literature. Fourth, the absence of a control group in the design of the included studies makes it difficult to determine whether the observed effects were entirely due to the use of the apps or to other characteristics of the participants that predisposed them to use the apps. Fifth, the small sample sizes of the studies mean their findings should be generalized with caution. Last, our study focused on natural disasters; further research should examine the role of apps in supporting mental health in conflicts and complex emergencies such as wars, outbreaks of violence, and complex political conflict situations [ 45 ]. Despite these limitations, the review provides valuable insights into the use of mobile apps in disaster response and serves as a useful resource for developing contextually appropriate mobile apps for disaster management.

Conclusions

This scoping review found that mobile apps have not been extensively used in mental health responses to natural disasters, with only 5 studies meeting the inclusion criteria. However, the included studies demonstrate that mobile apps can be useful in providing mental health support to disaster-affected individuals, as well as in equipping disaster responders. This study identified a critical gap: none of the included studies investigated the use of mobile apps for potential victims in the preparedness or response phases of disaster management. We, therefore, recommend that mobile apps be integrated into the various phases of disaster management as part of the mental health response. Additionally, it is important to ensure that these apps are accessible to all members of the community, taking into account cultural, linguistic, and other factors that may affect their effectiveness. Mobile apps have great potential to provide valuable ongoing support to those affected by disasters; they can be a valuable resource in disaster management, helping people cope with the mental health effects of disasters and connect with the necessary support services.

The findings from this scoping review have important implications for policy makers, disaster management professionals, and mental health practitioners. There is a clear need for policies and protocols that integrate evidence-based mobile apps into mental health disaster planning and response. Disaster agencies should invest in developing, evaluating, and widely disseminating mobile apps specifically designed to mitigate psychological trauma before, during, and after catastrophic events. Mental health professionals can incorporate vetted mobile apps into their standard of care for at-risk disaster survivors. Going forward, a collaborative approach across these groups will be essential to leverage mobile technology in building community resilience and addressing the rising mental health burdens in an era defined by climate change–fueled natural disasters.

Acknowledgments

This work was funded by the Department of Psychiatry, Dalhousie University, Halifax, Canada. The funder was not involved in the conceptualization or implementation of the study, nor the decision to publish the findings.

Conflicts of Interest

None declared.

The PRISMA-ScR checklist. PRISMA-ScR: Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews.

Detailed search strategy.


Edited by G Eysenbach; submitted 13.06.23; peer-reviewed by T Benham, K Goniewicz, R Konu, J Ranse, P Moreno-Peral; comments to author 10.01.24; revised version received 25.02.24; accepted 23.03.24; published 17.04.24.

©Nwamaka Alexandra Ezeonu, Attila J Hertelendy, Medard Kofi Adu, Janice Y Kung, Ijeoma Uchenna Itanyi, Raquel da Luz Dias, Belinda Agyapong, Petra Hertelendy, Francis Ohanyido, Vincent Israel Opoku Agyapong, Ejemai Eboreime. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 17.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

Britain Is Leaving the U.S. Gender-Medicine Debate Behind

The Cass report challenges the scientific basis of medical transition for minors.

In a world without partisan politics, the Cass report on youth gender medicine would prompt serious reflection from American trans-rights activists, their supporters in the media, and the doctors and institutions offering hormonal and surgical treatments to minors. At the request of the English National Health Service, the senior pediatrician Hilary Cass has completed the most thorough consideration yet of this field, and her report calmly and carefully demolishes many common activist tropes. Puberty blockers do have side effects, Cass found. The evidence base for widely used treatments is “shaky.” Their safety and effectiveness are not settled science.

The report drew on extensive interviews with doctors, parents, and young people, as well as on a series of new, systematic literature reviews. Its publication marks a decisive turn away from the affirmative model of treatment, in line with similar moves in other European countries. What Cass’s final document finds, largely, is an absence. “The reality is that we have no good evidence on the long-term outcomes of interventions to manage gender-related distress,” Cass writes. We also don’t have strong evidence that social transitioning, such as changing names or pronouns, affects adolescents’ mental-health outcomes (either positively or negatively). We don’t have strong evidence that puberty blockers are merely a pause button, or that their benefits outweigh their downsides, or that they are lifesaving care in the sense that they prevent suicides. We don’t know why the number of children turning up at gender clinics rose so dramatically during the 2010s, or why the demographics of those children changed from a majority of biological males to a majority of biological females. Neither “born that way” nor “it’s all social contagion” captures the complexity of the picture, Cass writes.

What Cass does feel confident in saying is this: When it comes to alleviating gender-related distress, “for the majority of young people, a medical pathway may not be the best way to achieve this.” That conclusion will now inform the creation of new state-provided services in England. These will attempt to consider patients more holistically, acknowledging that their gender distress might be part of a picture that also includes anxiety, autism, obsessive-compulsive disorder, eating disorders, or past trauma.

This is a million miles away from prominent American medical groups’ recommendation to simply affirm an adolescent’s stated gender—and from common practice at American gender clinics. For example, a Reuters investigation found that, of 18 U.S. clinics surveyed, none conducted the lengthy psychological assessments used by Dutch researchers who pioneered the use of medical gender treatments in adolescents; some clinics prescribe puberty blockers or hormones during a patient’s first visit. Under pressure from its members, the American Academy of Pediatrics last year commissioned its own evidence review, which is still in progress. But at the same time, the group restated its 2018 commitment to the medical model.

The Cass report’s findings also contradict the prevailing wisdom at many media outlets, some of which have uncritically repeated advocacy groups’ talking points. In an extreme example recently noted by the writer Jesse Singal, CNN seems to have a verbal formula, repeated across multiple stories, to assure its audience that “gender-affirming care is medically necessary, evidence-based care.” On a variety of platforms, prominent liberal commentators have presented growing concerns about the use of puberty blockers as an ill-informed moral panic.

The truth is that, although American medical groups have indeed reached a consensus about the benefits of youth gender medicine, doctors with direct experience in the field are divided, particularly outside the United States. “Clinicians who have spent many years working in gender clinics have drawn very different conclusions from their clinical experience about the best way to support young people with gender-related distress,” Cass writes. Her report is a challenge to the latest standards of care from the U.S.-based World Professional Association for Transgender Health, which declined to institute minimum-age limits for surgery. The literature review included with her report is notably brutal about these guidelines, which are highly influential in youth gender medicine in America and around the world—but which, according to Cass, “lack developmental rigour.”

The crux of the report is that the ambitions of youth gender medicine outstripped the evidence—or, as Cass puts it, that doctors at the U.K. clinic whose practices she was examining, although well-meaning, “developed a fundamentally different philosophy and approach compared to other paediatric and mental health services.” How, she asks, did the medical pathway of puberty blockers and then cross-sex hormones—a treatment based on a single Dutch study in the 1990s—spread around the world so quickly and decisively? Why didn’t clinicians seek out more studies to confirm or disprove its safety and utility earlier? And what should child gender services look like now?

The answer to those first two questions is the same. Medicalized gender treatments for minors became wrapped up with a push for wider social acceptance for transgender people, something that was presented as the “next frontier in civil rights,” as Time magazine once described it. Any questions about such care were therefore read as stemming from transphobic hostility, full stop. And when those questions kept coming anyway, right-wing politicians and anti-woke comedians piled on, sensing an area where left-wing intellectuals were out of touch with popular opinion. In turn, that allowed misgivings to be dismissed as “fascism,” even though, as the British journalist Sarah Ditum has written, “it is not damning of feminists that they are on the same page as Vladimir Putin about there being two sexes. That is just how many sexes there are.”

In Britain, multiple clinicians working at the Gender Identity and Development Service (GIDS) at the Tavistock and Portman Trust, the central provider of youth gender medicine, tried to raise their concerns, only to have their fears dismissed as hostility toward trans people. Even those who stayed within the service have spoken about pressure from charities and lobbying groups to push children toward a medical pathway. As Cass notes, “There are few other areas of healthcare where professionals are so afraid to openly discuss their views, where people are vilified on social media, and where name-calling echoes the worst bullying behaviour.”

This hostile climate has hampered attempts to collect robust data about real-world outcomes. The report’s research team at the University of York tried to follow up on 9,000 former GIDS patients but was informed by National Health Service authorities in England in January that “despite efforts to encourage the participation of the NHS gender clinics, the necessary cooperation had not been forthcoming.” Cass has since wondered aloud if this decision was “ideologically driven,” and she recommends that the clinics be “directed to comply” with her team’s request for data.

As I have written before, the intense polarization of the past few years around gender appears to be receding in Britain. Kamran Abbasi, the editor in chief of The BMJ, the country’s foremost medical journal, wrote an editorial praising the report and echoing its conclusion that many “studies in gender medicine fall woefully short in terms of methodological rigour.” The country’s left-wing Labour Party has already accepted that feminist concerns about gender self-identification are legitimate, and its health spokesperson, Wes Streeting, welcomed the Cass report as soon as it was published. (The ruling Conservatives have also enthusiastically embraced its conclusions, and the former health secretary Sajid Javid pushed through a law change that made its data collection possible.) The LGBTQ charity Stonewall responded to the report by saying that some of its recommendations could be “positive,” and urged politicians to read it. Even Mermaids, the charity most associated with pushing the affirmative model in Britain, offered only lukewarm criticism that more gatekeeping could further increase waiting times.

The Cass report is a model for the treatment of fiercely debated social issues: nuanced, empathetic, evidence-based. It has taken a political debate and returned it to the realm of provable facts. And, unlike American medical groups, its author appears to have made a real effort to listen to people with opposing views, and attempted to reconcile their very different experiences of this topic. “I have spoken to transgender adults who are leading positive and successful lives, and feeling empowered by having made the decision to transition,” she writes in the introduction. “I have spoken to people who have detransitioned, some of whom deeply regret their earlier decisions.” What a difference from America, where detransitioners are routinely dismissed as Republican pawns and where even researchers who are trans themselves get pushback for investigating transition-related regret—and where red states have passed laws restricting care even for transgender adults, or have proposed removing civil-rights protections from them.

Has the Cass report gotten everything right? The methodology and conclusions of its research should be open to challenge and critique, as with any other study. But it is undoubtedly the work of serious people who have treated a delicate subject seriously. If you still think that concerns about child medical transition are nothing more than a moral panic, then I have a question: What evidence would change your mind?

BMC Med Inform Decis Mak

Who can you trust? A review of free online sources of “trustworthy” information about treatment effects for patients and the public

Andrew D. Oxman

1 Centre for Informed Health Choices, Norwegian Institute of Public Health, PO Box 4404, Nydalen, N-0403 Oslo, Norway

2 University of Oslo, Oslo, Norway

Elizabeth J. Paulsen

Associated data.

All data generated or analysed during this study are included in this published article and the Additional file 1 .

Information about effects of treatments based on unsystematic reviews of research evidence may be misleading. However, finding trustworthy information about the effects of treatments that is based on systematic reviews and accessible to patients and the public can be difficult. The objectives of this study were to identify and evaluate free sources of health information for patients and the public that provide information about effects of treatments based on systematic reviews.

We reviewed websites that we and our colleagues knew of, searched for government sponsored health information websites, and searched for online sources of health information that provide evidence-based information. To be included in our review, a website had to be available in English, freely accessible, and intended for patients and the public. In addition, it had to have a broad scope, not limited to specific conditions or types of treatments. It had to include a description of how the information is prepared and the description had to include a statement about using systematic reviews. We compared the included websites by searching for information about the effects of eight treatments.

Three websites met our inclusion criteria: Cochrane Evidence, Informed Health, and PubMed Health. The first two websites produce content, whereas PubMed Health aggregated content. A fourth website that met our inclusion criteria, CureFacts, was under development. Cochrane Evidence provides plain language summaries of Cochrane Reviews (i.e. summaries that are intended for patients and the public). They are translated into several other languages. No information besides treatment effects is provided. Informed Health provides information about treatment effects together with other information for a wide range of topics. PubMed Health was discontinued in October 2018. It included a large number of systematic reviews of treatment effects with plain language summaries for Cochrane Reviews and some other reviews. None of the three websites included links to ongoing trials, and information about treatment effects was not reported consistently on any of the websites.

It is possible for patients and the public to access trustworthy information about the effects of treatments using two of the websites included in this review.

Electronic supplementary material

The online version of this article (10.1186/s12911-019-0772-5) contains supplementary material, which is available to authorized users.

Patients and the public must make choices among different treatment options. We define “treatment” broadly, as any preventive, therapeutic, rehabilitative, or palliative action intended to improve the health or wellbeing of individuals or communities [ 1 ]. This includes, for example, drugs, surgery and other types of “modern medicine”; lifestyle changes, such as changes to what you eat or how you exercise; herbal remedies and other types of “traditional” or “alternative medicine”, and public health interventions. Few people would prefer that decisions about what they should and should not do for their health should be uninformed. Yet, if a decision is going to be well informed rather than misinformed, they need information that is relevant, trustworthy, and accessible. They also need to be able to distinguish between claims about the effects of treatments that are trustworthy and those that are not [ 2 ].

Often the problem is too much information rather than too little. For example, a Google search for “back pain” yields over 60 million hits [ 3 ]. PubMed, a free search engine for accessing MEDLINE and other databases maintained by the United States National Library of Medicine, includes over 27 million citations [ 4 ], and this represents only a fraction of the biomedical literature. The Cochrane Central Register of Controlled Trials, a bibliographic database that is restricted to controlled trials of treatments, contains over a million citations [ 5 ]. It is not practical for people making decisions about treatments to use search engines or databases such as these to find relevant information, critically appraise the studies they find, synthesize them, and interpret the results.

Systematic reviews reduce the risk of being misled by bias (systematic errors) and the play of chance (random errors), by using systematic and explicit methods to identify, select, and critically appraise relevant studies, and to collect and analyse data from them [ 6 ]. For information about treatment effects to be trustworthy, it should be based on systematic reviews. For it to be accessible to patients and the public, it should be easy to find and should be clearly communicated in plain language [ 7 ].

Unfortunately, a large amount of information about treatment effects is not based on systematic reviews and is not trustworthy [ 8 – 19 ]. This includes handouts for patients [ 8 , 9 ], internet-based information [ 10 , 11 ], information in social and mass media [ 12 – 18 ], information produced by patient organisations [ 8 , 9 , 12 ], press releases [ 18 ], and advertisements [ 19 ]. Studies of the trustworthiness of health information have used a variety of criteria, but have consistently found important limitations [ 8 – 19 ]. Although trustworthy information about treatment effects can be found, evidence-based information is frequently written for health professionals or researchers, rather than for patients and the public [ 7 ].

There is an abundance of health information on the internet, which has become an important source of health information over the past two decades [ 10 , 11 , 20 – 24 ], but patients and the public find it difficult to search the internet for trustworthy information [ 21 – 23 ], and are unlikely to critically appraise the information that they do find [ 22 , 23 ].

There are a number of websites that aim to improve access to trustworthy health information for patients and the public. The objectives of this study were to identify free sources of health information for patients and the public which provide information about the effects of treatments based on systematic reviews, and to evaluate those websites.

Our motivation for undertaking this review grew out of a desire to respond to people who were looking for trustworthy information about the effects of specific treatments and landed on Testing Treatments international [ 25 ], a website for promoting critical thinking about treatment claims. Rather than simply noting that the Testing Treatments website does not provide the information they were seeking, we wanted to help them by directing them to sources that do provide this information. Given this motivation, we restricted our review of websites to ones with a broad scope. There were two reasons for this. First, websites with a broad scope can meet the needs of most people seeking trustworthy information about treatment effects. Furthermore, although disease-specific websites can be useful, it would be impractical to assess any more than a small sample of websites for specific conditions or types of treatments. Second, it is easier to become familiar with one or a small number of websites than it is to use multiple websites for questions about different conditions or types of treatments.

We considered any website that defined itself as providing “health information”, which included information about treatments. To be included in this review a website needed to be:

  • Available in English
  • Freely accessible (i.e. non-commercial with no cost to users or membership fees)
  • Described as being intended for patients and the public
  • Broad in scope (not limited to specific conditions or types of treatments)
  • Explicitly based on systematic reviews (i.e. there had to be a description of how the information is prepared and the description had to include a statement about using systematic reviews)

We identified websites that potentially met those criteria by considering websites that we and our colleagues (see Acknowledgements) knew of. The first author (AO) searched for government sponsored websites in English speaking countries (including Australia, Canada, Ireland, New Zealand, the UK, and the USA); searched Google for “health information” and “patient information” to identify websites that are frequently accessed for health information; and checked links to other websites on the websites that were identified. On 29 January 2018, AO conducted a final set of searches using the following terms: “health information”, “patient information”, “evidence-based health information”, and “evidence-based patient information”; and these search engines: Google [ 3 ], Bing [ 26 ], DuckDuckGo [ 27 ], and HONsearch patients [ 28 ]. Google and Bing are the two most popular search engines, DuckDuckGo is not affected by your previous search history, and HONsearch searches “trustworthy” health websites. The first 20 hits for each search were screened, and any websites that looked like they might meet our inclusion criteria were checked.
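As a concrete illustration, the screening step described above — take the first 20 hits from each search, then check each distinct website once — can be sketched in a few lines of Python. The log entries below are hypothetical placeholders, not the actual search records from the study:

```python
from datetime import date

SCREEN_LIMIT = 20  # only the first 20 hits of each search were screened

# Hypothetical search-log entries; the engines match the text, but the
# URLs and hit orders are invented placeholders for illustration.
search_log = [
    {"date": date(2018, 1, 29), "engine": "Google",
     "query": "evidence-based health information",
     "hits": ["site-a.org", "site-b.org", "site-a.org/about", "site-c.org"]},
    {"date": date(2018, 1, 29), "engine": "Bing",
     "query": "evidence-based health information",
     "hits": ["site-b.org", "site-d.org"]},
]

def candidates_to_check(log, limit=SCREEN_LIMIT):
    """Collect unique hits from the first `limit` results of each search,
    keeping the order in which each site was first seen."""
    seen, ordered = set(), []
    for entry in log:
        for url in entry["hits"][:limit]:
            if url not in seen:
                seen.add(url)
                ordered.append(url)
    return ordered

print(candidates_to_check(search_log))
# ['site-a.org', 'site-b.org', 'site-a.org/about', 'site-c.org', 'site-d.org']
```

Deduplicating across engines matters here because Google and Bing overlap heavily; each candidate website only needs to be assessed against the inclusion criteria once.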

AO assessed each identified website for inclusion and the second author (EP) checked those judgements using information provided on the websites. In addition, we emailed each excluded website to confirm that our reason for excluding it was correct.

AO collected the following information for each included website:

  • The stated purpose
  • A statement that information about treatment effects is based on systematic reviews
  • Availability of links to the systematic reviews
  • Reporting size of effects
  • Reporting certainty of the evidence; i.e. a judgement using GRADE (Grading of Recommendations Assessment, Development and Evaluation) [ 29 – 31 ] or another formal approach or an informal judgement about how sure we can be about the reported effects
  • Availability of links to ongoing trials
  • Information about how up-to-date information about treatment effects is
  • What other information is provided
  • What tools there are for searching, sorting, and filtering information
  • Use of plain language (i.e. summaries written for patients and the public) and the availability of a glossary

EP checked all of the information that was recorded and the judgements that were made. To inform these judgements, both authors independently searched each included website for eight questions about treatments to assess the ease of finding information (AO on 22 December 2017 and EP on 9 January 2018). We selected the eight questions by searching Google for “common health questions” and selecting the first relevant list that we found (25 Questions About Your Health Answered - Oprah.com). Many of the questions in that list were not about treatment effects and we modified some of the questions with the intention of having a variety of questions for different types of conditions and treatments. Table 1 shows the original question from that list, our question, the conditions, the treatments, and the initial search terms that we used to find information about treatment effects on each website.

Questions about treatments used to assess the included websites

We then independently assessed what was reported about treatment effects, the consistency of reporting, and the advantages and disadvantages of each website. Disagreements were resolved by discussion. Based on these assessments and the information we had collected for each website we suggested how the websites could be improved and provided tips for website users.

For each question, we searched for information using plain language terms without Boolean logic (using the first terms shown for each question in the last column of Table 1). We recorded the number of hits for each search and each relevant summary that we found. We assessed the search as easy if we found relevant information using plain language terms without Boolean logic and the relevant information was one of the first few hits. We assessed searches as hard if we had to use technical terms or Boolean logic, or if we could not find relevant information; and as moderate if finding relevant information required some minor fiddling with the search terms or screening more than a few hits.
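The three-level assessment described above is rule-based, so it can be sketched as a small function. This is purely an illustration; the attribute names are our own, not taken from the study, and "first few hits" is interpreted here as the top three:

```python
def assess_search(found_relevant, used_technical_or_boolean,
                  hits_screened, fiddled_with_terms, top_hits=3):
    """Classify a search as 'easy', 'moderate', or 'hard' using the
    criteria described in the text (attribute names are illustrative)."""
    # Hard: technical terms or Boolean logic were needed,
    # or no relevant information was found at all.
    if not found_relevant or used_technical_or_boolean:
        return "hard"
    # Moderate: minor fiddling with search terms, or screening
    # more than a few hits before finding relevant information.
    if fiddled_with_terms or hits_screened > top_hits:
        return "moderate"
    # Easy: plain-language terms worked and the relevant
    # information was among the first few hits.
    return "easy"
```

A search that succeeds with plain-language terms but only after screening ten hits would, under this reading, be classed as moderate rather than easy.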

For each relevant summary that we found, we recorded whether any information was provided about benefits of the treatment and harms of the treatment, whether quantitative information was provided for at least one outcome, and whether a formal or informal assessment of the certainty of the evidence was provided. We then ranked the three websites for each question based on an overall assessment of how hard it was to find relevant information and the completeness of the information about the effects of the treatments.

We considered 35 websites for inclusion. Of these, 26 were excluded because information about treatment effects was not explicitly based on systematic reviews (Table 2), five were excluded because they were not intended for patients and the public (Table 3), and one was under development (Table 4). Three of the 35 websites met our inclusion criteria: Cochrane Evidence, Informed Health, and PubMed Health (Table 5). Cochrane Evidence and Informed Health produce content, whereas PubMed Health, which was discontinued in October 2018, aggregated content, including content from the first two websites.
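As a quick consistency check, the disposition counts above can be tallied (a sketch only; the figures are taken from the text):

```python
# Websites considered for inclusion, by outcome (counts from the text).
excluded_not_systematic = 26   # not explicitly based on systematic reviews (Table 2)
excluded_not_for_public = 5    # not intended for patients and the public (Table 3)
under_development = 1          # under development (Table 4)
included = 3                   # Cochrane Evidence, Informed Health, PubMed Health (Table 5)

total_considered = (excluded_not_systematic + excluded_not_for_public
                    + under_development + included)
assert total_considered == 35  # matches the 35 websites considered for inclusion
```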

Websites excluded because they are not explicitly based on systematic reviews a

a These websites were excluded because they do not include a description of how information is prepared that includes a statement about using or being based on systematic reviews of research evidence. It is unclear to what extent information about treatment effects on these websites is based on systematic reviews

Websites excluded because they are not intended for patients and the public a

a These websites were excluded because they are not primarily intended for patients and the general public. However, some patients and members of the general public use these databases

Websites under development a

a Last accessed 14 February 2018

Included websites

a The headings used were inconsistent for all three

b We got an error message (“A technical error has occurred. Please try again later.”) when we used AND to limit searches on Informed Health , and no search results when we used quotation marks (e.g. “back pain”). It was possible to use this logic on the other two websites

Cochrane Evidence provides plain language summaries of over 7500 Cochrane Reviews, most of which are systematic reviews of the effects of treatments. The systematic reviews and the plain language summaries are prepared and updated by Cochrane review groups. Cochrane is a global independent network of researchers, professionals, patients, carers, and people interested in health, with over 37,000 contributors from more than 130 countries.

The plain language summaries include links to the full reviews. The full reviews are available in The Cochrane Library, which can be accessed for free in countries that have a national subscription or if the review or an update was published more than one year previously. The headings and content of the plain language summaries are inconsistent. The summaries include some background information, the authors’ conclusions, and links to other summaries that may be of interest. There is variability in the quality of the summaries. Some summaries include pop-up definitions (but not links to longer explanations) for some research and medical terms, and there is a glossary of terms relevant for Cochrane Reviews available on the Cochrane website. The summaries are translated into Chinese, Croatian, Czech, French, German, Japanese, Korean, Malay, Polish, Portuguese, Romanian, Russian, Spanish, Tamil, and Thai. The glossary is only in English.

No other information regarding treatments is provided in Cochrane Evidence besides the plain language summaries of Cochrane Reviews. The Cochrane website, where Cochrane Evidence is found, has other information about the Cochrane Collaboration. Navigation tools for Cochrane Evidence are limited to a simple search for the entire Cochrane website. It is possible to sort findings by relevance, alphabetically, or by date of publication; and to filter the summaries by broad health topics and whether the reviews are new or updated.

Informed Health is the English-language version of the German website Gesundheitsinformation.de. The website is prepared by the Institute for Quality and Efficiency in Health Care (IQWiG) in Germany. IQWiG is a professionally-independent, scientific institute established under the Health Care Reform 2004.

The Informed Health website provides information about treatment effects together with other information for a wide range of topics. The website includes “research summaries” for some but not all treatments. “These are objective, brief summaries of the latest findings on a research question described in the title. They usually summarize the results of studies, for instance the results of one or (rarely) several systematic reviews or IQWiG reports. They also describe the study/studies in more detail and explain how the researchers came to their conclusions.” The website states that they “mainly use systematic reviews of studies to answer questions about the benefits and harms of medical interventions.” Links to systematic reviews are provided when these are used, but the reviews may not be freely available.

All of the research summaries that we examined (Additional file  1 ) included quantitative information about the size of the benefits, and they included frequencies for at least one outcome, but most often only for one outcome. The certainty of the evidence is not reported. All of the information is in plain language, written for patients and the public. There are hyperlinks to background information (but not pop-up definitions). There is a glossary of “medical and scientific” terms that includes primarily medical terms and few research terms.

In addition to information about treatments, Informed Health includes information about symptoms, causes, outlook, diagnosis, everyday life, where to learn more, and explanations (“Extras”) of topics such as how the body works, how treatments work, and types of treatments. Navigation tools for Informed Health include browsing by broad topic areas, an index (A to Z list) and a simple search. Search results can be sorted by relevance, the date information on the website was created, or the date it was updated.

PubMed Health specialized in systematic reviews of clinical effectiveness research. It included plain language summaries and abstracts of Cochrane Reviews; abstracts (technical summaries) of systematic reviews in the Database of Abstracts of Reviews of Effects (DARE) up to 31 March 2015; full texts of reviews from public agencies; information developed by public agencies for consumers and clinicians based on systematic reviews; and methods resources about the best research and statistical techniques for systematic reviews and clinical effectiveness research. PubMed Health was a service provided by the National Center for Biotechnology Information at the U.S. National Library of Medicine. It was discontinued October 31, 2018 “in an effort to consolidate similar resources and make information easier to find”. It included information from over 40,000 systematic reviews from a variety of sources, but plain language summaries were not available for most of those reviews. Links to the systematic reviews were provided, but not all of the reviews were freely available.

Reporting in PubMed Health was inconsistent: headings, reporting of effects, and reporting of the certainty of the evidence all varied. PubMed Health had an extensive glossary (Health A-Z) and background information on drugs. Navigation tools included a simple search. Search results could be sorted by date of publication and filtered by article type (including “Consumer information”), by when information was added to PubMed Health, by content provider (including Cochrane and IQWiG), and by reviews with a quality assessment.

None of the three included websites includes links to ongoing trials, and adverse effects are not consistently reported on any of the websites. All three include information about how up-to-date the information about treatment effects is.

PubMed Health was the easiest website to search, despite the large number of records that it includes. However, we had difficulties searching all three websites. We found information easily in Cochrane Evidence and Informed Health for one of the eight questions in Table 1, and for three of the questions in PubMed Health (Additional file 1). Conversely, it was hard to find information (or we did not find any information) for five questions in Cochrane Evidence, six questions in Informed Health, and three questions in PubMed Health. It was not possible to use Boolean logic when searching Informed Health. This was possible on the other two websites, but none of the three provided any instructions or help for searching.

When we found information, it consistently covered benefits, but only Informed Health consistently reported this information quantitatively in the plain language summaries. Quantitative information was provided in the linked scientific abstracts. None of the websites consistently reported information about harms or the certainty of the evidence, although Cochrane plain language summaries in Cochrane Evidence and PubMed Health frequently reported the certainty of the evidence. When the certainty of the evidence was reported using GRADE or another systematic approach, there was no link to an explanation of what the grade means.

Overall, we were most satisfied with Cochrane Evidence for two questions, with Informed Health for one question, and with PubMed Health for three of our questions. We did not find information about treatment effects on any of the three websites for two questions: “Should I stop using phone, tablet, computer, and TV screens before going to bed (for insomnia)?” and “Should I get my osteoarthritic knee replaced?” Informed Health provided advice for the first question (“For instance, it might help to only listen to relaxing music before going to bed and keep from talking on the phone or playing computer or mobile phone games”), but no reference to research evidence for that advice. We easily found relevant systematic reviews for both of these questions in Epistemonikos (Additional file 1).

We identified three websites for patients and the public that provide free information about treatment effects based on systematic reviews. A fourth, promising website, CureFacts, was under development (Table 4), and was still under development as of February 2019. Twenty-two other websites that provide free information for patients and the public claim to provide trustworthy, evidence-based information. However, it is not possible to know the extent to which the information they provide about treatment effects is based on systematic reviews, making it harder to judge how trustworthy that information is. We considered four websites that provide access to systematic reviews, but none of these is intended for patients and the public (Table 3). Nonetheless, some people may find these useful, particularly Epistemonikos. It includes over 100,000 systematic reviews with the abstracts translated into Arabic, Chinese, Dutch, French, German, Italian, Portuguese, and Spanish. It is aimed at health professionals, researchers, and policymakers, but plain language summaries are not available for most of the reviews. Although it is not intended for patients and the public, it “has been used by well-informed lay people and journalists successfully” (Table 3).

We did not consider databases that are not free, such as Trip Pro, which includes access to over 100,000 systematic reviews; or patient information from web-based medical compendia for clinicians, such as Best Practice, Dynamed, and UpToDate. We also did not consider websites that provide information for patients and the public based on guidelines, such as the UK National Institute for Health and Care Excellence (NICE) guidance for patients; or websites that are limited to specific conditions or types of treatments.

The three websites for patients and the public that explicitly provided information about treatment effects based on systematic reviews were likely to appeal to different people, and their appeal may vary depending on the question being asked. We found that we preferred each of the websites for at least one of the eight questions we used as test cases (Table 1). We found PubMed Health somewhat easier to search, despite the large number of records it includes, and we found both Cochrane plain language summaries and Informed Health research summaries when searching PubMed Health. Simple instructions regarding the use of Boolean logic and quotation marks to limit searches would improve the ease of use of all three websites. For example, the default for Cochrane Evidence appears to be to insert OR between words, resulting in large numbers of irrelevant hits.

All of the websites could be improved by more consistent use of headings and consistent reporting of both benefits and (especially) harms; inclusion of quantitative information about the size of the effects; and information about the certainty of the evidence based on the use of a consistent set of criteria, such as GRADE [ 29 – 31 ], and links to explanations of what the grades mean. Because many systematic reviews, including Cochrane Reviews, do not consistently provide this information, plain language summaries based on systematic reviews cannot always provide this information. However, they can alert users to the absence of trustworthy information about adverse effects, when this is the case, and it is possible to provide an assessment of the certainty of the evidence even when review authors have not done this [ 32 , 33 ].

All three websites provided plain language summaries of systematic reviews and all three had glossaries. However, none of the websites included both pop-up short definitions (which can be quickly accessed and read as scroll overs without having to go to another webpage) and links to longer explanations (that can be easily accessed when needed).

None of the websites included links to ongoing trials. This is something that, for example, NHS Choices does [ 34 ]. This is important because when there is important uncertainty about the effects of treatments, participating in a randomised trial may be the best option for patients [ 35 , 36 ].

We are not aware of any other studies that have attempted to systematically identify and evaluate websites that provide patients and the public with free access to information, based on systematic reviews, about the effects of treatments. There are thousands of websites that provide health information, and we did not systematically screen all of these. Although we believe it is unlikely that there are other websites that meet our inclusion criteria, we did not consider websites for specific conditions or types of interventions, non-English-language websites, or websites that were not freely accessible. Others might want to assess these and other sources of information about treatment effects in future studies.

The evaluation criteria that we used were based on our judgement about what information is important and what is needed to make that information accessible. For example, providing a link to the systematic review enables people to go to the source of information about treatment effects for more information, if they desire. It also makes the basis of the information clear. Information about the size of effects and the certainty of the evidence is essential for making well-informed decisions. Basic search tools are necessary to make it easy to find information on the websites, and summaries that are written in plain language for patients and the public are more likely to be understandable than abstracts written for researchers or health professionals. Consistent headings, content, and use of language make it easier for users to become familiar with the websites and to find and understand information.

Our evaluation was based in part on searching for answers to eight treatment questions (Table 1). The criteria that we used to assess what we found for each question did not require a great deal of judgement. Consequently, there were only minor disagreements in our assessments (Additional file 1), and those were easily resolved. It is uncertain how representative our findings for these questions are of what would be found for other treatment questions, but we believe they provided a fair basis for assessing the websites. Moreover, we sent full drafts of this report to people responsible for each website, and their corrections did not substantially alter our assessments or conclusions.

We did not evaluate the readability of the plain language summaries and, although we described other information that each website provides, we did not evaluate whether the websites provided other information that patients and the public want or need to make informed decisions; for example, information about other treatment alternatives, costs, and people’s experiences with the treatment [ 37 , 38 ]. We also did not evaluate how users of the websites experience them [ 39 ]. All of these are potential areas for future research.

Conclusions

It is possible for patients and the public to access trustworthy information about the effects of treatments based on systematic reviews using two of the three websites included in this review. However, all three of these websites could be improved and made more useful and easier to use by consistently reporting information about the size of both the benefits and harms of treatments and the certainty of the evidence, and by making it easier to find relevant information.

Searching the three websites frequently yielded much irrelevant information. Users can limit searches by using Boolean logic - inserting AND between terms (e.g. for the condition and for the treatment) and quotation marks to indicate that words need to be next to each other; e.g. “back pain”. However, this is unlikely to be obvious to novice users. Some users may want to use sources that are not intended for patients and the public, such as Epistemonikos , if they are unable to find information on one of these websites. They also might want to consider searching for ongoing trials, if there is important uncertainty about the effects of relevant treatments.
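As an illustration of the Boolean narrowing described above, a small query builder can be sketched. This is hypothetical code, not any website's actual syntax; real site search behaviour varies, and as noted Informed Health rejected these operators:

```python
def build_query(condition, treatment):
    """Join a condition and a treatment with AND, quoting multi-word
    phrases so their words are searched adjacently (illustrative only)."""
    def quote(term):
        # Quotation marks indicate that words need to be next to each other.
        return f'"{term}"' if " " in term else term
    return f"{quote(condition)} AND {quote(treatment)}"

# e.g. build_query("back pain", "exercise") -> '"back pain" AND exercise'
```

On a website that defaults to inserting OR between words, a combined query like this can sharply reduce the number of irrelevant hits.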

There are many other websites that claim to provide evidence-based or reliable information about treatments, but it is difficult to assess the reliability of the information about treatment effects provided on those websites since they do not explicitly base that information on systematic reviews.

Additional file

Appendix review of online evidence-based patient info. Assessments of three included websites. Description of data: Search results and assessments of the information found in the three included websites for eight common health questions. (XLSX 29 kb)

Acknowledgements

With support from the James Lind Initiative, Anita Peerson prepared an earlier unpublished version of this review with advice from Iain Chalmers, Douglas Badenoch, Sarah Rosenbaum, and Astrid Austvoll-Dahlgren. We would like to thank the following colleagues for helpful comments on an earlier version of this paper: Astrid Austvoll-Dahlgren, Atle Fretheim, Claire Glenton, Hilda Bastian, Iain Chalmers, Jon Brasey, Karla Soares-Weiser, Marit Johansen, Marita Sporstøl Fønhus, Sarah Rosenbaum, Signe Flottorp.

Not applicable.

Availability of data and materials

Authors’ contributions

AO made all of the initial assessments and wrote the first draft of this report. EP checked all of the assessments and contributed to revisions of this report. Both authors read and approved the final manuscript.

Ethics approval and consent to participate

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Andrew D. Oxman, Email: oxman@online.no.

Elizabeth J. Paulsen, Email: [email protected] .

Texas Surgeon Is Accused of Secretly Denying Liver Transplants

A Houston hospital is investigating whether a doctor altered a transplant list to make his patients ineligible for care. A disproportionate number of them have died while waiting for new organs.

A billboard with a portrait and the phrase, “Dr. Bynon gives new life to transplant patients.”

By Brian M. Rosenthal and Jessica Silver-Greenberg

For decades, Dr. J. Steve Bynon Jr., a transplant surgeon in Texas, gained accolades and national prominence for his work, including by helping to enforce professional standards in the country’s sprawling organ transplant system.

But officials are now investigating allegations that Dr. Bynon was secretly manipulating a government database to make some of his own patients ineligible to receive new livers, potentially depriving them of lifesaving care.

Memorial Hermann-Texas Medical Center in Houston, where Dr. Bynon oversaw both the liver and kidney transplant programs, abruptly shut down those programs in the past week while looking into the allegations.

On Thursday, the medical center, a teaching hospital affiliated with the University of Texas, said in a statement that a doctor in its liver transplant program had admitted to changing patient records. That effectively denied the transplants, the hospital said. An official with knowledge of the investigation identified the physician as Dr. Bynon, who is employed by the University of Texas Health Science Center at Houston and has had a contract to lead Memorial Hermann’s abdominal transplant program since 2011.

It was not clear what could have motivated Dr. Bynon. Reached by phone on Thursday, he referred questions to UTHealth Houston. He did not confirm he had admitted to altering records.

On Friday, after this article was published online, UTHealth Houston released a statement to news outlets defending Dr. Bynon as “an exceptionally talented and caring physician, and a pioneer in abdominal organ transplantation.” The statement said that the survival rates of Dr. Bynon’s patients who received transplants were among the best in the nation. “Our faculty and staff members, including Dr. Bynon, are assisting with the inquiry into Memorial Hermann’s liver transplant program and are committed to addressing and resolving any findings identified by this process,” it said.

Founded in 1925, Memorial Hermann is a major hospital in Houston, but it has a relatively small liver transplant program. Last year, it performed 29 liver transplants, according to federal data, making it one of the smallest programs in Texas.

In recent years, a disproportionate number of Memorial Hermann patients have died while waiting for a liver, data shows. Last year, 14 patients were taken off the center’s waiting list because they either died or became too sick, and its mortality rate for people waiting for a transplant was higher than expected, according to the Scientific Registry of Transplant Recipients, a research group.

This year, as of last month, five patients had died or become too sick to receive a liver transplant, while the hospital had performed three transplants, records show. The investigation is in early stages, and it was unclear if possible changes to the waiting list actually resulted in a patient not receiving a liver. A hospital spokeswoman said the center treated patients who were more severely ill than average.

The U.S. Department of Health and Human Services said in a statement that it was also investigating the allegations. So is the United Network for Organ Sharing, the federal contractor that oversees the country’s organ transplant system.

“We acknowledge the severity of this allegation,” the H.H.S. statement said. “We are working diligently to address this issue with the attention it deserves.”

Officials began investigating after being alerted by a complaint. An analysis then found what the hospital called “irregularities” in how patients were classified on a waiting list for liver transplants. When doctors place a patient on the list, they must identify the types of donors they would consider, including the person’s age and weight.

Hospital officials said they found patients had been listed as accepting only donors with ages and weights that were impossible — for instance, a 300-pound toddler — making them unable to receive any transplant.

Other transplant surgeons said if the list was tampered with, patients would not be aware of changes in their status.

“They’re sitting at home, maybe not traveling, thinking they could get an organ offer any time, but in reality, they’re functionally inactive, and so they’re not going to get that transplant,” said Dr. Sanjay Kulkarni, the vice chair of the ethics committee of the national organ transplant system. “It’s highly unusual, I’ve never heard of it before, and it’s also highly inappropriate.”

The hospital said in its statement that it did not know how many patients were affected by the changes, or when they began. It said the issues affected only the liver transplant program, but the hospital also closed the kidney transplant program because it was led by the same doctor.

Dr. Bynon, 64, has spent his career in abdominal transplants, and is considered one of the early practitioners of advanced liver transplants. He spent nearly 20 years at the University of Alabama at Birmingham before moving to Texas in 2011.

Some former colleagues described Dr. Bynon as off-putting and arrogant, while others called him talented and dedicated.

“In my experience, everything he did was about the patient,” said Dr. Brendan McGuire, the medical director of liver transplants at that Alabama program, who worked with Dr. Bynon for more than a decade. “When he transplanted someone, that person was his patient for life.”

On its LinkedIn page, the University of Texas Health Science Center once featured a photo of a billboard with Dr. Bynon on it. The sign read, “Dr. Bynon gives new life to transplant patients.”

Dr. Bynon also has worked for the national transplant system’s Membership and Professional Standards Committee, which investigates wrongdoing in the system.

Most recently, in December, Dr. Bynon made headlines for performing a kidney transplant for former Lt. Gov. Ben Barnes of Texas.

The closure of the programs at Memorial Hermann has surprised many in the transplant community because it is extremely rare for a program to be suspended over ethical issues.

At the time it shut down its programs, Memorial Hermann had 38 patients on its liver transplant waiting list and 346 patients on its kidney list, according to the hospital.

Officials said they were contacting those patients to help them find new providers.

Roni Caryn Rabin contributed reporting. Susan C. Beachy and Kirsten Noyes contributed research.

An earlier version of this story mischaracterized Dr. Bynon’s involvement in the Membership and Professional Standards Committee. He has worked for the committee but did not serve on it.

Brian M. Rosenthal is an investigative reporter who has worked at The Times since 2017.

Jessica Silver-Greenberg is an investigative reporter writing about big business with a focus on health care. She has been a reporter for more than a decade.

COMMENTS

  1. PubMed

    PubMed® comprises more than 37 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full text content from PubMed Central and publisher web sites. Clipboard, Search History, and several other advanced features are temporarily unavailable. ...

  2. MEDLINE Overview

    MEDLINE Overview. MEDLINE is the National Library of Medicine's (NLM) premier bibliographic database that contains more than 31 million references to journal articles in life sciences with a concentration on biomedicine. MEDLINE is a primary component of PubMed, a literature database developed and maintained by the NLM National Center for ...

  3. MEDLINE

    MEDLINE is the National Library of Medicine's (NLM) premier bibliographic database that contains references to journal articles in life sciences, with a concentration on biomedicine. See the MEDLINE Overview page for more information about MEDLINE.. MEDLINE content is searchable via PubMed and constitutes the primary component of PubMed, a literature database developed and maintained by the ...

  4. JAMA

    JAMA - The Latest Medical Research, Reviews, and Guidelines. Home New Online Issues For Authors. Editor's Choice: AI Tools to Improve Access to Reliable Health Information. Original Investigation Serious Bleeding in Patients With Atrial Fibrillation Using Diltiazem With Apixaban or Rivaroxaban Wayne A. Ray, PhD; Cecilia P. Chung, MD, MPH; C ...

  5. The New England Journal of Medicine

    The New England Journal of Medicine (NEJM) is a weekly general medical journal that publishes new medical research and review articles, and editorial opinion on a wide variety of topics of ...

  6. Systematic Reviews: Medical Literature Databases to search

    At a minimum you need to search MEDLINE, EMBASE, and the Cochrane CENTRAL trials register.This is the recommendation of three medical and public health research organizations: the U.S. Agency for Healthcare Research and Quality (AHRQ), the U.K. Centre for Reviews and Dissemination (CRD), and the International Cochrane Collaboration (Source: Institute of Medicine (2011) Finding What Works in ...

  7. National Library of Medicine

    The National Library of Medicine (NLM) is the world's largest biomedical library and a national resource for health professionals, scientists, and the public.

  8. Literature Search: Databases and Gray Literature

    Gray literature is the term for information that falls outside the mainstream of published journal and monograph literature and is not controlled by commercial publishers. It includes hard-to-find studies, reports, or dissertations; conference abstracts or papers; and governmental or private-sector research.

  9. Users' Guide to the Medical Literature

    October 19, 2021. This Users' Guide to the Medical Literature provides suggestions for understanding guideline methods and recommendations for clinicians seeking direction in evaluating clinical practice guidelines for potential use in their practice.

  10. An Overview of How to Search and Write a Medical Literature Review

    Literature reviews are necessary to learn what is known (and not known) about a topic of interest. In the respiratory care profession, the body of research is enormous, so a method to search the medical literature efficiently is needed. Selecting the correct databases, using Boolean logic operators, and consulting with librarians all help make searching more efficient.
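
The Boolean-logic point can be made concrete with a small helper (purely illustrative; real database platforms differ in exact syntax) that ORs together the synonyms for each concept and ANDs the concepts together:

```python
def boolean_query(concepts: list[list[str]]) -> str:
    """Combine synonym groups: OR within a concept, AND between concepts.

    Multi-word terms are quoted so a platform treats them as phrases.
    """
    def quote(term: str) -> str:
        return f'"{term}"' if " " in term else term

    groups = ["(" + " OR ".join(quote(t) for t in group) + ")" for group in concepts]
    return " AND ".join(groups)

query = boolean_query([
    ["sickle cell disease", "sickle cell anemia"],  # condition synonyms
    ["hydroxyurea", "hydroxycarbamide"],            # intervention synonyms
])
print(query)
# ("sickle cell disease" OR "sickle cell anemia") AND (hydroxyurea OR hydroxycarbamide)
```

ORing synonyms widens recall for each concept; ANDing the concepts narrows the result to records that mention both, which is the usual shape of a systematic search strategy.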

  11. The Literature Review: A Foundation for High-Quality Medical Education

    Purpose and Importance of the Literature Review. An understanding of the current literature is critical for all phases of a research study. Lingard [9] recently invoked the "journal-as-conversation" metaphor as a way of understanding how one's research fits into the larger medical education conversation: as she described it, imagine yourself joining a conversation at a social event.

  12. Performing a literature review

    Literature reviews are most commonly performed to help answer a particular question. While you are at medical school, there will usually be some choice regarding the area you are going to review. Once you have identified a subject area for review, the next step is to formulate a specific research question. This is arguably the most important step.

  13. Literature Reviews

    The different types of literature reviews, including systematic reviews and other evidence synthesis methods. Cook, D. A., & West, C. P. (2012). Conducting systematic reviews in medical education: a stepwise approach. Medical Education, 46(10), 943-952.

  14. Medical literature review: Search or perish

    Abstract. Literature review is a cascading process of searching, reading, analyzing, and summing up material about a specific topic. However, searching the literature is like searching for "a needle in a haystack", and hence has been called "Cinderella" [1]. Skills in, and effective pathways for, searching the literature are therefore essential.

  15. Ten Simple Rules for Writing a Literature Review

    Literature reviews are in great demand in most scientific fields. Their need stems from the ever-increasing output of scientific publications. For example, compared to 1991, in 2008 three, eight, and forty times more papers were indexed in Web of Science on malaria, obesity, and biodiversity, respectively. Given such mountains of papers, scientists cannot be expected to examine in detail every single new paper relevant to their interests.

  16. PDF Doing a Literature Review in Health

    This chapter describes how to undertake a rigorous and thorough review of the literature and is divided into three sections. The first section examines the two main types of review: the narrative and the systematic review. The second section describes some techniques for undertaking a comprehensive search.

  17. Literature Reviews

    A typology of reviews: an analysis of 14 review types and associated methodologies | Health Information and Libraries Journal, 2009. Conceptual recommendations for selecting the most appropriate knowledge synthesis method to answer research questions related to complex evidence | Journal of Clinical Epidemiology, 2016.

  18. Writing in the Health Sciences: Research and Lit Reviews

    PubMed - The premier medical database for review articles in medicine, nursing, healthcare, and other related biomedical disciplines. PubMed contains tens of millions of citations and can be navigated through multiple database capabilities and searching strategies. CINAHL Ultimate - Offers comprehensive coverage of the health science literature. CINAHL is particularly useful for those researching nursing and allied health topics.

  19. Key Steps in a Literature Review

    The 5 key steps below are most relevant to narrative reviews; systematic reviews include the additional step of using a standardized scoring system to assess the quality of each article. The process begins by identifying a specific unresolved research question relevant to medicine and then identifying the relevant studies.

  20. Systematic Reviews

    Literature Searching for a Systematic Review. IOM Standards for Systematic Reviews, Standard 3.1: Conduct a comprehensive systematic search for evidence. The goal of a systematic review search is to maximize recall and precision while keeping results manageable. Recall (sensitivity) is the proportion of all relevant reports in existence that the search identifies; precision is the proportion of retrieved reports that are relevant.
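
Those two definitions can be made concrete with a toy calculation (the numbers are invented):

```python
def recall(relevant_retrieved: int, relevant_in_existence: int) -> float:
    """Recall (sensitivity): share of all relevant reports the search found."""
    return relevant_retrieved / relevant_in_existence

def precision(relevant_retrieved: int, total_retrieved: int) -> float:
    """Precision: share of retrieved reports that are actually relevant."""
    return relevant_retrieved / total_retrieved

# Hypothetical search: 1,000 records retrieved, 80 of them relevant,
# out of 100 relevant reports known to exist.
print(recall(80, 100))      # 0.8
print(precision(80, 1000))  # 0.08
```

A comprehensive systematic search deliberately trades precision for recall, which is why screening thousands of records to find a handful of eligible studies is common.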

  21. Guidance to best tools and practices for systematic reviews

    The most popular registry for systematic reviews, the International Prospective Register of Systematic Reviews (PROSPERO), for example, only registers reviews that report on an outcome with direct relevance to human health. The PROSPERO record documents protocols for all types of reviews except literature and scoping reviews.

  22. Literature Reviews for Medical Devices: 6 Expert Tips

    Catarina Carrão, freelance medical writer on Kolabtree, outlines the importance of literature reviews for medical devices and best practices to follow, including appraisal of the clinical data, analysis and conclusions generated from the clinical data, informatic tools, and process flow.

  23. Journal of Medical Internet Research

    Objective: This scoping review was conducted to explore the use of mobile phone apps for mental health responses to natural disasters and to identify gaps in the literature. Methods: We identified relevant keywords and subject headings and conducted comprehensive searches in 6 electronic databases.

  26. Who can you trust? A review of free online sources of "trustworthy" information

    "Content is reviewed by a medical review panel of family doctors to ensure that the information: • Is medically accurate, complete, and useful ... Best-available source materials vary by topic and may include published medical literature, evidence-based guidelines, or a Mayo Clinic physician or scientist who has distinct interest, training ...
