Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes. Revised on 20 March 2023.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Introduction
  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

Step 1: Consider your aims and approach

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnography, grounded theory, and phenomenology. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
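To make the distinction concrete, here is a minimal sketch of simple random sampling – the most basic probability method – in Python. The population of 2,000 students and the sample size of 100 are hypothetical; in practice you would draw from a real sampling frame.

```python
import random

# Hypothetical sampling frame: a complete list of the defined population
population = [f"student_{i}" for i in range(1, 2001)]

random.seed(42)  # fixing the seed makes the selection reproducible

# Simple random sampling: every individual has an equal chance of being chosen
sample = random.sample(population, k=100)

print(len(sample))   # 100
print(sample[:3])    # the first three selected IDs
```

Because every member of the frame has an equal, known chance of selection, results from a sample like this can be generalised to the population with quantifiable uncertainty – the property non-probability methods lack.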

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.
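As a sketch of how working with secondary data often looks in practice, the snippet below loads a hypothetical dataset with the pandas library, inspects which variables the original researchers measured, and narrows the data to a subpopulation of interest. The file name and the age column are illustrative assumptions, not a real dataset.

```python
import pandas as pd

# Hypothetical secondary dataset, e.g. a CSV exported from a survey data portal
df = pd.read_csv("national_survey_2020.csv")

# Inspect which variables the original researchers measured
print(df.columns.tolist())
print(df.describe())

# Restrict the data to the subpopulation relevant to your own research question
young_adults = df[df["age"].between(18, 25)]
print(len(young_adults))
```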

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity have already been established.
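As a toy example, an abstract concept like satisfaction might be operationalised as the average of several questionnaire items. The item names and scores below are invented purely for illustration.

```python
# Hypothetical operationalisation of "satisfaction": the mean of three
# 5-point Likert items from a questionnaire (higher = more satisfied)
responses = {
    "q1_would_recommend": 4,
    "q2_would_repurchase": 5,
    "q3_overall_satisfaction": 4,
}

satisfaction_score = sum(responses.values()) / len(responses)
print(round(satisfaction_score, 2))  # 4.33 on a 1-5 scale
```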

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
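If you pilot a new multi-item questionnaire, one common reliability check is Cronbach’s alpha, a measure of internal consistency. The sketch below, using only NumPy, is a minimal illustration on invented pilot data, not a full psychometric validation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for a participants x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                # number of items
    item_variances = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical pilot data: 5 participants answering a 4-item questionnaire (1-5 scale)
pilot = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(pilot), 2))  # 0.94 for this toy data; ~0.7+ is often taken as acceptable
```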

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.
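One common safeguard for sensitive data is to pseudonymise identifiers before analysis, keeping any key that links codes back to people separate from the research data. A minimal sketch, with an invented identifier and salt:

```python
import hashlib

def pseudonymise(participant_id: str, salt: str) -> str:
    """Replace a direct identifier with a stable, hard-to-reverse code."""
    digest = hashlib.sha256((salt + participant_id).encode("utf-8")).hexdigest()
    return digest[:10]

# Hypothetical identifier; the salt should be stored separately and kept secret
code = pseudonymise("jane.doe@example.com", salt="project-secret")
print(code)  # the same input and salt always yield the same code
```

Note that hashed codes are pseudonymous rather than fully anonymous, so access to the salt and any linkage files still needs to be controlled.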

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
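A minimal sketch of these three summaries, using only Python’s standard library; the test scores are invented for illustration.

```python
import statistics
from collections import Counter

# Hypothetical test scores from a sample of 12 participants
scores = [72, 85, 78, 90, 85, 66, 78, 85, 92, 70, 78, 81]

print(Counter(scores))           # distribution: how often each score occurs
print(statistics.mean(scores))   # central tendency: the average score
print(statistics.stdev(scores))  # variability: sample standard deviation
```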

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
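For illustration, the sketch below runs one comparison test and one correlation test with SciPy. All the data are invented, and in a real study you would first check each test’s assumptions (e.g., normality, independence).

```python
from scipy import stats

# Hypothetical data: test scores for two independent groups
group_a = [72, 85, 78, 90, 85, 66, 78, 85]
group_b = [64, 70, 68, 75, 72, 61, 69, 71]

# Comparison test: do the group means differ?
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Correlation test: are two variables associated?
hours_studied = [2, 4, 5, 6, 7, 8, 9, 10]
r, p = stats.pearsonr(hours_studied, group_a)
print(f"r = {r:.2f}, p = {p:.3f}")
```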

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question.

  • If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods.
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables, use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


How to write a research study protocol

Julien Al Shakarchi. How to write a research study protocol. Journal of Surgical Protocols and Research Methodologies, Volume 2022, Issue 1, January 2022, snab008. https://doi.org/10.1093/jsprm/snab008

Abstract

A study protocol is an important document that specifies the research plan for a clinical study. Many funders such as the NHS Health Research Authority encourage researchers to publish their study protocols to create a record of the methodology and reduce duplication of research effort. In this paper, we will describe how to write a research study protocol.

A study protocol is an essential part of a research project. It describes the study in detail to allow all members of the team to know and adhere to the steps of the methodology. Most funders, such as the NHS Health Research Authority in the United Kingdom, encourage researchers to publish their study protocols to create a record of the methodology, help with publication of the study and reduce duplication of research effort. In this paper, we will explain how to write a research protocol by describing what should be included.

Contents of a research study protocol

Introduction

The introduction is vital in setting the need for the planned research and the context of the current evidence. It should be supported by a background to the topic with appropriate references to the literature. A thorough review of the available evidence is expected to document the need for the planned research. This should be followed by a brief description of the study and the target population. A clear explanation for the rationale of the project is also expected to describe the research question and justify the need of the study.

Methods and analysis

A suitable study design and methodology should be chosen to reflect the aims of the research. This section should explain the study design: single centre or multicentre, retrospective or prospective, controlled or uncontrolled, randomised or not, and observational or experimental. Efforts should be made to explain why that particular design has been chosen. The studied population should be clearly defined with inclusion and exclusion criteria. These criteria will define the characteristics of the population the study is proposing to investigate and therefore outline the applicability to the reader. The size of the sample should be calculated with a power calculation if possible.
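For example, with the statsmodels library you can estimate the sample size needed for a two-arm comparison. This is a minimal sketch; the effect size, significance level, and power below are illustrative planning values, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: expected effect size (Cohen's d), alpha, power
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # participants needed per arm (~64 for these values)
```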

The protocol should describe the screening process: how, when and where patients will be recruited. In the setting of a multicentre study, each participating unit should adhere to the same recruiting model, or any differences should be described in the protocol. Informed consent must be obtained before any individual participates in the study. The protocol should fully describe the process of gaining informed consent, which should include a patient information sheet and an assessment of the participant’s capacity.

The intervention should be described in sufficient detail to allow an external individual or group to replicate the study. The differences in any changes of routine care should be explained. The primary and secondary outcomes should be clearly defined and an explanation of their clinical relevance is recommended. Data collection methods should be described in detail as well as where the data will be kept secured. Analysis of the data should be explained with clear statistical methods. There should also be plans on how any reported adverse events and other unintended effects of trial interventions or trial conduct will be reported, collected and managed.

Ethics and dissemination

A clear explanation of the risks and benefits to the participants should be included, as well as a discussion of any specific ethical considerations. The protocol should clearly state the approvals the research has gained; the minimum expected would be ethical and local research approvals. For multicentre studies, the protocol should also include a statement of how it meets the requirements to gain approval to conduct the study at each proposed site.

It is essential to comment on how personal information about potential and enrolled participants will be collected, shared and maintained in order to protect confidentiality. This part of the protocol should also state who owns the data arising from the study and for how long the data will be stored. It should explain that on completion of the study, the data will be analysed and a final study report will be written. We would advise explaining whether there are any plans to notify the participants of the outcome of the study, either by provision of the publication or via another form of communication.

The criteria for authorship of any publication should be transparent and fair, and should be described in this section of the protocol. Doing so will help resolve any issues arising at the publication stage.

Funding statement

It is important to explain who the sponsors and funders of the study are, and to clarify the involvement and potential influence of each party. The sponsor is defined as the institution or organisation assuming overall responsibility for the study. Identification of the study sponsor provides transparency and accountability. The protocol should explicitly outline the roles and responsibilities of any funder(s) in study design, data analysis and interpretation, manuscript writing and dissemination of results. Any competing interests of the investigators should also be stated in this section.

A study protocol is an important document that specifies the research plan for a clinical study. It should be written in detail, and researchers should aim to publish their study protocols, as many funders encourage this. The SPIRIT 2013 statement provides a useful checklist of what should be included in a research protocol [1]. In this paper, we have explained a straightforward approach to writing a research study protocol.

Conflict of interest statement

None declared.

Chan A-W, Tetzlaff JM, Gøtzsche PC, Altman DG, Mann H, Berlin J, et al. SPIRIT 2013 explanation and elaboration: guidance for protocols of clinical trials. BMJ 2013;346:e7586.


Understanding Research Ethics

Sarah Cuschieri, Department of Anatomy, Faculty of Medicine and Surgery, University of Malta, Msida, Malta

First Online: 22 April 2022

As a researcher, whatever your career stage, you need to understand and practice good research ethics. Moral and ethical principles are requisite in research to ensure no deception or harm comes to participants, the scientific community, or society. Failure to follow such principles leads to research misconduct, in which case the researcher faces repercussions ranging from withdrawal of an article from publication to potential job loss. This chapter describes the various types of research misconduct that you should be aware of, i.e., data fabrication and falsification, plagiarism, research bias, data integrity, and researcher and funder conflicts of interest. A sound comprehension of research ethics will take you a long way in your career.


NIH Clinical Research Trials and You

The NIH Clinical Trials and You website is a resource for people who want to learn more about clinical trials. By expanding the below questions, you can read answers to common questions about taking part in a clinical trial. 

What are clinical trials and why do people participate?

Clinical research is medical research that involves people like you. When you volunteer to take part in clinical research, you help doctors and researchers learn more about disease and improve health care for people in the future. Clinical research includes all research that involves people.  Types of clinical research include:


  • Epidemiology, which improves the understanding of a disease by studying patterns, causes, and effects of health and disease in specific groups.
  • Behavioral, which improves the understanding of human behavior and how it relates to health and disease.
  • Health services, which looks at how people access health care providers and health care services, how much care costs, and what happens to patients as a result of this care.
  • Clinical trials, which evaluate the effects of an intervention on health outcomes.

What are clinical trials and why would I want to take part?

Clinical trials are part of clinical research and at the heart of all medical advances. Clinical trials look at new ways to prevent, detect, or treat disease. Clinical trials can study:

  • New drugs or new combinations of drugs
  • New ways of doing surgery
  • New medical devices
  • New ways to use existing treatments
  • New ways to change behaviors to improve health
  • New ways to improve the quality of life for people with acute or chronic illnesses.

The goal of clinical trials is to determine if these treatment, prevention, and behavior approaches are safe and effective. People take part in clinical trials for many reasons. Healthy volunteers say they take part to help others and to contribute to moving science forward. People with an illness or disease also take part to help others, but also to possibly receive the newest treatment and to have added (or extra) care and attention from the clinical trial staff. Clinical trials offer hope for many people and a chance to help researchers find better treatments for others in the future.

Why are diversity and inclusion important in clinical trials?

People may experience the same disease differently. It’s essential that clinical trials include people with a variety of lived experiences and living conditions, as well as characteristics like race and ethnicity, age, sex, and sexual orientation, so that all communities benefit from scientific advances.

See Diversity & Inclusion in Clinical Trials for more information.

How does the research process work?

The idea for a clinical trial often starts in the lab. After researchers test new treatments or procedures in the lab and in animals, the most promising treatments are moved into clinical trials. As new treatments move through a series of steps called phases, more information is gained about the treatment, its risks, and its effectiveness.

What are clinical trial protocols?

Clinical trials follow a plan known as a protocol. The protocol is carefully designed to balance the potential benefits and risks to participants, and answer specific research questions. A protocol describes the following:

  • The goal of the study
  • Who is eligible to take part in the trial
  • Protections against risks to participants
  • Details about tests, procedures, and treatments
  • How long the trial is expected to last
  • What information will be gathered

A clinical trial is led by a principal investigator (PI). Members of the research team regularly monitor the participants’ health to determine the study’s safety and effectiveness.

What is an Institutional Review Board?

Most, but not all, clinical trials in the United States are approved and monitored by an Institutional Review Board (IRB) to ensure that the risks are reduced and are outweighed by potential benefits. IRBs are committees that are responsible for reviewing research in order to protect the rights and safety of people who take part in research, both before the research starts and as it proceeds. You should ask the sponsor or research coordinator whether the research you are thinking about joining was reviewed by an IRB.

What is a clinical trial sponsor?

Clinical trial sponsors may be people, institutions, companies, government agencies, or other organizations that are responsible for initiating, managing or financing the clinical trial, but do not conduct the research.

What is informed consent?

Informed consent is the process of providing you with key information about a research study before you decide whether to accept the offer to take part. The process of informed consent continues throughout the study. To help you decide whether to take part, members of the research team explain the details of the study. If you do not understand English, a translator or interpreter may be provided. The research team provides an informed consent document that includes details about the study, such as its purpose, how long it’s expected to last, tests or procedures that will be done as part of the research, and who to contact for further information. The informed consent document also explains risks and potential benefits. You can then decide whether to sign the document. Taking part in a clinical trial is voluntary and you can leave the study at any time.

What are the types of clinical trials?

There are different types of clinical trials, and researchers do different kinds of studies for different purposes:

  • Prevention trials look for better ways to prevent a disease in people who have never had the disease or to prevent the disease from returning. Approaches may include medicines, vaccines, or lifestyle changes.
  • Screening trials test new ways for detecting diseases or health conditions.
  • Diagnostic trials study or compare tests or procedures for diagnosing a particular disease or condition.
  • Treatment trials test new treatments, new combinations of drugs, or new approaches to surgery or radiation therapy.
  • Behavioral trials evaluate or compare ways to promote behavioral changes designed to improve health.
  • Quality of life trials (or supportive care trials) explore and measure ways to improve the comfort and quality of life of people with conditions or illnesses.

What are the phases of clinical trials?

Clinical trials are conducted in a series of steps called “phases.” Each phase has a different purpose and helps researchers answer different questions.

  • Phase I trials : Researchers test a drug or treatment in a small group of people (20–80) for the first time. The purpose is to study the drug or treatment to learn about safety and identify side effects.
  • Phase II trials : The new drug or treatment is given to a larger group of people (100–300) to determine its effectiveness and to further study its safety.
  • Phase III trials : The new drug or treatment is given to large groups of people (1,000–3,000) to confirm its effectiveness, monitor side effects, compare it with standard or similar treatments, and collect information that will allow the new drug or treatment to be used safely.
  • Phase IV trials : After a drug is approved by the FDA and made available to the public, researchers track its safety in the general population, seeking more information about a drug or treatment’s benefits, and optimal use.

What do the terms placebo, randomization, and blinded mean in clinical trials?

In clinical trials that compare a new product or therapy with another that already exists, researchers try to determine if the new one is as good, or better than, the existing one. In some studies, you may be assigned to receive a placebo (an inactive product that resembles the test product, but without its treatment value).

Comparing a new product with a placebo can be the fastest and most reliable way to show the new product’s effectiveness. However, placebos are not used if you would be put at risk — particularly in the study of treatments for serious illnesses — by not having effective therapy. You will be told if placebos are used in the study before entering a trial.

Randomization is the process by which treatments are assigned to participants by chance rather than by choice. This is done to avoid any bias in assigning volunteers to get one treatment or another. The effects of each treatment are compared at specific points during a trial. If one treatment is found superior, the trial is stopped so that as many volunteers as possible receive the more beneficial treatment.
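As a toy illustration of the idea only (real trials use pre-generated allocation sequences, often with blocking and stratification, managed independently of the study team), random 1:1 assignment can be sketched in a few lines of Python. The participant IDs are invented.

```python
import random

# Hypothetical enrolled volunteers
participants = [f"P{i:03d}" for i in range(1, 9)]

random.seed(7)  # a fixed seed only so the allocation can be reproduced and audited
shuffled = random.sample(participants, k=len(participants))

# 1:1 allocation: first half to the new treatment, second half to the comparator
half = len(shuffled) // 2
assignments = {p: "treatment" for p in shuffled[:half]}
assignments.update({p: "control" for p in shuffled[half:]})

for p in sorted(assignments):
    print(p, assignments[p])
```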

" Blinded " (or " masked ") studies are designed to prevent members of the research team and study participants from influencing the results. Blinding allows the collection of scientifically accurate data. In single-blind (" single-masked ") studies, you are not told what is being given, but the research team knows. In a double-blind study, neither you nor the research team are told what you are given; only the pharmacist knows. Members of the research team are not told which participants are receiving which treatment, in order to reduce bias. If medically necessary, however, it is always possible to find out which treatment you are receiving.

Who takes part in clinical trials?

Many different types of people take part in clinical trials. Some are healthy, while others may have illnesses. Research procedures with healthy volunteers are designed to develop new knowledge, not to provide direct benefit to those taking part. Healthy volunteers have always played an important role in research.

Healthy volunteers are needed for several reasons. When developing a new technique, such as a blood test or imaging device, healthy volunteers help define the limits of "normal." These volunteers are the baseline against which patient groups are compared and are often matched to patients on factors such as age, gender, or family relationship. They receive the same tests, procedures, or drugs the patient group receives. Researchers learn about the disease process by comparing the patient group to the healthy volunteers.

Factors like how much of your time is needed, the discomfort you may feel, or the risk involved depend on the trial. While some trials require minimal amounts of time and effort, other studies may require a major commitment of your time and effort, and may involve some discomfort. The research procedure(s) may also carry some risk. The informed consent process for healthy volunteers includes a detailed discussion of the study's procedures and tests and their risks.

A patient volunteer has a known health problem and takes part in research to better understand, diagnose, or treat that disease or condition. Research with a patient volunteer helps develop new knowledge. Depending on the stage of knowledge about the disease or condition, these procedures may or may not benefit the study participants.

Patients may volunteer for studies similar to those in which healthy volunteers take part. These studies involve drugs, devices, or treatments designed to prevent or treat disease. Although these studies may provide direct benefit to patient volunteers, the main aim is to prove, by scientific means, the effects and limitations of the experimental treatment. Therefore, some patient groups may serve as a baseline for comparison by not taking the test drug, or by receiving test doses of the drug large enough only to show that it is present, but not at a level that can treat the condition.

Researchers follow clinical trials guidelines when deciding who can participate in a study. These guidelines are called Inclusion/Exclusion Criteria. Factors that allow you to take part in a clinical trial are called "inclusion criteria." Those that exclude or prevent participation are "exclusion criteria." These criteria are based on factors such as age, gender, the type and stage of a disease, treatment history, and other medical conditions. Before joining a clinical trial, you must provide information that allows the research team to determine whether or not you can take part in the study safely. Some research studies seek participants with illnesses or conditions to be studied in the clinical trial, while others need healthy volunteers. Inclusion and exclusion criteria are not used to reject people personally. Instead, the criteria are used to identify appropriate participants and keep them safe, and to help ensure that researchers can find new information they need.

What do I need to know if I am thinking about taking part in a clinical trial?


Risks and potential benefits

Clinical trials may involve risk, as can routine medical care and the activities of daily living. When weighing the risks of research, you can think about these important factors:

  • The possible harms that could result from taking part in the study
  • The level of harm
  • The chance of any harm occurring

Most clinical trials pose the risk of minor discomfort, which lasts only a short time. However, some study participants experience complications that require medical attention. In rare cases, participants have been seriously injured or have died of complications resulting from their participation in trials of experimental treatments. The specific risks associated with a research protocol are described in detail in the informed consent document, which participants are asked to consider and sign before participating in research. Also, a member of the research team will explain the study and answer any questions about the study. Before deciding to participate, carefully consider risks and possible benefits.

Potential benefits

Well-designed and well-executed clinical trials provide the best approach for you to:

  • Help others by contributing to knowledge about new treatments or procedures.
  • Gain access to new research treatments before they are widely available.
  • Receive regular and careful medical attention from a research team that includes doctors and other health professionals.

Risks to taking part in clinical trials include the following:

  • There may be unpleasant, serious, or even life-threatening effects of experimental treatment.
  • The study may require more time and attention than standard treatment would, including visits to the study site, more blood tests, more procedures, hospital stays, or complex dosage schedules.

What questions should I ask if offered a clinical trial?

If you are thinking about taking part in a clinical trial, you should feel free to ask any questions or bring up any issues concerning the trial at any time. The following suggestions may give you some ideas as you think about your own questions.

  • What is the purpose of the study?
  • Why do researchers think the approach may be effective?
  • Who will fund the study?
  • Who has reviewed and approved the study?
  • How are study results and safety of participants being monitored?
  • How long will the study last?
  • What will my responsibilities be if I take part?
  • Who will tell me about the results of the study and how will I be informed?

Risks and possible benefits

  • What are my possible short-term benefits?
  • What are my possible long-term benefits?
  • What are my short-term risks, and side effects?
  • What are my long-term risks?
  • What other options are available?
  • How do the risks and possible benefits of this trial compare with those options?

Participation and care

  • What kinds of therapies, procedures and/or tests will I have during the trial?
  • Will they hurt, and if so, for how long?
  • How do the tests in the study compare with those I would have outside of the trial?
  • Will I be able to take my regular medications while taking part in the clinical trial?
  • Where will I have my medical care?
  • Who will be in charge of my care?

Personal issues

  • How could being in this study affect my daily life?
  • Can I talk to other people in the study?

Cost issues

  • Will I have to pay for any part of the trial such as tests or the study drug?
  • If so, what will the charges likely be?
  • What is my health insurance likely to cover?
  • Who can help answer any questions from my insurance company or health plan?
  • Will there be any travel or child care costs that I need to consider while I am in the trial?

Tips for asking your doctor about trials

  • Consider taking a family member or friend along for support and for help in asking questions or recording answers.
  • Plan what to ask — but don't hesitate to ask any new questions.
  • Write down questions in advance to remember them all.
  • Write down the answers so that they’re available when needed.
  • Ask about bringing a tape recorder to make a taped record of what's said (even if you write down answers).

This information courtesy of Cancer.gov.

How is my safety protected?


Ethical guidelines

The goal of clinical research is to develop knowledge that improves human health or increases understanding of human biology. People who take part in clinical research make it possible for this to occur. The path to finding out if a new drug is safe or effective is to test it on patients in clinical trials. The purpose of ethical guidelines is both to protect patients and healthy volunteers, and to preserve the integrity of the science.

Informed consent

Informed consent is the process of learning the key facts about a clinical trial before deciding whether to participate. The process of providing information to participants continues throughout the study. To help you decide whether to take part, members of the research team explain the study. The research team provides an informed consent document, which includes such details about the study as its purpose, duration, required procedures, and who to contact for various purposes. The informed consent document also explains risks and potential benefits.

If you decide to enroll in the trial, you will need to sign the informed consent document. You are free to withdraw from the study at any time.

Most, but not all, clinical trials in the United States are approved and monitored by an Institutional Review Board (IRB) to ensure that the risks are minimal when compared with potential benefits. An IRB is an independent committee that consists of physicians, statisticians, and members of the community who ensure that clinical trials are ethical and that the rights of participants are protected. You should ask the sponsor or research coordinator whether the research you are considering participating in was reviewed by an IRB.

Further reading

For more information about research protections, see:

  • Office of Human Research Protection
  • Children's Assent to Clinical Trial Participation

For more information on participants’ privacy and confidentiality, see:

  • HIPAA Privacy Rule
  • The Food and Drug Administration, FDA’s Drug Review Process: Ensuring Drugs Are Safe and Effective

For more information about research protections, see: About Research Participation

What happens after a clinical trial is completed?

After a clinical trial is completed, the researchers carefully examine information collected during the study before making decisions about the meaning of the findings and about the need for further testing. After a phase I or II trial, the researchers decide whether to move on to the next phase or to stop testing the treatment or procedure because it was unsafe or not effective. When a phase III trial is completed, the researchers examine the information and decide whether the results have medical importance.

Results from clinical trials are often published in peer-reviewed scientific journals. Peer review is a process by which experts review the report before it is published to ensure that the analysis and conclusions are sound. If the results are particularly important, they may be featured in the news, and discussed at scientific meetings and by patient advocacy groups before or after they are published in a scientific journal. Once a new approach has been proven safe and effective in a clinical trial, it may become a new standard of medical practice.

Ask the research team members if the study results have been or will be published. Published study results are also available by searching for the study's official name or Protocol ID number in the National Library of Medicine's PubMed® database .

How does clinical research make a difference to me and my family?


Only through clinical research can we gain insights and answers about the safety and effectiveness of treatments and procedures. Groundbreaking scientific advances in the present and the past were possible only because of participation of volunteers, both healthy and those with an illness, in clinical research. Clinical research requires complex and rigorous testing in collaboration with communities that are affected by the disease. As research opens new doors to finding ways to diagnose, prevent, treat, or cure disease and disability, clinical trial participation is essential to help us find the answers.


NIH Policies & Guidelines and Other Federal Regulations for Clinical Research

The NIH and other federal agencies have developed policies, regulations, and guidelines for investigators to follow for conducting safe, ethical, and high-quality clinical research. This page provides information that includes but is not limited to federal and NIH human subjects research policies and guidelines for monitoring clinical research, education and training for investigators, and privacy and protecting confidentiality. For further guidance or questions, reach out to the NIAMS Clinical Management Team at [email protected] .

NIH Human Subjects Policy and Guidance


The NIH has policies that govern the conduct of studies that involve human subjects. We encourage you to review the following guidelines for human subjects research and policies for inclusion of women, children, and individuals across the lifespan in studies. Additionally, this section contains information about the single Institutional Review Board (sIRB) policy and requirements for registering clinical trials on ClinicalTrials.gov.

  • NIH Human Subjects Research Policies
  • NIH Listing of Select Human Subjects Policy Statement Notices
  • NIH Clinical Research Policy 
  • Removal of the Requirement for IRB Review of NIH Grant Applications Contract
  • NIH Policy on the Dissemination of NIH-Funded Clinical Trial Information 
  • Requirements for Registering Clinical Trials into ClinicalTrials.gov  
  • Steps to Compliance for NIH awardees
  • NIH Grant Application and Proposal Considerations for Human Subjects Research
  • Human Subjects System (HSS)
  • Annotated Forms Set for NIH Grant Applications-FORMS-F-Series (Human Subjects on Page 32)
  • NIH Inclusion Across the Lifespan Policy  
  • NIH Policy and Guidelines on the Inclusion of Women and Minorities as Subjects in Clinical Research
  • Single IRB (sIRB) Policy for Multi-site Research
  • Frequently Asked Questions (FAQs), General Questions about Human Subjects

Policies and Guidelines for Monitoring Clinical Research

Review the NIH and other federal agency policies for data and safety monitoring in the conduct of clinical trials to ensure the safety of research participants and the appropriate and ethical conduct of the study. Learn the NIAMS requirements and guidelines for reportable events, as well as for reviewing and reporting unanticipated problems involving risks to human subjects or others and adverse events.

  • NIH Policy for Data and Safety Monitoring – June 1998
  • Further Guidance on Data and Safety Monitoring for Phase I and II Clinical Trials – June 2000
  • NIAMS Data and Safety Monitoring Guidelines and Policies
  • NIAMS Safety Reporting Assessment Flowchart
  • Guidance on Reporting Incidents to Office for Human Research Protections  
  • FDA Guidance for Clinical Trial Data Monitoring Committees – March 2006

Human Subjects Education, Training and Resources for Investigators Conducting Clinical Research 

NIH investigators and those involved with conducting NIH-supported clinical research are expected to be trained and to maintain up-to-date certification in human subjects protection education and good clinical practice (GCP). The following resources will help investigators understand the education and training requirements and gain knowledge in the various topics related to the safe and ethical conduct of human subjects research.

  • Policy on Good Clinical Practice Training for NIH Awardees Involved in NIH-funded Clinical Trials
  • NIH Human Subjects Protections Training & Resources
  • Training Resources in the Responsible Conduct of Research (RCR) – HHS ORI 
  • CITI Program Training & Resources 
  • National Institute of Allergy and Infectious Diseases (NIAID) GCP Learning Center
  • National Drug Abuse Treatment Clinical Trials Network (NDAT CTN) GCP Course
  • Society of Behavioral Medicine GCP Training for Social and Behavioral Research
  • NIH Frequently Asked Questions (FAQs) on Human Subjects Education

Privacy and Confidentiality

Learn more about the policies and guidance for ensuring the confidentiality of individuals who participate in clinical research studies.

  • The Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule 
  • HIPAA Administrative Simplification Statute and Rules
  • Impact of the HIPAA Privacy Rule on NIH Processes
  • NIH Certificates of Confidentiality (CoC) - Human Subjects

OHRP and General Human Subjects Regulations

Learn the procedures investigators must follow in order to protect human subjects who participate in clinical research studies. 

  • Title 45 Code of Federal Regulations Part 46 – Protection of Human Subjects   
  • 2020 Edition of International Compilation of Human Research Standards
  • OHRP Policy and Guidance Index
  • Belmont Report 1979 – Ethical Principles and Guidelines for the Protection of Human Subjects of Research
  • International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH): Regulatory Guidance 
  • ICH Guidance for Industry: E6 (R2) Good Clinical Practice

U.S. Food and Drug Administration (FDA) Guidelines for Conduct of Clinical Trials

Understand the FDA’s policies and guidance for the conduct of clinical trials as they relate to drugs, devices, and biologics.

  • Title 21 Code of Federal Regulations – Food and Drugs
  • FDA Clinical Trial Guidance Documents Directory  
  • Information for Clinical Investigators-Drugs (CDER)
  • Information for Clinical Investigators-Devices (CDRH)
  • Information for Clinical Investigators-Biologic (CBER)
  • FDA Encourages More Participation, Diversity in Clinical Trials
  • Notice to NIH Grantees Regarding Letters or Notices from the FDA

Additional Resources:  

  • Collection of Race and Ethnicity Data in Clinical Trials
  • Enrichment Strategies for Clinical Trials to Support Approval of Human Drugs and Biological Product
  • Investigational New Drug Applications (INDs) - Determining Whether Human Research Studies Can Be Conducted Without an IND
  • Financial Disclosure by Clinical Investigators
  • IRB Responsibilities for Reviewing the Qualifications of Investigators, Adequacy of Research Sites, and the Determination of Whether an IND/IDE is Needed
  • FDA and OHRP Final Guidance: Use of electronic Informed Consent & Questions and Answers
  • Elaboration of Definitions of Responsible Party and Applicable Clinical Trial

Gene Therapy, Stem Cells, and Fetal Tissue

Learn the policies and guidelines for conducting clinical research studies that involve gene therapy, stem cells, or fetal tissue.

  • NIH Stem Cell Research
  • NIH Biosafety, Biosecurity and Emerging Biotechnology 
  • New Initiatives to Protect Participants in Gene Therapy Trials
  • NIH Biosafety Guidelines
  • Approval Process for the Use of Human Pluripotent Stem Cells in NIH-Supported Research
  • Informed Consent on Use of Human Fetal Tissue
  • Changes to Requirements on Human Fetal Tissue Research 
  • Research on Dried Blood Spots Obtained Through Newborn Screening

Biomedical Research Reporting Guidelines

This chart lists the major biomedical research reporting guidelines that provide advice for reporting research methods and findings. They usually "specify a minimum set of items required for a clear and transparent account of what was done and what was found in a research study, reflecting, in particular, issues that might introduce bias into the research" (Adapted from the EQUATOR Network Resource Centre ). The chart also includes editorial style guides for writing research reports or other publications.

More research reporting guidelines are available at the EQUATOR Network Resource Centre.


Study Design 101: Practice Guideline


A practice guideline is a statement produced by a panel of experts that outlines current best practice to inform health care professionals and patients in making clinical decisions. The statement is produced after an extensive review of the literature and is typically created by professional associations, government agencies, and/or public or private organizations.

Good guidelines clearly define the topic; appraise and summarize the best evidence regarding prevention, diagnosis, prognosis, therapy, harm, and cost-effectiveness; and identify the decision points where this information should be integrated with clinical experience and patient wishes to determine practice. Practice guidelines should be reviewed frequently and updated as necessary for continued accuracy and relevancy.

Practice guidelines are also known as "Evidence-based guidelines" and "Clinical guidelines."

Advantages

  • Created by panels of experts
  • Based on professional published literature
  • Practical guidance for clinicians
  • Considered an evidence-based resource

Disadvantages

  • Slow to change or be updated
  • Not always available, especially for controversial topics
  • Expensive and time-consuming to produce
  • Recommendations might be affected by the type of organization creating the guideline

Design pitfalls to look out for

The panel should be composed of a variety of experts with assorted affiliations.

Is the panel composed of members from a variety of professional associations, government agencies and/or institutes? Does one organization/association predominate?

Fictitious Example

A practice guideline focusing on the best way to prevent sunburn when wearing sunscreen involved forming a multidisciplinary panel of experts (dermatologists, oncologists, sunscreen chemists, etc.). These experts searched the literature and identified 123 research articles on sunscreen and sunburn prevention for appraisal. The research was then reviewed by a member of the panel with critical appraisal experience in order to identify only those high-quality research articles that permit making recommendations. Ninety-seven high-quality studies were selected. These articles were read and synthesized by the panel to create a formal guideline recommendation. Based on the literature, the guideline recommended that the best way to prevent sunburn is to wear UVA blocking sunscreen daily. However, there was insufficient evidence in the literature to make any recommendations about newer sunscreen formulations. This identified the need for further research on this topic.

Real-life Examples

Chou, R., Deyo, R., Friedly, J., Skelly, A., Hashimoto, R., Weimer, M., ... Brodt, E. (2017). Nonpharmacologic therapies for low back pain: A systematic review for an American College of Physicians clinical practice guideline. Annals of Internal Medicine, 166(7), 493-505. https://doi.org/10.7326/M16-2459

A group from the American College of Physicians reviewed the current evidence to determine which nonpharmacologic options are effective in treating low back pain (both acute and chronic). New treatment options have appeared in the literature since 2007 (the prior guideline on this topic), and several show a "small to moderate, usually short-term effect on pain", including tai chi, mindfulness-based stress reduction, and yoga, alongside continued support for prior treatment recommendations including exercise, psychological therapies, multidisciplinary rehabilitation, spinal manipulation, massage, and acupuncture. There were greater effects on pain than on function, and the strength of evidence for several of these interventions is low.

Lennon, S., Dellavalle, D., Rodder, S., Prest, M., Sinley, R., Hoy, M., & Papoutsakis, C. (2017). 2015 Evidence Analysis Library evidence-based nutrition practice guideline for the management of hypertension in adults. Journal of the Academy of Nutrition and Dietetics, 117 (9), 1445-1458.e17. https://doi.org/10.1016/j.jand.2017.04.008

This guideline addresses the role of nutrition in managing hypertension in adults. Seventy studies were evaluated, resulting in eight recommendations to reduce blood pressure in adults with hypertension, based on moderate levels of evidence: "provision of medical nutrition therapy by an RDN [registered dietitian nutritionist], adoption of the Dietary Approaches to Stop Hypertension dietary pattern, calcium supplementation, physical activity as a component of a healthy lifestyle, reduction in dietary sodium intake, and reduction of alcohol consumption in heavy drinkers. Increased intake of dietary potassium and calcium as well as supplementation with potassium and magnesium for lowering BP are also recommended."

Related Terms

National Guideline Clearinghouse (NGC)

The National Guideline Clearinghouse was a public resource for evidence-based clinical practice guidelines maintained by the Agency for Healthcare Research and Quality (AHRQ) . It was taken offline in 2018 after federal funding ended.

Now test yourself!

1. Practice guidelines are available for almost any condition you'll encounter in your patients.

a) True
b) False

2. Practice Guidelines are typically written by which of the following?

a) Public or private organizations
b) Government agencies
c) Professional associations
d) The National Guideline Clearinghouse
e) b, c and d only
f) a, b, and c only


Reporting Guidelines

It is important that your manuscript gives a clear and complete account of the research that you have done. Well-reported research is more useful, and complete reporting allows editors, peer reviewers, and readers to understand what you did and how.

Poorly reported research can distort the literature and leads to studies that cannot be replicated or used in future meta-analyses or systematic reviews.

You should make sure that your manuscript is written in a way that readers know exactly what you did and could repeat your study, if they wanted to, without any additional information. It is particularly important that you give enough information in the methods section of your manuscript.

To help with reporting your research, there are reporting guidelines available for many different study designs. These contain a checklist of minimum points that you should cover in your manuscript. You should use these guidelines when you are preparing and writing your manuscript, and you may be required to provide a completed version of the checklist when you submit your manuscript. 

The EQUATOR (Enhancing the Quality and Transparency Of health Research) Network is an international initiative that aims to improve the quality of research publications. It provides a comprehensive list of reporting guidelines and other material to help improve reporting. 

A full list of the reporting guidelines endorsed by the EQUATOR Network can be found on its website. Some of the reporting guidelines for common study designs are:

  • Randomized controlled trials – CONSORT
  • Systematic reviews – PRISMA
  • Observational studies – STROBE
  • Case reports – CARE
  • Qualitative research – COREQ
  • Pre-clinical animal studies – ARRIVE

Peer reviewers may be asked to use these checklists when assessing your manuscript. If you follow these guidelines, editors and peer reviewers will be able to assess your manuscript more effectively, as they will more easily understand what you did. It may also mean that they ask you for fewer revisions.


Ten simple rules for good research practice

Simon Schwab

1 Center for Reproducible Science, University of Zurich, Zurich, Switzerland

2 Epidemiology, Biostatistics and Prevention Institute, University of Zurich, Zurich, Switzerland

Perrine Janiaud

3 Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland

Michael Dayan

4 Human Neuroscience Platform, Fondation Campus Biotech Geneva, Geneva, Switzerland

Valentin Amrhein

5 Department of Environmental Sciences, Zoology, University of Basel, Basel, Switzerland

Radoslaw Panczak

6 Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland

Patricia M. Palagi

7 SIB Training Group, SIB Swiss Institute of Bioinformatics, Lausanne, Switzerland

Lars G. Hemkens

8 Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, California, United States of America

9 Meta-Research Innovation Center Berlin (METRIC-B), Berlin Institute of Health, Berlin, Germany

Meike Ramon

10 Applied Face Cognition Lab, University of Lausanne, Lausanne, Switzerland

Nicolas Rothen

11 Faculty of Psychology, UniDistance Suisse, Brig, Switzerland

Stephen Senn

12 Statistical Consultant, Edinburgh, United Kingdom

Leonhard Held

This is a PLOS Computational Biology Methods paper.

Introduction

The lack of research reproducibility has caused growing concern across various scientific fields [ 1 – 5 ]. Today, there is widespread agreement, within and outside academia, that scientific research is suffering from a reproducibility crisis [ 6 , 7 ]. Researchers reach different conclusions—even when the same data have been processed—simply due to varied analytical procedures [ 8 , 9 ]. As we continue to recognize this problematic situation, some major causes of irreproducible research have been identified. This, in turn, provides the foundation for improvement by identifying and advocating for good research practices (GRPs). Indeed, powerful solutions are available, for example, preregistration of study protocols and statistical analysis plans, sharing of data and analysis code, and adherence to reporting guidelines. Although these and other best practices may facilitate reproducible research and increase trust in science, it remains the responsibility of researchers themselves to actively integrate them into their everyday research practices.

In contrast to ubiquitous specialized training, cross-disciplinary courses focusing on best practices to enhance the quality of research are lacking at universities and are urgently needed. The intersections between disciplines offer a space for peer evaluation, mutual learning, and sharing of best practices. In medical research, interdisciplinary work is inevitable. For example, conducting clinical trials requires experts with diverse backgrounds, including clinical medicine, pharmacology, biostatistics, evidence synthesis, nursing, and implementation science. Bringing researchers with diverse backgrounds and levels of experience together to exchange knowledge and learn about problems and solutions adds value and improves the quality of research.

The present selection of rules was based on our experiences with teaching GRP courses at the University of Zurich, our course participants’ feedback, and the views of a cross-disciplinary group of experts from within the Swiss Reproducibility Network ( www.swissrn.org ). The list is neither exhaustive, nor does it aim to address and systematically summarize the wide spectrum of issues including research ethics and legal aspects (e.g., related to misconduct, conflicts of interests, and scientific integrity). Instead, we focused on practical advice at the different stages of everyday research: from planning and execution to reporting of research. For a more comprehensive overview on GRPs, we point to the United Kingdom’s Medical Research Council’s guidelines [ 10 ] and the Swedish Research Council’s report [ 11 ]. While the discussion of the rules may predominantly focus on clinical research, much applies, in principle, to basic biomedical research and research in other domains as well.

The 10 proposed rules can serve multiple purposes: an introduction for researchers to relevant concepts to improve research quality, a primer for early-career researchers who participate in our GRP courses, or a starting point for lecturers who plan a GRP course at their own institutions. The 10 rules are grouped according to planning (5 rules), execution (3 rules), and reporting of research (2 rules); see Fig 1 . These principles can (and should) be implemented as a habit in everyday research, just like toothbrushing.

Fig 1. The 10 rules for good research practice, grouped into research planning, execution, and reporting. GRP, good research practices.

Research planning

Rule 1: Specify your research question

Coming up with a research question is not always simple and may take time. A successful study requires a narrow and clear research question. In evidence-based research, prior studies are assessed in a systematic and transparent way to identify a research gap for a new study that answers a question that matters [ 12 ]. Papers that provide a comprehensive overview of the current state of research in the field, such as systematic reviews, are particularly helpful. Perspective papers may also be useful; for example, there is a paper titled "SARS-CoV-2 and COVID-19: The most important research questions." However, a systematic assessment of research gaps deserves more attention than opinion-based publications.

In the next step, a vague research question should be further developed and refined. In clinical research and evidence-based medicine, the population, intervention, comparator, outcome, and time frame (PICOT) approach provides a set of criteria that can help frame a research question [ 13 ]. From a well-developed research question, subsequent steps will follow, which may include the exact definition of the population, the outcome, the data to be collected, and the sample size that is required. It may be useful to find out if other researchers find the idea interesting as well and whether it might promise a valuable contribution to the field. However, actively involving the public or patients can be an even more effective way to determine which research questions matter.

The level of detail in a research question also depends on whether the planned research is confirmatory or exploratory. In contrast to confirmatory research, exploratory research does not require a well-defined hypothesis from the start. Some examples of exploratory experiments are those based on omics and multi-omics experiments (genomics, bulk RNA-Seq, single-cell, etc.) in systems biology and connectomics and whole-brain analyses in brain imaging. Both exploration and confirmation are needed in science, and it is helpful to understand their strengths and limitations [ 14 , 15 ].

Rule 2: Write and register a study protocol

In clinical research, registration of clinical trials has become standard since the late 1990s and is now a legal requirement in many countries. Such studies require a study protocol to be registered, for example, with ClinicalTrials.gov, the European Clinical Trials Register, or the World Health Organization's International Clinical Trials Registry Platform. A similar effort has been implemented for the registration of systematic reviews (PROSPERO). Study registration has also been proposed for observational studies [ 16 ] and more recently in preclinical animal research [ 17 ] and is now being advocated across disciplines under the term "preregistration" [ 18 , 19 ].

Study protocols typically document at minimum the research question and hypothesis, a description of the population, the targeted sample size, the inclusion/exclusion criteria, the study design, the data collection, the data processing and transformation, and the planned statistical analyses. The registration of study protocols reduces publication bias and hindsight bias and can safeguard honest research and minimize waste of research [ 20 – 22 ]. Registration ensures that studies can be scrutinized by comparing the reported research with what was actually planned and written in the protocol, and any discrepancies may indicate serious problems (e.g., outcome switching).

Note that registration does not mean that researchers have no flexibility to adapt the plan as needed. Indeed, new or more appropriate procedures may become available or known only after registration of a study. Therefore, a more detailed statistical analysis plan can be added to the protocol as an amendment before the data are observed or unblinded [ 23 , 24 ]. Likewise, registration does not exclude the possibility of conducting exploratory data analyses; however, they must be clearly reported as such.

To go even further, registered reports are a novel article type that incentivize high-quality research—irrespective of the ultimate study outcome [ 25 , 26 ]. With registered reports, peer-reviewers decide before anyone knows the results of the study, and they have a more active role in being able to influence the design and analysis of the study. Journals from various disciplines increasingly support registered reports [ 27 ].

Naturally, preregistration and registered reports also have their limitations and may not be appropriate in a purely hypothesis-generating (exploratory) framework. Reports of exploratory studies should indeed not be molded into a confirmatory framework; appropriate, rigorous reporting alternatives have been suggested and are starting to be implemented [ 28 , 29 ].

Rule 3: Justify your sample size

Early-career researchers in our GRP courses often identify sample size as an issue in their research. For example, they say that they work with a low number of samples due to slow growth of cells, or they have a limited number of patient tumor samples due to a rare disease. But if your sample size is too low, your study has a high risk of providing a false negative result (type II error). In other words, you are unlikely to find an effect even if there truly was an effect.

Unfortunately, there is more bad news for small studies. When an effect from a small study is selected for drawing conclusions because it was statistically significant, low power increases the probability that the effect size is overestimated [ 30 , 31 ]. The reason is that, with low power, studies that happen to find larger (overestimated) effects due to sampling variation are much more likely to be statistically significant than those that happen to find smaller (more realistic) effects [ 30 , 32 , 33 ]. For the phenomenon that small studies often report more extreme results (in meta-analyses), the term "small-study effect" was introduced [ 34 ]. In any case, an underpowered study is a problematic study, no matter the outcome.

In conclusion, small sample sizes can undermine research, but when is a study too small? For one study, a total of 50 patients may be fine, but for another, 1,000 patients may be required. Determining how large a study needs to be requires an appropriate sample size calculation, which ensures that enough data are collected to achieve sufficient statistical power (the probability of rejecting the null hypothesis when it is in fact false).

Low-powered studies can be avoided by performing a sample size calculation to find the required sample size of the study. This requires specifying a primary outcome variable and the magnitude of effect you are interested in (among some other factors); in clinical research, this is often the minimal clinically relevant difference. The statistical power is often set at 80% or larger. A comprehensive list of packages for sample size calculation is available [ 35 ], among them the R package "pwr" [ 36 ]. There are also many online calculators available, for example, the University of Zurich's "SampleSizeR" [ 37 ].
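To make this concrete, here is a minimal sketch of such a calculation in Python using the statsmodels package (an alternative to the R "pwr" package mentioned above). The effect size, alpha, and power values are illustrative assumptions, not recommendations for any particular study.

```python
# Sample size for a two-sample t-test, assuming (illustratively) a
# standardized effect size (Cohen's d) of 0.5, alpha = 0.05, and 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.1f}")  # ~63.8, i.e., 64 per group
```

Under these assumptions, the calculation returns roughly 64 participants per group; a smaller assumed effect size would increase the required sample size considerably.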

A worthwhile alternative for planning the sample size that puts less emphasis on null hypothesis testing is based on the desired precision of the study; for example, one can calculate the sample size that is necessary to obtain a desired width of a confidence interval for the targeted effect [ 38 – 40 ]. A general framework for sample size justification beyond a calculation-only approach has been proposed [ 41 ]. It is also worth mentioning that some study types have other requirements or need specific methods. In diagnostic testing, one would need to determine the anticipated minimal sensitivity or specificity; in prognostic research, the number of parameters that can be used to fit a prediction model given a fixed sample size should be specified. Designs can also be so complex that a simulation (Monte Carlo method) may be required.
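For complex designs, simulation-based power estimation is often the most practical route. The sketch below, with purely illustrative assumptions (two groups, a true difference of 0.5 standard deviations, 64 participants per group), estimates power by simulating many datasets and counting how often the test reaches significance.

```python
# Monte Carlo power estimate for a two-sample t-test under illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, true_diff, n_sims, alpha = 64, 0.5, 5000, 0.05

significant = 0
for _ in range(n_sims):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treatment = rng.normal(loc=true_diff, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    significant += p_value < alpha

print(f"Estimated power: {significant / n_sims:.2f}")  # ~0.80 under these assumptions
```

The same pattern extends to designs with clustering, unequal group sizes, or nonstandard outcomes: simulate data under the assumed model, analyze each simulated dataset as planned, and report the proportion of significant results.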

Sample size calculations should be done under different assumptions, and the largest estimated sample size is often the safer bet than a best-case scenario. The calculated sample size should further be adjusted to allow for possible missing data. Due to the complexity of accurately calculating sample size, researchers should strongly consider consulting a statistician early in the study design process.

Rule 4: Write a data management plan

In 2020, 2 Coronavirus Disease 2019 (COVID-19) papers in leading medical journals were retracted after major concerns about the data were raised [ 42 ]. Today, raw data are more often recognized as a key outcome of research along with the paper. Therefore, it is important to develop a strategy for the life cycle of data, including suitable infrastructure for long-term storage.

The data life cycle is described in a data management plan: a document that describes what data will be collected and how the data will be organized, stored, handled, and protected during and after the research project. Several funders require a data management plan in grant submissions, and publishers like PLOS encourage authors to provide one as well. The Wellcome Trust provides guidance on the development of a data management plan, including real examples from neuroimaging, genomics, and social sciences [ 43 ]. However, projects do not always allocate funding and resources to the actual implementation of the data management plan.

The Findable, Accessible, Interoperable, and Reusable (FAIR) data principles promote maximal use of data and enable machines to access and reuse data with minimal human intervention [ 44 ]. FAIR principles require the data to be retained, preserved, and shared preferably with an immutable unique identifier and a clear usage license. Appropriate metadata will help other researchers (or machines) to discover, process, and understand the data. However, requesting researchers to fully comply with the FAIR data principles in every detail is an ambitious goal.
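As a loose illustration of what machine-readable metadata can look like, the sketch below writes a minimal metadata record for a dataset. The field names loosely mirror common repository conventions, but the exact schema here is invented for illustration and is not a formal standard.

```python
# Writing a minimal, machine-readable metadata record for a dataset.
# The field names below are illustrative, not a formal metadata standard.
import json

metadata = {
    "title": "Example clinical cohort dataset",
    "creators": ["Doe, Jane", "Roe, Richard"],
    "identifier": "doi:10.xxxx/example",   # placeholder, not a real DOI
    "license": "CC-BY-4.0",
    "description": "Synthetic example accompanying a data management plan.",
    "keywords": ["cohort study", "FAIR", "example"],
}

with open("dataset_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```

Even a simple record like this makes a dataset easier to discover and reuse than an undocumented folder of files.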

Multidisciplinary data repositories that support FAIR are, for example, Dryad ( https://datadryad.org/ ), EUDAT ( www.eudat.eu ), OSF ( https://osf.io/ ), and Zenodo ( https://zenodo.org/ ). A number of institutional and field-specific repositories may also be suitable. However, sometimes authors may not be able to make their data publicly available for legal or ethical reasons. In such cases, a data user agreement can indicate the conditions required to access the data. Journals highlight what are acceptable and what are unacceptable data access restrictions and often require a data availability statement.

Organizing the study artifacts in a structured way greatly facilitates the reuse of data and code within and outside the lab, enhancing collaborations and maximizing the research investment. Support and courses for data management plans are sometimes available at universities. A separate 10 simple rules paper is dedicated to creating a good data management plan [ 45 ].

Rule 5: Reduce bias

Bias is a distorted view in favor of or against a particular idea. In statistics, bias is a systematic deviation of a statistical estimate from the (true) quantity it estimates. Bias can invalidate our conclusions, and the more bias there is, the less valid they are. For example, in clinical studies, bias may mislead us into reaching a causal conclusion that the difference in the outcomes was due to the intervention or the exposure. This is a big concern, and, therefore, the risk of bias is assessed in clinical trials [ 46 ] as well as in observational studies [ 47 , 48 ].

There are many different forms of bias that can occur in a study, and they may overlap (e.g., allocation bias and confounding bias) [ 49 ]. Bias can occur at different stages, for example, immortal time bias in the design of the study, information bias in the execution of the study, and publication bias in the reporting of research. Understanding bias allows researchers to remain vigilant of potential sources of bias when peer-reviewing and when designing their own studies. We summarized some common types of bias and some preventive steps in Table 1 , but many other forms of bias exist; for a comprehensive overview, see Oxford University's Catalogue of Bias [ 50 ].

For a comprehensive collection, see catalogofbias.org .

Here are some noteworthy examples of study bias from the literature: An example of information bias was observed when, in 1998, an alleged association between the measles, mumps, and rubella (MMR) vaccine and autism was reported. Recall bias (a subtype of information bias) emerged when parents of autistic children recalled the onset of autism after an MMR vaccination more often than parents of similar children who were diagnosed prior to the media coverage of that controversial and meanwhile retracted study [ 51 ]. A study from 2001 showed better survival for Academy Award-winning actors, but this was due to immortal time bias, which favors the treatment or exposure group [ 52 , 53 ]. Another study systematically investigated self-reports about musculoskeletal symptoms and found information bias: participants with little computer time overestimated their computer usage, while participants who spent a lot of time at the computer underestimated it [ 54 ].

Information bias can be mitigated by using objective rather than subjective measurements. Standard operating procedures (SOPs) and electronic lab notebooks additionally help to follow well-designed protocols for data collection and handling [ 55 ]. Even when bias cannot be fully mitigated, complete descriptions of data and methods at least allow the risk of bias to be assessed.

Research execution

Rule 6: Avoid questionable research practices

Questionable research practices (QRPs) can lead to exaggerated findings and false conclusions and thus to irreproducible research. Often, QRPs are used with no bad intentions. This becomes evident when methods sections explicitly describe such procedures, for example, increasing the number of samples until statistical significance in support of the hypothesis is reached. Therefore, it is important that researchers know about QRPs in order to recognize and avoid them.

Several QRPs have been named [ 56 , 57 ]. Among them are low statistical power, pseudoreplication, repeated inspection of data, p -hacking [ 58 ], selective reporting, and hypothesizing after the results are known (HARKing).

The first 2 QRPs, low statistical power and pseudoreplication, can be prevented by proper planning and designing of studies, including sample size calculation and appropriate statistical methodology to avoid treating data as independent when in fact they are not. Statistical power is not equal to reproducibility, but it is a precondition for it, as a lack of power can result in false negative as well as false positive findings (see Rule 3 ).

In fact, a lot of QRPs can be avoided with a study protocol and statistical analysis plan. Preregistration, as described in Rule 2, is considered best practice for this purpose. However, many of these issues can additionally be rooted in institutional incentives and rewards. Both funding and promotion are often tied to the quantity rather than the quality of the research output. Universities still give few or no rewards for writing and registering protocols, sharing data, publishing negative findings, and conducting replication studies. Thus, a wider "culture change" is needed.

Rule 7: Be cautious with interpretations of statistical significance

It would help if more researchers were familiar with correct interpretations and possible misinterpretations of statistical tests, p -values, confidence intervals, and statistical power [ 59 , 60 ]. A statistically significant p -value does not necessarily mean that there is a clinically or biologically relevant effect. Specifically, the traditional dichotomization into statistically significant ( p < 0.05) versus statistically nonsignificant ( p ≥ 0.05) results is seldom appropriate, can lead to cherry-picking of results and may eventually corrupt science [ 61 ]. We instead recommend reporting exact p -values and interpreting them in a graded way in terms of the compatibility of the null hypothesis with the data [ 62 , 63 ]. Moreover, a p -value around 0.05 (e.g., 0.047 or 0.055) provides only little information, as is best illustrated by the associated replication power: The probability that a hypothetical replication study of the same design will lead to a statistically significant result is only 50% [ 64 ] and is even lower in the presence of publication bias and regression to the mean (the phenomenon that effect estimates in replication studies are often smaller than the estimates in the original study) [ 65 ]. Claims of novel discoveries should therefore be based on a smaller p -value threshold (e.g., p < 0.005) [ 66 ], but this really depends on the discipline (genome-wide screenings or studies in particle physics often apply much lower thresholds).
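As a worked illustration of the 50% replication probability mentioned above: under the normal approximation, an original two-sided p-value implies an observed z-statistic, and the power of an identical replication at alpha = 0.05 is Φ(z_obs − 1.96). The short sketch below computes this for a few p-values; it is a simplified model that ignores publication bias and regression to the mean.

```python
# Replication power under the normal approximation: an original p = 0.05
# (z of about 1.96) implies only ~50% power for an identical replication study.
from scipy.stats import norm

def replication_power(p_original, alpha=0.05):
    z_obs = norm.ppf(1 - p_original / 2)   # z-statistic implied by a two-sided p-value
    z_crit = norm.ppf(1 - alpha / 2)       # 1.96 for alpha = 0.05
    return norm.cdf(z_obs - z_crit)

for p in (0.05, 0.01, 0.005):
    print(f"original p = {p}: replication power ~ {replication_power(p):.2f}")
# p = 0.05 -> 0.50; p = 0.01 -> 0.73; p = 0.005 -> 0.80
```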

Generally, there is often too much emphasis on p -values. A statistical index such as the p -value is just the final product of an analysis, the tip of the iceberg [ 67 ]. Statistical analyses often include many complex stages, from data processing, cleaning, transformation, addressing missing data, modeling, to statistical inference. Errors and pitfalls can creep in at any stage, and even a tiny error can have a big impact on the result [ 68 ]. Also, when many hypothesis tests are conducted (multiple testing), false positive rates may need to be controlled to protect against wrong conclusions, although adjustments for multiple testing are debated [ 69 – 71 ].
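As a small illustration of controlling false positives across many tests, the sketch below adjusts a set of invented p-values with two common procedures available in statsmodels: Bonferroni (controls the family-wise error rate) and Benjamini-Hochberg (controls the false discovery rate).

```python
# Adjusting a set of invented p-values for multiple testing.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.012, 0.034, 0.047, 0.21, 0.68]

for method in ("bonferroni", "fdr_bh"):
    reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in p_adjusted], list(reject))
```

On these numbers, the stricter Bonferroni correction rejects fewer hypotheses than the Benjamini-Hochberg procedure, reflecting the trade-off that the cited debate is about.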

Thus, a p -value alone is not a measure of how credible a scientific finding is [ 72 ]. Instead, the quality of the research must be considered, including the study design, the quality of the measurement, and the validity of the assumptions that underlie the data analysis [ 60 , 73 ]. Frameworks exist that help to systematically and transparently assess the certainty in evidence; the most established and widely used one is Grading of Recommendations, Assessment, Development and Evaluations (GRADE; www.gradeworkinggroup.org ) [ 74 ].

Training in basic statistics, statistical programming, and reproducible analyses, and better involvement of data professionals in academia, is necessary. University departments sometimes have statisticians who can support researchers. Importantly, statisticians need to be involved early in the process and on an equal footing, not just at the end of a project to perform the final data analysis.

Rule 8: Make your research open

In reality, science often lacks transparency. Open science makes the process of producing evidence and claims transparent and accessible to others [ 75 ]. Several universities and research funders have already implemented open science roadmaps to advocate free and public science as well as open access to scientific knowledge, with the aim of further developing the credibility of research. Open research allows more eyes to see and critique it, a principle similar to "Linus's law" in software development, which says that if there are enough people to test a piece of software, most bugs will be discovered.

As science often progresses incrementally, writing and sharing a study protocol and making data and methods readily available is crucial to facilitate knowledge building. The Open Science Framework (osf.io) is a free and open-source project management tool that supports researchers throughout the entire project life cycle. OSF enables preregistration of study protocols and sharing of documents, data, analysis code, supplementary materials, and preprints.

To facilitate reproducibility, a research paper can link to data and analysis code deposited on OSF. Computational notebooks are now readily available that unite data processing, data transformations, statistical analyses, figures and tables in a single document (e.g., R Markdown, Jupyter); see also the 10 simple rules for reproducible computational research [ 76 ]. Making both data and code open thus minimizes waste of funding resources and accelerates science.
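In that spirit, even a plain analysis script can support reproducibility by fixing random seeds and recording the software environment alongside the results. A minimal sketch (the output file name is hypothetical):

```python
# Minimal reproducibility practices in a plain analysis script:
# fix the random seed and store the software environment with the results.
import json
import platform
import sys

import numpy as np

rng = np.random.default_rng(seed=2022)   # fixed seed: the same "random" data every run
data = rng.normal(size=100)
results = {"mean": float(data.mean()), "sd": float(data.std(ddof=1))}

# Record the environment so others can reconstruct it (file name is hypothetical).
results["environment"] = {
    "python": sys.version.split()[0],
    "platform": platform.platform(),
    "numpy": np.__version__,
}
with open("analysis_results.json", "w") as f:
    json.dump(results, f, indent=2)
```

Notebooks and dynamic reports go further by keeping code, results, and narrative in a single document, but the same two habits (seeds and environment records) apply there as well.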

Open science can also advance researchers’ careers, especially for early-career researchers. The increased visibility, retrievability, and citations of datasets can all help with career building [ 77 ]. Therefore, institutions should provide necessary training, and hiring committees and journals should align their core values with open science, to attract researchers who aim for transparent and credible research [ 78 ].

Research reporting

Rule 9: Report all findings

Publication bias occurs when the outcome of a study influences the decision whether to publish it. Researchers, reviewers, and publishers often find nonsignificant study results not interesting or worth publishing. As a consequence, outcomes and analyses are only selectively reported in the literature [ 79 ], also known as the file drawer effect [ 80 ].

The extent of publication bias in the literature is illustrated by the overwhelming frequency of statistically significant findings [ 81 ]. A study extracted p -values from MEDLINE and PubMed Central and showed that 96% of the records reported at least 1 statistically significant p -value [ 82 ], which seems implausible in the real world. Another study plotted the distribution of more than 1 million z -values from Medline, revealing a huge gap from −2 to 2 [ 83 ]. Positive studies (i.e., statistically significant, perceived as striking or showing a beneficial effect) were 4 times more likely to get published than negative studies [ 84 ].

Often, a statistically nonsignificant result is interpreted as a "null" finding. But a nonsignificant finding does not necessarily mean a null effect; absence of evidence is not evidence of absence [ 85 ]. An individual study may be underpowered, resulting in a nonsignificant finding, but the cumulative evidence from multiple studies may indeed provide sufficient evidence in a meta-analysis. Another argument is that a confidence interval that contains the null value often also contains non-null values that may be of high practical importance. Only if all the values inside the interval are deemed unimportant from a practical perspective may it be fair to describe a result as a null finding [ 61 ]. We should thus never report "no difference" or "no association" just because a p -value is larger than 0.05 or, equivalently, because a confidence interval includes the "null" [ 61 ].
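To put numbers on this (the figures are invented for illustration): a mean difference of 2.4 with a standard error of 1.5 is statistically nonsignificant, yet its 95% confidence interval spans values from slightly negative to substantially positive, so "no difference" would be the wrong summary.

```python
# Invented numbers: a nonsignificant result whose confidence interval
# still includes practically important effect sizes.
from scipy.stats import norm

diff, se = 2.4, 1.5
z = diff / se
p = 2 * (1 - norm.cdf(abs(z)))
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se
print(f"p = {p:.2f}, 95% CI = ({ci_low:.1f}, {ci_high:.1f})")
# p = 0.11, 95% CI = (-0.5, 5.3): the interval includes 0, but also large effects
```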

On the other hand, studies sometimes report statistically nonsignificant results with “spin” to claim that the experimental treatment is beneficial, often by focusing their conclusions on statistically significant differences on secondary outcomes despite a statistically nonsignificant difference for the primary outcome [ 86 , 87 ].

Findings that are not published have a tremendous impact on the research ecosystem, distorting our knowledge of the scientific landscape, perpetuating misconceptions, and jeopardizing the judgment of researchers and the public's trust in science. In clinical research, publication bias can mislead care decisions and harm patients, for example, when treatments appear useful despite only minimal or even absent benefits reported in studies that were not published and thus are unknown to physicians [ 88 ]. Moreover, publication bias also directly affects the formulation and proliferation of scientific theories, which are taught to students and early-career researchers, thereby perpetuating biased research from the core. It has been shown in modeling studies that unless a sufficient proportion of negative studies are published, a false claim can become an accepted fact [ 89 ] and the false positive rates influence trustworthiness in a given field [ 90 ].

In sum, negative findings are undervalued. They need to be more consistently reported at the study level or be systematically investigated at the systematic review level. Researchers have their share of responsibilities, but there is clearly a lack of incentives from promotion and tenure committees, journals, and funders.

Rule 10: Follow reporting guidelines

Study reports need to faithfully describe the aim of the study and what was done, including potential deviations from the original protocol, as well as what was found. Yet, there is ample evidence of discrepancies between protocols and research reports, and of insufficient quality of reporting [ 79 , 91 – 95 ]. Reporting deficiencies threaten our ability to clearly communicate findings, replicate studies, make informed decisions, and build on existing evidence, wasting time and resources invested in the research [ 96 ].

Reporting guidelines aim to provide the minimum information needed on key design features and analysis decisions, ensuring that findings can be adequately used and studies replicated. In 2008, the Enhancing the QUAlity and Transparency Of Health Research (EQUATOR) network was initiated to provide reporting guidelines for a variety of study designs along with guidelines for education and training on how to enhance quality and transparency of health research. Currently, there are 468 reporting guidelines listed in the network; see the most prominent guidelines in Table 2 . Furthermore, following the ICMJE recommendations, medical journals are increasingly endorsing reporting guidelines [ 97 ], in some cases making it mandatory to submit the appropriate reporting checklist along with the manuscript.

The EQUATOR Network is a library with more than 400 reporting guidelines in health research ( www.equator-network.org ).

The use of reporting guidelines and journal endorsement has led to a positive impact on the quality and transparency of research reporting, but improvement is still needed to maximize the value of research [ 98 , 99 ].

Conclusions

Originally, this paper targeted early-career researchers; however, throughout the development of the rules, it became clear that the present recommendations can serve all researchers irrespective of their seniority. We focused on practical guidelines for planning, conducting, and reporting of research. Others have aligned GRP with similar topics [ 100 , 101 ]. Even though we provide 10 simple rules, the word "simple" should not be taken lightly. Putting the rules into practice usually requires effort and time, especially at the beginning of a research project. However, time can also be saved later, for example, when certain choices can be justified to reviewers by providing a study protocol or when data can be quickly reanalyzed by using computational notebooks and dynamic reports.

Researchers have field-specific research skills, but sometimes are not aware of best practices in other fields that can be useful. Universities should offer cross-disciplinary GRP courses across faculties to train the next generation of scientists. Such courses are an important building block to improve the reproducibility of science.

Acknowledgments

This article was written alongside the Good Research Practice (GRP) courses at the University of Zurich provided by the Center for Reproducible Science ( www.crs.uzh.ch ). All materials from the course are available at https://osf.io/t9rqm/ . We appreciated the discussion, development, and refinement of this article within the working group "training" of the SwissRN ( www.swissrn.org ). We are grateful to Philip Bourne for many valuable comments on earlier versions of the manuscript.

Funding Statement

S.S. received funding from SfwF (Stiftung für wissenschaftliche Forschung an der Universität Zürich; grant no. STWF-19-007). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.


Research Paper – Structure, Examples and Writing Guide

Definition:

A research paper is a written document that presents the author's original research, analysis, and interpretation of a specific topic or issue.

It is typically based on empirical evidence and may involve qualitative or quantitative research methods, or a combination of both. The purpose of a research paper is to contribute new knowledge or insights to a particular field of study and to demonstrate the author's understanding of the existing literature and theories related to the topic.

Structure of Research Paper

The structure of a research paper typically follows a standard format, consisting of several sections that convey specific information about the research study. The following is a detailed explanation of the structure of a research paper:

Title Page

The title page contains the title of the paper, the name(s) of the author(s), and the affiliation(s) of the author(s). It also includes the date of submission and, possibly, the name of the journal or conference where the paper is to be published.

Abstract

The abstract is a brief summary of the research paper, typically ranging from 100 to 250 words. It should include the research question, the methods used, the key findings, and the implications of the results. The abstract should be written in a concise and clear manner to allow readers to quickly grasp the essence of the research.

Introduction

The introduction section of a research paper provides background information about the research problem, the research question, and the research objectives. It also outlines the significance of the research, the research gap that it aims to fill, and the approach taken to address the research question. Finally, the introduction section ends with a clear statement of the research hypothesis or research question.

Literature Review

The literature review section of a research paper provides an overview of the existing literature on the topic of study. It includes a critical analysis and synthesis of the literature, highlighting the key concepts, themes, and debates. The literature review should also demonstrate the research gap and how the current study seeks to address it.

Methods

The methods section of a research paper describes the research design, the sample selection, the data collection and analysis procedures, and the statistical methods used to analyze the data. This section should provide sufficient detail for other researchers to replicate the study.

Results

The results section presents the findings of the research, using tables, graphs, and figures to illustrate the data. The findings should be presented in a clear and concise manner, with reference to the research question and hypothesis.

Discussion

The discussion section of a research paper interprets the findings and discusses their implications for the research question, the literature review, and the field of study. It should also address the limitations of the study and suggest future research directions.

Conclusion

The conclusion section summarizes the main findings of the study, restates the research question and hypothesis, and provides a final reflection on the significance of the research.

References

The references section provides a list of all the sources cited in the paper, following a specific citation style such as APA, MLA, or Chicago.

How to Write Research Paper

You can write a research paper by following this guide:

  • Choose a Topic: The first step is to select a topic that interests you and is relevant to your field of study. Brainstorm ideas and narrow down to a research question that is specific and researchable.
  • Conduct a Literature Review: The literature review helps you identify the gap in the existing research and provides a basis for your research question. It also helps you to develop a theoretical framework and research hypothesis.
  • Develop a Thesis Statement: The thesis statement is the main argument of your research paper. It should be clear, concise, and specific to your research question.
  • Plan your Research: Develop a research plan that outlines the methods, data sources, and data analysis procedures. This will help you to collect and analyze data effectively.
  • Collect and Analyze Data: Collect data using various methods such as surveys, interviews, observations, or experiments. Analyze data using statistical tools or other qualitative methods.
  • Organize your Paper: Organize your paper into sections such as Introduction, Literature Review, Methods, Results, Discussion, and Conclusion. Ensure that each section is coherent and follows a logical flow.
  • Write your Paper: Start by writing the introduction, followed by the literature review, methods, results, discussion, and conclusion. Ensure that your writing is clear, concise, and follows the required formatting and citation styles.
  • Edit and Proofread your Paper: Review your paper for grammar and spelling errors, and ensure that it is well-structured and easy to read. Ask someone else to review your paper to get feedback and suggestions for improvement.
  • Cite your Sources: Ensure that you properly cite all sources used in your research paper. This is essential for giving credit to the original authors and avoiding plagiarism.

Research Paper Example

Note : The below example research paper is for illustrative purposes only and is not an actual research paper. Actual research papers may have different structures, contents, and formats depending on the field of study, research question, data collection and analysis methods, and other factors. Students should always consult with their professors or supervisors for specific guidelines and expectations for their research papers.

Research Paper Example sample for Students:

Title: The Impact of Social Media on Mental Health among Young Adults

Abstract: This study aims to investigate the impact of social media use on the mental health of young adults. A literature review was conducted to examine the existing research on the topic. A survey was then administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO (Fear of Missing Out) are significant predictors of mental health problems among young adults.

Introduction: Social media has become an integral part of modern life, particularly among young adults. While social media has many benefits, including increased communication and social connectivity, it has also been associated with negative outcomes, such as addiction, cyberbullying, and mental health problems. This study aims to investigate the impact of social media use on the mental health of young adults.

Literature Review: The literature review highlights the existing research on the impact of social media use on mental health. The review shows that social media use is associated with depression, anxiety, stress, and other mental health problems. The review also identifies the factors that contribute to the negative impact of social media, including social comparison, cyberbullying, and FOMO.

Methods: A survey was administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The survey included questions on social media use, mental health status (measured using the DASS-21), and perceived impact of social media on their mental health. Data were analyzed using descriptive statistics and regression analysis.

Results: The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO are significant predictors of mental health problems among young adults.

Discussion: The study’s findings suggest that social media use has a negative impact on the mental health of young adults. The study highlights the need for interventions that address the factors contributing to the negative impact of social media, such as social comparison, cyberbullying, and FOMO.

Conclusion: In conclusion, social media use has a significant impact on the mental health of young adults. The study’s findings underscore the need for interventions that promote healthy social media use and address the negative outcomes associated with social media use. Future research can explore the effectiveness of interventions aimed at reducing the negative impact of social media on mental health. Additionally, longitudinal studies can investigate the long-term effects of social media use on mental health.

Limitations: The study has some limitations, including the use of self-report measures and a cross-sectional design. The use of self-report measures may result in biased responses, and a cross-sectional design limits the ability to establish causality.

Implications: The study’s findings have implications for mental health professionals, educators, and policymakers. Mental health professionals can use the findings to develop interventions that address the negative impact of social media use on mental health. Educators can incorporate social media literacy into their curriculum to promote healthy social media use among young adults. Policymakers can use the findings to develop policies that protect young adults from the negative outcomes associated with social media use.

References:

  • Twenge, J. M., & Campbell, W. K. (2019). Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Preventive medicine reports, 15, 100918.
  • Primack, B. A., Shensa, A., Escobar-Viera, C. G., Barrett, E. L., Sidani, J. E., Colditz, J. B., … & James, A. E. (2017). Use of multiple social media platforms and symptoms of depression and anxiety: A nationally-representative study among US young adults. Computers in Human Behavior, 69, 1-9.
  • Van der Meer, T. G., & Verhoeven, J. W. (2017). Social media and its impact on academic performance of students. Journal of Information Technology Education: Research, 16, 383-398.

Appendix: The survey used in this study is provided below.

Social Media and Mental Health Survey

  • How often do you use social media per day?
  • Less than 30 minutes
  • 30 minutes to 1 hour
  • 1 to 2 hours
  • 2 to 4 hours
  • More than 4 hours
  • Which social media platforms do you use?
  • Others (Please specify)
  • How often do you experience the following on social media?
  • Social comparison (comparing yourself to others)
  • Cyberbullying
  • Fear of Missing Out (FOMO)
  • Have you ever experienced any of the following mental health problems in the past month?
  • Do you think social media use has a positive or negative impact on your mental health?
  • Very positive
  • Somewhat positive
  • Somewhat negative
  • Very negative
  • In your opinion, which factors contribute to the negative impact of social media on mental health?
  • Social comparison
  • In your opinion, what interventions could be effective in reducing the negative impact of social media on mental health?
  • Education on healthy social media use
  • Counseling for mental health problems caused by social media
  • Social media detox programs
  • Regulation of social media use

Thank you for your participation!

Applications of Research Paper

Research papers have several applications in various fields, including:

  • Advancing knowledge: Research papers contribute to the advancement of knowledge by generating new insights, theories, and findings that can inform future research and practice. They help to answer important questions, clarify existing knowledge, and identify areas that require further investigation.
  • Informing policy: Research papers can inform policy decisions by providing evidence-based recommendations for policymakers. They can help to identify gaps in current policies, evaluate the effectiveness of interventions, and inform the development of new policies and regulations.
  • Improving practice: Research papers can improve practice by providing evidence-based guidance for professionals in various fields, including medicine, education, business, and psychology. They can inform the development of best practices, guidelines, and standards of care that can improve outcomes for individuals and organizations.
  • Educating students : Research papers are often used as teaching tools in universities and colleges to educate students about research methods, data analysis, and academic writing. They help students to develop critical thinking skills, research skills, and communication skills that are essential for success in many careers.
  • Fostering collaboration: Research papers can foster collaboration among researchers, practitioners, and policymakers by providing a platform for sharing knowledge and ideas. They can facilitate interdisciplinary collaborations and partnerships that can lead to innovative solutions to complex problems.

When to Write Research Paper

Research papers are typically written when a person has completed a research project or when they have conducted a study and have obtained data or findings that they want to share with the academic or professional community. Research papers are usually written in academic settings, such as universities, but they can also be written in professional settings, such as research organizations, government agencies, or private companies.

Here are some common situations where a person might need to write a research paper:

  • For academic purposes: Students in universities and colleges are often required to write research papers as part of their coursework, particularly in the social sciences, natural sciences, and humanities. Writing research papers helps students to develop research skills, critical thinking skills, and academic writing skills.
  • For publication: Researchers often write research papers to publish their findings in academic journals or to present their work at academic conferences. Publishing research papers is an important way to disseminate research findings to the academic community and to establish oneself as an expert in a particular field.
  • To inform policy or practice : Researchers may write research papers to inform policy decisions or to improve practice in various fields. Research findings can be used to inform the development of policies, guidelines, and best practices that can improve outcomes for individuals and organizations.
  • To share new insights or ideas: Researchers may write research papers to share new insights or ideas with the academic or professional community. They may present new theories, propose new research methods, or challenge existing paradigms in their field.

Purpose of a Research Paper

The purpose of a research paper is to present the results of a study or investigation in a clear, concise, and structured manner. Research papers are written to communicate new knowledge, ideas, or findings to a specific audience, such as researchers, scholars, practitioners, or policymakers. The primary purposes of a research paper are:

  • To contribute to the body of knowledge: Research papers aim to add new knowledge or insights to a particular field or discipline. They do this by reporting the results of empirical studies, reviewing and synthesizing existing literature, proposing new theories, or providing new perspectives on a topic.
  • To inform or persuade: Research papers are written to inform or persuade the reader about a particular issue, topic, or phenomenon. They present evidence and arguments to support their claims and seek to persuade the reader of the validity of their findings or recommendations.
  • To advance the field: Research papers seek to advance the field or discipline by identifying gaps in knowledge, proposing new research questions or approaches, or challenging existing assumptions or paradigms. They aim to contribute to ongoing debates and discussions within a field and to stimulate further research and inquiry.
  • To demonstrate research skills: Research papers demonstrate the author’s research skills, including their ability to design and conduct a study, collect and analyze data, and interpret and communicate findings. They also demonstrate the author’s ability to critically evaluate existing literature, synthesize information from multiple sources, and write in a clear and structured manner.

Characteristics of a Research Paper

Research papers have several characteristics that distinguish them from other forms of academic or professional writing. Here are some common characteristics of research papers:

  • Evidence-based: Research papers are based on empirical evidence, which is collected through rigorous research methods such as experiments, surveys, observations, or interviews. They rely on objective data and facts to support their claims and conclusions.
  • Structured and organized: Research papers have a clear and logical structure, with sections such as introduction, literature review, methods, results, discussion, and conclusion. They are organized in a way that helps the reader to follow the argument and understand the findings.
  • Formal and objective: Research papers are written in a formal and objective tone, with an emphasis on clarity, precision, and accuracy. They avoid subjective language or personal opinions and instead rely on objective data and analysis to support their arguments.
  • Citations and references: Research papers include citations and references to acknowledge the sources of information and ideas used in the paper. They use a specific citation style, such as APA, MLA, or Chicago, to ensure consistency and accuracy (a brief example follows this list).
  • Peer-reviewed: Research papers are often peer-reviewed, which means they are evaluated by other experts in the field before they are published. Peer-review ensures that the research is of high quality, meets ethical standards, and contributes to the advancement of knowledge in the field.
  • Objective and unbiased: Research papers strive to be objective and unbiased in their presentation of the findings. They avoid personal biases or preconceptions and instead rely on the data and analysis to draw conclusions.
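To illustrate the citation-style differences mentioned in the list above, here is a single hypothetical journal article (the source is invented for illustration) rendered in three common styles:

  • APA: Smith, J. (2020). Understanding social media use. Journal of Media Studies, 12(3), 45–67.
  • MLA: Smith, Jane. "Understanding Social Media Use." Journal of Media Studies, vol. 12, no. 3, 2020, pp. 45–67.
  • Chicago (author–date): Smith, Jane. 2020. "Understanding Social Media Use." Journal of Media Studies 12 (3): 45–67.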

Advantages of Research Papers

Research papers have many advantages, both for the individual researcher and for the broader academic and professional community. Here are some advantages of research papers:

  • Contribution to knowledge: Research papers contribute to the body of knowledge in a particular field or discipline. They add new information, insights, and perspectives to existing literature and help advance the understanding of a particular phenomenon or issue.
  • Opportunity for intellectual growth: Research papers provide an opportunity for intellectual growth for the researcher. They require critical thinking, problem-solving, and creativity, which can help develop the researcher’s skills and knowledge.
  • Career advancement: Research papers can help advance the researcher’s career by demonstrating their expertise and contributions to the field. They can also lead to new research opportunities, collaborations, and funding.
  • Academic recognition: Research papers can lead to academic recognition in the form of awards, grants, or invitations to speak at conferences or events. They can also contribute to the researcher’s reputation and standing in the field.
  • Impact on policy and practice: Research papers can have a significant impact on policy and practice. They can inform policy decisions, guide practice, and lead to changes in laws, regulations, or procedures.
  • Advancement of society: Research papers can contribute to the advancement of society by addressing important issues, identifying solutions to problems, and promoting social justice and equality.

Limitations of Research Papers

Research papers also have some limitations that should be considered when interpreting their findings or implications. Here are some common limitations of research papers:

  • Limited generalizability: Research findings may not be generalizable to other populations, settings, or contexts. Studies often use specific samples or conditions that may not reflect the broader population or real-world situations.
  • Potential for bias: Research papers may be biased due to factors such as sample selection, measurement errors, or researcher biases. It is important to evaluate the quality of the research design and methods used to ensure that the findings are valid and reliable.
  • Ethical concerns: Research papers may raise ethical concerns, such as the use of vulnerable populations or invasive procedures. Researchers must adhere to ethical guidelines and obtain informed consent from participants to ensure that the research is conducted in a responsible and respectful manner.
  • Limitations of methodology: Research papers may be limited by the methodology used to collect and analyze data. For example, certain research methods may not capture the complexity or nuance of a particular phenomenon, or may not be appropriate for certain research questions.
  • Publication bias: Research papers may be subject to publication bias, where positive or significant findings are more likely to be published than negative or non-significant findings. This can skew the overall findings of a particular area of research.
  • Time and resource constraints: Research papers may be limited by time and resource constraints, which can affect the quality and scope of the research. Researchers may not have access to certain data or resources, or may be unable to conduct long-term studies due to practical limitations.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


  • Open access
  • Published: 03 June 2024

The use of evidence to guide decision-making during the COVID-19 pandemic: divergent perspectives from a qualitative case study in British Columbia, Canada

  • Laura Jane Brubacher (ORCID: orcid.org/0000-0003-2806-9539)1,2,
  • Chris Y. Lovato1,
  • Veena Sriram1,3,
  • Michael Cheng1 &
  • Peter Berman1

Health Research Policy and Systems, volume 22, Article number: 66 (2024)

Abstract

Background

The challenges of evidence-informed decision-making in a public health emergency have never been so notable as during the COVID-19 pandemic. Questions about the decision-making process, including what forms of evidence were used, and how evidence informed—or did not inform—policy have been debated.

Methods

We examined decision-makers' observations on evidence-use in early COVID-19 policy-making in British Columbia (BC), Canada, through a qualitative case study. From July 2021 to January 2022, we conducted 18 semi-structured key informant interviews with BC elected officials, provincial and regional-level health officials, and civil society actors involved in the public health response. The questions focused on: (1) the use of evidence in policy-making; (2) the interface between researchers and policy-makers; and (3) key challenges perceived by respondents as barriers to applying evidence to COVID-19 policy decisions. Data were analyzed thematically, using a constant comparative method. Framework analysis was also employed to generate analytic insights across stakeholder perspectives.

Results

Overall, while many actors' impressions were that BC's early COVID-19 policy response was evidence-informed, an overarching theme was a lack of clarity and uncertainty as to what evidence was used and how it flowed into decision-making processes. Perspectives diverged on the relationship between 'government' and public health expertise, and whether or not public health actors had an independent voice in articulating evidence to inform pandemic governance. Respondents perceived a lack of coordination and continuity across data sources, and a lack of explicit guidelines on evidence-use in the decision-making process, which resulted in a sense of fragmentation. The tension between the processes involved in research and the need for rapid decision-making was perceived as a barrier to using evidence to inform policy.

Conclusions

Areas to be considered in planning for future emergencies include: information flow between policy-makers and researchers, coordination of data collection and use, and transparency as to how decisions are made—all of which reflect a need to improve communication. Based on our findings, clear mechanisms and processes for channeling varied forms of evidence into decision-making need to be identified, and doing so will strengthen preparedness for future public health crises.


Background

The challenges of evidence-informed decision-making Footnote 1 in a public health emergency have never been so salient as during the COVID-19 pandemic, given its unprecedented scale, rapidly evolving virology, and the multitude of global information systems for gathering, synthesizing, and disseminating evidence on the SARS-CoV-2 virus and associated public health and social measures [ 1 , 2 , 3 ]. Early in the COVID-19 pandemic, rapid decision-making became central for governments globally as they grappled with crucial decisions for which there was limited evidence. Looking retrospectively at these decision-making processes, and with an eye to strengthening future preparedness, critical questions arise: Were decisions informed by 'evidence'? What forms of evidence were used, and how, by decision-makers? [ 4 , 5 , 6 ].

Scientific evidence, including primary research, epidemiologic research, and knowledge synthesis, is one among multiple competing influences that inform decision-making processes in an outbreak such as COVID-19 [ 7 ]. Indeed, the use of multiple forms of evidence has been particularly notable as it applies to COVID-19 policy-making. Emerging research has also documented the important influence of ‘non-scientific’ evidence such as specialized expertise and experience, contextual information, and level of available resources [ 8 , 9 , 10 ]. The COVID-19 pandemic has underscored the politics of evidence-use in policy-making [ 11 ]; what evidence is used and how can be unclear, and shaped by political bias [ 4 , 5 ]. Moreover, while many governments have established scientific advisory boards, the perspectives of these advisors were reportedly largely absent from COVID-19 policy processes [ 6 ]. How evidence and public health policy interface—and intersect—is a complex question, particularly in the dynamic context of a public health emergency.

Within Canada, evidence-informed decision-making is a hallmark of the public health system and endorsed by government [ 12 ]. In British Columbia (BC), Canada, during the early phases of COVID-19 (March–June 2020), provincial public health communication focused primarily on voluntary compliance with recommended public health and social measures, and on supporting those most affected by the pandemic. Later, the response shifted from voluntary compliance to mandatory, enforceable government orders [ 13 ]. As in many other jurisdictions, the BC government's public messaging asserted that the province's approach to managing the COVID-19 pandemic and developing related policy was based specifically on scientific evidence. For example, in March 2021, in announcing changes to vaccination plans, Dr. Bonnie Henry, the Provincial Health Officer, stated, "This is science in action" [ 14 ]. As a public health expert with scientific voice, the Provincial Health Officer has been empowered to speak on behalf of the BC government across the COVID-19 pandemic progression. While this suggests BC is a jurisdiction that has institutionalized scientifically informed decision-making as a core tenet of effective public health crisis response, it remains unclear whether BC's COVID-19 response could, in fact, be considered evidence-informed—particularly from the perspectives of those involved in pandemic decision-making and action. Moreover, if evidence-informed, what types of evidence were utilized and through what mechanisms, how did this evidence shape decision-making, and what challenges existed in moving evidence to policy and praxis in BC's COVID-19 response?

The objectives of this study were: (1) to explore and characterize the perspectives of BC actors involved in the COVID-19 response with respect to evidence-use in COVID-19 decision-making; and (2) to identify opportunities for and barriers to evidence-informed decision-making in BC’s COVID-19 response, and more broadly. This inquiry may contribute to identifying opportunities for further strengthening the synthesis and application of evidence (considered broadly) to public health policy and decision-making, particularly in the context of future public health emergencies, both in British Columbia and other jurisdictions.

Methods

Study context

This qualitative study was conducted in the province of British Columbia (BC), Canada, a jurisdiction with a population of approximately five million people [ 15 ]. Within BC’s health sector, key actors involved in the policy response to COVID-19 included: elected officials, the BC Government’s Ministry of Health (MOH), the Provincial Health Services Authority (PHSA), Footnote 2 the Office of the Provincial Health Officer (PHO), Footnote 3 the BC Centre for Disease Control (BCCDC), Footnote 4 and Medical Health Officers (MHOs) and Chief MHOs at regional and local levels.

Health research infrastructure within the province includes Michael Smith Health Research BC [ 16 ] and multiple post-secondary research and education institutions (e.g., The University of British Columbia). Unlike other provincial (e.g., Ontario) and international (e.g., UK) jurisdictions, BC did not establish an independent, formal scientific advisory panel or separate organizational structure for public health intelligence in COVID-19. That said, a Strategic Research Advisory Council was established, reporting to the MOH and PHO, to identify COVID-19 research gaps and commission needed research for use within the COVID-19 response [ 17 ].

This research was part of a multidisciplinary UBC case study investigating the upstream determinants of the COVID-19 response in British Columbia, particularly related to institutions, politics, and organizations and how these interfaced with, and affected, pandemic governance [ 18 ]. Ethics approval for this study was provided by the University of British Columbia (UBC)’s Institutional Research Ethics Board (Certificate #: H20-02136).

Data collection

From July 2021 to January 2022, 18 semi-structured key informant interviews were conducted with BC elected officials, provincial and regional-level health officials, and civil society actors (e.g., within non-profit research organizations, unions) (Table 1). Initially, respondents were purposively sampled, based on their involvement in the COVID-19 response and their positioning within the health system organizational structure. Snowball sampling was used to identify additional respondents, with the intent of representing a range of organizational roles and actor perspectives. Participants were recruited via email invitation and provided written informed consent to participate.

Interviews were conducted virtually using Zoom® videoconferencing, with the exception of one hybrid in-person/Zoom® interview. Each interview was approximately one hour in duration. One to two research team members led each interview. The full interview protocol focused on actors’ descriptions of decision-making processes across the COVID-19 pandemic progression, from January 2020 to the date of the interviews, and they were asked to identify key decision points (e.g., emergency declaration, business closures) [see Additional File 1 for the full semi-structured interview guide]. For this study, we used a subset of interview questions focused on evidence-use in the decision-making process, and the organizational structures or actors involved, in BC's early COVID-19 pandemic response (March–August 2020). Questions were adapted to be relevant to a respondent’s expertise and particular involvement in the response. ‘Evidence’ was left undefined and considered broadly by the research team (i.e., both ‘scientific’/research-based and ‘non-scientific’ inputs) within interview questions, and therefore at the discretion of the participant as to what inputs they perceived and described as ‘evidence’ that informed or did not inform pandemic decision-making. Interviews were audio-recorded over Zoom® with permission and transcribed using NVivo Release 1.5© software. Each transcript was then manually verified for accuracy by 1–2 members of the research team.

Data analysis

An inductive thematic analysis was conducted, using a constant comparative method, to explore points of divergence and convergence across interviews and stakeholder perspectives [ 19 ]. Transcripts were inductively coded in NVivo Release 1.5© software, which was used to further organize and consolidate codes, generate a parsimonious codebook to fit the data, and retrieve interview excerpts [ 20 ]. Framework analysis was also employed as an additional method for generating analytic insights across stakeholder perspectives and contributed to refining the overall coding [ 21 ]. Triangulation across respondents and analytic methods, as well as team collaboration in reviewing and refining the codebook, contributed to validity of the analysis [ 22 ].
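As a rough illustration of the bookkeeping this kind of analysis involves, the sketch below (in Python) shows how coded interview excerpts might be organized into a codebook index and retrieved for constant comparison across interviews. It is a minimal sketch only: the codes and excerpt fragments are invented for illustration, and the study's actual coding was carried out in NVivo.

```python
from collections import defaultdict

# Hypothetical coded excerpts: (interview ID, code, excerpt fragment).
# Codes and text are invented for illustration; the study's real coding
# was done in NVivo, not with this script.
coded_excerpts = [
    ("IDI1", "evidence-use/uncertainty", "who's responsible for applying the evidence"),
    ("IDI5", "evidence-use/confidence", "the science became a driver of decisions"),
    ("IDI6", "evidence-use/skepticism", "decisions in contrast to science"),
    ("IDI14", "research-policy-interface", "what all those channels really look like"),
]

# Build a simple codebook index: code -> list of (interview, excerpt).
# This mirrors the consolidate-and-retrieve step of inductive coding.
codebook = defaultdict(list)
for interview, code, excerpt in coded_excerpts:
    codebook[code].append((interview, excerpt))

# Retrieve all excerpts filed under one code, as one would when
# comparing how different respondents talk about the same theme.
for interview, excerpt in codebook["evidence-use/uncertainty"]:
    print(interview, "->", excerpt)
```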

Results

How did evidence inform early COVID-19 policy-making in BC?

Decision-makers described their perceptions on the use of evidence in policy-making; the interface between researchers and policy-makers; and specific barriers to evidence-use in policy-making within BC’s COVID-19 response. In discussing the use of evidence, respondents focused on ‘scientific’ evidence; however, they noted a lack of clarity as to how and what evidence flowed into decision-making. They also acknowledged that ‘scientific’ evidence was one of multiple factors influencing decisions. The themes described below reflect the narrative underlying their perspectives.

Perceptions of evidence-use

Multiple provincial actors generally expressed confidence or had an overall impression that decisions were evidence-based (IDI5, 9), stating definitively that "I don't think there was a decision we made that wasn't evidence-informed" (IDI9) and that "the science became a driver of decisions that were made" (IDI5). However, at the regional health authority level, one actor voiced skepticism that policy decisions were consistently informed by scientific evidence specifically, stating, "a lot of decisions [the PHO] made were in contrast to science and then shifted to be by the science" (IDI6). The evolving nature of the available evidence and scientific understanding of the virus throughout the pandemic was acknowledged. For instance, one actor stated that, "I'll say the response has been driven by the science; the science has been changing…from what I've seen, [it] has been a very science-based response" (IDI3).

Some actors homed in on certain policy decisions they believed were or were not evidence-informed. Policy decisions in 2020 that actors believed were directly informed by scientific data included the early decision to restrict informal, household gatherings; to keep schools open for in-person learning; to implement a business safety plan requirement across the province; and to delay the second vaccine dose for maximum efficacy. One provincial public health actor noted that an early 2020 decision made within local jurisdictions to close playgrounds was not based on scientific evidence. Further, the decision prompted public health decision-makers to centralize some decision-making to the provincial level, to address decisions being made 'on the ground' that were not based on scientific evidence (IDI16). Similarly, they added that the policy decision to require masking in schools was not based on scientific evidence; rather, "it's policy informed by the noise of your community." As parents and other groups within the community pushed for masking, this was "a policy decision to help schools stay open."

Early in the pandemic response, case data in local jurisdictions were reportedly used for monitoring and planning. These "numerator data" (IDI1), for instance case or hospitalization counts, were identified as being the primary mode of evidence used to inform decisions related to the implementation or easing of public health and social measures. The ability to generate epidemiological count data early in the pandemic due to efficient scaling up of PCR testing for COVID-19 was noted as a key advantage (IDI16). As the pandemic evolved in 2020, however, perspectives diverged in relation to the type of data that decision-makers relied on. For example, it was noted that BCCDC administered an online, voluntary survey to monitor unintended consequences of public health and social measures and inform targeted interventions. Opinions varied on whether this evidence was successfully applied in decision-making. One respondent emphasized this lack of application of evidence and perceived that public health orders were not informed by the level and type of evidence available, beyond case counts: "[In] a communicable disease crisis like a pandemic, the collateral impact slash damage is important and if you're going to be a public health institute, you actually have to bring those to the front, not just count cases" (IDI1).
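As a rough sketch of how such count data can feed a decision rule, the example below computes a trailing 7-day average of daily case counts and flags days on which it crosses a threshold. The counts and the threshold are invented for illustration and do not reflect any criteria actually used in BC.

```python
# Illustrative only: a trailing 7-day average of daily case counts with a
# hypothetical threshold for reviewing public health measures. The numbers
# are invented and do not reflect any rule actually used in BC.
daily_cases = [12, 18, 25, 31, 40, 38, 45, 52, 60, 58, 49, 41, 35, 30]
THRESHOLD = 40  # hypothetical trigger, not an actual BC criterion

def rolling_average(series, window=7):
    """Return the trailing `window`-day average for each day it is defined."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

for day, avg in enumerate(rolling_average(daily_cases), start=7):
    flag = "review measures" if avg > THRESHOLD else "steady"
    print(f"day {day:2d}: 7-day avg = {avg:5.1f} -> {flag}")
```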

There was also some uncertainty and a perceived lack of transparency or clarity as to how or whether data analytic 'entities', such as BCCDC or research institutions, fed directly into decision-making. As a research actor shared, "I'm not sure that I know quite what all those channels really look like…I'm sure that there's a lot of improvement that could be driven in terms of how we bring strong evidence to actual policy and practice" (IDI14). Another actor explicitly described the way information flowed into decision-making in the province as "organic" (IDI7). They also noted the lack of a formal, independent science advisory panel for BC's COVID-19 response, of the kind that existed in other provincial and international jurisdictions. Relatedly, one regional health authority actor perceived that the committee that was convened to advise the province on research, and established for the purpose of applying research to the COVID-19 response, "should have focused more on knowledge translation, but too much time was spent commissioning research and asking what kinds of questions we needed to ask rather than looking at what was happening in other jurisdictions" (IDI6). Overall, multiple actors noted a lack of clarity around application of evidence and who is responsible for ensuring evidence is applied. As a BCCDC actor expressed, in relation to how to prevent transmission of COVID-19:

We probably knew most of the things that we needed to know about May of last year [2020]. So, to me, it’s not even what evidence you need to know about, but who’s responsible for making sure that you actually apply the evidence to the intervention? Because so many of our interventions have been driven by peer pressure and public expectation rather than what we know to be the case [scientifically] (IDI1).

Some described the significance of predictive disease modelling to understand the COVID-19 trajectory and inform decisions, as well as to demonstrate to the public the effectiveness of particular measures, which "help[ed] sustain our response" (IDI2). Others, however, perceived that "mathematical models were vastly overused [and] overvalued in decision-making around this pandemic" (IDI1) and that modellers stepped outside their realm of expertise in providing models and policy recommendations through the public media.
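For readers unfamiliar with the predictive disease modelling being debated here, the simplest example is a compartmental SIR (susceptible-infected-recovered) model. The sketch below uses invented parameters, not those of any model actually used in BC's response, to show how such a model projects an epidemic trajectory; lowering the transmission rate mimics the intended effect of public health measures.

```python
# Minimal SIR compartmental model with a fixed one-day time step.
# Parameters are invented for illustration and do not reflect any model
# actually used in BC's COVID-19 response.
def sir(beta, gamma, s0, i0, days):
    s, i, r = s0, i0, 0.0
    trajectory = []
    for _ in range(days):
        new_infections = beta * s * i   # transmission term
        new_recoveries = gamma * i      # recovery term
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((s, i, r))
    return trajectory

# Project 120 days for a population normalized to 1, starting with 0.1%
# infected; print a snapshot every 30 days.
for day, (s, i, r) in enumerate(sir(beta=0.3, gamma=0.1, s0=0.999, i0=0.001, days=120)):
    if day % 30 == 0:
        print(f"day {day:3d}: susceptible={s:.3f} infected={i:.3f} recovered={r:.3f}")
```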

Overall, while many actors’ impressions were that the response was evidence-informed, an overarching theme was a lack of clarity and uncertainty with respect to how evidence actually flowed into decision-making processes, as well as what specific evidence was used and how. Participants noted various mechanisms created or already in place prior to COVID-19 that fed data into, and facilitated, decision-making. There was an acknowledgement that multiple forms of evidence—including scientific data, data on public perceptions, as well as public pressure—appeared to have influenced decision-making.

Interface between researchers and policy-makers

There was a general sense that the Ministry supported the use of scientific and research-based evidence specifically. Some actors identified particular Ministry personnel as being especially amenable to research and focused on data to inform decisions and implementation. More broadly, the government-research interface was characterized by one actor as an amicable one, a "research-friendly government", and that the Ministry of Health (MOH), specifically, has a research strategy whereby, "it’s literally within their bureaucracy to become a more evidence-informed organization" (IDI11). The MOH was noted to have funded a research network intended to channel evidence into health policy and practice, and which reported to the research side of the MOH.

Other actors perceived relatively limited engagement with the broader scientific community. Some perceived an overreliance on 'in-house expertise' or a "we can do that [ourselves]" mentality within government that precluded academic researchers' involvement, as well as a sense of "not really always wanting to engage with academics to answer policy questions because they don't necessarily see the value that comes" (IDI14). With respect to the role of research, an actor stated:

There needs to be a provincial dialogue around what evidence is and how it gets situated, because there’s been some tension around evidence being produced and not used or at least not used in the way that researchers think that it should be (IDI11).

Those involved in data analytics within the MOH acknowledged a challenge in making epidemiological data available to academic researchers, because "at the time, you’re just trying to get decisions made" (IDI7). Relatedly, a research actor described the rapid instigation of COVID-19 research and pivoting of academic research programs to respond to the pandemic, but perceived a slow uptake of these research efforts from the MOH and PHSA for decision-making and action. Nevertheless, they too acknowledged the challenge of using research evidence, specifically, in an evolving and dynamic pandemic:

I think we’ve got to be realistic about what research in a pandemic situation can realistically contribute within very short timelines. I mean, some of these decisions have to be made very quickly...they were intuitive decisions, I think some of them, rather than necessarily evidence-based decisions (IDI14).

Relatedly, perspectives diverged on the relationship between 'government' and public health expertise, and whether or not public health actors had an independent voice in articulating evidence to inform governance during the pandemic. Largely from Ministry stakeholders, and those within the PHSA, the impressions were that Ministry actors were relying on public health advice and scientific expertise. As one actor articulated, "[the] government actually respected and acknowledged and supported public health expertise" (IDI9). Others emphasized a "trust of the people who understood the problem" (IDI3)—namely, those within public health—and perceived that public health experts were enabled "to take a lead role in the health system, over politics" (IDI12). This perspective was not as widely held by those in the public health sector, as one public health actor expressed, "politicians and bureaucrats waded into public health practice in a way that I don't think was appropriate" and that, "in the context of a pandemic, it’s actually relatively challenging to bring true expert advice because there’s too many right now. Suddenly, everybody’s a public health expert, but especially bureaucrats and politicians." They went on to share that the independence of public health to speak and act—and for politicians to accept independent public health advice—needs to be protected and institutionalized as "core to good governance" (IDI1). Relatedly, an elected official linked this to the absence of a formal, independent science table to advise government and stated that, "I think we should have one established permanently. I think we need to recognize that politicians aren't always the best at discerning scientific evidence and how that should play into decision-making" (IDI15).

These results highlight the divergent perspectives participants had as to the interface between research and policy-making and a lack of understanding regarding process and roles.

Challenges in applying evidence to policy decisions

Perspectives converged with respect to the existence of numerous challenges with and barriers to applying evidence to health policy and decision-making. These related to the quality and breadth of available data, both in terms of absence and abundance. For instance, as one public health actor noted in relation to health policy-making, "you never have enough information. You always have an information shortage, so you're trying to make the best decisions you can in the absence of usually really clear information" (IDI8). On the other hand, as evidence emerged en masse across jurisdictions in the pandemic, there were challenges with synthesizing evidence in a timely fashion for 'real-time' decision-making. A regional health authority actor highlighted this challenge early in the COVID-19 pandemic and perceived that there was not a provincial group bringing new synthesized information to decision-makers on a daily basis (IDI6). Other challenges related to the complexity of the political-public health interface with respect to data and scientific expertise, which "gets debated and needs to be digested by the political process. And then decisions are made" (IDI5). This actor further expressed that debate among experts needs to be balanced with efficient crisis response, that one has to "cut the debate short. For the sake of expediency, you need to react."

It was observed that, in BC’s COVID-19 response, data was gathered from multiple sources with differing data collection procedures, and sometimes with conflicting results—for instance, 'health system data' analyzed by the PHSA and 'public health data' analyzed by the BCCDC. This was observed to present challenges from a political perspective in discerning "who’s actually getting the 'right' answers" (IDI7). An added layer of complexity was reportedly rooted in how to communicate such evidence to the public and "public trust in the numbers" (IDI7), particularly as public understanding of what evidence is, how it is developed, and why it changes, can influence public perceptions of governance.

Finally, as one actor from within the research sector noted, organizationally and governance-wise, the system was "not very well set up to actually use research evidence…if we need to do better at using evidence in practice, we need to fix some of those things. And we actually know what a lot of those things are." For example, "there's no science framework for how organizations work within that" and "governments shy away from setting science policy" (IDI11). This challenge was framed as having both a macro-level dimension, in that higher-level leadership structures were observed not to incentivize the development and effective use of research among constituent organizations, and micro-level implications: from this actor's perspective, without such policy frameworks researchers will struggle to obtain necessary data-sharing agreements with health authorities and to navigate other barriers to conducting action-oriented research that informs policy and practice.

Similarly, a research actor perceived that the COVID-19 pandemic highlighted pre-existing fragmentation, "a pretty disjointed sort of enterprise" in how research is organized in the province:

I think pandemics need strong leadership and I think pandemic research response needed probably stronger leadership than it had. And I think that’s to do with [how] no one really knew who was in charge because no one really was given the role of being truly in charge of the research response (IDI14).

This individual underscored that, at the time of the interview, there were nearly 600 separate research projects being conducted in BC that focused on COVID-19. From their perspective, this reflected the need for more centralized direction to provide leadership, coordinate research efforts, and catalyze collaborations.

Overall, respondents perceived a lack of coordination and continuity across data sources, and a lack of explicit guidelines on evidence-use in the decision-making process, which resulted in a sense of fragmentation. The tension between the processes involved in research and the need for rapid decision-making was perceived as a barrier to using evidence to inform policy.

Discussion

This study explored the use of evidence to inform early COVID-19 decision-making within British Columbia, Canada, from the perspectives of decision-makers themselves. Findings underscore the complexity of synthesizing and applying evidence (i.e., 'scientific' or research-based evidence most commonly discussed) to support public health policy in 'real-time', particularly in the context of public health crisis response. Despite a substantial and long-established literature on evidence-based clinical decision-making [ 23 , 24 ], understanding is more limited as to how public health crisis decision-making can be evidence-informed or evidence-based. By contributing to a growing global scholarship of retrospective examinations of COVID-19 decision-making processes [ 25 , 26 , 27 , 28 ], our study aimed to broaden this understanding and, thus, support the strengthening of public health emergency preparedness in Canada and globally.

Specifically, we found that decision-makers clearly understood 'evidence-based' or 'evidence-informed' to mean 'scientific' evidence. They acknowledged other forms of evidence, such as professional expertise and contextual information, as influencing factors. We identified four key points related to the process of evidence-use in BC's COVID-19 decision-making, with broader implications as well:

Role Differences: The tensions we observed primarily related to a lack of clarity among the various agencies involved as to their respective roles and responsibilities in a public health emergency, a finding that aligns with research on evidence-use in prior pandemics in Canada [ 29 ]. Relatedly, scientists and policy-makers experienced challenges with communication and information-flow between one another and the public, which may reflect their different values and standards, framing of issues and goals, and language [ 30 ].

Barriers to Evidence-Use: A reported lack of coordination and consistency in how data were collected across jurisdictions impeded the efficiency and timeliness of decision-making. Lancaster and Rhodes (2020) suggest that evidence itself should be treated as a process, rather than a commodity, in evidence-based practice [ 31 ]. Thus, shifting the discussion from 'barriers to evidence use' to an approach that fosters dialogue across different forms of evidence and different actors in the process may be beneficial.

Use of Evidence in Public Health versus Medicine: Evidence-based public health can be conflated with the concept of evidence-based medicine, though these are distinct in the type of information that needs to be considered. While ‘research evidence’ was the primary type of evidence used, other important types of evidence informed policy decisions in the COVID-19 public health emergency—for example, previous experience, public values, and preferences. This concurs with Brownson’s (2009) framework of factors driving decision-making in evidence-based public health [ 32 ]. Namely, that a balance between multiple factors, situated in particular environmental and organizational context, shapes decision-making: 1) best available research evidence; 2) clients'/population characteristics, state, needs, values, and preferences; and 3) resources, including a practitioner’s expertise. Thus, any evaluation of evidence-use in public health policy must take into consideration this multiplicity of factors at play, and draw on frameworks specific to public health [ 33 ]. Moreover, public health decision-making requires much more attention to behavioural factors and non-clinical impacts, which is distinct from the largely biology-focused lens of evidence-based medicine.

Transparency: Many participants emphasized a lack of explanation about why certain decisions were made and a lack of understanding about who was involved in decisions and how those decisions were made. This point was confirmed by a recent report on lessons learned in BC during the COVID-19 pandemic, in which the authors describe "the desire to know more about the reasons why decisions were taken" as a "recurring theme" (13:66). These findings point to a need for clear and transparent mechanisms for channeling evidence, irrespective of the form used, into public health crisis decision-making.

Our findings also pointed to challenges associated with the infrastructure for utilizing research evidence in BC policy-making, specifically a need for more centralized authority on the research side of the public health emergency response to avoid duplication of efforts and more effectively synthesize findings for efficient use. Yet, as a participant questioned, what is the realistic role of research in a public health crisis response? Generally, most evidence used to inform crisis response measures is local epidemiological data or modelling data [ 7 ]. As corroborated by our findings, challenges exist in coordinating data collection and synthesis of these local data across jurisdictions to inform 'real-time' decision-making, let alone to feed into primary research studies [ 34 ].

On the other hand, as was the case in the COVID-19 pandemic, a 'high noise' research environment soon became another challenge as data became available to researchers. Various mechanisms have been established to address these challenges amid the COVID-19 pandemic, both to synthesize scientific evidence globally and to create channels for research evidence to support timely decision-making. For instance: 1) research networks and collaborations are working to coordinate research efforts (e.g., the COVID-END network [ 35 ]); 2) independent research panels or committees within jurisdictions provide scientific advice to inform decision-making; and 3) research foundations, funding agencies, and platforms for knowledge mobilization (e.g., academic journals) continue to streamline funding through targeted calls for COVID-19 research grant proposals, or for publication of COVID-19 research articles. While our findings describe the varied forms of evidence used in COVID-19 policy-making—beyond scientific evidence—they also point to the opportunity for further investment in infrastructure that coordinates, streamlines, and strengthens collaborations between health researchers and decision-makers, so that results are taken up into policy decisions in a timely way.

Finally, in considering these findings, it is important to note the study's scope and limitations: We focused on evidence use in a single public health emergency, in a single province. Future research could expand this inquiry to a multi-site analysis of evidence-use in pandemic policy-making, with an eye to synthesizing lessons learned and best practices. Additionally, our sample of participants included only one elected official, so perspectives were limited from this type of role. The majority of participants were health officials who primarily referred to and discussed evidence as ‘scientific’ or research-based evidence. Further work could explore the facilitators and barriers to evidence-use from the perspectives of elected officials and Ministry personnel, particularly with respect to the forms of evidence—considered broadly—and other varied inputs, that shape decision-making in the public sphere. This could include a more in-depth examination of policy implementation and how the potential societal consequences of implementation factor into public health decision-making.

Conclusions

We found that the policy decisions made during the initial stages of the COVID-19 pandemic were perceived by actors in BC's response as informed by—though not always based on—scientific evidence specifically; however, decision-makers also considered other contextual factors and drew on prior pandemic-related experience to inform decision-making, as is common in evidence-based public health practice [ 32 ]. The respondents' experiences point to specific areas that need to be considered in planning for future public health emergencies, including information flow between policy-makers and researchers, coordination in how data are collected, and transparency in how decisions are made—all of which reflect a need to improve communication. Furthermore, shifting the discourse from evidence as a commodity to evidence-use as a process will be helpful in addressing barriers to evidence-use, as well as in increasing understanding about the public health decision-making process as distinct from clinical medicine. Finally, there is a critical need for clear mechanisms that channel evidence (whether 'scientific', research-based, or otherwise) into health crisis decision-making, including identifying and communicating the decision-making process to those producing and synthesizing evidence. The COVID-19 pandemic experience is an opportunity to reflect on what needs to be done to build our public health systems for the future [ 36 , 37 ]. Understanding and responding to the complexities of decision-making as we move forward, particularly with respect to the synthesis and use of evidence, can contribute to strengthening preparedness for future public health emergencies.

Availability of data and materials

The data that support the findings of this study are not publicly available to maintain the confidentiality of research participants.

Notes

1. The terms 'evidence-informed' and 'evidence-based' decision-making are used throughout this paper, though they are distinct. The term 'evidence-informed' suggests that evidence is used and considered, though not necessarily solely determinative in decision-making [ 38 ].

2. The Provincial Health Services Authority (PHSA) works with the Ministry of Health (MOH) and regional health authorities to oversee the coordination and delivery of programs.

3. The Office of the Provincial Health Officer (PHO) has binding legal authority in the case of an emergency, and responsibility to monitor the health of BC's population and provide independent advice to Ministers and public offices on public health issues.

4. The British Columbia Centre for Disease Control (BCCDC) is a program of the PHSA and provides provincial and national disease surveillance, detection, treatment, prevention, and consultation.

Abbreviations

BC: British Columbia

BCCDC: British Columbia Centre for Disease Control

COVID-19: Coronavirus Disease 2019

MHO: Medical Health Officer

MOH: Ministry of Health

PHO: Provincial Health Officer

PHSA: Provincial Health Services Authority

SARS-CoV-2: Severe Acute Respiratory Syndrome Coronavirus 2

UBC: University of British Columbia

References

1. Rubin O, Errett NA, Upshur R, Baekkeskov E. The challenges facing evidence-based decision making in the initial response to COVID-19. Scand J Public Health. 2021;49(7):790–6.

2. Williams GA, Ulla Díez SM, Figueras J, Lessof S. Translating evidence into policy during the COVID-19 pandemic: bridging science and policy (and politics). Eurohealth (Lond). 2020;26(2):29–48.

3. Vickery J, Atkinson P, Lin L, Rubin O, Upshur R, Yeoh EK, et al. Challenges to evidence-informed decision-making in the context of pandemics: qualitative study of COVID-19 policy advisor perspectives. BMJ Glob Health. 2022;7(4):1–10.

4. Piper J, Gomis B, Lee K. "Guided by science and evidence"? The politics of border management in Canada's response to the COVID-19 pandemic. Front Polit Sci. 2022;4.

5. Cairney P. The UK government's COVID-19 policy: what does "Guided by the science" mean in practice? Front Polit Sci. 2021;3(March):1–14.

6. Colman E, Wanat M, Goossens H, Tonkin-Crine S, Anthierens S. Following the science? Views from scientists on government advisory boards during the COVID-19 pandemic: a qualitative interview study in five European countries. BMJ Glob Health. 2021;6(9):1–11.

7. Salajan A, Tsolova S, Ciotti M, Suk JE. To what extent does evidence support decision making during infectious disease outbreaks? A scoping literature review. Evid Policy. 2020;16(3):453–75.

8. Cairney P. The UK government's COVID-19 policy: assessing evidence-informed policy analysis in real time. Br Polit. 2021;16(1):90–116.

9. Lancaster K, Rhodes T, Rosengarten M. Making evidence and policy in public health emergencies: lessons from COVID-19 for adaptive evidence-making and intervention. Evid Policy. 2020;16(3):477–90.

10. Yang K. What can COVID-19 tell us about evidence-based management? Am Rev Public Adm. 2020;50(6–7):706–12.

11. Parkhurst J. The politics of evidence: from evidence-based policy to the good governance of evidence. Abingdon: Routledge; 2017.

12. Office of the Prime Minister. Minister of Health mandate letter [Internet]. 2021. https://pm.gc.ca/en/mandate-letters/2021/12/16/minister-health-mandate-letter

13. de Faye B, Perrin D, Trumpy C. COVID-19 lessons learned review: final report. Victoria, BC; 2022.

14. First Nations Health Authority. Evolving vaccination plans is science in action: Dr. Bonnie Henry. First Nations Health Authority. 2021.

15. BC Stats. 2021 sub-provincial population estimates highlights. Victoria, BC; 2022.

16. Michael Smith Health Research BC [Internet]. 2023. healthresearchbc.ca. Accessed 25 Jan 2023.

17. Michael Smith Health Research BC. SRAC [Internet]. 2023. https://healthresearchbc.ca/strategic-provincial-advisory-committee-srac/. Accessed 25 Jan 2023.

18. Brubacher LJ, Hasan MZ, Sriram V, Keidar S, Wu A, Cheng M, et al. Investigating the influence of institutions, politics, organizations, and governance on the COVID-19 response in British Columbia, Canada: a jurisdictional case study protocol. Health Res Policy Syst. 2022;20(1):1–10.

19. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3:77–101.

20. DeCuir-Gunby JT, Marshall PL, McCulloch AW. Developing and using a codebook for the analysis of interview data: an example from a professional development research project. Field Methods. 2011;23(2):136–55.

21. Gale NK, Heath G, Cameron E, Rashid S, Redwood S. Using the framework method for the analysis of qualitative data in multi-disciplinary health research. BMC Med Res Methodol. 2013;13(117):1–8.

22. Creswell JW, Miller DL. Determining validity in qualitative inquiry. Theory Pract. 2000;39(3):124–30.

23. Sackett D. How to read clinical journals: I. Why to read them and how to start reading them critically. Can Med Assoc J. 1981;124(5):555–8.

24. Evidence-Based Medicine Working Group. Evidence-based medicine: a new approach to teaching the practice of medicine. JAMA. 1992;268(17):2420–5.

25. Allin S, Fitzpatrick T, Marchildon GP, Quesnel-Vallée A. The federal government and Canada's COVID-19 responses: from "we're ready, we're prepared" to "fires are burning." Health Econ Policy Law. 2022;17(1):76–94.

26. Bollyky TJ, Hulland EN, Barber RM, Collins JK, Kiernan S, Moses M, et al. Pandemic preparedness and COVID-19: an exploratory analysis of infection and fatality rates, and contextual factors associated with preparedness in 177 countries, from Jan 1, 2020, to Sept 30, 2021. Lancet. 2022;6736(22):1–24.

27. Kuhlmann S, Hellström M, Ramberg U, Reiter R. Tracing divergence in crisis governance: responses to the COVID-19 pandemic in France, Germany and Sweden compared. Int Rev Adm Sci. 2021;87(3):556–75.

28. Haldane V, De Foo C, Abdalla SM, Jung AS, Tan M, Wu S, et al. Health systems resilience in managing the COVID-19 pandemic: lessons from 28 countries. Nat Med. 2021;27(6):964–80.

29. Rosella LC, Wilson K, Crowcroft NS, Chu A, Upshur R, Willison D, et al. Pandemic H1N1 in Canada and the use of evidence in developing public health policies—a policy analysis. Soc Sci Med. 2013;83:1–9.

30. Saner M. A map of the interface between science & policy. Ottawa, ON; 2007. Report No.: January 1.

31. Lancaster K, Rhodes T. What prevents health policy being "evidence-based"? New ways to think about evidence, policy and interventions in health. Br Med Bull. 2020;135(1):38–49.

32. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.

33. Rychetnik L, Frommer M, Hawe P, Shiell A. Criteria for evaluating evidence on public health interventions. J Epidemiol Community Health. 2002;56:119–27.

34. Khan Y, Brown A, Shannon T, Gibson J, Généreux M, Henry B, et al. Public health emergency preparedness: a framework to promote resilience. BMC Public Health. 2018;18(1):1–16.

35. COVID-19 Evidence Network to Support Decision-Making. COVID-END [Internet]. 2023. https://www.mcmasterforum.org/networks/covid-end. Accessed 25 Jan 2023.

36. Canadian Institutes of Health Research. Moving forward from the COVID-19 pandemic: 10 opportunities for strengthening Canada's public health systems. 2022.

37. Di Ruggiero E, Bhatia D, Umar I, Arpin E, Champagne C, Clavier C, et al. Governing for the public's health: governance options for a strengthened and renewed public health system in Canada. 2022.

38. Adjoa Kumah E, McSherry R, Bettany-Saltikov J, Hamilton S, Hogg J, Whittaker V, et al. Evidence-informed practice versus evidence-based practice educational interventions for improving knowledge, attitudes, understanding, and behavior toward the application of evidence into practice: a comprehensive systematic review of undergraduate students. Campbell Syst Rev. 2019;15(e1015):1–19.


Acknowledgements

We would like to extend our gratitude to current and former members of the University of British Columbia Working Group on Health Systems Response to COVID-19 who contributed to various aspects of this study, including Shelly Keidar, Kristina Jenei, Sydney Whiteford, Dr. Md Zabir Hasan, Dr. David M. Patrick, Dr. Maxwell Cameron, Mahrukh Zahid, Dr. Yoel Kornreich, Dr. Tammi Whelan, Austin Wu, Shivangi Khanna, and Candice Ruck.

Funding

Financial support for this work was generously provided by the University of British Columbia's Faculty of Medicine (Grant No. GR004683) and the Peter Wall Institute for Advanced Studies (Grant No. GR016648), as well as by a Canadian Institutes of Health Research Operating Grant (Grant No. GR019157). These funding bodies were not involved in the design of the study; the collection, analysis, or interpretation of data; or the writing of this manuscript.

Author information

Authors and affiliations

School of Population and Public Health, University of British Columbia, Vancouver, Canada

Laura Jane Brubacher, Chris Y. Lovato, Veena Sriram, Michael Cheng & Peter Berman

School of Public Health Sciences, University of Waterloo, Waterloo, Canada

Laura Jane Brubacher

School of Public Policy and Global Affairs, University of British Columbia, Vancouver, Canada

Veena Sriram


Contributions

CYL, PB, and VS obtained funding for and designed the study. LJB, MC, and PB conducted data collection. LJB and VS analyzed the qualitative data. CYL and LJB collaboratively wrote the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Laura Jane Brubacher .

Ethics declarations

Ethics approval and consent to participate

This case study received the approval of the UBC Behavioural Research Ethics Board (Certificate # H20-02136). Participants provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Semi-structured interview guide [* = questions used for this specific study]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Brubacher, L.J., Lovato, C.Y., Sriram, V. et al. The use of evidence to guide decision-making during the COVID-19 pandemic: divergent perspectives from a qualitative case study in British Columbia, Canada. Health Res Policy Sys 22, 66 (2024). https://doi.org/10.1186/s12961-024-01146-2


Received: 08 February 2023

Accepted: 29 April 2024

Published: 03 June 2024

DOI: https://doi.org/10.1186/s12961-024-01146-2


Keywords

  • Decision-making
  • Public health
  • Policy-making
  • Qualitative


When Should You Neuter or Spay Your Dog?

Updated guidelines to decrease risk of certain cancers, joint disorders.

  • by Trina Wood
  • May 28, 2024

English Mastiff puppy stands outside on grass. A new UC Davis study updates guidelines on when to neuter or spay a dog to avoid health risks. (Claudio Gennari via Flickr, CC BY 4.0)

Researchers at the University of California, Davis, have updated their guidelines on when to neuter 40 popular dog varieties by breed and sex. Their recent paper in Frontiers in Veterinary Science adds five breeds to a line of research that began in 2013 with a study suggesting that early neutering of golden retrievers puts them at increased risk of joint diseases and certain cancers.

That initial study set off a flurry of debate about the best age to neuter other popular breeds. Professors Lynette and Benjamin Hart of the School of Veterinary Medicine, the study's lead authors, set out to add more breed studies by examining more than a decade of data from thousands of dogs treated at the UC Davis veterinary hospital. Their goal was to provide owners with more information to make the best decision for their animals.

They specifically looked at the correlation between neutering or spaying a dog before 1 year of age and the dog's risk of developing certain cancers and joint disorders. For some breeds, the cancers include cancers of the lymph nodes, bones, blood vessels and mast cells; the joint disorders include hip and elbow dysplasia and cranial cruciate ligament tears. Joint disorders and cancers are of particular interest because neutering removes male and female sex hormones that play key roles in important body processes such as the closure of bone growth plates.

For the most recent study, they focused on German short/wirehaired pointers, mastiffs, Newfoundlands, Rhodesian ridgebacks and Siberian huskies. Data were collected from the UC Davis veterinary hospital's records, which included more than 200 cases for each of these five breeds, all weighing more than 20 kg (44 pounds), spanning January 2000 through December 2020.

The Harts said their updated guidelines emphasize the importance of personalized decisions regarding the neutering of dogs, considering the dog's breed, sex and context. A table summarizing the guidelines for all 40 breeds that have been studied, including the five new breeds, can be found here.

Health risks differ among breeds

“It’s always complicated to consider an alternate paradigm,” said Professor Lynette Hart. “This is a shift from a long-standing model of early spay/neuter practices in the U.S. and much of Europe to neuter by 6 months of age, but important to consider as we see the connections between gonadal hormone withdrawal from early spay/neuter and potential health concerns.”

The study found major differences among these breeds in the risk of developing joint disorders and cancers when neutered early. Male and female pointers had elevated risks of joint disorders and cancers; male mastiffs had increased risks of cranial cruciate ligament tears and lymphoma; female Newfoundlands had heightened risks of joint disorders; female Rhodesian ridgebacks had heightened risks of mast cell tumors with very early neutering; and Siberian huskies showed no significant effects on joint disorders or cancers.

“We’re invested in making contributions to people’s relationship with their animals,” said Benjamin Hart, distinguished professor emeritus. “This guidance provides information and options for veterinarians to give pet owners, who should have the final decision-making role for the health and well-being of their animal.”

Their combined research studies, along with others, will soon be available in the open-access journal Frontiers in Veterinary Science as a free e-book, Effective Options Regarding Spay or Neuter of Dogs.

Other researchers on this UC Davis study include: Abigail Thigpen, Maya Lee, Miya Babchuk, Jenna Lee, Megan Ho, Sara Clarkson and Juliann Chou with the School of Veterinary Medicine; and Neil Willits with the Department of Statistics.

The research received a small amount of funding from the Center for Companion Animal Health, but was primarily conducted by the above authors as volunteers.


A Guide to Measuring Wealth, Income, and Replacement Rates in the Health and Retirement Study

Guide by Nilufer Gok, Anqi Chen, and Laura D. Quinby

The Health and Retirement Study (HRS) is a primary source of information on retirement wealth, income, and replacement rates, but calculating these measures requires a host of methodological choices that affect the results.  Since researchers have not yet established clear best practices for dealing with the survey’s complex structure, studies using the HRS are often inconsistent and difficult to replicate.  Additionally, the steep learning curve is daunting for young scholars interested in exploring retirement issues.  The CRR aims to make the HRS more accessible by providing: 1) a methodological guide that identifies the key conceptual and technical choices that must be made when analyzing a household’s financial resources in the HRS and 2) clean, well-documented code that builds on RAND’s efforts to calculate retirement wealth, income, and replacement rates.
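To make concrete why these methodological choices matter, here is a minimal Python sketch. It is not the CRR's or RAND's actual code; the function, income figures and averaging windows are entirely hypothetical. It shows how one common definitional choice, namely which pre-retirement earnings to use as the denominator, changes a computed replacement rate:

# Illustrative only: hypothetical numbers, not HRS data.
def replacement_rate(retirement_income, earnings_history, window):
    # Average pre-retirement earnings over the final `window` years.
    baseline = sum(earnings_history[-window:]) / window
    return retirement_income / baseline

earnings = [52_000, 55_000, 60_000, 30_000]  # final year is low due to a phased retirement
pension = 36_000

print(round(replacement_rate(pension, earnings, window=1), 2))  # 1.2: final-year basis
print(round(replacement_rate(pension, earnings, window=4), 2))  # 0.73: four-year-average basis

The same hypothetical household looks more than fully prepared under one definition and noticeably under-prepared under the other, which is exactly the kind of inconsistency that standardized methods and shared code aim to eliminate.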



What is decision making?


Decisions, decisions. When was the last time you struggled with a choice? Maybe it was this morning, when you decided to hit the snooze button—again. Perhaps it was at a restaurant, with a miles-long menu and the server standing over you. Or maybe it was when you left your closet in a shambles after trying on seven different outfits before a big presentation. Often, making a decision—even a seemingly simple one—can be difficult. And people will go to great lengths—and pay serious sums of money—to avoid having to make a choice. The expensive tasting menu at the restaurant, for example. Or limiting your closet choices to black turtlenecks, à la Steve Jobs.


If you've ever wrestled with a decision at work, you're definitely not alone. According to McKinsey research, executives spend a significant portion of their time—nearly 40 percent, on average—making decisions. Worse, they believe most of that time is poorly used. People struggle so much with decisions that we actually get exhausted from having to decide too much, a phenomenon called decision fatigue.

But decision fatigue isn't the only cost of ineffective decision making. According to a McKinsey survey of more than 1,200 global business leaders, inefficient decision making costs a typical Fortune 500 company 530,000 days of managers' time each year, equivalent to about $250 million in annual wages. That's a lot of turtlenecks.

How can business leaders ease the burden of decision making and put this time and money to better use? Read on to learn the ins and outs of smart decision making—and how to put it to work.


How can organizations untangle ineffective decision-making processes?

McKinsey research has shown that agile is the ultimate solution for many organizations looking to streamline their decision making. Agile organizations are more likely to put decision making in the right hands, are faster at reacting to (or anticipating) shifts in the business environment, and often attract top talent who prefer working at companies with greater empowerment and fewer layers of management.

For organizations looking to become more agile, it’s possible to quickly boost decision-making efficiency by categorizing the type of decision to be made and adjusting the approach accordingly. In the next section, we review three types of decision making and how to optimize the process for each.

What are three keys to faster, better decisions?

Business leaders today have access to more sophisticated data than ever before. But it hasn’t necessarily made decision making any easier. For one thing, organizational dynamics—such as unclear roles, overreliance on consensus, and death by committee—can get in the way of straightforward decision making. And more data often means more decisions to be taken, which can become too much for one person, team, or department. This can make it more difficult for leaders to cleanly delegate, which in turn can lead to a decline in productivity.

Leaders are growing increasingly frustrated with broken decision-making processes, slow deliberations, and uneven decision-making outcomes. Fewer than half of the 1,200 respondents to a McKinsey survey report that decisions are timely, and 61 percent say that at least half the time they spend making decisions is ineffective.

What's the solution? According to McKinsey research, effective solutions center on categorizing decision types and organizing different processes to support each type. Further, each decision category should be assigned its own practice—stimulating debate, for example, or empowering employees—to yield improvements in effectiveness.

Here are the three decision categories that matter most to senior leaders, and the standout practice that makes the biggest difference for each type of decision (a small code sketch of this categorization logic follows the list).

  • Big-bet decisions are infrequent but high risk, such as acquisitions. These decisions carry the potential to shape the future of the company, and as a result are generally made by top leaders and the board. Spurring productive debate by assigning someone to argue the case for and against a potential decision can improve big-bet decision making.
  • Cross-cutting decisions, such as pricing, can be frequent and high risk. These are usually made by business unit heads, in cross-functional forums as part of a collaborative process. These types of decisions can be improved by doubling down on process refinement. The ideal process should be one that helps clarify objectives, measures, and targets.
  • Delegated decisions are frequent but low risk and are handled by an individual or working team with some input from others. Delegated decision making can be improved by ensuring that the responsibility for the decision is firmly in the hands of those closest to the work. This approach also enhances engagement and accountability.
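As a quick illustration of the categorization logic above, here is a minimal Python sketch. The enum names, the boolean inputs, and the handling of the infrequent, low-risk quadrant (which the taxonomy does not address) are our own assumptions rather than anything McKinsey publishes:

# Illustrative sketch of the frequency-by-risk decision taxonomy described above.
from enum import Enum

class DecisionType(Enum):
    BIG_BET = "big-bet"              # infrequent, high risk: spur productive debate
    CROSS_CUTTING = "cross-cutting"  # frequent, high risk: refine the collaborative process
    DELEGATED = "delegated"          # frequent, lower risk: empower those closest to the work

def categorize(frequent: bool, high_risk: bool) -> DecisionType:
    """Map a decision's frequency and risk onto the three categories above."""
    if high_risk:
        return DecisionType.CROSS_CUTTING if frequent else DecisionType.BIG_BET
    # The taxonomy leaves the infrequent, low-risk quadrant undefined;
    # treating it as delegated is our simplifying assumption.
    return DecisionType.DELEGATED

print(categorize(frequent=False, high_risk=True))  # DecisionType.BIG_BET, e.g. an acquisition
print(categorize(frequent=True, high_risk=True))   # DecisionType.CROSS_CUTTING, e.g. pricing
print(categorize(frequent=True, high_risk=False))  # DecisionType.DELEGATED

The point in practice is organizational rather than computational: once a decision is categorized, it gets routed to the standout practice for that category (debate, process refinement, or delegation).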

In addition, business leaders can take the following four actions to help sustain rapid decision making:

  • Focus on the game-changing decisions, ones that will help an organization create value and serve its purpose.
  • Convene only necessary meetings, and eliminate lengthy reports. Turn unnecessary meetings into emails, and watch productivity bloom. For necessary meetings, provide short, well-prepared prereads to aid in decision making.
  • Clarify the roles of decision makers and other voices. Who has a vote, and who has a voice?
  • Push decision-making authority to the front line—and tolerate mistakes.


How can business leaders effectively delegate decision making?

Business is more complex and dynamic than ever, meaning business leaders are faced with needing to make more decisions in less time. Decision making takes up an inordinate amount of management’s time—up to 70 percent for some executives—which leads to inefficiencies and opportunity costs.

As discussed above, organizations should treat different types of decisions differently. Decisions should be classified according to their frequency, risk, and importance. Delegated decisions are the most mysterious for many organizations: they are the most frequent, and yet the least understood. Only about a quarter of survey respondents report that their organizations make high-quality and speedy delegated decisions. And yet delegated decisions, because they happen so often, can have a big impact on organizational culture.

The key to better delegated decisions is to empower employees by giving them the authority and confidence to act. That means not simply telling employees which decisions they can or can’t make; it means giving employees the tools they need to make high-quality decisions and the right level of guidance as they do so.

Here’s how to support delegation and employee empowerment:

  • Ensure that your organization has a well-defined, universally understood strategy. When the strategic intent of an organization is clear, empowerment is much easier because it allows teams to pull in the same direction.
  • Clearly define roles and responsibilities. At the foundation of all empowerment efforts is a clear understanding of who is responsible for what, including who has input and who doesn’t.
  • Invest in capability building (and coaching) up front. To help managers spend meaningful coaching time, organizations should also invest in managers’ leadership skills.
  • Build an empowerment-oriented culture. Leaders should role model mindsets that promote empowerment, and managers should build the coaching skills they want to see. Managers and employees, in particular, will need to get comfortable with failure as a necessary step to success.
  • Decide when to get involved. Managers should spend effort up front to decide what is worth their focused attention. They should know when it’s appropriate to provide close guidance and when not to.

How can you guard against bias in decision making?

Cognitive bias is real. We all fall prey, no matter how we try to guard ourselves against it. And cognitive and organizational bias undermines good decision making, whether you’re choosing what to have for lunch or whether to put in a bid to acquire another company.

Here are some of the most common cognitive biases and strategies for how to avoid them:

  • Confirmation bias. Often, when we already believe something, our minds seek out information to support that belief—whether or not it is actually true. Confirmation bias  involves overweighting evidence that supports our belief, underweighting evidence against our belief, or even failing to search impartially for evidence in the first place. Confirmation bias is one of the most common traps organizational decision makers fall into. One famous—and painful—example of confirmation bias is when Blockbuster passed up the opportunity  to buy a fledgling Netflix for $50 million in 2000. (Actually, that’s putting it politely. Netflix executives remember being “laughed out” of Blockbuster’s offices.) Fresh off the dot-com bubble burst of 2000, Blockbuster executives likely concluded that Netflix had approached them out of desperation—not that Netflix actually had a baby unicorn on its hands.
  • Herd mentality. First observed by Charles Mackay in his 1841 study of crowd psychology, herd mentality happens when information that’s available to the group is determined to be more useful than privately held knowledge. Individuals buy into this bias because there’s safety in the herd. But ignoring competing viewpoints might ultimately be costly. To counter this, try a teardown exercise , wherein two teams use scenarios, advanced analytics, and role-playing to identify how a herd might react to a decision, and to ensure they can refute public perceptions.
  • Sunk-cost fallacy. Executives frequently hold onto underperforming business units or projects because of emotional or legacy attachment . Equally, business leaders hate shutting projects down . This, researchers say, is due to the ingrained belief that if everyone works hard enough, anything can be turned into gold. McKinsey research indicates two techniques for understanding when to hold on and when to let go. First, change the burden of proof from why an asset should be cut to why it should be retained. Next, categorize business investments according to whether they should be grown, maintained, or disposed of—and follow clearly differentiated investment rules  for each group.
  • Ignoring unpleasant information. Researchers call this the “ostrich effect”—when people figuratively bury their heads in the sand , ignoring information that will make their lives more difficult. One study, for example, found that investors were more likely to check the value of their portfolios when the markets overall were rising, and less likely to do so when the markets were flat or falling. One way to help get around this is to engage in a readout process, where individuals or teams summarize discussions as they happen. This increases the likelihood that everyone leaves a meeting with the same understanding of what was said.
  • Halo effect. Important personal and professional choices are frequently affected by people’s tendency to make specific judgments based on general impressions . Humans are tempted to use simple mental frames to understand complicated ideas, which means we frequently draw conclusions faster than we should. The halo effect is particularly common in hiring decisions. To avoid this bias, structured interviews can help mitigate the essentializing tendency. When candidates are measured against indicators, intuition is less likely to play a role.

For more common biases and how to beat them, check out McKinsey’s Bias Busters Collection .


Articles referenced include:

  • “Bias busters: When the crowd isn’t necessarily wise,” McKinsey Quarterly, May 23, 2022, Eileen Kelly Rinaudo, Tim Koller, and Derek Schatz
  • “Boards and decision making,” April 8, 2021, Aaron De Smet, Frithjof Lund, Suzanne Nimocks, and Leigh Weiss
  • “To unlock better decision making, plan better meetings,” November 9, 2020, Aaron De Smet, Simon London, and Leigh Weiss
  • “Reimagine decision making to improve speed and quality,” September 14, 2020, Julie Hughes, J. R. Maxwell, and Leigh Weiss
  • “For smarter decisions, empower your employees,” September 9, 2020, Aaron De Smet, Caitlin Hewes, and Leigh Weiss
  • “Bias busters: Lifting your head from the sand,” McKinsey Quarterly, August 18, 2020, Eileen Kelly Rinaudo
  • “Decision making in uncertain times,” March 24, 2020, Andrea Alexander, Aaron De Smet, and Leigh Weiss
  • “Bias busters: Avoiding snap judgments,” McKinsey Quarterly, November 6, 2019, Tim Koller, Dan Lovallo, and Phil Rosenzweig
  • “Three keys to faster, better decisions,” McKinsey Quarterly, May 1, 2019, Aaron De Smet, Gregor Jost, and Leigh Weiss
  • “Decision making in the age of urgency,” April 30, 2019, Iskandar Aminov, Aaron De Smet, Gregor Jost, and David Mendelsohn
  • “Bias busters: Pruning projects proactively,” McKinsey Quarterly, February 6, 2019, Tim Koller, Dan Lovallo, and Zane Williams
  • “Decision making in your organization: Cutting through the clutter,” McKinsey Quarterly, January 16, 2018, Aaron De Smet, Simon London, and Leigh Weiss
  • “Untangling your organization’s decision making,” McKinsey Quarterly, June 21, 2017, Aaron De Smet, Gerald Lackey, and Leigh Weiss
  • “Are you ready to decide?,” McKinsey Quarterly, April 1, 2015, Philip Meissner, Olivier Sibony, and Torsten Wulf



Alzheimer’s Takes a Financial Toll Long Before Diagnosis, Study Finds

New research shows that people who develop dementia often begin falling behind on bills years earlier.

By Ben Casselman

Long before people develop dementia, they often begin falling behind on mortgage payments, credit card bills and other financial obligations, new research shows.

A team of economists and medical experts at the Federal Reserve Bank of New York and Georgetown University combined Medicare records with data from Equifax, the credit bureau, to study how people’s borrowing behavior changed in the years before and after a diagnosis of Alzheimer’s or a similar disorder.

What they found was striking: Credit scores among people who later develop dementia begin falling sharply long before their disease is formally identified. A year before diagnosis, these people were 17.2 percent more likely to be delinquent on their mortgage payments than before the onset of the disease, and 34.3 percent more likely to be delinquent on their credit card bills. The issues start even earlier: The study finds evidence of people falling behind on their debts five years before diagnosis.

“The results are striking in both their clarity and their consistency,” said Carole Roan Gresenz, a Georgetown University economist who was one of the study’s authors. Credit scores and delinquencies, she said, “consistently worsen over time as diagnosis approaches, and so it literally mirrors the changes in cognitive decline that we’re observing.”

The research adds to a growing body of work documenting what many Alzheimer’s patients and their families already know: Decision-making, including on financial matters, can begin to deteriorate long before a diagnosis is made or even suspected. People who are starting to experience cognitive decline may miss payments, make impulsive purchases or put money into risky investments they would not have considered before the disease.

“There’s not just getting forgetful, but our risk tolerance changes,” said Lauren Hersch Nicholas, a professor at the University of Colorado School of Medicine who has studied dementia’s impact on people’s finances. “It might seem suddenly like a good move to move a diversified financial portfolio into some stock that someone recommended.”

People in the early stages of the disease are also vulnerable to scams and fraud, added Dr. Nicholas, who was not involved in the New York Fed research. In a paper published last year , she and several co-authors found that people likely to develop dementia saw their household wealth decline in the decade before diagnosis.

The problems are likely to only grow as the American population ages and more people develop dementia. The New York Fed study estimates that 600,000 delinquencies will occur over the next decade as a result of undiagnosed memory disorders.

That probably understates the impact, the researchers argue. Their data includes only issues that show up on credit reports, such as late payments, not the much broader array of financial impacts that the diseases can cause. Wilbert van der Klaauw, a New York Fed economist who is another of the study’s authors, said that after his mother was diagnosed with Alzheimer’s, his family discovered parking tickets and traffic violations that she had hidden.

“If anything, this is kind of an underestimate of the kind of financial difficulties people can experience,” he said.

Shortly before he was diagnosed with Alzheimer’s, Jay Reinstein bought a BMW he could not afford.

“I went into a showroom and I came home with a BMW,” he said. “My wife was not thrilled.”

At the time, Mr. Reinstein had recently retired as assistant city manager for Fayetteville, N.C. He had been noticing memory issues for years, but dismissed them as a result of his demanding job. Only after his diagnosis did he learn that friends and colleagues had also noticed the changes but had said nothing.

Mr. Reinstein, 63, is fortunate, he added. He has a government pension, and a wife who can keep an eye on his spending. But for those with fewer resources, financial decisions made in the years before diagnosis can have severe consequences, leaving them without money at the time when they will need it most. The authors of the New York Fed study noted that the financial effects they saw predated most of the costs associated with the disease, such as the need for long-term care.

The study expands on past research in part through its sheer scale: Researchers had access to health and financial data on nearly 2.5 million older Americans with chronic health conditions, roughly half a million of whom were diagnosed with Alzheimer’s or related disorders. (The records were anonymized, allowing researchers to combine the two sets of data without having access to identifying details on the individual patients.)

This wealth of data allowed researchers to slice it more finely than in past studies, looking at the impact of race, sex, household size and other variables. Black people, for example, were more than twice as likely as white people to have financial problems before diagnosis, perhaps because they had fewer resources to begin with, and also because Black patients are often diagnosed later in the course of the disease.

The researchers hoped that the data could eventually allow them to develop a predictive algorithm that could flag people who might be suffering from impaired financial decision-making associated with Alzheimer’s disease — although they stressed that there were unresolved questions about who would have access to such information and how it would be used.

Until then, the researchers said, their findings should be a warning to older Americans and their families that they should prepare for the possibility of an Alzheimer’s diagnosis. That could mean taking steps such as granting a trusted person financial power of attorney, or simply paying attention to signs that someone might be behaving uncharacteristically.

Dr. Nicholas agreed.

“We should be thinking about the possibility of financial difficulties linked to a disease we don’t even know we have,” she said. “Knowing that, people should be on the lookout for these symptoms among friends and family members.”

Pam Belluck contributed reporting.

