HCA Healthcare Journal of Medicine, vol. 1, no. 2 (2020)
Introduction to Research Statistical Analysis: An Overview of the Basics

Christian Vandever

HCA Healthcare Graduate Medical Education

Abstract

This article covers many statistical ideas essential to research statistical analysis. Sample size is explained through the concepts of statistical significance level and power. Variable types and definitions are included to clarify how the analysis will be interpreted. Categorical and quantitative variable types are defined, as well as response and predictor variables. Statistical tests described include t-tests, ANOVA and chi-square tests. Multiple regression, both linear and logistic, is also explored. Finally, the most common statistics produced by these methods are explained.

Introduction

Statistical analysis is necessary for any research project seeking to make quantitative conclusions. The following is a primer for research-based statistical analysis. It is intended as a high-level overview of appropriate statistical testing, without diving too deep into any specific methodology. Some of the information is more applicable to retrospective projects, where analysis is performed on data that have already been collected, but most of it is suitable for any type of research. This primer is meant to help the reader interpret research results in coordination with a statistician, not to perform the actual analysis. Analysis is commonly performed using statistical programming software such as R, SAS or SPSS, which allow the analysis to be replicated while minimizing the risk of error. Resources are listed later for those working on analysis without a statistician.

After coming up with a hypothesis for a study, including any variables to be used, one of the first steps is to think about the patient population to which the question applies. Results are only relevant to the population that the underlying data represent. Since it is impractical to include everyone with a certain condition, a subset of the population of interest should be taken. This subset should be large enough to have power, which means there is enough data to deliver significant results and accurately reflect the study’s population.

The first statistics of interest are related to significance level and power: alpha and beta. Alpha (α) is the significance level and the probability of a type I error, the rejection of the null hypothesis when it is true. The null hypothesis is generally that there is no difference between the groups compared. A type I error is also known as a false positive. An example would be an analysis that finds one medication statistically better than another when in reality there is no difference in efficacy between the two. Beta (β) is the probability of a type II error, the failure to reject the null hypothesis when it is actually false. A type II error is also known as a false negative. This occurs when the analysis finds no difference between two medications when in reality one works better than the other.

Power is defined as 1−β and should be calculated prior to running any statistical testing. Ideally, alpha should be as small as possible while power should be as large as possible. Power generally increases with a larger sample size, but so do cost and the effect of any bias in the study design. Additionally, as the sample size gets bigger, the chance of a statistically significant result goes up, even though such results can reflect small differences that do not matter practically. Power calculations therefore include the magnitude of the effect, so that the required sample size corresponds to differences large enough to have an actual impact. The calculators take inputs like the mean, effect size and desired power, and output the required minimum sample size for analysis. Effect size is calculated using statistical information on the variables of interest; if that information is not available, most tests have commonly used values for small, medium or large effect sizes.
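
As a minimal sketch of such a calculation, base R (one of the programs named above) can solve for the per-group sample size of a two-sample t-test. The effect size, standard deviation, alpha and power below are hypothetical placeholders, not values from any particular study.

```r
# Minimal sketch: minimum per-group sample size for a two-sample t-test.
# delta, sd, sig.level and power are hypothetical placeholders.
power.t.test(delta = 5,         # smallest difference worth detecting (e.g., days)
             sd = 10,           # assumed standard deviation of the outcome
             sig.level = 0.05,  # alpha, the type I error rate
             power = 0.80)      # 1 - beta, the chance of detecting the effect
# Leaving n unset makes the function solve for the per-group size (about 64 here).
```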

When the desired patient population is decided, the next step is to define the variables previously chosen to be included. Variables come in different types that determine which statistical methods are appropriate and useful. One way variables can be split is into categorical and quantitative variables (Table 1). Categorical variables place patients into groups, such as gender, race and smoking status. Quantitative variables measure or count some quantity of interest; common quantitative variables in research include age and weight. An important note is that there can often be a choice of whether to treat a variable as quantitative or categorical. For example, in a study looking at body mass index (BMI), BMI could be defined as a quantitative variable or as a categorical variable, with each patient’s BMI listed as a category (underweight, normal, overweight and obese) rather than the exact value. The decision whether a variable is quantitative or categorical will affect what conclusions can be made when interpreting results from statistical tests. Keep in mind that since quantitative variables are treated on a continuous scale, it would be inappropriate to transform a variable like which medication was given into a quantitative variable with values 1, 2 and 3.
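
A minimal sketch in R makes the distinction concrete; the BMI values and cut points below are illustrative, not from a real study.

```r
# The same measurement treated as quantitative or as categorical.
bmi <- c(17.9, 22.4, 27.1, 31.6, 24.8)   # made-up values

summary(bmi)   # quantitative: keep BMI on its continuous scale

# Categorical: bin BMI into the standard groups.
bmi_group <- cut(bmi, breaks = c(0, 18.5, 25, 30, Inf),
                 labels = c("underweight", "normal", "overweight", "obese"),
                 right = FALSE)
table(bmi_group)

# A medication variable belongs in a factor, never in numeric codes 1, 2, 3.
drug <- factor(c("apixaban", "rivaroxaban", "dabigatran"))
```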

Table 1. Categorical vs. Quantitative Variables

Both of these types of variables can also be split into response and predictor variables (Table 2). Predictor variables are explanatory, or independent, variables that help explain changes in a response variable. Conversely, response variables are outcome, or dependent, variables whose changes can be partially explained by the predictor variables.

Table 2. Response vs. Predictor Variables

Choosing the correct statistical test depends on the types of variables defined and the question being answered. Some common statistical tests include t-tests, ANOVA and chi-square tests.

T-tests compare whether there are differences in a quantitative variable between two values of a categorical variable. For example, a t-test could be useful to compare the length of stay for knee replacement surgery patients between those that took apixaban and those that took rivaroxaban. A t-test could examine whether there is a statistically significant difference in the length of stay between the two groups. The t-test will output a p-value, a number between zero and one, which represents the probability that the two groups could be as different as they are in the data if they were actually the same. A value closer to zero suggests that the difference, in this case for length of stay, is more statistically significant than a number closer to one. Prior to collecting the data, set a significance level, the previously defined alpha. Alpha is typically set at 0.05, but is commonly reduced in order to limit the chance of a type I error, or false positive. Going back to the example above, if alpha is set at 0.05 and the analysis gives a p-value of 0.039, then a statistically significant difference in length of stay is observed between apixaban and rivaroxaban patients. If the analysis gives a p-value of 0.91, then there is no statistical evidence of a difference in length of stay between the two medications. Other statistical summaries or methods examine how big that difference might be. These are known as post-hoc analyses, since they are performed after the original test to provide additional context to the results.
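
In R, the comparison just described is a one-line call; the data frame and lengths of stay below are made up purely for illustration.

```r
# Hypothetical lengths of stay (days) for two anticoagulant groups.
stays <- data.frame(
  drug = rep(c("apixaban", "rivaroxaban"), each = 3),
  los  = c(3.1, 2.8, 3.4, 3.9, 4.2, 3.7)
)

result <- t.test(los ~ drug, data = stays)  # Welch two-sample t-test
result$p.value   # compare against the pre-specified alpha (e.g., 0.05)
```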

Analysis of variance, or ANOVA, tests for mean differences in a quantitative variable between values of a categorical variable, typically with three or more values, which distinguishes it from a t-test. ANOVA could add patients given dabigatran to the previous population and evaluate whether the length of stay was significantly different across the three medications. If the p-value is lower than the designated significance level, then the hypothesis that length of stay is the same across the three medications is rejected. Summaries and post-hoc tests could also be performed to look at the differences in length of stay and identify which individual medications show statistically significant differences from the others. A chi-square test examines the association between two categorical variables. An example would be to consider whether the rate of having a post-operative bleed is the same across patients provided with apixaban, rivaroxaban and dabigatran. A chi-square test can compute a p-value determining whether the bleeding rates are significantly different or not. Post-hoc tests could then give the bleeding rate for each medication, as well as a breakdown of which specific medications may have a significantly different bleeding rate from each other.
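
Both tests are again short calls in base R; the counts and lengths of stay below are hypothetical sketches of the examples in the text.

```r
# ANOVA: length of stay (days) across three medications, made-up values.
stays3 <- data.frame(
  drug = factor(rep(c("apixaban", "rivaroxaban", "dabigatran"), each = 4)),
  los  = c(3.1, 2.8, 3.4, 3.0,  3.9, 4.2, 3.7, 4.0,  3.3, 3.6, 3.2, 3.5)
)
fit <- aov(los ~ drug, data = stays3)
summary(fit)    # overall p-value for any difference among the three drugs
TukeyHSD(fit)   # post-hoc pairwise comparisons between medications

# Chi-square: association between medication and post-operative bleeding.
bleeds <- matrix(c(4, 96, 7, 93, 5, 95), nrow = 3, byrow = TRUE,
                 dimnames = list(c("apixaban", "rivaroxaban", "dabigatran"),
                                 c("bleed", "no bleed")))
chisq.test(bleeds)
```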

A slightly more advanced way of examining a question is multiple regression. Regression allows more predictor variables to be analyzed and can act as a control when looking at associations between variables. Common control variables are age, sex and any comorbidities likely to affect the outcome variable that are not closely related to the other explanatory variables. Control variables can be especially important in reducing the effect of bias in a retrospective population. Since retrospective data were not collected with the research question in mind, it is important to eliminate threats to the validity of the analysis. Testing that controls for confounding variables, such as regression, is often more valuable with retrospective data because it can ease these concerns.

The two main types of regression are linear and logistic. Linear regression is used to predict differences in a quantitative, continuous response variable, such as length of stay. Logistic regression predicts differences in a dichotomous, categorical response variable, such as 90-day readmission. So whether the outcome variable is categorical or quantitative, regression can be appropriate. An example of each type can be found in two similar cases. For both examples, define the predictor variables as age, gender and anticoagulant usage. In the first, use the predictor variables in a linear regression to evaluate their individual effects on length of stay, a quantitative variable. For the second, use the same predictor variables in a logistic regression to evaluate their individual effects on whether the patient had a 90-day readmission, a dichotomous categorical variable. The analysis can compute a p-value for each included predictor variable to determine whether it is significantly associated with the outcome.

The statistical tests in this article generate an associated test statistic, which determines the probability of obtaining the observed results if there were no association between the compared variables. These results often come with coefficients, which give the degree of the association and the degree to which one variable changes with another. Most tests, including all listed in this article, also have confidence intervals, which give a range for the estimate with a specified level of confidence. Even if these tests do not give statistically significant results, the results are still important: not reporting statistically insignificant findings creates a bias in research, and ideas can be repeated enough times that eventually statistically significant results are reached even though there is no true effect. Conversely, with very large sample sizes, p-values will almost always be significant. In that case the effect size is critical, as even the smallest, practically meaningless differences can be found to be statistically significant.
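
A minimal sketch of both regressions in R, using a simulated data frame so the code runs as-is; the variable names mirror the examples above and the numbers are arbitrary.

```r
set.seed(1)
n <- 200
patients <- data.frame(
  age  = round(rnorm(n, mean = 65, sd = 10)),
  sex  = factor(sample(c("F", "M"), n, replace = TRUE)),
  drug = factor(sample(c("apixaban", "rivaroxaban", "dabigatran"), n,
                       replace = TRUE))
)
patients$los       <- 3 + 0.02 * patients$age + rnorm(n)  # length of stay, days
patients$readmit90 <- rbinom(n, 1, prob = 0.15)           # 90-day readmission, 0/1

# Linear regression: quantitative response (length of stay).
linear_fit <- lm(los ~ age + sex + drug, data = patients)
summary(linear_fit)   # a coefficient and p-value for each predictor
confint(linear_fit)   # 95% confidence intervals for the coefficients

# Logistic regression: dichotomous response (90-day readmission).
logistic_fit <- glm(readmit90 ~ age + sex + drug, family = binomial,
                    data = patients)
exp(coef(logistic_fit))   # exponentiated coefficients are odds ratios
```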

These variables and tests are just some things to keep in mind before, during and after the analysis process in order to make sure that the statistical reports are supporting the questions being answered. The patient population, types of variables and statistical tests are all important things to consider in the process of statistical analysis. Any results are only as useful as the process used to obtain them. This primer can be used as a reference to help ensure appropriate statistical analysis.

Funding Statement

This research was supported (in whole or in part) by HCA Healthcare and/or an HCA Healthcare affiliated entity.

Conflicts of Interest

The author declares he has no conflicts of interest.

Christian Vandever is an employee of HCA Healthcare Graduate Medical Education, an organization affiliated with the journal’s publisher.

The views expressed in this publication represent those of the author(s) and do not necessarily represent the official views of HCA Healthcare or any of its affiliated entities.


How to Report Statistics

Ensure appropriateness and rigor, avoid flexibility, and above all never manipulate results

In many fields, a statistical analysis forms the heart of both the methods and results sections of a manuscript. Learn how to report statistical analyses, and what other context is important for publication success and future reproducibility.

A matter of principle

First and foremost, the statistical methods employed in research must always be:

  • Appropriate for the study design
  • Rigorously reported in sufficient detail for others to reproduce the analysis
  • Free of manipulation, selective reporting, or other forms of “spin”

Just as importantly, statistical practices must never be manipulated or misused. Misrepresenting data, selectively reporting results, or searching for patterns that can be presented as statistically significant in an attempt to yield a conclusion believed to be more worthy of attention or publication is a serious ethical violation. Although it may seem harmless, using statistics to “spin” results can prevent publication, undermine a published study, or lead to investigation and retraction.

Supporting public trust in science through transparency and consistency

Along with clear methods and transparent study design, the appropriate use of statistical methods and analyses impacts editorial evaluation and readers’ understanding and trust in science.

In 2011, the paper “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant” exposed that “flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates” and demonstrated “how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis”.

Arguably, such problems with flexible analysis led to the “reproducibility crisis” that we read about today.

A constant principle of rigorous science

The appropriate, rigorous, and transparent use of statistics is a constant principle of rigorous, transparent, and Open Science. Aim to be thorough, even if a particular journal doesn’t require the same level of detail. Trust in science is all of our responsibility. You cannot create any problems by exceeding a minimum standard of information and reporting.


Sound statistical practices

While it is hard to provide statistical guidelines that are relevant for all disciplines, types of research, and analytical techniques, adherence to rigorous and appropriate principles remains key. Here are some ways to ensure your statistics are sound.

Define your analytical methodology before you begin

Take the time to consider and develop a thorough study design that defines your line of inquiry, what you plan to do, what data you will collect, and how you will analyze it. (If you applied for research grants or ethical approval, you probably already have a plan in hand!) Refer back to your study design at key moments in the research process, and above all, stick to it.

To avoid flexibility and improve the odds of acceptance, preregister your study design with a journal

Many journals offer the option to submit a study design for peer review before research begins through a practice known as preregistration. If the editors approve your study design, you’ll receive a provisional acceptance for a future research article reporting the results. Preregistering is a great way to head off any intentional or unintentional flexibility in analysis. By declaring your analytical approach in advance you’ll increase the credibility and reproducibility of your results and help address publication bias, too. Getting peer review feedback on your study design and analysis plan before it has begun (when you can still make changes!) makes your research even stronger AND increases your chances of publication, even if the results are negative or null. Never underestimate how much you can help increase the public’s trust in science by planning your research in this way.

Imagine replicating or extending your own work, years in the future

Imagine that you are describing your approach to statistical analysis for your future self, in exactly the same way as we have described for writing your methods section. What would you need to know to replicate or extend your own work? When you consider that you might be at a different institution, working with different colleagues, using different programs, applications, resources, or maybe even adopting new statistical techniques that have emerged, you can help yourself imagine the level of reporting specificity that you yourself would require to redo or extend your work. Consider:

  • Which details would you need to be reminded of? 
  • What did you do to the raw data before analysis?
  • Did the purpose of the analysis change before or during the experiments?
  • What participants did you decide to exclude? 
  • What process did you adjust, during your work? 

Even if a necessary adjustment you made was not ideal, transparency is the key to ensuring this is not regarded as an issue in the future. It is far better to transparently convey any non-optimal techniques or constraints than to conceal them, which could result in reproducibility or ethical issues downstream.

Existing standards, checklists, guidelines for specific disciplines

You can apply the Open Science practices outlined above no matter what your area of expertise, but in many cases, you may still need more detailed guidance specific to your own field. Many disciplines, fields, and projects have worked hard to develop guidelines and resources to help with statistics, and to identify and avoid bad statistical practices. Below, you’ll find some of the key materials.

TIP: Do you have a specific journal in mind?

Be sure to read the submission guidelines for the specific journal you are submitting to, in order to discover any journal- or field-specific policies, initiatives or tools to utilize.

Articles on statistical methods and reporting

Makin, T.R., Orban de Xivry, J. Science Forum: Ten common statistical mistakes to watch out for when writing or reviewing a manuscript. eLife 2019;8:e48175 (2019). https://doi.org/10.7554/eLife.48175

Munafò, M., Nosek, B., Bishop, D. et al. A manifesto for reproducible science. Nat Hum Behav 1, 0021 (2017). https://doi.org/10.1038/s41562-016-0021

Writing tips

Your use of statistics should be rigorous, appropriate, and uncompromising in avoidance of analytical flexibility. While this is difficult, do not compromise on rigorous standards for credibility!

What to do

  • Remember that trust in science is everyone’s responsibility.
  • Keep in mind future replicability.
  • Consider preregistering your analysis plan so that it is (i) reviewed before data are collected, catching problems before they occur, and (ii) fixed in advance, ruling out analytical flexibility.
  • Follow principles, but also checklists and field- and journal-specific guidelines.
  • Consider a commitment to rigorous and transparent science a personal responsibility, not simply adherence to journal guidelines.
  • Be specific about all decisions made during the experiments that someone reproducing your work would need to know.
  • Consider a course in advanced and new statistics, if you feel you have not focused on it enough during your research training.

What not to do

Don’t

  • Misuse statistics to influence significance or other interpretations of results
  • Conduct your statistical analyses if you are unsure of what you are doing—seek feedback (e.g. via preregistration) from a statistical specialist first.


How to Find Statistics for a Research Paper

This article was co-authored by wikiHow staff writer Jennifer Mueller, JD.

When you're writing a research paper, particularly in social sciences such as political science or sociology, statistics can help you back up your conclusions with solid data. You typically can find relevant statistics using online sources. However, it's important to accurately assess the reliability of the source. You also need to understand whether the statistics you've found strengthen or undermine your arguments or conclusions before you incorporate them into your writing. [1] [2]

Identifying the Data You Need

Step 1 Outline your points or arguments.

  • For example, if you're writing a research paper for a sociology class on the effect of crime in inner cities, you may want to make the point that high school graduation rates decrease as the rate of violent crime increases.
  • To support that point, you would need data about high school graduation rates in specific inner cities, as well as violent crime rates in the same areas.
  • From that data, you would want to find statistics that show the trends in those two rates. Then you can compare those statistics to reach a correlation that would (potentially) support your point.

Step 2 Do some background research.

  • Background research also can clue you in to words or phrases that are commonly used by academics, researchers, and statisticians examining the same issues you're discussing in your research paper.
  • A basic familiarity with your topic can help you identify additional statistics that you might not have thought of before.
  • For example, in reading about the effect of violent crime in inner cities, you may find an article discussing how children coming from high-crime neighborhoods have higher rates of PTSD than children who grow up in peaceful suburbs.
  • The issue of PTSD is something you potentially could weave into your research paper, although you'd have to do more digging into the source of the statistics themselves.
  • Keep in mind when you're reading on background, this isn't necessarily limited to material that you might use as a source for your research paper. You're just trying to familiarize yourself with the subject generally.

Step 3 Distinguish between descriptive and inferential statistics.

  • With a descriptive statistic, those who collected the data got information for every person included in a specific, limited group.
  • "Only 2 percent of the students in McKinley High School's senior class have red hair" is an example of a descriptive statistic. All the students in the senior class have been accounted for, and the statistic describes only that group.
  • However, if the statisticians used the county high school's senior class as a representative sample of the county as a whole, the result would be an inferential statistic.
  • The inferential version would be phrased "According to our study, approximately 2 percent of the people in McKinley County have red hair." The statisticians didn't check the hair color of every person who lived in the county.

Step 4 Brainstorm search terms.

  • Finding the best key words can be an art form. Using what you learned from your background research, try to use words academics or other researchers in the field use when discussing your topic.
  • You not only want to search for specific words, but also synonyms for those words. You also might search for both broader categories and narrower examples of related phenomena.
  • For example, "violent crime" is a broad category that may include crimes such as assault, rape, and murder. You may not be able to find statistics that specifically track violent crime generally, but you should be able to find statistics on the murder rate in a given area.
  • If you're looking for statistics related to a particular geographic area, you'll need to be flexible there as well. For example, if you can't find statistics that relate solely to a particular neighborhood, you may want to expand outward to the city or even the county.

Step 5 Locate relevant studies and polls.

  • While you can run a general internet search using your key words to potentially find statistics you can use in your research paper, knowing specific sources can help you find reliable statistics more quickly.
  • For example, if you're looking for statistics related to various demographics in the United States, the U.S. government has many statistics available at www.usa.gov/statistics.
  • You also can check the U.S. Census Bureau's website to retrieve census statistics and data.
  • The NationMaster website collects data from the CIA World Factbook and other sources to create a wealth of statistics comparing different countries on a number of measures.

Evaluating Sources

Step 1 Judge the source's reliability.

  • Find out who was responsible for collecting the data, and why. If the organization or group behind the data collection and creation of the statistics has an ideological or political mission, their statistics may be suspect.
  • Essentially, if someone is creating statistics to support a particular position or prove their arguments, you cannot trust those statistics. There are many ways raw data can be manipulated to show trends or correlations that don't necessarily reflect reality.
  • Government sources typically are highly reliable, as are most university studies. However, even with university studies you want to see if the study was funded in whole or in part by a group or organization with an ideological or political motivation or bias.

Step 2 Understand the background of the data.

  • To explore the background adequately, use the journalistic standard of the "5 w's" – who, what, when, where, and why.
  • This means you'll want to find out who carried out the study (or, in the case of a poll, who asked the questions), what questions were asked, when was the study or poll conducted, and why the study or poll was conducted.
  • The answers to these questions will help you understand the purpose of the statistical research that was conducted, and whether it would be helpful in your own research paper.

Step 3 Interpret the statistics yourself.

  • You may find the statistics set forth in a report that describes these statistics and what they mean.
  • However, just because someone else has explained the meaning of the statistics doesn't mean you should necessarily take their word for it.
  • Draw on your understanding of the background of the study or poll, and look at the interpretation the author presents critically.
  • Remove the statistics themselves from the text of the report, for example by copying them into a table. Then you can interpret them on your own without being distracted by the author's interpretation.
  • If you create a table of your own from a statistical report, make sure you label it accurately so you can cite the source of the statistics later if you decide to include them in your research paper.

Step 4 Use caution when producing your own statistics.

  • If you're looking at raw data, you may need to actually calculate the statistics yourself. If you don't have any experience with statistics, talk to someone who does.
  • Your teacher or professor may be able to help you understand how to calculate the statistics correctly.
  • Even if you have access to a statistics program, there's no guarantee that the result you get actually will be accurate unless you know what information to provide the program. Remember the common phrase with computer programs: "Garbage in, garbage out."
  • Don't assume you can just divide two numbers to get a meaningful percentage, for example. Other statistical considerations, such as sampling error and differing denominators, may need to be taken into account.

Writing with Statistics

Step 1 Use statistical terms correctly.

  • For example, the word "average" is one you often see in everyday writing. However, when you're writing about statistics, the word "average" could mean up to three different things.
  • The word "average" can be used to mean the median (the middle value in the set of data), the mean (the result when you add all the values in the set and then divide by the quantity of numbers in the set), or the mode (the number or value in the set that occurs most frequently).
  • Therefore, if you read "average," you need to know which of these definitions is meant.
  • You also want to make sure that any two or more statistics you're comparing are using the same definition of "average." Not doing so could lead to a significant misinterpretation of your statistics and what they mean in the context of your research.
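
A minimal sketch in R, on a small made-up sample, of how the three "averages" can differ:

```r
x <- c(2, 3, 3, 5, 7, 10)   # made-up data

mean(x)     # arithmetic mean: 30 / 6 = 5
median(x)   # middle of the sorted values: (3 + 5) / 2 = 4

# Base R has no mode-of-the-data function (mode() reports storage type),
# so tabulate the values and take the most frequent one.
counts <- table(x)
as.numeric(names(counts)[which.max(counts)])   # mode: 3
```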

Step 2 Focus on presentation and readability.

  • Charts and graphs also can be useful even when you are referencing the statistics within your text. Using graphical elements can break up the text and enhance reader understanding.
  • Tables, charts, and graphs can be especially beneficial if you ultimately will have to give a presentation of your research paper, either to your class or to teachers or professors.
  • As difficult as statistics are to follow in print, they can be even more difficult to follow when someone is merely telling them to you.
  • To test the readability of the statistics in your paper, read those paragraphs out loud to yourself. If you find yourself stumbling over them or getting confused as you read, it's likely anyone else will stumble too when reading them for the first time.

Step 3 Choose statistics that support your arguments.

  • This often has as much to do with how you describe the statistics as the specific statistics you use.
  • Keep in mind that numbers themselves are neutral – it is your interpretation of those numbers that gives them meaning.

Step 4 Present the data in context.

  • For example, if you present the statistic that the murder rate in one neighborhood increased by 500 percent, and in the same period high school graduation rates decreased by 30 percent, these numbers are virtually meaningless without context.
  • You don't know what a 500 percent increase entails unless you know what the rate was before the period measured by the statistic.
  • When you say "500 percent," it sounds like a large amount, but if there was only one murder before the period measured by the statistic, then a 500 percent increase means just five additional murders, six in all, during that period.
  • Additionally, your statistics may be more meaningful if you can compare them to similar statistics in other areas.
  • Think of it in terms of a scientific experiment. If scientists are studying the effects of a particular drug to treat a disease, they also include a control group that doesn't take the drug. Comparing the test group to the control group helps show the drug's effectiveness.

Step 5 Cite the source for your statistics correctly.

  • For example, you might write "According to the FBI, violent crime in McKinley County increased by 37 percent between the years 2000 and 2012."
  • A textual citation provides immediate authority to the statistics you're using, allowing your readers to trust the statistics and move on to the next point.
  • On the other hand, if you don't state where the statistics came from, your reader may be too busy mentally questioning the source of your statistics to fully grasp the point you're trying to make.


References

  • [1] https://owl.english.purdue.edu/owl/resource/672/1/
  • [2] http://writingcenter.unc.edu/handouts/statistics/
  • [3] https://www.nationmaster.com/country-info/stats
  • [4] https://www.usa.gov/statistics
  • [5] https://owl.english.purdue.edu/owl/resource/672/02/
  • [6] http://libguides.lib.msu.edu/datastats
  • [7] https://owl.english.purdue.edu/owl/resource/672/06/
  • [8] https://owl.english.purdue.edu/owl/resource/672/04/


How to Read a Scholarly Article

  • How to Read a Scholarly Article, a brief video from Western Libraries
  • Infographic: How to Read a Scientific Paper. "Because scientific articles are different from other texts, like novels or newspaper stories, they should be read differently."
  • How to Read and Comprehend Scientific Research Articles, a brief video from the University of Minnesota Libraries


How to Read a Research Table

The tables in this section present the research findings that drive many recommendations and standards of practice related to breast cancer.

Research tables are useful for presenting data. They show a lot of information in a simple format, but they can be hard to understand if you don’t work with them every day.

Here, we describe some basic concepts that may help you read and understand research tables. The sample table below gives examples.

The numbered table items are described below. You will see many of these items in all of the tables.

Sample table – Alcohol and breast cancer risk

Selection criteria

Studies vary in how well they help answer scientific questions. When reviewing the research on a topic, it’s important to recognize “good” studies. Good studies are well-designed.

Most scientific reviews set standards for the studies they include. These standards are called “selection criteria” and are listed for each table in this section. These selection criteria help make sure well-designed studies are included in the table.

Types of studies

The types of studies (for example, randomized controlled trial, prospective cohort, case-control) included in each table are listed in the selection criteria.

Learn about the strengths and weaknesses of different types of research studies.

Selection criteria for most tables include the minimum number of cases of breast cancer or participants for the studies in the table.

Large studies have more statistical power than small studies. This means the results from large studies are less likely to be due to chance than results from small studies.

The power of large numbers

You can see the power of large numbers if you think about flipping a coin. Say you are trying to figure out whether a coin is fixed so that it lands on “heads” more than “tails.” A fair coin would land on heads half the time. So, you want to test whether the coin lands on heads more than half of the time.

If you flip the coin twice and get 2 heads, you don’t have a lot of evidence. It wouldn’t be surprising to flip a fair coin and get 2 heads in a row. With 2 coin flips, you can’t be sure whether you have a fair coin or not. Even 3 or 4 heads in a row wouldn’t be surprising for a fair coin.

If, however, you flipped the coin 20 times and got mostly heads, you would start to think the coin might be fixed.

With an increasing number of observations, you have more evidence on which to base your conclusions. So, you have more confidence in your conclusions. It’s a similar idea in research.
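
A minimal sketch in R of the coin example, using the exact binomial test from base R; the flip counts are chosen to echo the scenario above (the same three-quarters heads rate at two sample sizes).

```r
binom.test(3, 4, p = 0.5, alternative = "greater")$p.value    # 3 of 4 flips: p ≈ 0.31
binom.test(15, 20, p = 0.5, alternative = "greater")$p.value  # 15 of 20 flips: p ≈ 0.02
# The small sample is consistent with a fair coin; the larger one is much
# stronger evidence that the coin favors heads.
```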

Example of study size in breast cancer research

Say you’re interested in finding out whether or not alcohol use increases the risk of breast cancer.

If there are only a few cases of breast cancer among the alcohol drinkers and the non-drinkers, you won’t have much confidence drawing conclusions.

If, however, there are hundreds of breast cancer cases, it’s easier to draw firm conclusions about a link between alcohol and breast cancer. With more evidence, you have more confidence in your findings.

The importance of study design and study quality

Study design (the type of research study) and study quality are also important. For example, a small, well-designed study may be better than a large, poorly-designed study. However, when all else is equal, a larger number of people in a study means the study is better able to answer research questions.

Learn about different types of research studies.

The studies

The first column (from the left) lists either the name of the study or the name of the first author of the published article.

Below each table, there’s a reference list so you can find the original published articles.

Sometimes, a table will report the results of only one analysis. This can occur for one of two reasons: either there's only one study that meets the selection criteria, or there's a report that combines data from many studies into one large analysis.

Study population

The second column describes the people in each study.

  • For randomized controlled trials, the study population is the total number of people who were randomized at the start of the study to either the treatment (or intervention) group or the control group.
  • For prospective cohort studies, the study population is the number of people at the start of the study (baseline cohort).
  • For case-control studies, the study population is the number of cases and the number of controls.

In some tables, more details on the people in the study are included. 

Length of follow-up

Randomized controlled trials and prospective cohort studies follow people forward in time to see who will have the outcome of interest (such as breast cancer).

For these studies, one column shows the length of follow-up time. This is the number of months or years people in the study were followed.

Because case-control studies don’t follow people forward in time, there are no data on follow-up time for these studies.

Tables that focus on cumulative risk may also show the length of follow-up. These tables give the length of time, or age range, used to compute cumulative risk (for example, the cumulative risk of breast cancer up to age 70).

Learn more about cumulative risk.

Other information

Some tables have columns with other information on the study population or the topic being studied. For example, the table Exercise and Breast Cancer Risk has a column with the comparisons of exercise used in the studies.

This extra information gives more details about the studies and shows how the studies are similar to (and different from) each other.

Studies on the same topic can differ in important ways. They may define “high” and “low” levels of a risk factor differently. Studies may look at outcomes among women of different ages or menopausal status.

These differences are important to keep in mind when you review the findings in a table. They may help explain differences in study findings. 

Understanding the numbers

All of the information in the tables is important, but the main purpose of the tables is to present the numbers that show the risk, survival or other measures for each topic. These numbers are shown in the remaining columns of the tables.

The headings of the columns tell you what the numbers represent. For example:

  • What is the outcome of interest? Is it breast cancer? Is it 5-year survival? Is it breast cancer recurrence?
  • Are groups being compared to each other? If so, what groups are being compared?

Relative risks

Most often, findings are reported as relative risks. A relative risk shows how much higher, how much lower or whether there’s no difference in risk for people with a certain risk factor compared to the risk in people without the factor.

A relative risk compares 2 absolute risks.

  • The numerator (the top number in a fraction) is the absolute risk among people with the risk factor.
  • The denominator (the bottom number) is the absolute risk among those without the risk factor.

The absolute risk of those with the factor divided by the absolute risk of those without the factor gives the relative risk. 

The confidence interval around a relative risk helps show whether or not the relative risk is statistically significant (whether or not the finding is likely due to chance).

Learn more about confidence intervals.

Example of relative risk

Say a study shows women who don’t exercise (inactive women) have a 25 percent increase in breast cancer risk compared to women who do exercise (active women).

This statistic is a relative risk (the relative risk is 1.25). It means the inactive women were 25 percent more likely to develop breast cancer than women who exercised.

Learn more about relative risk.
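
A minimal sketch in R of how a relative risk of 1.25 and its confidence interval (explained in the next section) come out of a 2x2 table; the counts are made up for illustration.

```r
cases_inactive <- 60; n_inactive <- 1000   # hypothetical inactive women
cases_active   <- 48; n_active   <- 1000   # hypothetical active women

rr <- (cases_inactive / n_inactive) / (cases_active / n_active)  # 0.060/0.048 = 1.25
se_log_rr <- sqrt(1/cases_inactive - 1/n_inactive +
                  1/cases_active   - 1/n_active)
exp(log(rr) + c(-1.96, 1.96) * se_log_rr)   # 95% CI, roughly 0.86 to 1.81
# This made-up interval includes 1.0, so the increase would not be
# statistically significant despite the 25 percent higher relative risk.
```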

Confidence intervals

A 95 percent confidence interval (95% CI) around a risk measure means there’s a 95 percent chance the “true” measure falls within the interval.

Because there’s random error in studies, and study populations are only samples of much larger populations, a single study doesn’t give the “one” correct answer. There’s always a range of likely answers. A single study gives a “best estimate” along with a 95% CI marking a likely range.

Most scientific studies report risk measures, such as relative risks, odds ratios and averages, with 95% CI.

Confidence intervals and statistical significance

For relative risks and odds ratios, a 95% CI that includes the number 1.0 means there’s no link between an exposure (such as a risk factor or a treatment) and an outcome (such as breast cancer or survival).

When this happens, the results are not statistically significant. This means any link between the exposure and outcome is likely due to chance.

If a 95% CI does not include 1.0, the results are statistically significant. This means there’s likely a true link between an exposure and an outcome.

Examples of confidence intervals

A few examples from the sample table above may help explain statistical significance.

The EPIC study found a relative risk of breast cancer of 1.07, with a 95% CI of 0.96 to 1.19. In the table, you will see 1.07 (0.96-1.19).

Women in the EPIC study who drank 1-2 drinks per day had a 7 percent higher risk of breast cancer than women who did not drink alcohol. The 95% CI of 0.96 to 1.19 includes 1.0. This means these results are not statistically significant and the increased risk of breast cancer is likely due to chance.

The Million Women Study found a relative risk of breast cancer of 1.13 with a 95% CI of 1.10 to 1.16. This is shown as 1.13 (1.10-1.16) in the table.

Women in the Million Women Study who drank 1-2 drinks per day had a 13 percent higher risk of breast cancer than women who did not drink alcohol. In this case, the 95% CI of 1.10 to 1.16 does not include 1.0. So, these results are statistically significant and suggest there’s likely a true link between alcohol and breast cancer.

For any topic, it’s important to look at the findings as a whole. In the sample table above, most studies show a statistically significant increase in risk among women who drink alcohol compared to women who don’t drink alcohol. Thus, the findings as a whole suggest alcohol increases the risk of breast cancer.

Summary relative risks

Summary relative risks from meta-analyses

A meta-analysis takes relative risks reported in different studies and “averages” them to come up with a single, summary measure. Findings from a meta-analysis can give stronger conclusions than findings from a single study.

Summary relative risks from pooled analyses

A pooled analysis uses data from multiple studies to give a summary measure. It combines the data from each person in each of the studies into one large data set and analyzes the data as if it were one big study. A pooled analysis is almost always better than a meta-analysis.

In a meta-analysis, researchers combine the results from different studies. In a pooled analysis, researchers combine the individual data from the different studies. This usually gives more statistical power than a meta-analysis. More statistical power means it’s more likely the results are not simply due to chance.
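
As a minimal sketch of the “averaging” in a fixed-effect meta-analysis (one common approach, not necessarily how the studies in these tables were combined), relative risks can be pooled on the log scale with inverse-variance weights, here using the two results quoted above:

```r
rr <- c(1.07, 1.13)   # EPIC and Million Women Study relative risks
lo <- c(0.96, 1.10)   # lower 95% confidence limits
hi <- c(1.19, 1.16)   # upper 95% confidence limits

se <- (log(hi) - log(lo)) / (2 * 1.96)   # back out each SE from its CI
w  <- 1 / se^2                           # inverse-variance weights
log_rr <- sum(w * log(rr)) / sum(w)
exp(log_rr)                                    # summary relative risk, ~1.13
exp(log_rr + c(-1.96, 1.96) / sqrt(sum(w)))    # its 95% CI, ~1.10 to 1.16
```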

Cumulative risk

Sometimes, study findings are presented as a cumulative risk (risk up to a certain age). This risk is often shown as a percentage.

A cumulative risk may show the risk of breast cancer for a certain group of people up to a certain age. Say the cumulative risk up to age 70 for women with a risk factor is 20 percent. This means by age 70, 20 percent of the women (or 1 in 5) with the risk factor will get breast cancer.

Lifetime risk is a cumulative risk. It shows the risk of getting breast cancer during your lifetime (or up to a certain age). Women in the U.S. have a 13 percent lifetime risk of getting breast cancer. This means 1 in 8 women in the U.S. will get breast cancer during their lifetime.

Learn more about lifetime risk.

Sensitivity and specificity

Some tables show study findings on the sensitivity and specificity of screening tests. These measures describe the quality of a breast cancer screening test.

  • Sensitivity shows how well the screening test identifies who truly has breast cancer. A sensitivity of 90 percent means 90 percent of people tested who truly have breast cancer are correctly identified as having cancer.
  • Specificity shows how well the screening test identifies who truly does not have breast cancer. A specificity of 90 percent means 90 percent of the people who do not have breast cancer are correctly identified as not having cancer.

The goals of any screening test are:

  • To correctly identify everyone who has a certain disease (100 percent sensitivity)
  • To correctly identify everyone who does not have the disease (100 percent specificity)

A perfect test would correctly identify everyone with no mistakes. There would be no:

  • False negatives (when people who have the disease are missed by the test)
  • False positives (when healthy people are incorrectly shown to have the disease)

No screening test has perfect (100 percent) sensitivity and perfect (100 percent) specificity. There’s always a trade-off between the two. When a test gains sensitivity, it loses some specificity.

Learn more about sensitivity and specificity.
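
A minimal sketch in R, with made-up screening counts, of how the two measures come out of a 2x2 table of test result versus true disease status:

```r
tp <- 90   # true positives: have cancer, test positive
fn <- 10   # false negatives: have cancer, test negative (missed)
tn <- 855  # true negatives: no cancer, test negative
fp <- 45   # false positives: no cancer, test positive

tp / (tp + fn)   # sensitivity: 90 / 100  = 0.90
tn / (tn + fp)   # specificity: 855 / 900 = 0.95
```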

Finding studies

You may want more detail about a study than is given in the summary table. To help you find this information, the references for all the studies in a table are listed below the table.

Each reference includes the:

  • Authors of the study article
  • Title of the article
  • Year the article was published
  • Title and specific issue of the medical journal where the article appeared

PubMed, the National Library of Medicine’s search engine, is a good source for finding summaries of science and medical journal articles (called abstracts).

For some abstracts, PubMed also has links to the full text articles. Most medical journals have websites and offer their articles either for free or for a fee.

If you live near a university with a medical school or public health school, you may be able to go to the school’s medical library to get a copy of an article. Local public libraries may not carry medical journals, but they may be able to find a copy of an article from another source.

More information on research studies

If you’re interested in learning more about health research, a basic epidemiology textbook may be a good place to start. The National Cancer Institute also has information on epidemiology studies and randomized controlled trials.


How To Write a Statistical Research Paper: Tips, Topics, Outline


Working on a research paper can be challenging. Some people even pay online writing companies to do the job for them. While this might seem like a good solution, it can cost a lot of money. A cheaper option is to research the critical parts of your essay yourself. Your data should come from reliable sources for your research paper to be authentic. You will also need to introduce your work to your readers in a way that is straightforward and relevant to the topic. With this in mind, here is a guideline to help you succeed in your research writing. But before that, let’s see what the outline should look like.

The Outline


How to write a statistical analysis paper is a puzzle many people find difficult to crack. It’s not as challenging a task as you might think, especially once you learn some helpful tips to make the writing process easier. It’s much like working on any other essay: you only need to get the format and structure right and understand the process. Here is what the general outline should look like:

  • introduction;
  • problem statement;
  • objectives;
  • methodology;
  • data examination;
  • discussion;
  • conclusion and recommendations.

Let us now see some tips that can help you become a better statistical researcher.


Tips for Writing Statistics Research Paper

If you are wondering how people write their papers, you are in the right place. We’ll take a look at a few pointers that can help you come up with amazing work.

Choose A Topic

Basically, this is the most important stage of your essay. You need a clear and accessible topic to write about; a well-formed topic helps your paper stand out. Start by explaining to your audience what your paper is about. Also, check whether there is enough data to support your idea; the weaker the topic, the harder your work will be. Is the potential theme within the realm of statistics? Can the question at hand be answered with the available data? These are questions to answer first. In the end, the topic you choose should provide sufficient room for independent data collection and analysis.

Collect Data

This stage relies heavily on the quantity of data sources and the method used to collect them. Keep in mind that you must stick to the chosen methodology throughout your essay. It is also important to explain why you opted for the data collection method used. Plus, be cautious when collecting information: one simple mistake can compromise the entire work. You can source your data from reliable online databases, read published articles, or collect your own findings through experiments. However, if your instructor provides specific recommendations, follow them. Don’t twist the information to fit your interests, to avoid losing originality. And if no recommendations are given, ask your instructor to provide some.

Write Body Paragraphs

Use the information gathered to create the main body of your essay. After identifying an applicable area of interest, use the data to build your paragraphs. You can start by making a rough draft of your findings and then use it as a guide for your main essay. The next step is to interpret the numerical figures and draw conclusions. This stage requires proficiency in interpreting statistics: break down the figures and pinpoint only the most meaningful parts of them. Also, include some common counterpoints and support the information with specific examples.

Create Your Essay

Now that you have all the appropriate materials at hand, this section will be easy. Simply note down all the information gathered, citing your sources as well. Make sure not to copy and paste directly to avoid plagiarism. Your content should be unique and easy to read, too. We recommend proofreading and polishing your work before making it public. In addition, be on the lookout for any grammatical, spelling, or punctuation mistakes.

Write the Conclusion

This section is a summary of all your findings. Explain the importance of what you are doing. You can also include suggestions for future work. Make sure to restate what you mentioned in the introduction and touch briefly on the method used to collect and analyze your data. In short, sum up everything you’ve written in your essay.

How to Find Statistical Topics for your Paper

Statistics is a discipline that involves collecting, analyzing, organizing, presenting, and interpreting data. If you are looking for the right topic for your work, here are a few things to consider.

  • Start by finding out what topics have already been worked on and pick the remaining areas.
  • Consider recent developments in your field of study that may inspire a new topic.
  • Think about any specific questions or problems you have come across that could be explored further.
  • Ask your advisor or mentor for suggestions.
  • Review conference proceedings, journal articles, and other publications.
  • Try a brainstorming technique. For instance, list related keywords and combine them in different ways to generate new ideas.

Try out some of these tips. Be sure to find something that will work for you.

Working on a statistics paper can be quite challenging. But with the right information sources, everything becomes easier. This guide will help you reveal the secrets of preparing such essays. Also, don’t forget to read more to broaden your knowledge. You can find statistics research paper examples and refer to them for ideas. Nonetheless, if you’re still not confident enough, you can always hire a trustworthy writing company to get the job done.


What the Data Says About Abortion in the U.S.

Pew Research Center has conducted many surveys about abortion over the years, providing a lens into Americans’ views on whether the procedure should be legal, among a host of other questions.

In a  Center survey  conducted nearly a year after the Supreme Court’s June 2022 decision that  ended the constitutional right to abortion , 62% of U.S. adults said the practice should be legal in all or most cases, while 36% said it should be illegal in all or most cases. Another survey conducted a few months before the decision showed that relatively few Americans take an absolutist view on the issue .

Find answers to common questions about abortion in America, based on data from the Centers for Disease Control and Prevention (CDC) and the Guttmacher Institute, which have tracked these patterns for several decades:

  • How many abortions are there in the U.S. each year?
  • How has the number of abortions in the U.S. changed over time?
  • What is the abortion rate among women in the U.S.? How has it changed over time?
  • What are the most common types of abortion?
  • How many abortion providers are there in the U.S., and how has that number changed?
  • What percentage of abortions are for women who live in a different state from the abortion provider?
  • What are the demographics of women who have had abortions?
  • When during pregnancy do most abortions occur?
  • How often are there medical complications from abortion?

This compilation of data on abortion in the United States draws mainly from two sources: the Centers for Disease Control and Prevention (CDC) and the Guttmacher Institute, both of which have regularly compiled national abortion data for approximately half a century, and which collect their data in different ways.

The CDC data that is highlighted in this post comes from the agency’s “abortion surveillance” reports, which have been published annually since 1974 (and which have included data from 1969). Its figures from 1973 through 1996 include data from all 50 states, the District of Columbia and New York City – 52 “reporting areas” in all. Since 1997, the CDC’s totals have lacked data from some states (most notably California) for the years that those states did not report data to the agency. The four reporting areas that did not submit data to the CDC in 2021 – California, Maryland, New Hampshire and New Jersey – accounted for approximately 25% of all legal induced abortions in the U.S. in 2020, according to Guttmacher’s data. Most states, though,  do  have data in the reports, and the figures for the vast majority of them came from each state’s central health agency, while for some states, the figures came from hospitals and other medical facilities.

Discussion of CDC abortion data involving women’s state of residence, marital status, race, ethnicity, age, abortion history and the number of previous live births excludes the low share of abortions where that information was not supplied. Read the methodology for the CDC’s latest abortion surveillance report , which includes data from 2021, for more details. Previous reports can be found at  stacks.cdc.gov  by entering “abortion surveillance” into the search box.

For the numbers of deaths caused by induced abortions in 1963 and 1965, this analysis looks at reports by the then-U.S. Department of Health, Education and Welfare, a precursor to the Department of Health and Human Services. In computing those figures, we excluded abortions listed in the report under the categories “spontaneous or unspecified” or as “other.” (“Spontaneous abortion” is another way of referring to miscarriages.)

Guttmacher data in this post comes from national surveys of abortion providers that Guttmacher has conducted 19 times since 1973. Guttmacher compiles its figures after contacting every known provider of abortions – clinics, hospitals and physicians’ offices – in the country. It uses questionnaires and health department data, and it provides estimates for abortion providers that don’t respond to its inquiries. (In 2020, the last year for which it has released data on the number of abortions in the U.S., it used estimates for 12% of abortions.) For most of the 2000s, Guttmacher has conducted these national surveys every three years, each time getting abortion data for the prior two years. For each interim year, Guttmacher has calculated estimates based on trends from its own figures and from other data.
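Guttmacher's interim-year figures are trend-based estimates; the exact model is not described in this post, so purely as an illustrative sketch, a simple linear interpolation between two survey-year totals captures the basic idea (the function and inputs below are hypothetical stand-ins, not Guttmacher's method):

```python
def interpolate(y0: float, y1: float, t: float) -> float:
    """Linearly interpolate between two survey-year figures, 0 <= t <= 1."""
    return y0 + t * (y1 - y0)

# Illustrative only: a hypothetical point midway between two reported
# annual totals (the 2019 and 2020 totals quoted later in this post).
print(interpolate(916_460, 930_160, 0.5))  # 923310.0
```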

The latest full summary of Guttmacher data came in the institute’s report titled “Abortion Incidence and Service Availability in the United States, 2020.” It includes figures for 2020 and 2019 and estimates for 2018. The report includes a methods section.

In addition, this post uses data from StatPearls, an online health care resource, on complications from abortion.

An exact answer is hard to come by. The CDC and the Guttmacher Institute have each tried to measure this for around half a century, but they use different methods and publish different figures.

The last year for which the CDC reported a yearly national total for abortions is 2021. It found there were 625,978 abortions in the District of Columbia and the 46 states with available data that year, up from 597,355 in those states and D.C. in 2020. The corresponding figure for 2019 was 607,720.

The last year for which Guttmacher reported a yearly national total was 2020. It said there were 930,160 abortions that year in all 50 states and the District of Columbia, compared with 916,460 in 2019.

  • How the CDC gets its data: It compiles figures that are voluntarily reported by states’ central health agencies, including separate figures for New York City and the District of Columbia. Its latest totals do not include figures from California, Maryland, New Hampshire or New Jersey, which did not report data to the CDC. ( Read the methodology from the latest CDC report .)
  • How Guttmacher gets its data: It compiles its figures after contacting every known abortion provider – clinics, hospitals and physicians’ offices – in the country. It uses questionnaires and health department data, then provides estimates for abortion providers that don’t respond. Guttmacher’s figures are higher than the CDC’s in part because they include data (and in some instances, estimates) from all 50 states. ( Read the institute’s latest full report and methodology .)

While the Guttmacher Institute supports abortion rights, its empirical data on abortions in the U.S. has been widely cited by  groups  and  publications  across the political spectrum, including by a  number of those  that  disagree with its positions .

These estimates from Guttmacher and the CDC are results of multiyear efforts to collect data on abortion across the U.S. Last year, Guttmacher also began publishing less precise estimates every few months , based on a much smaller sample of providers.

The figures reported by these organizations include only legal induced abortions conducted by clinics, hospitals or physicians’ offices, or those that make use of abortion pills dispensed from certified facilities such as clinics or physicians’ offices. They do not account for the use of abortion pills that were obtained  outside of clinical settings .


[Figure: Line chart showing the changing number of legal abortions in the U.S. since the 1970s.]

The annual number of U.S. abortions rose for years after Roe v. Wade legalized the procedure in 1973, reaching its highest levels around the late 1980s and early 1990s, according to both the CDC and Guttmacher. Since then, abortions have generally decreased at what a CDC analysis called  “a slow yet steady pace.”

Guttmacher says the number of abortions occurring in the U.S. in 2020 was 40% lower than it was in 1991. According to the CDC, the number was 36% lower in 2021 than in 1991, looking just at the District of Columbia and the 46 states that reported both of those years.

(The corresponding line graph shows the long-term trend in the number of legal abortions reported by both organizations. To allow for consistent comparisons over time, the CDC figures in the chart have been adjusted to ensure that the same states are counted from one year to the next. Using that approach, the CDC figure for 2021 is 622,108 legal abortions.)
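The adjustment works by restricting each comparison to the same set of reporting areas. A minimal sketch of that consistent-panel idea, with invented state counts (not CDC figures):

```python
# Invented counts, for illustration only: NY is missing in the second
# year, so it is dropped from both years before comparing totals.
counts_2020 = {"TX": 50_000, "FL": 70_000, "NY": 60_000}
counts_2021 = {"TX": 52_000, "FL": 72_000}

common = counts_2020.keys() & counts_2021.keys()   # states reporting both years
total_2020 = sum(counts_2020[s] for s in common)
total_2021 = sum(counts_2021[s] for s in common)
print(total_2020, total_2021)  # 120000 124000 -- an apples-to-apples change
```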

There have been occasional breaks in this long-term pattern of decline – during the middle of the first decade of the 2000s, and then again in the late 2010s. The CDC reported modest 1% and 2% increases in abortions in 2018 and 2019, and then, after a 2% decrease in 2020, a 5% increase in 2021. Guttmacher reported an 8% increase over the three-year period from 2017 to 2020.

As noted above, these figures do not include abortions that use pills obtained outside of clinical settings.

Guttmacher says that in 2020 there were 14.4 abortions in the U.S. per 1,000 women ages 15 to 44. Its data shows that the rate of abortions among women has generally been declining in the U.S. since 1981, when it reported there were 29.3 abortions per 1,000 women in that age range.

The CDC says that in 2021, there were 11.6 abortions in the U.S. per 1,000 women ages 15 to 44. (That figure excludes data from California, the District of Columbia, Maryland, New Hampshire and New Jersey.) Like Guttmacher’s data, the CDC’s figures also suggest a general decline in the abortion rate over time. In 1980, when the CDC reported on all 50 states and D.C., it said there were 25 abortions per 1,000 women ages 15 to 44.

That said, both Guttmacher and the CDC say there were slight increases in the rate of abortions during the late 2010s and early 2020s. Guttmacher says the abortion rate per 1,000 women ages 15 to 44 rose from 13.5 in 2017 to 14.4 in 2020. The CDC says it rose from 11.2 per 1,000 in 2017 to 11.4 in 2019, before falling back to 11.1 in 2020 and then rising again to 11.6 in 2021. (The CDC’s figures for those years exclude data from California, D.C., Maryland, New Hampshire and New Jersey.)
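A rate per 1,000 women is simply the count divided by the population at risk, scaled by 1,000. A minimal sketch; the denominator below is an assumed, approximate count of U.S. women ages 15 to 44, chosen only to reproduce the quoted 2020 figure:

```python
def rate_per_1000(events: int, population: int) -> float:
    """Events per 1,000 members of the at-risk population."""
    return events / population * 1_000

# 930,160 abortions (Guttmacher, 2020) over an assumed ~64.6 million
# women ages 15 to 44 gives roughly the 14.4 per 1,000 rate quoted above.
print(round(rate_per_1000(930_160, 64_600_000), 1))  # 14.4
```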

The CDC broadly divides abortions into two categories: surgical abortions and medication abortions, which involve pills. Since the Food and Drug Administration first approved abortion pills in 2000, their use has increased over time as a share of abortions nationally, according to both the CDC and Guttmacher.

The majority of abortions in the U.S. now involve pills, according to both the CDC and Guttmacher. The CDC says 56% of U.S. abortions in 2021 involved pills, up from 53% in 2020 and 44% in 2019. Its figures for 2021 include the District of Columbia and 44 states that provided this data; its figures for 2020 include D.C. and 44 states (though not all of the same states as in 2021), and its figures for 2019 include D.C. and 45 states.

Guttmacher, which measures this every three years, says 53% of U.S. abortions involved pills in 2020, up from 39% in 2017.

Two pills commonly used together for medication abortions are mifepristone, which, taken first, blocks hormones that support a pregnancy, and misoprostol, which then causes the uterus to empty. According to the FDA, medication abortions are safe  until 10 weeks into pregnancy.

Surgical abortions conducted  during the first trimester  of pregnancy typically use a suction process, while the relatively few surgical abortions that occur  during the second trimester  of a pregnancy typically use a process called dilation and evacuation, according to the UCLA School of Medicine.

In 2020, there were 1,603 facilities in the U.S. that provided abortions,  according to Guttmacher . This included 807 clinics, 530 hospitals and 266 physicians’ offices.

[Figure: Horizontal stacked bar chart showing the total number of abortion providers declining since 1982.]

While clinics make up half of the facilities that provide abortions, they are the sites where the vast majority (96%) of abortions are administered, either through procedures or the distribution of pills, according to Guttmacher’s 2020 data. (This includes 54% of abortions that are administered at specialized abortion clinics and 43% at nonspecialized clinics.) Hospitals made up 33% of the facilities that provided abortions in 2020 but accounted for only 3% of abortions that year, while just 1% of abortions were conducted by physicians’ offices.

Looking just at clinics – that is, the total number of specialized abortion clinics and nonspecialized clinics in the U.S. – Guttmacher found the total virtually unchanged between 2017 (808 clinics) and 2020 (807 clinics). However, there were regional differences. In the Midwest, the number of clinics that provide abortions increased by 11% during those years, and in the West by 6%. The number of clinics  decreased  during those years by 9% in the Northeast and 3% in the South.

The total number of abortion providers has declined dramatically since the 1980s. In 1982, according to Guttmacher, there were 2,908 facilities providing abortions in the U.S., including 789 clinics, 1,405 hospitals and 714 physicians’ offices.

The CDC does not track the number of abortion providers.

In the District of Columbia and the 46 states that provided abortion and residency information to the CDC in 2021, 10.9% of all abortions were performed on women known to live outside the state where the abortion occurred – slightly higher than the percentage in 2020 (9.7%). That year, D.C. and 46 states (though not the same ones as in 2021) reported abortion and residency data. (The total number of abortions used in these calculations included figures for women with both known and unknown residential status.)

The share of reported abortions performed on women outside their state of residence was much higher before the 1973 Roe decision that stopped states from banning abortion. In 1972, 41% of all abortions in D.C. and the 20 states that provided this information to the CDC that year were performed on women outside their state of residence. In 1973, the corresponding figure was 21% in the District of Columbia and the 41 states that provided this information, and in 1974 it was 11% in D.C. and the 43 states that provided data.

In the District of Columbia and the 46 states that reported age data to  the CDC in 2021, the majority of women who had abortions (57%) were in their 20s, while about three-in-ten (31%) were in their 30s. Teens ages 13 to 19 accounted for 8% of those who had abortions, while women ages 40 to 44 accounted for about 4%.

The vast majority of women who had abortions in 2021 were unmarried (87%), while married women accounted for 13%, according to  the CDC , which had data on this from 37 states.

[Figure: Pie chart showing that, in 2021, the majority of abortions were for women who had never had one before.]

In the District of Columbia, New York City (but not the rest of New York) and the 31 states that reported racial and ethnic data on abortion to  the CDC , 42% of all women who had abortions in 2021 were non-Hispanic Black, while 30% were non-Hispanic White, 22% were Hispanic and 6% were of other races.

Looking at abortion rates among those ages 15 to 44, there were 28.6 abortions per 1,000 non-Hispanic Black women in 2021; 12.3 abortions per 1,000 Hispanic women; 6.4 abortions per 1,000 non-Hispanic White women; and 9.2 abortions per 1,000 women of other races, the  CDC reported  from those same 31 states, D.C. and New York City.

For 57% of U.S. women who had induced abortions in 2021, it was the first time they had ever had one,  according to the CDC.  For nearly a quarter (24%), it was their second abortion. For 11% of women who had an abortion that year, it was their third, and for 8% it was their fourth or more. These CDC figures include data from 41 states and New York City, but not the rest of New York.

[Figure: Bar chart showing that most U.S. abortions in 2021 were for women who had previously given birth.]

Nearly four-in-ten women who had abortions in 2021 (39%) had no previous live births at the time they had an abortion,  according to the CDC . Almost a quarter (24%) of women who had abortions in 2021 had one previous live birth, 20% had two previous live births, 10% had three, and 7% had four or more previous live births. These CDC figures include data from 41 states and New York City, but not the rest of New York.

The vast majority of abortions occur during the first trimester of a pregnancy. In 2021, 93% of abortions occurred during the first trimester – that is, at or before 13 weeks of gestation,  according to the CDC . An additional 6% occurred between 14 and 20 weeks of pregnancy, and about 1% were performed at 21 weeks or more of gestation. These CDC figures include data from 40 states and New York City, but not the rest of New York.

About 2% of all abortions in the U.S. involve some type of complication for the woman , according to an article in StatPearls, an online health care resource. “Most complications are considered minor such as pain, bleeding, infection and post-anesthesia complications,” according to the article.

The CDC calculates case-fatality rates for women from induced abortions – that is, how many women die from abortion-related complications for every 100,000 legal abortions that occur in the U.S. The rate was lowest during the most recent period examined by the agency (2013 to 2020), when there were 0.45 deaths to women per 100,000 legal induced abortions. The case-fatality rate reported by the CDC was highest during the first period examined by the agency (1973 to 1977), when it was 2.09 deaths to women per 100,000 legal induced abortions. During the five-year periods in between, the figure ranged from 0.52 (from 1993 to 1997) to 0.78 (from 1978 to 1982).

The CDC calculates death rates by five-year and seven-year periods because of year-to-year fluctuation in the numbers and due to the relatively low number of women who die from legal induced abortions.
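Pooling several years before dividing is what keeps a small-numbers rate stable. A sketch of that calculation with invented yearly counts (not CDC data):

```python
# Invented yearly counts, for illustration only.
deaths_by_year = [3, 2, 4, 6]
abortions_by_year = [620_000, 615_000, 610_000, 625_000]

# Case-fatality rate per 100,000, pooled over the whole period rather
# than computed year by year, which would fluctuate with small counts.
pooled = sum(deaths_by_year) / sum(abortions_by_year) * 100_000
print(round(pooled, 2))  # 0.61
```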

In 2020, the last year for which the CDC has information , six women in the U.S. died due to complications from induced abortions. Four women died in this way in 2019, two in 2018, and three in 2017. (These deaths all followed legal abortions.) Since 1990, the annual number of deaths among women due to legal induced abortion has ranged from two to 12.

The annual number of reported deaths from induced abortions (legal and illegal) tended to be higher in the 1980s, when it ranged from nine to 16, and from 1972 to 1979, when it ranged from 13 to 63. One driver of the decline was the drop in deaths from illegal abortions. There were 39 deaths from illegal abortions in 1972, the last full year before Roe v. Wade. The total fell to 19 in 1973 and to single digits or zero every year after that. (The number of deaths from legal abortions has also declined since then, though with some slight variation over time.)

The number of deaths from induced abortions was considerably higher in the 1960s than afterward. For instance, there were 119 deaths from induced abortions in  1963  and 99 in  1965 , according to reports by the then-U.S. Department of Health, Education and Welfare, a precursor to the Department of Health and Human Services. The CDC is a division of Health and Human Services.

Note: This is an update of a post originally published May 27, 2022, and first updated June 24, 2022.


Research: How Different Fields Are Using GenAI to Redefine Roles

By Maryam Alavi

Examples from customer support, management consulting, professional writing, legal analysis, and software and technology.

The interactive, conversational, analytical, and generative features of GenAI offer support for creativity, problem-solving, and processing and digestion of large bodies of information. Therefore, these features can act as cognitive resources for knowledge workers. Moreover, the capabilities of GenAI can mitigate various hindrances to effective performance that knowledge workers may encounter in their jobs, including time pressure, gaps in knowledge and skills, and negative feelings (such as boredom stemming from repetitive tasks or frustration arising from interactions with dissatisfied customers). Empirical research and field observations have already begun to reveal the value of GenAI capabilities and their potential for job crafting.

There is an expectation that implementing new and emerging Generative AI (GenAI) tools enhances the effectiveness and competitiveness of organizations. This belief is evidenced by current and planned investments in GenAI tools, especially by firms in knowledge-intensive industries such as finance, healthcare, and entertainment, among others. According to forecasts, enterprise spending on GenAI will double in 2024 and grow to $151.1 billion by 2027.

Maryam Alavi is the Elizabeth D. & Thomas M. Holder Chair & Professor of IT Management, Scheller College of Business, Georgia Institute of Technology.


What the Data Says About Pandemic School Closures, Four Years Later

The more time students spent in remote instruction, the further they fell behind. And, experts say, extended closures did little to stop the spread of Covid.

By Sarah Mervosh, Claire Cain Miller and Francesca Paris

Four years ago this month, schools nationwide began to shut down, igniting one of the most polarizing and partisan debates of the pandemic.

Some schools, often in Republican-led states and rural areas, reopened by fall 2020. Others, typically in large cities and states led by Democrats, would not fully reopen for another year.

A variety of data — about children’s academic outcomes and about the spread of Covid-19 — has accumulated in the time since. Today, there is broad acknowledgment among many public health and education experts that extended school closures did not significantly stop the spread of Covid, while the academic harms for children have been large and long-lasting.

While poverty and other factors also played a role, remote learning was a key driver of academic declines during the pandemic, research shows — a finding that held true across income levels.

Source: Fahle, Kane, Patterson, Reardon, Staiger and Stuart, “ School District and Community Factors Associated With Learning Loss During the COVID-19 Pandemic .” Score changes are measured from 2019 to 2022. In-person means a district offered traditional in-person learning, even if not all students were in-person.

“There’s fairly good consensus that, in general, as a society, we probably kept kids out of school longer than we should have,” said Dr. Sean O’Leary, a pediatric infectious disease specialist who helped write guidance for the American Academy of Pediatrics, which recommended in June 2020 that schools reopen with safety measures in place.

There were no easy decisions at the time. Officials had to weigh the risks of an emerging virus against the academic and mental health consequences of closing schools. And even schools that reopened quickly, by the fall of 2020, have seen lasting effects.

But as experts plan for the next public health emergency, whatever it may be, a growing body of research shows that pandemic school closures came at a steep cost to students.

The longer schools were closed, the more students fell behind.

At the state level, more time spent in remote or hybrid instruction in the 2020-21 school year was associated with larger drops in test scores, according to a New York Times analysis of school closure data and results from the National Assessment of Educational Progress , an authoritative exam administered to a national sample of fourth- and eighth-grade students.

At the school district level, that finding also holds, according to an analysis of test scores from third through eighth grade in thousands of U.S. districts, led by researchers at Stanford and Harvard. In districts where students spent most of the 2020-21 school year learning remotely, they fell more than half a grade behind in math on average, while in districts that spent most of the year in person they lost just over a third of a grade.

( A separate study of nearly 10,000 schools found similar results.)

Such losses can be hard to overcome without significant interventions. The most recent test scores, from spring 2023, show that students overall have not caught up from their pandemic losses, with larger gaps remaining among students who lost the most ground to begin with. Students in districts that were remote or hybrid the longest — at least 90 percent of the 2020-21 school year — still had almost double the ground to make up compared with students in districts that allowed students back for most of the year.
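At its core, the district-level comparison amounts to grouping districts by learning mode and averaging their score changes. A toy sketch with invented numbers (the actual analyses also control for poverty and other factors):

```python
# Invented district records, for illustration only; math_change is a
# hypothetical change in grade-level equivalents from 2019 to 2022.
districts = [
    {"mode": "remote",    "math_change": -0.62},
    {"mode": "remote",    "math_change": -0.55},
    {"mode": "in_person", "math_change": -0.31},
    {"mode": "in_person", "math_change": -0.38},
]

for mode in ("remote", "in_person"):
    changes = [d["math_change"] for d in districts if d["mode"] == mode]
    print(mode, round(sum(changes) / len(changes), 2))
# remote -0.58, in_person -0.34 (invented figures)
```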

Some time in person was better than no time.

As districts shifted toward in-person learning as the year went on, students who were offered a hybrid schedule (a few hours or days a week in person, with the rest online) did better, on average, than those in places where school was fully remote, but worse than those in places that had school fully in person.

[Figure: Timeline of the share of students in hybrid or remote learning during the 2020-21 school year (peaking around 80% of students), annotated with milestones: some schools return online as Covid-19 cases surge; vaccinations start for high-priority groups; teachers become eligible for the Covid vaccine in more than half of states; most districts end the year in person or hybrid. Source: Burbio audit of more than 1,200 school districts representing 47 percent of U.S. K-12 enrollment. Note: Learning mode was defined based on the most in-person option available to students.]

Income and family background also made a big difference.

A second factor associated with academic declines during the pandemic was a community’s poverty level. Comparing districts with similar remote learning policies, poorer districts had steeper losses.

But in-person learning still mattered: Looking at districts with similar poverty levels, remote learning was associated with greater declines.

A community’s poverty rate and the length of school closures had a “roughly equal” effect on student outcomes, said Sean F. Reardon, a professor of poverty and inequality in education at Stanford, who led a district-level analysis with Thomas J. Kane, an economist at Harvard.

Note: Score changes are measured from 2019 to 2022. Poorest and richest are the top and bottom 20% of districts by the percentage of students on free or reduced-price lunch. Mostly in-person and mostly remote are districts that offered traditional in-person learning for more than 90 percent or less than 10 percent of the 2020-21 year.

But the combination — poverty and remote learning — was particularly harmful. For each week spent remote, students in poor districts experienced steeper losses in math than peers in richer districts.

That is notable, because poor districts were also more likely to stay remote for longer .

Some of the country’s largest poor districts are in Democratic-leaning cities that took a more cautious approach to the virus. Poor areas, and Black and Hispanic communities , also suffered higher Covid death rates, making many families and teachers in those districts hesitant to return.

“We wanted to survive,” said Sarah Carpenter, the executive director of Memphis Lift, a parent advocacy group in Memphis, where schools were closed until spring 2021 .

“But I also think, man, looking back, I wish our kids could have gone back to school much quicker,” she added, citing the academic effects.

Other things were also associated with worse student outcomes, including increased anxiety and depression among adults in children’s lives, and the overall restriction of social activity in a community, according to the Stanford and Harvard research .

Even short closures had long-term consequences for children.

While being in school was on average better for academic outcomes, it wasn’t a guarantee. Some districts that opened early, like those in Cherokee County, Ga., a suburb of Atlanta, and Hanover County, Va., lost significant learning and remain behind.

At the same time, many schools are seeing more anxiety and behavioral outbursts among students. And chronic absenteeism from school has surged across demographic groups .

These are signs, experts say, that even short-term closures, and the pandemic more broadly, had lasting effects on the culture of education.

“There was almost, in the Covid era, a sense of, ‘We give up, we’re just trying to keep body and soul together,’ and I think that was corrosive to the higher expectations of schools,” said Margaret Spellings, an education secretary under President George W. Bush who is now chief executive of the Bipartisan Policy Center.

Closing schools did not appear to significantly slow Covid’s spread.

Perhaps the biggest question that hung over school reopenings: Was it safe?

That was largely unknown in the spring of 2020, when schools first shut down. But several experts said that had changed by the fall of 2020, when there were initial signs that children were less likely to become seriously ill, and growing evidence from Europe and parts of the United States that opening schools, with safety measures, did not lead to significantly more transmission.

“Infectious disease leaders have generally agreed that school closures were not an important strategy in stemming the spread of Covid,” said Dr. Jeanne Noble, who directed the Covid response at the U.C.S.F. Parnassus emergency department.

Politically, though, there remains some disagreement about when, exactly, it was safe to reopen school.

Republican governors who pushed to open schools sooner have claimed credit for their approach, while Democrats and teachers’ unions have emphasized their commitment to safety and their investment in helping students recover.

“I do believe it was the right decision,” said Jerry T. Jordan, president of the Philadelphia Federation of Teachers, which resisted returning to school in person over concerns about the availability of vaccines and poor ventilation in school buildings. Philadelphia schools waited to partially reopen until the spring of 2021 , a decision Mr. Jordan believes saved lives.

“It doesn’t matter what is going on in the building and how much people are learning if people are getting the virus and running the potential of dying,” he said.

Pandemic school closures offer lessons for the future.

Though the next health crisis may have different particulars, with different risk calculations, the consequences of closing schools are now well established, experts say.

In the future, infectious disease experts said, they hoped decisions would be guided more by epidemiological data as it emerged, taking into account the trade-offs.

“Could we have used data to better guide our decision making? Yes,” said Dr. Uzma N. Hasan, division chief of pediatric infectious diseases at RWJBarnabas Health in Livingston, N.J. “Fear should not guide our decision making.”

Source: Fahle, Kane, Patterson, Reardon, Staiger and Stuart, “ School District and Community Factors Associated With Learning Loss During the Covid-19 Pandemic. ”

The study used estimates of learning loss from the Stanford Education Data Archive . For closure lengths, the study averaged district-level estimates of time spent in remote and hybrid learning compiled by the Covid-19 School Data Hub (C.S.D.H.) and American Enterprise Institute (A.E.I.) . The A.E.I. data defines remote status by whether there was an in-person or hybrid option, even if some students chose to remain virtual. In the C.S.D.H. data set, districts are defined as remote if “all or most” students were virtual.

An earlier version of this article misstated a job description of Dr. Jeanne Noble. She directed the Covid response at the U.C.S.F. Parnassus emergency department. She did not direct the Covid response for the University of California, San Francisco health system.


Sarah Mervosh covers education for The Times, focusing on K-12 schools. Claire Cain Miller writes about gender, families and the future of work for The Upshot; she joined The Times in 2008 and was part of a team that won a Pulitzer Prize in 2018 for public service for reporting on workplace sexual harassment issues. Francesca Paris is a Times reporter working with data and graphics for The Upshot.


Landmark IBM error correction paper published on the cover of Nature

IBM has created a quantum error-correcting code about 10 times more efficient than prior methods — a milestone in quantum computing research.

27 Mar 2024

By Rafi Letzter

Today, the paper detailing those results was published as the cover story of the scientific journal Nature.[1]

Last year, we demonstrated that quantum computers had entered the era of utility, where they are now capable of running quantum circuits better than classical computers can. Over the next few years, we expect to find speedups over classical computing and extract business value from these systems. But there are also algorithms with mathematically proven speedups over leading classical methods that require tuning quantum circuits with hundreds of millions to billions of gates. Expanding our quantum computing toolkit to include those algorithms requires us to find a way to compute that corrects the errors inherent to quantum systems — what we call quantum error correction.


Quantum error correction requires that we encode quantum information into more qubits than we would otherwise need. However, achieving quantum error correction in a scalable and fault-tolerant way has, to this point, been out of reach without considering scales of one million or more physical qubits. Our new result published today greatly reduces that overhead, and shows that error correction is within reach.

While quantum error correction theory dates back three decades, theoretical error correction techniques capable of running valuable quantum circuits on real hardware have been too impractical to deploy on quantum systems. In our new paper, we introduce a new code, which we call the gross code, that overcomes that limitation.

This code is part of our broader strategy to bring useful quantum computing to the world.

While error correction is not a solved problem, this new code makes clear the path toward running quantum circuits with a billion gates or more on our superconducting transmon qubit hardware.

What is error correction?

Quantum information is fragile and susceptible to noise — environmental noise, noise from the control electronics, hardware imperfections, state preparation and measurement errors, and more. In order to run quantum circuits with millions to billions of gates, quantum error correction will be required.

Error correction works by building redundancy into quantum circuits. Many qubits work together to protect a piece of quantum information that a single qubit might lose to errors and noise.

On classical computers, the concept of redundancy is pretty straightforward. Classical error correction involves storing the same piece of information across multiple bits. Instead of storing a 1 as a 1 or a 0 as a 0, the computer might record 11111 or 00000. That way, if an error flips a minority of bits, the computer can treat 11001 as 1, or 10001 as 0. It’s fairly easy to build in more redundancy as needed to introduce finer error correction.
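To make the classical picture concrete, here is a minimal repetition-code sketch with majority-vote decoding, mirroring the 11111/00000 example above (this is the classical analogue only, not the quantum code discussed in this post):

```python
from collections import Counter

def encode(bit: int, n: int = 5) -> list[int]:
    """Store one bit as n redundant copies (a classical repetition code)."""
    return [bit] * n

def decode(codeword: list[int]) -> int:
    """Recover the bit by majority vote over the copies."""
    return Counter(codeword).most_common(1)[0][0]

# Two of five copies flipped: 11001 still decodes to 1, as described above.
assert decode([1, 1, 0, 0, 1]) == 1
assert decode([1, 0, 0, 0, 1]) == 0   # and 10001 decodes to 0
```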

Things are more complicated on quantum computers. Quantum information cannot be copied and pasted like classical information, and the information stored in quantum bits is more complicated than classical data. And of course, qubits can decohere quickly, forgetting their stored information.

Research has shown that quantum fault tolerance is possible, and there are many error correcting schemes on the books. The most popular one is called the “surface code,” where qubits are arranged on a two-dimensional lattice and units of information are encoded into sub-units of the lattice.

But these schemes have problems.

First, they only work if the hardware’s error rates are better than some threshold determined by the specific scheme and the properties of the noise itself — and beating those thresholds can be a challenge.

Second, many of those schemes scale inefficiently — as you build larger quantum computers, the number of extra qubits needed for error correction far outpaces the number of qubits the code can store.

At practical code sizes where many errors can be corrected, the surface code uses hundreds of physical qubits per encoded qubit worth of quantum information, or more. So, while the surface code is useful for benchmarking and learning about error correction, it’s probably not the end of the story for fault-tolerant quantum computers.

Exploring “good” codes

The field of error correction buzzed with excitement in 2022 when Pavel Panteleev and Gleb Kalachev at Moscow State University published a landmark paper proving that there exist asymptotically good codes — codes where the number of extra qubits needed levels off as the quality of the code increases.

This has spurred a lot of new work in error correction, especially in the same family of codes that the surface code hails from, called quantum low-density parity check, or qLDPC codes. These qLDPC codes are quantum error correcting codes where the operations responsible for checking whether or not an error has occurred only have to act on a few qubits, and each qubit only has to participate in a few checks.

But this work was highly theoretical, focused on proving the possibility of this kind of error correction. It didn’t take into account the real constraints of building quantum computers. Most importantly, some qLDPC codes would require many qubits in a system to be physically linked to high numbers of other qubits. In practice, that would require quantum processors folded in on themselves in psychedelic hyper-dimensional origami, or entombed in wildly complex rats’ nests of wires.

In our new Nature paper, we looked for fault-tolerant quantum memory with a low qubit overhead, high error threshold and a large code distance.

Let’s break that down:

Fault-tolerant: The circuits used to detect errors won't spread those errors around too badly in the process, and the errors can be corrected faster than they occur.

Quantum memory: In this paper, we are only encoding and storing quantum information. We are not yet doing calculations on the encoded quantum information.

High error threshold: The higher the threshold, the higher the hardware error rate the code can tolerate while still being fault tolerant. We were looking for a code that allowed us to operate the memory reliably at physical error rates as high as 0.001, so we wanted a threshold close to 1 percent.

Large code distance: Distance is the measure of how robust the code is, namely how many errors it takes to completely flip the value from 0 to 1 and vice versa. In the case of 00000 and 11111, the distance is 5 (see the sketch after this list). We wanted a code with a large distance that corrects more than just a couple of errors. Large-distance codes can suppress noise by orders of magnitude even if the hardware quality is only marginally better than the code threshold. In contrast, codes with a small distance become useful only if the hardware quality is significantly better than the code threshold.

Low qubit overhead: Overhead is the number of extra qubits required for correcting errors. We want the number of qubits required to do error correction to be far less than we need for a surface code of the same quality, or distance.
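As promised in the distance definition above, here is a quick sketch of the classical notion it generalizes, the Hamming distance between codewords:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# The two codewords of the 5-bit repetition code differ everywhere, so
# its distance is 5: it can detect up to 4 bit flips and correct up to 2.
print(hamming_distance("00000", "11111"))  # 5
```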

We’re excited to report that our team’s mathematical analysis found concrete examples of qLDPC codes that met all of these required conditions. These fall into a family of codes called “Bivariate Bicycle (BB)” codes. And they are going to shape not only our research going forward, but how we architect physical quantum systems.

The gross code

While many qLDPC code families show great promise for advancing error correction theory, most aren’t necessarily pragmatic for real-world application. Our new codes lend themselves better to practical implementation because each qubit needs only to connect to six others, and the connections can be routed on just two layers.

To get an idea of how the qubits are connected, imagine they are put onto a square grid, like a piece of graph paper. Curl up this piece of graph paper so that it forms a tube, and connect the ends of the tube to make a donut. On this donut, each qubit is connected to its four neighbors and two qubits that are farther away on the surface of the donut. No more connections needed.

The good news is we don’t actually have to embed our qubits onto a donut to make these codes work — we can accomplish this by folding the surface differently and adding a few other long-range connectors to satisfy mathematical requirements of the code. It’s an engineering challenge, but much more feasible than a hyper-dimensional shape.
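As a sketch of the wrap-around connectivity just described: with periodic (torus) boundaries, each site's four nearest neighbors follow from modular arithmetic. The two long-range edges per qubit in the actual BB code are determined by the code's construction and are omitted here:

```python
def torus_neighbors(r: int, c: int, rows: int, cols: int):
    """Four nearest neighbors of site (r, c) on a grid whose edges wrap
    around like the surface of a donut (periodic boundary conditions)."""
    return [
        ((r - 1) % rows, c),  # north
        ((r + 1) % rows, c),  # south
        (r, (c - 1) % cols),  # west
        (r, (c + 1) % cols),  # east
    ]

# A corner site on a 12x12 torus wraps to the opposite edges:
print(torus_neighbors(0, 0, 12, 12))  # [(11, 0), (1, 0), (0, 11), (0, 1)]
```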

We explored some codes that have this architecture and focused on a particular [[144,12,12]] code. We call this code the gross code because 144 is a gross (a dozen dozen). It requires 144 qubits to store data, but in our specific implementation it also uses another 144 qubits to check for errors, so this instance of the code uses 288 qubits in total. It stores 12 logical qubits with a code distance of 12, meaning any error affecting fewer than 12 qubits can be detected. Thus: [[144,12,12]].

Using the gross code, you can protect 12 logical qubits for roughly a million cycles of error checks using 288 qubits. Doing roughly the same task with the surface code would require nearly 3,000 qubits.
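The efficiency claim is easy to check from the numbers quoted above (the surface-code figure is the approximate one cited in this post):

```python
gross_qubits, logical_qubits = 288, 12   # gross code: data + check qubits
surface_qubits = 3_000                   # approximate surface-code figure

print(gross_qubits / logical_qubits)     # 24.0 physical qubits per logical qubit
print(surface_qubits / logical_qubits)   # 250.0 for the comparable surface code
print(round(surface_qubits / gross_qubits, 1))  # roughly a 10x qubit saving
```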

This is a milestone. We are still looking for qLDPC codes with even more efficient architectures, and our research on performing error-corrected calculations using these codes is ongoing. But with this publication, the future of error correction looks bright.

Fig. 1 | Tanner graphs of surface and BB codes. a, Tanner graph of a surface code, for comparison. b, Tanner graph of a BB code with parameters [[144, 12, 12]] embedded into a torus. Any edge of the Tanner graph connects a data and a check vertex. Data qubits associated with the registers q(L) and q(R) are shown by blue and orange circles. Each vertex has six incident edges, including four short-range edges (pointing north, south, east and west) and two long-range edges; only a few long-range edges are shown to avoid clutter. Dashed and solid edges indicate two planar subgraphs spanning the Tanner graph (see the Methods). c, Sketch of a Tanner graph extension for measuring the logical operators Z̄ and X̄ following ref. 50, attaching to a surface code. The ancilla corresponding to the X̄ measurement can be connected to a surface code, enabling load-store operations for all logical qubits by means of quantum teleportation and some logical unitaries. This extended Tanner graph also has an implementation in a thickness-2 architecture through the A and B edges (Methods).

Fig. 2 | Syndrome measurement circuit. Full cycle of syndrome measurements relying on seven layers of CNOTs. We provide a local view of the circuit that includes only one data qubit from each register q(L) and q(R). The circuit is symmetric under horizontal and vertical shifts of the Tanner graph. Each data qubit is coupled by CNOTs with three X-check and three Z-check qubits; see the Methods for more details.

Why error correction matters

Today, our users benefit from novel error mitigation techniques — methods for reducing or eliminating the effect of noise when calculating observables, alongside our work suppressing errors at the hardware level. This work brought us into the era of quantum utility. IBM researchers and partners all over the world are exploring practical applications of quantum computing today with existing quantum systems. Error mitigation lets users begin looking for quantum advantage on real quantum hardware.

But error mitigation comes with its own overhead, requiring running the same executions repeatedly so that classical computers can use statistical methods to extract an accurate result. This limits the scale of the programs you can run, and increasing that scale requires tools beyond error mitigation — like error correction.
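The statistical flavor of that overhead can be seen in a toy model: averaging many noisy runs shrinks the statistical error roughly like 1/sqrt(N), which is why the required number of repetitions grows quickly with circuit size. A sketch (the Gaussian noise model here is an arbitrary assumption, not IBM's):

```python
import random

random.seed(0)
true_value = 0.7  # the noiseless expectation value we want to estimate

# Each "run" returns the true value plus noise; averaging N runs
# tightens the estimate roughly like 1/sqrt(N).
for n in (10, 1_000, 100_000):
    estimate = sum(true_value + random.gauss(0, 0.5) for _ in range(n)) / n
    print(n, round(estimate, 3))
```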

Last year, we debuted a new roadmap laying out our plan to continuously improve quantum computers over the next decade. This new paper is an important example of how we plan to continuously increase the complexity (number of gates) of the quantum circuits that can be run on our hardware. It will allow us to transition from running circuits with 15,000 gates to 100 million, or even 1 billion, gates.

[1] Bravyi, S., Cross, A.W., Gambetta, J.M. et al. High-threshold and low-overhead fault-tolerant quantum memory. Nature 627, 778–782 (2024). https://doi.org/10.1038/s41586-024-07107-7


Sean ‘Diddy’ Combs faces sweeping sex-trafficking inquiry: What the feds have, need to prove


Over the last few months, a legendary name in the music world has faced a series of shocking allegations of sexual abuse.

In civil lawsuits, four women have accused Sean “Diddy” Combs of rape, assault and other abuses, dating back three decades. One of the allegations involved a minor. The claims sent shock waves through the music industry and put Combs’ entertainment empire in jeopardy.

Now, the hip-hop mogul’s legal troubles have worsened considerably.

Law enforcement sources told The Times that Combs is the subject of a sweeping inquiry into sex-trafficking allegations that resulted in a federal raid Monday at his estates in Los Angeles and Miami.


Authorities have declined to comment on the case, and Combs has not been charged with any crime. But the scene of dozens of Department of Homeland Security agents — guns drawn — searching Combs’ properties underscored the seriousness of the investigation.

At the same time as the raids, police in Miami arrested Brendan Paul, a man described in a recent lawsuit against Combs as a confidant and drug “mule.” Miami-Dade police took Paul, 25, into custody on suspicion of possession of cocaine and a controlled substance-laced candy, records show.

Paul was arrested at Miami Opa-Locka Executive Airport, where TMZ posted video showing Combs walking around Monday afternoon. An affidavit reviewed by the Miami Herald alleged that police working with Homeland Security found drugs in Paul’s bag. There is nothing in Miami court records connecting Combs to Paul, who was later released on $2,500 bail.

The arrest, however, is the latest in a string of legal woes tied to Combs.

Sources with knowledge of the sex-trafficking investigation into Combs, who spoke on condition of anonymity because they were not authorized to speak publicly, said federal authorities have interviewed at least three women, but it’s unclear whether any are among those who have filed suit.


Legal experts say it could take time to build a criminal case against Combs but note that the civil suits could offer investigators a road map.

Dmitry Gorin, a former L.A. County sex-crimes prosecutor who is now in private practice, said the allegations in the lawsuits would likely have been enough for a judge to grant search warrants for Combs’ homes.

Investigators probably would seek authorization to “search for videos or photographs on any devices connected to the target ... anywhere where digital images can be found in connection to sexual conduct that would have been recorded,” Gorin said.

Shawn Holley, an attorney for Combs, did not respond to requests for comment, but Aaron Dyer, another of his lawyers, on Tuesday called the raids a “witch hunt” and “a gross overuse of military-level force.”

“Yesterday, there was a gross overuse of military-level force as search warrants were executed at Mr. Combs’ residences,” Dyer said in a statement. “This unprecedented ambush — paired with an advanced, coordinated media presence — leads to a premature rush to judgment of Mr. Combs and is nothing more than a witch hunt based on meritless accusations made in civil lawsuits. There has been no finding of criminal or civil liability with any of these allegations.”

Combs has previously denied any wrongdoing.


Gorin and other legal experts said investigators could be focused, in part, on the sexual assault allegations involving a minor. If a minor is moved across state lines for the purpose of sex, “that is enough for at least an argument ... of sex trafficking because somebody underage cannot consent,” Gorin said.

“Sex trafficking for adults usually involves some sort of coercion or other restraints,” he said, and can be tougher to prove. Prosecutors would need to show you “encouraged somebody to engage in sexual activity for money or some other inducement.”

Coercion, he added, is not limited to threats of violence. It could involve being held against one’s will or someone simply saying, “I don’t want to participate in group sex, and now I’m being forced to.”

Homeland Security investigates most sex-trafficking operations for the federal government. Legal experts say one possible reason the agency is involved in this case is that the women involved in the allegations against Combs could be from other countries.

Sean "Diddy" Combs wears a satiny red puffer suit while holding a microphone onstage with two hands

Sean ‘Diddy’ Combs sexual harassment suit includes notable music industry names

A new suit from music producer Rodney “Lil Rod” Jones makes new, explosive claims about Combs’ alleged assaults and misconduct in granular detail, naming several prominent artists and music executives as well.

Feb. 28, 2024

Meghan Blanco, a defense attorney who has handled sexual trafficking cases, said they can be “incredibly difficult cases to prove.”

“They have [in the Combs case] convinced one or more federal magistrates they had enough probable cause for one or more search warrants,” Blanco said. “Given the scope of the investigation, it seems they are further along than most investigations.”

Combs’ legal troubles have been building for months.

His former girlfriend, Casandra Ventura, the singer known as Cassie, accused him of rape and repeated physical assaults and said he forced her to have sex with male prostitutes in front of him. Joi Dickerson-Neal accused Combs in a suit of drugging and raping her in 1991, recording the attack and then distributing the footage without her consent.

Liza Gardner filed a third suit in which she claimed Combs and R&B singer Aaron Hall sexually assaulted her. Hall could not be reached for comment.

Another lawsuit alleges that Combs and former Bad Boy label president Harve Pierre gang-raped and sex-trafficked a 17-year-old girl. Pierre said in a statement that the allegations were “disgusting,” “false” and a “desperate attempt for financial gain.”

After the filing of the fourth suit, Combs wrote on Instagram: “Enough is enough. For the last couple of weeks, I have sat silently and watched people try to assassinate my character, destroy my reputation and my legacy. Sickening allegations have been made against me by individuals looking for a quick payday. Let me be absolutely clear: I did not do any of the awful things being alleged. I will fight for my name, my family and for the truth.”

Last month, producer Rodney “Lil Rod” Jones filed a federal lawsuit against Combs accusing him of sexually harassing and threatening him for more than a year. The suit includes mention of Paul in connection with “the affairs ... involving dealing in controlled substances.”

On Monday, the suit was amended to include Oscar winner Cuba Gooding Jr. as a co-defendant in the lawsuit.

Sean "Diddy" Combs holds an award up and cheers.

Cuba Gooding Jr. added as co-defendant in Lil Rod’s lawsuit against Diddy

Cuba Gooding Jr. is added as a co-defendant in a lawsuit against Sean ‘Diddy’ Combs. Record producer Rodney ‘Lil Rod’ Jones accuses the actor of sexual assault.

Blanco said prosecutors “are going to look carefully for corroboration — the numbers of people accusing the person of similar acts.” Beyond that, they will be looking for videos, recordings and cellphone records that place people in the same locations or text messages or other discussions at the time of the alleged acts.

She said prosecutors are trying to build a record of incidents that happened some time ago.

Douglas Wigdor, a lawyer for Ventura and another, unnamed plaintiff, said in response to reports of the search warrant issued against Combs: “We will always support law enforcement when it seeks to prosecute those that have violated the law. Hopefully, this is the beginning of a process that will hold Mr. Combs responsible for his depraved conduct.”

Wigdor on Tuesday called his clients “courageous and credible witnesses.”

“To the extent there is a prosecution and they want our clients to testify truthfully,” he said, “I think they will and that will be damning evidence.”

The searches Monday in L.A. and Miami sparked worldwide attention.


His 17,000-square-foot Holmby Hills mansion, where Combs debuted his last album a year ago, was flooded with Homeland Security agents who gathered evidence on behalf of an investigation being run by the Southern District of New York, according to law enforcement officials familiar with the inquiry.

Two of Combs’ sons were briefly detained at the Holmby Hills property as agents searched the mansion, according to footage captured by FOX11 Los Angeles.

Both Blanco and Gorin said prosecutors will have to examine the accusers’ motives for coming forward and whether they are motivated by financial gain. They are sure to look for inconsistencies in any allegations, they said.

Any defense, Blanco added, will question why the accusers are only now coming forward and whether they have an incentive beyond justice.

“It comes down to credibility,” she said.

Times staff writers Stacy Perman and Nardine Saad contributed to this report.



How to Find Sources For a Research Paper | A Guide


Research papers are an essential part of academic life, but one of the most challenging aspects can be finding credible sources to support your arguments. 

With the vast amount of information available online, it's easy to feel overwhelmed. However, by following some simple steps, you can streamline the process of finding reliable sources for your research paper.

In this guide, we'll break down the process into easy-to-follow steps to help you find the best sources for your paper.


Step 1: Define Your Topic and Research Questions

Before you venture into your quest for sources, it's essential to have a clear understanding of your research topic and the specific questions you aim to address. Define the scope of your paper and identify keywords and key concepts that will guide your search for relevant sources.

Step 2: Utilize Academic Databases

Academic databases are treasure troves of scholarly articles, research papers, and academic journals covering a wide range of subjects. Institutions often provide access to these databases through their libraries. Some popular academic databases include:

  • IEEE Xplore
  • Google Scholar

These databases allow you to search for peer-reviewed articles and academic papers related to your topic. 

Use advanced search features to narrow down your results based on publication date, author, and keywords.

Academic Resources Classified by Discipline

Here's a breakdown of prominent databases categorized by academic discipline:

Step 3: Explore Library Catalogs

Your university or local library's catalog is another valuable resource for finding sources. Library catalogs contain books, periodicals, and other materials that may not be available online. 

Use the catalog's search function to locate relevant books, journals, and other materials that can contribute to your research.

Step 4: Consult Bibliographies and References

When you find a relevant source, take note of its bibliography or reference list. These lists often contain citations to other works that may be useful for your research.

By exploring the references cited in a particular source, you can uncover additional resources and expand your understanding of the topic.

Step 5: Boolean Operators for Effective Searches

Boolean operators are words or symbols used to refine search queries by defining the relationships between search terms. The three primary operators include "AND," which narrows searches by requiring all terms to be present; "OR," which broadens searches by including either term or both; and "NOT," which excludes specific terms to refine results further. 

Most databases provide advanced search features for seamless application of Boolean logic.
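
To make the logic concrete, here is a minimal Python sketch of how Boolean operators combine search terms. The helper function and the example terms are illustrative assumptions; real databases may expect slightly different operator syntax.

# Minimal sketch: composing a Boolean search query string.
# The function and the example terms are illustrative; check your
# database's documentation for its exact operator syntax.

def boolean_query(required, either=None, excluded=None):
    """Combine search terms with the AND, OR, and NOT operators."""
    parts = [" AND ".join(f'"{t}"' for t in required)]
    if either:
        parts.append("(" + " OR ".join(f'"{t}"' for t in either) + ")")
    if excluded:
        parts.extend(f'NOT "{t}"' for t in excluded)
    return " AND ".join(parts)

query = boolean_query(
    required=["strength training", "longevity"],
    either=["women", "older adults"],
    excluded=["animal model"],
)
print(query)
# "strength training" AND "longevity" AND ("women" OR "older adults") AND NOT "animal model"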

Step 6: Consider Primary Sources 

Depending on your research topic, primary sources such as interviews, surveys, archival documents, and original data sets can provide valuable insights and support for your arguments. 

Primary sources offer firsthand accounts and original perspectives on historical events, social phenomena, and scientific discoveries.

Step 7: Evaluate the Credibility of Sources

Not all sources are created equal, and it's crucial to evaluate the credibility and reliability of the information you encounter. 

Consider the author's credentials, the publication venue, and whether the source is peer-reviewed. Look for evidence of bias or conflicts of interest that may undermine the source's credibility.

Step 8: Keep Track of Your Sources

As you gather sources for your research paper, maintain a systematic record of the materials you consult. Keep track of bibliographic information, including author names, publication dates, titles, and page numbers. This information will be invaluable when citing your sources and creating a bibliography or works cited page.
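
As one possible way to keep such a record, here is a small Python sketch that appends each source to a CSV file. The file name, field list, and example entry are assumptions made for illustration.

# Minimal sketch of a source log kept in a CSV file named "sources.csv".
import csv
import os

FIELDS = ["author", "year", "title", "journal", "pages", "doi"]

def log_source(path, record):
    """Append one bibliographic record, writing a header row if the file is new."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(record)

log_source("sources.csv", {
    "author": "Greenhalgh, T.",
    "year": "1997",
    "title": "How to read a paper: Statistics for the non-statistician",
    "journal": "BMJ",
    "pages": "",
    "doi": "",
})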

Other Online Sources

In addition to academic databases and library catalogs, exploring popular online sources can provide valuable insights and perspectives on your research topic.  Here are some types of online sources you can consider:

Websites hosted by reputable organizations, institutions, and experts (such as the New York Times) can offer valuable information and analysis on a wide range of topics. Look for websites belonging to universities, research institutions, government agencies, and established non-profit organizations.

Crowdsourced Encyclopedias like Wikipedia

While Wikipedia can provide a broad overview of a topic and lead you to other sources, it's essential to verify the information found there with more authoritative sources. 

Use Wikipedia as a starting point for your research, but rely on peer-reviewed journal articles and academic sources for in-depth analysis and evidence.

Tips for Assessing the Credibility of Online Sources

When using online sources, it's important to exercise caution and critically evaluate the credibility and reliability of the information you find. Here are some tips for assessing the credibility of online sources (a small sketch applying the first two checks follows this list):

  • Check the Domain Extension: Look for websites with domain extensions that indicate credibility. URLs ending in .edu are educational resources, while URLs ending in .gov are government-related resources. These sites often provide reliable and authoritative information.
  • Look for DOIs (Digital Object Identifiers): DOIs are unique alphanumeric strings assigned to scholarly articles and indicate that the article has been published in a peer-reviewed, scientific journal. Finding a DOI can help you assess the scholarly rigor of the source.
  • Evaluate the Authorship and Credentials: Consider the qualifications and expertise of the author or organization behind the website or blog. Look for information about the author's credentials, affiliations, and expertise in the subject matter.
  • Consider the Currency and Relevance: Assess how up-to-date the information is and whether it aligns with the scope and focus of your research. Look for recent publications and timely analyses that reflect current trends and developments in the field.
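
Here is a short Python sketch applying the first two checks above, the domain extension and the presence of a DOI-like string. The URL, the regex, and the notion of a "trusted" domain are simplifying assumptions, not a complete credibility test.

# Minimal sketch of two quick checks: domain extension and a DOI-like pattern.
import re
from urllib.parse import urlparse

TRUSTED_TLDS = (".edu", ".gov")                 # per the first tip above
DOI_PATTERN = re.compile(r"\b10\.\d{4,9}/\S+")  # common form of a DOI string

def quick_credibility_checks(url, page_text):
    host = urlparse(url).netloc
    return {
        "trusted_domain": host.endswith(TRUSTED_TLDS),
        "mentions_doi": bool(DOI_PATTERN.search(page_text)),
    }

print(quick_credibility_checks(
    "https://www.example.edu/stats-primer",   # hypothetical URL
    "Available at doi: 10.1000/xyz123",
))
# {'trusted_domain': True, 'mentions_doi': True}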

Wrapping it up!

Finding sources for your research paper may seem like a challenge, but by following these steps, you can locate credible sources to support your arguments and enhance the quality of your paper. 

By approaching the research process systematically and critically evaluating the information you encounter, you can produce a well-researched and compelling research paper.

If you are struggling with finding credible sources or have time constraints, do not hesitate to seek writing help for your research papers. CollegeEssay.org has professional writers ready to assist you.

Connect with our essay writing service now and receive expert guidance and support to elevate your research paper to the next level.



Atmospheric observations in China show rise in emissions of a potent greenhouse gas


To achieve the aspirational goal of the Paris Agreement on climate change — limiting the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels — will require its 196 signatories to dramatically reduce their greenhouse gas (GHG) emissions. Those greenhouse gases differ widely in their global warming potential (GWP), or ability to absorb radiative energy and thereby warm the Earth’s surface. For example, measured over a 100-year period, the GWP of methane is about 28 times that of carbon dioxide (CO2), and the GWP of sulfur hexafluoride (SF6) is 24,300 times that of CO2, according to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report.

Used primarily in high-voltage electrical switchgear in electric power grids, SF6 is one of the most potent greenhouse gases on Earth. In the 21st century, atmospheric concentrations of SF6 have risen sharply along with global electric power demand, threatening the world’s efforts to stabilize the climate. This heightened demand for electric power is particularly pronounced in China, which has dominated the expansion of the global power industry in the past decade. Quantifying China’s contribution to global SF6 emissions — and pinpointing its sources in the country — could lead that nation to implement new measures to reduce them, and thereby reduce, if not eliminate, an impediment to the Paris Agreement’s aspirational goal.

To that end, a new study by researchers at the MIT Joint Program on the Science and Policy of Global Change, Fudan University, Peking University, University of Bristol, and the Meteorological Observation Center of China Meteorological Administration determined total SF6 emissions in China over 2011-21 from atmospheric observations collected from nine stations within a Chinese network, including one station from the Advanced Global Atmospheric Gases Experiment (AGAGE) network. For comparison, global total emissions were determined from five globally distributed, relatively unpolluted “background” AGAGE stations, involving additional researchers from the Scripps Institution of Oceanography and CSIRO, Australia's National Science Agency.

The researchers found that SF6 emissions in China almost doubled from 2.6 gigagrams (Gg) per year in 2011, when they accounted for 34 percent of global SF6 emissions, to 5.1 Gg per year in 2021, when they accounted for 57 percent of global total SF6 emissions. This increase from China over the 10-year period — some of it emerging from the country’s less-populated western regions — was larger than the global total SF6 emissions rise, highlighting the importance of lowering SF6 emissions from China in the future.
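
To put those figures in more familiar units, here is a short worked example, a sketch using only the numbers reported above, converting the Gg-per-year SF6 estimates into CO2-equivalent megatonnes via the 100-year GWP of 24,300.

# Worked arithmetic: converting the SF6 estimates above to CO2-equivalent
# mass using the IPCC AR6 100-year GWP of 24,300 (1 Mt = 1,000 Gg).
GWP_SF6 = 24_300

for year, sf6_gg in [(2011, 2.6), (2021, 5.1)]:
    co2eq_mt = sf6_gg * GWP_SF6 / 1_000
    print(f"{year}: {sf6_gg} Gg SF6/yr is roughly {co2eq_mt:.0f} Mt CO2-eq/yr")

# 2011: 2.6 Gg SF6/yr is roughly 63 Mt CO2-eq/yr
# 2021: 5.1 Gg SF6/yr is roughly 124 Mt CO2-eq/yr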

The open-access study, which appears in the journal Nature Communications, explores prospects for future SF6 emissions reduction in China.

“Adopting maintenance practices that minimize SF6 leakage rates or using SF6-free equipment or SF6 substitutes in the electric power grid will benefit greenhouse-gas mitigation in China,” says Minde An, a postdoc at the MIT Center for Global Change Science (CGCS) and the study’s lead author. “We see our findings as a first step in quantifying the problem and identifying how it can be addressed.”

SF6 emitted today is expected to persist in the atmosphere for more than 1,000 years, raising the stakes for policymakers in China and around the world.

“Any increase in SF6 emissions this century will effectively alter our planet’s radiative budget — the balance between incoming energy from the sun and outgoing energy from the Earth — far beyond the multi-decadal time frame of current climate policies,” says MIT Joint Program and CGCS Director Ronald Prinn, a coauthor of the study. “So it’s imperative that China and all other nations take immediate action to reduce, and ultimately eliminate, their SF6 emissions.”

The study was supported by the National Key Research and Development Program of China and Shanghai B&R Joint Laboratory Project, the U.S. National Aeronautics and Space Administration, and other funding agencies.


How to Thrive as You Age

Women who do strength training live longer. How much is enough?


Resistance training does more than help us build strong muscles.

A new study finds women who do strength training exercises two to three days a week are more likely to live longer and have a lower risk of death from heart disease, compared to women who do none.

"We were incredibly impressed by the finding," says study author Martha Gulati , who is also the director of preventive cardiology at Cedars Sinai in Los Angeles.

Of the 400,000 people included in the study, only 1 in 5 women did regular weight training. But those who did, saw tremendous benefits.

"What surprised us the most was the fact that women who do muscle strengthening had a reduction in their cardiovascular mortality by 30%," Gulati says. "We don't have many things that reduce mortality in that way."

Strength training is also good for bones, joints, mood and metabolic health. And at a time when many women focus on aerobic activity and hesitate to do weight training, the findings add to the evidence that a combination of both types of exercise is powerful medicine. "Both should be prescribed," Gulati says.


The findings are part of a larger study, published in The Journal of the American College of Cardiology, which evaluated the differences in the effects of exercise between men and women.

While the study finds that even small doses of exercise are beneficial for everyone, the data show that women need less exercise than men to get the same gains in longevity.

Women who did moderate intensity exercise, such as brisk walking, five times a week, reduced their risk of premature death by 24%, compared to 18% for men.

"The take home message is – let's start moving," says Eric Shiroma , a prevention-focused researcher at the National Heart, Lung, and Blood Institute, part of the National Institutes of Health, which provided grant support for the research..

It's not exactly clear what drives the variance between sexes, but there are physiological differences between men and women, and differences in heart disease risks , too.

People born female have less muscle and lower aerobic capacity in general. Also, women have more capillaries feeding part of their muscles, Shiroma says. The findings show women need to do less exercise to change their baseline of aerobic and muscular strength. "It might be that this relative increase in strength [in women compared to men] is what's driving this difference in benefit," he says.

The results show a little can go a long way. "The benefits start as soon as you start moving," Shiroma says.

It's increasingly common to see female weight lifters and body builders on social media, and many gyms and work-out studios now incorporate weight training into many of their classes and offerings. But, given that about 80% of women in the study said they don't participate in regular weight training, there's still a lot of hesitancy.


"I was always the awkward one in gym class back in school days," says Ann Martin, 69, of Wilmington, Del. She shied away from gyms and weight-training machines. Martin has always been a walker, but she realized she needed to build more strength, so last year she started working out with a trainer to learn how to use the equipment. "It's fun now," she says. "I can feel my muscles getting stronger."

Strength training can be intimidating, Shiroma says. "But it's not all bodybuilders trying to lift super amounts of weight." He says there are many ways to incorporate resistance training into your life.

All activities that require your muscles to work against a weight or force count as strength training. This includes the use of weight machines, resistance bands or tubes, as well as all the many ways we can use our own body weight, as we do with push-ups and squats.

The findings of this new study fit with the Physical Activity Guidelines for Americans, which recommend that adults get a minimum of 2.5 hours of moderate-intensity exercise a week, or about 30 minutes most days of the week. The guidelines also call for doing strength-based activities at least two days a week.


The increase in lifespan can likely be explained in part by the well-being that comes from the other hidden benefits. Here are 5 ways building strength can boost good health.

1. Strength training helps protect joints.

Physical therapists often recommend resistance training for patients with knee and hip pain. "Strength training protects joints, resulting in less stress through the body," says Todd Wheeler, a physical therapist at MedStar Health Physical Therapy in Washington, D.C. "If joints could talk, they would say, 'It's not my fault I'm irritated,'" Wheeler says. They'd blame it on weak muscles. He says strong muscles support the joints, which can help decrease joint pain. Wheeler recommends starting small and simply. For instance, try a few squats and table pushups. "Listen to your body and gradually increase intensity over time," he says.

2. Building muscle burns more calories

Aerobic exercise – such as running and cycling – typically burns more calories in real time compared to strength training. But people who weight train can get a boost in calorie burning over the long-term.

"When you're doing resistance training, you're building muscle. That muscle requires energy," says Bryant Johnson , a trainer who wrote The RBJ Workout . So, adding muscle mass can help people burn more calories.

Dr. Gulati also points to research that shows weight lifting and resistance training can help people lose more fat and improve body composition.

3. Resistance training protects against injuries and falls

As we've reported, millions of Americans, especially women, are under-muscled, and muscle mass is a predictor of longevity.

Since muscle mass peaks in our 30s and then starts a long, slow decline, we need to take steps to slow this down. If we don't do strength training exercise, we're more likely to become weak, increasing the risk of falls, a leading cause of injury-related death among older adults in the U.S.

And since muscle loss - also known as sarcopenia - affects more than 45% of older adults in the U.S., "it's important to know about it and take steps to prevent it," says Richard Joseph, a wellness-focused physician. He says strength training improves bone density, which also protects against injuries and falls.

Joseph says people can get the biggest bang for their buck when they're starting out by focusing on lower body exercises that work big muscle groups in the legs.

4. Strength training helps control blood sugar

About 1 in 3 adults in the U.S. has prediabetes. Strength training can help control blood sugar by clearing glucose out of the bloodstream.

When we use our muscles during exercise, whether it's pushing, pulling, lifting or moving, they require more glucose for energy. This explains why exercise after meals can help control blood sugar.

And a recent study found strength training can be even more effective than aerobic activity in controlling blood sugar in people with diabetes.

5. Muscle building may help boost mood

A meta-analysis published in the medical journal JAMA Psychiatry in 2018, which included the results of more than 30 clinical trials, found a reduction in symptoms of depression among people who did weight training two times a week or more.

Strength training has also been shown to improve depressive symptoms in people at risk of metabolic disease. And, research shows strength training can tamp down anxiety, too.

Ann Martin says it makes sense that our moods improve when we move. "It gets your blood flowing," she says. "It feels good."


This story was edited by Jane Greenhalgh



COMMENTS

  1. How To Read A Scientific Manuscript

    One should read the title and Abstract first to establish a blueprint for what the author(s) wants to convey related to their research. The next step in reading a manuscript will depend upon one's prior knowledge of the topic, goals of reading the paper, level of concentration/time to devote to reading, and overall interest.

  2. The Beginner's Guide to Statistical Analysis

    Table of contents. Step 1: Write your hypotheses and plan your research design. Step 2: Collect data from a sample. Step 3: Summarize your data with descriptive statistics. Step 4: Test hypotheses or make estimates with inferential statistics.

  3. PDF STAT 572 Critical reading How to read a research paper

    After Step I, skim through the paper, and check the following. Is the paper theory-heavy? Does it contain highly technical proofs? How many? What kind of mathematical results are involved? Is the paper a follow-up of classic or well-known paper or result? Is it part of a \cluster" of related papers by the same group of authors? Are the data ...

  4. Introduction to Research Statistical Analysis: An Overview of the

    Introduction. Statistical analysis is necessary for any research project seeking to make quantitative conclusions. The following is a primer for research-based statistical analysis. It is intended to be a high-level overview of appropriate statistical testing, while not diving too deep into any specific methodology.

  5. PDF Anatomy of a Statistics Paper (with examples)

important writing you will do for the paper. IMHO your reader will either be interested and continuing on with your paper, or... A scholarly introduction is respectful of the literature. In my experience, the introduction is part of a paper that I will outline relatively early in the process, but will finish and repeatedly edit at the end of the ...

  6. Understanding Statistics and Journal Articles in Research with

    This lecture provides an overview of how to access, evaluate, understand, and summarize scholarly journal articles. See the article referenced in the video a...

  7. How To Read A Paper

    On this page you will find links to articles in the BMJ that explain how to read and interpret different kinds of research papers: Papers that go beyond numbers (qualitative research) Trisha Greenhalgh, Rod Taylor Papers that summarise other papers (systematic reviews and meta-analyses) Trisha

  8. How to Report Statistics

    In many fields, a statistical analysis forms the heart of both the methods and results sections of a manuscript. Learn how to report statistical analyses, and what other context is important for publication success and future reproducibility. A matter of principle. First and foremost, the statistical methods employed in research must always be:

  9. How to read a paper: Statistics for the non-statistician. II

This article continues the checklist of questions that will help you to appraise the statistical validity of a paper. The first of this pair of articles was published last week. Has correlation been distinguished from regression, and has the correlation coefficient (r value) been calculated and interpreted correctly? For many non-statisticians, the terms "correlation" and "regression ...

  10. Descriptive Statistics

    Types of descriptive statistics. There are 3 main types of descriptive statistics: The distribution concerns the frequency of each value. The central tendency concerns the averages of the values. The variability or dispersion concerns how spread out the values are. You can apply these to assess only one variable at a time, in univariate ...

  11. Inferential Statistics

    Example: Inferential statistics. You randomly select a sample of 11th graders in your state and collect data on their SAT scores and other characteristics. You can use inferential statistics to make estimates and test hypotheses about the whole population of 11th graders in the state based on your sample data.

  12. How to Find Statistics for a Research Paper: 14 Steps

    Identifying the Data You Need. 1. Outline your points or arguments. Before you can figure out what kind of statistics you need, you should have a sense of what your research paper is about. A basic outline of the points you want to make or hypotheses you're trying to prove can help you narrow your focus. [3]

  13. How to Read a Scholarly Article

    Infographic: How to read a scientific paper "Because scientific articles are different from other texts, like novels or newspaper stories, they should be read differently." How to Read and Comprehend Scientific Research Articles

  14. How to Read a Research Table

The 95% CI of 0.96 to 1.19 includes 1.0. This means these results are not statistically significant and the increased risk of breast cancer is likely due to chance. The Million Women's Study found a relative risk of breast cancer of 1.13 with a 95% CI of 1.10 to 1.16. This is shown as 1.13 (1.10-1.16) in the table. (A short sketch of this interval rule appears after this list.)

  15. These are the statistics papers you just have to read

    While none of these papers actually need to be read, I really think it might help statistics Ph.D. students to get a sense of the gap between research practice and statistical theory and the problems and efforts of communication between statisticians and applied scientists.

  16. How to Write Statistics Research Paper

    Prepare and add supporting materials that will help you illustrate findings: graphs, diagrams, charts, tables, etc. You can use them in a paper's body or add them as an appendix; these elements will support your points and serve as extra proof for your findings. Last but not least: Write a concluding paragraph for your statistics research paper.

  17. Reasoning With Statistics: How To Read Quantitative Research

    This course aims to help students gain sufficient knowledge of statistical methods to intelligently read quantitative research in fields ranging from chemistry to speech. Designed to help students gain sufficient knowledge of statistical methods to intelligently read quantitative research in fields ranging from chemistry to speech. No prior background in statistics or advanced mathematics in ...

  18. The Beginner's Guide to Statistical Analysis

    Table of contents. Step 1: Write your hypotheses and plan your research design. Step 2: Collect data from a sample. Step 3: Summarise your data with descriptive statistics. Step 4: Test hypotheses or make estimates with inferential statistics.

  19. How To Write a Statistical Research Paper: Tips, Topics, Outline

    Explain the importance of what you are doing. You can also include suggestions for future work. Make sure to restate what you mentioned in the introduction and touch a little bit on the method used to collect and analyze your data. In short, sum up everything you've written in your essay.

  20. What the data says about abortion in the U.S.

    About Pew Research Center Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions.

  21. Research: How Different Fields Are Using GenAI to Redefine Roles

    Examples from customer support, management consulting, professional writing, legal analysis, and software and technology.

  22. What the Data Says About Pandemic School Closures, Four Years Later

    At the school district level, that finding also holds, according to an analysis of test scores from third through eighth grade in thousands of U.S. districts, led by researchers at Stanford and ...

  23. IBM Quantum Computing Blog

    To get an idea of how the qubits are connected, imagine they are put onto a square grid, like a piece of graph paper. Curl up this piece of graph paper so that it forms a tube, and connect the ends of the tube to make a donut.

  24. Inside the sex-trafficking investigation into Sean 'Diddy' Combs

    Legal experts say it could take time to build a criminal case against Combs but note that the civil suits could offer investigators a road map.

  25. How to Find Sources For a Research Paper in Easy Steps

    Finding sources for your research paper may seem like a challenge, but by following these steps, you can locate credible sources to support your arguments and enhance the quality of your paper. By approaching the research process systematically and critically evaluating the information you encounter, you can produce a well-researched and ...

  26. Atmospheric observations in China show rise in emissions of a potent

    To achieve the aspirational goal of the Paris Agreement on climate change — limiting the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels — will require its 196 signatories to dramatically reduce their greenhouse gas (GHG) emissions. Those greenhouse gases differ widely in their global warming potential (GWP), or ability to absorb radiative ...

  27. Strength training boosts longevity, mood and metabolism as it builds

Strength training is good for everyone. But women who train regularly reduce their risk of death from heart disease significantly. And here are 5 other hidden benefits of building muscle.
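
Picking up the confidence-interval rule quoted in comment 14 above, here is a minimal Python sketch of the convention that a relative-risk estimate is statistically significant when its 95% CI excludes 1.0. The function name is an illustrative assumption; the numbers come from the snippet itself.

# Minimal sketch: a relative-risk CI is conventionally "significant" at the
# 5% level when the interval excludes the null value of 1.0.
def ci_excludes_null(lower, upper, null_value=1.0):
    return not (lower <= null_value <= upper)

print(ci_excludes_null(0.96, 1.19))  # False: includes 1.0, not significant
print(ci_excludes_null(1.10, 1.16))  # True: excludes 1.0, significant (RR 1.13)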